
Sunday, June 11, 2023

Argument from reason

From Wikipedia, the free encyclopedia

The argument from reason is an argument against metaphysical naturalism and for the existence of God (or at least a supernatural being that is the source of human reason). The best-known defender of the argument is C. S. Lewis. Lewis first defended the argument at length in his 1947 book, Miracles: A Preliminary Study. In the second edition of Miracles (1960), Lewis substantially revised and expanded the argument.

Contemporary defenders of the argument from reason include Alvin Plantinga, Victor Reppert and William Hasker.

The argument

Metaphysical naturalism is the view that nature as studied by the natural sciences is all that exists. Naturalists deny the existence of a supernatural God, souls, an afterlife, or anything supernatural. Nothing exists outside or beyond the physical universe.

The argument from reason seeks to show that naturalism is self-refuting, or otherwise false and indefensible.

According to Lewis,

One absolutely central inconsistency ruins [the naturalistic worldview].... The whole picture professes to depend on inferences from observed facts. Unless inference is valid, the whole picture disappears.... [U]nless Reason is an absolute--all is in ruins. Yet those who ask me to believe this world picture also ask me to believe that Reason is simply the unforeseen and unintended by-product of mindless matter at one stage of its endless and aimless becoming. Here is flat contradiction. They ask me at the same moment to accept a conclusion and to discredit the only testimony on which that conclusion can be based.

— C. S. Lewis, "Is Theology Poetry?", The Weight of Glory and Other Addresses

More precisely, Lewis's argument from reason can be stated as follows:

1. No belief is rationally inferred if it can be fully explained in terms of nonrational causes.

Support: Reasoning requires insight into logical relations. A process of reasoning (P therefore Q) is rational only if the reasoner sees that Q follows from, or is supported by, P, and accepts Q on that basis. Thus, reasoning is trustworthy (or "valid", as Lewis sometimes says) only if it involves a special kind of causality, namely, rational insight into logical implication or evidential support. If a bit of reasoning can be fully explained by nonrational causes, such as fibers firing in the brain or a bump on the head, then the reasoning is not reliable, and cannot yield knowledge.

Consider this example: Person A refuses to go near the neighbor's dog because he had a bad childhood experience with dogs. Person B refuses to go near the neighbor's dog because one month ago he saw it attack someone. Both have given a reason for staying away from the dog, but person A's reason is the result of nonrational causes, while person B has given an explanation for his behavior following from rational inference (animals exhibit patterns of behavior; these patterns are likely to be repeated; this dog has exhibited aggression towards someone who approached it; there is a good chance that the dog may exhibit the same behavior towards me if I approach it).

Consider a second example: person A says that he is afraid to climb to the 8th story of a bank building because he and humans in general have a natural fear of heights resulting from the processes of evolution and natural selection. He has given an explanation of his fear, but since his fear results from nonrational causes (natural selection), his belief does not follow from logical inference.

2. If naturalism is true, then all beliefs can be fully explained in terms of nonrational causes.

Support: Naturalism holds that nature is all that exists, and that all events in nature can in principle be explained without invoking supernatural or other nonnatural causes. Standardly, naturalists claim that all events must have physical causes, and that human thoughts can ultimately be explained in terms of material causes or physical events (such as neurochemical events in the brain) that are nonrational.

3. Therefore, if naturalism is true, then no belief is rationally inferred (from 1 and 2).

4. We have good reason to accept naturalism only if it can be rationally inferred from good evidence.

5. Therefore, there is not, and cannot be, good reason to accept naturalism (from 3 and 4).

In short, naturalism undercuts itself. If naturalism is true, then we cannot sensibly believe it or virtually anything else.
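
One way to display the intended logical structure is a rough propositional sketch (a schematization offered here for clarity, not Lewis's own notation). Write N for "naturalism is true", E(b) for "belief b is fully explained by nonrational causes", R(b) for "belief b is rationally inferred", and G(N) for "there is good reason to accept N":

  1. \forall b\,\bigl(E(b) \rightarrow \neg R(b)\bigr)
  2. N \rightarrow \forall b\, E(b)
  3. N \rightarrow \forall b\, \neg R(b)   (from 1 and 2)
  4. G(N) \rightarrow R(b_N), where b_N is the belief that naturalism is true
  5. N \rightarrow \neg G(N)   (from 3 and 4)

Strictly, steps 3 and 4 yield the conditional in 5: if naturalism were true, no one could have good reason to accept it. This is the sense in which accepting naturalism on the basis of reasons is self-undermining.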

In some versions of the argument from reason, Lewis extends the argument to defend a further conclusion: that human reason depends on an eternal, self-existent rational Being (God). This extension of the argument from reason states:

1. Since everything in nature can be wholly explained in terms of nonrational causes, human reason (more precisely, the power of drawing conclusions based solely on the rational cause of logical insight) must have a source outside of nature.

2. If human reason came from non-reason it would lose all rational credentials and would cease to be reason.

3. So, human reason cannot come from non-reason (from 2).

4. So human reason must come from a source outside nature that is itself rational (from 1 and 3).

5. This supernatural source of reason may itself be dependent on some further source of reason, but a chain of such dependent sources cannot go on forever. Eventually, we must reason back to the existence of an eternal, non-dependent source of human reason.

6. Therefore, there exists an eternal, self-existent, rational Being who is the ultimate source of human reason. This Being we call God (from 4-5). (Lewis, Miracles, chap. 4)

Anscombe's criticism

On 2 February 1948, Oxford philosopher Elizabeth Anscombe read a paper to the Oxford Socratic Club criticizing the version of the argument from reason contained in the third chapter of Lewis's Miracles.

Her first criticism was against the use of the word "irrational" by Lewis (Anscombe 1981: 225-26). Her point was that there is an important difference between irrational causes of belief, such as wishful thinking, and nonrational causes, such as neurons firing in the brain, that do not obviously lead to faulty reasoning. Lewis accepted the criticism and amended the argument, basing it on the concept of nonrational causes of belief (as in the version provided in this article).

Anscombe's second criticism questioned the intelligibility of Lewis's intended contrast between "valid" and "invalid" reasoning. She wrote: "What can you mean by 'valid' beyond what would be indicated by the explanation you would give for distinguishing between valid and invalid, and what in the naturalistic hypothesis prevents that explanation from being given and from meaning what it does?" (Anscombe 1981: 226) Her point is that it makes no sense to contrast "valid" and "invalid" reasoning unless it is possible for some forms of reasoning to be valid. Lewis later conceded (Anscombe 1981: 231) that "valid" was a bad word for what he had in mind. Lewis didn't mean to suggest that if naturalism is true, no arguments can be given in which the conclusions follow logically from the premises. What he meant is that a process of reasoning is "veridical", that is, reliable as a method of pursuing knowledge and truth, only if it cannot be entirely explained by nonrational causes.

Anscombe's third objection was that Lewis failed to distinguish between different senses of the terms "why", "because", and "explanation", and that what counts as a "full" explanation varies by context (Anscombe 1981: 227-31). In the context of ordinary life, "because he wants a cup of tea" may count as a perfectly satisfactory explanation of why Peter is boiling water. Yet such a purposive explanation would not count as a full explanation (or an explanation at all) in the context of physics or biochemistry. Lewis accepted this criticism, and created a revised version of the argument in which the distinction between "because" in the sense of physical causality, and "because" in the sense of evidential support, became the central point of the argument (this is the version described in this article).

More recent critics have argued that Lewis's argument at best refutes only strict forms of naturalism that seek to explain everything in terms ultimately reducible to physics or purely mechanistic causes. So-called "broad" naturalists, who see consciousness as an "emergent" non-physical property of complex brains, would agree with Lewis that different levels or types of causation exist in nature and that rational inferences are not fully explainable by nonrational causes.

Other critics have objected that Lewis's argument from reason fails because the causal origins of beliefs are often irrelevant to whether those beliefs are rational, justified, warranted, etc. Anscombe, for example, argues that "if a man has reasons, and they are good reasons, and they are genuinely his reasons, for thinking something—then his thought is rational, whatever causal statements we make about him" (Anscombe 1981: 229). On many widely accepted theories of knowledge and justification, questions of how beliefs were ultimately caused (e.g., at the level of brain neurochemistry) are viewed as irrelevant to whether those beliefs are rational or justified. Some defenders of Lewis claim that this objection misses the mark, because his argument is directed at what he calls the "veridicalness" of acts of reasoning (i.e., whether reasoning connects us with objective reality or truth), rather than with whether any inferred beliefs can be rational or justified in a materialistic world.

Criticism by eliminative materialists

The argument from reason claims that if beliefs, desires, and other contentful mental states cannot be accounted for under naturalism, then naturalism is false. Eliminative materialism likewise maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, cannot be explained naturalistically, but it concludes that such entities do not exist. Even if successful, then, the argument from reason rules out only certain forms of naturalism and fails to count against a conception of naturalism that accepts eliminative materialism as the correct scientific account of human cognition.

Criticism by computationalists

Some critics hold that any argument from reason can be refuted simply by appealing to the existence of computers. Computers, according to the objection, reason; they are undeniably physical systems, yet they are also rational. So whatever incompatibility there might be between mechanism and reason must be illusory. Since computers do not operate on beliefs and desires and yet come to justified conclusions about the world, as in object recognition or proving mathematical theorems, it should not be a surprise on naturalism that human brains can do the same. According to John Searle, computation and syntax are observer-relative, but the cognition of the human mind is not observer-relative. Such a position seems to be bolstered by arguments from the indeterminacy of translation offered by Quine and by Kripke's skeptical paradox regarding meaning, which support the conclusion that the interpretation of algorithms is observer-relative. However, computationalists reply that, on a physical reading of the Church–Turing thesis, the human brain can be regarded as a computer, and that computationalism is a viable and developing research program in neuroscience for understanding how the brain works. Moreover, any indeterminacy of brain cognition does not entail that human cognitive faculties are unreliable, because natural selection has ensured that they contribute to the survival of biological organisms, contrary to claims made by the evolutionary argument against naturalism.
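
To make the objection concrete, here is a minimal, hypothetical sketch in Python (not drawn from any of the authors cited; the facts and rule names are invented for illustration) of purely mechanical inference: a loop that applies modus ponens by symbol manipulation alone, with no beliefs or desires involved.

    # Minimal forward-chaining sketch: purely syntactic inference by modus ponens.
    # Facts and rules are just strings; the program derives conclusions without
    # any beliefs, desires, or understanding of what the symbols mean.

    facts = {"dog(rex)"}                               # known atomic facts
    rules = [("dog(rex)", "mammal(rex)"),              # if dog(rex) then mammal(rex)
             ("mammal(rex)", "animal(rex)")]           # if mammal(rex) then animal(rex)

    changed = True
    while changed:                                     # apply rules until nothing new follows
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)                  # modus ponens: add the consequent
                changed = True

    print(sorted(facts))                               # ['animal(rex)', 'dog(rex)', 'mammal(rex)']

Whether such mechanical derivation counts as reasoning in Lewis's sense is exactly what the objection and its critics dispute.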

Similar views by other thinkers

Philosophers such as Victor Reppert, William Hasker and Alvin Plantinga have expanded on the argument from reason, and credit C.S. Lewis as an important influence on their thinking.

Lewis never claimed that he invented the argument from reason; in fact, he refers to it as a "venerable philosophical chestnut." Early versions of the argument occur in the works of Arthur Balfour (see, e.g., The Foundations of Belief, 1879, chap. 13) and G. K. Chesterton. In Chesterton's 1908 book Orthodoxy, in a chapter titled "The Suicide of Thought", he writes of the "great and possible peril . . . that the human intellect is free to destroy itself. . . . It is idle to talk always of the alternative of reason and faith. It is an act of faith to assert that our thoughts have any relation to reality at all. If you are merely a sceptic, you must sooner or later ask yourself the question, 'Why should anything go right; even observation and deduction? Why should not good logic be as misleading as bad logic? They are both movements in the brain of a bewildered ape?'"

Similarly, Chesterton asserts that the argument is a fundamental, if unstated, tenet of Thomism in his 1933 book St. Thomas Aquinas: "The Dumb Ox":

Thus, even those who appreciate the metaphysical depth of Thomism in other matters have expressed surprise that he does not deal at all with what many now think the main metaphysical question; whether we can prove that the primary act of recognition of any reality is real. The answer is that St. Thomas recognised instantly, what so many modern sceptics have begun to suspect rather laboriously; that a man must either answer that question in the affirmative, or else never answer any question, never ask any question, never even exist intellectually, to answer or to ask. I suppose it is true in a sense that a man can be a fundamental sceptic, but he cannot be anything else: certainly not even a defender of fundamental scepticism. If a man feels that all the movements of his own mind are meaningless, then his mind is meaningless, and he is meaningless; and it does not mean anything to attempt to discover his meaning. Most fundamental sceptics appear to survive, because they are not consistently sceptical and not at all fundamental. They will first deny everything and then admit something, if for the sake of argument--or often rather of attack without argument. I saw an almost startling example of this essential frivolity in a professor of final scepticism, in a paper the other day. A man wrote to say that he accepted nothing but Solipsism, and added that he had often wondered it was not a more common philosophy. Now Solipsism simply means that a man believes in his own existence, but not in anybody or anything else. And it never struck this simple sophist, that if his philosophy was true, there obviously were no other philosophers to profess it.

In Miracles, Lewis himself quotes J. B. S. Haldane, who appeals to a similar line of reasoning in his 1927 book, Possible Worlds: "If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true ... and hence I have no reason for supposing my brain to be composed of atoms."

Other versions of the argument from reason occur in C.E.M. Joad's Guide to Modern Philosophy (London: Faber, 1933, pp. 58–59), Richard Taylor's Metaphysics (Englewood Cliffs, NJ: Prentice Hall, 3rd ed., 1983, pp. 104–05), and J. P. Moreland's Scaling the Secular City: A Defense of Christianity (Grand Rapids, MI: Baker, 1987, chap. 3).

Peter Kreeft used the argument from reason to create a formulation of the argument from consciousness for the existence of God. He phrased it as follows:

  1. "We experience the universe as intelligible. This intelligibility means that the universe is graspable by intelligence."
  2. "Either this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence, or both intelligibility and intelligence are the products of blind chance."
  3. "Not blind chance."
  4. "Therefore this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence."

He used the argument from reason to affirm the third premise.

Scientific evidence

From Wikipedia, the free encyclopedia

Scientific evidence is evidence that serves to either support or counter a scientific theory or hypothesis, although scientists also use evidence in other ways, such as when applying theories to practical problems. Such evidence is expected to be empirical evidence and interpretable in accordance with scientific methods. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.

Principles of inference

A person's assumptions or beliefs about the relationship between observations and a hypothesis will affect whether that person takes the observations as evidence. These assumptions or beliefs will also affect how a person utilizes the observations as evidence. For example, the Earth's apparent lack of motion may be taken as evidence for a geocentric cosmology. However, after sufficient evidence is presented for heliocentric cosmology and the apparent lack of motion is explained, the initial observation is strongly discounted as evidence.

When rational observers have different background beliefs, they may draw different conclusions from the same scientific evidence. For example, Priestley, working with phlogiston theory, explained his observations about the decomposition of mercuric oxide using phlogiston. In contrast, Lavoisier, developing the theory of elements, explained the same observations with reference to oxygen. It is not an intrinsic causal relationship between the observations and the hypothesis that makes the observations count as evidence; rather, the connection is supplied by the person seeking to establish the observations as evidence.

A more formal method of characterizing the effect of background beliefs is Bayesian inference. In Bayesian inference, beliefs are expressed as probabilities indicating one's degree of confidence in them. One starts from an initial probability (a prior), and then updates that probability using Bayes' theorem after observing evidence. As a result, two independent observers of the same event can rationally arrive at different conclusions if their priors (which may reflect previous observations relevant to the conclusion) differ. However, if the observers can communicate and share all their information, they will tend toward agreement: Aumann's agreement theorem shows that agents with a common prior whose posteriors are common knowledge cannot agree to disagree.
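
As a rough illustration (a hypothetical toy example in Python, with numbers invented for illustration), suppose two observers assign different prior probabilities to a hypothesis H, agree on the likelihoods, and then both observe the same evidence E; Bayes' theorem, P(H | E) = P(E | H) P(H) / P(E), updates each prior separately and yields different posteriors.

    # Toy Bayesian update: two observers, same evidence, different priors.

    def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
        # Total probability of observing E under either hypothesis.
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
        return p_e_given_h * prior_h / p_e

    likelihoods = dict(p_e_given_h=0.9, p_e_given_not_h=0.2)   # shared by both observers

    print(round(bayes_update(0.5, **likelihoods), 2))   # prior 0.5 -> posterior about 0.82
    print(round(bayes_update(0.1, **likelihoods), 2))   # prior 0.1 -> posterior about 0.33

The same evidence thus moves both observers in the same direction, but their differing priors leave them at different conclusions.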

The importance of background beliefs in the determination of what observations are evidence can be illustrated using deductive reasoning, such as syllogisms. If either of the propositions is not accepted as true, the conclusion will not be accepted either. For example, in the syllogism "All men are mortal; Socrates is a man; therefore Socrates is mortal", someone who rejects either premise will not accept the conclusion on the strength of the other.

Utility of scientific evidence

Philosophers, such as Karl R. Popper, have provided influential theories of the scientific method within which scientific evidence plays a central role. In summary, Popper holds that a scientist creatively develops a theory that may then be falsified by testing it against evidence or known facts. Popper's account involves an asymmetry: evidence can prove a theory wrong by establishing facts that are inconsistent with the theory, but evidence cannot prove a theory correct, because other evidence, yet to be discovered, may exist that is inconsistent with the theory.
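
The asymmetry can be shown with a toy sketch in Python (hypothetical data, chosen only for illustration): a universal hypothesis such as "every observed value is positive" is refuted by a single counterexample, while any finite run of confirming observations leaves it unproven.

    # Toy illustration of the asymmetry between falsification and confirmation.

    def consistent_with_hypothesis(observations):
        # The universal hypothesis: "every observed value is positive".
        return all(x > 0 for x in observations)

    print(consistent_with_hypothesis([3, 7, 12]))      # True: confirmed so far, but not proven
    print(consistent_with_hypothesis([3, 7, -2, 12]))  # False: one counterexample refutes it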

Philosophical versus scientific views

In the 20th century, many philosophers investigated the logical relationship between evidence statements and hypotheses, whereas scientists tended to focus on how the data used for statistical inference are generated. But according to philosopher Deborah Mayo, by the end of the 20th century philosophers had come to understand that "there are key features of scientific practice that are overlooked or misdescribed by all such logical accounts of evidence, whether hypothetico-deductive, Bayesian, or instantiationist".

There were a variety of 20th-century philosophical approaches to decide whether an observation may be considered evidence; many of these focused on the relationship between the evidence and the hypothesis. In the 1950s, Rudolf Carnap recommended distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis). A 1983 anthology edited by Peter Achinstein provided a concise presentation by prominent philosophers on scientific evidence, including Carl Hempel (on the logic of confirmation), R. B. Braithwaite (on the structure of a scientific system), Norwood Russell Hanson (on the logic of discovery), Nelson Goodman (of grue fame, on a theory of projection), Rudolf Carnap (on the concept of confirming evidence), Wesley C. Salmon (on confirmation and relevance), and Clark Glymour (on relevant evidence). In 1990, William Bechtel provided four factors (clarity of the data, replication by others, consistency with results arrived at by alternative methods, and consistency with plausible theories of mechanisms) that biologists used to settle controversies about procedures and reliability of evidence.

In 2001, Achinstein published his own book on the subject, The Book of Evidence, in which, among other topics, he distinguished between four concepts of evidence: epistemic-situation evidence (evidence relative to a given epistemic situation), subjective evidence (considered to be evidence by a particular person at a particular time), veridical evidence (a good reason to believe that a hypothesis is true), and potential evidence (a good reason to believe that a hypothesis is highly probable). Achinstein defined all his concepts of evidence in terms of potential evidence, since any other kind of evidence must at least be potential evidence. He argued that scientists mainly seek veridical evidence but also use the other concepts, which rely on a distinctive concept of probability, and he contrasted this concept of probability with previous probabilistic theories of evidence such as the Bayesian, Carnapian, and frequentist accounts.

Simplicity is one common philosophical criterion for scientific theories. Based on the philosophical assumption of the strong Church–Turing thesis, a mathematical criterion for the evaluation of evidence has been conjectured that resembles Occam's razor: the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized." However, some philosophers (including Richard Boyd, Mario Bunge, John D. Norton, and Elliott Sober) have adopted a skeptical or deflationary view of the role of simplicity in science, arguing in various ways that its importance has been overemphasized.
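
Read as a minimum-description-length criterion (a paraphrase of the quoted principle, taking the logarithms as code lengths, i.e. negative logs, and writing \mathbf{m} for the algorithmic universal prior), the recommendation is to choose the model M for data D that minimizes the total code length:

  \min_{M} \Bigl[ -\log \mathbf{m}(M) \;-\; \log P(D \mid M) \Bigr]

so the simplest model that still accounts well for the data scores best, in the spirit of Occam's razor.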

Emphasis on hypothesis testing as the essence of science is prevalent among both scientists and philosophers. However, philosophers have noted that testing hypotheses by confronting them with new evidence does not account for all the ways that scientists use evidence. For example, when Geiger and Marsden scattered alpha particles through thin gold foil, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. Rutherford used the data to develop a new atomic model, not only to test an existing hypothesis; such use of evidence to produce new hypotheses is sometimes called abduction (following C. S. Peirce). Social-science methodologist Donald T. Campbell, who emphasized hypothesis testing throughout his career, later increasingly emphasized that the essence of science is "not experimentation per se" but instead the iterative competition of "plausible rival hypotheses", a process that at any given phase may start from evidence or may start from hypothesis. Other scientists and philosophers have emphasized the central role of questions and problems in the use of data and hypotheses.

Concept of scientific proof

While the phrase "scientific proof" is often used in the popular media, many scientists and philosophers have argued that there is really no such thing as infallible proof. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by 'proof' an argument which establishes once and for ever the truth of a theory." Albert Einstein said:

The scientific theorist is not to be envied. For Nature, or more precisely experiment, is an inexorable and not very friendly judge of his work. It never says "Yes" to a theory. In the most favorable cases it says "Maybe", and in the great majority of cases simply "No". If an experiment agrees with a theory it means for the latter "Maybe", and if it does not agree it means "No". Probably every theory will someday experience its "No"—most theories, soon after conception.

However, in contrast to the ideal of infallible proof, in practice theories may be said to be proved according to some standard of proof used in a given inquiry. In this limited sense, proof is the high degree of acceptance of a theory following a process of inquiry and critical evaluation according to the standards of a scientific community.

Empirical evidence

From Wikipedia, the free encyclopedia

Empirical evidence for a proposition is evidence, i.e. what supports or counters this proposition, that is constituted by or accessible to sense experience or experimental procedure. Empirical evidence is of central importance to the sciences and plays a role in various other fields, like epistemology and law.

There is no general agreement on how the terms evidence and empirical are to be defined. Often different fields work with quite different conceptions. In epistemology, evidence is what justifies beliefs or what determines whether holding a certain belief is rational. This is only possible if the evidence is possessed by the person, which has prompted various epistemologists to conceive evidence as private mental states like experiences or other beliefs. In philosophy of science, on the other hand, evidence is understood as that which confirms or disconfirms scientific hypotheses and arbitrates between competing theories. For this role, it is important that evidence is public and uncontroversial, like observable physical objects or events and unlike private mental states, so that evidence may foster scientific consensus.

The term empirical comes from Greek ἐμπειρία empeiría, i.e. 'experience'. In this context, it is usually understood as what is observable, in contrast to unobservable or theoretical objects. It is generally accepted that unaided perception constitutes observation, but it is disputed to what extent objects accessible only to aided perception, like bacteria seen through a microscope or positrons detected in a cloud chamber, should be regarded as observable.

Empirical evidence is essential to a posteriori knowledge or empirical knowledge, knowledge whose justification or falsification depends on experience or experiment. A priori knowledge, on the other hand, is seen either as innate or as justified by rational intuition and therefore as not dependent on empirical evidence. Rationalism fully accepts that there is a priori knowledge; empiricism either rejects it outright or accepts it only in a restricted way, as knowledge of relations between our concepts but not as pertaining to the external world.

Scientific evidence is closely related to empirical evidence but not all forms of empirical evidence meet the standards dictated by scientific methods. Sources of empirical evidence are sometimes divided into observation and experimentation, the difference being that only experimentation involves manipulation or intervention: phenomena are actively created instead of being passively observed.

Background

The concept of evidence is of central importance in epistemology and in philosophy of science but plays different roles in these two fields. In epistemology, evidence is what justifies beliefs or what determines whether holding a certain doxastic attitude is rational. For example, the olfactory experience of smelling smoke justifies or makes it rational to hold the belief that something is burning. It is usually held that for justification to work, the evidence has to be possessed by the believer. The most straightforward way to account for this type of evidence possession is to hold that evidence consists of the private mental states possessed by the believer.

Some philosophers restrict evidence even further, for example, to only conscious, propositional or factive mental states. Restricting evidence to conscious mental states has the implausible consequence that many simple everyday beliefs would be unjustified. This is why it is more common to hold that all kinds of mental states, including stored but currently unconscious beliefs, can act as evidence. Various of the roles played by evidence in reasoning, for example, in explanatory, probabilistic and deductive reasoning, suggest that evidence has to be propositional in nature, i.e. that it is correctly expressed by propositional attitude verbs like "believe" together with a that-clause, like "that something is burning". But this propositional view runs counter to the common practice of treating non-propositional sense experiences, like bodily pains, as evidence. Its defenders sometimes combine it with the view that evidence has to be factive, i.e. that only attitudes towards true propositions constitute evidence. In this view, there is no misleading evidence. The olfactory experience of smoke would count as evidence if it was produced by a fire but not if it was produced by a smoke generator. This position has problems in explaining why it is still rational for the subject to believe that there is a fire even though the olfactory experience cannot be considered evidence.

In philosophy of science, evidence is understood as that which confirms or disconfirms scientific hypotheses and arbitrates between competing theories. Measurements of Mercury's "anomalous" orbit, for example, constitute evidence that plays the role of neutral arbiter between Newton's and Einstein's theory of gravitation by confirming Einstein's theory. For scientific consensus, it is central that evidence is public and uncontroversial, like observable physical objects or events and unlike private mental states. This way it can act as a shared ground for proponents of competing theories. Two issues threatening this role are the problem of underdetermination and theory-ladenness. The problem of underdetermination concerns the fact that the available evidence often provides equal support to each of the competing theories and therefore cannot arbitrate between them. Theory-ladenness refers to the idea that evidence already includes theoretical assumptions. These assumptions can hinder it from acting as neutral arbiter. It can also lead to a lack of shared evidence if different scientists do not share these assumptions. Thomas Kuhn is an important advocate of the position that theory-ladenness in relation to scientific paradigms plays a central role in science.

Definition

A thing is evidence for a proposition if it epistemically supports this proposition or indicates that the supported proposition is true. Evidence is empirical if it is constituted by or accessible to sensory experience. There are various competing theories about the exact definition of the terms evidence and empirical. Different fields, like epistemology, the sciences or legal systems, often associate different concepts with these terms. An important distinction among theories of evidence is whether they identify evidence with private mental states or with public physical objects. Concerning the term empirical, there is a dispute about where to draw the line between observable or empirical objects in contrast to unobservable or merely theoretical objects.

The traditional view proposes that evidence is empirical if it is constituted by or accessible to sensory experience. This involves experiences arising from the stimulation of the sense organs, like visual or auditory experiences, but the term is often used in a wider sense including memories and introspection. It is usually seen as excluding purely intellectual experiences, like rational insights or intuitions used to justify basic logical or mathematical principles. The terms empirical and observable are closely related and sometimes used as synonyms.

There is an active debate in contemporary philosophy of science as to what should be regarded as observable or empirical in contrast to unobservable or merely theoretical objects. There is general consensus that everyday objects like books or houses are observable since they are accessible via unaided perception, but disagreement starts for objects that are only accessible through aided perception. This includes using telescopes to study distant galaxies, microscopes to study bacteria or using cloud chambers to study positrons. So the question is whether distant galaxies, bacteria or positrons should be regarded as observable or merely theoretical objects. Some even hold that any measurement process of an entity should be considered an observation of this entity. So in this sense, the interior of the sun is observable since neutrinos originating there can be detected. The difficulty with this debate is that there is a continuity of cases going from looking at something with the naked eye, through a window, through a pair of glasses, through a microscope, etc. Because of this continuity, drawing the line between any two adjacent cases seems to be arbitrary. One way to avoid these difficulties is to hold that it is a mistake to identify the empirical with what is observable or sensible. Instead, it has been suggested that empirical evidence can include unobservable entities as long as they are detectable through suitable measurements. A problem with this approach is that it is rather far from the original meaning of "empirical", which contains the reference to experience.

Related concepts

Knowledge a posteriori and a priori

Knowledge or the justification of a belief is said to be a posteriori if it is based on empirical evidence. A posteriori refers to what depends on experience (what comes after experience), in contrast to a priori, which stands for what is independent of experience (what comes before experience). For example, the proposition that "all bachelors are unmarried" is knowable a priori since its truth only depends on the meanings of the words used in the expression. The proposition "some bachelors are happy", on the other hand, is only knowable a posteriori since it depends on experience of the world as its justifier. Immanuel Kant held that the difference between a posteriori and a priori is tantamount to the distinction between empirical and non-empirical knowledge.

Two central questions for this distinction concern the relevant sense of "experience" and of "dependence". The paradigmatic justification of knowledge a posteriori consists in sensory experience, but other mental phenomena, like memory or introspection, are also usually included in it. But purely intellectual experiences, like rational insights or intuitions used to justify basic logical or mathematical principles, are normally excluded from it. There are different senses in which knowledge may be said to depend on experience. In order to know a proposition, the subject has to be able to entertain this proposition, i.e. possess the relevant concepts. For example, experience is necessary to entertain the proposition "if something is red all over then it is not green all over" because the terms "red" and "green" have to be acquired this way. But the sense of dependence most relevant to empirical evidence concerns the status of justification of a belief. So experience may be needed to acquire the relevant concepts in the example above, but once these concepts are possessed, no further experience providing empirical evidence is needed to know that the proposition is true, which is why it is considered to be justified a priori.

Empiricism and rationalism

In its strictest sense, empiricism is the view that all knowledge is based on experience or that all epistemic justification arises from empirical evidence. This stands in contrast to the rationalist view, which holds that some knowledge is independent of experience, either because it is innate or because it is justified by reason or rational reflection alone. Expressed through the distinction between knowledge a priori and a posteriori from the previous section, rationalism affirms that there is knowledge a priori, which is denied by empiricism in this strict form. One difficulty for empiricists is to account for the justification of knowledge pertaining to fields like mathematics and logic, for example, that 3 is a prime number or that modus ponens is a valid form of deduction. The difficulty is due to the fact that there seems to be no good candidate of empirical evidence that could justify these beliefs. Such cases have prompted empiricists to allow for certain forms of knowledge a priori, for example, concerning tautologies or relations between our concepts. These concessions preserve the spirit of empiricism insofar as the restriction to experience still applies to knowledge about the external world. In some fields, like metaphysics or ethics, the choice between empiricism and rationalism makes a difference not just for how a given claim is justified but for whether it is justified at all. This is best exemplified in metaphysics, where empiricists tend to take a skeptical position, thereby denying the existence of metaphysical knowledge, while rationalists seek justification for metaphysical claims in metaphysical intuitions.

Scientific evidence

Scientific evidence is closely related to empirical evidence. Some theorists, like Carlos Santana, have argued that there is a sense in which not all empirical evidence constitutes scientific evidence. One reason for this is that the standards or criteria that scientists apply to evidence exclude certain evidence that is legitimate in other contexts.[38] For example, anecdotal evidence from a friend about how to treat a certain disease constitutes empirical evidence that this treatment works but would not be considered scientific evidence.[38][39] Others have argued that the traditional empiricist definition of empirical evidence as perceptual evidence is too narrow for much of scientific practice, which uses evidence from various kinds of non-perceptual equipment.[40]

Central to scientific evidence is that it was arrived at by following scientific method in the context of some scientific theory.[41] But people rely on various forms of empirical evidence in their everyday lives that have not been obtained this way and therefore do not qualify as scientific evidence. One problem with non-scientific evidence is that it is less reliable, for example, due to cognitive biases like the anchoring effect,[42] in which information obtained earlier is given more weight, although science done poorly is also subject to such biases, as in the example of p-hacking.[38]

Observation, experimentation and scientific method

In the philosophy of science, it is sometimes held that there are two sources of empirical evidence: observation and experimentation.[43] The idea behind this distinction is that only experimentation involves manipulation or intervention: phenomena are actively created instead of being passively observed.[44][45][46] For example, inserting viral DNA into a bacterium is a form of experimentation while studying planetary orbits through a telescope belongs to mere observation.[47] In these cases, the mutated DNA was actively produced by the biologist while the planetary orbits are independent of the astronomer observing them. Applied to the history of science, it is sometimes held that ancient science is mainly observational while the emphasis on experimentation is only present in modern science and responsible for the scientific revolution.[44] This is sometimes phrased through the expression that modern science actively "puts questions to nature".[47] This distinction also underlies the categorization of sciences into experimental sciences, like physics, and observational sciences, like astronomy. While the distinction is relatively intuitive in paradigmatic cases, it has proven difficult to give a general definition of "intervention" applying to all cases, which is why it is sometimes outright rejected.[47][44]

Empirical evidence is required for a hypothesis to gain acceptance in the scientific community. Normally, this validation is achieved by the scientific method of forming a hypothesis, experimental design, peer review, reproduction of results, conference presentation, and journal publication. This requires rigorous communication of hypothesis (usually expressed in mathematics), experimental constraints and controls (expressed in terms of standard experimental apparatus), and a common understanding of measurement. In the scientific context, the term semi-empirical is used for qualifying theoretical methods that use, in part, basic axioms or postulated scientific laws and experimental results. Such methods are opposed to theoretical ab initio methods, which are purely deductive and based on first principles. Typical examples of both ab initio and semi-empirical methods can be found in computational chemistry.

 

Objective idealism

From Wikipedia, the free encyclopedia

Objective idealism is a philosophical theory that affirms the ideal and spiritual nature of the world and conceives of the idea of which the world is made as the objective and rational form in reality rather than as subjective content of the mind or mental representation. Objective idealism thus differs both from materialism, which holds that the external world is independent of cognizing minds and that mental processes and ideas are by-products of physical events, and from subjective idealism, which conceives of reality as totally dependent on the consciousness of the subject and therefore relative to the subject itself.

Objective idealism starts with Plato's theory of forms, which maintains that objectively existing but non-material "ideas" give form to reality, thus shaping its basic building blocks.

Within German idealism, objective idealism identifies with the philosophy of Friedrich Schelling. According to Schelling, the rational or spiritual elements of reality are supposed to give conceptual structure to reality and ultimately constitute reality, to the point that nature and mind, matter and concept, are essentially identical: their distinction is merely psychological and depends on our predisposition to distinguish the "outside us" (nature, world) from the "in us" (mind, spirit). Within that tradition of philosophical thought, the entire world manifests itself through ideas and is governed by purposes or ends: regardless of the existence of a self-conscious subject, all reality is a manifestation of reason.

The philosopher Charles Sanders Peirce defined his own version of objective idealism as follows:

The one intelligible theory of the universe is that of objective idealism, that matter is effete mind, inveterate habits becoming physical laws (Peirce, CP 6.25).

By "objective idealism", Pierce meant that material objects such as organisms have evolved out of mind, that is, out of feelings ("such as pain, blue, cheerfulness") that are immediately present to consciousness. Contrary to Hegel, who identified mind with conceptual thinking or reason, Pierce identified it with feeling, and he claimed that at the origins of the world there was "a chaos of unpersonalized feelings", i.e., feelings that were not located in any individual subject. Therefore, in the 1890s Pierce's philosophy referred to itself as subjective idealism because it held that the mind comes first and the world is essentially mind (idealism) and the mind is independent of individuals (objectivism).

Objective idealism has also been defined as a form of metaphysical idealism that accepts naïve realism (the view that empirical objects exist objectively) but rejects epiphenomenalist materialism (according to which the mind and spiritual values have emerged due to material causes). Subjective idealism, by contrast, denies that material objects exist independently of human perception and thus stands opposed to both realism and naturalism.

Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser A delayed-cho...