Monday, June 12, 2023

Constitutional monarchy

From Wikipedia, the free encyclopedia
 
A constitutional monarchy, parliamentary monarchy, or democratic monarchy is a form of monarchy in which the monarch exercises their authority in accordance with a constitution and is not alone in making decisions. Constitutional monarchies differ from absolute monarchies (in which a monarch is the only decision-maker) in that they are bound to exercise powers and authorities within limits prescribed by an established legal framework.

Constitutional monarchies range from countries such as Liechtenstein, Monaco, Morocco, Jordan, Kuwait, Bahrain and Bhutan, where the constitution grants substantial discretionary powers to the sovereign, to countries such as Australia, the United Kingdom, Canada, the Netherlands, Spain, Belgium, Sweden, Malaysia, Thailand, Cambodia, and Japan, where the monarch retains significantly less, if any, personal discretion in the exercise of their authority.

World's states colored by form of government.

The three constitutional monarchs of the Scandinavian kingdoms of Sweden, Norway and Denmark, gathered in November 1917 in Oslo. From left to right: Gustaf V, Haakon VII and Christian X.

A meeting of the Japanese Privy Council in 1946, led by Emperor Shōwa.

Constitutional monarchy may refer to a system in which the monarch acts as a non-party political head of state under the constitution, whether codified or uncodified. While most monarchs may hold formal authority and the government may legally operate in the monarch's name, in the form typical in Europe the monarch no longer personally sets public policy or chooses political leaders. Political scientist Vernon Bogdanor, paraphrasing Thomas Macaulay, has defined a constitutional monarch as "A sovereign who reigns but does not rule".

In addition to acting as a visible symbol of national unity, a constitutional monarch may hold formal powers such as dissolving parliament or giving royal assent to legislation. However, such powers generally may only be exercised strictly in accordance with either written constitutional principles or unwritten constitutional conventions, rather than any personal political preferences of the sovereign. In The English Constitution, British political theorist Walter Bagehot identified three main political rights which a constitutional monarch may freely exercise: the right to be consulted, the right to encourage, and the right to warn. Many constitutional monarchies still retain significant authorities or political influence, however, such as through certain reserve powers, and may also play an important political role.

The United Kingdom and the other Commonwealth realms are all constitutional monarchies in the Westminster system of constitutional governance. Two constitutional monarchies – Malaysia and Cambodia – are elective monarchies, in which the ruler is periodically selected by a small electoral college.

Strongly limited constitutional monarchies, such as the United Kingdom and Australia, have been referred to as crowned republics by writers H. G. Wells and Glenn Patmore.

The concept of a semi-constitutional monarchy identifies constitutional monarchies in which the monarch retains substantial powers, on a par with a president in a presidential or semi-presidential system. As a result, constitutional monarchies in which the monarch has a largely ceremonial role may also be referred to as 'parliamentary monarchies' to differentiate them from semi-constitutional monarchies.

History

The oldest constitutional monarchy, dating back to ancient times, was that of the Hittites, an ancient Anatolian people of the Bronze Age whose king had to share his authority with an assembly, called the Panku, which was the equivalent of a modern-day deliberative assembly or legislature. Members of the Panku came from scattered noble families and acted as representatives of their subjects within a federal-style arrangement.

Constitutional and absolute monarchy

England, Scotland and the United Kingdom

In the Kingdom of England, the Glorious Revolution of 1688 furthered constitutional monarchy, restricted by laws such as the Bill of Rights 1689 and the Act of Settlement 1701, although the first constitutional limits on the monarch had been enacted with Magna Carta in 1215. At the same time, in Scotland, the Convention of Estates enacted the Claim of Right Act 1689, which placed similar limits on the Scottish monarchy.

Queen Anne was the last monarch to veto an Act of Parliament when, on 11 March 1708, she blocked the Scottish Militia Bill. However, Hanoverian monarchs continued to selectively dictate government policies. For instance, King George III constantly blocked Catholic emancipation, eventually precipitating the resignation of William Pitt the Younger as prime minister in 1801. The sovereign's influence on the choice of prime minister gradually declined over this period. King William IV was the last monarch to dismiss a prime minister, when in 1834 he removed Lord Melbourne as a result of Melbourne's choice of Lord John Russell as Leader of the House of Commons. Queen Victoria was the last monarch to exercise real personal power, but this diminished over the course of her reign. In 1839, she became the last sovereign to keep a prime minister in power against the will of Parliament, when the Bedchamber Crisis resulted in the retention of Lord Melbourne's administration. By the end of her reign, however, she could do nothing to block the (to her) unacceptable premierships of William Gladstone, although she still exercised power in appointments to the Cabinet. For example, in 1886 she vetoed Gladstone's choice of Hugh Childers as War Secretary in favour of Sir Henry Campbell-Bannerman.

Today, the role of the British monarch is by convention effectively ceremonial. The British Parliament and the Government – chiefly in the office of Prime Minister of the United Kingdom – exercise their powers under "Royal (or Crown) Prerogative": on behalf of the monarch and through powers still formally possessed by the monarch.

No person may accept significant public office without swearing an oath of allegiance to the King. With few exceptions, the monarch is bound by constitutional convention to act on the advice of the Government.

Continental Europe

Poland developed the first constitution for a monarchy in continental Europe, with the Constitution of 3 May 1791; it was the second single-document constitution in the world, coming just after the first, the republican Constitution of the United States. Constitutional monarchy also occurred briefly in the early years of the French Revolution, but became much more widespread afterwards. Napoleon Bonaparte is considered the first monarch to proclaim himself an embodiment of the nation, rather than a divinely appointed ruler; this interpretation of monarchy is germane to continental constitutional monarchies. The German philosopher Georg Wilhelm Friedrich Hegel, in his Elements of the Philosophy of Right (1820), gave the concept a philosophical justification that concurred with evolving contemporary political theory and with the Protestant Christian view of natural law. Hegel's forecast of a constitutional monarch with very limited powers, whose function is to embody the national character and provide constitutional continuity in times of emergency, was reflected in the development of constitutional monarchies in Europe and Japan.

Executive monarchy versus ceremonial monarchy

At least two different types of constitutional monarchy exist in the modern world: executive and ceremonial. In executive monarchies, the monarch wields significant (though not absolute) power, and the monarchy under this system of government is a powerful political (and social) institution. By contrast, in ceremonial monarchies, the monarch holds little or no actual power or direct political influence, though they frequently retain a great deal of social and cultural influence.

Ceremonial and executive monarchy should not be confused with democratic and non-democratic monarchical systems. For example, in Liechtenstein and Monaco, the ruling monarchs wield significant executive power. However, while they are theoretically very powerful within their small states, they are not absolute monarchs and have very limited de facto power compared with monarchs in the Islamic world, which is why their countries are generally considered liberal democracies. For instance, when Hereditary Prince Alois of Liechtenstein threatened to veto a referendum to legalize abortion in 2011, it came as a surprise because the prince had not vetoed any law for over 30 years (in the end, the referendum failed).

Modern constitutional monarchy

As originally conceived, a constitutional monarch was head of the executive branch and quite a powerful figure even though their power was limited by the constitution and the elected parliament. Some of the framers of the U.S. Constitution may have envisioned the president as an elected constitutional monarch, as the term was then understood, following Montesquieu's account of the separation of powers.

The present-day concept of a constitutional monarchy developed in the United Kingdom, where democratically elected parliaments, and their leader, the prime minister, exercise power, the monarch having ceded power and remaining in a titular position. In many cases the monarchs, while still at the very top of the political and social hierarchy, were given the status of "servants of the people" to reflect the new, egalitarian position. During France's July Monarchy, Louis-Philippe I was styled "King of the French" rather than "King of France".

Following the unification of Germany, Otto von Bismarck rejected the British model. In the constitutional monarchy established under the Constitution of the German Empire which Bismarck inspired, the Kaiser retained considerable actual executive power, while the Imperial Chancellor needed no parliamentary vote of confidence and ruled solely by the imperial mandate. However, this model of constitutional monarchy was discredited and abolished following Germany's defeat in the First World War. Later, Fascist Italy could also be considered a constitutional monarchy, in that there was a king as the titular head of state while actual power was held by Benito Mussolini under a constitution. This eventually discredited the Italian monarchy and led to its abolition in 1946. After the Second World War, surviving European monarchies almost invariably adopted some variant of the constitutional monarchy model originally developed in Britain.

Nowadays a parliamentary democracy that is a constitutional monarchy is considered to differ from one that is a republic only in detail rather than in substance. In both cases, the titular head of state—monarch or president—serves the traditional role of embodying and representing the nation, while the government is carried on by a cabinet composed predominantly of elected Members of Parliament.

However, three important factors distinguish monarchies such as the United Kingdom from systems where greater power might otherwise rest with Parliament. These are:

  • the Royal Prerogative, under which the monarch may exercise power under certain very limited circumstances;
  • Sovereign Immunity, under which the monarch may do no wrong under the law because the responsible government is instead deemed accountable; and
  • the immunity of the monarch from some taxation or restrictions on property use.

Other privileges may be nominal or ceremonial (e.g. where the executive, judiciary, police or armed forces act on the authority of or owe allegiance to the Crown).

Today, slightly more than a quarter of constitutional monarchies are Western European countries, including the United Kingdom, Spain, the Netherlands, Belgium, Norway, Denmark, Luxembourg, Monaco, Liechtenstein and Sweden. However, the two most populous constitutional monarchies in the world are in Asia: Japan and Thailand. In these countries, the prime minister holds the day-to-day powers of governance, while the monarch retains residual (but not always insignificant) powers. The powers of the monarch differ between countries. In Denmark and Belgium, for example, the monarch formally appoints a representative to preside over the creation of a coalition government following a parliamentary election, while in Norway the King chairs special meetings of the cabinet.

In nearly all cases, the monarch is still the nominal chief executive, but is bound by convention to act on the advice of the Cabinet. Only a few monarchies (most notably Japan and Sweden) have amended their constitutions so that the monarch is no longer even the nominal chief executive.

There are fifteen constitutional monarchies under King Charles III, known as Commonwealth realms. Unlike some of their continental European counterparts, the monarch and his governors-general in the Commonwealth realms hold significant "reserve" or "prerogative" powers, to be wielded in times of extreme emergency or constitutional crisis, usually to uphold parliamentary government. For example, during the 1975 Australian constitutional crisis, the Governor-General dismissed the Australian Prime Minister, Gough Whitlam, after the Australian Senate had threatened to block the Government's budget by refusing to pass the necessary appropriation bills. On 11 November 1975, Whitlam intended to call a half-Senate election to try to break the deadlock, but when he sought the Governor-General's approval of the election, the Governor-General instead dismissed him as Prime Minister and shortly afterwards installed the Leader of the Opposition, Malcolm Fraser, in his place. Acting quickly before all parliamentarians became aware of the change of government, Fraser and his allies secured passage of the appropriation bills, and the Governor-General dissolved Parliament for a double-dissolution election. Fraser and his government were returned with a massive majority. This led to much speculation among Whitlam's supporters as to whether this use of the Governor-General's reserve powers was appropriate, and whether Australia should become a republic. Among supporters of constitutional monarchy, however, the event confirmed the monarchy's value as a source of checks and balances against elected politicians who might seek powers in excess of those conferred by the constitution, and ultimately as a safeguard against dictatorship.

In Thailand's constitutional monarchy, the monarch is recognized as the Head of State, Head of the Armed Forces, Upholder of the Buddhist Religion, and Defender of the Faith. The previous king, Bhumibol Adulyadej, was the longest-reigning monarch in the world and in all of Thailand's history before his death on 13 October 2016. Bhumibol reigned through several political changes in the Thai government, playing an influential role in each incident and often acting as mediator between disputing political opponents. (See Bhumibol's role in Thai politics.) Among the powers retained by the Thai monarch under the constitution, the law of lèse-majesté protects the image of the monarch, carries strict criminal penalties for violators, and enables him to play a role in politics. The Thai people were generally reverent of Bhumibol, and much of his social influence arose from this reverence and from the socioeconomic improvement efforts undertaken by the royal family.

In the United Kingdom, a frequent debate centres on when it is appropriate for a British monarch to act. When a monarch does act, political controversy can often ensue, partially because the neutrality of the crown is seen to be compromised in favour of a partisan goal, while some political scientists champion the idea of an "interventionist monarch" as a check against possible illegal action by politicians. For instance, the monarch of the United Kingdom can theoretically exercise an absolute veto over legislation by withholding royal assent. However, no monarch has done so since 1708, and it is widely believed that this and many of the monarch's other political powers are lapsed powers.

List of current constitutional monarchies

There are currently 43 monarchies worldwide.

Ceremonial constitutional monarchies

Executive constitutional monarchies

Former constitutional monarchies

Unusual constitutional monarchies

Sunday, June 11, 2023

Argument from reason

From Wikipedia, the free encyclopedia

The argument from reason is an argument against metaphysical naturalism and for the existence of God (or at least a supernatural being that is the source of human reason). The best-known defender of the argument is C. S. Lewis. Lewis first defended the argument at length in his 1947 book, Miracles: A Preliminary Study. In the second edition of Miracles (1960), Lewis substantially revised and expanded the argument.

Contemporary defenders of the argument from reason include Alvin Plantinga, Victor Reppert and William Hasker.

The argument

Metaphysical naturalism is the view that nature as studied by the natural sciences is all that exists. Naturalists deny the existence of a supernatural God, souls, an afterlife, or anything supernatural. Nothing exists outside or beyond the physical universe.

The argument from reason seeks to show that naturalism is self-refuting, or otherwise false and indefensible.

According to Lewis,

One absolutely central inconsistency ruins [the naturalistic worldview].... The whole picture professes to depend on inferences from observed facts. Unless inference is valid, the whole picture disappears.... [U]nless Reason is an absolute--all is in ruins. Yet those who ask me to believe this world picture also ask me to believe that Reason is simply the unforeseen and unintended by-product of mindless matter at one stage of its endless and aimless becoming. Here is flat contradiction. They ask me at the same moment to accept a conclusion and to discredit the only testimony on which that conclusion can be based.

— C. S. Lewis, "Is Theology Poetry?", The Weight of Glory and Other Addresses

More precisely, Lewis's argument from reason can be stated as follows:

1. No belief is rationally inferred if it can be fully explained in terms of nonrational causes.

Support: Reasoning requires insight into logical relations. A process of reasoning (P therefore Q) is rational only if the reasoner sees that Q follows from, or is supported by, P, and accepts Q on that basis. Thus, reasoning is trustworthy (or "valid", as Lewis sometimes says) only if it involves a special kind of causality, namely, rational insight into logical implication or evidential support. If a bit of reasoning can be fully explained by nonrational causes, such as fibers firing in the brain or a bump on the head, then the reasoning is not reliable, and cannot yield knowledge.

Consider this example: Person A refuses to go near the neighbor's dog because he had a bad childhood experience with dogs. Person B refuses to go near the neighbor's dog because one month ago he saw it attack someone. Both have given a reason for staying away from the dog, but person A's reason is the result of nonrational causes, while person B has given an explanation for his behavior following from rational inference (animals exhibit patterns of behavior; these patterns are likely to be repeated; this dog has exhibited aggression towards someone who approached it; there is a good chance that the dog may exhibit the same behavior towards me if I approach it).

Consider a second example: person A says that he is afraid to climb to the 8th story of a bank building because he and humans in general have a natural fear of heights resulting from the processes of evolution and natural selection. He has given an explanation of his fear, but since his fear results from nonrational causes (natural selection), his argument does not follow from logical inference.

2. If naturalism is true, then all beliefs can be fully explained in terms of nonrational causes.

Support: Naturalism holds that nature is all that exists, and that all events in nature can in principle be explained without invoking supernatural or other nonnatural causes. Standardly, naturalists claim that all events must have physical causes, and that human thoughts can ultimately be explained in terms of material causes or physical events (such as neurochemical events in the brain) that are nonrational.

3. Therefore, if naturalism is true, then no belief is rationally inferred (from 1 and 2).

4. We have good reason to accept naturalism only if it can be rationally inferred from good evidence.

5. Therefore, there is not, and cannot be, good reason to accept naturalism.

In short, naturalism undercuts itself. If naturalism is true, then we cannot sensibly believe it or virtually anything else.
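
The structure of this argument can be displayed schematically (a sketch in notation introduced here for illustration, not found in Lewis: $N$ for naturalism, $E(b)$ for "belief $b$ is fully explained by nonrational causes", and $R(b)$ for "belief $b$ is rationally inferred"):

$$
\begin{aligned}
&1.\quad \forall b\,\bigl(E(b) \rightarrow \neg R(b)\bigr)\\
&2.\quad N \rightarrow \forall b\,E(b)\\
&3.\quad N \rightarrow \forall b\,\neg R(b) \quad \text{(from 1 and 2)}\\
&4.\quad \text{$N$ is reasonably accepted only if } R(N)\\
&5.\quad \text{Therefore, there can be no good reason to accept } N. \quad \text{(from 3 and 4)}
\end{aligned}
$$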

In some versions of the argument from reason, Lewis extends the argument to defend a further conclusion: that human reason depends on an eternal, self-existent rational Being (God). This extension of the argument from reason states:

1. Since everything in nature can be wholly explained in terms of nonrational causes, human reason (more precisely, the power of drawing conclusions based solely on the rational cause of logical insight) must have a source outside of nature.

2. If human reason came from non-reason it would lose all rational credentials and would cease to be reason.

3. So, human reason cannot come from non-reason (from 2).

4. So human reason must come from a source outside nature that is itself rational (from 1 and 3).

5. This supernatural source of reason may itself be dependent on some further source of reason, but a chain of such dependent sources cannot go on forever. Eventually, we must reason back to the existence of an eternal, non-dependent source of human reason.

6. Therefore, there exists an eternal, self-existent, rational Being who is the ultimate source of human reason. This Being we call God (from 4-5). (Lewis, Miracles, chap. 4)

Anscombe's criticism

On 2 February 1948, Oxford philosopher Elizabeth Anscombe read a paper to the Oxford Socratic Club criticizing the version of the argument from reason contained in the third chapter of Lewis's Miracles.

Her first criticism was against the use of the word "irrational" by Lewis (Anscombe 1981: 225-26). Her point was that there is an important difference between irrational causes of belief, such as wishful thinking, and nonrational causes, such as neurons firing in the brain, that do not obviously lead to faulty reasoning. Lewis accepted the criticism and amended the argument, basing it on the concept of nonrational causes of belief (as in the version provided in this article).

Anscombe's second criticism questioned the intelligibility of Lewis's intended contrast between "valid" and "invalid" reasoning. She wrote: "What can you mean by 'valid' beyond what would be indicated by the explanation you would give for distinguishing between valid and invalid, and what in the naturalistic hypothesis prevents that explanation from being given and from meaning what it does?" (Anscombe 1981: 226) Her point is that it makes no sense to contrast "valid" and "invalid" reasoning unless it is possible for some forms of reasoning to be valid. Lewis later conceded (Anscombe 1981: 231) that "valid" was a bad word for what he had in mind. Lewis didn't mean to suggest that if naturalism is true, no arguments can be given in which the conclusions follow logically from the premises. What he meant is that a process of reasoning is "veridical", that is, reliable as a method of pursuing knowledge and truth, only if it cannot be entirely explained by nonrational causes.

Anscombe's third objection was that Lewis failed to distinguish between different senses of the terms "why", "because", and "explanation", and that what counts as a "full" explanation varies by context (Anscombe 1981: 227-31). In the context of ordinary life, "because he wants a cup of tea" may count as a perfectly satisfactory explanation of why Peter is boiling water. Yet such a purposive explanation would not count as a full explanation (or an explanation at all) in the context of physics or biochemistry. Lewis accepted this criticism, and created a revised version of the argument in which the distinction between "because" in the sense of physical causality, and "because" in the sense of evidential support, became the central point of the argument (this is the version described in this article).

More recent critics have argued that Lewis's argument at best refutes only strict forms of naturalism that seek to explain everything in terms ultimately reducible to physics or purely mechanistic causes. So-called "broad" naturalists, who see consciousness as an "emergent" non-physical property of complex brains, would agree with Lewis that different levels or types of causation exist in nature, and that rational inferences are not fully explainable by nonrational causes.

Other critics have objected that Lewis's argument from reason fails because the causal origins of beliefs are often irrelevant to whether those beliefs are rational, justified, warranted, etc. Anscombe, for example, argues that "if a man has reasons, and they are good reasons, and they are genuinely his reasons, for thinking something—then his thought is rational, whatever causal statements we make about him" (Anscombe 1981: 229). On many widely accepted theories of knowledge and justification, questions of how beliefs were ultimately caused (e.g., at the level of brain neurochemistry) are viewed as irrelevant to whether those beliefs are rational or justified. Some defenders of Lewis claim that this objection misses the mark, because his argument is directed at what he calls the "veridicalness" of acts of reasoning (i.e., whether reasoning connects us with objective reality or truth), rather than with whether any inferred beliefs can be rational or justified in a materialistic world.

Criticism by eliminative materialists

The argument from reason claims that if beliefs, desires, and other contentful mental states cannot be accounted for on naturalism, then naturalism is false. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, cannot be explained on naturalism, and therefore concludes that such entities do not exist. Even if successful, then, the argument from reason rules out only certain forms of naturalism and fails to count against a conception of naturalism which accepts eliminative materialism as the correct scientific account of human cognition.

Criticism by computationalists

Some people think it is easy to refute any argument from reason simply by appealing to the existence of computers. According to the objection, computers reason, they are undeniably physical systems, and yet they are also rational; so whatever incompatibility there might be between mechanism and reason must be illusory. Since computers do not operate on beliefs and desires and yet come to justified conclusions about the world, as in object recognition or proving mathematical theorems, it should be no surprise on naturalism that human brains can do the same. According to John Searle, computation and syntax are observer-relative, but the cognition of the human mind is not observer-relative. Such a position seems to be bolstered by the arguments from the indeterminacy of translation offered by Quine, and by Kripke's skeptical paradox regarding meaning, which support the conclusion that the interpretation of algorithms is observer-relative. However, on a strong physical reading of the Church–Turing thesis, the human brain is a computer, and computationalism is a viable and developing research program in neuroscience for understanding how the brain works. Moreover, any indeterminacy of brain cognition does not entail that human cognitive faculties are unreliable, because natural selection has ensured that they promote the survival of biological organisms, contrary to claims made by the evolutionary argument against naturalism.
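
The computationalist point can be made concrete with a toy example (a sketch invented for illustration, not drawn from the article or from any particular system): a few lines of Python can mechanically chain inferences without any beliefs or desires being involved, yet each derived step is licensed by the rules the program is given.

```python
# A toy forward-chaining inference engine: it repeatedly applies
# modus ponens (if "p" holds and a rule "p -> q" exists, add "q")
# until no new conclusions appear. The process is purely mechanical.
facts = {"socrates_is_a_man"}
rules = [
    ("socrates_is_a_man", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_has_a_finite_lifespan"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['socrates_has_a_finite_lifespan', 'socrates_is_a_man', 'socrates_is_mortal']
```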

Similar views by other thinkers

Philosophers such as Victor Reppert, William Hasker and Alvin Plantinga have expanded on the argument from reason, and credit C.S. Lewis as an important influence on their thinking.

Lewis never claimed that he invented the argument from reason; in fact, he refers to it as a "venerable philosophical chestnut." Early versions of the argument occur in the works of Arthur Balfour (see, e.g., The Foundations of Belief, 1895, chap. 13) and G. K. Chesterton. In Chesterton's 1908 book Orthodoxy, in a chapter titled "The Suicide of Thought", he writes of the "great and possible peril . . . that the human intellect is free to destroy itself.... It is idle to talk always of the alternative of reason and faith. It is an act of faith to assert that our thoughts have any relation to reality at all. If you are merely a sceptic, you must sooner or later ask yourself the question, 'Why should anything go right; even observation and deduction? Why should not good logic be as misleading as bad logic? They are both movements in the brain of a bewildered ape?'"

Similarly, Chesterton asserts that the argument is a fundamental, if unstated, tenet of Thomism in his 1933 book St. Thomas Aquinas: "The Dumb Ox":

Thus, even those who appreciate the metaphysical depth of Thomism in other matters have expressed surprise that he does not deal at all with what many now think the main metaphysical question; whether we can prove that the primary act of recognition of any reality is real. The answer is that St. Thomas recognised instantly, what so many modern sceptics have begun to suspect rather laboriously; that a man must either answer that question in the affirmative, or else never answer any question, never ask any question, never even exist intellectually, to answer or to ask. I suppose it is true in a sense that a man can be a fundamental sceptic, but he cannot be anything else: certainly not even a defender of fundamental scepticism. If a man feels that all the movements of his own mind are meaningless, then his mind is meaningless, and he is meaningless; and it does not mean anything to attempt to discover his meaning. Most fundamental sceptics appear to survive, because they are not consistently sceptical and not at all fundamental. They will first deny everything and then admit something, if for the sake of argument--or often rather of attack without argument. I saw an almost startling example of this essential frivolity in a professor of final scepticism, in a paper the other day. A man wrote to say that he accepted nothing but Solipsism, and added that he had often wondered it was not a more common philosophy. Now Solipsism simply means that a man believes in his own existence, but not in anybody or anything else. And it never struck this simple sophist, that if his philosophy was true, there obviously were no other philosophers to profess it.

In Miracles, Lewis himself quotes J. B. S. Haldane, who appeals to a similar line of reasoning in his 1927 book, Possible Worlds: "If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true ... and hence I have no reason for supposing my brain to be composed of atoms."

Other versions of the argument from reason occur in C.E.M. Joad's Guide to Modern Philosophy (London: Faber, 1933, pp. 58–59), Richard Taylor's Metaphysics (Englewood Cliffs, NJ: Prentice Hall, 3rd ed., 1983, pp. 104–05), and J. P. Moreland's Scaling the Secular City: A Defense of Christianity (Grand Rapids, MI: Baker, 1987, chap. 3).

Peter Kreeft used the argument from reason to create a formulation of the argument from consciousness for the existence of God. He phrased it as follows:

  1. "We experience the universe as intelligible. This intelligibility means that the universe is graspable by intelligence."
  2. "Either this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence, or both intelligibility and intelligence are the products of blind chance."
  3. "Not blind chance."
  4. "Therefore this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence."

He used the argument from reason to affirm the third premise.

Scientific evidence

From Wikipedia, the free encyclopedia

Scientific evidence is evidence that serves to either support or counter a scientific theory or hypothesis, although scientists also use evidence in other ways, such as when applying theories to practical problems. Such evidence is expected to be empirical evidence and interpretable in accordance with scientific methods. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.
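
As a minimal illustration of how statistical analysis quantifies the strength of evidence (a sketch with invented data, assuming the SciPy library is available; none of this comes from the article), a two-sample t-test returns a p-value indicating how surprising the observed difference would be if there were no real effect:

```python
# Compare a treated group with a control group using a two-sample t-test.
# The measurements below are fabricated purely for illustration.
from scipy import stats

control = [9.8, 10.1, 10.0, 9.7, 10.2, 9.9]
treated = [10.6, 10.9, 10.4, 11.0, 10.7, 10.8]

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is conventionally read as stronger evidence against the
# null hypothesis that the two groups do not differ.
```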

Principles of inference

A person's assumptions or beliefs about the relationship between observations and a hypothesis will affect whether that person takes the observations as evidence. These assumptions or beliefs will also affect how a person utilizes the observations as evidence. For example, the Earth's apparent lack of motion may be taken as evidence for a geocentric cosmology. However, after sufficient evidence is presented for heliocentric cosmology and the apparent lack of motion is explained, the initial observation is strongly discounted as evidence.

When rational observers have different background beliefs, they may draw different conclusions from the same scientific evidence. For example, Priestley, working with phlogiston theory, explained his observations about the decomposition of mercuric oxide using phlogiston. In contrast, Lavoisier, developing the theory of elements, explained the same observations with reference to oxygen. It is not a causal relationship between the observations and the hypothesis that makes an observation count as evidence; rather, that connection is supplied by the person seeking to establish the observations as evidence.

A more formal method of characterizing the effect of background beliefs is Bayesian inference. In Bayesian inference, beliefs are expressed as probabilities indicating one's degree of confidence in them. One starts from an initial probability (a prior), and then updates that probability using Bayes' theorem after observing evidence. As a result, two independent observers of the same event will rationally arrive at different conclusions if their priors (which may reflect previous observations relevant to the conclusion) differ. However, if they are allowed to communicate with each other, they will (under the idealized conditions of Aumann's agreement theorem) end in agreement.
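
As a minimal sketch of such updating (the function and the numbers are invented for illustration, not taken from the article), the following Python snippet applies Bayes' theorem to the same evidence under two different priors:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior probability P(H|E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Two observers see the same evidence (same likelihoods) but start
# from different priors, so they rationally reach different posteriors.
for prior in (0.2, 0.8):
    posterior = update(prior, p_e_given_h=0.9, p_e_given_not_h=0.3)
    print(f"prior = {prior:.2f} -> posterior = {posterior:.2f}")
# prior = 0.20 -> posterior = 0.43
# prior = 0.80 -> posterior = 0.92
```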

The importance of background beliefs in the determination of what observations count as evidence can be illustrated using deductive reasoning, such as syllogisms. If either of the propositions is not accepted as true, the conclusion will not be accepted either: someone who doubts the premise "all ravens are black", for example, will not take the observation that a bird is a raven as evidence that it is black.

Utility of scientific evidence

Philosophers such as Karl R. Popper have provided influential theories of the scientific method within which scientific evidence plays a central role. In summary, on Popper's account a scientist creatively develops a theory that may then be falsified by testing it against evidence or known facts. Popper's theory presents an asymmetry: evidence can prove a theory wrong, by establishing facts that are inconsistent with it, but evidence cannot prove a theory correct, because other evidence, yet to be discovered, may exist that is inconsistent with the theory.
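
The asymmetry can be put in elementary logical form (a standard schematic rendering added here for illustration): if a theory $T$ entails an observation $O$, then observing $\neg O$ refutes $T$ by modus tollens, while observing $O$ does not establish $T$:

$$
\frac{T \rightarrow O \qquad \neg O}{\neg T}\ \text{(valid: modus tollens)}
\qquad\qquad
\frac{T \rightarrow O \qquad O}{T}\ \text{(invalid: affirming the consequent)}
$$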

Philosophical versus scientific views

In the 20th century, many philosophers investigated the logical relationship between evidence statements and hypotheses, whereas scientists tended to focus on how the data used for statistical inference are generated. But according to philosopher Deborah Mayo, by the end of the 20th century philosophers had come to understand that "there are key features of scientific practice that are overlooked or misdescribed by all such logical accounts of evidence, whether hypothetico-deductive, Bayesian, or instantiationist".

There were a variety of 20th-century philosophical approaches to decide whether an observation may be considered evidence; many of these focused on the relationship between the evidence and the hypothesis. In the 1950s, Rudolf Carnap recommended distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis). A 1983 anthology edited by Peter Achinstein provided a concise presentation by prominent philosophers on scientific evidence, including Carl Hempel (on the logic of confirmation), R. B. Braithwaite (on the structure of a scientific system), Norwood Russell Hanson (on the logic of discovery), Nelson Goodman (of grue fame, on a theory of projection), Rudolf Carnap (on the concept of confirming evidence), Wesley C. Salmon (on confirmation and relevance), and Clark Glymour (on relevant evidence). In 1990, William Bechtel provided four factors (clarity of the data, replication by others, consistency with results arrived at by alternative methods, and consistency with plausible theories of mechanisms) that biologists used to settle controversies about procedures and reliability of evidence.

In 2001, Achinstein published his own book on the subject, titled The Book of Evidence, in which, among other topics, he distinguished between four concepts of evidence: epistemic-situation evidence (evidence relative to a given epistemic situation), subjective evidence (considered to be evidence by a particular person at a particular time), veridical evidence (a good reason to believe that a hypothesis is true), and potential evidence (a good reason to believe that a hypothesis is highly probable). Achinstein defined all of his concepts of evidence in terms of potential evidence, since any other kind of evidence must at least be potential evidence. He argued that scientists mainly seek veridical evidence but also use the other concepts of evidence, which rely on a distinctive concept of probability; Achinstein contrasted this concept of probability with previous probabilistic theories of evidence such as the Bayesian, Carnapian, and frequentist theories.

Simplicity is one common philosophical criterion for scientific theories. Based on the philosophical assumption of the strong Church–Turing thesis, a mathematical criterion for the evaluation of evidence has been conjectured, resembling the idea behind Occam's razor that the simplest comprehensive description of the evidence is most likely correct. Formally stated, "the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized." However, some philosophers (including Richard Boyd, Mario Bunge, John D. Norton, and Elliott Sober) have adopted a skeptical or deflationary view of the role of simplicity in science, arguing in various ways that its importance has been overemphasized.
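
In symbols, this minimum-description-length style criterion amounts to choosing the model that minimizes the combined code length of model and data (notation introduced here as a sketch of the idea, not quoted from the article):

$$
M^{*} \;=\; \operatorname*{arg\,min}_{M}\ \bigl[\, -\log P(M) \;-\; \log P(D \mid M) \,\bigr],
$$

where $P(M)$ is the algorithmic universal prior probability of the model $M$ and $P(D \mid M)$ is the probability of the observed data $D$ given the model.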

Emphasis on hypothesis testing as the essence of science is prevalent among both scientists and philosophers. However, philosophers have noted that testing hypotheses by confronting them with new evidence does not account for all the ways that scientists use evidence. For example, when Geiger and Marsden scattered alpha particles through thin gold foil, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. Rutherford used the data to develop a new atomic model, not only to test an existing hypothesis; such use of evidence to produce new hypotheses is sometimes called abduction (following C. S. Peirce). Social-science methodologist Donald T. Campbell, who emphasized hypothesis testing throughout his career, later increasingly stressed that the essence of science is "not experimentation per se" but instead the iterative competition of "plausible rival hypotheses", a process that at any given phase may start from evidence or may start from hypothesis. Other scientists and philosophers have emphasized the central role of questions and problems in the use of data and hypotheses.

Concept of scientific proof

While the phrase "scientific proof" is often used in the popular media, many scientists and philosophers have argued that there is really no such thing as infallible proof. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by 'proof' an argument which establishes once and for ever the truth of a theory." Albert Einstein said:

The scientific theorist is not to be envied. For Nature, or more precisely experiment, is an inexorable and not very friendly judge of his work. It never says "Yes" to a theory. In the most favorable cases it says "Maybe", and in the great majority of cases simply "No". If an experiment agrees with a theory it means for the latter "Maybe", and if it does not agree it means "No". Probably every theory will someday experience its "No"—most theories, soon after conception.

However, in contrast to the ideal of infallible proof, in practice theories may be said to be proved according to some standard of proof used in a given inquiry. In this limited sense, proof is the high degree of acceptance of a theory following a process of inquiry and critical evaluation according to the standards of a scientific community.

Politics of Europe

From Wikipedia, the free encyclopedia ...