
Sunday, October 12, 2025

Egoism

From Wikipedia, the free encyclopedia

Egoism is a philosophy concerned with the role of the self, or ego, as the motivation and goal of one's own action. Different theories of egoism encompass a range of disparate ideas and can generally be categorized into descriptive or normative forms. That is, they may be interested in either describing that people do act in self-interest or prescribing that they should. Other definitions of egoism may instead emphasise action according to one's will rather than one's self-interest, and furthermore posit that this is a truer sense of egoism.

The New Catholic Encyclopedia states of egoism that it "incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable." The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject the idea that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though such a rejection is not present in, for example, the philosophy of Friedrich Nietzsche.

Overview

The term egoism is derived from the French égoïsme, from the Latin ego (first person singular personal pronoun; "I") with the French -ïsme ("-ism").

Descriptive theories

The descriptive variants of egoism are concerned with self-regard as a factual description of human motivation and, in their furthest application, hold that all human motivation stems from the desires and interests of the ego. In these theories, action which is self-regarding may simply be termed egoistic.

The position that people tend to act in their own self-interest is called default egoism, whereas psychological egoism is the position that all motivations are rooted in an ultimately self-serving psyche. In its strong form, psychological egoism holds that even seemingly altruistic actions are only disguised as such and are always self-serving. Its weaker form instead holds that, even if altruistic motivation is possible, the willed action necessarily becomes egoistic in serving one's own will. In contrast to this and to philosophical egoism, biological egoism (also called evolutionary egoism) describes motivations rooted solely in reproductive self-interest (i.e. reproductive fitness). Furthermore, selfish gene theory holds that it is the self-interest of genetic information that conditions human behaviour.

Normative theories

Theories which hold egoism to be normative stipulate that the ego ought to promote its own interests above other values. Where this ought is held to be a pragmatic judgment it is termed rational egoism and where it is held to be a moral judgment it is termed ethical egoism. The Stanford Encyclopedia of Philosophy states that "ethical egoism might also apply to things other than acts, such as rules or character traits" but that such variants are uncommon. Furthermore, conditional egoism is a consequentialist form of ethical egoism which holds that egoism is morally right if it leads to morally acceptable ends. John F. Welsh, in his work Max Stirner's Dialectical Egoism: A New Interpretation, coins the term dialectical egoism to describe an interpretation of the egoist philosophy of Max Stirner as being fundamentally dialectical.

Normative egoism, as in the case of Stirner, need not reject that some modes of behavior are to be valued above others—such as Stirner's affirmation that non-restriction and autonomy are to be most highly valued. Contrary theories, however, may just as easily favour egoistic domination of others.

Theoreticians

Stirner

Stirner's egoism argues that individuals are impossible to fully comprehend, as no understanding of the self can adequately describe the fullness of experience. Stirner's philosophy has been broadly understood as containing traits of both psychological egoism and rational egoism. Unlike the self-interest described by Ayn Rand, Stirner did not address individual self-interest, selfishness, or prescriptions for how one should act. He urged individuals to decide for themselves and fulfill their own egoism.

He believed that everyone was propelled by their own egoism and desires and that those who accepted this—as willing egoists—could freely live out their individual desires, while those who did not—as unwilling egoists—would falsely believe they were fulfilling another cause while secretly fulfilling their own desires for happiness and security. The willing egoist would see that they could act freely, unbound from obedience to sacred but artificial truths like law, rights, morality, and religion. Power is the method of Stirner's egoism and the only justified method of gaining philosophical property. Stirner did not believe in the one-track pursuit of greed, which, as only one aspect of the ego, would lead to being possessed by a cause other than the full ego. He did not believe in natural rights to property and encouraged insurrection against all forms of authority, including disrespect for property.

Nietzsche

I submit that egoism belongs to the essence of a noble soul, I mean the unalterable belief that to a being such as "we," other beings must naturally be in subjection, and have to sacrifice themselves. The noble soul accepts the fact of his egoism without question, and also without consciousness of harshness, constraint, or arbitrariness therein, but rather as something that may have its basis in the primary law of things:—if he sought a designation for it he would say: "It is justice itself."
— Friedrich Nietzsche, Beyond Good and Evil

The philosophy of Friedrich Nietzsche has been linked to forms of both descriptive and normative egoism. Nietzsche, in attacking the widely held moral abhorrence for egoistic action, seeks to free higher human beings from their belief that this morality is good for them. He rejects Christian and Kantian ethics as merely the disguised egoism of slave morality.

The word "good" is from the start in no way necessarily tied up with "unegoistic" actions, as it is in the superstition of those genealogists of morality. Rather, that occurs for the first time with the collapse of aristocratic value judgments, when this entire contrast between "egoistic" and "unegoistic" pressed itself ever more strongly into human awareness—it is, to use my own words, the instinct of the herd which, through this contrast, finally gets its word (and its words). — Friedrich Nietzsche, On the Genealogy of Morals

In his On the Genealogy of Morals, Friedrich Nietzsche traces the origins of master–slave morality to fundamentally egoistic value judgments. In the aristocratic valuation, excellence and virtue come as a form of superiority over the common masses, which the priestly valuation, in ressentiment of power, seeks to invert—where the powerless and pitiable become the moral ideal. This upholding of unegoistic actions is therefore seen as stemming from a desire to reject the superiority or excellence of others. He holds that all normative systems which operate in the role often associated with morality favor the interests of some people, often, though not necessarily, at the expense of others.

Nevertheless, Nietzsche also states in the same book that there is no 'doer' of any acts, be they selfish or not:

...there is no "being" behind doing, effecting, becoming; "the doer" is merely a fiction added to the deed—the deed is everything.(§13)
— Friedrich Nietzsche, On the Genealogy of Morals

Jonas Monte of Brigham Young University argues that Nietzsche doubted whether any 'I' existed in the first place, where the 'I' is defined as "a conscious Ego who commands mental states".


Relation to altruism

In 1851, French philosopher Auguste Comte coined the term altruism (French: altruisme; from Italian altrui, from Latin alteri 'others') as an antonym for egoism. In this sense, altruism defined Comte's position that all self-regard must be replaced with only the regard for others.

While Friedrich Nietzsche does not view altruism as a suitable antonym for egoism, Comte instead states that only two human motivations exist, egoistic and altruistic, and that the two cannot be mediated; that is, one must always predominate over the other. For Comte, the total subordination of the self to altruism is a necessary condition of both social and personal benefit. Nietzsche, rather than rejecting the practice of altruism, warns that although there is neither much altruism nor much equality in the world, their value enjoys almost universal endorsement, notoriously even among those who are its worst enemies in practice. Egoist philosophy commonly views the subordination of the self to altruism as either a form of domination that limits freedom, an unethical or irrational principle, or an extension of some egoistic root cause.

In evolutionary theory, biological altruism is the observed occurrence of an organism acting to the benefit of others at the cost of its own reproductive fitness. While biological egoism does grant that an organism may act to the benefit of others, it describes only such when in accordance with reproductive self-interest. Kin altruism and selfish gene theory are examples of this division. On biological altruism, the Stanford Encyclopedia of Philosophy states: "Contrary to what is often thought, an evolutionary approach to human behaviour does not imply that humans are likely to be motivated by self-interest alone. One strategy by which ‘selfish genes’ may increase their future representation is by causing humans to be non-selfish, in the psychological sense." This is a central topic within contemporary discourse of psychological egoism.

Philosophies of personal identity such as open individualism have implications for egoism and altruism. Daniel Kolak argues that closed individualism, the idea that one's identity consists of a line stretching across time and that a future self exists, is incoherent. Kolak instead argues that personal identity is an illusion, and the "self" doesn't actually exist, similar to the idea of anattā in Buddhist philosophy. Thus, it could be argued that egoism is incoherent, since there is no "self" in the first place. Similar arguments have been made by Derek Parfit in the book Reasons and Persons with ideas such as the teletransportation paradox.

Relation to nihilism

The history of egoist thought has often overlapped with that of nihilism. For example, Max Stirner's rejection of absolutes and abstract concepts often places him among the first philosophical nihilists. The popular description of Stirner as a moral nihilist, however, may fail to encapsulate certain subtleties of his ethical thought. The Stanford Encyclopedia of Philosophy states, "Stirner is clearly committed to the non-nihilistic view that certain kinds of character and modes of behaviour (namely autonomous individuals and actions) are to be valued above all others. His conception of morality is, in this respect, a narrow one, and his rejection of the legitimacy of moral claims is not to be confused with a denial of the propriety of all normative or ethical judgement." Stirner's nihilism may instead be understood as cosmic nihilism. Likewise, both normative and descriptive theories of egoism developed further under Russian nihilism, soon giving rise to rational egoism. The nihilist philosophers Dmitry Pisarev and Nikolay Chernyshevsky were influential in this regard, compounding such forms of egoism with hard determinism.

Max Stirner's philosophy strongly rejects modernity and is highly critical of the increasing dogmatism and oppressive social institutions that embody it. In order that it might be surpassed, egoist principles are upheld as a necessary advancement beyond the modern world. The Stanford Encyclopedia states that Stirner's historical analyses serve to "undermine historical narratives which portray the modern development of humankind as the progressive realisation of freedom, but also to support an account of individuals in the modern world as increasingly oppressed". This critique of humanist discourses especially has linked Stirner to more contemporary poststructuralist thought.

Political egoism

Since normative egoism rejects the moral obligation to subordinate the ego to society-at-large or to a ruling class, it may be predisposed to certain political implications. The Internet Encyclopedia of Philosophy states:

Egoists ironically can be read as moral and political egalitarians glorifying the dignity of each and every person to pursue life as they see fit. Mistakes in securing the proper means and appropriate ends will be made by individuals, but if they are morally responsible for their actions they not only will bear the consequences but also the opportunity for adapting and learning.

Such an ethic, however, need not morally prohibit the egoistic exercise of power over others. On these grounds, Friedrich Nietzsche criticizes egalitarian morality and political projects as unconducive to the development of human excellence. Max Stirner's own conception, the union of egoists, detailed in his work The Ego and Its Own, proposed a form of societal relations whereby limitations on egoistic action are rejected. When posthumously adopted by the anarchist movement, this became the foundation for egoist anarchism.

Stirner's variant of property theory is similarly dialectical, where the concept of ownership is only the personal distinction made between what is one's property and what is not. Consequently, it is the exercise of control over property which constitutes the nonabstract possession of it. In contrast to this, Ayn Rand incorporates capitalist property rights into her egoist theory.

Revolutionary politics

The egoist philosopher Nikolai Gavrilovich Chernyshevskii was the dominant intellectual figure behind the 1860–1917 revolutionary movement in Russia, which culminated in the assassination of Tsar Alexander II in 1881, eight years before Chernyshevskii's own death in 1889. Dmitry Pisarev was a similarly radical influence within the movement, though he did not personally advocate political revolution.

Philosophical egoism has also found wide appeal among anarchist revolutionaries and thinkers, such as John Henry Mackay, Benjamin Tucker, Émile Armand, Han Ryner, Gérard de Lacaze-Duthiers, Renzo Novatore, Miguel Giménez Igualada, and Lev Chernyi. Though he was not himself involved in any revolutionary movements, the entire school of individualist anarchism owes much of its intellectual heritage to Max Stirner.

Egoist philosophy may be misrepresented as a principally revolutionary field of thought. However, neither Hobbesian nor Nietzschean theories of egoism approve of political revolution. Anarchism and revolutionary socialism were also strongly rejected by Ayn Rand and her followers.

Fascism

The philosophies of both Nietzsche and Stirner were heavily appropriated (or possibly expropriated) by fascist and proto-fascist ideologies. Nietzsche in particular has infamously been represented as a predecessor to Nazism and a substantial academic effort was necessary to disassociate his ideas from their aforementioned appropriation.

At first sight, Nazi totalitarianism may seem the opposite of Stirner's radical individualism. But fascism was above all an attempt to dissolve the social ties created by history and replace them by artificial bonds among individuals who were expected to render explicit obedience to the state on grounds of absolute egoism. Fascist education combined the tenets of asocial egoism and unquestioning conformism, the latter being the means by which the individual secured his own niche in the system. Stirner's philosophy has nothing to say against conformism, it only objects to the Ego being subordinated to any higher principle: the egoist is free to adjust to the world if it is clear he will better himself by doing so. His 'rebellion' may take the form of utter servility if it will further his interest; what he must not do is to be bound by 'general' values or myths of humanity. The totalitarian ideal of a barrack-like society from which all real, historical ties have been eliminated is perfectly consistent with Stirner's principles: the egoist, by his very nature, must be prepared to fight under any flag that suits his convenience.

Linguistics wars


From Wikipedia, the free encyclopedia

The linguistics wars were extended disputes among American theoretical linguists that occurred mostly during the 1960s and 1970s, stemming from a disagreement between Noam Chomsky and several of his associates and students. The debates started in 1967 when the linguists Paul Postal, John R. Ross, George Lakoff, and James D. McCawley—self-dubbed the "Four Horsemen of the Apocalypse"—proposed an alternative approach in which the relation between semantics and syntax is viewed differently, treating deep structures as meaning rather than as syntactic objects. While Chomsky and other generative grammarians argued that meaning is driven by an underlying syntax, generative semanticists posited that syntax is shaped by an underlying meaning. This intellectual divergence led to two competing frameworks: generative semantics and interpretive semantics.

Eventually, generative semantics spawned a different linguistic paradigm, known as cognitive linguistics, a linguistic theory that correlates learning of languages to other cognitive abilities such as memorization, perception, and categorization, while descendants of interpretive semantics continue in the guise of formal semantics.

Background

In 1957, Noam Chomsky (b. 1928) published Syntactic Structures, his first influential work. The ideas in Syntactic Structures were a significant departure from the dominant paradigm among linguists at the time, championed by Leonard Bloomfield (1887–1949). The Bloomfieldian approach focused on smaller linguistic units such as morphemes and phones, and had little to say about how these units were organized into larger structures such as phrases and sentences. By contrast, syntax was the central empirical concern of Syntactic Structures, which modeled grammar as a set of rules that procedurally generate all and only the sentences of a given language. This approach is referred to as transformational grammar. Moreover, Chomsky criticized Bloomfieldians as being "[t]axonomic linguists", mere collectors and cataloguers of language. Early work in generative grammar attempted to go beyond mere description of the data and identify the fundamental underlying principles of language. According to Chomsky, semantic components created the underlying structure of a given linguistic sequence, whereas phonological components formed its surface-level structure. This left the problem of 'meaning' in linguistic analysis unanswered.
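To make this rule-driven picture concrete, the following toy sketch (not from the source; the miniature grammar is invented for illustration) shows phrase-structure rules procedurally expanding a start symbol into sentences:

    import random

    # A toy context-free grammar: each symbol maps to its possible expansions.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "boy"], ["the", "ball"]],
        "VP": [["V", "NP"]],
        "V":  [["hit"], ["saw"]],
    }

    def generate(symbol="S"):
        """Recursively expand a symbol into a string of terminal words."""
        if symbol not in GRAMMAR:              # terminal word: return as-is
            return [symbol]
        words = []
        for sym in random.choice(GRAMMAR[symbol]):
            words.extend(generate(sym))
        return words

    print(" ".join(generate()))                # e.g. "the boy hit the ball"

A grammar of this kind generates all and only the strings its rules license, which is the sense in which Chomsky's system was "generative".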

Chomsky's Aspects of the Theory of Syntax (1965) developed his theory further by introducing the concepts of deep structure and surface structure, which were influenced by previous scholarship. First, Chomsky drew from Ferdinand de Saussure (1857–1913), specifically his dichotomy of langue (the native knowledge of a language) versus parole (the actual use of language). Secondly, he drew from Louis Hjelmslev (1899–1965), who later argued that parole is observable and can be defined as the arrangement of speech, whereas langue comprises the systems within actual speech that underpin its lexicon and grammar. Aspects of the Theory of Syntax also addressed the issue of meaning by endorsing the Katz–Postal hypothesis, which holds that transformations do not affect meaning and are therefore "semantically transparent". This attempted to introduce notions of semantics into descriptions of syntax. Chomsky's endorsement resulted in further exploration of the relation between syntax and semantics, creating the environment for the emergence of generative semantics.

Dispute

The divergence in the generative semantics and Aspects' paradigms (Adapted from Harris, 2022)

The point of disagreement between generative semantics, known at the time as Abstract Syntax, and interpretive semantics was the degree of abstractness of deep structure, that is, the distance between deep structures and the surface structure. Generative semantics views deep structure and transformations as necessary for connecting the surface structure with meaning, whereas Chomsky's paradigm considers the deep structure, and the transformations that link it to the surface structure, essential for describing the structural composition of linguistic items—syntactic description—without explicitly addressing meaning. Notably, generative semanticists eventually abandoned deep structures altogether in favour of the semantic representation.

Generative semantics approach

Generative semantics was inspired by notions in Chomsky's Aspects, in which he highlights two claims: deep structures determine the semantic representations, and selectional restrictions—rules that govern what may precede and follow a word in a sentence—are stated at deep structure. For example, the 'semantic' nature of the verb eat necessitates that it be followed by something edible. Generative semanticists initially misinterpreted Chomsky's ideas about the relation between semantic representation and deep structure, and used the selectional-restriction arguments to draw a direct and bilateral relation between meaning and surface structures, in which semantic representations are mapped onto surface structures, thereby conflating the two levels of semantic representation and deep structures.
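A minimal sketch of how such a restriction might be stated (the toy lexicon and feature names are invented for illustration):

    # Toy lexicon: semantic features for nouns, selectional requirements for verbs.
    FEATURES = {"apple": {"edible"}, "rock": set()}
    RESTRICTIONS = {"eat": "edible"}           # 'eat' selects an edible object

    def licensed(verb, obj):
        """Check the verb's selectional restriction against the object's features."""
        return RESTRICTIONS[verb] in FEATURES[obj]

    print(licensed("eat", "apple"))            # True:  "eat an apple"
    print(licensed("eat", "rock"))             # False: "*eat a rock"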

Generative semantics analysis evolved to favor an approach where deep structures reflect meaning directly through semantic features and relations—semantic representations. Thus, the formal characteristics of deep structures are considered insufficient, and meaning drives the surface structures. The formal features of deep structures include context-free phrase-structure grammar and the lexical insertion point—the point where words enter the derivation. The generative semantics view of transformations and deep structures contrasted sharply with Chomsky's. Generative semanticists believed that deep structures are meaning representations and that transformations apply to deep structures to create different surface structures while preserving meaning. Chomsky's model holds that deep structure pertains to the organization of linguistic items, while transformations apply to and manipulate deep structure but sometimes alter the meaning.

Generative semantics' model:

deep structure: [AGENT] boy, [ACTION] hitting, [PATIENT] the ball

Transformation active: The boy hit the ball.

Chomsky's model:

deep structure: S ((NP the boy) (VP [hit]) (NP the ball))

Transformation passive: The ball was hit by the boy.
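The contrast can be sketched in code (an illustrative toy, not either camp's actual formalism): the generative semantics deep structure is a role-labeled meaning, the Chomskyan deep structure is a syntactic tree, and transformations map each to a surface string:

    # Generative semantics: the deep structure IS the meaning (role-labeled).
    gs_deep = {"AGENT": "the boy", "ACTION": "hit", "PATIENT": "the ball"}

    def active(meaning):
        """Active transformation: realize the meaning as an active sentence."""
        return f"{meaning['AGENT']} {meaning['ACTION']} {meaning['PATIENT']}."

    # Chomsky's model: the deep structure is a syntactic tree (nested tuples).
    ch_deep = ("S", ("NP", "the boy"), ("VP", ("V", "hit"), ("NP", "the ball")))

    def passive(tree):
        """Passive transformation: reorder the syntactic constituents."""
        _, (_, subj), (_, (_, verb), (_, obj)) = tree
        return f"{obj} was {verb} by {subj}."

    print(active(gs_deep).capitalize())        # The boy hit the ball.
    print(passive(ch_deep).capitalize())       # The ball was hit by the boy.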

Generative semanticists used arguments such as category-changing transformations, in which simple paraphrases alter the syntactic categories yet the meaning is unchanged, solidifying the Katz–Postal hypothesis, which postulates the meaning-preserving ("transparent") nature of transformations. These category-changing transformations appear in inchoative and causative clauses, which share the same underlying structure as their stative counterpart, as the sentences below show.

Inchoative: The door opened.

Causative: He opened the door.

The underlying structure is similar to the stative clause: The door is open.

Generative semanticists used this argument, first suggested by George Lakoff in his dissertation, to cement the idea that the underlying meaning (The door is OPEN) drives the two different surface structures (inchoative and causative), not the other way around.
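The point can be rendered as a toy derivation (an illustrative sketch, not Lakoff's formalism), in which the single underlying predicate OPEN yields the stative, inchoative, and causative surface forms:

    # One underlying meaning, OPEN(the door), spelled out three ways.
    def stative(pred, arg):
        return f"{arg} is {pred.lower()}."                # OPEN
    def inchoative(pred, arg):
        return f"{arg} {pred.lower()}ed."                 # BECOME(OPEN)
    def causative(pred, arg, agent):
        return f"{agent} {pred.lower()}ed {arg}."         # CAUSE(BECOME(OPEN))

    print(stative("OPEN", "The door"))         # The door is open.
    print(inchoative("OPEN", "The door"))      # The door opened.
    print(causative("OPEN", "the door", "He")) # He opened the door.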

Generative semantics and logical form

The level of semantic representation in generative semantic analyses resembled logical form; the derivation of a sentence is therefore the direct mapping of semantics, meaning, and logic onto the surface structure, so that all aspects of meaning are represented in the phrase marker. Generative semanticists claimed that the semantic deep structure is generated in a universal manner similar to that of predicate logic, thereby reducing the syntactic categories to just three: S (= proposition), NP (= argument), and V (= predicate). In this analysis, adjectives, negatives, and auxiliaries are all reduced to the one category Verb, and the other forms are derived transformationally.
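A sketch of this reduction (illustrative only; the class and example are invented): a proposition (S) is a predicate (V) applied to arguments (NP), exactly as in predicate logic:

    from dataclasses import dataclass

    @dataclass
    class Prop:                  # S  = proposition
        predicate: str           # V  = predicate (verbs, adjectives, negation...)
        arguments: tuple         # NP = arguments

        def __str__(self):
            return f"{self.predicate}({', '.join(map(str, self.arguments))})"

    # "The door is not open": adjective and negation are both treated as V.
    print(Prop("NOT", (Prop("OPEN", ("the door",)),)))    # NOT(OPEN(the door))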

Lexical decomposition

Lexical decomposition was used to draw out the syntactic structure of sentences by relaying the semantic implications inherent to words. For the word kill, the analysis would reveal atomic bits such as CAUSE, BECOME, NOT, and ALIVE, and work out the semantic and syntactic relations between lexical items and their atomic parts. In the generative semantics case for lexical decomposition, words with the same lexical base but different lexical extensions share a 'semantic' deep structure: dead corresponds to the base NOT ALIVE; die derives from the same base via the inchoative transformation, becoming BECOME NOT ALIVE; and kill derives from it via the causative transformation, becoming CAUSE TO BECOME NOT ALIVE. This simplified the projection rules necessary for transformations: rather than entering the word kill directly into the deep structure, thereby creating a new 'syntactic' deep structure, it is treated as sharing the same 'semantic' deep structure as dead, namely NOT ALIVE. Building on this case of lexical decomposition, McCawley proposed a new rule—predicate raising—whereby lexical items can enter at any point of the derivation process rather than only at the deep structure. Predicate raising collects the predicate parts (abstract verbs) into meaning complexes, i.e., words; and because some transformations, such as predicate raising, must apply before lexical items are inserted, this argument undermined deep structures as the lexical insertion point.
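A toy rendering of this analysis (the data structures and lexicon are invented for illustration): predicate raising collapses the nested abstract predicates into a single complex, which lexical insertion can then spell out as a word:

    # Abstract semantic deep structure for "x kills y": nested predicates.
    deep = ("CAUSE", "x", ("BECOME", ("NOT", ("ALIVE", "y"))))

    def raise_predicates(tree):
        """Collect nested abstract predicates into one complex (predicate raising)."""
        if isinstance(tree, str):
            return []                          # arguments contribute no predicates
        head, *rest = tree
        preds = [head]
        for sub in rest:
            preds += raise_predicates(sub)
        return preds

    # Lexical insertion happens AFTER raising, spelling out the complex as a word.
    LEXICON = {
        ("CAUSE", "BECOME", "NOT", "ALIVE"): "kill",
        ("BECOME", "NOT", "ALIVE"): "die",
        ("NOT", "ALIVE"): "dead",
    }
    complex_ = tuple(raise_predicates(deep))
    print(complex_, "->", LEXICON[complex_])   # ('CAUSE', ..., 'ALIVE') -> kill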

These arguments were used to conclude that it made no theoretical sense to have syntactic deep structures as a separate level and that semantic representations, features, and relations should be mapped directly onto the surface structure. Additionally, generative semanticists have proclaimed that any level of structure that comes between the semantic representation and surface structure requires empirical justification.

Interpretivist critique of generative semantics

Chomsky and others advanced a number of arguments designed to demonstrate that generative semantics not only offered nothing new but was misconceived and misguided. In response to these challenges, Chomsky delivered a series of lectures and papers, known later as Remarks, which culminated in what came to be called the "interpretivist program". This program aimed to establish syntax as an independent level of linguistic analysis—autonomous syntax—with independent rules, while the meaning of a syntactic structure follows from 'interpretive' rules applied to the syntactic structures. This approach retains the formal characterization of deep structure as context-free phrase-structure grammar. Chomsky also criticized McCawley's predicate-raising rule as an upside-down interpretive rule.

Lexicalism and deverbal nouns

The generative semanticists' analysis—lexical decomposition—holds that words such as refuse and refusal belong to the same underlying category REFUSE, but in Remarks Chomsky argued for limiting transformations and for separate lexical entries for semantically related words, since some nominalizations have distinct meanings. Chomsky argued that pairs such as marry, marriage and revolve, revolution should not be treated as derived from their verb forms, as revolution (and likewise marriage) has a broader scope of meaning. These nouns—known as deverbal nouns—should exist separately in the lexicon. This approach was later known as lexicalism. It also posited that nominalization happens in the lexicon, not through transformations on deep structure, thereby limiting the power of transformations.

For example:

a. John is eager to please.

b. John's eagerness to please.

c. John is easy to please.

d. *John's easiness to please.

The d. sentence shows distributional differences that are not accounted for if deverbal nouns are derived transformationally. Another point made by Chomsky against generative semantics was the structural similarity deverbal nouns have with noun phrases, which suggests that they have their own independent internal structure; in the examples below, proofs functions like portraits, the head of a regular noun phrase.

a. Several of John's proofs of the theorem.

b. Several of John's portraits of the dean.
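In computational terms, the lexicalist move amounts to listing derived nominals as independent lexicon entries rather than generating them by a nominalization transformation (a toy sketch; the entries and glosses are invented):

    # Lexicalist lexicon: verb and deverbal noun are separate stored entries,
    # each free to develop its own (possibly broader) meaning.
    LEXICON = {
        ("refuse", "V"):      "decline to do something",
        ("refusal", "N"):     "an act of declining",
        ("revolve", "V"):     "move in a circular orbit",
        ("revolution", "N"):  "orbital motion; also: overthrow of a government",
    }

    # No transformation applies: 'refusal' is looked up, not derived from 'refuse'.
    print(LEXICON[("refusal", "N")])
    print(LEXICON[("revolution", "N")])        # broader than 'revolve' alone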

Remarks contributed to what Chomsky termed the Extended Standard Theory, which he thought of as an extension of Aspects. To many linguists, however, the relation between transformations and semantics in generative semantics was the natural progression of Aspects.

Lexical decomposition

The interpretive semanticist Jerry Fodor also criticized the generative semanticists' approach to lexical decomposition, in which the word kill is derived from CAUSE TO BECOME NOT ALIVE, using sentence pairs such as:

a. Putin caused Litvinenko to die on Wednesday by having him poisoned three weeks earlier.

b. *Putin killed Litvinenko on Wednesday by having him poisoned three weeks earlier.

If kill in (b) is derived from caused to die in (a), the two sentences should behave alike; however, (a) is acceptable and causes no discrepancy, while (b), which asserts a direct act of killing on Wednesday, contradicts the temporal qualifier "by having him poisoned three weeks earlier". This suggests that lexical decomposition can fail to account for the causal and temporal intricacies required for accurate semantic interpretation.

Cases for formalism in underlying structures

Coreference

Under the generative semantics analysis, coreference relations in a sentence such as "Harry thinks he should win the prize" are represented in the deep structure as "Harry thinks Harry should win the prize", with transformations then replacing the second Harry with he in the surface structure. This approach was criticized for creating an infinite regress of embedding—with he—in deep structures for sentences such as "The man who shows he deserves it will get the prize he desires." The interpretivists therefore treated he as a base component, with the correct antecedent found through interpretive rules, further solidifying the existence of formal structures, independent of semantics, to which transformations apply.
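The regress can be made vivid with a toy substitution loop (illustrative; the capitalized pronoun tokens and simplified NPs are invented): replacing each pronoun with its full antecedent NP reintroduces pronouns, so a fully spelled-out deep structure never terminates:

    # Each antecedent NP contains a pronoun referring to the other NP.
    NP1 = "the man who shows HE deserves IT"   # HE -> NP1, IT -> NP2
    NP2 = "the prize HE desires"

    def expand(s):
        """One round of replacing pronouns with their antecedent NPs."""
        return s.replace("HE", f"[{NP1}]").replace("IT", f"[{NP2}]")

    s = f"{NP1} will get {NP2}"
    for i in range(3):
        s = expand(s)
        print(f"round {i + 1}: {len(s)} characters")   # grows without bound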

Transformations and meaning

The Katz–Postal hypothesis, which underlies the generative semantics paradigm, does not fully account for all transformations. The interpretivists argued that passive transformations do alter meaning in sentences with quantifiers such as every. Consider the sentences:

Everyone in the room knows two languages.

Two languages are known by everyone in the room.

Chomsky analyzed these two sentences as semantically different despite being a derivational pair; he observed that the first sentence might imply that everyone knows two different languages, while the second implies that everyone in the room knows the same two languages. This argument was used to retain the formal characteristics of deep structures, as transformational movements are accounted for not through semantic relations but through formal ones. The existence of an independent level of syntactic structure to which transformations apply is evidence of formalism.
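The two readings can be checked mechanically. In the sketch below (a toy model; the people and languages are invented), a situation in which everyone is bilingual but in different languages makes the first sentence's salient reading true and the second's false:

    # A toy situation: everyone knows two languages, but different ones.
    people = {"Ann": {"French", "German"}, "Bo": {"Hindi", "Tamil"}}

    # Reading 1 (every > two): each person knows at least two languages.
    every_over_two = all(len(langs) >= 2 for langs in people.values())

    # Reading 2 (two > every): two particular languages are known by everyone.
    shared = set.intersection(*people.values())
    two_over_every = len(shared) >= 2

    print(every_over_two)   # True:  the active sentence holds here
    print(two_over_every)   # False: the passive's salient reading fails here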

Global rules of generative semantics

Generative semanticists accounted for the discrepancy produced by passive transformations by claiming that the two sentences do not share the same underlying structure: the first sentence has an underlying structure in which "Everyone" takes wide scope, while the other has one in which "Two" does, with the quantifier's position determining the scope of the meaning. Additionally, generative semanticists provided a "quantifier lowering" rule, whereby quantifiers are moved down into their surface positions. In the sentence with "Two" taking wide scope, "everyone" is lowered, yielding the reading that the same two languages are known by everyone; in the sentence with "Everyone" taking wide scope, "two" is lowered, maintaining the reading that everyone knows two possibly different languages. Thus the generative semanticist Lakoff held that the two sentences are not semantically equivalent. George Lakoff also proposed a rule he termed the global derivational constraint, under which a sentence such as "Two languages..." could not be derived from an underlying structure in which the quantifier "Everyone" takes scope over "Two".

Challenges in the paradigm

Generative semantics faced challenges in its empirical confirmation. Analyses in interpretive semantics involve phrase-structure rules and transformations that are innately codified according to Aspects, drawing on Chomsky's idea of an innate faculty in the human brain that processes language. By contrast, generative analyses contained hypotheses concerning factors like the intent of speakers and the denotation and entailment of sentences. The lack of explicit rules, formulas, and underlying structures made generative semantics' predictions difficult to compare and evaluate against those of interpretive semantics. Additionally, the generative framework was criticized for introducing irregularities without justification: the attempt to bridge syntax and semantics blurred the lines between these domains, with some arguing that the approach created more problems than it solved. These limitations led to the decline of generative semantics.

Aftermath

After the protracted debates and with the decline of generative semantics, its key figures pursued various paths. George Lakoff moved on to cognitive linguistics, which explores the cognitive domain and the relation between language and mental processes. Meanwhile, in the late 1990s, Chomsky switched his attention to a more universal program of generative grammar, the minimalist program, which does not claim to offer a comprehensive theory of language acquisition and use. Postal rejected the ideas of generative semantics and embraced the study of natural languages, discarding aspects of cognition altogether, emphasizing grammaticality, and adopting a mathematical and logical approach to studying 'natural' languages. John R. Ross ventured into more literary-oriented endeavors such as poetry, though he maintained his transformationalist orientation, and his name appears in many Chomskyan works. As for McCawley, he continued in the tradition of generative semantics until his death in 1999. He was known for his flexible approach to linguistic theory, employing elements of both the Extended Standard Theory and generative semantics.

Books

A first systematic description of the linguistics wars is the chapter with this title in Frederick Newmeyer's book Linguistic Theory in America, which appeared in 1980.

The Linguistics Wars is the title of a 1993 book by Randy A. Harris that closely chronicles the dispute among Chomsky and other significant individuals (George Lakoff and Paul Postal, among others) and also highlights how certain theories evolved and which of their important features have influenced modern-day linguistic theories. A second edition was published in 2022, in which Harris traces several important 21st-century linguistic developments, such as construction grammar, cognitive linguistics and frame semantics, all emerging out of generative semantics. The second edition also argues that Chomsky's minimalist program has significant homologies with early generative semantics.

Ideology and Linguistic Theory, by John A. Goldsmith and Geoffrey J. Huck, also explores that history, with detailed theoretical discussion and observed history of the times, including memoirs/interviews with Ray Jackendoff, Lakoff, Postal, and Ross. The "What happened to Generative Semantics" chapter explores the aftermath of the dispute and the schools of thought or practice that could be seen as the successors to generative semantics.

Astrology and science

From Wikipedia, the free encyclopedia

Astrology consists of a number of belief systems that hold that there is a relationship between astronomical phenomena and events or descriptions of personality in the human world. Astrology has been rejected by the scientific community as having no explanatory power for describing the universe. Scientific testing has found no evidence to support the premises or purported effects outlined in astrological traditions.

Where astrology has made falsifiable predictions, it has been falsified. The most famous test was headed by Shawn Carlson and included a committee of scientists and a committee of astrologers. It led to the conclusion that natal astrology performed no better than chance.

Astrology has not demonstrated its effectiveness in controlled studies and has no scientific validity, and is thus regarded as pseudoscience. There is no proposed mechanism of action by which the positions and motions of stars and planets could affect people and events on Earth in the way astrologers say they do that does not contradict well-understood, basic aspects of biology and physics. Although astrology has no scientific validity, astrological beliefs have impacted human history and astrology has helped to drive the development of astronomy.

Modern scientific inquiry into astrology is primarily focused on drawing a correlation between astrological traditions and the influence of seasonal birth in humans.

Introduction

The majority of professional astrologers rely on performing astrology-based personality tests and making relevant predictions about their paying clients' futures. Those who continue to have faith in astrology have been characterised as doing so "in spite of the fact that there is no verified scientific basis for their beliefs, and indeed that there is strong evidence to the contrary". Astrophysicist Neil deGrasse Tyson commented on astrological belief, saying that "part of knowing how to think is knowing how the laws of nature shape the world around us. Without that knowledge, without that capacity to think, you can easily become a victim of people who seek to take advantage of you".

The continued belief in astrology despite its lack of credibility is seen as a demonstration of low scientific literacy, although some continue to believe in it even though they are scientifically literate.

Historical relationship with astronomy

The foundations of the theoretical structure used in astrology originate with the Babylonians, although widespread usage did not occur until the start of the Hellenistic period after Alexander the Great swept through Greece. It was not known to the Babylonians that the stars making up the constellations are not on a single celestial sphere but are very far apart; their apparent closeness is illusory. The exact demarcation of what a constellation is is cultural and varied between civilisations. Ptolemy's work on astronomy was driven to some extent by the desire, shared by all astrologers of the time, to be able to calculate planetary movements easily. Early Western astrology operated under the Ancient Greek concepts of the macrocosm and microcosm, and thus medical astrology related what happened to the planets and other objects in the sky to medical operations; this provided a further motivator for the study of astronomy. While still defending the practice of astrology, Ptolemy acknowledged that the predictive power of astronomy for the motion of the planets and other celestial bodies ranked above astrological predictions.

During the Islamic Golden Age, astronomy was funded so that the astronomical parameters, such as the eccentricity of the sun's orbit, required for the Ptolemaic model could be calculated to sufficient accuracy and precision. Those in positions of power, like the Fatimid Caliphate vizier in 1120, funded the construction of observatories so that astrological predictions, fuelled by precise planetary information, could be made. Since the observatories were built to help in making astrological predictions, few of these observatories lasted long due to the prohibition against astrology within Islam, and most were torn down during or just after construction.

The clear rejection of astrology in works of astronomy started in 1679, with the yearly publication La Connoissance des temps. Unlike in the West, the rejection of heliocentrism in Iran continued until the start of the 20th century, motivated in part by a fear that it would undermine the widespread belief in astrology and Islamic cosmology there. Falak al-sa'ada by Ictizad al-Saltana, the first work aimed at undermining this belief in astrology and "old astronomy" in Iran, was published in 1861. On astrology, it cited the inability of different astrologers to make the same prediction about what occurs following a conjunction, and described the attributes astrologers gave to the planets as implausible.

Philosophy of science

Philosopher Karl Popper proposed falsifiability as the criterion that distinguishes science from non-science, using astrology as his example of an idea that has not confronted falsification through experiment.

Astrology provides the quintessential example of a pseudoscience since it has been tested repeatedly and failed all the tests.

Falsifiability

Science and non-science are often distinguished by the criterion of falsifiability. The criterion was first proposed by philosopher of science Karl Popper. To Popper, science does not rely on induction; instead, scientific investigations are inherently attempts to falsify existing theories through novel tests. If a single test fails, then the theory is falsified.

Therefore, any test of a scientific theory must prohibit certain results that falsify the theory, and expect other specific results consistent with the theory. Using this criterion of falsifiability, astrology is a pseudoscience.

Astrology was Popper's most frequent example of pseudoscience. Popper regarded astrology as "pseudo-empirical" in that "it appeals to observation and experiment", but "nevertheless does not come up to scientific standards".

In contrast to scientific disciplines, astrology does not respond to falsification through experiment. According to Professor of neurology Terence Hines, this is a hallmark of pseudoscience.

"No puzzles to solve"

In contrast to Popper, the philosopher Thomas Kuhn argued that it was not lack of falsifiability that makes astrology unscientific, but rather that the process and concepts of astrology are non-empirical. To Kuhn, although astrologers had, historically, made predictions that "categorically failed", this in itself does not make it unscientific, nor do the attempts by astrologers to explain away the failure by claiming it was due to the creation of a horoscope being very difficult (through subsuming, after the fact, a more general horoscope that leads to a different prediction).

Rather, in Kuhn's eyes, astrology is not science because it was always more akin to medieval medicine; they followed a sequence of rules and guidelines for a seemingly necessary field with known shortcomings, but they did no research because the fields are not amenable to research, and so, "They had no puzzles to solve and therefore no science to practise."

While an astronomer could correct for failure, an astrologer could not. An astrologer could only explain away failure but could not revise the astrological hypothesis in a meaningful way. As such, to Kuhn, even if the stars could influence the path of humans through life, astrology is not scientific.

Progress, practice and consistency

Philosopher Paul Thagard believed that astrology cannot be regarded as falsified in this sense until it has been replaced with a successor; in the case of predicting behaviour, psychology is the alternative. To Thagard, a further criterion for demarcating science from pseudoscience is that the state of the art must progress and that the community of researchers should be attempting to compare the current theory to alternatives, and not be "selective in considering confirmations and disconfirmations".

Progress is defined here as explaining new phenomena and solving existing problems, yet astrology has failed to progress, having changed little in nearly 2,000 years. To Thagard, astrologers act as though they were engaged in normal science, believing that the foundations of astrology are well established despite the "many unsolved problems" and in the face of better alternative theories (psychology). For these reasons Thagard viewed astrology as pseudoscience.

To Thagard, astrology should not be regarded as pseudoscience merely on the grounds of Gauquelin's failure to find any correlation between the various astrological signs and careers, of twins not showing the expected correlations from having the same signs in twin studies, of the lack of agreement on the significance of the planets discovered since Ptolemy's time, or of large-scale disasters wiping out individuals with vastly different signs at the same time. Rather, his demarcation of science requires three distinct foci: "theory, community [and] historical context".

While verification and falsifiability focus on the theory, and Kuhn's work focuses on the historical context, the astrological community should also be considered, specifically whether or not its members:

  • are focused on comparing their approach to others.
  • have a consistent approach.
  • try to falsify their theory through experiment.

In this approach, true falsification, rather than the mere modification of a theory to avoid falsification, only really occurs when an alternative theory is proposed.

Irrationality

For the philosopher Edward W. James, astrology is irrational not because of the numerous problems with mechanisms and falsification due to experiments, but because an analysis of the astrological literature shows that it is infused with fallacious logic and poor reasoning.

What if throughout astrological writings we meet little appreciation of coherence, blatant insensitivity to evidence, no sense of a hierarchy of reasons, slight command over the contextual force of criteria, stubborn unwillingness to pursue an argument where it leads, stark naivete concerning the efficacy of explanation and so on? In that case, I think, we are perfectly justified in rejecting astrology as irrational. ... Astrology simply fails to meet the multifarious demands of legitimate reasoning.

— Edward W. James

This poor reasoning includes appeals to past astrologers such as Kepler regardless of the relevance of the topic or of their specific reasoning, as well as vague claims. The claim that evidence for astrology is that people born at roughly "the same place have a life pattern that is very similar" is vague, but it also ignores that time is reference-frame dependent and gives no definition of "same place" despite the planet's motion within the reference frame of the Solar System. Other comments by astrologers are based on severely erroneous interpretations of basic physics, such as the general belief by medieval astrologers that the geocentric Solar System corresponded to an atom. Further, James noted that responses to criticism also rely on faulty logic: one example was a response to twin studies claiming that coincidences between twins are due to astrology while any differences are due to "heredity and environment"; for other astrologers, the issues are too difficult and they just want to get back to their astrology. Further, when something appears in astrology's favour, astrologers may latch upon it as proof while making no attempt to explore its implications, preferring to treat the favourable item as definitive; possibilities that do not make astrology look favourable are ignored.

Quinean dichotomy

From the Quinean web of knowledge, there is a dichotomy: one must either reject astrology, or accept astrology while rejecting all established scientific disciplines that are incompatible with it.

Tests of astrology

Astrologers often do not make verifiable predictions, but instead make vague statements that are not falsifiable. Across several centuries of testing, the predictions of astrology have never been more accurate than that expected by chance alone. One approach used in testing astrology quantitatively is through blind experiment. When specific predictions from astrologers were tested in rigorous experimental procedures in the Carlson test, the predictions were falsified. All controlled experiments have failed to show any effect.

Mars effect

The initial Mars effect finding, showing the relative frequency of the diurnal position of Mars in the birth charts (N = 570) of "eminent athletes" (red solid line) compared to the expected results [after Michel Gauquelin 1955]

In 1955, astrologer and psychologist Michel Gauquelin stated that although he had failed to find evidence to support such indicators as the zodiacal signs and planetary aspects in astrology, he had found positive correlations between the diurnal positions of some of the planets and success in professions (such as doctors, scientists, athletes, actors, writers, painters, etc.), which astrology traditionally associates with those planets. The best-known of Gauquelin's findings is based on the positions of Mars in the natal charts of successful athletes and became known as the "Mars effect". A study conducted by seven French scientists attempted to replicate the claim, but found no statistical evidence. They attributed the effect to selective bias on Gauquelin's part, accusing him of attempting to persuade them to add or delete names from their study.

Geoffrey Dean has suggested that the effect may be caused by the self-reporting of birth dates by parents rather than by any issue with Gauquelin's study. The suggestion is that a small subset of the parents may have changed birth times to be consistent with better astrological charts for a related profession. The sample group was taken from a time when belief in astrology was more common. Gauquelin had failed to find the Mars effect in more recent populations, where a nurse or doctor recorded the birth information. The number of births under astrologically undesirable conditions was also lower, further indicating that parents chose dates and times to suit their beliefs.

Carlson's experiment

Shawn Carlson's now-renowned experiment involved 28 astrologers matching over 100 natal charts to psychological profiles generated by the California Psychological Inventory (CPI) test, using double-blind methods.

The experimental protocol used in Carlson's study was agreed to by a group of physicists and astrologers prior to the experiment. Astrologers, nominated by the National Council for Geocosmic Research, acted as the astrological advisors, and helped to ensure, and agreed, that the test was fair. They also chose 26 of the 28 astrologers for the tests, the other two being interested astrologers who volunteered afterwards. The astrologers came from Europe and the United States. The astrologers helped to draw up the central proposition of natal astrology to be tested. Published in Nature in 1985, the study found that predictions based on natal astrology were no better than chance, and that the testing "clearly refutes the astrological hypothesis".

Dean and Kelly

Scientist and former astrologer Geoffrey Dean and psychologist Ivan Kelly conducted a large-scale scientific test involving more than one hundred cognitive, behavioural, physical and other variables, but found no support for astrology. A further test involved 45 confident astrologers, with an average of 10 years' experience, and 160 test subjects (out of an original sample size of 1,198) whose scores on the Eysenck Personality Questionnaire strongly favoured certain characteristics to extremes. The astrologers performed much worse than merely basing decisions on the individuals' ages, and much worse than 45 control subjects who did not use birth charts at all.

Other tests

A meta-analysis pooled 40 studies consisting of 700 astrologers and over 1,000 birth charts. Ten of the tests, which had a total of 300 participants, involved the astrologers picking the correct chart interpretation out of a number of others that were not the astrologically correct chart interpretation (usually three to five others). When the date and other obvious clues were removed, no significant results suggested there was any preferred chart.

In 10 studies, participants picked horoscopes that they felt were accurate descriptions, with one being the "correct" answer. Again the results were no better than chance.

A study of 2,011 sets of people born within 5 minutes of each other ("time twins") looked for any discernible effect; none was seen.

Quantitative sociologist David Voas examined the census data for more than 20 million individuals in England and Wales to see if star signs corresponded to marriage arrangements. No effect was seen.

Theoretic obstacles

Beyond the scientific tests astrology has failed, proposals for astrology face a number of other obstacles owing to its many theoretical flaws, including its lack of consistency, its inability to predict missing planets, the lack of any connection of the zodiac to the constellations in Western astrology, and the lack of any plausible mechanism. The underpinnings of astrology tend to disagree with numerous basic facts from scientific disciplines.

Lack of consistency

Testing the validity of astrology can be difficult because there is no consensus amongst astrologers as to what astrology is or what it can predict. Dean and Kelly documented 25 studies, which had found that the degree of agreement amongst astrologers' predictions was measured as a low 0.1. Most professional astrologers are paid to predict the future or describe a person's personality and life, but most horoscopes only make vague untestable statements that can apply to almost anyone.

Georges Charpak and Henri Broch dealt with claims from Western astrology in the book Debunked! ESP, Telekinesis, and Other Pseudoscience. They pointed out that astrologers have only a small knowledge of astronomy and often do not take into account basic features such as the precession of the equinoxes. They commented on the example of Elizabeth Teissier, who claimed that "the sun ends up in the same place in the sky on the same date each year" as the basis for claims that two people with the same birthday, but born a number of years apart, should be under the same planetary influence. Charpak and Broch noted that "there is a difference of about twenty-two thousand miles between Earth's location on any specific date in two successive years", and that thus the two people should not be under the same influence according to astrology. Over a 40-year period the difference would be greater than 780,000 miles.

Lack of physical basis

Edward W. James commented that attaching significance to the constellation on the celestial sphere the sun is in at sunset was done on the basis of human factors—namely, that astrologers did not want to wake up early, and that the exact time of noon was hard to know. Further, the creation of the zodiac and its disconnect from the constellations arose because the sun is not in each constellation for the same amount of time. This disconnection from the constellations led to the problem of precession separating the zodiac symbols from the constellations they were once related to. The philosopher of science Massimo Pigliucci, commenting on this movement, opined: "Well then, which sign should I look up when I open my Sunday paper, I wonder?"

The tropical zodiac has no connection to the stars, and as long as no claims are made that the constellations themselves are in the associated sign, astrologers avoid the concept that precession seemingly moves the constellations because they do not reference them. Charpak and Broch, noting this, referred to astrology based on the tropical zodiac as being "...empty boxes that have nothing to do with anything and are devoid of any consistency or correspondence with the stars." Sole use of the tropical zodiac is inconsistent with references made, by the same astrologers, to the Age of Aquarius, which depends on when the vernal point enters the constellation of Aquarius.
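As a rough illustration of the scale of this drift (a back-of-the-envelope sketch using the standard precession rate of roughly 50.3 arcseconds per year, a figure not given in the text above):

    # Precession of the equinoxes: the vernal point drifts ~50.3 arcsec/year.
    PRECESSION_ARCSEC_PER_YEAR = 50.3
    SIGN_WIDTH_DEG = 30.0                      # each zodiac sign spans 30 degrees

    years_per_sign = SIGN_WIDTH_DEG * 3600 / PRECESSION_ARCSEC_PER_YEAR
    print(f"~{years_per_sign:.0f} years per sign")
    # ~2147 years: since antiquity the tropical signs have drifted roughly
    # a full sign away from the constellations they were named after.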

Lack of predictive power

[Image: Pluto and its satellites. Astrology was claimed to work before the discovery of Uranus, Neptune and Pluto, which have since been included in astrological discourse on an ad hoc basis.]

Some astrologers claim that the position of all the planets must be taken into account, yet astrologers were unable to predict the existence of Neptune from errors in horoscopes. Neptune was instead predicted using Newton's law of universal gravitation, applied to irregularities in the orbit of Uranus. The grafting of Uranus, Neptune and Pluto onto astrological discourse was done on an ad hoc basis.
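
A back-of-the-envelope sketch of the physics involved (standard masses and distances; the conjunction geometry is a simplification) shows that Neptune's pull on Uranus, though thousands of times weaker than the Sun's, is well defined and was large enough to leave measurable residuals in Uranus's orbit:

    # Order-of-magnitude estimate of Neptune's gravitational perturbation
    # of Uranus, the discrepancy from which Neptune's position was predicted.
    G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
    AU = 1.496e11         # astronomical unit, m
    M_SUN = 1.989e30      # kg
    M_NEPTUNE = 1.024e26  # kg

    r_uranus_sun = 19.2 * AU               # Uranus's mean orbital radius
    r_uranus_neptune = (30.1 - 19.2) * AU  # separation at conjunction (simplified)

    a_sun = G * M_SUN / r_uranus_sun**2              # Sun's pull on Uranus
    a_neptune = G * M_NEPTUNE / r_uranus_neptune**2  # Neptune's pull on Uranus

    print(f"Sun's pull on Uranus:     {a_sun:.2e} m/s^2")      # ~1.6e-05
    print(f"Neptune's pull on Uranus: {a_neptune:.2e} m/s^2")  # ~2.6e-09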

On the demotion of Pluto to the status of dwarf planet, Philip Zarka of the Paris Observatory in Meudon, France wondered how astrologers should respond:

Should astrologers remove it from the list of luminars [Sun, Moon and the 8 planets other than earth] and confess that it did not actually bring any improvement? If they decide to keep it, what about the growing list of other recently discovered similar bodies (Sedna, Quaoar, etc.), some of which even have satellites (Xena, 2003EL61)?

Lack of mechanism

Astrology has been criticised for failing to provide a physical mechanism that links the movements of celestial bodies to their purported effects on human behaviour. In a lecture in 2001, Stephen Hawking stated "The reason most scientists don't believe in astrology is because it is not consistent with our theories that have been tested by experiment." In 1975, amid increasing popular interest in astrology, The Humanist magazine presented a rebuttal of astrology in a statement put together by Bart J. Bok, Lawrence E. Jerome, and Paul Kurtz. The statement, entitled "Objections to Astrology", was signed by 186 astronomers, physicists and leading scientists of the day. They said that there is no scientific foundation for the tenets of astrology and warned the public against accepting astrological advice without question. Their criticism focused on the fact that there was no mechanism whereby astrological effects might occur:

We can see how infinitesimally small are the gravitational and other effects produced by the distant planets and the far more distant stars. It is simply a mistake to imagine that the forces exerted by stars and planets at the moment of birth can in any way shape our futures.

Astronomer Carl Sagan declined to sign the statement. Sagan said he took this stance not because he thought astrology had any validity, but because he thought that the tone of the statement was authoritarian, and that dismissing astrology because there was no mechanism (while "certainly a relevant point") was not in itself convincing. In a letter published in a follow-up edition of The Humanist, Sagan confirmed that he would have been willing to sign such a statement had it described and refuted the principal tenets of astrological belief. This, he argued, would have been more persuasive and would have produced less controversy.

The use of poetic imagery based on the concepts of the macrocosm and microcosm, "as above, so below", to assign meaning, as in Edward W. James's example "Mars above is red, so Mars below means blood and war", is a false cause fallacy.

Many astrologers claim that astrology is scientific. If one attempts to explain it scientifically, there are (conventionally) only four fundamental forces, limiting the choice of possible natural mechanisms. Some astrologers have proposed conventional causal agents such as electromagnetism and gravity, but the strength of these forces drops off with distance. Scientists reject these proposed mechanisms as implausible: for example, the magnetic field of a large but distant planet such as Jupiter, as measured from Earth, is far smaller than that produced by ordinary household appliances. Astronomer Phil Plait noted that, in terms of magnitude, the Sun is the only object with an electromagnetic field of note, but astrology is not based on the Sun alone. While astrologers could posit a fifth force, this would be inconsistent with the trends in physics, such as the unification of electromagnetism and the weak force into the electroweak force; insisting on a mechanism at odds with the current understanding and evidential basis of physics would be an extraordinary claim. It would also be inconsistent with the other known forces, which drop off with distance; and if distance is irrelevant, then, logically, all objects in space should be taken into account.
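
A minimal sketch of that Jupiter comparison (the dipole falloff law is standard physics; Jupiter's magnetic moment, the closest-approach distance, and the roughly one-microtesla appliance field are approximate assumed values):

    # Estimate Jupiter's magnetic field at Earth using the dipole falloff
    # B = (mu0 / 4*pi) * m / r^3, and compare with a household appliance.
    MU0_OVER_4PI = 1e-7   # T*m/A
    M_JUPITER = 1.6e27    # Jupiter's magnetic dipole moment, A*m^2 (approximate)
    AU = 1.496e11         # astronomical unit, m

    r = 4.2 * AU          # Earth-Jupiter distance at closest approach
    b_jupiter_at_earth = MU0_OVER_4PI * M_JUPITER / r**3
    b_appliance = 1e-6    # ~1 microtesla near an ordinary appliance (rough)

    print(f"Jupiter's field at Earth: {b_jupiter_at_earth:.1e} T")  # ~6e-16 T
    ratio = b_appliance / b_jupiter_at_earth
    print(f"The appliance's field is ~{ratio:.0e} times stronger")  # ~a billion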

Carl Jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology from a single study he conducted. However, synchronicity itself is considered neither testable nor falsifiable. The study was subsequently heavily criticised for its non-random sample, its use of statistics, and its lack of consistency with astrology.

Psychology

Psychological studies have not found any robust relationship between astrological signs and life outcomes. For example, a study showed that zodiac signs are no more effective than random numbers in predicting subjective well-being and quality of life.

It has also been shown that confirmation bias, a form of cognitive bias, contributes to belief in astrology. The literature indicates that believers tend to selectively remember the predictions that turned out to be true and to forget those that turned out to be false. A second, separate form of confirmation bias also plays a role: believers often fail to distinguish between messages that demonstrate special ability and those that do not. Two distinct forms of confirmation bias are thus under study with respect to astrological belief.

The Barnum effect is the tendency for an individual to give a high accuracy rating to a description of their personality that is supposedly tailored specifically for them but is, in fact, vague and general enough to apply to a wide range of people. The more information that is requested in order to produce a prediction, the more accepting people are of the results.

In 1949 Bertram Forer conducted a personality test on students in his classroom. Each student was given what was supposedly an individual assessment, but in fact all students received the same one; the personality descriptions were taken from a book on astrology. When the students were asked to rate the accuracy of the assessment, more than 40% gave it the top mark of 5 out of 5, and the average rating was 4.2. The results of this study have been replicated in numerous other studies.

The study of the Barnum/Forer effect has focused mostly on the level of acceptance of fake horoscopes and fake astrological personality profiles. Recipients of these personality assessments consistently fail to distinguish between common and uncommon personality descriptors. A study by Paul Rogers and Janice Soule (2009), consistent with previous research on the issue, found that believers in astrology generally give more credence to a Barnum profile than sceptics do.

Through a process known as self-attribution, numerous studies have shown that individuals with knowledge of astrology tend to describe their personalities in terms of traits compatible with their sun signs. The effect was heightened when the individuals were aware that the personality description was being used to discuss astrology; individuals who were not familiar with astrology showed no such tendency.

Sociology

In 1953, sociologist Theodor W. Adorno conducted a study of the astrology column of a Los Angeles newspaper as part of a project examining mass culture in capitalist society. Adorno believed that popular astrology, as a device, invariably led to statements that encouraged conformity, and that astrologers who went against conformity, with statements discouraging performance at work and the like, risked losing their jobs. Adorno concluded that astrology was a large-scale manifestation of systematic irrationalism, in which flattery and vague generalisations subtly led individuals to believe that the author of the column was addressing them directly. Drawing a parallel with Karl Marx's phrase "opium of the people", Adorno commented, "Occultism is the metaphysic of the dopes."

False balance occurs when a false, unaccepted or spurious viewpoint is presented alongside a well-reasoned one in media reports and TV appearances, implying that "there were two equal sides to a story when clearly there were not". During Wonders of the Solar System, a BBC television programme, the physicist Brian Cox said: "Despite the fact that astrology is a load of rubbish, Jupiter can in fact have a profound influence on our planet. And it's through a force... gravity." This upset believers in astrology, who complained that no astrologer had been included to provide an alternative viewpoint. Following the complaints, Cox gave the following statement to the BBC: "I apologise to the astrology community for not making myself clear. I should have said that this new age drivel is undermining the very fabric of our civilisation." On the programme Stargazing Live, Cox added: "in the interests of balance on the BBC, yes astrology is nonsense." In an editorial in the medical journal BMJ, editor Trevor Jackson cited this incident as an example of where false balance could occur.

Studies and polling have shown that belief in astrology is higher in Western countries than might otherwise be expected. In 2012 polls, 42% of Americans said they thought astrology was at least partially scientific. This belief decreased with education, and education is highly correlated with levels of scientific knowledge.

Some of the reported belief levels are due to confusion of astrology with astronomy (the scientific study of celestial objects); how easily the two words are confused varies with the language. A plain description of astrology as an "occult influence of stars, planets etc. on human affairs" had no impact on the general public's assessment of whether astrology is scientific in a 1992 Eurobarometer poll. This may partly be due to an implicit association, amongst the general public, of any word ending in "-ology" with a legitimate field of knowledge. In Eurobarometers 224 and 225, performed in 2004, a split poll was used to isolate the effect of wording: in half of the polls the word "astrology" was used, while in the other half the word "horoscope" was used. Belief that astrology was at least partially scientific stood at 76%, but belief that horoscopes were at least partially scientific stood at 43%; in particular, 26% considered astrology very scientific against 7% for horoscopes. This indicated that the high level of apparent polling support for astrology in the EU was indeed largely due to confusion over terminology.
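
The size of that wording gap can be sanity-checked with a standard two-proportion z-test (a sketch only: the 76% and 43% figures are from the poll, but the per-half sample size of 500 is a hypothetical stand-in, as the actual Eurobarometer samples were far larger):

    import math

    # Two-proportion z-test on the split-ballot wording experiment.
    p1, n1 = 0.76, 500  # "astrology" wording (hypothetical sample size)
    p2, n2 = 0.43, 500  # "horoscope" wording (hypothetical sample size)

    p_pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(z / math.sqrt(2))  # two-sided p-value

    print(f"z = {z:.1f}, p = {p_value:.1e}")
    # Even with these modest assumed samples the gap is overwhelmingly
    # significant, supporting the terminology-confusion reading.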

Copenhagen interpretation

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Copenhagen_interpretation   ...