
Saturday, July 9, 2022

Distributive justice

From Wikipedia, the free encyclopedia
 

Distributive justice concerns the socially just allocation of resources. Often contrasted with just process, which is concerned with the administration of law, distributive justice concentrates on outcomes. This subject has been given considerable attention in philosophy and the social sciences.

In social psychology, distributive justice is defined as perceived fairness of how rewards and costs are shared by (distributed across) group members. For example, when some workers work more hours but receive the same pay, group members may feel that distributive justice has not occurred. To determine whether distributive justice has taken place, individuals often turn to the behavioral expectations of their group. If rewards and costs are allocated according to the designated distributive norms of the group, distributive justice has occurred.

Types of distributive norms

Five types of distributive norm are defined by Donelson R. Forsyth:

  1. Equality: Regardless of their inputs, all group members should be given an equal share of the rewards/costs. Under this norm, someone who contributes 20% of the group's resources should receive as much as someone who contributes 60%.
  2. Equity: Members' outcomes should be based upon their inputs. Therefore, an individual who has invested a large amount of input (e.g. time, money, energy) should receive more from the group than someone who has contributed very little. Members of large groups prefer to base allocations of rewards and costs on equity.
  3. Power: Those with more authority, status, or control over the group should receive more than those in lower level positions.
  4. Need: Those in greatest need should be provided with the resources needed to meet those needs. These individuals should be given more resources than those who already possess them, regardless of their input.
  5. Responsibility: Group members who have the most should share their resources with those who have less.
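As a rough illustration, the equality and equity norms above can be written as simple allocation rules. The function names and the 100-unit reward pool are invented for this sketch:

```python
def allocate_equality(pool, contributions):
    """Equality norm: every member gets the same share, regardless of input."""
    n = len(contributions)
    return [pool / n for _ in contributions]

def allocate_equity(pool, contributions):
    """Equity norm: each member's share is proportional to their input."""
    total = sum(contributions)
    return [pool * c / total for c in contributions]

# One member contributes 20% of the group's resources, another 60%:
contributions = [20, 60, 20]
print(allocate_equality(100, contributions))  # equal thirds of the pool
print(allocate_equity(100, contributions))    # [20.0, 60.0, 20.0]
```

Under equality the 20% contributor and the 60% contributor receive identical shares; under equity the shares track the contributions.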

Theories of distributive justice

Any list of theories of distributive justice is inevitably selective. It is important to keep in mind the nuances within each theory, as well as the developments and variations in interpretation that exist for the theories presented in this article. The three theories listed below are among the most prominent Anglo-American theories in the field; with this in mind, the list should in no way be considered exhaustive for distributive justice theory.

Justice as fairness

In his book A Theory of Justice, John Rawls outlines his famous theory about justice as fairness. The theory consists of three core components:

  1. the equality of people in rights and liberties;
  2. the equality of opportunities for all; and
  3. an arrangement of economic inequalities focused on benefit maximisation for those who are least advantaged.

The just 'basic structure'

Building a modern interpretation of social contract theory, Rawls roots justice in the basic structure: the fundamental rules of society, which shape its social and economic institutions as well as its governance. This basic structure is what shapes citizens' life opportunities. According to Rawls, the structure rests on principles of basic rights and duties that any self-interested, rational individual would accept in order to further his or her own interests in a context of social cooperation.

The original position

Rawls presents the concept of the original position as a hypothetical device for establishing "a fair procedure so that any principles agreed on will be just." In the original position, principles emerge from negotiations among a group of people deciding what a just distribution of primary goods is (for Rawls, primary goods include freedoms, opportunities, and control over resources). These people are assumed to be guided by self-interest, while also having a basic sense of morality and justice, and thus to be capable of understanding and evaluating a moral argument. Rawls then argues that procedural justice in the negotiation becomes possible by nullifying the temptation for these people to exploit their circumstances so as to favor their own position in society.

Veil of ignorance

This nullification of temptations is realised through a veil of ignorance, behind which these people stand. The veil prevents them from knowing what particular preferences they will have by concealing their talents, their objectives, and, most importantly, where in society they themselves will end up. It does not, however, conceal general information about the society, and the people are assumed to possess societal and economic knowledge beyond the personal level. The veil thereby creates an environment for negotiation in which the distribution of goods is evaluated on general considerations, independent of place in society, rather than on biased considerations of personal gain for specific positions. By this logic, the negotiations will be sensitive to the worst off, since the risk of ending up in that category oneself incentivizes their protection, but also to the rest of society, since one would not wish to hinder the prospects of the better off in case one ended up among them.

Basic principles of a just distribution

In this original position, the main concern will be to secure the goods that are most essential for pursuing the goals of each individual, regardless of what this specific goal might be. With this in mind, Rawls theorizes two basic principles of just distribution.

The first principle, the liberty principle, is the equal access to basic rights and liberties for all. With this, each person should be able to access the most extensive set of liberties that is compatible with similar schemes of access by other citizens. Thereby, it is not only a question of positive individual access but also of negative restrictions so as to respect others’ basic rights and liberties.

The second principle, the difference principle, addresses how social and economic inequalities, and thus a just distribution, should be arranged. Firstly, Rawls argues that such a distribution should be based on a reasonable expectation of advantage for all, but also work to the greatest benefit of the least advantaged in society. Secondly, the offices and positions attached to this arrangement should be open to all.
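The first part of the difference principle is often read as a maximin rule: among feasible arrangements, prefer the one in which the worst-off position fares best. A minimal sketch, with candidate distributions invented for illustration:

```python
def best_by_difference_principle(distributions):
    """Maximin reading of the difference principle: choose the
    distribution whose least-advantaged member fares best."""
    return max(distributions, key=min)

# Three hypothetical income distributions; their worst-off members
# receive 3, 5, and 4 units respectively:
candidates = [[3, 10, 20], [5, 6, 7], [4, 4, 30]]
print(best_by_difference_principle(candidates))  # [5, 6, 7]
```

Note that the chosen distribution need not have the highest total: [4, 4, 30] sums to more, but its worst-off member is poorer.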

These principles of justice are then prioritised according to two additional principles:

  1. the principle of the priority of liberty, whereby basic liberties can be restricted only for the sake of protecting liberty, either:
    1. by strengthening “the total system of liberties shared by all;” or
    2. if a less than equal liberty is acceptable to those who are subject to this same lesser liberty.
  2. the principle that inequality of opportunity, and the priority of efficiency and welfare, can be acceptable only if:
    1. it enhances “the opportunities of those with lesser opportunities” in society; and/or
    2. excessive saving either balances out or lessens the gravity of hardship for those who do not traditionally benefit.

Utilitarianism

In 1789, Jeremy Bentham published his book An Introduction to the Principles of Morals and Legislation. Centred on individual utility and welfare, utilitarianism builds on the notion that any action which increases the overall welfare in society is good, and any action that decreases welfare is bad. On this view, utilitarianism's focus lies with outcomes, and it pays little attention to how those outcomes are shaped. This idea of utility maximisation, while a much broader philosophical position, also translates into a theory of justice.
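Because only outcomes count, the utilitarian criterion reduces comparisons between allocations to comparisons of total welfare, however welfare is measured. A minimal sketch with invented utility numbers:

```python
def total_welfare(utilities):
    """Classical utilitarian aggregate: the sum of individual utilities."""
    return sum(utilities)

# The utilitarian ranking looks only at the sum, ignoring how
# utility is distributed across individuals:
equal  = [5, 5, 5]   # total 15
skewed = [1, 2, 13]  # total 16
print(total_welfare(skewed) > total_welfare(equal))  # True
```

The highly unequal allocation is preferred here simply because its total is larger, which is exactly the outcome-focus the text describes.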

Conceptualising welfare

While the basic notion that utilitarianism builds on seems simple, one major dispute within the school revolves around the conceptualisation and measurement of welfare. Given disputes over this fundamental aspect, utilitarianism is evidently a broad term embracing many different sub-theories under its umbrella, and while much of the theoretical framework cuts across these conceptualisations, the choice of conceptualisation has clear implications for how we understand the more practical side of utilitarianism in distributive justice.

Bentham originally conceptualised welfare according to the hedonistic calculus, which also became the foundation for John Stuart Mill's focus on intellectual pleasures as the most beneficial contribution to societal welfare. Another path, drawing on Aristotle, attempts to establish a more universal list of conditions required for human prosperity. Opposite this, a third path focuses on subjective evaluations of happiness and satisfaction in human lives.

Egalitarianism

Based on a fundamental notion of the equal worth and moral status of human beings, egalitarianism is concerned with the equal treatment of all citizens in both respect and concern, in relation to the state as well as to one another. Egalitarianism focuses more on the process through which distribution takes place: it evaluates the justification for a given distribution based on how the society and its institutions have been shaped, rather than on what the outcome is. Attention is mainly given to the ways in which unchosen circumstances affect and hinder individuals and their life opportunities. As Elizabeth Anderson defines it, "the positive aim of egalitarian justice is...to create a community in which people stand in relation of equality to others."

While much academic work distinguishes between luck egalitarianism and social egalitarianism, Roland Pierik presents a synthesis combining the two branches. In it, he argues that instead of focusing on compensation for unjust inequalities through the redistribution of primary goods, egalitarian scholars should, given the fundamental notion on which the theory is built, strive to create institutions that create and promote meaningful equality of opportunity from the outset. Pierik thus moves beyond egalitarianism's otherwise reactive nature by emphasising the development of fundamentally different institutions that would eliminate the need for redistribution, focusing instead on an initial equal distribution of opportunities from which people can then shape their own lives.

Application and outcomes

Outcomes

Distributive justice affects performance when efficiency and productivity are involved. Improving perceptions of justice increases performance. Organizational citizenship behaviors (OCBs) are employee actions in support of the organization that are outside the scope of their job description. Such behaviors depend on the degree to which an organization is perceived to be distributively just. As organizational actions and decisions are perceived as more just, employees are more likely to engage in OCBs. Perceptions of distributive justice are also strongly related to the withdrawal of employees from the organization.

Wealth

Distributive justice considers whether the distribution of goods among the members of society at a given time is subjectively acceptable.

Not all advocates of consequentialist theories are concerned with an equitable society. What unites them is a mutual interest in achieving the best possible results or, in distributive terms, the best possible distribution of wealth.

Environmental justice

Distributive justice in an environmental context is the equitable distribution of a society's technological and environmental risks, impacts, and benefits. These burdens include exposure to hazardous waste, land appropriation, armed violence, and murder. Distributive justice is an essential principle of environmental justice because there is evidence that shows that these burdens cause health problems, negatively affect quality of life, and drive down property value.

The potential negative social impacts of environmental degradation and regulatory policies have been at the center of environmental discussions since the rise of environmental justice. Environmental burdens fall disproportionately upon the Global South, while benefits accrue primarily to the Global North.

In politics

Distributive justice theory argues that societies have a duty to individuals in need and that all individuals have a duty to help others in need. Proponents of distributive justice link it to human rights. Many governments are known for dealing with issues of distributive justice, especially in countries with ethnic tensions and geographically distinctive minorities. Post-apartheid South Africa is an example of a country that deals with issues of re-allocating resources with respect to the distributive justice framework.

Influence

Distributive justice is also fundamental to the Catholic Church's social teaching, inspiring such figures as Dorothy Day and Pope John Paul II.

Hayek's criticism

Within the context of Western liberal democracies in the post-WWII decades, Friedrich von Hayek was one of the most famous opponents of the idea of distributive justice. For him, social and distributive justice were meaningless and impossible to attain within a system whose outcomes are not deliberately determined by people but arise spontaneously. Distributive justice, redistribution of wealth, and demands for social justice are, in this sense, incompatible with a society ruled by an impersonal process such as the market.

His famous book The Road to Serfdom contains considerations about social assistance from the state. There, distinguishing a restrictive kind of security (security against physical privation) from one that necessarily requires controlling or abolishing the market, Hayek allows that "there can be no doubt that some minimum of food, shelter, and clothing, sufficient to preserve health and the capacity to work, can be assured to everybody". Providing this type of security is for Hayek compatible with individual freedom, as it does not involve planning. But already in this early work he acknowledges that such provision must preserve incentives and external pressure, and must not select which groups enjoy security and which do not, for under those conditions "the striving for security tends to become higher than the love of freedom". Fostering a certain kind of security (the kind that, for him, socialist economic policies pursue) can therefore entail growing insecurity, as the privilege increases social differences. Nonetheless, he concludes that "adequate security against severe privation, and the reduction of the avoidable causes of misdirected effort and consequent disappointment, will have to be one of the main goals of policy".

Despite that vague social awareness (also a sign of the times, as WWII's devastating consequences were obvious), Hayek dismisses any organizational view that ascribes particular outcomes to intentional design, which would be contrary to his cherished spontaneous order. First, Hayek famously regards the term social (or distributive) justice as meaningless when applied to the results of a liberal market system, which yields spontaneous outcomes. Justice, for Hayek, has an individual component: it is understood only in the aggregation of individual actions that follow common rules, whereas social and distributive justice are its negative opposite, since they require a command economy. Secondly, following Tebble's (2009) reading, the concept of social justice is for Hayek a remnant of an atavistic view of society, one that has been superseded by the survival capacity of the catallactic order and its values.

The third Hayekian critique concerns the unfeasibility of attaining distributive justice in a free market order, defended on the basis of the determinate goal at which all distributive justice aims. In a catallactic order, individual morality should freely determine what distributive fairness is and which values govern economic activity; since it is impossible to gather all the individual information into a single pursuit of social and distributive justice, that pursuit cannot succeed. Lastly, Hayek claims that the free market and social justice are incompatible because, in essence, they produce different kinds of inequality: the former is determined by the interaction of free individuals, the latter by the decision of an authority. Hayek, on ethical grounds, chooses the former.

Libertarian perspective

One of the major exponents of the libertarian outlook on distributive justice is Robert Nozick. In his book Anarchy, State, and Utopia he stresses that the term distributive justice is not a neutral one: in fact, there is no central distributor that could be regarded as such. What each person gets, he or she gets from the outcomes of Lockean self-ownership (a condition implying one's labor mixed with the world), from others who give in exchange for something, or as a gift. For him, "there is no more a distributing or distribution of shares than there is a distribution of mates in a society in which persons choose whom they shall marry". This means that there can be no pattern to conform to or aim at. Provided the libertarian principles of just acquisition and exchange (contained in his entitlement theory) are satisfied, the market and the results of individual actions will yield a distribution that is just, without any need to consider a specific model or standard it should follow.

Calibration

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Calibration

In measurement technology and metrology, calibration is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy, a device generating the quantity to be measured such as a voltage, a sound tone, or a physical artifact, such as a meter ruler.

The outcome of the comparison can result in one of the following:

  • no significant error being noted on the device under test
  • a significant error being noted but no adjustment made
  • an adjustment made to correct the error to an acceptable level

Strictly speaking, the term "calibration" means just the act of comparison and does not include any subsequent adjustment.

The calibration standard is normally traceable to a national or international standard held by a metrology body.

BIPM Definition

The formal definition of calibration by the International Bureau of Weights and Measures (BIPM) is the following: "Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication."
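The two steps of the BIPM definition can be sketched in code: first establish a relation between the standard's values and the instrument's indications, then use that relation to turn a new indication into a measurement result. The linear model and all the numbers below are assumptions for illustration; a real calibration would also carry the measurement uncertainties through both steps.

```python
def fit_calibration(indications, standard_values):
    """Step 1: establish a linear relation standard = gain*indication + offset
    by ordinary least squares over the calibration points."""
    n = len(indications)
    mx = sum(indications) / n
    my = sum(standard_values) / n
    sxx = sum((x - mx) ** 2 for x in indications)
    sxy = sum((x - mx) * (y - my) for x, y in zip(indications, standard_values))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

def measurement_result(indication, gain, offset):
    """Step 2: obtain a measurement result from an indication."""
    return gain * indication + offset

# Hypothetical thermometer data: indications vs. reference standard (degrees C)
gain, offset = fit_calibration([0.2, 25.4, 50.1], [0.0, 25.0, 50.0])
print(round(measurement_result(25.4, gain, offset), 2))  # about 25.17
```

The fitted relation, not the raw indication, is what the definition calls the measurement result.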

This definition states that the calibration process is purely a comparison, but introduces the concept of measurement uncertainty in relating the accuracies of the device under test and the standard.

Modern calibration processes

The increasing need for known accuracy and uncertainty, and the need for internationally consistent and comparable standards, has led to the establishment of national laboratories. In many countries a National Metrology Institute (NMI) will exist, which maintains the primary standards of measurement (the main SI units plus a number of derived units) used to provide traceability to customers' instruments by calibration.

The NMI supports the metrological infrastructure in that country (and often others) by establishing an unbroken chain from the top level of standards to the instruments used for measurement. Examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany, and many others. Since the Mutual Recognition Agreement was signed, it is now straightforward to take traceability from any participating NMI, and it is no longer necessary for a company to obtain traceability from the NMI of the country in which it is situated, such as the National Physical Laboratory in the UK.

Quality

To improve the quality of the calibration and have the results accepted by outside organizations it is desirable for the calibration and subsequent measurements to be "traceable" to the internationally defined measurement units. Establishing traceability is accomplished by a formal comparison to a standard which is directly or indirectly related to national standards (such as NIST in the USA), international standards, or certified reference materials. This may be done by national standards laboratories operated by the government or by private firms offering metrology services.

Quality management systems call for an effective metrology system, which includes formal, periodic, and documented calibration of all measuring instruments. The ISO 9000 and ISO 17025 standards require that these traceable actions be to a high level and set out how they can be quantified.

To communicate the quality of a calibration, the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis. Sometimes a DFS (Departure From Spec) is required to operate machinery in a degraded state. Whenever this happens, it must be in writing and authorized by a manager, with the technical assistance of a calibration technician.
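A common way to state such an uncertainty is to combine the individual standard uncertainties in quadrature and multiply by a coverage factor, typically k = 2 for roughly 95% confidence. This is a sketch of that arithmetic only; the component values are invented, and a full uncertainty analysis would also justify treating the components as independent:

```python
import math

def expanded_uncertainty(standard_uncertainties, k=2.0):
    """Combine independent standard uncertainties in quadrature (root sum
    of squares), then apply coverage factor k (k=2 ~ 95% confidence)."""
    combined = math.sqrt(sum(u ** 2 for u in standard_uncertainties))
    return k * combined

# Hypothetical budget: reference standard, repeatability, resolution (units)
U = expanded_uncertainty([0.03, 0.04, 0.00])
print(f"calibration value 100.02 units +/- {U:.2f} units (k=2)")
```

Here the combined standard uncertainty is 0.05 units, so the expanded uncertainty reported on the certificate would be 0.10 units at k = 2.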

Measuring devices and instruments are categorized according to the physical quantities they are designed to measure. These categorizations vary internationally, e.g., NIST 150-2G in the U.S. and NABL-141 in India. Together, these standards cover instruments that measure various physical quantities such as electromagnetic radiation (RF probes), sound (sound level meter or noise dosimeter), time and frequency (intervalometer), ionizing radiation (Geiger counter), light (light meter), mechanical quantities (limit switch, pressure gauge, pressure switch), and thermodynamic or thermal properties (thermometer, temperature controller). The standard instrument for each test device varies accordingly, e.g., a dead weight tester for pressure gauge calibration and a dry block temperature tester for temperature gauge calibration.

Instrument calibration prompts

Calibration may be required for the following reasons:

  • a new instrument
  • after an instrument has been repaired or modified
  • moving from one location to another location
  • when a specified time period has elapsed
  • when a specified usage (operating hours) has elapsed
  • before and/or after a critical measurement
  • after an event, for example
    • after an instrument has been exposed to a shock, vibration, or physical damage, which might potentially have compromised the integrity of its calibration
    • sudden changes in weather
  • whenever observations appear questionable or instrument indications do not match the output of surrogate instruments
  • as specified by a requirement, e.g., customer specification, instrument manufacturer recommendation.

In general use, calibration is often regarded as including the process of adjusting the output or indication of a measurement instrument to agree with the value of the applied standard, within a specified accuracy. For example, a thermometer could be calibrated so that the error of indication or the correction is determined, and adjusted (e.g. via calibration constants) so that it shows the true temperature in Celsius at specific points on the scale. This is the end-user's perception of the instrument. However, very few instruments can be adjusted to exactly match the standards they are compared to. For the vast majority of calibrations, the calibration process is actually the comparison of an unknown to a known, and the recording of the results.
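For instance, a two-point adjustment derives gain and offset calibration constants from readings at two known temperatures, and those constants then correct subsequent indications. The readings below are hypothetical:

```python
def two_point_constants(reading_lo, true_lo, reading_hi, true_hi):
    """Derive constants so that: corrected = gain * reading + offset."""
    gain = (true_hi - true_lo) / (reading_hi - reading_lo)
    offset = true_lo - gain * reading_lo
    return gain, offset

def correct(reading, gain, offset):
    """Apply the calibration constants to a raw indication."""
    return gain * reading + offset

# A thermometer reads 0.5 C in an ice bath (0 C) and 99.0 C in
# boiling water (100 C at standard pressure):
gain, offset = two_point_constants(0.5, 0.0, 99.0, 100.0)
print(round(correct(0.5, gain, offset), 6))   # 0.0
print(round(correct(99.0, gain, offset), 6))  # 100.0
```

The corrected readings match the standard exactly at the two calibration points; between them, accuracy depends on how linear the instrument actually is.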

Basic calibration process

Purpose and scope

The calibration process begins with the design of the measuring instrument that needs to be calibrated. The design has to be able to "hold a calibration" through its calibration interval. In other words, the design has to be capable of measurements that are "within engineering tolerance" when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the actual measuring instruments performing as expected. Fundamentally, the purpose of calibration is to maintain the quality of measurement and to ensure the proper working of a particular instrument.

Frequency

The exact mechanism for assigning tolerance values varies by country and by industry. The measuring equipment manufacturer generally assigns the measurement tolerance, suggests a calibration interval (CI), and specifies the environmental range of use and storage. The using organization generally assigns the actual calibration interval, which depends on the specific equipment's likely level of usage. The assignment of calibration intervals can be a formal process based on the results of previous calibrations. The standards themselves are not specific about recommended CI values:

ISO 17025
"A calibration certificate (or calibration label) shall not contain any recommendation on the calibration interval except where this has been agreed with the customer. This requirement may be superseded by legal regulations.”
ANSI/NCSL Z540
"...shall be calibrated or verified at periodic intervals established and maintained to assure acceptable reliability..."
ISO-9001
"Where necessary to ensure valid results, measuring equipment shall...be calibrated or verified at specified intervals, or prior to use...”
MIL-STD-45662A
"... shall be calibrated at periodic intervals established and maintained to assure acceptable accuracy and reliability...Intervals shall be shortened or may be lengthened, by the contractor, when the results of previous calibrations indicate that such action is appropriate to maintain acceptable reliability."

Standards required and accuracy

The next step is defining the calibration process. The selection of a standard or standards is the most visible part of the calibration process. Ideally, the standard has less than 1/4 of the measurement uncertainty of the device being calibrated. When this goal is met, the accumulated measurement uncertainty of all of the standards involved is considered to be insignificant when the final measurement is also made with the 4:1 ratio. This ratio was probably first formalized in Handbook 52 that accompanied MIL-STD-45662A, an early US Department of Defense metrology program specification. It was 10:1 from its inception in the 1950s until the 1970s, when advancing technology made 10:1 impossible for most electronic measurements.

Maintaining a 4:1 accuracy ratio with modern equipment is difficult. The test equipment being calibrated can be just as accurate as the working standard. If the accuracy ratio is less than 4:1, then the calibration tolerance can be reduced to compensate. When 1:1 is reached, only an exact match between the standard and the device being calibrated is a completely correct calibration. Another common method for dealing with this capability mismatch is to reduce the accuracy of the device being calibrated.

For example, a gauge with 3% manufacturer-stated accuracy can be changed to 4% so that a 1% accuracy standard can be used at 4:1. If the gauge is used in an application requiring 16% accuracy, having the gauge accuracy reduced to 4% will not affect the accuracy of the final measurements. This is called a limited calibration. But if the final measurement requires 10% accuracy, then the 3% gauge never can be better than 3.3:1. Then perhaps adjusting the calibration tolerance for the gauge would be a better solution. If the calibration is performed at 100 units, the 1% standard would actually be anywhere between 99 and 101 units. The acceptable values of calibrations where the test equipment is at the 4:1 ratio would be 96 to 104 units, inclusive. Changing the acceptable range to 97 to 103 units would remove the potential contribution of all of the standards and preserve a 3.3:1 ratio. Continuing, a further change to the acceptable range to 98 to 102 restores more than a 4:1 final ratio.

This is a simplified example. The mathematics of the example can be challenged. It is important that whatever thinking guided this process in an actual calibration be recorded and accessible. Informality contributes to tolerance stacks and other difficult to diagnose post calibration problems.

Also in the example above, ideally the calibration value of 100 units would be the best point in the gauge's range to perform a single-point calibration. It may be the manufacturer's recommendation or it may be the way similar devices are already being calibrated. Multiple point calibrations are also used. Depending on the device, a zero unit state, the absence of the phenomenon being measured, may also be a calibration point. Or zero may be resettable by the user-there are several variations possible. Again, the points to use during calibration should be recorded.

There may be specific connection techniques between the standard and the device being calibrated that may influence the calibration. For example, in electronic calibrations involving analog phenomena, the impedance of the cable connections can directly influence the result.

Manual and automatic calibrations

Calibration methods for modern devices can be manual or automatic.

Manual calibration - US serviceman calibrating a pressure gauge. The device under test is on his left and the test standard on his right.

As an example, a manual process may be used for calibration of a pressure gauge. The procedure requires multiple steps: connecting the gauge under test to a reference master gauge and an adjustable pressure source, applying fluid pressure to both reference and test gauges at definite points over the span of the gauge, and comparing the readings of the two. The gauge under test may be adjusted so that its zero point and response to pressure comply as closely as possible with the intended accuracy. Each step of the process requires manual record keeping.
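The comparison step of such a manual procedure amounts to recording, at each test point, the reference reading, the gauge reading, and their difference. A sketch of that record keeping, with invented readings in bar:

```python
def as_found_errors(points):
    """Compare reference and device-under-test readings at each test point.
    `points` is a list of (reference_reading, test_gauge_reading) pairs."""
    return [(ref, dut, round(dut - ref, 3)) for ref, dut in points]

# Hypothetical pressure points over the span of the gauge (bar):
record = as_found_errors([(0.0, 0.1), (5.0, 5.2), (10.0, 10.1)])
for ref, dut, err in record:
    print(f"reference {ref:5.1f}  gauge {dut:5.1f}  error {err:+.3f}")
```

The resulting "as-found" table is what determines whether the gauge passes as-is, needs adjustment, or is taken out of service.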

Automatic calibration - A U.S. serviceman using a 3666C auto pressure calibrator

An automatic pressure calibrator is a device that combines an electronic control unit, a pressure intensifier used to compress a gas such as nitrogen, a pressure transducer used to detect desired levels in a hydraulic accumulator, and accessories such as liquid traps and gauge fittings. An automatic system may also include data collection facilities to automate the gathering of data for record keeping.

Process description and documentation

All of the information above is collected in a calibration procedure, which is a specific test method. These procedures capture all of the steps needed to perform a successful calibration. The manufacturer may provide one or the organization may prepare one that also captures all of the organization's other requirements. There are clearinghouses for calibration procedures such as the Government-Industry Data Exchange Program (GIDEP) in the United States.

This exact process is repeated for each of the standards used until transfer standards, certified reference materials and/or natural physical constants, the measurement standards with the least uncertainty in the laboratory, are reached. This establishes the traceability of the calibration.

See Metrology for other factors that are considered during calibration process development.

After all of this, individual instruments of the specific type discussed above can finally be calibrated. The process generally begins with a basic damage check. Some organizations such as nuclear power plants collect "as-found" calibration data before any routine maintenance is performed. After routine maintenance and deficiencies detected during calibration are addressed, an "as-left" calibration is performed.

More commonly, a calibration technician is entrusted with the entire process and signs the calibration certificate, which documents the completion of a successful calibration. The basic process outlined above is a difficult and expensive challenge. As a commonly accepted rule of thumb, the yearly cost of supporting ordinary equipment is about 10% of the original purchase price. Exotic devices such as scanning electron microscopes, gas chromatograph systems and laser interferometer devices can be even more costly to maintain.

The 'single measurement' device used in the basic calibration process description above does exist, but, depending on the organization, the majority of devices needing calibration have several ranges and many functionalities in a single instrument. A good example is a common modern oscilloscope: there could easily be 200,000 combinations of settings to calibrate completely, and there are limitations on how much of an all-inclusive calibration can be automated.

An instrument rack with tamper-indicating seals

To prevent unauthorized access to an instrument, tamper-indicating seals are usually applied after calibration. The picture of the oscilloscope rack shows these; intact seals prove that the instrument has not been opened since it was last calibrated, since any tampering would indicate unauthorized access to the adjusting elements of the instrument. There are also labels showing the date of the last calibration and, as the calibration interval dictates, when the next one is needed. Some organizations also assign a unique identification to each instrument to standardize the record keeping and keep track of accessories that are integral to a specific calibration condition.

When the instruments being calibrated are integrated with computers, the integrated computer programs and any calibration corrections are also under control.

Historical development

Origins

The words "calibrate" and "calibration" entered the English language as recently as the American Civil War, in descriptions of artillery, thought to be derived from a measurement of the calibre of a gun.

Some of the earliest known systems of measurement and calibration seem to have been created among the ancient civilizations of Egypt, Mesopotamia and the Indus Valley, with excavations revealing the use of angular gradations for construction. The term "calibration" was likely first associated with the precise division of linear distance and angles using a dividing engine and the measurement of gravitational mass using a weighing scale. These two forms of measurement alone and their direct derivatives supported nearly all commerce and technology development from the earliest civilizations until about AD 1800.

Calibration of weights and distances (c. 1100 CE)

An example of a weighing scale with a ½ ounce calibration error at zero. This is a "zeroing error", which is inherently indicated and can normally be adjusted by the user, but in this case may be due to the string and rubber band

Early measurement devices were direct, i.e. they had the same units as the quantity being measured. Examples include length using a yardstick and mass using a weighing scale. At the beginning of the twelfth century, during the reign of Henry I (1100-1135), it was decreed that a yard be "the distance from the tip of the King's nose to the end of his outstretched thumb." However, it wasn't until the reign of Richard I (1197) that we find documented evidence.

Assize of Measures
"Throughout the realm there shall be the same yard of the same size and it should be of iron."

Other standardization attempts followed, such as the Magna Carta (1225) for liquid measures, until the Mètre des Archives from France and the establishment of the Metric system.

The early calibration of pressure instruments

Direct reading design of a U-tube manometer

One of the earliest pressure measurement devices was the mercury barometer, credited to Torricelli (1643), which read atmospheric pressure using mercury. Soon after, water-filled manometers were designed. All these would have linear calibrations using gravimetric principles, where the difference in levels was proportional to pressure. The normal units of measure would be the convenient inches of mercury or water.

In the direct reading hydrostatic manometer design on the right, applied pressure Pa pushes the liquid down the right side of the manometer U-tube, while a length scale next to the tube measures the difference of levels. The resulting height difference "H" is a direct measurement of the pressure or vacuum with respect to atmospheric pressure. In the absence of differential pressure both levels would be equal, and this would be used as the zero point.
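The height-to-pressure relation described here is the hydrostatic law P = ρgH. As a sketch, assuming a mercury-filled U-tube at standard density and gravity (assumed values, not from the original text):

```python
# Hydrostatic relation for a U-tube manometer: P = rho * g * H.
RHO_MERCURY = 13_595.1   # kg/m^3, density of mercury at 0 degC (assumed)
G = 9.80665              # m/s^2, standard gravity

def manometer_pressure(height_m, rho=RHO_MERCURY):
    """Differential pressure (Pa) indicated by a level difference of height_m metres."""
    return rho * G * height_m

# A 1-inch (0.0254 m) difference in mercury levels:
p = manometer_pressure(0.0254)
print(f"{p:.1f} Pa")   # about 3386 Pa, i.e. one inch of mercury
```

This linearity is why such manometers "would have linear calibrations using gravimetric principles": the only calibration factors are the fluid density and the length scale.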

The Industrial Revolution saw the adoption of "indirect" pressure measuring devices, which were more practical than the manometer. An example is in high pressure (up to 50 psi) steam engines, where mercury was used to reduce the scale length to about 60 inches, but such a manometer was expensive and prone to damage. This stimulated the development of indirect reading instruments, of which the Bourdon tube invented by Eugène Bourdon is a notable example.

Indirect reading design showing a Bourdon tube from the front (top) and the rear (bottom).

In the front and back views of a Bourdon gauge on the right, applied pressure at the bottom fitting reduces the curl on the flattened pipe proportionally to pressure. This moves the free end of the tube which is linked to the pointer. The instrument would be calibrated against a manometer, which would be the calibration standard. For measurement of indirect quantities of pressure per unit area, the calibration uncertainty would be dependent on the density of the manometer fluid, and the means of measuring the height difference. From this other units such as pounds per square inch could be inferred and marked on the scale.
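The unit inference mentioned above can be sketched numerically. Assuming standard conversion factors for mercury columns and psi (assumed constants, not given in the original text), a manometer height in inches of mercury converts to pounds per square inch like so:

```python
# Convert a manometer reading in inches of mercury to psi, as when
# marking a Bourdon-gauge scale in pressure-per-unit-area units.
PA_PER_INHG = 3386.39    # pascals per inch of mercury (standard conditions)
PA_PER_PSI = 6894.757    # pascals per pound-force per square inch

def inhg_to_psi(inches_hg):
    """Pressure in psi equivalent to a mercury column of inches_hg."""
    return inches_hg * PA_PER_INHG / PA_PER_PSI

# A 10-inch mercury column on the reference manometer:
print(f"{inhg_to_psi(10):.1f} psi")  # about 4.9 psi
```

The conversion depends on the assumed fluid density, which is why the calibration uncertainty of the Bourdon gauge traces back to the density of the manometer fluid and the height measurement.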

Howler (error)

From Wikipedia, the free encyclopedia
 
A project error

A howler is a glaring blunder, clumsy mistake or embarrassing misjudgment, typically one which evokes laughter, though not always.

The Oxford English Dictionary defines howler as "3.3 slang. Something 'crying', 'clamant', or excessive; spec. a glaring blunder, esp. in an examination, etc.", and gives the earliest usage example in 1872. Eric Partridge's Dictionary of Slang and Unconventional English gives a similar account; the 1951 edition defined it in part as: "... A glaring (and amusing) blunder: from before 1890; ... also, a tremendous lie ... Literally something that howls or cries for notice, or perhaps ... by way of contracting howling blunder."

Another common interpretation of this usage is that a howler is a mistake fit to make one howl with laughter.

Equivalent terms

All over the world, probably in all natural languages, there are many informal terms for blunders; the English term "howler" occurs in many translating dictionaries. There are other colloquial English words for howler in the sense dealt with in this article, such as the mainly United States and Canadian slang term boner, which has various interpretations, including that of blunder. Like howler, boner can be used in any sense to mean an ignominious and (usually) laughable blunder, and also like howler, it has been used in the titles of published collections of largely schoolboy blunders since at least the 1930s.

Boner is another colloquialism that means much the same as howler in the context of this article, but its other meanings differ. For one, boner is not traditionally used as a general intensifier, or for specifically describing an accident or similar incidents, as howler and howling are. Other assorted terms have much longer histories, and some of them are not regarded as slang. For example, bull and blunder have long been used in similar senses, each with its own overtones and assorted extraneous meanings. For example, Bulls and Blunders, an American book published in the 1890s, uses the word howler only once, in the passage: "Miss A. C. Graham, of Annerley, has received a prize from the University Correspondent for the best collection of schoolboy howlers". Although he did not otherwise use the word himself, the author evidently saw no need to define a term already so familiar on both sides of the Atlantic even at that time.

Mathematics as a special case of terminology

In mathematics, the term "howler" is used to refer to a mathematical fallacy or an unsound method of reasoning which somehow leads to a correct result. However, the distinction between mathematical howlers and mathematical fallacies is poorly defined, and the terminology is confused and arbitrary, as hardly any uniform definition is universally accepted for any term. Terms related to howlers and fallacies include sophism, in which an error is wilfully concealed, whether for didactic purposes or for entertainment. In one sense, the converse of either a howler or a sophism is a mathematical paradox, in which a valid derivation leads to an unexpected or implausible result. However, in the terminology of Willard V. O. Quine, that would be a veridical paradox, whereas sophisms and fallacies would be falsidical paradoxes.

Forms of howler

Typically such definitions of the term howler or boner do not specify the mode of the error; a howler could be a solecism, a malapropism, or simply a spectacular, usually compact, demonstration of misunderstanding, illogic, or outright ignorance. As such, a howler could be an intellectual blunder in any field of knowledge, usually on a point that should have been obvious in context. In Eden Phillpotts's short story Doctor Dunston's Howler, the "howler" in question was not even verbal; it was flogging the wrong boy, with disastrous consequences.

Conversely, on inspection of many examples of bulls and howlers, they may simply be the products of unfortunate wording, punctuation, or point of view. In particular, schoolboy howlers might sometimes amount to what Richard Feynman refers to as perfectly reasonable deviations from the beaten track. Such specimens may variously be based on mondegreens, or they might derive from misunderstandings of fact passed on by elders, teachers or communities. Not all howlers originate with the pupil.

Fields in which howlers propagate

As illustrated, terms such as howler need not specify the discipline in which the blunder was perpetrated. Howlers have little special application to any particular field, except perhaps education. Most collections refer mainly to the schoolboy howler, politician's howler, epitaph howler, judicial howler, and so on, not always using the term howler, boner or the like. There are various classes in mood as well; the typical schoolboy howler displays innocent ignorance or misunderstanding, whereas the typical politician's howler is likely to expose smugly ignorant pretentiousness, bigotry, or self-interest (see examples below).

The howlers of prominent or self-important people lend themselves to parody and satire, so much so that Quaylisms, Bushisms, Goldwynisms, and Yogiisms were coined in far greater numbers than ever the alleged sources could have produced. Sometimes such lampooning is fairly good-humoured, sometimes it is deliberately used as a political weapon. In either case, it is generally easier to propagate a spuriously attributed howler than to retract one.

The popularity of howlers

Collections of howlers, boners, bulls and the like are popular sellers as joke books go, and they tend to remain popular as reprints; Abingdon, for example, remarks on that in his preface. People commonly enjoy laughing at the blunders of stereotypes from a comfortable position of superiority. This applies especially strongly when the object of the condescension and mockery is a member of some other social class or group. National, regional, racial, or political rivals, occupational groups such as lawyers, doctors, police, and armed forces, all are stock targets of assorted jokes; their howlers, fictional or otherwise, are common themes. Older collections of cartoons and jokes, published before the modern sensitivity to political correctness, are rich sources of examples.

Sometimes, especially in oppressed peoples, such wit takes on an ironic turn and the butt of the stories then becomes one's own people. It is very likely that such mock self-mockery gave rise to the term Irish bull (as opposed to just any bull), which is reflected in works such as Samuel Lover's novel Handy Andy.

Similarly, the Yiddish stories of the "wise men" of the town of Chelm could be argued to be as rich in self-mockery as in mockery. There are many similar examples of mixed mockery and self-mockery—good-natured or otherwise.

Throughout the ages and in practically all countries, there have been proverbial associations of given regions with foolishness or insanity, ranging from the Phrygians and Boeotians of classical times, down to the present. Stories of the Wise Men of Gotham are prominent medieval examples. Apocryphally, the men of Gotham feigned insanity to discourage unwelcome attention from the representatives of King John early in the thirteenth century. Their fictitious activities recalled stories from many other alleged regions of dunces and in fact, many recurring stories have been borrowed through the ages from other times and places, either for entertainment or satire. For example, some Gotham stories, variously embellished, are far older than the actual town of Gotham; consider for instance the second one: it concerned the man who, not wishing to overburden his horse, took the load off his horse onto his own back as he rode it. That story dates back much further than medieval times and since the time of the alleged event in Gotham, it has appeared in Afrikaans comics of the mid-twentieth century, and no doubt elsewhere. However, such traditions often grow on histories of tyranny and are nurtured as two-edged weapons; as the men of Gotham reputedly said: "We ween there are more fools pass through Gotham than remain in it."

Howler propagation and afterlife – Ghost words

Howlers "in the wild" include many misuses of technical terms or principles that are too obscure or too unfunny for anyone to publish them. Such examples accordingly remain obscure, but a few have reappeared subsequently as good faith entries in dictionaries, encyclopaedias, and related authoritative documents. In the nature of things, encyclopaedic and lexicographic sources rely heavily on each other, and such words have a tendency to propagate from one textbook to another. It can be very difficult to eradicate unnoticed errors that have achieved publication in standard reference books.

Professor Walter William Skeat coined the term ghost-word in the late nineteenth century. By that he meant fictitious, originally meaningless words created by such influences as printers' errors and illegible copy. For example, "ciffy" instead of "cliffy" and "morse" instead of "nurse" are two such words that propagated considerably in printed material, so much so that they can occasionally still be found in print or in usage today, more than a century later, sometimes in old books still in use and sometimes in modern publications relying on such books.

Apart from the problems of revealing the original errors once they have been accepted, there is the problem of dealing with the supporting rationalisations that arise in the course of time. See for example the article on Riding (country subdivision), paying particular attention to the reference to farthing and the sections on Word history and Norse states. In the context of such documented material the false etymology of "Riding" is particularly illustrative: "A common misconception holds that the term arose from some association between the size of the district and the distance that can be covered on horseback in a certain amount of time".

As a notorious example of how such errors can become officially established, the extant and established name of Nome, Alaska allegedly originated when a British cartographer copied an ambiguous annotation made by a British officer on a nautical chart. The officer had written "? Name" next to the unnamed cape. The mapmaker misread the annotation as "C. Nome", meaning Cape Nome. If that story is true, then the name is a material example of a ghost word.

As an example of how such assertions may be disputed, an alternative story connects the source with the place name: Nomedalen in Norway.

Technical terms and technical incompetence

The misuse of technical terms to produce howlers is so common that it often goes unnoticed except by people skilled in the relevant fields. One case in point is the use of "random", when the intended meaning is adventitious, arbitrary, accidental, or something similarly uncertain or nondeterministic. Another example is to speak of something as infinite when the intended meaning is: "very large". Some terms have been subject to such routine abuse that they lose their proper meanings, reducing their expressive value. Imply, infer, unique, absolute and many others have become difficult to use in any precise sense without risk of misunderstanding. Such howlers are lamented as a pernicious, but probably unavoidable, aspect of the continuous change of language. One consequence is that most modern readers are unable to make sense of early modern books, even those as recent as the First Folio of Shakespeare or the earliest editions of the Authorized King James Version of the Bible.

The popularity of nautical themes in literature has provided some conspicuous examples. It has tempted many authors ignorant of the technicalities into embarrassing howlers in their terminology. A popular example is in the opening line of the song Tom Bowling by Charles Dibdin. It refers metaphorically to a human corpse as a "sheer hulk". The intent is something like "complete wreck", which is quite inappropriate to the real meaning of the term. In literature, blunders of that type have been so common for so long that they have been satirised in works such as Arthur Conan Doyle's short story Cyprian Overbeck Wells, in which he mocks the nautical blunders in the terminology Jonathan Swift used in Gulliver's Travels.

Sources and authenticity

In contrast to tales representing people's rivals as stupid or undignified, it is easy to believe that many or most schoolboy howlers are genuine, or at least are based on genuine incidents; any school teacher interested in the matter can collect authentic samples routinely. However, it is beyond doubt that the collections formally published or otherwise in circulation contain spurious examples, or at least a high degree of creative editing, as is variously remarked upon in the introductory text of the more thoughtful anthologies. It most certainly is not as a rule possible to establish anything like definitive, pedantically correct versions with authentic wording, even if there were much point to any such ideal. Howlers typically are informally reported, and some of them have been generated repeatedly by similar confusion in independent sources. For example, two members of parliament independently approached Babbage with the same uncomprehending question about his machine.

Examples and collections of allegedly genuine howlers

John Humphrys relates the following example of a journalistic howler: ...The headline above one of the stories on my page read: "Work Comes Second For Tony And I". In case you did not know, newspaper headlines are written not by the contributors but by sub-editors ... I was shocked. That a sub on the Times should commit such a howler was beyond belief. (The howler is that the headline should have read "Tony And Me", since the phrase is the object of the preposition "for"; however, see coordinative constructions.)

Charles Babbage related his reaction as an intellectual when he wrote: 'On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.' One might see this as a politicians' howler: a layman may so radically fail to understand the logical structure of a system that he cannot begin to perceive the matching logic of the problems that the system is suited to deal with. One must of course respect the fact that the members concerned had done no worse than reveal their lack of insight into a technical matter; they had not pretentiously propounded personal delusions as fact, which would be more typical of the most notorious howlers perpetrated by politicians.

Probably the most prominent anthologisers of howlers in the United Kingdom were Cecil Hunt and Russell Ash. In the United States, probably the most prominent was Alexander Abingdon. According to Abingdon's foreword to Bigger and Better Boners, he shared material with Hunt at least. However, since their day many more collections have appeared, commonly relying heavily on plagiarism of the earlier anthologies.

Here are a few short, illustrative examples of mainly schoolboy howlers culled from various collections:

Examples of retention of misinformation, or where information is presented in an unfamiliar context:

  • A Cattle is a shaggy kind of cow. (Perhaps a city child had been shown a picture of highland cattle before he knew the word “cattle”. If so, the error was natural.)
  • Africa is much hotter than some countries because it is abroad. (To a British child growing up in a cold temperate zone, a natural idea.)
  • Poetry is when every line starts with a capital letter. (Even many adults struggle to distinguish poetry from prose after first encountering free verse.)
  • The locusts were the chief plague, they ate the first-born.
  • All creatures are imperfect beasts. Man alone is a perfect beast.

Unfamiliar instruction heard without comprehension frequently leads to mondegreens and malapropisms:

  • Hiatus is breath that wants seeing to. (for "halitosis")
  • A gherkin is a native who runs after people with a knife. (for "Gurkha")

Mistranslations from foreign languages happen:

  • "Cum grano salis" means: "Although with a corn, thou dancest." (mistranslation from Latin; salis also means "of salt")
  • "Mon frère aîné" means: "My ass of a brother". (mistranslation from French; âne = "donkey", aîné = "older")
  • "Ris de veau financière" (a cookery dish) misrendered from French as "the laugh of the calf at the banker's wife"
  • "La primavera es el parte del ano en que todo se cambia." (The pupil neglected a tilde; año means "year" and ano means "anus".)
  • "Inkstand" has appeared on a Greek restaurant menu, when octopus was meant; the writer probably looked up the Greek χταπόδι in a Greek–German dictionary and found Tintenfisch. They then looked that up in a German–English dictionary but took the translation for the word before Tintenfisch, which was Tintenfass—in English inkstand, inkwell.

Bull: a confusion of wording often related vaguely to a valid idea; not all howlers are bulls in this sense, but the following are:

  • The Magna Carta provided that no free man should be hanged twice for the same offence.
  • Edward III would have been King of France if his mother had been a man.
  • To be a good nurse you must be absolutely sterile.
  • Tundras are the treeless forests of South America.

In extreme examples of bulls it is hard to guess exactly what the pupil had in mind, or how to correct it. For example:

  • Homer was not written by Homer, but another man of that name.

Perhaps this stems from some idea that Homer’s works were written by someone else (see Homeric question). Whatever its origin, it is a prime example of how a howler, and in particular the paradoxical aspects of a bull, presumably inadvertently, may constitute deeper comment on the human condition than most deliberate epigrams.

Sometimes the pupil simply may have been groping for any answer that might placate the examiner.

  • The plural of ox is oxygen.
  • The Israelites made a golden calf because they didn't have enough gold to make a cow.
  • SOS is a musical term. It means Same Only Softer.
  • There are four symptoms for a cold. Two I forget and the other two are too well known to mention.

Some howlers are disconcertingly thought-provoking or look suspiciously like cynicism.

  • Dictionaries are books written by people who think they can spell better than anyone else.
  • "Etc" is a sign used to make believe that you know more than you do.
  • The difference between air and water is that air can be made wetter, but not water.
  • What is half of five? It depends on whether you mean the two or the three.

As already remarked, not all howlers are verbal:

  • One youngster copied down a subtraction sum wrongly, with the smaller number above. As it happened, the date was just above his sum, so he borrowed from his date.

Evolutionary linguistics

From Wikipedia, the free encyclopedia

Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language. Evolutionary linguists consider linguistics as a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language. Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics.

A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill in gaps of knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals.

For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct; or that it depends on a single mutation which has caused a language organ to appear in the human brain. This is hypothesized to result in a crystalline grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing. Others, yet, liken languages to living organisms. Languages are considered analogous to a parasite or populations of mind-viruses. While there is no solid scientific evidence for any of the claims, some of them have been labelled as pseudoscience.

History

1863–1945: social Darwinism

Although pre-Darwinian theorists had compared languages to living organisms as a metaphor, the comparison was first taken literally in 1863 by the historical linguist August Schleicher, who was inspired by Charles Darwin's On the Origin of Species. At the time there was not enough evidence to prove that Darwin's theory of natural selection was correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution of species. A review of Schleicher's book Darwinism as Tested by the Science of Language appeared in the first issue of the journal Nature in 1870. Darwin reiterated Schleicher's proposition in his 1871 book The Descent of Man, claiming that languages are comparable to species, and that language change occurs through natural selection as words 'struggle for life'. Darwin believed that languages had evolved from animal mating calls. Darwinists considered the concept of language creation as unscientific.

The social Darwinists Schleicher and Ernst Haeckel were keen gardeners and regarded the study of cultures as a type of botany, with different species competing for the same living space. Their ideas came to be advocated by politicians who wanted to appeal to working-class voters, not least the national socialists, who subsequently included the concept of struggle for living space in their agenda. Highly influential until the end of World War II, social Darwinism was eventually banished from the human sciences, leading to a strict separation of natural and sociocultural studies.

This gave rise to the dominance of structural linguistics in Europe. There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in the human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centrepoint of humanistic thinking.

From 1959 onwards: genetic determinism

In the United States, however, structuralism was fended off by the advocates of behavioural psychology, a linguistics framework nicknamed 'American structuralism'. It was eventually replaced by the approach of Noam Chomsky, who published a modification of Louis Hjelmslev's formal structuralist theory, claiming that syntactic structures are innate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power at MIT following Spring 1968.

Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted the post-structuralists in the Science Wars of the late 1990s. The turn of the century saw a new academic funding policy in which interdisciplinary research became favoured, effectively directing research funds to the biological humanities. The decline of structuralism was evident by 2015, with the Sorbonne having lost its former spirit.

Chomsky eventually claimed that syntactic structures are caused by a random mutation in the human genome, proposing a similar explanation for other human faculties such as ethics. But Steven Pinker argued in 1990 that they are the outcome of evolutionary adaptations.

From 1976 onwards: Neo-Darwinism

At the same time as the Chomskyan paradigm of biological determinism was defeating humanism, it was losing its own clout within sociobiology. It was likewise reported in 2015 that generative grammar was under fire in applied linguistics and in the process of being replaced with usage-based linguistics, a derivative of Richard Dawkins's memetics that treats linguistic units as replicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestseller The Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky's Universal Grammar, grouped under different brands, including a framework called Cognitive Linguistics (with capitalised initials) and 'functional' (adaptational) linguistics (not to be confused with functional linguistics), to confront both Chomsky and the humanists. The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics and linguistic typology, while the generative approach has maintained its position in general linguistics, especially syntax, and in computational linguistics.

View of linguistics

Evolutionary linguistics is part of the wider framework of Universal Darwinism. In this view, linguistics is seen as an ecological environment for research traditions struggling for the same resources. According to David Hull, these traditions correspond to species in biology. Relationships between research traditions can be symbiotic, competitive or parasitic. An adaptation of Hull's theory to linguistics was proposed by William Croft, who argues that the Darwinian method is more advantageous than linguistic models based on physics, structuralist sociology, or hermeneutics.

Approaches

Evolutionary linguistics is often divided into functionalism and formalism, concepts which are not to be confused with functionalism and formalism in their humanistic senses. Functional evolutionary linguistics considers languages as adaptations to the human mind. The formalist view regards them as crystallised, or non-adaptational.

Functionalism (adaptationism)

The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated. It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning which is a 'metaphorical' version of image-based reasoning. Language is not considered as a separate area of cognition, but as coinciding with general cognitive capacities, such as perception, attention, motor skills, and spatial and visual processing. It is argued to function according to the same principles as these.

It is thought that the brain links action schemes to form–meaning pairs which are called constructions. Cognitive linguistic approaches to syntax are called cognitive and construction grammar. Also deriving from memetics and other cultural replicator theories, these can study the natural or social selection and adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language as a population of linguistic units.

The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been made. What correspond to replicators or mind-viruses in memetics are called linguemes in Croft's theory of Utterance Selection (TUS), likewise linguemes or constructions in construction grammar and usage-based linguistics, and metaphors, frames or schemas in cognitive and construction grammar. The terminology of memetics has largely been replaced with that of complex adaptive systems. In current linguistics, this term covers a wide range of evolutionary notions while maintaining the Neo-Darwinian concepts of replication and replicator population.
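The idea of language as a population of competing replicators can be illustrated with a toy simulation. The sketch below is not drawn from any of the frameworks named above; the variant names, fitness weights, and function are purely illustrative assumptions. It resamples a population of competing variants of a construction in proportion to a replication weight, loosely mimicking how usage-based models picture selection among linguistic units.

```python
import random

def simulate_selection(population, fitness, generations=50, seed=42):
    """Toy replicator dynamics: each generation, variants are resampled
    in proportion to their replication weight, so a slightly favoured
    variant tends to spread through the population (subject to drift)."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        weights = [fitness[v] for v in pop]
        pop = rng.choices(pop, weights=weights, k=len(pop))
    return pop

# Two hypothetical competing variants of one construction; the variant
# with the higher replication weight tends (but is not guaranteed) to
# take over the population.
pop = ["gonna"] * 50 + ["going to"] * 50
result = simulate_selection(pop, {"gonna": 1.05, "going to": 1.0})
print(result.count("gonna"), result.count("going to"))
```

Because the resampling is stochastic, drift alone can fix a variant even with equal weights, which is one reason such models treat frequency effects statistically rather than deterministically.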

Functional evolutionary linguistics is not to be confused with functional humanistic linguistics.

Formalism (structuralism)

Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th-century advances in crystallography, Schleicher argued that different types of languages are like plants, animals and crystals. The idea of linguistic structures as frozen drops was revived in tagmemics, an approach to linguistics with the goal of uncovering divine symmetries underlying all languages, as if caused by the Creation.

In modern biolinguistics, the X-bar tree is argued to be like natural systems such as ferromagnetic droplets and botanic forms. Generative grammar considers syntactic structures similar to snowflakes. It is hypothesised that such patterns are caused by a mutation in humans.

The formal–structural evolutionary aspect of linguistics is not to be confused with structural linguistics.

Evidence

There was some hope of a breakthrough at the discovery of the FOXP2 gene. There is little support, however, for the idea that FOXP2 is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech. There is no evidence that people have a language instinct. Memetics is widely discredited as pseudoscience and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience. All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and brain structures are shaped by genes.

Criticism

Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics. Ferdinand de Saussure commented on 19th century evolutionary linguistics:

"Language was considered a specific sphere, a fourth natural kingdom; this led to methods of reasoning which would have caused astonishment in other sciences. Today one cannot read a dozen lines written at that time without being struck by absurdities of reasoning and by the terminology used to justify these absurdities."

Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics a positive development. Esa Itkonen nonetheless deems the revival of Darwinism a hopeless enterprise:

"There is ... an application of intelligence in linguistic change which is absent in biological evolution; and this suffices to make the two domains totally disanalogous ... [Grammaticalisation depends on] cognitive processes, ultimately serving the goal of problem solving, which intelligent entities like humans must perform all the time, but which biological entities like genes cannot perform. Trying to eliminate this basic difference leads to confusion."

Itkonen also points out that the principles of natural selection are not applicable, because language innovation and acceptance have the same source: the speech community. In biological evolution, mutation and selection have separate sources. This makes it possible for people to change their languages, but not their genotype.
