
Friday, May 20, 2022

Philosophical methodology

From Wikipedia, the free encyclopedia

In its most common sense, philosophical methodology is the field of inquiry studying the methods used to do philosophy. But the term can also refer to the methods themselves. It may be understood in a wide sense as the general study of principles used for theory selection, or in a more narrow sense as the study of ways of conducting one's research and theorizing with the goal of acquiring philosophical knowledge. Philosophical methodology investigates both descriptive issues, such as which methods actually have been used by philosophers, and normative issues, such as which methods should be used or how to do good philosophy.

A great variety of philosophical methods have been employed. Methodological skepticism uses systematic doubt in its search for absolutely certain or indubitable first principles of philosophy. The geometrical method starts with a small set of such principles and tries to build a comprehensive philosophical system on this small foundation based on deductive inferences. The phenomenological method aims to arrive at certain knowledge about the realm of appearances by suspending one's judgments about the external world underlying these appearances. Verificationists focus on the conditions of empirical verification of philosophical claims in order to understand their meaning and to expose meaningless claims. Conceptual analysis is used to analyze philosophical concepts by determining their fundamental constituents with the goal of clarifying their meaning. Common-sense philosophy uses commonly accepted beliefs as its starting point for philosophizing. It is often used in a negative sense to criticize radical philosophical positions that constitute a significant departure from common sense. It is closely related to ordinary language philosophy, which approaches philosophical problems by studying how the related terms are used in ordinary language. Intuition-based methods use intuitions, i.e. non-inferential impressions concerning specific cases or general principles, to evaluate whether a philosophical claim is true or false. Intuitions play a central role in thought experiments, in which certain situations are imagined and their possible consequences are assessed in order to confirm or refute philosophical theories. The method of reflective equilibrium consists in considering all the relevant evidence for and against a theory with the goal of arriving at a balanced and coherent perspective on the issue in question. Pragmatists focus on the consequences of accepting or rejecting philosophical theories with the purpose of assessing whether the theories are true or false. The transcendental method usually starts with trivial facts about our mental life and tries to infer various interesting conclusions from them based on the claim that the trivial fact could not be true if these conclusions were false, i.e. that they constitute its conditions of possibility. Experimental philosophy applies the methods found in social psychology and the cognitive sciences, such as conducting surveys, to philosophical problems. Other methods include the Socratic method, theory selection based on theoretical virtues, methodological naturalism, truthmaker theory, and the genealogical method.

The questions in philosophical methodology do not primarily concern which philosophical claims are true, but how to determine which ones are true. Nonetheless, these two issues are closely related since the choice of one's method often has important implications for the arguments cited for and against philosophical theories. In this sense, methodological disagreements are often reflected in philosophical disagreements. Philosophical methodology is closely related to various fields. Theorists often use the contrast with the natural sciences to emphasize how different the methods of philosophy are. Philosophical methodology also has an intimate relation with epistemology since both fields are interested in studying how to determine what we should believe.

Definition

The term "philosophical methodology" refers either to the methods used to philosophize or to the branch of metaphilosophy studying these methods. A method is a way of doing things, such as a set of actions or decisions, in order to achieve a certain goal, when used under the right conditions. In the context of inquiry, a method is a way of conducting one's research and theorizing, like inductive or axiomatic methods in logic or experimental methods in the sciences. Philosophical methodology studies the methods of philosophy. It is not primarily concerned with whether a philosophical position, such as metaphysical dualism or utilitarianism, is true or false. Instead, it asks how one can determine which position should be adopted.

In the widest sense, any principle for choosing between competing theories may be considered as part of the methodology of philosophy. In this sense, philosophical methodology is "the general study of criteria for theory-selection". For example, Occam’s Razor is a methodological principle of theory selection favoring simple over complex theories. A closely related aspect of philosophical methodology concerns the question of which conventions one necessarily needs to adopt to succeed at theory-making. But in a more narrow sense, only guidelines that help philosophers learn about facts studied by philosophy qualify as philosophical methods. This is the more common sense, which applies to most of the methods listed in this article. In this sense, philosophical methodology is closely related to epistemology in that it consists in epistemological methods that enable philosophers to arrive at knowledge. Because of this, the problem of the methods of philosophy is central to how philosophical claims are to be justified.

An important difference in philosophical methodology concerns the distinction between descriptive and normative questions. Descriptive questions ask what methods philosophers actually use or used in the past, while normative questions ask what methods they should use. The normative aspect of philosophical methodology expresses the idea that there is a difference between good and bad philosophy. In this sense, philosophical methods either articulate the standards of evaluation themselves or the practices that ensure that these standards are met. Philosophical methods can be understood as tools that help the theorist do good philosophy and arrive at knowledge. The normative question of philosophical methodology is quite controversial since different schools of philosophy often have very different views on what constitutes good philosophy and how to achieve it.

Methods

A great variety of philosophical methods has been proposed. Some of these methods were developed as a reaction to other methods, for example, to counter skepticism by providing a secure path to knowledge. In other cases, one method may be understood as a development or a specific application of another method. Some philosophers or philosophical movements give primacy to one specific method, while others use a variety of methods depending on the problem they are trying to solve. It has been argued that many of the philosophical methods are also commonly used implicitly in more crude forms by regular people and are only given a more careful, critical, and systematic exposition in philosophical methodology.

Methodological skepticism

Methodological skepticism, also referred to as Cartesian doubt, uses systematic doubt as a method of philosophy. It is motivated by the search for an absolutely certain foundation of our knowledge. The method for finding these foundations is doubt: only that which is indubitable can serve this role. While this approach has been influential, it has also received various criticisms. One problem is that it has proven very difficult to find such absolutely certain claims if the doubt is applied in its most radical form. Another is that while absolute certainty may be desirable, it is by no means necessary for knowledge. In this sense, it excludes too much and seems to be unwarranted and arbitrary, since it is not clear why very certain theorems justified by strong arguments should be abandoned just because they are not absolutely certain. This can be seen in relation to the insights discovered by the empirical sciences, which have proven very useful even though they are not indubitable.

Geometrical method

The geometrical method came to particular prominence through rationalists like Baruch Spinoza. It starts from a small set of self-evident axioms together with relevant definitions and tries to deduce a great variety of theorems from this basis, thereby mirroring the methods found in geometry. Historically, it can be understood as a response to methodological skepticism: it consists in trying to find a foundation of certain knowledge and then expanding this foundation through deductive inferences. The theorems arrived at in this way may be challenged in two ways. On the one hand, they may be derived from axioms that are not as self-evident as their defenders proclaim and thereby fail to inherit the status of absolute certainty. For example, many philosophers have rejected the claim of self-evidence concerning one of René Descartes' first principles stating that "he can know that whatever he perceives clearly and distinctly is true only if he first knows that God exists and is not a deceiver". Another example is the causal axiom of Spinoza's system that "the knowledge of an effect depends on and involves knowledge of its cause", which has been criticized in various ways. In this sense, philosophical systems built using the geometrical method are open to criticisms that reject their basic axioms. A different form of objection holds that the inference from the axioms to the theorems may be faulty, for example, because it does not follow a rule of inference or because it includes implicitly assumed premises that are not themselves self-evident.

Phenomenological method

Phenomenology is the science of appearances. The phenomenological method aims to study the appearances themselves and the relations found between them. This is achieved through the so-called phenomenological reduction, also known as epoché or bracketing: the researcher suspends their judgments about the natural external world in order to focus exclusively on the experience of how things appear to be, independent of whether these appearances are true or false. One idea behind this approach is that our presuppositions of what things are like can get in the way of studying how they appear to be and thereby mislead the researcher into thinking they know the answer instead of looking for themselves. The phenomenological method can also be seen as a reaction to methodological skepticism since its defenders traditionally claimed that it could lead to absolute certainty and thereby help philosophy achieve the status of a rigorous science. But phenomenology has been heavily criticized because of this overly optimistic outlook concerning the certainty of its insights. A different objection to the method of phenomenological reduction holds that it involves an artificial stance that places too much emphasis on the theoretical attitude at the expense of feeling and practical concerns.

Another phenomenological method is called "eidetic variation". It is used to study the essences of things. This is done by imagining an object of the kind under investigation. The features of this object are then varied in order to see whether the resulting object still belongs to the investigated kind. If the object can survive the change of a certain feature then this feature is inessential to this kind. Otherwise, it belongs to the kind's essence. For example, when imagining a triangle, one can vary its features, like the length of its sides or its color. These features are inessential since the changed object is still a triangle, but it ceases to be a triangle if a fourth side is added.

Verificationism

The method of verificationism consists in understanding sentences by analyzing their characteristic conditions of verification, i.e. by determining which empirical observations would prove them to be true. A central motivation behind this method has been to distinguish meaningful from meaningless sentences. This is sometimes expressed through the claim that "[the] meaning of a statement is the method of its verification". Meaningful sentences, like the ones found in the natural sciences, have clear conditions of empirical verification. But since most metaphysical sentences cannot be verified by empirical observations, they are deemed to be nonsensical by verificationists. Verificationism has been criticized on various grounds. On the one hand, it has proved very difficult to give a precise formulation that includes all scientific claims, including the ones about unobservables. This is connected to the problem of underdetermination in the philosophy of science: the problem that the observational evidence is often insufficient to determine which theory is true. This would lead to the implausible conclusion that even for the empirical sciences, many of their claims would be meaningless. But on a deeper level, the basic claim underlying verificationism seems itself to be meaningless by its own standards: it is not clear what empirical observations could verify the claim that the meaning of a sentence is the method of its verification. In this sense, verificationism would be contradictory by directly refuting itself. These and other problems have led some theorists, especially from the sciences, to adopt falsificationism instead. It is a less radical approach that holds that serious theories or hypotheses should at least be falsifiable, i.e. there should be some empirical observations that could prove them wrong.

Conceptual analysis

The goal of conceptual analysis is to decompose or analyze a given concept into its fundamental constituents. It consists in considering a philosophically interesting concept, like knowledge, and determining the necessary and sufficient conditions under which this concept applies. The resulting claim about the relation between the concept and its constituents is normally seen as knowable a priori since it is true only in virtue of the involved concepts and thereby constitutes an analytic truth. Usually, philosophers use their own intuitions to determine whether a concept is applicable to a specific situation to test their analyses. But other approaches rely not on the intuitions of philosophers but on those of regular people, an approach often defended by experimental philosophers.

G. E. Moore proposed that the correctness of a conceptual analysis can be tested using the open question method. According to this view, asking whether the decomposition fits the concept should result in a closed or pointless question. If it results in an open or intelligible question, then the analysis does not exactly correspond to what we have in mind when we use the term. This can be used, for example, to reject the utilitarian claim that "goodness" is "whatever maximizes happiness". The underlying argument is that the question "Is what is good what maximizes happiness?" is an open question, unlike the question "Is what is good what is good?", which is a closed question. One problem with this approach is that it results in a very strict conception of what constitutes a correct conceptual analysis, leading to the conclusion that many concepts, like "goodness", are simple or indefinable.

Willard Van Orman Quine criticized conceptual analysis as part of his criticism of the analytic-synthetic distinction. This objection is based on the idea that all claims, including how concepts are to be decomposed, are ultimately based on empirical evidence. Another problem with conceptual analysis is that it is often very difficult to find an analysis of a concept that really covers all its cases. For this reason, Rudolf Carnap has suggested a modified version that aims to cover only the most paradigmatic cases while excluding problematic or controversial cases. While this approach has become more popular in recent years, it has also been criticized based on the argument that it tends to change the subject rather than resolve the original problem. In this sense, it is closely related to the method of conceptual engineering, which consists in redefining concepts in fruitful ways or developing new interesting concepts. This method has been applied, for example, to the concepts of gender and race.

Common sense

The method of common sense is based on the fact that we already have a great variety of beliefs that seem very certain to us, even if we do not believe them based on explicit arguments. Common sense philosophers use these beliefs as their starting point for philosophizing. This often takes the form of criticism directed against theories whose premises or conclusions are very far removed from how the average person thinks about the issue in question. G. E. Moore, for example, rejects J. M. E. McTaggart's sophisticated argumentation for the unreality of time based on his common-sense impression that time exists. He holds that his simple common-sense impression is much more certain than the soundness of McTaggart's arguments, even though Moore was unable to pinpoint where McTaggart's arguments went wrong. According to his method, common sense constitutes an evidence base. This base may be used to eliminate philosophical theories that stray too far away from it, that are abstruse from its perspective. This can happen because either the theory itself or consequences that can be drawn from it violate common sense. For common sense philosophers, it is not the task of philosophy to question common sense. Instead, philosophers should analyze it and formulate theories in accordance with it.

One important argument against this method is that common sense has often been wrong in the past, as is exemplified by various scientific discoveries. This suggests that common sense is in such cases just an antiquated theory that is eventually eliminated by the progress of science. For example, Albert Einstein's theory of relativity constitutes a radical departure from the common-sense conception of space and time, and quantum physics poses equally serious problems to how we tend to think about how elementary particles behave. This calls into question whether common sense is a reliable source of knowledge. Another problem is that for many issues, there is no one universally accepted common-sense opinion. In such cases, common sense only amounts to the majority opinion, which should not be blindly accepted by researchers. This problem can be approached by articulating a weaker version of the common-sense method. One such version is defended by Roderick Chisholm, who allows that theories violating common sense may still be true. He contends that, in such cases, the theory in question is prima facie suspect and the burden of proof is always on its side. But such a shift in the burden of proof does not constitute a blind belief in common sense since it leaves open the possibility that, for various issues, there is decisive evidence against the common-sense opinion.

Ordinary language philosophy

The method of ordinary language philosophy consists in tackling philosophical questions based on how the related terms are used in ordinary language. In this sense, it is related to the method of common sense but focuses more on linguistic aspects. Some types of ordinary language philosophy only take a negative form in that they try to show how philosophical problems are not real problems at all. They aim instead to show that false assumptions, to which humans are susceptible due to the confusing structure of natural language, are responsible for this false impression. Other types take more positive approaches by defending and justifying philosophical claims, for example, based on what sounds insightful or odd to the average English speaker.

One problem for ordinary language philosophy is that regular speakers may have many different reasons for using a certain expression. Sometimes they intend to express what they believe, but other times they may be motivated by politeness or other conversational norms independent of the truth conditions of the expressed sentences. This significantly complicates ordinary language philosophy, since philosophers have to take the specific context of the expression into account, which may considerably alter its meaning. This criticism is partially mitigated by J. L. Austin's approach to ordinary language philosophy. According to him, ordinary language already has encoded many important distinctions and is our point of departure in theorizing. But "ordinary language is not the last word: in principle, it can everywhere be supplemented and improved upon and superseded". However, it also falls prey to another criticism: that it is often not clear how to distinguish ordinary from non-ordinary language. This makes it difficult in all but the paradigmatic cases to decide whether a philosophical claim is or is not supported by ordinary language.

Intuition and thought experiments

Methods based on intuition, like ethical intuitionism, use intuitions to evaluate whether a philosophical claim is true or false. In this context, intuitions are seen as a non-inferential source of knowledge: they consist in the impression of correctness one has when considering a certain claim. They are intellectual seemings that make it appear to the thinker that the considered proposition is true or false without the need to consider arguments for or against the proposition. This is sometimes expressed by saying that the proposition in question is self-evident. Examples of such propositions include "torturing a sentient being for fun is wrong" or "it is irrational to believe both something and its opposite". But not all defenders of intuitionism restrict intuitions to self-evident propositions. Instead, often weaker non-inferential impressions are also included as intuitions, such as a mother's intuition that her child is innocent of a certain crime.

Intuitions can be used in various ways as a philosophical method. On the one hand, philosophers may consult their intuitions in relation to very general principles, which may then be used to deduce further theorems. Another technique, which is often applied in ethics, consists in considering concrete scenarios instead of general principles. This often takes the form of thought experiments, in which certain situations are imagined with the goal of determining the possible consequences of the imagined scenario. These consequences are assessed using intuition and counterfactual thinking. For this reason, thought experiments are sometimes referred to as intuition pumps: they activate the intuitions concerning the specific situation, which may then be generalized to arrive at universal principles. In some cases, the imagined scenario is physically possible but it would not be feasible to perform an actual experiment due to the costs, negative consequences, or technological limitations. But other thought experiments even work with scenarios that defy what is physically possible. It is controversial to what extent thought experiments deserve to be characterized as real experiments and whether the insights they provide are reliable.

One problem with intuitions in general and thought experiments in particular consists in assessing their epistemological status, i.e. whether, how much, and in which circumstances they provide justification in comparison to other sources of knowledge. Some defenders of intuition claim that it is a reliable source of knowledge just like perception, with the difference being that it happens without the sensory organs. Others compare it not to perception but to the cognitive ability to evaluate counterfactual conditionals, which may be understood as the capacity to answer what-if questions. But the reliability of intuitions has been contested by opponents. For example, wishful thinking may be the reason why it intuitively seems to a person that a proposition is true without providing any epistemological support for this proposition. Another objection, often raised in the empirical and naturalist tradition, is that intuitions do not constitute a reliable source of knowledge since the practitioner restricts themselves to an inquiry from their armchair instead of looking at the world to make empirical observations.

Reflective equilibrium

Reflective equilibrium is a state in which a thinker has the impression that they have considered all the relevant evidence for and against a theory and have made up their mind on this issue. It is a state of coherent balance among one's beliefs. This does not imply that all the evidence has really been considered, but it is tied to the impression that engaging in further inquiry is unlikely to make one change one's mind, i.e. that one has reached a stable equilibrium. In this sense, it is the endpoint of the deliberative process on the issue in question. The philosophical method of reflective equilibrium aims at reaching this type of state by mentally going back and forth between all relevant beliefs and intuitions. In this process, the thinker may have to let go of some beliefs or deemphasize certain intuitions that do not fit into the overall picture in order to progress.

In this wide sense, reflective equilibrium is connected to a form of coherentism about epistemological justification and is thereby opposed to foundationalist attempts at finding a small set of fixed and unrevisable beliefs from which to build one's philosophical theory. One problem with this wide conception of the reflective equilibrium is that it seems trivial: it is a truism that the rational thing to do is to consider all the evidence before making up one's mind and to strive towards building a coherent perspective. But as a method to guide philosophizing, this is usually too vague to provide specific guidance.

When understood in a more narrow sense, the method aims at finding an equilibrium between particular intuitions and general principles.  On this view, the thinker starts with intuitions about particular cases and formulates general principles that roughly reflect these intuitions. The next step is to deal with the conflicts between the two by adjusting both the intuitions and the principles to reconcile them until an equilibrium is reached. One problem with this narrow interpretation is that it depends very much on the intuitions one started with. This means that different philosophers may start with very different intuitions and may therefore be unable to find a shared equilibrium. For example, the narrow method of reflective equilibrium may lead some moral philosophers towards utilitarianism and others towards Kantianism.

Pragmatic method

The pragmatic method assesses the truth or falsity of theories by looking at the consequences of accepting them. In this sense, "[t]he test of truth is utility: it's true if it works". Pragmatists approach intractable philosophical disputes in a down-to-earth fashion by asking about the concrete consequences associated, for example, with whether an abstract metaphysical theory is true or false. This is also intended to clarify the underlying issues by spelling out what would follow from them. Another goal of this approach is to expose pseudo-problems, which involve a merely verbal disagreement without any genuine difference on the level of the consequences between the competing standpoints.

Succinct summaries of the pragmatic method base it on the pragmatic maxim, of which various versions exist. An important version is due to Charles Sanders Peirce: "Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object." Another formulation is due to William James: "To develop perfect clearness in our thoughts of an object, then, we need only consider what effects of a conceivable practical kind the object may involve – what sensations we are to expect from it and what reactions we must prepare". Various criticisms of the pragmatic method have been raised. For example, it is commonly rejected that the terms "true" and "useful" mean the same thing. A closely related problem is that believing in a certain theory may be useful to one person and useless to another, which would mean the same theory is both true and false.

Transcendental method

The transcendental method is used to study phenomena by reflecting on the conditions of possibility of these phenomena. This method usually starts out with an obvious fact, often about our mental life, such as what we know or experience. It then goes on to argue that for this fact to obtain, other facts also have to obtain: they are its conditions of possibility. This type of argument is called "transcendental argument": it argues that these additional assumptions also have to be true because otherwise, the initial fact would not be the case. For example, it has been used to argue for the existence of an external world based on the premise that the experience of the temporal order of our mental states would not be possible otherwise. Another example argues in favor of a description of nature in terms of concepts such as motion, force, and causal interaction based on the claim that an objective account of nature would not be possible otherwise.

Transcendental arguments have faced various challenges. On the one hand, the claim that the belief in a certain assumption is necessary for the experience of a certain entity is often not obvious. So in the example above, critics can argue against the transcendental argument by denying the claim that an external world is necessary for the experience of the temporal order of our mental states. But even if this point is granted, it does not guarantee that the assumption itself is true. So even if the belief in a given proposition is a psychological necessity for a certain experience, it does not automatically follow that this belief itself is true. Instead, it could be the case that humans are just wired in such a way that they have to believe in certain false assumptions.

Experimental philosophy

Experimental philosophy is the most recent of the methods discussed in this article: it began only in the early years of the 21st century. Experimental philosophers try to answer philosophical questions by gathering empirical data. It is an interdisciplinary approach that applies the methods of psychology and the cognitive sciences to topics studied by philosophy. This usually takes the form of surveys probing the intuitions of ordinary people and then drawing conclusions from the findings. For example, one such inquiry came to the conclusion that justified true belief may be sufficient for knowledge despite various Gettier cases claiming to show otherwise. The method of experimental philosophy can be used in both a negative and a positive program. As a negative program, it aims to challenge traditional philosophical movements and positions. This can be done, for example, by showing how the intuitions used to defend certain claims vary a lot depending on factors such as culture, gender, or ethnicity. This variation casts doubt on the reliability of the intuitions and thereby also on theories supported by them. As a positive program, it uses empirical data to support its own philosophical claims. It differs from other philosophical methods in that it usually studies the intuitions of ordinary people, rather than those of experts, and uses them as philosophical evidence.

One problem for both the positive and the negative approaches is that the data obtained from surveys do not constitute hard empirical evidence since they do not directly express the intuitions of the participants. The participants may react to subtle pragmatic cues in giving their answers, which brings with it the need for further interpretation in order to get from the given answers to the intuitions responsible for these answers. Another problem concerns the question of how reliable the intuitions of ordinary people are on the often very technical issues involved. The core of this objection is that, for many topics, the opinions of ordinary people are not very reliable since they have little familiarity with the issues themselves and the underlying problems they may pose. For this reason, it has been argued that they cannot replace the expert intuitions found in trained philosophers. Some critics have even argued that experimental philosophy does not really form part of philosophy. This objection does not reject that the method of experimental philosophy has value; it just rejects that this method belongs to philosophical methodology.

Others

Various other philosophical methods have been proposed. The Socratic method or Socratic debate is a form of cooperative philosophizing in which one philosopher usually first states a claim, which is then scrutinized by their interlocutor by asking them questions about various related claims, often with the implicit goal of putting the initial claim into doubt. It continues to be a popular method for teaching philosophy. Plato and Aristotle emphasize the role of wonder in the practice of philosophy. On this view, "philosophy begins in wonder" and "[i]t was their wonder, astonishment, that first led men to philosophize and still leads them". This position is also adopted in the more recent philosophy of Nicolai Hartmann. Various other types of methods were discussed in ancient Greek philosophy, like analysis, synthesis, dialectics, demonstration, definition, and reduction to absurdity. The medieval philosopher Thomas Aquinas identifies composition and division as ways of forming propositions while he sees invention and judgment as forms of reasoning from the known to the unknown.

Various methods for the selection between competing theories have been proposed. They often focus on the theoretical virtues of the involved theories. One such method is based on the idea that, everything else being equal, the simpler theory is to be preferred. Another gives preference to the theory that provides the best explanation. According to the method of epistemic conservatism, we should, all other things being equal, prefer the theory which, among its competitors, is the most conservative, i.e. the one closest to the beliefs we currently hold. One problem with these methods of theory selection is that it is usually not clear how the different virtues are to be weighted, often resulting in cases where they are unable to resolve disputes between competing theories that excel at different virtues.

Methodological naturalism holds that all philosophical claims are synthetic claims that ultimately depend for their justification or rejection on empirical observational evidence. In this sense, philosophy is continuous with the natural sciences in that they both give priority to the scientific method for investigating all areas of reality.

According to truthmaker theorists, every true proposition is true because another entity, its truthmaker, exists. This principle can be used as a methodology to critically evaluate philosophical theories. In particular, this concerns theories that accept certain truths but are unable to provide their truthmaker. Such theorists are derided as ontological cheaters. For example, this can be applied to philosophical presentism, the view that nothing outside the present exists. Philosophical presentists usually accept the very common belief that dinosaurs existed but have trouble in providing a truthmaker for this belief since they deny existence to past entities.

In philosophy, the term "genealogical method" refers to a form of criticism that tries to undermine commonly held beliefs by uncovering their historical origin and function. For example, it may be used to reject specific moral claims or the status of truth by giving a concrete historical reconstruction of how their development was contingent on power relations in society. This is usually accompanied by the assertion that these beliefs were accepted and became established because of non-rational considerations, for example because they served the interests of a predominant class.

Disagreements and influence

The disagreements within philosophy do not only concern which first-order philosophical claims are true; they also concern the second-order issue of which philosophical methods to use. One way to evaluate philosophical methods is to assess how well they do at solving philosophical problems. The question of the nature of philosophy has important implications for which methods of inquiry are appropriate to philosophizing. Seeing philosophy as an empirical science brings its methods much closer to the methods found in the natural sciences. Seeing it as the attempt to clarify concepts and increase understanding, on the other hand, usually leads to a methodology much more focused on a priori reasoning. In this sense, philosophical methodology is closely tied up with the question of how philosophy is to be defined. Different conceptions of philosophy often associate it with different goals, leading to certain methods being more or less suited to reach the corresponding goal.

The interest in philosophical methodology has risen a lot in contemporary philosophy. But some philosophers reject its importance by emphasizing that "preoccupation with questions about methods tends to distract us from prosecuting the methods themselves". However, such objections are often dismissed by pointing out that philosophy is at its core a reflective and critical enterprise, which is perhaps best exemplified by its preoccupation with its own methods. This is also backed up by the arguments to the effect that one's philosophical method has important implications for how one does philosophy and which philosophical claims one accepts or rejects. Since philosophy also studies the methodology of other disciplines, such as the methods of science, it has been argued that the study of its own methodology is an essential part of philosophy.

In several instances in the history of philosophy, the discovery of a new philosophical method, such as Cartesian doubt or the phenomenological method, has had important implications both on how philosophers conducted their theorizing and what claims they set out to defend. In some cases, such discoveries led the involved philosophers to overly optimistic outlooks, seeing them as historic breakthroughs that would dissolve all previous disagreements in philosophy.

Relation to other fields

Science

The methods of philosophy differ in various respects from the methods found in the natural sciences. One important difference is that philosophy does not use experimental data obtained through measuring equipment like telescopes or cloud chambers to justify its claims. For example, even philosophical naturalists emphasizing the close relation between philosophy and the sciences mostly practice a form of armchair theorizing instead of gathering empirical data. Experimental philosophers are an important exception: they use methods found in social psychology and other empirical sciences to test their claims.

One reason for the methodological difference between philosophy and science is that philosophical claims are usually more speculative and cannot be verified or falsified by looking through a telescope. This problem is not solved by citing works published by other philosophers, since it only defers the question of how their insights are justified. An additional complication concerning testimony is that different philosophers often defend mutually incompatible claims, which poses the challenge of how to select between them. Another difference between scientific and philosophical methodology is that there is wide agreement among scientists concerning their methods, testing procedures, and results. This is often linked to the fact that science has seen much more progress than philosophy.

Epistemology

An important goal of philosophical methods is to assist philosophers in attaining knowledge. This is often understood in terms of evidence. In this sense, philosophical methodology is concerned with the questions of what constitutes philosophical evidence, how much support it offers, and how to acquire it. In contrast to the empirical sciences, it is often claimed that empirical evidence is not used in justifying philosophical theories, that philosophy is less about the empirical world and more about how we think about the empirical world. In this sense, philosophy is often identified with conceptual analysis, which is concerned with explaining concepts and showing their interrelations. Philosophical naturalists often reject this line of thought and hold that empirical evidence can confirm or disconfirm philosophical theories, at least indirectly.

Philosophical evidence, which may be obtained, for example, through intuitions or thought experiments, is central for justifying basic principles and axioms. These principles can then be used as premises to support further conclusions. Some approaches to philosophical methodology emphasize that these arguments have to be deductively valid, i.e. that the truth of their premises ensures the truth of their conclusion. In other cases, philosophers may commit themselves to working hypotheses or norms of investigation even though they lack sufficient evidence. Such assumptions can be quite fruitful in simplifying the possibilities the philosopher needs to consider and by guiding them to ask interesting questions. But the lack of evidence makes this type of enterprise vulnerable to criticism.

Thursday, May 19, 2022

Body armor

From Wikipedia, the free encyclopedia

United States Marines in July 2010 assist a Sri Lanka Navy sailor in trying on a Modular Tactical Vest.
 
Japanese warrior in armor

Body armor, also known as body armour, personal armor or armour, or a suit or coat of armor, is protective clothing designed to absorb or deflect physical attacks. Historically used to protect military personnel, today it is also used by various types of police (riot police in particular), private security guards or bodyguards, and occasionally ordinary civilians. Today there are two main types: regular non-plated body armor for moderate to substantial protection, and hard-plate reinforced body armor for maximum protection, such as used by combat soldiers.

History

Greek Mycenaean armor, c. 1400 BC
 
Bronze lamellae, Vietnam, 300 BC – 100 BC

Many factors have affected the development of personal armor throughout human history. Significant factors in the development of armor include the economic and technological necessities of armor production. For instance, full plate armor first appeared in Medieval Europe when water-powered trip hammers made the formation of plates faster and cheaper. At times the development of armor has run parallel to the development of increasingly effective weaponry on the battlefield, with armorers seeking to create better protection without sacrificing mobility.

Ancient

The first record of body armor in history was found on the Stele of Vultures in ancient Sumer, in what is today southern Iraq. The oldest known Western armor is the Dendra panoply, dating from the Mycenaean Era around 1400 BC. Mail, also referred to as chainmail, is made of interlocking iron rings, which may be riveted or welded shut. It is believed to have been invented by Celtic people in Europe about 500 BC. Most cultures that used mail used the Celtic word byrnne or a variant, suggesting the Celts as the originators. The Romans widely adopted mail as the lorica hamata, although they also made use of lorica segmentata and lorica squamata. While no non-metallic armor is known to have survived, it was likely to have been commonplace due to its lower cost.

Eastern armor has a long history, beginning in Ancient China. In East Asian history laminated armor such as lamellar, and styles similar to the coat of plates and brigandine, were commonly used. Later cuirasses and plates were also used. In pre-Qin dynasty times, leather armor was made out of rhinoceros hide. The use of iron plate armor on the Korean peninsula was developed during the Gaya Confederacy of 42 CE – 562 CE. The iron was mined and refined in the area surrounding Gimhae (Gyeongsangnam Province, South Korea). Using both vertical and triangular plate designs, the plate armor sets consisted of 27 or more individual 1–2 mm thick curved plates, which were secured together by nail or hinge. The recovered sets include accessories such as iron arm guards, neck guards, leg guards, and horse armor/bits. The use of these armor types disappeared from use on the Korean Peninsula after the fall of the Gaya Confederacy to the Silla Dynasty, during the Three Kingdoms of Korea era, in 562 CE.

Middle Ages

In European history, well-known armor types include the mail hauberk of the early medieval age, and the full steel plate harness worn by later Medieval and Renaissance knights, and a few key components (breast and back plates) by heavy cavalry in several European countries until the first year of World War I (1914–15).

The Japanese armor known today as samurai armor appeared in the Heian period (794–1185). These early samurai armors are called the ō-yoroi and dō-maru.

Plate

Gradually, small additional plates or discs of iron were added to the mail to protect vulnerable areas. By the late 13th century, the knees were capped, and two circular discs, called besagews, were fitted to protect the underarms. A variety of methods for improving the protection provided by mail were used as armorers seemingly experimented. Hardened leather and splinted construction were used for arm and leg pieces. The coat of plates was developed, an armor made of large plates sewn inside a textile or leather coat.

Early plate armor in Italy and elsewhere in the 13th to 15th centuries was made of iron. Iron armor could be carburized or case hardened to give a surface of harder steel. Plate armor became cheaper than mail by the 15th century as it required much less labor, and labor had become much more expensive after the Black Death, though it did require larger furnaces to produce larger blooms. Mail continued to be used to protect those joints which could not be adequately protected by plate, such as the armpit, crook of the elbow and groin. Another advantage of plate was that a lance rest could be fitted to the breast plate.

Signature Maratha helmet with curved back, side view

The small skull cap evolved into a bigger true helmet, the bascinet, as it was lengthened downward to protect the back of the neck and the sides of the head. Additionally, several new forms of fully enclosed helmets were introduced in the late 14th century to replace the great helm, such as the sallet and barbute and later the armet and close helm.

Probably the most recognized style of armor in the world is the plate armor associated with the knights of the European Late Middle Ages, which continued in use into the early 17th-century Age of Enlightenment in all European countries.

By about 1400 the full harness of plate armor had been developed in the armories of Lombardy. Heavy cavalry dominated the battlefield for centuries, in part because of their armor.

In the early 15th century, small "hand cannon" first began to be used, in the Hussite Wars, in combination with Wagenburg tactics, allowing infantry to defeat armored knights on the battlefield. At the same time crossbows were made more powerful to pierce armor, and the development of the Swiss pike square formation also created substantial problems for heavy cavalry. Rather than dooming the use of body armor, the threat of small firearms intensified the use and further refinement of plate armor. There was a 150-year period in which better and more metallurgically advanced steel armor was being used, precisely because of the danger posed by the gun. Hence, guns and cavalry in plate armor were "threat and remedy" together on the battlefield for almost 400 years. By the 15th century Italian armor plates were almost always made of steel. In Southern Germany, armorers began to harden their steel armor only in the late 15th century. They would continue to harden their steel for the next century, quenching and tempering their product, which allowed fire-gilding to be combined with tempering.

The quality of the metal used in armor deteriorated as armies became bigger and armor was made thicker, necessitating the breeding of larger cavalry horses. Whereas during the 14th and 15th centuries armor seldom weighed more than 15 kg, by the late 16th century it weighed 25 kg. The increasing weight and thickness of late 16th-century armor therefore gave substantial resistance.

In the early years of pistols and arquebuses, firearms were relatively low in velocity. Full suits of armor, or breast plates, actually stopped bullets fired from a modest distance. The front breast plates were, in fact, commonly shot as a test. The impact point would often be encircled with engraving to point it out. This was called the "proof". Armor often also bore an insignia of the maker, especially if it was of good quality. Crossbow bolts, if still used, would seldom penetrate good plate, nor would any bullet unless fired from close range.

Renaissance/Early Modern suits of armor appropriate for heavy cavalry

In effect, rather than making plate armor obsolete, the use of firearms stimulated the development of plate armor into its later stages. For most of that period, it allowed horsemen to fight while being the targets of defending arquebusiers without being easily killed. Full suits of armor were actually worn by generals and princely commanders right up to the 1710s.

Horse armor

The horse was afforded protection from lances and infantry weapons by steel plate barding. This gave the horse protection and enhanced the visual impression of a mounted knight. Late in the era, elaborate barding was used in parade armor.

Gunpowder era

French cuirassier of the 19th century (Drawing by Édouard Detaille, 1885)

As gunpowder weapons improved, it became cheaper and more effective to have groups of unarmored men with early guns than to have expensive knights, which caused armor to be largely discarded. Cavalry units continued to use armor. Examples include the German Reiter, Polish heavy hussars and the back and breast worn by heavy cavalry units during the Napoleonic wars.

Late modern use

Metal armor remained in limited use long after its general obsolescence. Soldiers in the American Civil War (1861–1865) bought iron and steel vests from peddlers (both sides had considered but rejected them for standard issue). The effectiveness of the vests varied widely—some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case, the vests were abandoned by many soldiers due to their weight on long marches as well as the stigma of cowardice they attracted from their fellow troops.

World War I personal armor, including steel cap, steel plate vest, steel gauntlet/dagger and French splinter goggles

At the start of World War I in 1914, thousands of French cuirassiers rode out to engage the German cavalry, who likewise used helmets and armor. By that period, the shiny armor plate was covered in dark paint and a canvas wrap covered their elaborate Napoleonic-style helmets. Their armor was meant to protect only against sabers and lances. The cavalry had to beware of rifles and machine guns, like the infantry soldiers, who at least had a trench to give them some protection.

By the end of the war the Germans had made some 400,000 Sappenpanzer suits. Too heavy and restrictive for infantry, most were worn by spotters, sentries, machine gunners and other troops who stayed in one place.

Modern non-metallic armor

Soldiers use metal or ceramic plates in their bullet resistant vests, providing additional protection from pistol and rifle bullets. Metallic components or tightly woven fiber layers can give soft armor resistance to stab and slash attacks from knives and bayonets. Chain mail armored gloves continue to be used by butchers and abattoir workers to prevent cuts and wounds while cutting up carcasses.

Ceramic

Boron carbide is used in hard plate armor capable of defeating rifle and armor-piercing ammunition. It was used in armor plates like the SAPI series, and today in most civilian-accessible body armor.

Other materials include boron suboxide, alumina, and silicon carbide, which are used for varying reasons, from protection against tungsten carbide penetrators to improved weight-to-area ratios. Ceramic body armor is made up of a hard and rigid ceramic strike face bonded to a ductile fiber composite backing layer. The projectile is shattered, turned, or eroded as it impacts the ceramic strike face, and much of its kinetic energy is consumed as it interacts with the ceramic layer; the fiber composite backing layer absorbs residual kinetic energy and catches bullet and ceramic debris. This allows such armor to defeat armor-piercing 5.56×45mm, 7.62×51mm, and 7.62×39mm bullets, among others, with little or no felt blunt trauma. High-end ceramic armor plates typically utilize ultra-high-molecular-weight polyethylene fiber composite backing layers, whereas budget plates will utilize aramid or fiberglass.

Fibers

DuPont Kevlar is well known as a component of some bullet resistant vests and bullet resistant face masks. The PASGT helmet and vest used by United States military forces since the early 1980s both have Kevlar as a key component, as do their replacements. Civilian applications include Kevlar reinforced clothing for motorcycle riders to protect against abrasion injuries. Kevlar in non-woven long strand form is used inside an outer protective cover to form chaps that loggers use while operating a chainsaw. If the moving chain contacts and tears through the outer cover, the long fibers of Kevlar tangle, clog, and stop the chain from moving as they get drawn into the workings of the drive mechanism of the saw. Kevlar is also used in protective gear for emergency services where high heat is involved, e.g., when tackling a fire, and in Kevlar body armor such as vests for police officers, security personnel, and SWAT teams. The latest Kevlar material that DuPont has developed is Kevlar XP. In comparison with "normal" Kevlar, Kevlar XP is more lightweight and more comfortable to wear, as quilt stitching is not required for the ballistic package.

Twaron is similar to Kevlar. They both belong to the aramid family of synthetic fibers. The main difference is that Twaron was first developed by Akzo in the 1970s. Twaron was first commercially produced in 1986. Now, Twaron is manufactured by Teijin Aramid. Like Kevlar, Twaron is a strong synthetic fiber. It is also heat resistant and has many applications. It is used in the production of materials for the military, construction, automotive, aerospace, and even sports market sectors. Among the examples of Twaron-made materials are body armor, helmets, ballistic vests, speaker woofers, drumheads, tires, turbo hoses, wire ropes, and cables.

Another fiber used to manufacture a bullet-resistant vest is Dyneema ultra-high-molecular-weight polyethylene. Originating in the Netherlands, Dyneema has an extremely high strength-to-weight ratio (a 1-mm-diameter rope of Dyneema can bear up to a 240-kg load), is light enough (low density) that it can float on water, and has high energy absorption characteristics. Since the introduction of the Dyneema Force Multiplier Technology in 2013, many body armor manufacturers have switched to Dyneema for their high-end armor solutions.

Protected areas

Shield

An American police officer in October 2002 wears a helmet while equipped with a riot shield.

A shield is held in the hand or arm. Its purpose is to intercept attacks, either by stopping projectiles such as arrows or by glancing a blow to the side of the shield-user, and it can also be used offensively as a bludgeoning weapon. Shields vary greatly in size, ranging from large shields that protect the user's entire body to small shields that are mostly for use in hand-to-hand combat. Shields also vary a great deal in thickness; whereas some shields were made of thick wooden planking, to protect soldiers from spears and crossbow bolts, other shields were thinner and designed mainly for glancing blows away (such as a sword blow). In prehistory, shields were made of wood, animal hide, or wicker. In antiquity and in the Middle Ages, shields were used by foot soldiers and mounted soldiers. Even after the invention of gunpowder and firearms, shields continued to be used. In the 18th century, Scottish clans continued to use small shields, and in the 19th century, some non-industrialized peoples continued to use shields. In the 20th and 21st centuries, ballistic shields are used by military and police units that specialize in anti-terrorist action, hostage rescue, and siege-breaching.

Head

A combat helmet is among the oldest forms of personal protective equipment and is known to have been worn in ancient India around 1700 BC and by the Assyrians around 900 BC, followed by the ancient Greeks and Romans, throughout the Middle Ages, and up to the modern era. Helmet materials and construction became more advanced as weapons became more and more powerful. Initially constructed from leather and brass, and then bronze and iron during the Bronze and Iron Ages, helmets soon came to be made entirely from forged steel in many societies after about AD 950. At that time, they were purely military equipment, protecting the head from cutting blows with swords, flying arrows, and low-velocity musketry. Some late medieval helmets, like the great bascinet, rested on the shoulders and prevented the wearer from turning his head, greatly restricting mobility. During the 18th and 19th centuries, helmets were not widely used in warfare; instead, many armies used unarmored hats that offered no protection against blade or bullet. The arrival of World War I, with its trench warfare and wide use of artillery, led to mass adoption of metal helmets once again, this time with a shape that offered mobility, a low profile, and compatibility with gas masks. Today's militaries often use high-quality helmets made of ballistic materials such as Kevlar and Twaron, which have excellent bullet and fragmentation stopping power. Some helmets also have good non-ballistic protective qualities, though many do not. The two most popular ballistic helmet models are the PASGT and the MICH. The Modular Integrated Communications Helmet (MICH) has slightly smaller coverage at the sides, which allows the use of tactical headsets and other communication equipment; it has a standard pad suspension and a four-point chinstrap. The Personal Armor System for Ground Troops (PASGT) helmet has been in use since 1983 and has slowly been replaced by the MICH helmet.

A ballistic face mask is designed to protect the wearer from ballistic threats. Ballistic face masks are usually made of Kevlar or other bullet-resistant materials, and the inside of the mask may be padded for shock absorption, depending on the design. Due to weight restrictions, protection levels range only up to NIJ Level IIIA.

Torso

United States Navy sailors in 2007 wearing Lightweight Helmets and Modular Tactical Vests equipped with neck and groin armor

A ballistic vest helps absorb the impact from firearm-fired projectiles and shrapnel from explosions, and is worn on the torso. Soft vests are made from many layers of woven or laminated fibers and can be capable of protecting the wearer from small caliber handgun and shotgun projectiles, and small fragments from explosives such as hand grenades.

Metal or ceramic plates can be used with a soft vest, providing additional protection from rifle rounds, and metallic components or tightly woven fiber layers can give soft armor resistance to stab and slash attacks from a knife or bayonet. Soft vests are commonly worn by police forces, private citizens and private security guards or bodyguards, whereas hard-plate reinforced vests are mainly worn by combat soldiers, police tactical units and hostage rescue teams.

A modern equivalent may combine a ballistic vest with other items of protective clothing, such as a combat helmet. Vests intended for police and military use may also include ballistic shoulder and side protection armor components, and explosive ordnance disposal technicians wear heavy armor and helmets with face visors and spine protection.

Limbs

Medieval armor often offered protection for all of the limbs, including metal sabatons for the feet, gauntlets for the hands and wrists, and greaves for the lower legs. Today, protection of limbs from bombs is provided by a bomb suit. Most modern soldiers sacrifice limb protection for mobility, since armor thick enough to stop bullets would greatly inhibit movement of the arms and legs.

Performance standards

Due to the various different types of projectiles, it is often inaccurate to refer to a particular product as "bulletproof" because this suggests that it will protect against any and all projectiles. Instead, the term bullet resistant is generally preferred.

Standards are regional: ammunition varies around the world, and armor testing must reflect the threats found locally. According to statistics from the US National Law Enforcement Officers Memorial Fund, "a law enforcement officer’s job is extremely dangerous, with one officer being killed every 53 hours in the line of duty [in the United States]. Furthermore, this number is on the rise. In 2011, 173 officers were killed, 68 of them due to a gun-related incident."

While many standards exist, a few are widely used as models. The US National Institute of Justice (NIJ) ballistic and stab documents are examples of broadly accepted standards. Since NIJ began testing, the lives of more than 3,000 officers have been saved. In addition to the NIJ, the United Kingdom's Home Office Scientific Development Branch (HOSDB, formerly the Police Scientific Development Branch, PSDB) standards are also used by a number of other countries and organizations. These "model" standards are usually adapted by other countries by following the same basic test methodologies while changing the specific ammunition tested. NIJ Standard-0101.06 sets specific performance requirements for bullet resistant vests used by law enforcement, rating vests on a scale of threat levels (from Level IIA up to Level IV) against both penetration and blunt trauma protection (deformation).

In 2018 or 2019, the NIJ was expected to introduce the new NIJ Standard-0101.07, which would completely replace NIJ Standard-0101.06. The system of using Roman numerals (II, IIIA, III, and IV) to indicate the level of threat would disappear, replaced by a naming convention similar to the standard developed by the UK Home Office Scientific Development Branch: HG (Hand Gun) for soft armor and RF (Rifle) for hard armor. Another important change is that the test-round velocity for conditioned armor would be the same as that for new armor during testing. For example, under NIJ Standard-0101.06 Level IIIA, the .44 Magnum round is shot at 408 m/s for conditioned armor and at 436 m/s for new armor; under NIJ Standard-0101.07, the velocity for both conditioned and new armor would be the same.
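
To get a feel for why harmonizing the conditioned-armor and new-armor test velocities matters, the minimal sketch below compares the impact energies at 408 m/s and 436 m/s. The bullet mass used (roughly 240 grains, about 15.55 g, typical of a .44 Magnum test round) is an illustrative assumption rather than a figure quoted in the standard text above.

```python
# Rough kinetic-energy comparison of the two NIJ Level IIIA test velocities.
# The 15.55 g (~240 grain) bullet mass is an illustrative assumption.

def kinetic_energy_joules(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

BULLET_MASS_KG = 0.01555  # assumed .44 Magnum bullet mass

e_new = kinetic_energy_joules(BULLET_MASS_KG, 436.0)          # new-armor test velocity
e_conditioned = kinetic_energy_joules(BULLET_MASS_KG, 408.0)  # conditioned-armor test velocity

print(f"new armor:         {e_new:6.0f} J")
print(f"conditioned armor: {e_conditioned:6.0f} J")
print(f"energy increase if conditioned armor is also tested at 436 m/s: "
      f"{100 * (e_new / e_conditioned - 1):.0f}%")
```

Under these assumed numbers the conditioned-armor round would carry roughly 14% more energy than it does under the 0101.06 velocities, which is the substance of the change described above.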

In January 2012, the NIJ introduced BA 9000, a set of body armor quality management system requirements serving as a quality standard not unlike ISO 9001 (much of the standard was, in fact, based on ISO 9001).

In addition to the NIJ and HOSDB standards, other important standards include: the German Police's Technische Richtlinie (TR) Ballistische Schutzwesten, Draft ISO prEN ISO 14876, and Underwriters Laboratories (UL Standard 752).

Textile armor is tested both for penetration resistance by bullets and for the impact energy transmitted to the wearer. The "backface signature", or transmitted impact energy, is measured by shooting armor mounted in front of a backing material, typically oil-based modelling clay. The clay is used at a controlled temperature and verified for impact flow before testing. After the armor is impacted with the test bullet, the vest is removed from the clay and the depth of the indentation in the clay is measured.

The backface signature allowed by different test standards can be difficult to compare, because neither the clay materials nor the bullets used for the tests are common across standards. In general the British, German and other European standards allow 20–25 mm of backface signature, while the US-NIJ standards allow 44 mm, a deformation depth that can potentially cause internal injury. The 44 mm allowance has been controversial since its introduction in the first NIJ test standard, and the debate about the relative importance of penetration resistance vs. backface signature continues in the medical and testing communities.
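
As a rough illustration of how a backface-signature measurement is judged against a standard's allowance, the sketch below compares a measured clay indentation with the limits quoted above (about 25 mm for several European standards, 44 mm for US-NIJ). The function and variable names are hypothetical; only the limit values come from the text.

```python
# Hedged sketch: judging a backface-signature (BFS) measurement against a standard's
# allowance. Limits follow the figures quoted above; names are illustrative only.

BFS_LIMITS_MM = {
    "European (typical)": 25.0,
    "US-NIJ": 44.0,
}

def shot_result(depth_mm: float, penetrated: bool, limit_mm: float) -> str:
    """A shot fails if the bullet penetrates or the indentation exceeds the allowance."""
    if penetrated:
        return "fail (penetration)"
    if depth_mm > limit_mm:
        return "fail (excessive backface signature)"
    return "pass"

measured_depth_mm = 32.0  # hypothetical indentation measured in the clay backing
for standard, limit in BFS_LIMITS_MM.items():
    print(f"{standard:>20}: {shot_result(measured_depth_mm, penetrated=False, limit_mm=limit)}")
```

The hypothetical 32 mm indentation would fail the tighter European-style allowance while passing the NIJ one, which is exactly the kind of divergence the paragraph above describes.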

In general, a vest's textile material temporarily degrades when wet. Neutral water at room temperature does not affect para-aramid or UHMWPE fibers, but acidic, basic, and some other solutions can permanently reduce para-aramid fiber tensile strength. (As a result, the major test standards call for wet testing of textile armor.) The mechanisms for this wet loss of performance are not known. Vests that will be tested after ISO-type water immersion tend to have heat-sealed enclosures, and those tested under NIJ-type water spray methods tend to have water-resistant enclosures.

From 2003 to 2005, a large study of the environmental degradation of Zylon armor was undertaken by the US-NIJ. This concluded that water, long-term use, and temperature exposure significantly affect tensile strength and the ballistic performance of PBO or Zylon fiber. This NIJ study on vests returned from the field demonstrated that environmental effects on Zylon resulted in ballistic failures under standard test conditions.

Ballistic testing V50 and V0

Measuring the ballistic performance of armor is based on determining the kinetic energy of a bullet at impact. Because the energy of a bullet is a key factor in its penetrating capacity, velocity is used as the primary independent variable in ballistic testing. For most users the key measurement is the velocity at which no bullets will penetrate the armor. Measuring this zero penetration velocity (V0) must take into account variability in armor performance and test variability. Ballistic testing has a number of sources of variability: the armor, test backing materials, bullet, casing, powder, primer and the gun barrel, to name a few.

Variability reduces the predictive power of a determination of V0. If, for example, the V0 of an armor design is measured to be 1,600 ft/s (490 m/s) with a 9 mm FMJ bullet based on 30 shots, the test is only an estimate of the real V0 of this armor. The problem is variability. If the V0 is tested again with a second group of 30 shots on the same vest design, the result will not be identical.

Only a single low-velocity penetrating shot is required to reduce the V0 value; the more shots made, the lower the V0 will go. In terms of statistics, the zero-penetration velocity is the tail end of the distribution curve. If the variability is known and the standard deviation can be calculated, one can rigorously set the V0 at a given confidence level. Test standards now define how many shots must be used to estimate a V0 for armor certification; this procedure defines a confidence interval for the estimate of V0. (See "NIJ and HOSDB test methods".)
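
One common way to make the confidence-interval idea concrete is to model the probability of penetration as rising smoothly with velocity, for instance like a normal distribution centered near V50. Under that modeling assumption (which is illustrative, not an NIJ or HOSDB prescription), a velocity with at most a chosen penetration probability can be read off from the distribution, as in the sketch below; all numbers are invented.

```python
# Hedged sketch: estimating a low-penetration-probability velocity from a V50 estimate.
# Assumes penetration probability rises with velocity like a normal CDF centered at V50
# with standard deviation sigma -- a modeling assumption, not the certification procedure.
from statistics import NormalDist

v50_ms = 490.0    # illustrative V50 estimate (m/s)
sigma_ms = 12.0   # illustrative shot-to-shot standard deviation (m/s)

z_1pct = NormalDist().inv_cdf(0.01)   # roughly -2.33 standard deviations
v_1pct = v50_ms + sigma_ms * z_1pct   # velocity with ~1% modeled penetration probability
print(f"velocity with ~1% modeled penetration probability: {v_1pct:.0f} m/s")
```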

V0 is difficult to measure, so a second concept has been developed in ballistic testing called V50. This is the velocity at which 50 percent of the shots go through and 50 percent are stopped by the armor. US military standards define a commonly used procedure for this test. The goal is to get three shots that penetrate and a second group of three shots that are stopped by the armor all within a specified velocity range. It is possible, and desirable, to have a penetration velocity lower than a stop velocity. These three stops and three penetrations can then be used to calculate a V50 velocity.
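
A simplified version of that procedure is sketched below: from a series of shots, take the three lowest complete-penetration velocities and the three highest stopped (partial-penetration) velocities and average them. This follows the spirit of the military V50 method described above but omits the velocity-spread requirements a real standard imposes, and the shot data are invented for illustration.

```python
# Hedged sketch of a V50 estimate: average of the three lowest penetrating velocities
# and the three highest stopped velocities. Spread checks from the military standard
# are omitted; the shot data below are invented.

shots = [
    # (velocity in m/s, penetrated?)
    (478.0, False), (485.0, False), (491.0, True), (494.0, False),
    (498.0, True), (503.0, False), (505.0, True), (512.0, True),
]

stopped = sorted(v for v, penetrated in shots if not penetrated)
penetrations = sorted(v for v, penetrated in shots if penetrated)

# Three highest stops and three lowest penetrations straddle the 50% point.
selected = stopped[-3:] + penetrations[:3]
v50_estimate = sum(selected) / len(selected)
print(f"estimated V50: {v50_estimate:.1f} m/s")
```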

In practice this measurement of V50 often requires 1–2 vest panels and 10–20 shots. A very useful concept in armor testing is the offset velocity between V0 and V50. If this offset has been measured for an armor design, then V50 data can be used to estimate changes in V0. Both V0 and V50 are used for vest manufacturing, field evaluation, and life testing. However, because V50 measurements are simpler to make, this method is more important for control of armor after certification.

Cunniff analysis

Using dimensional analysis, Cunniff arrived at a relation connecting the V50 and the system parameters for textile-based body armors. Under the assumption that the energy of impact is dissipated in breaking the yarn, it was shown that

$$\frac{V_{50}}{\left[\dfrac{\sigma\varepsilon}{2\rho}\sqrt{\dfrac{E}{\rho}}\right]^{1/3}} = f\!\left(\frac{A_d}{A_p}\right)$$

Here,

$\sigma$, $\varepsilon$, $\rho$, and $E$ are the failure stress, failure strain, density, and elastic modulus of the yarn,
$A_d$ is the mass per unit area of the armor, and
$A_p$ is the mass per unit area of the projectile.
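
For a sense of the magnitudes involved, the minimal sketch below evaluates the bracketed fiber term (σε/2ρ)·√(E/ρ) and its cube root, which has units of velocity, using approximate handbook-style properties for a para-aramid yarn such as Kevlar. The property values are rough illustrations, not manufacturer data.

```python
# Hedged sketch: evaluating the Cunniff fiber term U* = (sigma*eps / (2*rho)) * sqrt(E/rho)
# and its cube root, a characteristic velocity for the fiber. The yarn properties are
# approximate, illustrative values for a para-aramid such as Kevlar.
import math

sigma = 3.6e9   # failure stress, Pa (approximate)
eps = 0.036     # failure strain (approximate)
rho = 1440.0    # density, kg/m^3 (approximate)
E = 1.0e11      # elastic modulus, Pa (approximate)

u_star = (sigma * eps / (2.0 * rho)) * math.sqrt(E / rho)  # units: m^3/s^3
characteristic_velocity = u_star ** (1.0 / 3.0)            # units: m/s

print(f"U* = {u_star:.3e} m^3/s^3")
print(f"(U*)^(1/3) = {characteristic_velocity:.0f} m/s")
```

With these ballpark inputs the characteristic velocity comes out around 700 m/s, the right order of magnitude for the V50 performance of aramid-based soft armor against small fragments.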

Military testing

After the Vietnam War, military planners developed a concept of "Casualty Reduction". The large body of casualty data made clear that in a combat situation, fragments, not bullets, were the greatest threat to soldiers. After World War II, vests were being developed and fragment testing was in its early stages. Artillery shells, mortar shells, aerial bombs, grenades, and antipersonnel mines are all fragmentation devices: each contains a steel casing designed to burst into small steel fragments, or shrapnel, when its explosive core detonates. After considerable effort measuring fragment size distributions from various NATO and Soviet Bloc munitions, a fragment test was developed. Fragment simulators were designed; the most common shape is the Right Circular Cylinder (RCC) simulator, whose length equals its diameter. These RCC Fragment Simulation Projectiles (FSPs) are tested as a group. The test series most often includes 2-grain (0.13 g), 4-grain (0.26 g), 16-grain (1.0 g), and 64-grain (4.1 g) RCC FSP testing. The 2-4-16-64 series is based on the measured fragment size distributions.

The second part of the "Casualty Reduction" strategy is a study of the velocity distributions of fragments from munitions. Warhead explosives have blast speeds of 20,000 ft/s (6,100 m/s) to 30,000 ft/s (9,100 m/s). As a result, they are capable of ejecting fragments at speeds of over 3,330 ft/s (1,010 m/s), implying very high energy (where the energy of a fragment is ½ × mass × velocity², neglecting rotational energy). The military engineering data showed that, like the fragment sizes, the fragment velocities have characteristic distributions. It is possible to segment the fragment output from a warhead into velocity groups. For example, 95% of all fragments from a bomb blast under 4 grains (0.26 g) have a velocity of 3,000 ft/s (910 m/s) or less. This established a set of goals for military ballistic vest design.
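
As a worked example of the energy figures above, the short sketch below converts FSP masses from grains to kilograms and applies the ½ × mass × velocity² formula at the velocities quoted in the text. The pairing of masses with velocities is illustrative, not taken from a specific test specification.

```python
# Hedged sketch: kinetic energy (1/2 * m * v^2) of fragment-simulating projectiles.
# Masses follow the 2-4-16-64 grain series above; the velocities paired with them
# are the figures quoted in the text, used here for illustration only.

GRAIN_TO_KG = 0.06479891e-3  # 1 grain = 0.06479891 g

def fragment_energy_joules(mass_grains: float, velocity_ms: float) -> float:
    return 0.5 * mass_grains * GRAIN_TO_KG * velocity_ms ** 2

# A small 4-grain fragment at 910 m/s (3,000 ft/s) and a large 64-grain fragment
# at roughly 1,010 m/s (3,330 ft/s).
print(f"4-grain  @  910 m/s: {fragment_energy_joules(4, 910):7.0f} J")
print(f"64-grain @ 1010 m/s: {fragment_energy_joules(64, 1010):7.0f} J")
```

Even the small fragment carries on the order of 100 J, comparable to a handgun bullet's energy per unit of presented area, which is why fragment protection dominated the vest specifications described here.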

The random nature of fragmentation required the military vest specification to trade off mass against ballistic benefit. Hard vehicle armor is capable of stopping all fragments, but military personnel can only carry a limited amount of gear and equipment, so the weight of the vest is a limiting factor in vest fragment protection. The 2-4-16-64 grain series at limited velocity can be stopped by an all-textile vest of approximately 5.4 kg/m² (1.1 lb/ft²). In contrast to deformable lead bullets, fragments do not change shape; they are steel and cannot be deformed by textile materials. The 2-grain (0.13 g) FSP (the smallest fragment projectile commonly used in testing) is about the size of a grain of rice; such small, fast-moving fragments can potentially slip through the vest, moving between yarns. As a result, fabrics optimized for fragment protection are tightly woven, although these fabrics are not as effective at stopping lead bullets.
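
To put the 5.4 kg/m² figure in perspective, the small sketch below estimates the weight of such a textile package for an assumed torso coverage area; the coverage value is purely illustrative, since real vests vary by size and cut.

```python
# Hedged sketch: textile package weight implied by an areal density of 5.4 kg/m^2.
# The assumed coverage area is illustrative only.

AREAL_DENSITY_KG_PER_M2 = 5.4
coverage_area_m2 = 0.5  # assumed front-plus-back torso coverage, roughly

package_weight_kg = AREAL_DENSITY_KG_PER_M2 * coverage_area_m2
print(f"approximate textile package weight: {package_weight_kg:.1f} kg")
```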

By the 2010s, the development of body armor had been stymied with regard to weight: designers had trouble increasing the protective capability of body armor while maintaining or decreasing its weight.
