A hunter-gatherer or forager is a human living in a community, or according to an ancestrally derived lifestyle, in which most or all food is obtained by foraging, that is, by gathering food from local naturally occurring sources, especially wild edible plants but also insects, fungi, honey, bird eggs, or anything safe to eat, and/or by hunting game (pursuing and/or trapping and killing wild animals, including catching fish). This is a common practice among most vertebrates that are omnivores. Hunter-gatherer societies stand in contrast to the more sedentary agricultural societies, which rely mainly on cultivating crops and raising domesticated animals for food production, although the boundaries between the two ways of living are not completely distinct.
Hunting and gathering was humanity's original and most enduring successful competitive adaptation in the natural world, occupying at least 90 percent of human history. Following the invention of agriculture, hunter-gatherers who did not change were displaced or conquered by farming or pastoralist groups in most parts of the world.
Across Western Eurasia, it was not until approximately 4,000 BC that
farming and metallurgical societies completely replaced
hunter-gatherers. These technologically advanced societies expanded
faster in areas with less forest, pushing hunter-gatherers into denser
woodlands. Only the middle-late Bronze Age and Iron Age societies were
able to fully replace hunter-gatherers in their final stronghold located
in the most densely forested areas. Unlike their Bronze and Iron Age
counterparts, Neolithic societies could not establish themselves in
dense forests, and Copper Age societies had only limited success.
A single study found that women, in addition to men, engage in hunting in 79% of modern hunter-gatherer societies.
However, an attempted verification of this study found "that multiple
methodological failures all bias their results in the same
direction...their analysis does not contradict the wide body of
empirical evidence for gendered divisions of labor in foraging
societies". Only a few contemporary societies of uncontacted people are still classified as hunter-gatherers, and many supplement their foraging activity with horticulture or pastoralism.
Archaeological evidence
Hunting and gathering was presumably the subsistence strategy employed by human societies beginning some 1.8 million years ago, by Homo erectus, and from its appearance some 200,000 years ago by Homo sapiens. Prehistoric hunter-gatherers lived in groups consisting of several families, totaling a few dozen people. Hunting and gathering remained the only mode of subsistence until the end of the Mesolithic period some 10,000 years ago, and after this was replaced only gradually with the spread of the Neolithic Revolution.
The Late Pleistocene witnessed the spread of modern humans outside of Africa as well as the extinction of all other human species. Humans spread to the Australian continent and the Americas for the first time, coincident with the extinction of numerous predominantly megafaunal species.
Major extinctions were incurred in Australia beginning approximately
50,000 years ago and in the Americas about 15,000 years ago. Ancient North Eurasians lived in extreme conditions of the mammoth steppes of Siberia and survived by hunting mammoths, bison and woolly rhinoceroses. The settlement of the Americas began when Paleolithic hunter-gatherers entered North America from the North Asian mammoth steppe via the Beringia land bridge.
During the 1970s, Lewis Binford suggested that early humans obtained food via scavenging, not hunting. Early humans in the Lower Paleolithic lived in forests and woodlands,
which allowed them to collect seafood, eggs, nuts, and fruits besides
scavenging. Rather than killing large animals for meat, according to
this view, they used carcasses of such animals that had either been
killed by predators or that had died of natural causes.
Scientists have demonstrated that the evidence for early human
behaviors for hunting versus carcass scavenging vary based on the
ecology, including the types of predators that existed and the
environment.
According to the endurance running hypothesis, long-distance running as in persistence hunting,
a method still practiced by some hunter-gatherer groups in modern
times, was likely the driving evolutionary force leading to the
evolution of certain human characteristics. This hypothesis does not
necessarily contradict the scavenging hypothesis: both subsistence
strategies may have been in use sequentially, alternately or even
simultaneously.
Starting at the transition between the Middle to Upper Paleolithic
period, some 80,000 to 70,000 years ago, some hunter-gatherer bands
began to specialize, concentrating on hunting a smaller selection of
(often larger) game and gathering a smaller selection of food. This
specialization of work also involved creating specialized tools such as fishing nets, hooks, and bone harpoons. The transition into the subsequent Neolithic period is chiefly defined by the unprecedented development of nascent agricultural practices. Agriculture originated as early as 12,000 years ago in the Middle East, and also independently originated in many other areas including Southeast Asia, parts of Africa, Mesoamerica, and the Andes.
Forest gardening was also being used as a food production system in various parts of the world over this period.
Many groups continued their hunter-gatherer ways of life,
although their numbers have continually declined, partly as a result of
pressure from growing agricultural and pastoral communities. Many of
them reside in the developing world, either in arid regions or tropical
forests. Areas that were formerly available to hunter-gatherers were—and
continue to be—encroached upon by the settlements of agriculturalists.
In the resulting competition for land use, hunter-gatherer societies
either adopted these practices or moved to other areas. In addition, Jared Diamond has blamed the decline of hunter-gatherers partly on the decreasing availability of wild foods, particularly animal resources. In North and South America, for example, most large mammal species had gone extinct by the end of the Pleistocene—according to Diamond, because of overexploitation by humans, one of several explanations offered for the Quaternary extinction event there.
As a result of the now near-universal human reliance upon
agriculture, the few contemporary hunter-gatherer cultures usually live
in areas unsuitable for agricultural use.
Archaeologists can use evidence such as stone tool use to track hunter-gatherer activities, including mobility.
Ethnobotany is the field of study whereby food plants of various peoples and tribes worldwide are documented.
Most hunter-gatherers are nomadic
or semi-nomadic and live in temporary settlements. Mobile communities
typically construct shelters using impermanent building materials, or
they may use natural rock shelters, where they are available.
Some hunter-gatherer cultures, such as the indigenous peoples of the Pacific Northwest Coast and the Yokuts,
lived in particularly rich environments that allowed them to be
sedentary or semi-sedentary. Among the earliest examples of permanent
settlements is the Osipovka culture (14–10.3 thousand years ago), whose people lived in a fish-rich environment that allowed them to stay in the same place all year. One group, the Chumash,
had the highest recorded population density of any known hunter and
gatherer society with an estimated 21.6 persons per square mile.
Social and economic structure
Hunter-gatherers tend to have an egalitarian social ethos, although settled hunter-gatherers (for example, those inhabiting the Northwest Coast of North America and the Calusa in Florida) are an exception to this rule. For example, the San people
or "Bushmen" of southern Africa have social customs that strongly
discourage hoarding and displays of authority, and encourage economic
equality via sharing of food and material goods. Karl Marx defined this socio-economic system as primitive communism.
The egalitarianism typical of human hunters and gatherers is never
total but is striking when viewed in an evolutionary context. One of
humanity's two closest primate relatives, chimpanzees, are anything but egalitarian, forming themselves into hierarchies that are often dominated by an alpha male. So great is the contrast with human hunter-gatherers that it is widely argued by paleoanthropologists that resistance to being dominated was a key factor driving the evolutionary emergence of human consciousness, language, kinship and social organization.
Most anthropologists believe that hunter-gatherers do not have
permanent leaders; instead, the person taking the initiative at any one
time depends on the task being performed.
Within a particular tribe or people, hunter-gatherers are connected by both kinship and band (residence/domestic group) membership. Postmarital residence among hunter-gatherers tends to be matrilocal, at least initially. Young mothers can enjoy childcare support from their own mothers, who continue living nearby in the same camp.
The systems of kinship and descent among human hunter-gatherers were
relatively flexible, although there is evidence that early human kinship
in general tended to be matrilineal.
The conventional assumption has been that women did most of the gathering, while men concentrated on big game hunting. An illustrative account is Megan Biesele's study of the southern African Ju/'hoan, 'Women Like Meat'. A recent study suggests that the sexual division of labor was the fundamental organizational innovation that gave Homo sapiens the edge over the Neanderthals, allowing our ancestors to migrate from Africa and spread across the globe.
A 1986 study found most hunter-gatherers have a symbolically structured sexual division of labor.
However, it is true that in a small minority of cases, women hunted the
same kind of quarry as men, sometimes doing so alongside men. Among the
Ju/'hoansi people of Namibia, women help men track down quarry.
In the Australian Martu, both women and men participate in hunting but
with a different style of gendered division; while men are willing to
take more risks to hunt bigger animals such as kangaroo for political
gain as a form of "competitive magnanimity", women target smaller game
such as lizards to feed their children and promote working relationships
with other women, preferring a more constant supply of sustenance. In 2018, 9000-year-old remains of a female hunter along with a toolkit of projectile points and animal processing implements were discovered at the Andean site of Wilamaya Patjxa, Puno District in Peru.
A 2020 study inspired by this discovery found that of 27 identified burials of hunter-gatherers of a known sex who were also buried with hunting tools, 11 were female, while 16 were male. Combined with uncertainties, these findings suggest that anywhere from 30 to 50 percent of big game hunters were female.
A 2023 study that looked at studies of contemporary hunter-gatherer societies from the 1800s to the present day found that women hunted in 79 percent of hunter-gatherer societies.
However, an attempted verification of this study found "that multiple
methodological failures all bias their results in the same
direction...their analysis does not contradict the wide body of
empirical evidence for gendered divisions of labor in foraging
societies".
At the 1966 "Man the Hunter" conference, anthropologists Richard Borshay Lee and Irven DeVore suggested that egalitarianism
was one of several central characteristics of nomadic hunting and
gathering societies because mobility requires minimization of material
possessions throughout a population. Therefore, no surplus of resources
can be accumulated by any single member. Other characteristics Lee and
DeVore proposed were flux in territorial boundaries as well as in demographic composition.
At the same conference, Marshall Sahlins presented a paper entitled, "Notes on the Original Affluent Society", in which he challenged the popular view of hunter-gatherers' lives as "solitary, poor, nasty, brutish and short", as Thomas Hobbes
had put it in 1651. According to Sahlins, ethnographic data indicated
that hunter-gatherers worked far fewer hours and enjoyed more leisure
than typical members of industrial society, and they still ate well.
Their "affluence" came from the idea that they were satisfied with very
little in the material sense.
Later, in 1996, Ross Sackett performed two distinct meta-analyses to
empirically test Sahlins' view. The first of these studies looked at 102
time-allocation studies, and the second one analyzed 207
energy-expenditure studies. Sackett found that adults in foraging and
horticultural societies work, on average, about 6.5 hours a day, whereas
people in agricultural and industrial societies work on average 8.8
hours a day.
Sahlins' theory has been criticized for only including time spent
hunting and gathering while omitting time spent on collecting firewood,
food preparation, etc. Other scholars also assert that hunter-gatherer
societies were not "affluent" but suffered from extremely high infant
mortality, frequent disease, and perennial warfare.
Researchers Gurven and Kaplan have estimated that around 57% of
hunter-gatherers reach the age of 15. Of those that reach 15 years of
age, 64% continue to live to or past the age of 45. This places the life
expectancy between 21 and 37 years.
They further estimate that 70% of deaths are due to diseases of some
kind, 20% of deaths come from violence or accidents and 10% are due to
degenerative diseases.
Mutual exchange and sharing of resources (i.e., meat gained from
hunting) are important in the economic systems of hunter-gatherer
societies. Therefore, these societies can be described as based on a "gift economy".
A 2010 paper argued that while hunter-gatherers may have lower levels
of inequality than modern, industrialised societies, that does not mean
inequality does not exist. The researchers estimated that the average
Gini coefficient amongst hunter-gatherers was 0.25, equivalent to that of Denmark in 2007. In addition, wealth transmission across
generations was also a feature of hunter-gatherers, meaning that
"wealthy" hunter-gatherers, within the context of their communities,
were more likely to have children as wealthy as them than poorer members
of their community and indeed hunter-gatherer societies demonstrate an
understanding of social stratification. Thus while the researchers
agreed that hunter-gatherers were more egalitarian than modern
societies, prior characterisations of them living in a state of
egalitarian primitive communism were inaccurate and misleading.
This study, however, exclusively examined modern hunter-gatherer
communities, offering limited insight into the exact nature of social
structures that existed prior to the Neolithic Revolution. Alain Testart
and others have said that anthropologists should be careful when using
research on current hunter-gatherer societies to determine the structure
of societies in the Paleolithic
era, emphasising cross-cultural influences, progress and development
that such societies have undergone in the past 10,000 years.
As such, the combined anthropological and archaeological evidence to
date continues to favour previous understandings of early
hunter-gatherers as largely egalitarian.
Diet
As one moves away from the equator,
the importance of plant food decreases and the importance of aquatic
food increases. In cold and heavily forested environments, edible plant
foods and large game are less abundant and hunter-gatherers may turn to
aquatic resources to compensate. Hunter-gatherers in cold climates also
rely more on stored food than those in warm climates. However, aquatic
resources tend to be costly, requiring boats and fishing
technology, and this may have impeded their intensive use in
prehistory. Marine food probably did not start becoming prominent in the
diet until relatively recently, during the Late Stone Age in southern Africa and the Upper Paleolithic in Europe.
Fat
is important in assessing the quality of game among hunter-gatherers, to
the point that lean animals are often considered secondary resources or
even starvation food. Consuming too much lean meat leads to adverse
health effects like protein poisoning, and can in extreme cases lead to death. Additionally, a diet high in protein and low in other macronutrients
results in the body using the protein as energy, possibly leading to
protein deficiency. Lean meat especially becomes a problem when animals
go through a lean season that requires them to metabolize fat deposits.
In areas where plant and fish resources are scarce, hunter-gatherers may trade meat with horticulturalists for carbohydrates.
For example, tropical hunter-gatherers may have an excess of protein
but be deficient in carbohydrates, and conversely tropical
horticulturalists may have a surplus of carbohydrates but inadequate
protein. Trading may thus be the most cost-effective means of acquiring
carbohydrate resources.
Variability
Hunter-gatherer societies manifest significant variability, depending on climate zone/life zone,
available technology, and societal structure. Archaeologists examine
hunter-gatherer tool kits to measure variability across different
groups. Collard et al. (2005) found temperature to be the only statistically significant factor to impact hunter-gatherer tool kits. Using temperature as a proxy for risk, Collard et al.'s
results suggest that environments with extreme temperatures pose a
threat to hunter-gatherer systems significant enough to warrant
increased variability of tools. These results support Torrence's (1989)
theory that the risk of failure is indeed the most important factor in
determining the structure of hunter-gatherer toolkits.
One way to divide hunter-gatherer groups is by their return
systems. James Woodburn uses the categories "immediate return"
hunter-gatherers for egalitarian societies and "delayed return" for
nonegalitarian ones. Immediate return foragers consume their food within a
day or two after they procure it. Delayed return foragers store the
surplus food.
Hunting-gathering was the common human mode of subsistence throughout the Paleolithic,
but the observation of current-day hunters and gatherers does not
necessarily reflect Paleolithic societies; the hunter-gatherer cultures
examined today have had much contact with modern civilization and do not
represent "pristine" conditions found in uncontacted peoples.
The transition from hunting and gathering to agriculture is not necessarily a one-way process.
It has been argued that hunting and gathering represents an adaptive strategy, which may still be exploited, if necessary, when environmental change causes extreme food stress for agriculturalists.
In fact, it is sometimes difficult to draw a clear line between
agricultural and hunter-gatherer societies, especially since the
widespread adoption of agriculture and resulting cultural diffusion that
has occurred in the last 10,000 years.
Nowadays, some scholars speak about the existence within cultural evolution of so-called mixed economies or dual economies, which imply a combination of food procurement (gathering and hunting) and food production, or situations in which foragers have trade relations with farmers.
Modern and revisionist perspectives
Some of the theorists who advocate this "revisionist" critique imply that, because the "pure hunter-gatherer" disappeared not long after colonial
(or even agricultural) contact began, nothing meaningful can be learned
about prehistoric hunter-gatherers from studies of modern ones (see Wilmsen).
Lee and Guenther have rejected most of the arguments put forward by Wilmsen. Doron Shultziner and others have argued that we can learn a lot about
the life-styles of prehistoric hunter-gatherers from studies of
contemporary hunter-gatherers—especially their impressive levels of
egalitarianism.
There are nevertheless a number of contemporary hunter-gatherer
peoples who, after contact with other societies, continue their ways of
life with very little external influence or with modifications that
perpetuate the viability of hunting and gathering in the 21st century. One such group is the Pila Nguru (Spinifex people) of Western Australia, whose land in the Great Victoria Desert has proved unsuitable for European agriculture (and even pastoralism). Another is the Sentinelese of the Andaman Islands in the Indian Ocean, who live on North Sentinel Island and to date have maintained their independent existence, repelling attempts to engage with and contact them. The Savanna Pumé
of Venezuela also live in an area that is inhospitable to large scale
economic exploitation and maintain their subsistence based on hunting
and gathering, as well as incorporating a small amount of manioc
horticulture that supplements, but does not replace, reliance on foraged
foods.
Evidence suggests big-game hunter-gatherers crossed the Bering Strait from Asia (Eurasia) into North America over a land bridge (Beringia) that existed between 47,000 and 14,000 years ago. Around 18,500–15,500 years ago, these hunter-gatherers are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched between the Laurentide and Cordilleran ice sheets. Another route proposed is that, either on foot or using primitive boats, they migrated down the Pacific coast to South America.
Hunter-gatherers would eventually flourish all over the Americas, primarily based in the Great Plains of the United States and Canada, with offshoots as far east as the Gaspé Peninsula on the Atlantic coast, and as far south as Monte Verde, Chile.
American hunter-gatherers were spread over a wide geographical area,
thus there were regional variations in lifestyles. However, all the
individual groups shared a common style of stone tool production, making
knapping styles and progress identifiable. These early Paleo-Indian period lithic reduction
tool adaptations have been found across the Americas, utilized by
highly mobile bands consisting of approximately 25 to 50 members of an
extended family.
The Archaic period in the Americas saw a changing environment featuring a warmer, more arid climate and the disappearance of the last megafauna.
The majority of population groups at this time were still highly mobile
hunter-gatherers. Individual groups started to focus on resources
available to them locally, however, and thus archaeologists have
identified a pattern of increasing regional generalization, as seen with
the Southwest, Arctic, Poverty Point, Dalton and Plano traditions. These regional adaptations would become the norm, with reliance less on hunting and gathering, with a more mixed economy of small game, fish, seasonally wild vegetables and harvested plant foods.
Scholars like Kat Anderson
have suggested that the term hunter-gatherer is reductive because it
implies that Native Americans never stayed in one place long enough to
affect the environment around them. However, many of the landscapes in
the Americas today are due to the way the Natives of that area
originally tended the land. Anderson specifically looks at California
Natives and the practices they utilized to tame their land. Some of
these practices included pruning, weeding, sowing, burning, and
selective harvesting. These practices allowed them to take from the
environment in a sustainable manner for centuries.
California Indians view the idea of wilderness in a negative
light. They believe that wilderness is the result of humans losing their
knowledge of the natural world and how to care for it. When the earth
turns back to wilderness after the connection with humans is lost then
the plants and animals will retreat and hide from the humans.
Software testing
Software testing is the act of checking whether software satisfies expectations.
Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor.
Software testing can determine the correctness of software for specific scenarios, but cannot determine correctness for all scenarios. It cannot find all bugs.
Based on the criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include: specifications, contracts,
comparable products, past versions of the same product, inferences
about intended or expected purpose, user or customer expectations,
relevant standards, applicable laws.
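For instance, a past version of the same product can serve as an oracle. A minimal sketch in Python, with illustrative names: `slow_sort` stands in for a trusted past version, `fast_sort` for the code under test.

```python
# Hypothetical sketch: using a trusted reference implementation as a test
# oracle. `slow_sort` plays the role of a past version whose behavior we
# trust; `fast_sort` is the new code under test.

def slow_sort(xs):           # oracle: trusted past version
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out

def fast_sort(xs):           # system under test
    return sorted(xs)

def test_against_oracle():
    cases = [[], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
    for case in cases:
        # any disagreement with the oracle is flagged as a problem
        assert fast_sort(case) == slow_sort(case)
```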
Software testing is often dynamic in nature; running the software
to verify actual output matches expected. It can also be static in
nature; reviewing code and its associated documentation.
Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do?
Information learned from software testing may be used to improve the process by which software is developed.
Software testing often follows a "pyramid" approach, wherein most tests are unit tests, followed by integration tests, with end-to-end (e2e) tests making up the smallest proportion.
Economics
A study conducted by NIST
in 2002 reported that software bugs cost the U.S. economy $59.5 billion
annually. More than a third of this cost could be avoided if better
software testing were performed.
Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.
History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979. Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."),
it illustrated the desire of the software engineering community to
separate fundamental development activities, such as debugging, from
that of verification.
Goals
Software testing is typically goal driven.
Finding bugs
Software testing typically includes handling software bugs – a defect in the code that causes an undesirable result. Bugs generally slow testing progress and involve programmer assistance to debug and fix.
Not all defects cause a failure. For example, a defect in dead code will not be considered a failure.
A defect that does not cause failure at one point in time may
later occur due to environmental changes. Examples of environment change
include running on new computer hardware, changes in data, and interacting with different software.
A single defect may result in multiple failure symptoms.
A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.
Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) – usability, scalability, performance, compatibility, and reliability – can be subjective; something that constitutes sufficient value to one person may not to another.
Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.
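One hedged illustration of this idea in Python: a classic L9 orthogonal array covers every pair of values across three 3-valued parameters with 9 test cases instead of the 27 needed for the full cartesian product. The code below verifies that pairwise coverage.

```python
from itertools import combinations, product

# L9 orthogonal array: 9 test cases covering every PAIR of values across
# three parameters with three values each (vs. 27 exhaustive combinations).
# Values 0..2 index into each parameter's options.
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

# Verify all-pairs coverage: for each pair of parameter columns, every
# combination of values must appear in at least one test case.
for c1, c2 in combinations(range(3), 2):
    covered = {(row[c1], row[c2]) for row in L9}
    assert covered == set(product(range(3), repeat=2))

print(f"{len(L9)} pairwise tests replace {3 ** 3} exhaustive ones")
```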
In software testing, test automation is the use of software
separate from the software being tested to control the execution of
tests and the comparison of actual outcomes with predicted outcomes.
Test automation can automate some repetitive but necessary tasks in a
formalized testing process already in place, or perform additional
testing that would be difficult to do manually. Test automation is
critical for continuous delivery and continuous testing.
Levels
Software testing can be categorized into levels based on how much of the software system is the focus of a test.
Unit testing
Unit testing, a.k.a. component or module testing, is a form of software testing by which isolated source code is tested to validate expected behavior.
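A minimal sketch of a unit test using Python's standard unittest module; the `parse_version` helper is a hypothetical unit invented for illustration.

```python
import unittest

# The unit under test: a hypothetical helper, isolated from the rest of
# the system.
def parse_version(text):
    major, minor, patch = text.split(".")
    return int(major), int(minor), int(patch)

class TestParseVersion(unittest.TestCase):
    def test_parses_dotted_triple(self):
        self.assertEqual(parse_version("1.4.2"), (1, 4, 2))

    def test_rejects_garbage(self):
        # unpacking/int conversion both raise ValueError on bad input
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()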
Integration testing
Integration testing, also called integration and testing, abbreviated I&T, is a form of software testing in which multiple parts of a software system are tested as a group.
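A small illustrative sketch, with hypothetical components, of testing two parts as a group rather than in isolation:

```python
# Illustrative sketch: two units that may each pass unit tests in isolation
# are exercised together. The repository and summary function are invented.
class InMemoryRepository:
    def __init__(self):
        self._rows = []

    def add(self, row):
        self._rows.append(row)

    def all(self):
        return list(self._rows)

def summarize(repo):
    rows = repo.all()
    return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

def test_repository_and_summary_integrate():
    repo = InMemoryRepository()
    repo.add({"amount": 10})
    repo.add({"amount": 5})
    # the two parts are tested as a group, not in isolation
    assert summarize(repo) == {"count": 2, "total": 15}
```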
Static testing is often implicit, like proofreading, plus when
programming tools/text editors check source code structure or compilers
(pre-compilers) check syntax and data flow as static program analysis.
Dynamic testing takes place when the program itself is run. Dynamic
testing may begin before the program is 100% complete in order to test
particular sections of code; such testing is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
Passive testing means verifying the system's behavior without any
interaction with the software product. Contrary to active testing,
testers do not provide any test data but look at system logs and traces.
They mine for patterns and specific behavior in order to make some kind
of decisions. This is related to offline runtime verification and log analysis.
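A hedged sketch of the idea: no test data is supplied to the system; instead, existing logs are mined for patterns. The log format and threshold here are invented for illustration.

```python
import re

# Passive-testing sketch: mine existing system logs for suspicious patterns
# rather than driving the system with test inputs. Log format is invented.
LOG = """\
2024-01-02 10:00:01 INFO  request id=17 ok
2024-01-02 10:00:02 ERROR request id=18 timeout
2024-01-02 10:00:09 ERROR request id=18 timeout
"""

def find_repeated_errors(log_text, threshold=2):
    counts = {}
    for line in log_text.splitlines():
        m = re.search(r"ERROR\s+(.*)", line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return {msg: n for msg, n in counts.items() if n >= threshold}

assert find_repeated_errors(LOG) == {"request id=18 timeout": 2}
```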
Exploratory
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984,
defines exploratory testing as "a style of software testing that
emphasizes the personal freedom and responsibility of the individual
tester to continually optimize the quality of his/her work by treating
test-related learning, test design, test execution, and test result
interpretation as mutually supportive activities that run in parallel
throughout the project."
Preset testing vs adaptive testing
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing).
Black/white box
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing, which includes aspects of both, may also be applied to software testing methodology.
White-box testing (also known as clear box testing, glass box
testing, transparent box testing, and structural testing) verifies the
internal structures or workings of a program, as opposed to the
functionality exposed to the end-user. In white-box testing, an internal
perspective of the system (the source code), as well as programming
skills, are used to design test cases. The tester chooses inputs to
exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.
It can test paths within a unit, paths between units during
integration, and between subsystems during a system–level test. Though
this method of test design can uncover many errors or problems, it might
not detect unimplemented parts of the specification or missing
requirements.
Techniques used in white-box testing include:
API testing – testing of the application using public and private APIs (application programming interfaces)
Code coverage
– creating tests to satisfy some criteria of code coverage (for
example, the test designer can create tests to cause all statements in
the program to be executed at least once)
Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
Code coverage tools can evaluate the completeness of a test suite
that was created with any method, including black-box testing. This
allows the software team to examine parts of a system that are rarely
tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
Decision coverage, which reports on whether both the True and the False branch of a given test have been executed
100% statement coverage ensures that every statement is executed at least once, but it does not guarantee that every branch (in terms of control flow) is taken; decision coverage is needed for that. Even full coverage is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.
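A short illustration of why statement coverage alone is insufficient (the function and tests are hypothetical): a single test can execute every statement while never taking the decision's False branch.

```python
# Why statement coverage alone is not sufficient: one test can execute
# every statement of this illustrative function while never exercising the
# False branch of the condition.
def grant_discount(age, is_member):
    discount = 0
    if age >= 65 or is_member:
        discount = 10          # a call with age=70 reaches this line
    return discount

def test_statement_coverage_only():
    # executes 100% of statements...
    assert grant_discount(70, False) == 10

def test_decision_coverage_too():
    # ...but decision coverage also requires the branch NOT taken
    assert grant_discount(30, False) == 0
```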
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases
to be provided to the tester, who then can simply verify that for a
given input, the output value (or behavior), either "is" or "is not" the
same as the expected value specified in the test case. Test cases are
built around specifications and requirements, i.e., what the application
is supposed to do. It uses external descriptions of the software,
including specifications, requirements, and designs to derive test
cases. These tests can be functional or non-functional,
though usually functional. Specification-based testing may be necessary
to assure correct functionality, but it is insufficient to guard
against complex or high-risk situations.
Black box testing can be used at any level of testing although usually not at the unit level.
Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.
The practice of component interface testing can be used to check the
handling of data passed between various units, or subsystem components,
beyond full integration testing between those units.
The data being passed can be considered as "message packets" and the
range or data types can be checked, for data generated from one unit,
and tested for validity before being passed into another unit. One
option for interface testing is to keep a separate log file of data
items being passed, often with a timestamp logged to allow analysis of
thousands of cases of data passed between units for days or weeks. Tests
can include checking the handling of some extreme data values while
other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit.
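An illustrative sketch, with invented names and ranges, of treating inter-unit data as message packets that are range-checked and logged with a timestamp before being handed to the next unit:

```python
import time

# Component-interface-testing sketch: data passed between two units is
# treated as a "message packet", range-checked, and logged with a timestamp
# so long runs can be analyzed afterwards. All names/ranges are invented.
interface_log = []

def check_packet(packet):
    assert isinstance(packet.get("temp_c"), (int, float))
    assert -50 <= packet["temp_c"] <= 150, "out-of-range value at interface"

def pass_to_next_unit(packet, receiver):
    check_packet(packet)                        # validate before handing over
    interface_log.append((time.time(), dict(packet)))
    return receiver(packet)

def display_unit(packet):
    return f"{packet['temp_c']:.1f} C"

assert pass_to_next_unit({"temp_c": 21.5}, display_unit) == "21.5 C"
```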
Visual testing
The
aim of visual testing is to provide developers with the ability to
examine what was happening at the point of software failure by
presenting the data in such a way that the developer can easily find the
information he or she requires, and the information is expressed
clearly.
At the core of visual testing is the idea that showing someone a
problem (or a test failure), rather than just describing it, greatly
increases clarity and understanding. Visual testing, therefore, requires
the recording of the entire test process – capturing everything that
occurs on the test system in video format. Output videos are
supplemented by real-time tester input via picture-in-a-picture webcam
and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of
communication is increased drastically because testers can show the
problem (and the events leading up to it) to the developer as opposed to
just describing it and the need to replicate test failures will cease
to exist in many cases. The developer will have all the evidence he or
she requires of a test failure and can instead focus on the cause of the
fault and how it should be fixed.
Ad hoc testing and exploratory testing
are important methodologies for checking software integrity, because
they require less preparation time to implement, while the important
bugs can be found quickly.
In ad hoc testing, where testing takes place in an improvised impromptu
way, the ability of the tester(s) to base testing off documented
methods and then improvise variations of those tests can result in more
rigorous examination of defect fixes.
However, unless strict documentation of the procedures is maintained,
one of the limits of ad hoc testing is lack of repeatability.
Grey-box testing (American spelling: gray-box testing) involves using
knowledge of internal data structures and algorithms for purposes of
designing tests while executing those tests at the user, or black-box
level. The tester will often have access to both "the source code and
the executable binary." Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages.
Manipulating input data and formatting output do not qualify as
grey-box, as the input and output are clearly outside of the "black box"
that we are calling the system under test. This distinction is
particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the
tester makes better-informed testing choices while testing the software
from outside. Typically, a grey-box tester will be permitted to set up
an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL
statements against the database and then executing queries to ensure
that the expected changes have been reflected. Grey-box testing
implements intelligent test scenarios, based on limited information.
This will particularly apply to data type handling, exception handling, and so on.
With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.
Installation testing
Most software systems have installation procedures that are needed
before they can be used for their main purpose. Testing these procedures
to achieve an installed software system that may be used is known as installation testing. These procedures may involve full or partial upgrades, and install/uninstall processes.
A user must select a variety of options.
Dependent files and libraries must be allocated, loaded or located.
Valid hardware configurations must be present.
Software systems may need connectivity to connect to other software systems.
A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility,
this can occur because the programmers develop and test software only
on the latest version of the target environment, which not all users may
be running. This results in the unintended consequence that the latest
work may not function on earlier versions of the target environment, or
on older hardware that earlier versions of the target environment were
capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing
consists of minimal attempts to operate the software, designed to
determine whether there are any basic problems that will prevent it from
working at all. Such tests can be used as a build verification test.
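A minimal smoke-test sketch; invoking the Python interpreter here stands in for starting the real application binary.

```python
import subprocess
import sys

# Smoke test / build verification sketch: only asks "does the program start
# and answer at all?". The invocation stands in for the real application.
def test_smoke_cli_starts():
    result = subprocess.run([sys.executable, "-c", "print('ok')"],
                            capture_output=True, text=True, timeout=30)
    assert result.returncode == 0
    assert result.stdout.strip() == "ok"
```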
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions,
such as degraded or lost features, including old bugs that have come back.
Such regressions occur whenever software functionality that was
previously working correctly, stops working as intended. Typically,
regressions occur as an unintended consequence
of program changes, when the newly developed part of the software
collides with the previously existing code. Regression testing is
typically the largest test effort in commercial software development,
due to checking numerous details in prior software features, and even
new software can be developed while using some old test cases to test
parts of the new design to ensure prior functionality is still
supported.
Common methods of regression testing include re-running previous
sets of test cases and checking whether previously fixed faults have
re-emerged. The depth of testing depends on the phase in the release
process and the risk
of the added features. They can either be complete, for changes added
late in the release or deemed to be risky, or be very shallow,
consisting of positive tests on each feature, if the changes are early
in the release or deemed to be of low risk.
Acceptance testing is system-level testing to ensure the software meets customer expectations. Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Tests are frequently grouped into these levels by where they are
performed in the software development process, or by the level of
specificity of the test.
User acceptance testing (UAT)
Operational acceptance testing (OAT)
Contractual and regulatory acceptance testing
Alpha and beta testing
Sometimes, UAT is performed by the customer, in their environment and on their own hardware.
OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance
projects. This type of testing focuses on the operational readiness of
the system to be supported, or to become part of the production
environment. Hence, it is also known as operational readiness testing
(ORT) or Operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system.
In addition, testing should ensure that the system is portable and works as expected without damaging or partially corrupting its operating environment or causing other processes within that environment to become inoperative.
Contractual acceptance testing is performed based on the
contract's acceptance criteria defined during the agreement of the
contract, while regulatory acceptance testing is performed based on the
relevant regulations to the software product. Both of these tests
can be performed by users or independent testers. Regulatory acceptance
testing sometimes involves the regulatory agencies auditing the test
results.
Alpha testing
Alpha
testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site.
Alpha testing is often employed for off-the-shelf software as a form of
internal acceptance testing before the software goes to beta testing.
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions,
are released to a limited audience outside of the programming team
known as beta testers. The software is released to groups of people so
that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).
Functional vs non-functional testing
Functional testing
refers to activities that verify a specific action or function of the
code. These are usually found in the code requirements documentation,
although some development methodologies work from use cases or user
stories. Functional tests tend to answer the question of "can the user
do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security.
Testing will determine the breaking point, the point at which extremes
of scalability or performance lead to unstable execution.
Non-functional requirements tend to be those that reflect the quality of
the product, particularly in the context of the suitability perspective
of its users.
Continuous testing is the process of executing automated tests
as part of the software delivery pipeline to obtain immediate feedback
on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements;
the scope of testing extends from validating bottom-up requirements or
user stories to assessing the system requirements associated with
overarching business goals.
Destructive testing attempts to cause the software or a sub-system to
fail. It verifies that the software functions properly even when it
receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
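A naive fuzzing sketch, assuming a small hypothetical parser whose only documented failure mode is ValueError; random inputs are thrown at it, and any other exception fails the test.

```python
import random
import string

# Naive fuzzing sketch: bombard a tiny hypothetical parser with random
# strings and assert it only fails in the documented way (ValueError).
def parse_pair(text):
    a, b = text.split(",")
    return int(a), int(b)

def test_fuzz_parse_pair():
    rng = random.Random(42)                      # fixed seed: reproducible
    alphabet = string.printable
    for _ in range(1000):
        s = "".join(rng.choice(alphabet)
                    for _ in range(rng.randint(0, 20)))
        try:
            parse_pair(s)
        except ValueError:
            pass                                 # documented failure mode
        # any other exception would escape and fail the test
```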
Performance testing is generally executed to determine how a system
or sub-system performs in terms of responsiveness and stability under a
particular workload. It can also serve to investigate, measure, validate
or verify other quality attributes of the system, such as scalability,
reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. When performed as a non-functional activity, load testing is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
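A toy sketch of a performance check; the workload size and latency budget are illustrative, not recommendations.

```python
import time

# Toy performance-test sketch: measure responsiveness of an operation under
# a given workload and assert it stays within a budget. The operation,
# workload size, and 1-second threshold are all illustrative.
def operation(data):
    return sorted(data)

def test_operation_meets_latency_budget():
    workload = list(range(100_000, 0, -1))   # "large quantity of data"
    start = time.perf_counter()
    operation(workload)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"too slow: {elapsed:.3f}s"
```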
Usability testing checks whether the user interface is easy to use and understand. It is
concerned mainly with the use of the application. This is not a kind of
testing that can be automated; actual human users are needed, being
monitored by skilled UI designers.
Accessibility testing
Accessibility
testing is done to ensure that the software is accessible to persons
with disabilities. Some of the common web accessibility tests are:
Ensuring that the color contrast between the font and the background color is appropriate (see the contrast-check sketch below)
Font size
Alternate texts for multimedia content
Ability to use the system using the computer keyboard in addition to the mouse.
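As an example of automating the first check in this list, the WCAG 2.x contrast ratio between foreground and background colors can be computed directly; the 4.5:1 threshold below is the WCAG AA requirement for normal text.

```python
# Sketch of one automated accessibility check: the WCAG 2.x contrast ratio
# between a font color and its background, per the WCAG definition of
# relative luminance.
def _channel(c8):
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# black on white is the maximum possible contrast, 21:1
assert abs(contrast_ratio((0, 0, 0), (255, 255, 255)) - 21.0) < 1e-9
# mid-gray (#767676) on white passes the 4.5:1 AA threshold for normal text
assert contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5
```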
Security testing
The International Organization for Standardization (ISO) defines security testing as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."
Internationalization and localization
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization
is used to test the ability of an application to be translated to
another language, and make it easier to identify when the localization
process may introduce new bugs into the product.
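A hedged sketch of pseudolocalization: ASCII letters are swapped for accented look-alikes and the string is padded, so untranslated or length-fragile UI text becomes easy to spot before real translation begins.

```python
# Illustrative pseudolocalization: replace ASCII vowels with accented
# look-alikes and pad the string; hard-coded or length-fragile UI text then
# stands out during testing.
ACCENTS = str.maketrans("aeiouAEIOU", "àéîôûÀÉÎÔÛ")

def pseudolocalize(s):
    padded = s.translate(ACCENTS)
    # ~30% padding imitates the growth of many real translations
    return "[" + padded + "·" * max(1, len(s) // 3) + "]"

assert pseudolocalize("Save file") == "[Sàvé fîlé···]"
```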
Globalization testing verifies that the software is adapted for a new culture (such as different currencies or time zones).
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent, if the project is
translated by several people without proper coordination or if the
translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
Untranslated messages in the original language may be left hard coded in the source code.
Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut that has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable, if the font is too small.
A string in the target language may be longer than the software can
handle. This may make the string partly invisible to the user or cause
the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Development Testing is a software development process that involves
the synchronized application of a broad spectrum of defect prevention
and detection strategies in order to reduce software development risks,
time, and costs. It is performed by the software developer or engineer
during the construction phase of the software development lifecycle.
Development Testing aims to eliminate construction errors before code is
promoted to other testing; this strategy is intended to increase the
quality of the resulting software as well as the efficiency of the
overall development process.
Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.
A/B testing is a method of running a controlled experiment to
determine if a proposed change is more effective than the current
approach. Customers are routed to either a current version (control) of a
feature, or to a modified version (treatment) and data is collected to
determine which version is better at achieving the desired outcome.
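One common way to evaluate such an experiment is a two-proportion z-test; below is a standard-library-only sketch with invented conversion counts.

```python
import math

# A/B evaluation sketch: two-proportion z-test on conversion counts.
# The counts are invented for illustration.
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# control: 200/5000 convert; treatment: 260/5000 convert
z, p_value = two_proportion_z(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p_value:.4f}")  # small p suggests a real difference
```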
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing,
generally under normal usage conditions. Typical problems this type of
testing will expose are deadlocks, race conditions and problems with
shared memory/resource handling.
In software testing, conformance testing verifies that a product
performs according to its specified standards. Compilers, for instance,
are extensively tested to determine whether they meet the recognized
standard for that language.
Output comparison testing
Creating a display of expected output, whether as data comparison of text or screenshots of the UI,
is sometimes called snapshot testing or Golden Master Testing. Unlike
many other forms of testing, this cannot detect failures automatically
and instead requires that a human evaluate the output for
inconsistencies.
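A minimal golden-master sketch; the rendered output, file path, and update mechanism are illustrative. Calling the test with update=True re-records the master after a human has reviewed an intentional change.

```python
from pathlib import Path

# Golden-master sketch: compare current output to a stored "golden" file;
# a human reviews and re-records the snapshot when a change is intentional.
GOLDEN = Path("golden/report.txt")        # path is illustrative

def render_report():
    return "total: 15\ncount: 2\n"

def test_matches_golden(update=False):
    output = render_report()
    if update or not GOLDEN.exists():
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_text(output)         # (re)record the master
        return
    assert output == GOLDEN.read_text(), "output drifted from golden master"
```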
Property testing is a testing technique where, instead of asserting
that specific inputs produce specific expected outputs, the practitioner
randomly generates many inputs, runs the program on all of them, and
asserts the truth of some "property" that should be true for every pair
of input and output. For example, every output from a serialization
function should be accepted by the corresponding deserialization
function, and every output from a sort function should be a
monotonically increasing list containing exactly the same elements as
its input.
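In Python, the Hypothesis library (pip install hypothesis) is a widely used analogue of QuickCheck; a short sketch of the sort-function properties just described:

```python
from collections import Counter
from hypothesis import given, strategies as st

# Property-testing sketch: Hypothesis generates many input lists, and the
# stated properties are asserted for every generated input.
@given(st.lists(st.integers()))
def test_sort_properties(xs):
    out = sorted(xs)
    # property 1: output is monotonically increasing
    assert all(a <= b for a, b in zip(out, out[1:]))
    # property 2: output contains exactly the same elements as the input
    assert Counter(out) == Counter(xs)
```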
Property testing libraries allow the user to control the strategy
by which random inputs are constructed, to ensure coverage of
degenerate cases, or inputs featuring specific patterns that are needed
to fully exercise aspects of the implementation under test.
Property testing is also sometimes known as "generative testing"
or "QuickCheck testing" since it was introduced and popularized by the
Haskell library QuickCheck.
Metamorphic testing (MT) is a property-based software testing
technique, which can be an effective approach for addressing the test
oracle problem and test case generation problem. The test oracle problem
is the difficulty of determining the expected outcomes of selected test
cases or of determining whether the actual outputs agree with the
expected outcomes.
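A brief sketch of a metamorphic relation: the exact value of sin(x) for arbitrary x may be hard to state in advance (the oracle problem), but the relation sin(x) = sin(pi - x) must hold for every input, giving testable follow-up cases.

```python
import math
import random

# Metamorphic-testing sketch: instead of an exact expected value for each
# input, assert a relation that must hold between related test cases.
def test_sine_metamorphic_relation():
    rng = random.Random(0)
    for _ in range(1000):
        x = rng.uniform(-100, 100)
        # metamorphic relation: sin(x) == sin(pi - x) for all x
        assert math.isclose(math.sin(x), math.sin(math.pi - x),
                            abs_tol=1e-9)
```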
VCR testing
VCR
testing, also known as "playback testing" or "record/replay" testing,
is a testing technique for increasing the reliability and speed of
regression tests that involve a component that is slow or unreliable to
communicate with, often a third-party API outside of the tester's
control. It involves making a recording ("cassette") of the system's
interactions with the external component, and then replaying the
recorded interactions as a substitute for communicating with the
external system on subsequent runs of the test.
The technique was popularized in web development by the Ruby library vcr.
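A sketch of the same pattern in Python using the vcrpy library (pip install vcrpy requests); the URL and cassette path are illustrative. The first run records the HTTP exchange to the cassette file; later runs replay it without touching the network.

```python
import vcr
import requests

# VCR-style test sketch: the decorator records the HTTP interaction to a
# "cassette" on the first run and replays it on subsequent runs.
@vcr.use_cassette("fixtures/cassettes/example.yaml")
def test_fetches_homepage():
    response = requests.get("https://example.com/")
    assert response.status_code == 200
```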
Teamwork
Roles
In an organization, testers may be in a separate team from the rest of the software development team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers.
In the 1980s, the term software tester started to be used to denote a separate profession.
Notable software testing roles and titles include: test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.
Processes
Organizations that develop software perform testing differently, but there are common patterns.
In waterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
Some contend that the waterfall process allows for testing to
start when the development project starts and to be a continuous process
until the project finishes.
Agile development
Agile software development
commonly involves testing while the code is being written and
organizing teams with both programmers and testers and with team members
performing both programming and testing.
One agile practice, test-driven software development (TDD), is a way of unit testing such that unit-level testing is performed while writing the product code.
Test code is updated as new features are added and failure conditions
are discovered (bugs fixed). Commonly, the unit test code is maintained
with the project code, integrated in the build process, and run on each
build and as part of regression testing. The goal of this continuous integration is to support development and reduce defects.
Even in organizations that separate teams by programming and testing functions, many often have the programmers perform unit testing.
Sample process
The
sample below is common for waterfall development. The same activities
are commonly found in other development models, but might be described
differently.
Requirements analysis: Testing should begin in the requirements phase of the software development life cycle.
During the design phase, testers work to determine what aspects of a
design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
Test execution: Testers execute the software based on the plans and
test documents, then report any errors found to the development team.
This part can be complex if tests must be run without sufficient
programming knowledge.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: Also called defect analysis, this is done by the
development team, usually along with the client, in order to decide
which defects should be assigned, fixed, rejected (i.e., the software
is found to be working properly), or deferred to be dealt with later.
Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team.
Regression testing: It is common to have a small test program built
from a subset of tests, run for each integration of new, modified, or
fixed software, in order to ensure that the latest delivery has not
broken anything and that the software product as a whole still works
correctly (see the sketch after this list).
Test Closure: Once the test meets the exit criteria, key outputs,
lessons learned, results, logs, and documents related to the project
are archived and used as a reference for future projects.
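One way to maintain the regression-testing subset mentioned above is with test markers; the sketch below uses pytest, and the marker name and tests are illustrative:

    import pytest

    @pytest.mark.regression
    def test_existing_behavior_still_works():
        # Part of the small regression subset run on every delivery.
        assert sorted([3, 1, 2]) == [1, 2, 3]

    def test_new_feature():
        # Not in the regression subset; run only in full test passes.
        assert max([3, 1, 2]) == 3

Running pytest -m regression then executes only the marked subset (the marker would be registered in pytest's configuration to avoid warnings).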
Verification and validation
Verification: Have we built the software right? (i.e., does it implement the requirements).
Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).
The terms verification and validation are commonly used
interchangeably in the industry; it is also common to see these two
terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:
Verification is the process of evaluating a system or component
to determine whether the products of a given development phase satisfy
the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during
or at the end of the development process to determine whether it
satisfies specified requirements.
And, according to the ISO 9000 standard:
Verification is confirmation by examination and through
provision of objective evidence that specified requirements have been
fulfilled.
Validation is confirmation by examination and through provision of
objective evidence that the requirements for a specific intended use or
application have been fulfilled.
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements,
mentioned in the definition of validation, are the set of problems,
needs and wants of the stakeholders that the software must solve and
satisfy. Such requirements are documented in a Software Requirements
Specification (SRS). And, the products mentioned in the definition of
verification, are the output artifacts of every phase of the software
development process. These products are, in fact, specifications such as
Architectural Design Specification, Detailed Design Specification, etc.
The SRS is also a specification, but it cannot be verified (at least
not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of
specifications, as just mentioned above, that must be verified. A
specification, as previously explained, is the product of a software
development process phase that receives another specification as input. A
specification is verified successfully when it correctly implements its
input specification. All the specifications can be verified except the
SRS because it is the first one (it can be validated, though). Examples:
The Design Specification must implement the SRS; and, the Construction
phase artifacts must implement the Design Specification.
So, when these words are defined in common terms, the apparent contradiction disappears.
Both the SRS and the software must be validated. The SRS can be
validated statically by consulting with the stakeholders. Nevertheless,
running some partial implementation of the software or a prototype of
any kind (dynamic testing) and obtaining positive feedback from them,
can further increase the certainty that the SRS is correctly formulated.
On the other hand, the software, as a final and running product (not
its artifacts and documents, including the source code), must be
validated dynamically with the stakeholders by executing the software
and having them try it.
Some might argue that, for SRS, the input is the words of
stakeholders and, therefore, SRS validation is the same as SRS
verification. Thinking this way is not advisable as it only causes more
confusion. It is better to think of verification as a process involving a
formal and technical input document.
Software quality assurance
In some organizations, software testing is part of a software quality assurance (SQA) process.
In SQA, software process specialists and auditors are concerned with
the software development process rather than just the artifacts such as
documentation, code and systems. They examine and change the software engineering
process itself to reduce the number of faults that end up in the
delivered software: the so-called defect rate. What constitutes an
acceptable defect rate depends on the nature of the software; a flight
simulator video game would have much higher defect tolerance than
software for an actual airplane. Although there are close links with
SQA, testing departments often exist independently, and there may be no
SQA function in some companies.
Software testing is an activity to investigate software under
test in order to provide quality-related information to stakeholders. By
contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers.
A number of software metrics, or measures, are frequently used to assist in determining the state of the software or the adequacy of the testing.
Artifacts
A software testing process can produce several artifacts. The actual artifacts produced depend on the software development model used and on stakeholder and organisational needs.
A test plan
is a document detailing the approach that will be taken for intended
test activities. The plan may include aspects such as objectives, scope,
processes and procedures, personnel requirements, and contingency
plans.
The test plan could come in the form of a single plan that includes all
test types (like an acceptance or system test plan) and planning
considerations, or it may be issued as a master test plan that provides
an overview of more than one detailed test plan (a plan of a plan). A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact.
Traceability matrix
In software development, a traceability matrix (TM)
is a document, usually in the form of a table, used to assist in
determining the completeness of a relationship by correlating any two baselined documents using a many-to-many relationship comparison. It is often used to map high-level requirements (which often consist of marketing requirements) and detailed requirements of the product to the matching parts of high-level design, detailed design, test plan, and test cases.
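A minimal sketch of a traceability matrix as a requirement-to-test-case mapping, with all identifiers hypothetical:

    # Each requirement maps to the test cases that cover it.
    traceability = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-103"],
        "REQ-003": [],  # a coverage gap
    }

    # Completeness check: list requirements with no covering test case.
    uncovered = [req for req, cases in traceability.items() if not cases]
    print("Requirements lacking test coverage:", uncovered)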
A test case
normally consists of a unique identifier, requirement references from a
design specification, preconditions, events, a series of steps (also
known as actions) to follow, input, output, expected result, and the
actual result. Clinically defined, a test case is an input and an
expected result.
This can be as terse as "for condition x your derived result is y",
although normally test cases describe in more detail the input scenario
and what results might be expected. It can occasionally be a series of
steps (but often steps are contained in a separate test procedure that
can be exercised against multiple test cases, as a matter of economy)
but with one expected result or expected outcome. The optional fields
are a test case ID, test step, or order of execution number, related
requirement(s), depth, test category, author, and check boxes for
whether the test is automatable and has been automated. Larger test
cases may also contain prerequisite states or steps, and descriptions. A
test case should also contain a place for the actual result. These
steps can be stored in a word processor document, spreadsheet, database,
or other common repositories. In a database system, you may also be
able to see past test results, who generated the results, and what
system configuration was used to generate those results. These past
results would usually be stored in a separate table.
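As an illustration, the fields described above could be represented as a simple record type; the field names are invented rather than taken from any particular test-management tool:

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        identifier: str          # unique test case ID
        requirement_refs: list   # references into the design spec
        preconditions: str
        steps: list              # actions to follow
        input_data: str
        expected_result: str
        actual_result: str = ""  # filled in during execution
        automated: bool = False

    tc = TestCase(
        identifier="TC-101",
        requirement_refs=["REQ-001"],
        preconditions="A user account exists",
        steps=["Open login page", "Enter credentials", "Submit"],
        input_data="alice / secret",
        expected_result="Dashboard is shown",
    )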
Test script
A test script
is a procedure or programming code that replicates user actions.
Initially, the term was derived from the product of work created by
automated regression test tools. A test case serves as a baseline from
which to create test scripts using a tool or a program.
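A sketch of a test script that replicates user actions in a browser, written against the Selenium library for Python (the URL and element IDs are hypothetical, and a locally installed browser driver is assumed):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Replays the actions of a user logging in.
    driver = webdriver.Firefox()
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("alice")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()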
Test suite
In software development, a test suite, less commonly known as a validation suite, is a collection of test cases that are intended to be used to test a software program to show that it has some specified set of behaviors.
A test suite often contains detailed instructions or goals for each
collection of test cases and information on the system configuration to
be used during testing. A group of test cases may also contain
prerequisite states or steps and descriptions of the following tests.
In most cases, multiple sets of values or data are used to test the
same functionality of a particular feature. All the test values and
changeable environmental components are collected in separate files and
stored as test data. It is also useful to provide this data to the
client along with the product or project. There are techniques for
generating test data.
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
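For example, Python's built-in unittest module models a test suite as a collection of test cases that run together; the tests below are trivial placeholders:

    import unittest

    class LoginTests(unittest.TestCase):
        def test_password_minimum_length(self):
            self.assertGreaterEqual(len("secret"), 6)

    class LogoutTests(unittest.TestCase):
        def test_session_is_cleared(self):
            self.assertEqual(dict(), {})

    # Group related test cases into one suite so they run as a unit.
    suite = unittest.TestSuite()
    loader = unittest.defaultTestLoader
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(LogoutTests))

    if __name__ == "__main__":
        unittest.TextTestRunner().run(suite)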
Test run
A
test run is a collection of test cases or test suites that the user is
executing and comparing the expected with the actual results. Once
complete, a report of all executed tests may be generated.
Certifications
Several certification programs exist to support the professional
aspirations of software testers and quality assurance specialists. A few
practitioners argue that the testing field is not ready for
certification, as mentioned in the controversy section.
Controversy
Agile vs. traditional
Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has grown in popularity since the early 2000s, mainly in commercial circles, whereas government and military software providers use this methodology alongside traditional test-last models (e.g., in the Waterfall model).
Manual vs. automated testing
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.
Test automation can then be considered a way to capture and implement
the requirements. As a general rule, the larger the system and the
greater its complexity, the greater the ROI of test automation.
Also, the investment in tools and expertise can be amortized over
multiple projects with the right level of knowledge sharing within an
organization.
Is the existence of the ISO 29119 software testing standard justified?
Significant opposition has formed out of the ranks of the
context-driven school of software testing about the ISO 29119 standard.
Professional testing associations, such as the International Society for
Software Testing, have attempted to have the standard withdrawn.
Some practitioners declare that the testing field is not ready for certification
No certification now offered actually requires the applicant to show
their ability to test software. No certification is based on a widely
accepted body of knowledge. Certification itself cannot measure an
individual's productivity, skill, or practical knowledge, and cannot
guarantee their competence or professionalism as a tester.
Studies used to show the relative expense of fixing defects
There are opposing views on the applicability of studies used to
show the relative expense of fixing defects, depending on when the
defects are introduced and detected. For example:
It is commonly believed that the earlier a defect is found, the
cheaper it is to fix. The following table shows the relative cost of
fixing a defect depending on the stage in which it was introduced and
the stage in which it was detected.
For example, if a problem in the requirements is found only
post-release, then it would cost 10–100 times more to fix than if it had
already been found by the requirements review. With the advent of
modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
Cost to fix a defect (rows: time introduced; columns: time detected)

Time introduced   Requirements   Architecture   Construction   System test   Post-release
Requirements      1×             3×             5–10×          10×           10–100×
Architecture      –              1×             10×            15×           25–100×
Construction      –              –              1×             10×           10–25×
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The "smaller projects" curve turns out to be from only two teams of
first-year students, a sample size so small that extrapolating to
"smaller projects in general" is totally indefensible. The GTE study
does not explain its data, other than to say it came from two projects,
one large and one small. The paper cited for the Bell Labs "Safeguard"
project specifically disclaims having collected the fine-grained data
that Boehm's data points suggest. The IBM study (Fagan's paper) contains
claims that seem to contradict Boehm's graph and no numerical results
that clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing
for "Making Software" in 2010, and there he cited the original 1976
article. There exists a large study conducted at TRW at the right time
for Boehm to cite it, but that paper doesn't contain the sort of data
that would support Boehm's claims.