The history of genetics can be represented on a timeline of events from the earliest work in the 1850s, to the DNA era starting in the 1940s, and the genomics era beginning in the 1970s.
Early timeline
1856–1863: Mendel studied the inheritance of traits
between generations based on experiments involving garden pea plants.
He deduced that there is a certain tangible essence that is passed on
between generations from both parents. Mendel established the basic principles of inheritance, namely, the principles of dominance, independent assortment, and segregation.
1869: Friedrich Miescher discovers a weak acid in the nuclei of white blood cells that today we call DNA. In 1871 he isolated cell nuclei from white blood cells collected on used bandages, treated them with pepsin (an enzyme which breaks down proteins), and recovered an acidic substance which he called "nuclein".
1889: Richard Altmann purified what he believed to be protein-free DNA. However, the nucleic acid was not as pure as he had assumed; it was later determined to contain a large amount of protein.
1889: Hugo de Vries postulates that "inheritance of specific traits in organisms comes in particles", naming such particles "(pan)genes".
1902: Archibald Garrod
discovered inborn errors of metabolism. An explanation for epistasis is
an important manifestation of Garrod's research, albeit indirectly.
When Garrod studied alkaptonuria, a disorder that makes urine quickly
turn black due to the presence of gentisate, he noticed that it was
prevalent among populations whose parents were closely related.
1903: Walter Sutton and Theodor Boveri independently hypothesize that chromosomes, which segregate in a Mendelian fashion, are hereditary units; see the chromosome theory. Boveri was studying sea urchins when he found that all the chromosomes in the sea urchins had to be present for proper embryonic development
to take place. Sutton's work with grasshoppers showed that chromosomes
occur in matched pairs of maternal and paternal chromosomes which
separate during meiosis. He concluded that this could be "the physical basis of the Mendelian law of heredity."
1908: G.H. Hardy and Wilhelm Weinberg proposed the Hardy–Weinberg equilibrium model, which describes the frequencies of alleles in the gene pool of a population as constant and at a state of equilibrium from generation to generation, under certain specific conditions, unless specific disturbing influences are introduced.
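The Hardy–Weinberg model can be sketched numerically. The following is a minimal illustration in Python; the allele frequency p = 0.7 is an arbitrary example value, not from the article:

```python
# Hardy-Weinberg equilibrium for one biallelic locus: with allele
# frequencies p (A) and q = 1 - p (a), the expected genotype
# frequencies are p^2 (AA), 2pq (Aa) and q^2 (aa), and they remain
# constant across generations absent disturbing influences.

def hardy_weinberg(p):
    """Return expected genotype frequencies (AA, Aa, aa)."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

# Arbitrary example frequency p = 0.7 (illustrative assumption):
aa, het, rec = hardy_weinberg(0.7)
assert abs(aa + het + rec - 1.0) < 1e-12  # frequencies sum to 1
print(round(aa, 2), round(het, 2), round(rec, 2))  # 0.49 0.42 0.09
```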
1910: Thomas Hunt Morgan shows that genes reside on chromosomes while determining the nature of sex-linked traits by studying Drosophila melanogaster.
He determined that the white-eyed mutant was sex-linked based on
Mendel's principles of segregation and independent assortment.
1911: Alfred Sturtevant, one of Morgan's collaborators, invented the procedure of linkage mapping which is based on the frequency of crossing-over.
1913: Alfred Sturtevant makes the first genetic map, showing that chromosomes contain linearly arranged genes.
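Sturtevant's procedure rests on a simple calculation: the recombination frequency between two linked genes approximates their distance along the chromosome, and these distances add up along a linear map. A short sketch in Python, with hypothetical offspring counts chosen purely for illustration:

```python
# Linkage mapping: 1% recombinant offspring = 1 map unit (later named
# the centimorgan), and distances along a linear map are additive.

def map_distance(recombinants, total):
    """Recombination frequency expressed in map units (centimorgans)."""
    return 100.0 * recombinants / total

# Hypothetical counts, for illustration only:
d_ab = map_distance(50, 1000)   # genes A-B: 5.0 map units
d_bc = map_distance(120, 1000)  # genes B-C: 12.0 map units
d_ac = map_distance(170, 1000)  # genes A-C: 17.0 map units

# A-B plus B-C equals A-C, implying the linear gene order A, B, C.
assert abs(d_ab + d_bc - d_ac) < 1e-9
print(d_ab, d_bc, d_ac)  # 5.0 12.0 17.0
```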
1920: Lysenkoism begins. Lysenkoists claimed that hereditary factors reside not only in the nucleus but also in the cytoplasm, which they called living protoplasm.
1930s–1950s: Joachim Hämmerling conducted experiments with Acetabularia
in which he began to distinguish the contributions of the nucleus and
the cytoplasm substances (later discovered to be DNA and mRNA,
respectively) to cell morphogenesis and development.
1931: Crossing over is identified as the cause of recombination; the first cytological demonstration of this crossing over was performed by Barbara McClintock and Harriet Creighton.
1933: Jean Brachet, while studying virgin sea urchin eggs, suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm.
At the time, "yeast nucleic acid" (RNA) was thought to occur only in
plants, while "thymus nucleic acid" (DNA) only in animals. The latter
was thought to be a tetramer, with the function of buffering cellular
pH.
1943: Luria–Delbrück experiment:
this experiment showed that genetic mutations conferring resistance to
bacteriophage arise in the absence of selection, rather than being a
response to selection.
1947: Salvador Luria discovers reactivation of irradiated phage, stimulating numerous further studies of DNA repair processes in bacteriophage and other organisms, including humans.
1950: Erwin Chargaff determined the pairing pattern of nitrogenous bases. Chargaff and his team studied the DNA from multiple organisms and found three things (also known as Chargaff's rules). First, the concentration of adenine always matches that of thymine. Second, the concentration of guanine always matches that of cytosine. Lastly, the total proportion of purines (adenine and guanine) corresponds to the total proportion of pyrimidines (thymine and cytosine).
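Chargaff's rules follow directly from complementary base pairing, which can be verified on any double-stranded sequence. A small Python sketch; the sequence is made up for the example:

```python
# In double-stranded DNA, A pairs with T and G pairs with C, so the
# counts of A and T (and of G and C) are equal, and total purines
# (A + G) equal total pyrimidines (T + C). Sequence is illustrative.
strand = "ATGCGTTAAC"
complement = strand.translate(str.maketrans("ATGC", "TACG"))
duplex = strand + complement  # all bases in the double helix

counts = {base: duplex.count(base) for base in "ATGC"}
assert counts["A"] == counts["T"] and counts["G"] == counts["C"]
assert counts["A"] + counts["G"] == counts["T"] + counts["C"]
print(counts)  # {'A': 6, 'T': 6, 'G': 4, 'C': 4}
```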
1955: Joe Hin Tjio,
while working in Albert Levan's lab, determined the number of
chromosomes in humans to be 46. Tjio was attempting to refine an
established technique to separate chromosomes onto glass slides by
conducting a study of human embryonic lung tissue, when he saw that
there were 46 chromosomes rather than 48. This revolutionized the world
of cytogenetics.
1957: Arthur Kornberg with Severo Ochoa synthesized DNA in a test tube after discovering the means by which DNA is duplicated. Their work with DNA polymerase I established the requirements for in vitro synthesis of DNA. Kornberg and Ochoa were awarded the Nobel Prize in 1959 for this work.
1957/1958: Robert W. Holley, Marshall Nirenberg, Har Gobind Khorana proposed the nucleotide sequence of the tRNA molecule. Francis Crick
had proposed the requirement of some kind of adapter molecule and it
was soon identified by Holley, Nirenberg and Khorana. These scientists helped explain the link between a messenger RNA nucleotide sequence and a polypeptide sequence. In the experiment, they purified tRNAs from yeast cells, and they were awarded the Nobel Prize in 1968.
1960: Jacob and collaborators discover the operon, a group of genes whose expression is coordinated by an operator.
1961: Francis Crick and Sydney Brenner discovered frameshift mutations. In the experiment, proflavin-induced mutations of the T4 bacteriophage gene (rIIB) were isolated. Proflavin
causes mutations by inserting itself between DNA bases, typically
resulting in insertion or deletion of a single base pair. The mutants
could not produce functional rIIB protein. These mutations were used to demonstrate that three sequential bases of
the rIIB gene's DNA specify each successive amino acid of the encoded
protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid.
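The triplet reading frame, and why a single-base insertion is so disruptive, can be shown in a few lines of Python. The codon table below is a tiny, deliberately incomplete excerpt following the standard genetic code, used only for illustration:

```python
# Triplet reading of the genetic code: three bases per codon, so a
# single-base insertion (as proflavin causes) shifts every codon
# downstream of the insertion point.
CODONS = {"ATG": "Met", "TTT": "Phe", "GGA": "Gly"}  # partial table

def translate(dna):
    """Read a DNA string three bases at a time."""
    return [CODONS.get(dna[i:i + 3], "?") for i in range(0, len(dna) - 2, 3)]

print(translate("ATGTTTGGA"))   # ['Met', 'Phe', 'Gly']
# One extra T inserted after the second codon shifts the frame:
print(translate("ATGTTTTGGA"))  # ['Met', 'Phe', '?'], downstream codons change
```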
1966: Marshall W. Nirenberg, Philip Leder, Har Gobind Khorana
cracked the genetic code by using RNA homopolymer and heteropolymer
experiments, through which they determined which triplets of RNA were translated into which amino acids.
1969: Mary Lou Pardue and Joseph G. Gall perform molecular hybridization of radioactive DNA to the DNA of cytological preparations (in situ hybridization).
In the late 1970s: nonisotopic methods of nucleic acid labeling were
developed. The subsequent improvements in the detection of reporter
molecules using immunocytochemistry and immunofluorescence, in
conjunction with advances in fluorescence microscopy and image analysis,
have made the technique safer, faster and more reliable.
1980: Paul Berg, Walter Gilbert and Frederick Sanger
developed methods of mapping the structure of DNA. In 1972, recombinant
DNA molecules were produced in Paul Berg's Stanford University
laboratory. Berg was awarded the 1980 Nobel Prize
in Chemistry for constructing recombinant DNA molecules that contained
phage lambda genes inserted into a small circular DNA molecule.
1980: Stanley Norman Cohen and Herbert Boyer received first U.S. patent for gene cloning, by proving the successful outcome of cloning a plasmid
and expressing a foreign gene in bacteria to produce a "protein foreign
to a unicellular organism." These two scientists were able to produce proteins such as HGH, erythropoietin and insulin. The patent earned about $300 million in licensing royalties for Stanford.
1982: The U.S. Food and Drug Administration (FDA) approved the release of the first genetically engineered human insulin, originally biosynthesized using recombinant DNA methods by Genentech in 1978. Once approved, the cloning process led to mass production of Humulin (under license by Eli Lilly & Co.).
1983: Barbara McClintock was awarded the Nobel Prize in Physiology or Medicine for her discovery of mobile genetic elements. McClintock studied transposon-mediated mutation and chromosome breakage in maize and published her first report in 1948 on transposable elements or transposons. She found that transposons
were widely observed in corn, although her ideas received little attention until the 1960s and 1970s, when the same phenomenon was discovered in bacteria and Drosophila melanogaster.
Display of VNTR allele lengths on a chromatogram, a technology used in DNA fingerprinting.
1985: Alec Jeffreys announced his DNA fingerprinting method. Jeffreys was studying DNA variation and the evolution of gene families in order to understand disease-causing genes. In an attempt to develop a process to isolate many minisatellites at once using chemical probes, Jeffreys took X-ray films of the DNA for examination and noticed that minisatellite regions differ greatly from one person to another. In the DNA fingerprinting technique, a DNA sample is digested with restriction endonucleases and the fragments are separated by electrophoresis, producing a banding pattern on the gel distinct to each individual.
1986: Jeremy Nathans found genes for color vision and color blindness, working with David Hogness, Douglas Vollrath and Ron Davis as they were studying the complexity of the retina.
1987: Yoshizumi Ishino discovers and describes part of a DNA sequence which will later be called CRISPR.
1989: Thomas Cech discovered that RNA can catalyze chemical reactions, making for one of the most important breakthroughs in molecular
genetics, because it elucidates the true function of poorly understood
segments of DNA.
1992: American and British scientists unveiled a technique for testing in-vitro embryos (preimplantation genetic diagnosis) for genetic abnormalities such as cystic fibrosis and hemophilia.
1993: Phillip Allen Sharp and Richard Roberts are awarded the Nobel Prize for the discovery that genes in DNA are made up of introns and exons. According to their findings, not all the nucleotides on the RNA strand (the product of DNA transcription) are used in translation. The intervening sequences in the RNA strand are first spliced out, so that only the RNA segments left behind after splicing are translated into polypeptides.
1994: The first breast cancer gene is discovered. BRCA1 was identified by researchers at the King laboratory at UC Berkeley in 1990 but was first cloned in 1994. BRCA2, the second key gene in the manifestation of breast cancer, was discovered later in 1994 by Professor Michael Stratton and Dr. Richard Wooster.
1995: The genome of the bacterium Haemophilus influenzae is the first genome of a free-living organism to be sequenced.
1996: Alexander Rich discovered Z-DNA, a transient form of DNA that is in some cases associated with DNA transcription. The Z-DNA form is more likely to occur in regions of DNA rich in cytosine and guanine with high salt concentrations.
2001: Francisco Mojica and Ruud Jansen propose the acronym CRISPR to describe a family of bacterial DNA sequences that can be used to specifically change genes within organisms.
Francis Collins announces the successful completion of the Human Genome Project in 2003.
2003: Successful completion of the Human Genome Project, with 99% of the genome sequenced to 99.99% accuracy.
2003: Paul Hebert introduces the standardisation of molecular species identification and coins the term 'DNA Barcoding', proposing Cytochrome Oxidase 1 (CO1) as the DNA Barcode for Animals.
2004: Merck introduced a vaccine against human papillomavirus which promised to protect women against infection with HPV 16 and 18, strains which inactivate tumor suppressor genes and together cause 70% of cervical cancers.
2007: Michael Worobey traced the evolutionary origins of HIV by analyzing its genetic mutations, which revealed that HIV infections had occurred in the United States as early as the 1960s.
2007: The Barcode of Life Data System (BOLD) is set up as an international reference library for molecular species identification.
2008: Houston-based Introgen developed Advexin (FDA approval pending), the first gene therapy for cancer and Li-Fraumeni syndrome, utilizing a form of adenovirus to carry a replacement gene coding for the p53 protein.
2009: The Consortium for the Barcode of Life (CBoL) Plant Working Group propose rbcL and matK as the dual barcode for land plants.
2011: Fungal Barcoding Consortium propose Internal Transcribed Spacer region (ITS) as the Universal DNA Barcode for Fungi.
2012: The flora of Wales is completely barcoded, and reference
specimens stored in the BOLD systems database, by the National Botanic
Garden of Wales.
2016: A genome is sequenced in outer space for the first time, with NASA astronaut Kate Rubins using a MinION device aboard the International Space Station.
The commodification of nature
is an area of research within critical environmental studies that is
concerned with the ways in which natural entities and processes are made
exchangeable through the market, and the implications thereof.
Most researchers who employ a commodification of nature framing invoke a Marxian conceptualization of commodities as "objects produced for sale on the market" that embody both use and exchange value. Commodification itself is a process by which goods and services not produced for sale are converted into an exchangeable form. It involves multiple elements, including privatization, alienation, individuation, abstraction, valuation and displacement.
As capitalism expands in breadth and depth, more and more things
previously external to the system become “internalized,” including
entities and processes that are usually considered "natural." Nature,
as a concept, however, is very difficult to define, with many layers of
meaning, including external environments as well as humans themselves. Political ecology and other critical conceptions draw upon strands within Marxist geography that see nature as "socially produced," with no neat boundary separating the "social" from the "natural." Still, the commodification of entities and processes that are
considered natural is viewed as a "special case" based on nature's
biophysical materiality, which "shape[s] and condition[s] trajectories of commodification."
Origins and development
Classical liberalism and enclosure
The commodification of nature has its origins in the rise of capitalism. In England and later elsewhere, "enclosure" involved attacks upon and eventual near-elimination of the commons—a long, contested and frequently violent process Marx referred to as "primitive accumulation."
Classical liberalism,
the ideological aspect of this process, was closely bound to questions
of the environment. Privatization was presented as "more conducive to
the careful stewardship of natural resources than the commons" by thinkers like Bentham, Locke and Malthus. The neo-Malthusian discourse of Garrett Hardin's "Tragedy of the Commons"
(1968) parallels this perspective, reconceptualizing public goods as
"scarce commodities" requiring either privatization or strong state
control.
Ecology Against Capitalism
As Foster points out in Ecology Against Capitalism, the environment is not a commodity, as most things are treated under capitalism; it is the biosphere that sustains all life that we know of. Nevertheless, our society treats it as a source of capitalist value: a price is put on the lumber in a forest, on the quality of water in a river or stream, or on the minerals available underground. These ways of pricing the ecosystem tend to omit the cost of exploiting it, which can cause further damage when the externalities of business activity are not taken into consideration. One way to address this problem is taxation that raises the cost of environmental damage. A carbon tax, for example, would help society move off fossil fuels and toward renewables much faster, a step many scientists and experts agree is needed to delay or even prevent man-made climate change. Deregulation of governmental programs such as the EPA and other environmental agencies may be good for business, but it does not serve the people who must live on a more polluted earth.
Capitalist expansion
Marxists define capitalism as a socio-economic system whose central goal is the accumulation
of more wealth through the production and exchange of commodities.
While the commodity form is not unique to capitalism, in it economic
production is motivated increasingly by exchange. Competition provides constant pressure for innovation and growth in a
"restless and unstable process," making the system expansionary and
"tendentially all-encompassing."
Through market globalization, the tendency Marx described in the Communist Manifesto
in which "[t]he need of a constantly expanding market for its products
chases the bourgeoisie over the entire surface of the globe," capitalism converts nature into "an appendage of the production process." As Neil Smith
argues, "[n]o part of the earth’s surface, the atmosphere, the oceans,
the geological substratum, or the biological superstratum are immune
from transformation by capital."
Neoliberal nature
Since the late 1980s, an ideology of "market environmentalism" has gained prominence within environmental policy. Such a perspective is based in neoclassical economic theory, which sees degradation as a result of the absence of prices in environmental goods. Market environmentalism gained widespread acceptance through the rise
of neoliberalism, an approach to human affairs in which the "free market" is given priority and money-mediated relations are seen as the best way to deliver services.
A neoliberal approach constructs nature as a "world currency,"
valued in international markets and given "the opportunity to earn its
own right to survive." This "selling nature to save it" approach requires economic valuation — either indirectly, as with cost-benefit analysis and contingent valuation, or through direct commodification. Critics of neoliberal environmental policy argue that this reduces the
importance of species survival "into a price whose rise or fall is
entangled with bets on their susceptibility to irreversible loss,
underscored by a calculus whereby species value rises with rarity, or
greater risk of extinction". Thus, neoliberal interventions like ecotourism and bioprospecting are
viewed by critics as ways of forcing nature to earn its right to survive
in the global marketplace.
While commodification efforts are propelled in large part by private firms seeking new areas of investment and avenues for the circulation of capital, there are also explicit policy prescriptions for privatization and market exchange of resources, production byproducts and processes as the best means to rationally manage and conserve the environment.
The neoliberal commodification of nature and its exploitation in the global south for the profits of the global north is also known as ecological imperialism, whereby ecological racism is understood as part of ecological imperialism.
Stretching and deepening
The
commodification of nature occurs through two distinct "moments" as
capitalization "stretches" its reach to include greater distances of
space and time, and "deepens" to penetrate into more types of goods and
services. External nature becomes an "accumulation strategy" for capital, through traditional examples like mining and agriculture as well as new "commodity frontiers" in bioprospecting and ecotourism.
David Harvey sees this as "the wholesale commodification of nature in all its forms," a "new wave of ‘enclosing the commons’" that employs environmentalism in the service of the rapid expansion of capitalism. This "accumulation by dispossession" releases assets at very low or zero cost, providing immediate profitability and counteracting overaccumulation.
Aspects of commodification
At
the most abstract level, commodification is a process through which
qualitatively different things are made equivalent and exchangeable
through the medium of money. By taking on a general quality of exchange value, they become commensurable.
Commodification turns on this apparent dissolution of qualitative
difference and its “renegotiation,” as commodities are standardized in
order to maintain a constant identity across space and time.
Commodity status is not something intrinsic to a natural entity, but is rather an assigned quality, brought about through an active process. The conversion of a whole class of goods or services necessitates changes in the way nature is conceptualized and discursively represented.
There is no "single path" to commodification. Noel Castree
stresses that commodification in fact involves several interrelated
aspects, or "relational moments," that should not be confused or
conflated as they can be employed independently of each other.
Element: Meaning
Privatization: Assigning of legal title over a commodity to a particular actor
Alienability: Capacity of a given commodity to be physically and morally separated from sellers
Individuation: Separating a commodity from its supporting context through legal and material boundaries
Abstraction: Treating individual things as equivalent based on classifiable similarities
Valuation: Monetizing the value of a commodity
Displacement: Spatiotemporal separation, obscuring origins and relations
Privatization is the assigning of legal title to an entity or process. A commodity needs to be owned, either by an individual or a group, in order to be traded. Privatization of natural entities can entail enclosure or the representation thereof (as with intellectual property rights),
and represents a shift in social relations, changing rights of access,
use and disposal as things move from communally-, state- or unowned
modes into private hands.
Alienability is the capacity of a given commodity
to be separated, physically and morally, from its seller. If a commodity
is not alienable, it cannot be exchanged and is thus shielded from the
market. For example, human organs might be privatized (owned by their bearer) but very rarely would they be considered alienable.
Individuation is the representational and physical
act of separating a commodity from its supporting context through legal
and/or material boundaries. This could involve "splitting" an ecosystem
into legally-defined and tradable property rights to specific services
or resources.
Abstraction is the assimilation of a given thing into a broader type or process, the transformation of particular things into classes. Through functional abstraction, "wetlands" are constructed as a generic category despite the uniqueness of physical sites, and different gases and activities are equated through carbon markets. Through spatial abstraction, things in one place are treated as the same as things located elsewhere, so that both can form part of the same market.
Valuation is the manifestation of all expressions of worth (aesthetic, practical, ethical, et cetera)
through a single exchange value. Monetization is thus foundational to
capitalism, rendering things commensurable and exchangeable, allowing
for the separation of production, circulation and consumption over great
gulfs of time and space.
Displacement involves something appearing as
"something other than itself." Commodities might be better thought of as
"socio-natural relations" than reified
as things "in and of themselves," but through spatio-temporal
separation of producers and consumers, the histories and relations of
commodities become obscured. This is Marx's commodity fetishism, the "making invisible" of the social relationships and embeddedness of production.
Problems with commodification
Critics
see environmental degradation as stemming from these processes of
commodification, and generally include at least implicit criticism of
one or more aspect. There appear to be three broad "problem areas" from
which the commodification of nature is critiqued: practical, in terms of whether or not nature can be properly made into a commodity; moral, in terms of the ethical implications of commodification; and consequential, in terms of the effects of commodification on nature itself.
Practical problems
Much
of the literature relates commodification of nature to the issue of
materiality—the significance of biophysical properties and context. The
qualitative differences of a heterogeneous biophysical world are seen to
be analytically and practically significant, sources of
unpredictability and resistance to human intention that also shape and
provide opportunities for capital circulation and accumulation.
The tangible non-human world thus affects the construction of
social and economic relations and practice, inscribing ecology in the
dynamics of capital. While some "natures" are readily subsumed by
capitalism, others "resist" complete commodification, displaying a form
of "agency." The ecological characteristics of marine fish, for example, affect the forms that privatization, industry structure and regulation can take. Water, also, does not commodify easily due to its physical properties,
which leads to differentiation in its governing institutions.
The demarcation and pricing of nature-based commodities is thus
problematic. Divisibility and exclusion are difficult, as it is often
not possible to draw clean property rights around environmental services
or resources. Likewise, pricing is a problem as many species, landscapes and services are unique or otherwise irreplaceable and incommensurable. Their monetary values are thus in many ways arbitrary, as they do not
follow changes in quality or quantity but rather social preference,
failing to convey "real" ecological value or reasons for conservation.
Moral difficulties
A
single monetary value also denies the multiplicity of values which
could be attributed to nature — non-monetary systems of cultural and
social importance. The environment can express relations between generations as a sort of heritage. Livelihood, territorial rights and "sacredness"
poorly translate into prices, and dividing a communal-social value — a
forest, for instance — into private property rights can undermine the
relations and identity of a community.
Neoliberal policies have been implicated in greatly altered
patterns of access and use. Markets generally deal poorly with issues of
procedural fairness and equitable distribution, and critics see commodification as producing greater levels of inequality in power and participation while reinforcing existing vulnerabilities. Ecosystem benefits might be considered "normative public goods" — even when commodified, there is a sense that individuals ought not to be excluded from access. When water privatization prices people out, for instance, a sense of use rights inspires protest. While neoliberal approaches are often presented as neutral or objective, they disguise highly political approaches to resources and the interests and power of certain actors.
Problematic consequences
Through commodification, natural entities and services become vehicles for the realization of profit, subject to the pressures of the market where efficiency overrides other concerns. With climate commodities, the profit motive incentivizes buyers and sellers to ignore the steady erosion of the climate mitigation goal. Market exchange is "reason-blind," but without rational assessment of different strategies and the
ecological importance of particular natural entities, commodification
cannot effectively deliver on conservation.
Harvey thus declares that there is something "inherently anti-ecological" about capitalist commodification. It ignores and simplifies complex relations, obscuring origins and narrowing things to a single service or standard unit.
The treatment of things as the same for a particular end — either
profit or a single utility — leads to a homogenization and
simplification of the biophysical. As governments and private firms seek
to maximize carbon content for emissions markets, they invest
preferably in tree plantations over complex forest ecosystems, reducing species diversity and density and triggering domino effects on processes such as water flow.
The neglect of relational aspects also ignores the emergent
and embedded character of ecosystem functions. Components are
frequently dependent on each other and the result of interactions
between biotic
and non-biotic factors across space and at multiple levels. Alienation
and individuation may thus be counterproductive to the provision of
ecosystem services, and veils human perception of what an ecosystem is
and how it functions—and consequently how to best conserve and repair
it. John Bellamy Foster argues that neglect of such relational aspects is a result of economic reductionism. This reductionism leads to an inefficiency in promoting biodiversity
since as ecosystems are simplified into more basic commodities they can
no longer support as diverse a set of organisms as they could
pre-commodification. This creates a concern that the commodification of
nature lends itself toward undermining biodiversity through its pursuit
of attaching a value to nature.
Karl Polanyi voiced this concern when addressing the concept of treating nature as a commodity: if nature were treated as a commodity, it would be reduced to its base parts and destroyed. Polanyi highlighted many of the concerns that contemporary environmentalists hold by noting that nature's commodification would lead to its pollution and overuse, and would eventually imperil human life.
Crisis and resistance
Incomplete capitalization and the fictitious commodity
When
confronted with natural "barriers to accumulation," capitalists attempt
to overcome them through technical and social innovation. This often involves the modification of nature to fit the needs of
production and exchange, allowing for fuller realization of profits.
Nature is "subsumed" to capitalist accumulation, losing its
"independent" capacity and approaching "the archetype of a ‘pure’
commodity."
However, as nature becomes "rationalized" and internalized, increasing the control of capitalists over exchange, production and distribution, a new contradiction emerges. Capitalist penetration into natural
commodities can never be complete, because a certain amount of
production, by definition, takes place prior to human intervention. Because natural entities and processes do not require capital or labor
to be produced, and their social, cultural and/or ecological value exceeds the market value placed upon them, they are considered pseudo- or fictitious commodities. This basic fictitiousness is the origin of the material contradictions that arise when natural commodities are treated as if they were "true" commodities, as completely privatizable, alienable, separable, et cetera.
Possible consequences of commodifying nature
Many scholars believe that ecology and capitalism are at odds with one another regarding climate change. As environmental economics is a relatively new field of study, and
capitalism a significantly older economic system, radical change of
current capitalist systems is highly unlikely while internalization of
natural resources into the economy is much more feasible. John Bellamy Foster
believes that the commodification of nature might be more dangerous than the impending climate change and ecological disaster themselves. Foster fears that commodification of nature might lead to a system that favors economy over ecology (endangering natural resources) and promotes a form of neocolonialism, one that acknowledges the elements of capitalism, globalization, and cultural imperialism but disregards the idea of colonialism altogether.
Degradation of resources, underproduction of conditions
Because natural entities are fictitious commodities with origins outside of capitalist production, their value, counter to the neoclassical assumption, cannot be fully accounted for in monetary terms, and there is a resultant tendency toward the overexploitation and "underproduction" of nature.
Natural entities that are commodified are subjected to the
competitive drive for accumulation. Capitalism is "ecologically
irrational," with a systematic tendency to overexploit its natural
resource base. At the same time, what O’Connor terms the "conditions of production"
(all the phenomena upon which capitalism depends but is unable to
produce itself, including environmental conditions and processes) are
subjected to indiscriminate degradation as they cannot be fully commodified. This is the "second contradiction" of capitalism, between the relations and forces of production and its conditions. Capitalism undermines its own production system, "producing its own scarcity."
Reclaiming the commons?
Recruiting
nature into relations of capitalist exchange "incites a good deal of
push back," as these entities and services "matter a great deal to
ordinary people." Social needs compete politically for access and control of an increasingly commodified nature, and as price is insufficient to resolve these competing claims, counter-movements emerge, expressing the "crisis tendencies" of capitalist nature through socio-political struggles over representation and access.
Protest movements, transnational coalitions, instances of
alternative practices and counter-discourses all fall within a broad
tent of resistance struggles to "reclaim the commons." This can be seen as Polanyi's "double movement," in which tendencies toward and against market coordination interact, based in a rejection of the treatment of the environment as alienable market goods.
Specific examples in modern society
While numerous natural resources are capitalized upon all across the world, several examples of the commodification of nature stand out as more prevalent or larger in scale and scope.
Emissions trading
Emissions trading, commonly referred to as cap and trade, embodies the commodification of nature in that it allows pollution and emissions to be traded within a set limit for a given environment. Rather than outright prohibiting or permitting pollution and other negative externalities, cap and trade allows members of an industry to buy and sell units of emission, with a maximum set for the industry as a whole.
While views differ on whether emissions trading is effective in cutting emissions or pollution, the concept takes a company's or individual's emissions and presents them as something that can be bought and sold on a specialized market.
Drinking water
As capitalism has spread in leaps and bounds, so too has its reach over previously universal resources; one such resource is drinking water. Water, a resource fundamental to human survival, is now a multibillion-dollar industry: something that was once completely free and public has been turned into a privatized service. One modern example of water commodification is the ongoing conflict in Flint, Michigan.
Petroleum
As petroleum has come to be used for fuel and for various other mechanical and transportation purposes, demand for the natural resource has skyrocketed. As a result, an economic industry has formed around its extraction and sale. By extension, many other industries also rely on the resource, such as the automotive industry and any business that depends on transportation.
Oil is just one of many natural resources taken from the environment to be sold in markets of varying size and influence across the globe. What sets this resource apart from others, however, is that so many other industries rely on oil that it has become one of the most sought-after resources in the world.
The nature–culture divide is the notion of a dichotomy between humans and the environment. It is a theoretical foundation of contemporary anthropology that considers whether nature and culture function separately from one another, or if they are in a continuous biotic relationship with each other.
In East Asian society, nature and culture are conceptualized as dichotomous (separate and distinct domains of reference). Some researchers consider culture to be "man's secret adaptive weapon"
in the sense that it is the core means of survival. It has been
observed that the terms "nature" and "culture" cannot necessarily be
translated into non-western languages, for example, the Native American scholar John Mohawk
utilizes the term nature to describe "everything that supports life on
the planet," specifically when discussing the limits of science to ever
fully understand nature's complexity.
There is an idea that small-scale societies can have a more symbiotic relationship with nature, and that less symbiotic relations with nature limit small-scale communities' access to water and food resources. It has also been argued that the contemporary man–nature divide manifests itself in different aspects of alienation and conflict. Greenwood and Stini argue that agriculture is cost-efficient only in monetary terms, because far more energy goes into producing crops than one gets out of eating them, e.g. "high culture cannot come at low energy costs".
During the 1960s and 1970s, Sherry Ortner showed the parallel between the divide and gender roles with women as nature and men as culture. Feminist scholars question whether the dichotomies between nature and culture, or man and woman, are essential. For example, Donna Haraway's works on cyborg theory, as well as companion species gesture toward a notion of "naturecultures": a new way of understanding
non-discrete assemblages relating humans to technology and animals.
History
Within European culture, land was an inherited right of each family's firstborn son, and every other child needed to find another way to own land. European expansion was motivated by this desire to claim land and extract resources, pursued through technological developments and the invention of public trading companies. Other factors include religious (e.g. Crusades) and discovery
(e.g. voyages) purposes. In addition to the desire for expansion,
Europeans had the resources for external growth. They had ships, maps,
and knowledge—a complex of politics, economy and military tactics
that they believed were superior for ruling. These factors helped them
to possess and rule the people of the lands they came in contact with.
One large element of this was Western Europeans' strong cultural belief in private property.
Colonialists from Europe saw the American landscape as desolate, savage, dark, and waste, and thus as needing to be tamed in order to become safe and habitable. Once cleared and settled, these areas were depicted as "Eden itself." Land was a commodity, and as such, anyone who did not use it to turn a profit could have it taken from them; John Locke was one thinker responsible for these ideals. Yet the commodities did not end with the acquisition of land: profit became the main driver for all the resources that followed (including slavery). The cultural divide that existed between Europeans and the native groups they colonised allowed the Europeans to capitalise on both local and global trade. So whether the ruling of these other lands and peoples was direct or
indirect, the diffusion of European ideals and practices spread to
nearly every country on the globe. Imperialism and globalisation were also at play in creating ruling dominions for European nations, though this did not come without challenges.
The native groups they encountered saw their relationship with the land in a more holistic
view. They saw the land as a shared entity of which they were a part,
but the Europeans saw it as a commodity that could and should be divided
and owned by individuals to then buy and sell as they pleased. Implicit in the idea of "wilderness" is that the connection between humans and nature has been broken; for native communities, by contrast, human intervention was a part of their ecological practices.
Theories
The role of society
Pre-existing movements include a spectrum of environmental thought. The authors Büscher and Fletcher present these various movements on a condensed map. Though simplified in thought and definition, it offers a clear way for readers to see the major conservation movements plotted together, with elements of their philosophies highlighted. The movements are mainstream conservation, new conservation, neoprotectionism, and their newly proposed convivial conservation. Each movement is plotted against two major factors: capitalism
and the human-nature divide. Mainstream conservation supports both the human-nature divide and capitalism, new conservation rejects the human-nature divide but supports capitalism, neoprotectionism supports the human-nature divide but rejects capitalism, and convivial conservation rejects both the human-nature divide and capitalism. This
newest movement, though reminiscent of previous ones, sets itself apart
by addressing the political climate more directly. They argue this is important because without it, their movement will
only gain as much traction as those before it, i.e. very little. Lasting
change will come, not only from an overhaul in human-nature relations
and capitalist thought but from a political system that will enact and
support these changes.
The role of science
The nature–culture divide is intertwined with the social-versus-biological debate, since each implicates the other. In earlier forms of anthropology, it was believed that genetic determinism de-emphasises the importance of culture, making it obsolete. More modern views, however, hold that culture matters more than nature, because everyday aspects of culture shape how humans see the world more broadly than our genetic makeup alone does. Older anthropological
theories have separated the two, such as Franz Boas,
who claimed that social organisation and behaviour are purely the
transmission of social norms and not necessarily the passing of
heritable traits. Instead of using such a contrasting approach, more modern anthropologists see Neo-Darwinism
as an outline for culture, therefore nature is essentially guiding how
culture develops. When looking at adaptations, anthropologists such as
Daniel Nettle state that animals choose their mates based on their
environment, which is shaped directly by culture. More importantly, the adaptations seen in nature are a result of evoked culture, defined as cultural characteristics that shape the environment and then cue changes in phenotypes for future generations. Put simply, cultures that promote more
effective resource allocation and a chance for survival are more likely
to be successful and produce more developed societies and cultures that
feed off of each other.
Transmitted culture can also be used to bridge the gap between the two even further, because it uses a trial-and-error approach that shows how humans are constantly learning and how they use social learning to influence individual choices. This is best seen in how even the more superficial aspects of culture remain intertwined with nature and genetic variation. For example, beauty standards are embedded in culture because they are associated with better survival rates, yet they also serve personal interests, allowing individual breeding pairs to understand how they fit into society. By learning from each other, nature becomes more intertwined with culture, since the two reinforce each other.
Sandra Harding
critiqued dominant science as "posit[ing] as necessary, and/or as
facts, a set of dualisms—culture vs. nature; rational mind vs.
prerational body and irrational emotions and values; objectivity vs.
subjectivity; public vs. private—and then links men and masculinity to
the former and women and femininity to the latter in each dichotomy". Instead, she argues for a more holistic approach to knowledge-seeking which recognizes that every attempt at objectivity is bound up in the social, historical, and political subjectivity of the knowledge producer.
Real-world examples
National parks
There
is a historical belief that wilderness must not only be tamed to be
protected but that humans also need to be outside of it. In fact, there have been instances where the removal of people from an
area has actually increased illegal activities and negative
environmental effects. National parks may not be particularly known as places of increased
violence, but they do perpetuate the idea of humans being removed from
nature to protect it. They also create a symbol of power for humans over
nature, as these sites have become tourist attractions. Ecotourism, even with environmentally friendly practices in effect, still represents a commodification of nature.
Another example can be seen in “the great frontier.” The American frontier became the nation's most sacred myth of origin.
Yet the lands protected as monuments to the American past were
constructed as pristine and uninhabited by removing the people that
lived and survived on those lands. Some authors have
come to describe this type of conservation as conservation-far, where
humans and nature are kept separate. The other end of the conservation spectrum, then, would be conservation-near, which would mimic native
ecological practices of humans integrated into the care of nature.