
Saturday, March 14, 2026

Genetic engineering

From Wikipedia, the free encyclopedia

Genetic engineering, also called genetic modification or genetic manipulation, is the modification and manipulation of an organism's genes using technology. It is a set of technologies used to change the genetic makeup of cells, including the transfer of genes within and across species boundaries to produce improved or novel organisms. New DNA is obtained by either isolating and copying the genetic material of interest using recombinant DNA methods or by artificially synthesising the DNA. A construct is usually created and used to insert this DNA into the host organism. The first recombinant DNA molecule was designed by Paul Berg in 1972 by combining DNA from the monkey virus SV40 with the lambda virus. As well as inserting genes, the process can be used to remove, or "knock out", genes. The new DNA can either be inserted randomly or targeted to a specific part of the genome.

An organism that is generated through genetic engineering is considered to be genetically modified (GM), and the resulting entity is a genetically modified organism (GMO). The first GMO was a bacterium generated by Herbert Boyer and Stanley Cohen in 1973. Rudolf Jaenisch created the first GM animal when he inserted foreign DNA into a mouse in 1974. The first company to focus on genetic engineering, Genentech, was founded in 1976 and began the production of human proteins. Genetically engineered human insulin was produced in 1978, and insulin-producing bacteria were commercialised in 1982. Genetically modified food has been sold since 1994, with the release of the Flavr Savr tomato. The Flavr Savr was engineered to have a longer shelf life, but most current GM crops are modified to increase resistance to insects and herbicides. GloFish, the first GMO designed as a pet, was sold in the United States in December 2003. In 2016, salmon modified with a growth hormone were sold.

Genetic engineering has been applied in numerous fields, including research, medicine, industrial biotechnology, and agriculture. In research, GMOs are used to study gene function and expression through loss-of-function, gain-of-function, tracking, and expression experiments. By knocking out genes responsible for certain conditions, it is possible to create animal model organisms of human diseases. As well as producing hormones, vaccines, and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. Chinese hamster ovary (CHO) cells are used in industrial genetic engineering. Additionally, mRNA vaccines are made through genetic engineering to prevent infections by viruses such as SARS-CoV-2, the virus that causes COVID-19. The same techniques that are used to produce drugs can also have industrial applications, such as producing enzymes for laundry detergent, cheeses, and other products.

The rise of commercialised genetically modified crops has provided economic benefit to farmers in many different countries; however, it has also been the source of most of the controversy surrounding the technology. This has been present since its early use; the first field trials were destroyed by anti-GM activists. Although there is a scientific consensus that food derived from GMO crops poses no greater risk to human health than conventional food, critics consider GM food safety a leading concern. Gene flow, impact on non-target organisms, control of the food supply, and intellectual property rights have also been raised as potential issues. These concerns have led to the development of a regulatory framework, which started in 1975. Eventually, this has led to a proposal of an international treaty, the Cartagena Protocol on Biosafety, which was officially adopted in 2000. Individual countries have developed their own regulatory systems regarding GMOs, with the most marked differences occurring between the United States and Europe.

IUPAC definition

Genetic engineering: Process of inserting new genetic information into existing cells in order to modify a specific organism for the purpose of changing its characteristics.

Overview

Comparison of conventional plant breeding with transgenic and cisgenic genetic modification

Genetic engineering is a process that alters the genetic structure of an organism by either removing or introducing DNA or modifying existing genetic material in situ. Unlike traditional animal and plant breeding, which involves doing multiple crosses and then selecting for the organism with the desired phenotype, genetic engineering takes the gene directly from one organism and delivers it to the other. This is much faster, can be used to insert any genes from any organism (even ones from different domains), and prevents other undesirable genes from also being added.

Genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. It is an important tool in research that allows the function of specific genes to be studied. Drugs, vaccines, and other products have been harvested from organisms engineered to produce them. Crops have been developed that aid food security by increasing yield, nutritional value, and tolerance to environmental stresses.

The DNA can be introduced directly into the host organism or into a cell that is then fused or hybridised with the host. This relies on recombinant nucleic acid techniques to form new combinations of heritable genetic material, followed by the incorporation of that material either indirectly through a vector system or directly through micro-injection, macro-injection, or micro-encapsulation.

Genetic engineering does not normally include traditional breeding, in vitro fertilisation, induction of polyploidy, mutagenesis, and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. However, some broad definitions of genetic engineering include selective breeding. Cloning and stem cell research, although not considered genetic engineering, are closely related, and genetic engineering can be used within them. Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesised material into an organism.

Plants, animals, or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or GMOs. If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species or a species that can naturally breed with the host is used, the resulting organism is called cisgenic. If genetic engineering is used to remove genetic material from the target organism, the resulting organism is termed a knockout organism. In Europe, genetic modification is synonymous with genetic engineering while within the United States of America and Canada, genetic modification can also be used to refer to more conventional breeding methods.

History

Humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection as contrasted with natural selection. More recently, mutation breeding has used exposure to chemicals or radiation to produce a high frequency of random mutations for selective breeding purposes. Genetic engineering, as the direct manipulation of DNA by humans outside breeding and mutations, has only existed since the 1970s. The term "genetic engineering" was coined by the Russian-born geneticist Nikolay Timofeev-Ressovsky in his 1934 paper "The Experimental Production of Mutations", published in the British journal Biological Reviews. Jack Williamson used the term in his science fiction novel Dragon's Island, published in 1951 – one year before DNA's role in heredity was confirmed by Alfred Hershey and Martha Chase, and two years before James Watson and Francis Crick showed that the DNA molecule has a double-helix structure – though the general concept of direct genetic manipulation was explored in rudimentary form in Stanley G. Weinbaum's 1936 science fiction story Proteus Island.

In 1974, Rudolf Jaenisch created a genetically modified mouse, the first GM animal.

In 1972, Paul Berg created the first recombinant DNA molecules by combining DNA from the monkey virus SV40 with that of the lambda virus. In 1973, Herbert Boyer and Stanley Cohen created the first transgenic organism by inserting antibiotic resistance genes into the plasmid of an Escherichia coli bacterium. A year later Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. These achievements led to concerns in the scientific community about potential risks from genetic engineering, which were first discussed in depth at the Asilomar Conference in 1975. One of the main recommendations from this meeting was that government oversight of recombinant DNA research should be established until the technology was deemed safe.

In 1976, Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson and, a year later, the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. In 1980, the Supreme Court of the United States in the Diamond v. Chakrabarty case ruled that genetically altered life could be patented. The insulin produced by bacteria was approved for release by the Food and Drug Administration (FDA) in 1982.

In 1983, a biotech company, Advanced Genetic Sciences (AGS), applied for U.S. government authorisation to perform field tests with the ice-minus strain of Pseudomonas syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges. In 1987, the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment when a strawberry field and a potato field in California were sprayed with it. Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher".

The first field trials of genetically engineered plants occurred in France and the US in 1986, using tobacco plants engineered to be resistant to herbicides. The People's Republic of China was the first country to commercialise transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994, Calgene received official approval to commercially release the first genetically modified food, the Flavr Savr, a tomato engineered to have a longer shelf life. In 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialised in Europe. In 1995, Bt potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the US. In 2009, 11 transgenic crops were grown commercially in 25 countries, the largest of which by area grown were the United States, Brazil, Argentina, India, Canada, People's Republic of China, Paraguay, and South Africa.

In 2010, scientists at the J. Craig Venter Institute created the first synthetic genome and inserted it into an empty bacterial cell. The resulting bacterium, named Mycoplasma laboratorium, could replicate and produce proteins. Four years later, this was taken a step further when a bacterium was developed that replicated a plasmid containing a unique base pair, creating the first organism engineered to use an expanded genetic alphabet. In 2012, Jennifer Doudna and Emmanuelle Charpentier collaborated to develop the CRISPR/Cas9 system, a technique that can be used to easily and specifically alter the genome of almost any organism.

Process

The polymerase chain reaction is a powerful tool used in molecular cloning.

Creating a GMO is a multi-step process. Genetic engineers must first choose which gene they wish to insert into the organism. This is driven by what the aim is for the resultant organism and is built on earlier research. Genetic screens can be carried out to determine potential genes, and further tests can then be used to identify the most suitable candidates. The development of microarrays, transcriptomics, and genome sequencing has made it much easier to find suitable genes. Luck also plays its part; the Roundup Ready gene was discovered after scientists noticed a bacterium thriving in the presence of the herbicide.
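
As a toy illustration of this screening idea, the Python sketch below ranks genes by their expression fold change between a stress condition and a control, a drastically simplified stand-in for how transcriptomic data can point to candidate genes; all gene names and values here are invented.

```python
# Toy illustration of screening for candidate genes from expression data.
# All gene names and expression values are invented.

control = {"geneA": 12.0, "geneB": 3.5, "geneC": 40.0, "geneD": 8.0}
stressed = {"geneA": 11.5, "geneB": 28.0, "geneC": 41.0, "geneD": 0.9}

def fold_change(gene):
    """Ratio of expression under the stress condition to the control."""
    return stressed[gene] / control[gene]

# Rank genes by how strongly they respond to the stress condition.
for gene in sorted(control, key=fold_change, reverse=True):
    print(f"{gene}: fold change = {fold_change(gene):.2f}")

# A strongly up-regulated gene (here geneB, fold change 8.0) would be flagged
# for follow-up tests before being chosen for insertion.
```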

Gene isolation and cloning

The next step is to isolate the candidate gene. The cell containing the gene is opened, and the DNA is purified. The gene is separated by using restriction enzymes to cut the DNA into fragments or polymerase chain reaction (PCR) to amplify the gene segment. These segments can then be extracted through gel electrophoresis. If the chosen gene or the donor organism's genome has been well studied, it may already be accessible from a genetic library. If the DNA sequence is known, but no copies of the gene are available, it can also be artificially synthesised. Once isolated, the gene is ligated into a plasmid that is then inserted into a bacterium. The plasmid is replicated when the bacteria divide, ensuring unlimited copies of the gene are available. The RK2 plasmid is notable for its ability to replicate in a wide variety of single-celled organisms, which makes it suitable as a genetic engineering tool.
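
For intuition about how restriction enzymes yield defined fragments, here is a minimal Python sketch that cuts a made-up sequence at the EcoRI recognition site GAATTC (EcoRI cuts the top strand after the first G). A real cloning workflow would work with actual sequence data and consider both strands.

```python
# Toy simulation of a restriction digest. EcoRI recognizes GAATTC and cuts
# between the G and the first A (G^AATTC); the input sequence is invented.

ECORI_SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts after the first base of its recognition site

def digest(dna, site=ECORI_SITE, offset=CUT_OFFSET):
    """Return the fragments produced by cutting dna at every recognition site."""
    fragments, start, pos = [], 0, dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + offset])
        start = pos + offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

plasmid = "ATGCCGAATTCTTAGGCATGAATTCCGTA"  # hypothetical sequence
print(digest(plasmid))  # ['ATGCCG', 'AATTCTTAGGCATG', 'AATTCCGTA']
```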

Before the gene is inserted into the target organism, it must be combined with other genetic elements. These include a promoter and terminator region, which initiate and end transcription. A selectable marker gene is added, which in most cases confers antibiotic resistance, so researchers can easily determine which cells have been successfully transformed. The gene can also be modified at this stage for better expression or effectiveness. These manipulations are carried out using recombinant DNA techniques, such as restriction digests, ligations, and molecular cloning.
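
Conceptually, the cassette that gets inserted is an ordered assembly of parts. The sketch below strings placeholder sequences for a promoter, the gene of interest, a terminator, and a selectable marker into one construct; the sequences are not real, and only the composition pattern is the point.

```python
# Sketch of assembling an expression construct from standard parts.
# All sequences are placeholders, not real biological sequences.

parts = {
    "promoter":   "TTGACA...TATAAT",  # drives transcription of the insert
    "gene":       "ATG...TAA",        # the gene of interest (coding sequence)
    "terminator": "AATAAA",           # signals the end of transcription
    "marker":     "ATG...TGA",        # e.g. an antibiotic-resistance gene
}

def build_construct(order):
    """Concatenate the named parts, in order, into one construct."""
    return "".join(parts[name] for name in order)

construct = build_construct(["promoter", "gene", "terminator", "marker"])
print(construct)
```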

Inserting DNA into the host genome

A gene gun uses biolistics to insert DNA into plant tissue.

There are a number of techniques used to insert genetic material into the host genome. Some bacteria can naturally take up foreign DNA. This ability can be induced in other bacteria via stress (e.g., thermal or electric shock), which increases the cell membrane's permeability to DNA; the taken-up DNA can either integrate into the genome or exist as extrachromosomal DNA. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors.

Plant genomes can be engineered by physical methods or by Agrobacterium-mediated transformation, which takes advantage of Agrobacterium's natural T-DNA sequence to insert genetic material, delivered on T-DNA binary vectors, into plant cells. Other methods include biolistics, where particles of gold or tungsten are coated with DNA and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid DNA.

As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants, this is accomplished through the use of tissue culture. In animals, it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Bacteria consist of a single cell and reproduce clonally, so regeneration is not necessary. Selectable markers are used to easily differentiate transformed from untransformed cells. These markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant.

A. tumefaciens attaching itself to a carrot cell

Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene. These tests can also confirm the chromosomal location and copy number of the inserted gene. The presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue, so methods that look for and measure the gene products (RNA and protein) are also used. These include northern hybridisation, quantitative RT-PCR, Western blot, immunofluorescence, ELISA, and phenotypic analysis.
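
The logic of a presence and copy-number check can be caricatured in a few lines of Python: scan the host sequence for the inserted gene and count how many times it occurs, which is what PCR and Southern analysis establish experimentally. Both sequences below are invented.

```python
# Toy check for the presence and copy number of an inserted gene,
# mimicking what PCR/Southern analysis establishes experimentally.

def copy_number(genome, insert):
    """Count non-overlapping occurrences of the insert in the genome."""
    count, pos = 0, genome.find(insert)
    while pos != -1:
        count += 1
        pos = genome.find(insert, pos + len(insert))
    return count

genome = "CCGTATGGCCATTAAGGATGGCCATTAACGT"  # hypothetical host sequence
insert = "ATGGCCATTAA"                      # hypothetical inserted gene

n = copy_number(genome, insert)
print(f"insert present: {n > 0}, copy number: {n}")  # insert present: True, copy number: 2
```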

The new genetic material can be inserted randomly within the host genome or targeted to a specific location. The technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. This tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. The frequency of gene targeting can be greatly enhanced through genome editing. Genome editing utilizes artificially engineered nucleases that create specific double-stranded breaks at desired locations in the genome, and use the cell's endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end-joining. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guide RNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used, and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient. In addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout.
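
As a rough sketch of how CRISPR/Cas9 target sites are chosen, the code below scans one strand of an invented sequence for the pattern SpCas9 recognises: a 20-nucleotide protospacer immediately followed by an NGG PAM. Real guide-design tools also score off-target matches and scan the reverse strand.

```python
# Sketch of scanning one DNA strand for candidate SpCas9 target sites:
# a 20-nt protospacer immediately followed by an NGG PAM.
# The sequence below is invented for illustration.

import re

def find_cas9_sites(dna):
    """Return (position, protospacer, PAM) for every NGG PAM with 20 nt upstream."""
    sites = []
    for match in re.finditer(r"(?=([ACGT]GG))", dna):  # overlapping NGG matches
        pam_start = match.start(1)
        if pam_start >= 20:
            protospacer = dna[pam_start - 20:pam_start]
            sites.append((pam_start - 20, protospacer, match.group(1)))
    return sites

sequence = "ATGCTAGCTAGGCTAGCTAACCTAGCTAGCTAGGCTAGCTAACGGTCC"
for pos, spacer, pam in find_cas9_sites(sequence):
    print(pos, spacer, pam)
```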

Applications

Genetic engineering has applications in medicine, research, industry, and agriculture, and can be used on a wide range of plants, animals, and microorganisms. Bacteria, the first organisms to be genetically modified, can have plasmid DNA inserted containing new genes that code for medicines or enzymes that process food and other substrates. Plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures, and the production of edible vaccines. Most commercialised GMOs are insect-resistant or herbicide-tolerant crop plants. Genetically modified animals have been used for research, model animals, and the production of agricultural or pharmaceutical products. The genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth, and the ability to express proteins in their milk.

Medicine

Genetic engineering has many applications to medicine that include the manufacturing of drugs, the creation of model animals that mimic human conditions, and gene therapy. One of the earliest uses of genetic engineering was to mass-produce human insulin in bacteria. This approach has since been applied to human growth hormones, follicle-stimulating hormones (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines, and many other drugs. Mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. Genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences.

Genetic engineering is also used to create animal models of human diseases. Genetically modified mice are the most common genetically engineered animal model. They have been used to study and model cancer (the oncomouse), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging, and Parkinson's disease. Potential cures can be tested against these mouse models.

Gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. Clinical research using somatic gene therapy has been conducted with several diseases, including X-linked SCID, chronic lymphocytic leukemia (CLL), and Parkinson's disease. In 2012, Alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. In 2015, a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft, healthy skin onto 80 percent of the boy's body, which was affected by the illness.

Germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. In 2015, CRISPR was used to edit the DNA of non-viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. There are also concerns that the technology could be used not just for treatment, but for enhancement, modification, or alteration of a human being's appearance, adaptability, intelligence, character, or behavior. The distinction between cure and enhancement can also be difficult to establish. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. The work was widely condemned as unethical, dangerous, and premature. Currently, germline modification is banned in 40 countries. Scientists who perform this type of research will often let embryos grow for a few days without allowing them to develop into a baby.

Researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of pig-to-human organ transplantation. Scientists are creating "gene drives", changing the genomes of mosquitoes to make them immune to malaria, and then looking to spread the genetically altered mosquitoes throughout the mosquito population in the hopes of eliminating the disease.

Research

Knockout mice
Human cells in which some proteins are fused with green fluorescent protein to allow them to be visualised

Genetic engineering is an important tool for natural scientists, with the creation of transgenic organisms being one of the most important tools for the analysis of gene function. Genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. Bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform, and can be stored at -80 °C almost indefinitely. Once a gene is isolated, it can be stored inside the bacteria, providing an unlimited supply for research.

Organisms are genetically engineered to discover the functions of certain genes. This could be the gene's effect on the phenotype of the organism, where the gene is expressed, or which other genes it interacts with. These experiments generally involve loss of function, gain of function, tracking, and expression.

  • Loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. In a simple knockout, a copy of the desired gene has been altered to make it non-functional. Embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. These stem cells are injected into blastocysts, which are implanted into surrogate mothers. This allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. It is used especially frequently in developmental biology. When this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called "scanning mutagenesis". The simplest method, and the first to be used, is "alanine scanning", where every position in turn is mutated to the unreactive amino acid alanine (a short code sketch of this approach follows this list).
  • Gain of function experiments, the logical counterpart of knockouts. These are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. The process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. Gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy.
  • Tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. One way to do this is to replace the wild-type gene with a 'fusion' gene, which is a juxtaposition of the wild-type gene with a reporting element such as green fluorescent protein (GFP) that will allow easy visualisation of the products of the genetic modification. While this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment. More sophisticated techniques are now in development that can track protein products without mitigating their function, such as the addition of small sequences that will serve as binding motifs to monoclonal antibodies.
  • Expression studies aim to discover where and when specific proteins are produced. In these experiments, the DNA sequence before the DNA that codes for a protein, known as a gene's promoter, is reintroduced into an organism with the protein coding region replaced by a reporter gene such as GFP or an enzyme that catalyses the production of a dye. Thus the time and place where a particular protein is produced can be observed. Expression studies can be taken a step further by altering the promoter to find which pieces are crucial for the proper expression of the gene and are actually bound by transcription factor proteins; this process is known as promoter bashing.
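
The alanine-scanning idea mentioned in the loss-of-function item above lends itself to a short sketch: generate every single-residue variant of a protein sequence with that position replaced by alanine, each of which would then be expressed and assayed. The peptide below is invented for illustration.

```python
# Toy alanine-scanning mutagenesis: generate every single-position variant
# of a protein sequence with that position replaced by alanine (A).

def alanine_scan(protein):
    """Yield (position, variant) pairs for an alanine scan of the sequence."""
    for i, residue in enumerate(protein):
        if residue != "A":  # no point mutating alanine to alanine
            yield i + 1, protein[:i] + "A" + protein[i + 1:]

peptide = "MKTLLVFG"  # hypothetical peptide
for position, variant in alanine_scan(peptide):
    print(position, variant)

# Positions whose mutation destroys activity in the assay point to residues
# that are important for the protein's function.
```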

Industrial

Organisms can have their cells transformed with a gene coding for a useful protein, such as an enzyme, so that they will overexpress the desired protein. Mass quantities of the protein can then be manufactured by growing the transformed organism in bioreactor equipment using industrial fermentation, and then purifying the protein. Some genes do not work well in bacteria, so yeast, insect cells, or mammalian cells can also be used. These techniques are used to produce medicines such as insulin, human growth hormone, and vaccines; supplements such as tryptophan; food-production aids such as chymosin in cheese making; and fuels. Other applications of genetically engineered bacteria involve making them perform tasks outside their natural cycle, such as making biofuels, cleaning up oil spills, carbon, and other toxic waste, and detecting arsenic in drinking water. Certain genetically modified microbes can also be used in biomining and bioremediation, due to their ability to extract heavy metals from their environment and incorporate them into compounds that are more easily recoverable.

In materials science, a genetically modified virus has been used in a research laboratory as a scaffold for assembling a more environmentally friendly lithium-ion battery. Bacteria have also been engineered to function as sensors by expressing a fluorescent protein under certain environmental conditions.

Agriculture

Bt-toxins present in peanut leaves (bottom image) protect it from extensive damage caused by lesser cornstalk borer larvae (top image).

One of the best-known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. Crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products.

The first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. Fungal and virus resistant crops have also been developed or are in development. This makes the insect and weed management of crops easier and can indirectly increase crop yield. GM crops that directly improve yield by accelerating growth or making the plant more hardy (by improving salt, cold or drought tolerance) are also under development. In 2016, salmon genetically modified with growth hormones to reach normal adult size much faster were sold.

GMOs have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. The Amflora potato produces a more industrially useful blend of starches. Soybeans and canola have been genetically modified to produce more healthy oils. The first commercialised GM food was a tomato that had delayed ripening, increasing its shelf life.

Plants and animals have been engineered to produce materials they do not normally make. Pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. Cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the FDA approved a drug produced in goat milk.

Other applications

Genetic engineering has potential applications in conservation and natural area management. Gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. Transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. With the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. Applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice.

Genetic engineering is also being used to create microbial art. Some bacteria have been genetically engineered to create black and white photographs. Novelty items such as lavender-colored carnations, blue roses, and glowing fish have also been produced through genetic engineering.

Regulation

The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of GMOs. The development of a regulatory framework began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. As the technology improved, the US established a committee at the Office of Science and Technology Policy, which assigned regulatory approval of GM food to the USDA, FDA, and EPA. The Cartagena Protocol on Biosafety, an international treaty that governs the transfer, handling, and use of GMOs, was adopted on 29 January 2000. One hundred and fifty-seven countries are members of the Protocol, and many use it as a reference point for their own regulations.

The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. Some countries allow the import of GM food with authorisation, but either do not allow its cultivation (Russia, Norway, Israel) or have provisions for cultivation even though no GM products are yet produced (Japan, South Korea). Most countries that do not allow GMO cultivation do permit research. Some of the most marked differences occur between the US and Europe. The US policy focuses on the product (not the process), only looks at verifiable scientific risks, and uses the concept of substantial equivalence. The European Union by contrast has possibly the most stringent GMO regulations in the world. All GMOs, along with irradiated food, are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority. The criteria for authorisation fall in four broad categories: "safety", "freedom of choice", "labelling", and "traceability". The level of regulation in other countries that cultivate GMOs lies in between those of Europe and the United States.

Regulatory agencies by geographical region
US: USDA, FDA, and EPA.
Europe: European Food Safety Authority.
Canada: Health Canada and the Canadian Food Inspection Agency; regulates products with novel features regardless of method of origin.
Africa: Common Market for Eastern and Southern Africa; the final decision lies with each individual country.
China: Office of Agricultural Genetic Engineering Biosafety Administration.
India: Institutional Biosafety Committee, Review Committee on Genetic Manipulation, and Genetic Engineering Approval Committee.
Argentina: National Agricultural Biotechnology Advisory Committee (environmental impact), the National Service of Health and Agrifood Quality (food safety), and the National Agribusiness Direction (effect on trade); the final decision is made by the Secretariat of Agriculture, Livestock, Fishery and Food.
Brazil: National Biosafety Technical Commission (environmental and food safety) and the Council of Ministers (commercial and economic issues).
Australia: Office of the Gene Technology Regulator (oversees all GM products), Therapeutic Goods Administration (GM medicines), and Food Standards Australia New Zealand (GM food); individual state governments can then assess the impact of release on markets and trade and apply further legislation to control approved genetically modified products.

One of the key issues concerning regulators is whether GM products should be labeled. The European Commission says that mandatory labeling and traceability are needed to allow for informed choice, avoid potential false advertising and facilitate the withdrawal of products if adverse effects on health or the environment are discovered. The American Medical Association and the American Association for the Advancement of Science say that absent scientific evidence of harm even voluntary labeling is misleading and will falsely alarm consumers. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. In Canada and the US labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled.
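
A minimal sketch of the threshold logic described above, applying the 0.9% EU figure per ingredient; the per-ingredient interpretation, and all names and percentages here, are illustrative assumptions rather than a statement of the regulation's exact text.

```python
# Toy check of an EU-style GM labelling threshold (0.9% of approved GMO material).
# Ingredient names and GM percentages are invented.

EU_THRESHOLD = 0.9  # percent

ingredients = {"maize flour": 1.4, "soy lecithin": 0.3, "sugar": 0.0}

needs_label = any(gm_percent > EU_THRESHOLD for gm_percent in ingredients.values())
print("GM label required:", needs_label)  # True, because maize flour exceeds 0.9%
```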

Controversy

Critics have objected to the use of genetic engineering on several grounds, including ethical, ecological and economic concerns. Many of these concerns involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries.

Accusations that scientists are "playing God" and other religious issues have been ascribed to the technology from the beginning. Other ethical issues raised include the patenting of life, the use of intellectual property rights, the level of labeling on products, control of the food supply and the objectivity of the regulatory process. Although doubts have been raised, most economic studies have found growing GM crops to be beneficial to farmers.

Gene flow between GM crops and compatible plants, along with increased use of selective herbicides, can increase the risk of "superweeds" developing. Other environmental concerns involve potential impacts on non-target organisms, including soil microbes, and an increase in secondary and resistant insect pests. Many of the environmental impacts regarding GM crops may take many years to be understood and are also evident in conventional agriculture practices. With the commercialisation of genetically modified fish there are concerns over what the environmental consequences will be if they escape.

There are three main concerns over the safety of genetically modified food: whether they may provoke an allergic reaction; whether the genes could transfer from the food into human cells; and whether the genes not approved for human consumption could outcross to other crops. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are less likely than scientists to perceive GM foods as safe.

Genetic engineering features in many science fiction stories. Frank Herbert's novel The White Plague describes the deliberate use of genetic engineering to create a pathogen which specifically kills women. Another of Herbert's creations, the Dune series of novels, uses genetic engineering to create the powerful Tleilaxu. Few films have informed audiences about genetic engineering, with the exception of the 1978 The Boys from Brazil and the 1993 Jurassic Park, both of which make use of a lesson, a demonstration, and a clip of scientific film. Genetic engineering methods are weakly represented in film; Michael Clark, writing for the Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted" in films such as The 6th Day. In Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience.

Just-world fallacy

From Wikipedia, the free encyclopedia

The just-world fallacy, or just-world hypothesis, is the cognitive bias that assumes that "people get what they deserve" – that actions will necessarily have morally fair and fitting consequences for the actor. For example, the assumptions that noble actions will eventually be rewarded and evil actions will eventually be punished fall under this fallacy. In other words, the just-world fallacy is the tendency to attribute consequences to – or expect consequences as the result of – either a universal force that restores moral balance or a universal connection between the nature of actions and their results. This belief generally implies the existence of cosmic justice, destiny, divine providence, desert, stability, order, or the anglophone colloquial use of "karma". It is often associated with a variety of fundamental fallacies, especially in regard to rationalizing suffering on the grounds that the sufferers "deserve" it. This is called victim blaming.

This fallacy popularly appears in the English language in various figures of speech that imply guaranteed punishment for wrongdoing, such as: "you got what was coming to you", "what goes around comes around", "chickens come home to roost", "everything happens for a reason", "you reap what you sow", and "you brought this on yourself." This hypothesis has been widely studied by social psychologists since Melvin J. Lerner conducted seminal work on the belief in a just world in the early 1960s. Research has continued since then, examining the predictive capacity of the fallacy in various situations and across cultures, and clarifying and expanding the theoretical understandings of just-world beliefs.

Emergence

Many philosophers and social theorists have observed and considered the phenomenon of belief in a just world, going back to at least as early as the Pyrrhonist philosopher Sextus Empiricus, writing c. 180 CE, who argued against this belief. Lerner's work made the just-world hypothesis a focus of research in the field of social psychology.

Melvin Lerner

Lerner was prompted to study justice beliefs and the just-world fallacy in the context of social psychological inquiry into negative social and societal interactions. Lerner saw his work as extending Stanley Milgram's work on obedience. He sought to answer the questions of how regimes that cause cruelty and suffering maintain popular support, and how people come to accept social norms and laws that produce misery and suffering.

Lerner's inquiry was influenced by repeatedly witnessing the tendency of observers to blame victims for their suffering. During his clinical training as a psychologist, he observed treatment of mentally ill persons by the health care practitioners with whom he worked. Although Lerner knew them to be kindhearted, educated people, they often blamed patients for the patients' own suffering. Lerner also describes his surprise at hearing his students derogate (disparage, belittle) the poor, seemingly oblivious to the structural forces that contribute to poverty. The desire to understand the processes that caused these phenomena led Lerner to conduct his first experiments on what is now called the just-world fallacy.

Early evidence

In 1966, Lerner and his colleagues began a series of experiments that used shock paradigms to investigate observer responses to victimization. In the first of these experiments conducted at the University of Kansas, 72 female participants watched what appeared to be a confederate receiving electrical shocks for her errors during a learning task (learning pairs of nonsense syllables). Initially, these observing participants were upset by the victim's apparent suffering. But as the suffering continued and observers remained unable to intervene, the observers began to reject and devalue the victim. Rejection and devaluation of the victim was greater when the observed suffering was greater. But when participants were told the victim would receive compensation for her suffering, the participants did not derogate the victim. Lerner and colleagues replicated these findings in subsequent studies, as did other researchers.

Theory

To explain these studies' findings, it was theorized that there was a prevalent belief in a just world. A just world is one in which actions and conditions have predictable, appropriate consequences. These actions and conditions are typically individuals' behaviors or attributes. The specific conditions that correspond to certain consequences are socially determined by a society's norms and ideologies. Lerner presents the belief in a just world as functional: it maintains the idea that one can influence the world in a predictable way. Belief in a just world functions as a sort of "contract" with the world regarding the consequences of behavior. This allows people to plan for the future and engage in effective, goal-driven behavior. Lerner summarized his findings and his theoretical work in his 1980 monograph The Belief in a Just World: A Fundamental Delusion.

Lerner hypothesized that the belief in a just world is crucially important for people to maintain for their own well-being. But people are confronted daily with evidence that the world is not just: people suffer without apparent cause. Lerner explained that people use strategies to eliminate threats to their belief in a just world. These strategies can be rational or irrational. Rational strategies include accepting the reality of injustice, trying to prevent injustice or provide restitution, and accepting one's own limitations. Non-rational strategies include denial, withdrawal, and reinterpretation of the event.

There are a few modes of reinterpretation that could make an event fit the belief in a just world. One can reinterpret the outcome, the cause, and/or the character of the victim. In the case of observing the injustice of the suffering of innocent people, one major way to rearrange the cognition of an event is to interpret the victim of suffering as deserving. Specifically, observers can blame victims for their suffering on the basis of their behaviors and/or their characteristics. Much psychological research on the belief in a just world has focused on these negative social phenomena of victim blaming and victim derogation in different contexts.

An additional effect of this thinking is that individuals experience less personal vulnerability because they do not believe they have done anything to deserve or cause negative outcomes. This is related to the self-serving bias observed by social psychologists.

Many researchers have interpreted just-world beliefs as an example of causal attribution. In victim blaming, the causes of victimization are attributed to an individual rather than to a situation. Thus, the consequences of belief in a just world may be related to or explained in terms of particular patterns of causal attribution.

Alternatives

Veridical judgment

Others have suggested alternative explanations for the derogation of victims. One suggestion is that derogation effects are based on accurate judgments of a victim's character. In particular, in relation to Lerner's first studies, some have hypothesized that it would be logical for observers to derogate an individual who would allow himself to be shocked without reason. A subsequent study by Lerner challenged this alternative hypothesis by showing that individuals are only derogated when they actually suffer; individuals who agreed to undergo suffering but did not were viewed positively.

Guilt reduction

Another alternative explanation offered for the derogation of victims early in the development of the just-world fallacy was that observers derogate victims to reduce their own feelings of guilt. Observers may feel responsible, or guilty, for a victim's suffering if they themselves are involved in the situation or experiment. In order to reduce the guilt, they may devalue the victim. Lerner and colleagues claim that there has not been adequate evidence to support this interpretation. They conducted one study that found derogation of victims occurred even by observers who were not implicated in the process of the experiment and thus had no reason to feel guilty.

Discomfort reduction

Alternatively, victim derogation and other strategies may only be ways to alleviate discomfort after viewing suffering. This would mean that the primary motivation is not to restore a belief in a just world, but to reduce discomfort caused by empathizing. Studies have shown that victim derogation does not suppress subsequent helping activity and that empathizing with the victim plays a large role when assigning blame. According to Ervin Staub, devaluing the victim should lead to lesser compensation if restoring belief in a just world was the primary motive; instead, there is virtually no difference in compensation amounts whether the compensation precedes or follows devaluation. Psychopathy has been linked to the lack of just-world maintaining strategies, possibly due to dampened emotional reactions and lack of empathy.

Additional evidence

After Lerner's first studies, other researchers replicated these findings in other settings in which individuals are victimized. This work, which began in the 1970s and continues today, has investigated how observers react to victims of random calamities like traffic accidents, as well as rape and domestic violence, illnesses, and poverty. Generally, researchers have found that observers of the suffering of innocent victims tend to both derogate and blame victims for their suffering. Observers thus maintain their belief in a just world by changing their cognitions about the victims' character.

In the early 1970s, social psychologists Zick Rubin and Letitia Anne Peplau developed a measure of belief in a just world. This measure and its revised form published in 1975 allowed for the study of individual differences in just-world beliefs. Much of the subsequent research on the just-world hypothesis used these measurement scales.
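
To give a sense of how such a scale yields an individual-difference score, here is a hedged sketch of generic Likert scoring: ratings are averaged after flipping reverse-keyed items. The items, the keying, and the response range are hypothetical placeholders, not the actual Rubin and Peplau instrument.

```python
# Illustrative scoring of a just-world belief questionnaire.
# Items, keying, and scale range are hypothetical, shown only to
# demonstrate the general Likert-scoring pattern.

SCALE_MAX = 6  # e.g. 1 (strongly disagree) to 6 (strongly agree)

responses = {      # item id -> respondent's rating (invented)
    "item1": 5,    # e.g. "people generally get what they deserve" (hypothetical)
    "item2": 2,    # reverse-keyed item (hypothetical)
    "item3": 4,
}
reverse_keyed = {"item2"}

def score(responses, reverse_keyed, scale_max=SCALE_MAX):
    """Average the ratings after flipping reverse-keyed items."""
    total = 0
    for item, rating in responses.items():
        total += (scale_max + 1 - rating) if item in reverse_keyed else rating
    return total / len(responses)

print(f"just-world belief score: {score(responses, reverse_keyed):.2f}")
```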

These studies on victims of violence, illness, and poverty and others like them have provided consistent support for the link between observers' just-world beliefs and their tendency to blame victims for their suffering. As a result, the existence of the just-world hypothesis as a psychological phenomenon has become widely accepted.

Violence

Researchers have looked at how observers react to victims of rape and other violence. In a formative experiment on rape and belief in a just world by Linda Carli and colleagues, researchers gave two groups of subjects a narrative about interactions between a man and a woman. The description of the interaction was the same until the end; one group received a narrative that had a neutral ending and the other group received a narrative that ended with the man raping the woman. Subjects judged the rape ending as inevitable and blamed the woman in the narrative for the rape on the basis of her behavior, but not her characteristics. These findings have been replicated repeatedly, including using a rape ending and a "happy ending" (a marriage proposal).

Other researchers have found a similar phenomenon for judgments of battered partners. One study found that observers' labels of blame of female victims of relationship violence increase with the intimacy of the relationship. Observers blamed the perpetrator only in the least intimate case of violence, in which a male struck an acquaintance.

Bullying

Researchers have employed the just-world fallacy to understand bullying. Given other research on beliefs in a just world, it would be expected that observers would derogate and blame bullying victims, but the opposite has been found: individuals high in just-world belief have stronger anti-bullying attitudes. Other researchers have found that strong belief in a just world is associated with lower levels of bullying behavior. This finding is in keeping with Lerner's understanding of belief in a just world as functioning as a "contract" that governs behavior. There is additional evidence that belief in a just world is protective of the well-being of children and adolescents in the school environment, as has been shown for the general population.

Illness

Other researchers have found that observers judge sick people as responsible for their illnesses. One experiment showed that persons suffering from a variety of illnesses were derogated on a measure of attractiveness more than healthy individuals were. In comparison to healthy people, victim derogation was found for persons presenting with indigestion, pneumonia, and stomach cancer. Moreover, derogation was found to be higher for those suffering from more severe illnesses, except for those presenting with cancer. Stronger belief in a just world has also been found to correlate with greater derogation of AIDS victims.

Poverty

More recently, researchers have explored how people react to poverty through the lens of the just-world fallacy. Strong belief in a just world is associated with blaming the poor, with weak belief in a just world associated with identifying external causes of poverty including world economic systems, war, and exploitation.

The self as victim

Some research on belief in a just world has examined how people react when they themselves are victimized. An early paper by Dr. Ronnie Janoff-Bulman found that rape victims often blame their own behavior, but not their own characteristics, for their victimization. It was hypothesized that this may be because blaming one's own behavior makes an event more controllable.

Theoretical refinement

Subsequent work on measuring belief in a just world has focused on identifying multiple dimensions of the belief. This work has resulted in the development of new measures of just-world belief and additional research. Hypothesized dimensions of just-world beliefs include belief in an unjust world, beliefs in immanent justice and ultimate justice, hope for justice, and belief in one's ability to reduce injustice. Other work has focused on looking at the different domains in which the belief may function; individuals may have different just-world beliefs for the personal domain, the sociopolitical domain, the social domain, etc. An especially fruitful distinction is between the belief in a just world for the self (personal) and the belief in a just world for others (general). These distinct beliefs are differentially associated with positive mental health.

Correlates

Researchers have used measures of belief in a just world to look at correlates of high and low levels of belief in a just world.

Limited studies have examined ideological correlates of the belief in a just world. These studies have found sociopolitical correlates of just-world beliefs, including right-wing authoritarianism and the Protestant work ethic. Studies have also found belief in a just world to be correlated with aspects of religiosity.

Studies of demographic differences, including gender and racial differences, have not shown systematic gender differences, but do suggest racial differences, with Black and African American respondents reporting the lowest levels of belief in a just world.

The development of measures of just-world beliefs has also allowed researchers to assess cross-cultural differences in just-world beliefs. Much research conducted shows that beliefs in a just world are evident cross-culturally. One study tested beliefs in a just world of students in 12 countries. This study found that in countries where the majority of inhabitants are powerless, belief in a just world tends to be weaker than in other countries. This supports the theory of the just-world fallacy because the powerless have had more personal and societal experiences that provided evidence that the world is not just and predictable.

Belief in an unjust world has been linked to increased self-handicapping, criminality, defensive coping, anger and perceived future risk. It may also serve as an ego-protective belief for certain individuals by justifying maladaptive behavior.

Current research

Although much of the initial work on belief in a just world focused on its negative social effects, other research suggests that belief in a just world is good, and even necessary, for mental health. Belief in a just world is associated with greater life satisfaction and well-being and less depressive affect. Researchers are actively exploring the reasons why the belief in a just world might have this relationship to mental health; it has been suggested that such beliefs could be a personal resource or coping strategy that buffers stress associated with daily life and with traumatic events. This hypothesis suggests that belief in a just world can be understood as a positive illusion. In line with this perspective, recent research also suggests that belief in a just world may explain the known statistical association between religiosity/spirituality and psychological well-being. Some belief in a just world research has been conducted within the framework of primal world beliefs, and has found strong correlations between just world belief and beliefs that the world is safe, abundant and cooperative (among other qualities).

Some studies also show that beliefs in a just world are correlated with internal locus of control. Strong belief in a just world is associated with greater acceptance of and less dissatisfaction with negative events in one's life. This may be one way in which belief in a just world affects mental health. Others have suggested that this relationship holds only for beliefs in a just world for oneself. Beliefs in a just world for others are related instead to the negative social phenomena of victim blaming and victim derogation observed in other studies.

Belief in a just world has also been found to negatively predict the perceived likelihood of kin favoritism. The perspective of the individual plays an important role in this relationship, such that when people imagine themselves as mere observers of injustice, general belief in a just world will be the stronger predictor, and when they imagine themselves as victims of injustice, personal belief in a just world will be the stronger predictor. This further supports the distinction between general and personal belief in a just world.

International research

More than 40 years after Lerner's seminal work on belief in a just world, researchers continue to study the phenomenon. Belief in a just world scales have been validated in several countries such as Iran, Russia, Brazil, and France. Work continues primarily in the United States, Europe, Australia, and Asia. Researchers in Germany have contributed disproportionately to recent research. Their work resulted in a volume edited by Lerner and German researcher Leo Montada titled Responses to Victimizations and Belief in a Just World.

Green chemistry

From Wikipedia, the free encyclopedia

Green chemistry, similar to sustainable chemistry or circular chemistry, is an area of chemistry and chemical engineering focused on the design of products and processes that minimize or eliminate the use and generation of hazardous substances. While environmental chemistry focuses on the effects of polluting chemicals on nature, green chemistry focuses on the environmental impact of chemistry, including lowering consumption of nonrenewable resources and technological approaches for preventing pollution.

The overarching goals of green chemistry—namely, more resource-efficient and inherently safer design of molecules, materials, products, and processes—can be pursued in a wide range of contexts.

Definition

Green chemistry (also called sustainable chemistry) is the design of chemical products and processes that reduce or eliminate the use and generation of hazardous substances. The concept integrates pollution-prevention and process-intensification approaches at laboratory and industrial scales to improve resource efficiency and minimize waste and risk across the life cycle of chemicals and materials.

History

Green chemistry evolved and emerged from a variety of existing ideas and research efforts (such as pollution prevention, atom economy, and catalysis) in the period leading up to the 1990s, in the context of increasing attention to problems of chemical pollution and resource depletion. The development of green chemistry in Europe and the United States was preceded by a shift in environmental problem-solving strategies: a movement from command-and-control regulation and mandated lowering of industrial emissions at the "end of the pipe," toward the broad interdisciplinary concept of preventing pollution through the innovative design of production technologies themselves. The narrower set of concepts later recognized and re-named as green chemistry coalesced in the mid- to late 1990s, along with broader adoption of the new term in the academic literature (which prevailed over earlier competing terms such as "clean" and "sustainable" chemistry).

In the United States, the Environmental Protection Agency played a significant supporting role in evolving green chemistry out of its earlier pollution prevention programs, funding, and cooperative coordination with industry. At the same time in the United Kingdom, researchers at the University of York, who used the term "clean technology" in the early 1990s, contributed to the establishment of the Green Chemistry Network within the Royal Society of Chemistry, and the launch of the journal Green Chemistry. In 1991, in the Netherlands, a special issue called 'green chemistry' [groene chemie] was published in Chemisch Magazine. In the Dutch context, the umbrella term green chemistry was associated with the exploitation of biomass as a renewable feedstock.

Principles

In 1998, Paul Anastas (who then directed the Green Chemistry Program at the US EPA) and John C. Warner (then of Polaroid Corporation) published a set of principles to guide the practice of green chemistry. The twelve principles address a range of ways to lower the environmental and health impacts of chemical production, and also indicate research priorities for the development of green chemistry technologies.

The twelve principles of green chemistry are:

  1. Prevention: Preventing waste is better than treating or cleaning up waste after it is created.
  2. Atom economy: Synthetic methods should try to maximize the incorporation of all materials used in the process into the final product. This means that less waste will be generated as a result.
  3. Less hazardous chemical syntheses: Synthetic methods should avoid using or generating substances toxic to humans and/or the environment.
  4. Designing safer chemicals: Chemical products should be designed to achieve their desired function while being as non-toxic as possible.
  5. Safer solvents and auxiliaries: Auxiliary substances should be avoided wherever possible, and as non-hazardous as possible when they must be used.
  6. Design for energy efficiency: Energy requirements should be minimized, and processes should be conducted at ambient temperature and pressure whenever possible.
  7. Use of renewable feedstocks: Whenever it is practical to do so, renewable feedstocks or raw materials are preferable to non-renewable ones.
  8. Reduce derivatives: Unnecessary generation of derivatives—such as the use of protecting groups—should be minimized or avoided if possible; such steps require additional reagents and may generate additional waste.
  9. Catalysis: Catalytic reagents that can be used in small quantities to repeat a reaction are superior to stoichiometric reagents (ones that are consumed in a reaction).
  10. Design for degradation: Chemical products should be designed so that they do not pollute the environment; when their function is complete, they should break down into non-harmful products.
  11. Real-time analysis for pollution prevention: Analytical methodologies need to be further developed to permit real-time, in-process monitoring and control before hazardous substances form.
  12. Inherently safer chemistry for accident prevention: Whenever possible, the substances in a process, and the forms of those substances, should be chosen to minimize risks such as explosions, fires, and accidental releases.

Attempts are being made not only to quantify the greenness of a chemical process but also to factor in other variables such as chemical yield, the price of reaction components, safety in handling chemicals, hardware demands, energy profile, and ease of product workup and purification. In one quantitative study, the reduction of nitrobenzene to aniline received 64 points out of 100, marking it as an acceptable synthesis overall, whereas a synthesis of an amide using HMDS was described as only adequate, with a combined 32 points.
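
The idea of folding several such variables into a single figure of merit can be illustrated with a simple weighted-sum calculation. The sketch below is purely hypothetical: the criteria, weights, and scores are invented for illustration and do not reproduce the scoring scheme of the study cited above.

    # Hypothetical weighted greenness score. The weights and criterion scores
    # are illustrative only and do not reproduce any published scoring scheme.
    CRITERIA_WEIGHTS = {
        "yield": 0.25,         # chemical yield of the reaction
        "reagent_cost": 0.15,  # price of reaction components
        "safety": 0.25,        # safety in handling chemicals
        "hardware": 0.10,      # hardware demands
        "energy": 0.15,        # energy profile
        "workup": 0.10,        # ease of product workup and purification
    }

    def greenness_score(scores):
        """Combine per-criterion scores (each 0-100) into one weighted score."""
        assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9
        return sum(CRITERIA_WEIGHTS[name] * scores[name] for name in CRITERIA_WEIGHTS)

    # Example: a process that is high-yielding and safe but energy-intensive.
    example = {"yield": 90, "reagent_cost": 70, "safety": 80,
               "hardware": 60, "energy": 30, "workup": 50}
    print(f"{greenness_score(example):.1f}")  # 68.5 on this illustrative 0-100 scale

Published metrics differ in which criteria they include and how the weights are justified, but a weighted score of this general form underlies many such assessments.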

Green-chemistry methods are applied to the development and manufacture of nanomaterials, with attention to life-cycle impacts and potential nanotoxicity.

Examples

Green solvents

The major application of solvents in human activities is in paints and coatings (46% of usage). Smaller-volume applications include cleaning, de-greasing, adhesives, and chemical synthesis. Traditional solvents are often toxic or chlorinated. Green solvents, on the other hand, are generally less harmful to health and the environment and preferably more sustainable. Ideally, solvents would be derived from renewable resources and would biodegrade to an innocuous, often naturally occurring, product. However, the manufacture of solvents from biomass can be more harmful to the environment than making the same solvents from fossil fuels. Thus the environmental impact of solvent manufacture must be considered when a solvent is being selected for a product or process.

Another factor to consider is the fate of the solvent after use. If the solvent is used in an enclosed situation where solvent collection and recycling are feasible, then the energy cost and environmental harm associated with recycling should be considered; in such a situation water, which is energy-intensive to purify, may not be the greenest choice. On the other hand, a solvent contained in a consumer product is likely to be released into the environment upon use, and therefore the environmental impact of the solvent itself matters more than the energy cost and impact of solvent recycling; in such a case water is very likely to be a green choice. In short, the impact of the entire lifetime of the solvent, from cradle to grave (or cradle to cradle if recycled), must be considered. Thus the most comprehensive definition of a green solvent is the following: "a green solvent is the solvent that makes a product or process have the least environmental impact over its entire life cycle."

By definition, then, a solvent might be green for one application (because it results in less environmental harm than any other solvent that could be used for that application) and yet not be a green solvent for a different application. A classic example is water, which is a very green solvent for consumer products such as toilet bowl cleaner but is not a green solvent for the manufacture of polytetrafluoroethylene. For the production of that polymer, the use of water as solvent requires the addition of perfluorinated surfactants which are highly persistent. Instead, supercritical carbon dioxide seems to be the greenest solvent for that application because it performs well without any surfactant. In summary, no solvent can be declared to be a "green solvent" unless the declaration is limited to a specific application.

Synthetic techniques

Novel or enhanced synthetic techniques can often provide improved environmental performance or enable better adherence to the principles of green chemistry. For example, the 2005 Nobel Prize for Chemistry was awarded to Yves Chauvin, Robert H. Grubbs and Richard R. Schrock, for the development of the metathesis method in organic synthesis, with explicit reference to its contribution to green chemistry and "smarter production." A 2005 review identified three key developments in green chemistry in the field of organic synthesis: use of supercritical carbon dioxide as green solvent, aqueous hydrogen peroxide for clean oxidations and the use of hydrogen in asymmetric synthesis. Some further examples of applied green chemistry are supercritical water oxidation, on water reactions, and dry media reactions.

Bioengineering is also seen as a promising technique for achieving green chemistry goals. A number of important process chemicals can be synthesized in engineered organisms, such as shikimate, a Tamiflu precursor that Roche produces by fermentation in bacteria. Click chemistry is often cited as a style of chemical synthesis that is consistent with the goals of green chemistry. The concept of 'green pharmacy' has recently been articulated based on similar principles.

Carbon dioxide as blowing agent

In 1996, Dow Chemical won the Greener Reaction Conditions Award for its 100% carbon dioxide blowing agent for polystyrene foam production. Polystyrene foam is a common material used in packing and food transportation; seven hundred million pounds are produced each year in the United States alone. Traditionally, CFCs and other ozone-depleting chemicals were used in the production of the foam sheets, presenting a serious environmental hazard. Flammable, explosive, and in some cases toxic hydrocarbons have also been used as CFC replacements, but they present their own problems. Dow Chemical discovered that supercritical carbon dioxide works equally well as a blowing agent, without the need for hazardous substances, allowing the polystyrene to be more easily recycled. The CO2 used in the process is reused from other industries, so the net carbon released by the process is zero.

Hydrazine

Addressing principle #2 is the peroxide process for producing hydrazine without cogenerating salt. Hydrazine is traditionally produced by the Olin Raschig process from sodium hypochlorite (the active ingredient in many bleaches) and ammonia. The net reaction produces one equivalent of sodium chloride for every equivalent of the targeted product hydrazine:

NaOCl + 2 NH3 → H2N-NH2 + NaCl + H2O

In the greener peroxide process hydrogen peroxide is employed as the oxidant and the side product is water. The net conversion follows:

2 NH3 + H2O2 → H2N-NH2 + 2 H2O

Addressing principle #4, this process does not require auxiliary extracting solvents. Methyl ethyl ketone is used as a carrier for the hydrazine; the intermediate ketazine phase separates from the reaction mixture, facilitating workup without the need for an extracting solvent.
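
The atom economy advantage of the peroxide route over the Raschig route (principle #2) can be quantified directly from the two net reactions above. The short sketch below computes atom economy, the percentage of reactant mass that ends up in the desired product, using approximate standard molar masses; it is a simplification that ignores excess reagents, solvents, and any downstream use of the coproducts.

    # Atom economy = molar mass of desired product / total molar mass of
    # reactants (per the balanced equation) x 100. Molar masses are
    # approximate standard values in g/mol.
    M = {"NaOCl": 74.44, "NH3": 17.03, "H2O2": 34.01,
         "N2H4": 32.05, "NaCl": 58.44, "H2O": 18.02}

    def atom_economy(reactants, product):
        """reactants: dict mapping species to stoichiometric coefficient."""
        reactant_mass = sum(coeff * M[s] for s, coeff in reactants.items())
        return 100.0 * M[product] / reactant_mass

    # Olin Raschig process: NaOCl + 2 NH3 -> N2H4 + NaCl + H2O
    raschig = atom_economy({"NaOCl": 1, "NH3": 2}, "N2H4")
    # Peroxide process:     2 NH3 + H2O2 -> N2H4 + 2 H2O
    peroxide = atom_economy({"NH3": 2, "H2O2": 1}, "N2H4")

    print(f"Raschig route:  {raschig:.0f}% atom economy")   # about 30%
    print(f"Peroxide route: {peroxide:.0f}% atom economy")  # about 47%

On this measure the peroxide route incorporates a substantially larger fraction of the input mass into hydrazine, and its only coproduct is water rather than an equivalent of salt.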

1,3-Propanediol

Addressing principle #7 is a green route to 1,3-propanediol, which is traditionally generated from petrochemical precursors. It can be produced from renewable precursors via the bioseparation of 1,3-propanediol using a genetically modified strain of E. coli. This diol is used to make new polyesters for the manufacture of carpets.

Lactide

In 2002, Cargill Dow (now NatureWorks) won the Greener Reaction Conditions Award for its improved method for the polymerization of polylactic acid. Unfortunately, lactide-based polymers do not perform well, and the project was discontinued by Dow soon after the award. Lactic acid is produced by fermenting corn and is converted to lactide, the cyclic dimer ester of lactic acid, using an efficient tin-catalyzed cyclization. The L,L-lactide enantiomer is isolated by distillation and polymerized in the melt to make a crystallizable polymer, which has applications including textiles and apparel, cutlery, and food packaging. The NatureWorks PLA process substitutes renewable materials for petroleum feedstocks, does not require the hazardous organic solvents typical of other PLA processes, and results in a high-quality polymer that is recyclable and compostable.

Carpet tile backings

In 2003, Shaw Industries selected a combination of polyolefin resins as the base polymer of choice for EcoWorx because of the low toxicity of its feedstocks, superior adhesion properties, dimensional stability, and recyclability. The EcoWorx compound also had to be designed to be compatible with nylon carpet fiber. Although EcoWorx may be recovered from any fiber type, nylon-6 provides a significant advantage: polyolefins are compatible with known nylon-6 depolymerization methods, whereas PVC interferes with those processes. Nylon-6 chemistry is well known and is not addressed in first-generation production. From its inception, EcoWorx met all of the design criteria necessary to satisfy the needs of the marketplace from a performance, health, and environmental standpoint. Research indicated that separation of the fiber and backing through elutriation, grinding, and air separation was the best way to recover the face and backing components, but an infrastructure for returning postconsumer EcoWorx to the elutriation process was needed. Research also indicated that postconsumer carpet tile had a positive economic value at the end of its useful life. EcoWorx is recognized by MBDC as a certified cradle-to-cradle design.

Transesterification of fats

In 2005, Archer Daniels Midland (ADM) and Novozymes won the Greener Synthetic Pathways Award for their enzymatic interesterification process. In response to the U.S. Food and Drug Administration (FDA) mandating the labeling of trans fats on nutritional information by January 1, 2006, Novozymes and ADM worked together to develop a clean, enzymatic process for the interesterification of oils and fats by interchanging saturated and unsaturated fatty acids. The result is commercially viable products without trans fats. In addition to the human health benefits of eliminating trans fats, the process reduces the use of toxic chemicals and water, prevents the formation of vast amounts of byproducts, and reduces the amount of fats and oils wasted.

Bio-succinic acid

In 2011, the Outstanding Green Chemistry Accomplishments by a Small Business Award went to BioAmber Inc. for integrated production and downstream applications of bio-based succinic acid. Succinic acid is a platform chemical that is an important starting material in the formulation of everyday products. Traditionally, succinic acid is produced from petroleum-based feedstocks. BioAmber developed a process and technology that produces succinic acid from the fermentation of renewable feedstocks at lower cost and lower energy expenditure than the petroleum equivalent, while sequestering CO2 rather than emitting it. However, lower oil prices drove the company into bankruptcy, and bio-based succinic acid is now barely produced.

Laboratory chemicals

Several laboratory chemicals are controversial from the perspective of green chemistry. The Massachusetts Institute of Technology created a "Green" Alternatives Wizard to help identify alternatives. Ethidium bromide, xylene, mercury, and formaldehyde have been identified as "worst offenders" for which alternatives exist. Solvents in particular make a large contribution to the environmental impact of chemical manufacturing, and there is a growing focus on introducing greener solvents at the earliest stages of process development: laboratory-scale reaction and purification methods. In the pharmaceutical industry, both GSK and Pfizer have published solvent selection guides for their drug discovery chemists.

Legislation

The EU

In 2007, the EU put in place the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program, which requires companies to provide data showing that their products are safe. The regulation (1907/2006) covers not only the assessment of chemicals' hazards and of the risks arising during their use, but also includes measures for banning or restricting/authorising uses of specific substances. ECHA, the EU chemicals agency in Helsinki, implements the regulation, while enforcement lies with the EU member states.

United States

The United States formed the Environmental Protection Agency (EPA) in 1970 to protect human and environmental health by creating and enforcing environmental regulation. Green chemistry builds on the EPA's goals by encouraging chemists and engineers to design chemicals, processes, and products that avoid the creation of toxins and waste.

The U.S. law that governs the majority of industrial chemicals (excluding pesticides, foods, and pharmaceuticals) is the Toxic Substances Control Act (TSCA) of 1976. Examining the role of regulatory programs in shaping the development of green chemistry in the United States, analysts have revealed structural flaws and long-standing weaknesses in TSCA; for example, a 2006 report to the California Legislature concluded that TSCA has produced a domestic chemicals market that discounts the hazardous properties of chemicals relative to their function, price, and performance. Scholars have argued that such market conditions represent a key barrier to the scientific, technical, and commercial success of green chemistry in the U.S., and that fundamental policy changes are needed to correct these weaknesses.

Passed in 1990, the Pollution Prevention Act helped foster new approaches for dealing with pollution by preventing environmental problems before they happen.

Green chemistry grew in popularity in the United States after the Pollution Prevention Act of 1990 was passed. The Act declared that pollution should be lowered by improving the design of products and processes rather than through treatment and disposal. These regulations encouraged chemists to reconsider pollution and to research ways of limiting the release of toxic substances into the environment. In 1991, the EPA Office of Pollution Prevention and Toxics created a research grant program encouraging the redesign of chemical products and processes to limit their impact on the environment and human health. The EPA also hosts the annual Green Chemistry Challenge to incentivize the economic and environmental benefits of developing and using green chemistry.

In 2008, the State of California approved two laws aiming to encourage green chemistry, launching the California Green Chemistry Initiative. One of these statutes required California's Department of Toxic Substances Control (DTSC) to develop new regulations to prioritize "chemicals of concern" and promote the substitution of hazardous chemicals with safer alternatives. The resulting regulations took effect in 2013, initiating DTSC's Safer Consumer Products Program.

Contested definition

There are ambiguities in the definition of green chemistry and in how it is understood among broader science, policy, and business communities. Even within chemistry, researchers have used the term "green chemistry" to describe a range of work independently of the framework put forward by Anastas and Warner (i.e., the 12 principles). While not all uses of the term are legitimate (see greenwashing), many are, and the authoritative status of any single definition is uncertain. More broadly, the idea of green chemistry can easily be linked (or confused) with related concepts such as green engineering, environmental design, or sustainability in general. The complexity and multifaceted nature of green chemistry make it difficult to devise clear and simple metrics. As a result, "what is green" is often open to debate.

Climate inertia

From Wikipedia, the free encyclopedia
Societal elements of inertia work to prevent abrupt shifts within pathways of greenhouse gas emissions, while physical inertia of the Earth system acts to delay the surface temperature response.

Climate inertia or climate change inertia is the phenomenon by which a planet's climate system shows a resistance or slowness to deviate away from a given dynamic state. It can accompany stability and other effects of feedback within complex systems, and includes the inertia exhibited by physical movements of matter and exchanges of energy. The term is a colloquialism used to encompass and loosely describe a set of interactions that extend the timescales around climate sensitivity. Inertia has been associated with the drivers of, and the responses to, climate change.

Increasing fossil-fuel carbon emissions are a primary inertial driver of change to Earth's climate during recent decades, and have risen along with the collective socioeconomic inertia of its 8 billion human inhabitants. Many system components have exhibited inertial responses to this driver, also known as a forcing. The rate of rise in global surface temperature (GST) has especially been resisted by 1) the thermal inertia of the planet's surface, primarily its ocean, and 2) inertial behavior within its carbon cycle feedback. Various other biogeochemical feedbacks have contributed further resiliency. Energy stored in the ocean following the inertial responses principally determines near-term irreversible change known as climate commitment.

Earth's inertial responses are important because they provide the planet's diversity of life and its human civilization further time to adapt to an acceptable degree of planetary change. However, unadaptable change, such as that accompanying some tipping points, may only be avoidable with early understanding and mitigation of the risk of such dangerous outcomes. This is because inertia also delays much surface warming unless and until action is taken to rapidly reduce emissions. An aim of integrated assessment modelling, summarized for example in the Shared Socioeconomic Pathways (SSPs), is to explore Earth system risks that accompany large inertia and uncertainty in the trajectory of human drivers of change.

Inertial timescales

Response times to climate forcing

Earth System / Component          Time Constant (years)    Response Modes
Atmosphere
  Water Vapor and Clouds          10⁻²–10                  EC, WC
  Trace Gases                     10⁻¹–10⁸                 CC
Hydrosphere
  Ocean Mixed Layer               10⁻¹–10                  EC, WC, CC
  Deep Ocean                      10–10³                   EC, CC
Lithosphere
  Land Surface and Soils          10⁻¹–10²                 EC, WC, CC
  Subterranean Sediments          10⁴–10⁹                  CC
Cryosphere
  Glaciers                        10⁻¹–10                  EC, WC
  Sea Ice                         10⁻¹–10                  EC, WC
  Ice Sheets                      10³–10⁶                  EC, WC
Biosphere
  Upper Marine                    10⁻¹–10²                 CC
  Terrestrial                     10⁻¹–10²                 WC, CC

EC = Energy Cycle   WC = Water Cycle   CC = Carbon Cycle

The paleoclimate record shows that Earth's climate system has evolved along various pathways and with multiple timescales. Its relatively stable states, which can persist for many millennia, have been interrupted by short to long transitional periods of relative instability. Studies of climate sensitivity and inertia are concerned with quantifying the most basic manner in which a sustained forcing perturbation will cause the system to deviate within, or initially away from, its relatively stable state of the present Holocene epoch.

"Time constants" are useful metrics for summarizing the first-order (linear) impacts of the various inertial phenomena within both simple and complex systems. They quantify the time after which 63% of a full output response occurs following the step change of an input. They are observed from data or can be estimated from numerical simulation or a lumped system analysis. In climate science these methods can be applied to Earth's energy cycle, water cycle, carbon cycle and elsewhere. For example, heat transport and storage in the ocean, cryosphere, land and atmosphere are elements within a lumped thermal analysis. Response times to radiative forcing via the atmosphere typically increase with depth below the surface.

Inertial time constants indicate a base rate for forced changes, but lengthy values provide no guarantee of long-term system evolution along a smooth pathway. Numerous higher-order tipping elements having various trigger thresholds and transition timescales have been identified within Earth's present state. Such events might precipitate a nonlinear rearrangement of internal energy flows along with more rapid shifts in climate and/or other systems at regional to global scale.

Climate response time

The response of global surface temperature (GST) to a step-like doubling of the atmospheric CO2 concentration, and its resultant forcing, is defined as the Equilibrium Climate Sensitivity (ECS). The ECS response extends over short and long timescales; however, the main time constant associated with ECS has been identified by Jule Charney, James Hansen, and others as a useful metric to help guide policymaking. RCPs, SSPs, and other similar scenarios have also been used by researchers to simulate the rate of forced climate changes. By definition, ECS presumes that ongoing emissions will offset the ocean and land carbon sinks following the step-wise perturbation in atmospheric CO2.

ECS response time is proportional to ECS and is principally regulated by the thermal inertia of the uppermost mixed layer and adjacent lower ocean layers. Main time constants fitted to the results from climate models have ranged from a few decades when ECS is low, to as long as a century when ECS is high. A portion of the variation between estimates arises from different treatments of heat transport into the deep ocean.
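
A simple way to see why the response time scales with ECS is a single-box (lumped) energy balance model, C·dT/dt = F − λT, where C is an effective heat capacity, F the forcing from doubled CO2, and λ the climate feedback parameter. Then ECS = F/λ and the e-folding time constant is τ = C/λ = C·ECS/F, so a higher ECS (smaller λ) implies a longer response time. The constants in the sketch below are round, illustrative values (roughly a 100 m ocean mixed layer), not numbers from any particular climate model.

    # Single-box energy balance: C dT/dt = F - lambda * T
    #   ECS = F / lambda   (equilibrium warming for forcing F)
    #   tau = C / lambda   (e-folding response time)
    # Constants are illustrative, not tuned to any specific model.
    SECONDS_PER_YEAR = 3.15e7
    F_2X = 3.7        # W m^-2, approximate forcing from doubled CO2
    C_MIX = 4.2e8     # J m^-2 K^-1, roughly a 100 m ocean mixed layer

    def response_time_years(ecs_kelvin):
        lam = F_2X / ecs_kelvin          # feedback parameter, W m^-2 K^-1
        return (C_MIX / lam) / SECONDS_PER_YEAR

    for ecs in (2.0, 3.0, 4.5):
        print(f"ECS = {ecs} K  ->  mixed-layer tau ~ {response_time_years(ecs):.0f} years")

With only the mixed layer included, the time constant comes out at roughly a decade; representing heat uptake by the deeper ocean, as the models discussed above do in different ways, stretches the effective response toward the multi-decadal to century scale.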

Components

Thermal inertia

The observed accumulation of energy in the oceanic, land, ice, and atmospheric components of Earth's climate system since 1960. The rate of rise has been partially slowed by the system's thermal inertia.

Thermal inertia is a term that refers to the observed delays in a body's temperature response during heat transfers. A body with large thermal inertia can store a large amount of energy because of its heat capacity, and can effectively transmit energy according to its heat transfer coefficient. The consequences of thermal inertia are inherently expressed through many climate change feedbacks because of their temperature dependencies, including through the strong stabilizing feedback of the Planck response.

Ocean inertia

The global ocean is Earth's largest thermal reservoir and functions to regulate the planet's climate, acting as both a sink and a source of energy. The ocean's thermal inertia delays some global warming for decades or centuries. It is accounted for in global climate models and has been confirmed via measurements of ocean heat content. The observed transient climate sensitivity is proportional to the thermal inertia timescale of the shallower ocean.

Ice sheet inertia

Even after CO2 emissions are lowered, the melting of ice sheets will persist and further increase sea-level rise for centuries. The slower transport of heat into the extreme deep ocean, subsurface land sediments, and thick ice sheets will continue until a new Earth system equilibrium has been reached.

Permafrost also takes longer to respond to a warming planet because of thermal inertia, owing to its ice-rich materials and its thickness.

Inertia from carbon cycle feedbacks

The impulse response following a 100 GtC injection of CO2 into Earth's atmosphere. The relative inertial effect of positive vs. negative feedback during early years is indicated by the pulse fraction which ultimately remains.

Earth's carbon cycle feedback includes a destabilizing positive feedback (identified as the climate–carbon feedback), which prolongs warming for centuries, and a stabilizing negative feedback (identified as the concentration–carbon feedback), which limits the ultimate warming response to fossil carbon emissions. The near-term effect following emissions is asymmetric, with the latter mechanism being about four times larger, and this results in a significant net slowing contribution to the inertia of the climate system during the first few decades following emissions.
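
The combined effect of these feedbacks on an emitted pulse of CO2 is often summarized by an impulse response function (IRF): the fraction of the pulse remaining airborne after t years is fitted as a constant plus a sum of decaying exponentials. The coefficients in the sketch below follow the approximate form reported in multi-model studies of a 100 GtC pulse (e.g., Joos et al., 2013); treat the exact numbers as illustrative rather than authoritative.

    import math

    # Fraction of a CO2 pulse remaining in the atmosphere t years after
    # emission, modelled as IRF(t) = a0 + sum_i a_i * exp(-t / tau_i).
    # Coefficients are approximate multi-model values, shown for illustration.
    A0 = 0.217
    TERMS = [(0.224, 394.4), (0.282, 36.5), (0.276, 4.3)]  # (a_i, tau_i in years)

    def airborne_fraction(t_years):
        return A0 + sum(a * math.exp(-t_years / tau) for a, tau in TERMS)

    for t in (0, 20, 100, 1000):
        print(f"t = {t:>4} yr: about {airborne_fraction(t):.0%} of the pulse remains")

In this picture the stabilizing concentration–carbon feedback removes most of the pulse within the first few decades through ocean and land uptake, while the slower terms, reinforced by the climate–carbon feedback, leave roughly a fifth of the pulse in the atmosphere for many centuries.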

Ecological inertia

The effects of climate change appear quickly in some ecosystems and more slowly in others. For instance, coral bleaching can occur in a single warm season, while trees may be able to persist for decades under a changing climate yet be unable to regenerate. Changes in the frequency of extreme weather events could consequently disrupt ecosystems, depending on the individual response times of species.

Policy implications of inertia

The IPCC concluded that the inertia and uncertainty of the climate system, ecosystems, and socioeconomic systems imply that margins for safety should be considered when setting strategies, targets, and timetables for avoiding dangerous interference through climate change. The IPCC further concluded in its 2001 report that the stabilization of atmospheric CO2 concentration, temperature, or sea level is affected by:

  • The inertia of the climate system, which will cause climate change to continue for a period after mitigation actions are implemented.
  • Uncertainty regarding the location of possible thresholds of irreversible change and the behavior of the system in their vicinity.
  • The time lags between adoption of mitigation goals and their achievement.

Human extinction

From Wikipedia, the free encyclopedia