
Monday, March 30, 2015

Genetic engineering


From Wikipedia, the free encyclopedia

Genetic engineering, also called genetic modification, is the direct manipulation of an organism's genome using biotechnology. New DNA may be inserted in the host genome by first isolating and copying the genetic material of interest using molecular cloning methods to generate a DNA sequence, or by synthesizing the DNA, and then inserting this construct into the host organism. Genes may be removed, or "knocked out", using a nuclease. Gene targeting is a different technique that uses homologous recombination to change an endogenous gene, and can be used to delete a gene, remove exons, add a gene, or introduce point mutations.

An organism that is generated through genetic engineering is considered to be a genetically modified organism (GMO). The first GMOs were bacteria in 1973 and GM mice were generated in 1974. Insulin-producing bacteria were commercialized in 1982 and genetically modified food has been sold since 1994. GloFish, the first GMO designed as a pet, was first sold in the United States in December 2003.[1]

Genetic engineering techniques have been applied in numerous fields including research, agriculture, industrial biotechnology, and medicine. Enzymes used in laundry detergents and medicines such as insulin and human growth hormone are now manufactured in GM cells; experimental GM cell lines and GM animals such as mice or zebrafish are used for research purposes; and genetically modified crops have been commercialized.

Definition


Comparison of conventional plant breeding with transgenic and cisgenic genetic modification.

Genetic engineering alters the genetic make-up of an organism using techniques that remove heritable material or that introduce DNA prepared outside the organism either directly into the host or into a cell that is then fused or hybridized with the host.[4] This involves using recombinant nucleic acid (DNA or RNA) techniques to form new combinations of heritable genetic material followed by the incorporation of that material either indirectly through a vector system or directly through micro-injection, macro-injection and micro-encapsulation techniques.

Genetic engineering does not normally include traditional animal and plant breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process.[4] However, the European Commission has also defined genetic engineering broadly as including selective breeding and other means of artificial selection.[5] Cloning and stem cell research, although not considered genetic engineering,[6] are closely related and genetic engineering can be used within them.[7] Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesized material from raw materials into an organism.[8]

If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species or a species that can naturally breed with the host is used the resulting organism is called cisgenic.[9] Genetic engineering can also be used to remove genetic material from the target organism, creating a gene knockout organism.[10] In Europe genetic modification is synonymous with genetic engineering while within the United States of America it can also refer to conventional breeding methods.[11][12] The Canadian regulatory system is based on whether a product has novel features regardless of method of origin. In other words, a product is regulated as genetically modified if it carries some trait not previously found in the species whether it was generated using traditional breeding methods (e.g., selective breeding, cell fusion, mutation breeding) or genetic engineering.[13][14][15] Within the scientific community, the term genetic engineering is not commonly used; more specific terms such as transgenic are preferred.

Genetically modified organisms

Plants, animals or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or GMOs.[16] Bacteria were the first organisms to be genetically modified. Plasmid DNA containing new genes can be inserted into the bacterial cell and the bacteria will then express those genes. These genes can code for medicines or enzymes that process food and other substrates.[17][18] Plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines.[19] Most commercialised GMOs are insect-resistant and/or herbicide-tolerant crop plants.[20]
Genetically modified animals have been used for research, as model animals and for the production of agricultural or pharmaceutical products. They include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk.[21]

History

Humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection as contrasted with natural selection, and more recently through mutagenesis. Genetic engineering as the direct manipulation of DNA by humans outside breeding and mutations has only existed since the 1970s. The term "genetic engineering" was first coined by Jack Williamson in his science fiction novel Dragon's Island, published in 1951,[22] one year before DNA's role in heredity was confirmed by Alfred Hershey and Martha Chase,[23] and two years before James Watson and Francis Crick showed that the DNA molecule has a double-helix structure.

In 1974 Rudolf Jaenisch created the first GM animal.

In 1972 Paul Berg created the first recombinant DNA molecules by combining DNA from the monkey virus SV40 with that of the lambda virus.[24] In 1973 Herbert Boyer and Stanley Cohen created the first transgenic organism by inserting antibiotic resistance genes into the plasmid of an E. coli bacterium.[25][26] A year later Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world’s first transgenic animal.[27] These achievements led to concerns in the scientific community about potential risks from genetic engineering, which were first discussed in depth at the Asilomar Conference in 1975. One of the main recommendations from this meeting was that government oversight of recombinant DNA research should be established until the technology was deemed safe.[28][29]

In 1976 Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson and a year later the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978.[30] In 1980, the U.S. Supreme Court in the Diamond v. Chakrabarty case ruled that genetically altered life could be patented.[31] The insulin produced by bacteria, branded Humulin, was approved for release by the Food and Drug Administration in 1982.[32]

In the 1970s graduate student Steven Lindow of the University of Wisconsin–Madison with D.C. Arny and C. Upper found a bacterium he identified as P. syringae that played a role in ice nucleation and in 1977, he discovered a mutant ice-minus strain. Later, he successfully created a recombinant ice-minus strain.[33] In 1983, a biotech company, Advanced Genetic Sciences (AGS) applied for U.S. government authorization to perform field tests with the ice-minus strain of P. syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges.[34] In 1987, the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment[35] when a strawberry field and a potato field in California were sprayed with it.[36] Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher".[35]

The first field trials of genetically engineered plants occurred in France and the USA in 1986, when tobacco plants were engineered to be resistant to herbicides.[37] The People's Republic of China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992.[38] In 1994 Calgene attained approval to commercially release the Flavr Savr tomato, a tomato engineered to have a longer shelf life.[39] In 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe.[40] In 1995, Bt Potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the USA.[41] In 2009, 11 transgenic crops were grown commercially in 25 countries, the largest of which by area grown were the USA, Brazil, Argentina, India, Canada, China, Paraguay and South Africa.[42]

In the late 1980s and early 1990s, guidance on assessing the safety of genetically engineered plants and food emerged from organizations including the FAO and WHO.[43][44][45][46]

In 2010, scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. The researchers added the new genome to bacterial cells and selected for cells that contained the new genome. To do this the cells undergo a process called resolution, where during bacterial cell division one new cell receives the original DNA genome of the bacteria, whilst the other receives the new synthetic genome. When this cell replicates it uses the synthetic genome as its template. The resulting bacterium the researchers developed, named Synthia, was the world's first synthetic life form.[47][48]

On 19 March 2015, scientists, including an inventor of CRISPR, urged a worldwide moratorium on using gene editing methods to genetically engineer the human genome in a way that can be inherited, writing “scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans” until the full implications “are discussed among scientific and governmental organizations.”[49][50][51][52]

Process

The first step is to choose and isolate the gene that will be inserted into the genetically modified organism. As of 2012, most commercialised GM plants have genes transferred into them that provide protection against insects or tolerance to herbicides.[53] The gene can be isolated using restriction enzymes to cut DNA into fragments and gel electrophoresis to separate them out according to length.[54] Polymerase chain reaction (PCR) can also be used to amplify a gene segment, which can then be isolated through gel electrophoresis.[55] If the chosen gene or the donor organism's genome has been well studied, it may already be present in a genetic library. If the DNA sequence is known, but no copies of the gene are available, it can be artificially synthesized.[56]
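To make the cutting and sorting steps concrete, here is a minimal Python sketch of a restriction digest: it cuts a DNA string at every occurrence of a recognition site and lists the resulting fragments, much as gel electrophoresis would separate them by length. The enzyme site shown (EcoRI, GAATTC) and the example sequence are invented placeholders, not data from the article.

    # Minimal sketch: cut a DNA sequence at every EcoRI recognition site (GAATTC)
    # and report the resulting fragments, as a gel would separate them by length.
    def digest(sequence: str, site: str = "GAATTC", cut_offset: int = 1) -> list[str]:
        """Split `sequence` at each occurrence of `site`, cutting `cut_offset` bases
        into the recognition site (EcoRI cuts G^AATTC on the top strand)."""
        fragments, start = [], 0
        pos = sequence.find(site)
        while pos != -1:
            cut = pos + cut_offset
            fragments.append(sequence[start:cut])
            start = cut
            pos = sequence.find(site, pos + 1)
        fragments.append(sequence[start:])
        return fragments

    dna = "ATTCGGAATTCAGGCTTAAGCGAATTCTTGACCA"   # invented example sequence
    for frag in sorted(digest(dna), key=len, reverse=True):   # longest first, like reading a gel
        print(len(frag), frag)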
The gene to be inserted into the genetically modified organism must be combined with other genetic elements in order for it to work properly. The gene can also be modified at this stage for better expression or effectiveness. In addition to the gene to be inserted, most constructs contain a promoter and terminator region as well as a selectable marker gene. The promoter region initiates transcription of the gene and can be used to control the location and level of gene expression, while the terminator region ends transcription. The selectable marker, which in most cases confers antibiotic resistance to the organism it is expressed in, is needed to determine which cells are transformed with the new gene. The constructs are made using recombinant DNA techniques, such as restriction digests, ligations and molecular cloning.[57] The manipulation of the DNA generally occurs within a plasmid.
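As a rough illustration of the construct just described, the sketch below represents its parts as a simple Python data structure. The element names are generic placeholders; a real construct would carry actual DNA sequences for each part.

    from dataclasses import dataclass

    # Minimal sketch of the parts of a typical expression construct: promoter,
    # gene of interest, terminator and selectable marker, in the order they
    # would be assembled in the plasmid. Names are placeholders, not real parts.
    @dataclass
    class Construct:
        promoter: str           # initiates transcription; controls where and how strongly the gene is expressed
        gene_of_interest: str   # the coding sequence being introduced
        terminator: str         # ends transcription
        selectable_marker: str  # e.g. an antibiotic-resistance gene used to find transformed cells

        def elements(self) -> list[str]:
            return [self.promoter, self.gene_of_interest, self.terminator, self.selectable_marker]

    construct = Construct(
        promoter="promoter_region",
        gene_of_interest="gene_of_interest",
        terminator="terminator_region",
        selectable_marker="antibiotic_resistance_marker",
    )
    print(" -> ".join(construct.elements()))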

The most common form of genetic engineering involves inserting new genetic material randomly within the host genome.[citation needed] Other techniques allow new genetic material to be inserted at a specific location in the host genome or generate mutations at desired genomic loci capable of knocking out endogenous genes. The technique of gene targeting uses homologous recombination to target desired changes to a specific endogenous gene. This tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. The frequency of gene targeting can be greatly enhanced with the use of engineered nucleases such as zinc finger nucleases,[58][59] engineered homing endonucleases,[60][61] or nucleases created from TAL effectors.[62][63] In addition to enhancing gene targeting, engineered nucleases can also be used to introduce mutations at endogenous genes that generate a gene knockout.[64][65]

Transformation

A. tumefaciens attaching itself to a carrot cell

Only about 1% of bacteria are naturally capable of taking up foreign DNA. However, this ability can be induced in other bacteria via stress (e.g. thermal or electric shock), thereby increasing the cell membrane's permeability to DNA; DNA that is taken up can either integrate with the genome or exist as extrachromosomal DNA. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors.[66] In plants the DNA is generally inserted using Agrobacterium-mediated recombination or biolistics.[67]

In Agrobacterium-mediated recombination, the plasmid construct contains T-DNA, the DNA responsible for insertion into the host plant's genome. This plasmid is transformed into Agrobacterium containing no plasmids prior to infecting the plant cells. The Agrobacterium will then naturally insert the genetic material into the plant cells.[68] In biolistic transformation, particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. Some genetic material will enter the cells and transform them. This method can be used on plants that are not susceptible to Agrobacterium infection and also allows transformation of plant plastids. Another transformation method for plant and animal cells is electroporation. Electroporation involves subjecting the plant or animal cell to an electric shock, which can make the cell membrane permeable to plasmid DNA. In some cases the electroporated cells will incorporate the DNA into their genome. Due to the damage caused to the cells and DNA, the transformation efficiency of biolistics and electroporation is lower than that of Agrobacterium-mediated transformation and microinjection.[69]

As often only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. As bacteria consist of a single cell and reproduce clonally, regeneration is not necessary. In plants this is accomplished through the use of tissue culture. Each plant species has different requirements for successful regeneration through tissue culture. If successful, an adult plant is produced that contains the transgene in every cell. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Selectable markers are used to easily differentiate transformed from untransformed cells. These markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant.[70] When offspring are produced, they can be screened for the presence of the gene. All offspring from the first generation will be heterozygous for the inserted gene and must be mated together to produce a homozygous animal.
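The last point, that first-generation transgenic animals are heterozygous and must be inter-crossed to obtain homozygotes, follows from simple Mendelian ratios. A minimal Python sketch of that cross, with "T" standing for the transgene-bearing chromosome and "t" for its unmodified counterpart (an invented notation):

    from collections import Counter
    from itertools import product

    # Minimal sketch: cross two animals heterozygous for a transgene and count
    # the expected genotype ratio among offspring (1/4 are homozygous TT).
    parent1_gametes = ["T", "t"]
    parent2_gametes = ["T", "t"]

    offspring = Counter("".join(sorted(pair)) for pair in product(parent1_gametes, parent2_gametes))
    total = sum(offspring.values())
    for genotype, count in sorted(offspring.items()):
        print(f"{genotype}: {count}/{total}")   # expected: TT 1/4, Tt 1/2, tt 1/4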

Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene. These tests can also confirm the chromosomal location and copy number of the inserted gene. The presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue, so methods that look for and measure the gene products (RNA and protein) are also used. These include northern hybridization, quantitative RT-PCR, Western blot, immunofluorescence, ELISA and phenotypic analysis. For stable transformation the gene should be passed to the offspring in a Mendelian inheritance pattern, so the organism's offspring are also studied.
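In silico, the simplest analogue of the presence and copy-number checks described above is to search an assembled genome sequence for the transgene, as in the sketch below with toy strings; in practice, as the text notes, this is established experimentally by PCR, Southern hybridization and sequencing.

    # Minimal sketch: given an assembled genome sequence and the transgene sequence,
    # report whether the transgene is present, how many copies occur, and where.
    # Both sequences are invented placeholders.
    def find_copies(genome: str, transgene: str) -> list[int]:
        positions, start = [], genome.find(transgene)
        while start != -1:
            positions.append(start)
            start = genome.find(transgene, start + 1)
        return positions

    genome = "CCGTACGTTAGGATCCGGAAGTACGTTAGC"
    transgene = "ACGTTAG"
    hits = find_copies(genome, transgene)
    print(f"present: {bool(hits)}, copy number: {len(hits)}, positions: {hits}")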

Genome editing

Genome editing is a type of genetic engineering in which DNA is inserted, replaced, or removed from a genome using artificially engineered nucleases, or "molecular scissors." The nucleases create specific double-stranded breaks (DSBs) at desired locations in the genome, and harness the cell's endogenous mechanisms to repair the induced break by the natural processes of homologous recombination (HR) and nonhomologous end-joining (NHEJ). There are currently four families of engineered nucleases: meganucleases, zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the CRISPR/Cas system.[71][72]
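As one concrete example of how such nucleases are pointed at a desired location, the sketch below scans a DNA string for candidate Cas9 target sites. It assumes the common SpCas9 requirement of a 20-nucleotide protospacer immediately followed by an NGG PAM, a detail not spelled out in the article, and the sequence is an invented example.

    import re

    # Minimal sketch: find candidate SpCas9 target sites, i.e. a 20-nt protospacer
    # immediately followed by an "NGG" PAM (an assumption about Cas9 specifically).
    def cas9_sites(sequence: str) -> list[tuple[int, str, str]]:
        sites = []
        for match in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", sequence):
            sites.append((match.start(), match.group(1), match.group(2)))
        return sites

    dna = "TTGACCATGCCGTTAAGCTAGGCTACGATCGTACGTAGGCATGCA"   # invented example sequence
    for pos, protospacer, pam in cas9_sites(dna):
        print(f"position {pos}: protospacer={protospacer} PAM={pam}")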

Applications

Genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms.

Medicine

In medicine, genetic engineering has been used to mass-produce insulin, human growth hormones, follistim (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs.[73][74] Vaccination generally involves injecting weak, live, killed or inactivated forms of viruses or their toxins into the person being immunized.[75] Genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences.[76] Mouse hybridomas, cells fused together to create monoclonal antibodies, have been humanised through genetic engineering to create human monoclonal antibodies.[77] Genetic engineering has shown promise for treating certain forms of cancer.[78][79]

Genetic engineering is used to create animal models of human diseases. Genetically modified mice are the most common genetically engineered animal model.[80] They have been used to study and model cancer (the oncomouse), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and Parkinson's disease.[81] Potential cures can be tested against these mouse models. Genetically modified pigs have also been bred with the aim of increasing the success of pig-to-human organ transplantation.[82]

Gene therapy is the genetic engineering of humans by replacing defective human genes with functional copies. This can occur in somatic tissue or germline tissue. If the gene is inserted into the germline tissue it can be passed down to that person's descendants.[83][84] Gene therapy has been successfully used to treat multiple diseases, including X-linked SCID,[85] chronic lymphocytic leukemia (CLL),[86] and Parkinson's disease.[87] In 2012, Glybera became the first gene therapy treatment to be approved for clinical use in either Europe or the United States after its endorsement by the European Commission.[88][89] There are also ethical concerns should the technology be used not just for treatment, but for enhancement, modification or alteration of a human being's appearance, adaptability, intelligence, character or behavior.[90] The distinction between cure and enhancement can also be difficult to establish.[91] Transhumanists consider the enhancement of humans desirable.

Research


Human cells in which some proteins are fused with green fluorescent protein to allow them to be visualised

Genetic engineering is an important tool for natural scientists. Genes and other genetic information from a wide range of organisms are transformed into bacteria for storage and modification, creating genetically modified bacteria in the process. Bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at -80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research.

Organisms are genetically engineered to discover the functions of certain genes. This could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. These experiments generally involve loss of function, gain of function, tracking and expression.
  • Loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. A knockout experiment involves the creation and manipulation of a DNA construct in vitro, which, in a simple knockout, consists of a copy of the desired gene, which has been altered such that it is non-functional. Embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. These stem cells are injected into blastocysts, which are implanted into surrogate mothers. This allows the experimenter to analyze the defects caused by this mutation and thereby determine the role of particular genes. It is used especially frequently in developmental biology. Another method, useful in organisms such as Drosophila (fruit fly), is to induce mutations in a large population and then screen the progeny for the desired mutation. A similar process can be used in both plants and prokaryotes.
  • Gain of function experiments, the logical counterpart of knockouts. These are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. The process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently.
  • Tracking experiments, which seek to gain information about the localization and interaction of the desired protein. One way to do this is to replace the wild-type gene with a 'fusion' gene, which is a juxtaposition of the wild-type gene with a reporting element such as green fluorescent protein (GFP) that will allow easy visualization of the products of the genetic modification. While this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment. More sophisticated techniques are now in development that can track protein products without mitigating their function, such as the addition of small sequences that will serve as binding motifs to monoclonal antibodies.
  • Expression studies aim to discover where and when specific proteins are produced. In these experiments, the DNA sequence before the DNA that codes for a protein, known as a gene's promoter, is reintroduced into an organism with the protein coding region replaced by a reporter gene such as GFP or an enzyme that catalyzes the production of a dye. Thus the time and place where a particular protein is produced can be observed. Expression studies can be taken a step further by altering the promoter to find which pieces are crucial for the proper expression of the gene and are actually bound by transcription factor proteins; this process is known as promoter bashing.
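A minimal sketch of the reporter swap described in the expression-studies item above: the gene's own promoter and terminator are kept, but its coding region is replaced by a reporter so that the reporter is produced wherever and whenever the promoter is active. All names and sequences below are hypothetical placeholders.

    # Minimal sketch: build a promoter-reporter construct by keeping a gene's own
    # regulatory regions and swapping its coding region for a reporter (here "GFP").
    def make_reporter_construct(promoter_seq: str, reporter_seq: str, terminator_seq: str) -> str:
        """Join promoter + reporter + terminator, discarding the original coding region."""
        return promoter_seq + reporter_seq + terminator_seq

    native_gene = {
        "promoter": "TATAATGGCCA",           # hypothetical promoter region
        "coding_region": "ATGAAACCCGGGTAA",  # original coding sequence (replaced)
        "terminator": "AATAAA",              # hypothetical terminator
    }
    gfp_coding = "ATGGTGAGCAAGGGC"           # placeholder fragment standing in for the GFP ORF

    reporter = make_reporter_construct(native_gene["promoter"], gfp_coding, native_gene["terminator"])
    print(reporter)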

Industrial

Using genetic engineering techniques one can transform microorganisms such as bacteria or yeast, or transform cells from multicellular organisms such as insects or mammals, with a gene coding for a useful protein, such as an enzyme, so that the transformed organism will overexpress the desired protein. One can manufacture mass quantities of the protein by growing the transformed organism in bioreactor equipment using techniques of industrial fermentation, and then purifying the protein.[92] Some genes do not work well in bacteria, so yeast, insect cells, or mammalian cells, each a eukaryote, can also be used.[93] These techniques are used to produce medicines such as insulin, human growth hormone, and vaccines, supplements such as tryptophan, and to aid in the production of food (chymosin in cheese making) and fuels.[94] Other applications involving genetically engineered bacteria being investigated involve making the bacteria perform tasks outside their natural cycle, such as making biofuels,[95] cleaning up oil spills, carbon and other toxic waste[96] and detecting arsenic in drinking water.[97]

Experimental, lab scale industrial applications

In materials science, a genetically modified virus has been used in an academic lab as a scaffold for assembling a more environmentally friendly lithium-ion battery.[98][99]

Bacteria have been engineered to function as sensors by expressing a fluorescent protein under certain environmental conditions.[100]

Agriculture

Bt-toxins present in peanut leaves (bottom image) protect it from extensive damage caused by European corn borer larvae (top image).[101]

One of the best-known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified organisms, such as genetically modified fish, which are used to produce genetically modified food and materials with diverse uses. There are four main goals in generating genetically modified crops.[102]

One goal, and the first to be realized commercially, is to provide protection from environmental threats, such as cold (in the case of Ice-minus bacteria), or pathogens, such as insects or viruses, and/or resistance to herbicides. There are also fungal and virus resistant crops developed or in development.[103][104] They have been developed to make the insect and weed management of crops easier and can indirectly increase crop yield.[105]

Another goal in generating GMOs is to modify the quality of produce by, for instance, increasing the nutritional value or providing more industrially useful qualities or quantities.[106] The Amflora potato, for example, produces a more industrially useful blend of starches. Cows have been engineered to produce more protein in their milk to facilitate cheese production.[107] Soybeans and canola have been genetically modified to produce healthier oils.[108][109]

Another goal consists of driving the GMO to produce materials that it does not normally make. One example is "pharming", which uses crops as bioreactors to produce vaccines, drug intermediates, or drugs themselves; the useful product is purified from the harvest and then used in the standard pharmaceutical production process.[110] Cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the FDA approved a drug produced in goat milk.[111][112]

Another goal in generating GMOs is to directly improve yield by accelerating growth, or making the organism more hardy (for plants, by improving salt, cold or drought tolerance).[106] Some agriculturally important animals have been genetically modified with growth hormones to increase their size.[113]

The genetic engineering of agricultural crops can increase growth rates and resistance to diseases caused by pathogens and parasites.[114] This is beneficial as it can greatly increase the production of food using fewer of the resources that would otherwise be required to feed the world's growing population. These modified crops could also reduce the usage of chemicals such as fertilizers and pesticides, and therefore decrease the severity and frequency of the damage produced by this chemical pollution.[114][115]

Ethical and safety concerns have been raised around the use of genetically modified food.[116] A major safety concern relates to the human health implications of eating genetically modified food, in particular whether toxic or allergic reactions could occur.[117] Gene flow into related non-transgenic crops, off target effects on beneficial organisms and the impact on biodiversity are important environmental issues.[118] Ethical concerns involve religious issues, corporate control of the food supply, intellectual property rights and the level of labeling needed on genetically modified products.

BioArt and entertainment

Genetic engineering is also being used to create BioArt.[119] Some bacteria have been genetically engineered to create black and white photographs.[120]

Genetic engineering has also been used to create novelty items such as lavender-colored carnations,[121] blue roses,[122] and glowing fish.[123][124]

Regulation

The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of genetically modified crops. There are differences in the regulation of GM crops between countries, with some of the most marked differences occurring between the USA and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.

Controversy

Critics have objected to the use of genetic engineering per se on several grounds, including ethical concerns, ecological concerns, and economic concerns raised by the fact that GM techniques and GM organisms are subject to intellectual property law. GMOs also are involved in controversies over GM food with respect to whether food produced from GM crops is safe, whether it should be labeled, and whether GM crops are needed to address the world's food needs. See the genetically modified food controversies article for discussion of issues about GM crops and GM food.
These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries.

Genetic recombination


From Wikipedia, the free encyclopedia


A current model of meiotic recombination, initiated by a double-strand break or gap, followed by pairing with a homologous chromosome and strand invasion to initiate the recombinational repair process. Repair of the gap can lead to crossover (CO) or non-crossover (NCO) of the flanking regions. CO recombination is thought to occur by the Double Holliday Junction (DHJ) model, illustrated on the right, above. NCO recombinants are thought to occur primarily by the Synthesis Dependent Strand Annealing (SDSA) model, illustrated on the left, above. Most recombination events appear to be the SDSA type.

Genetic recombination is the production of offspring with combinations of traits that differ from those found in either parent. In eukaryotes, genetic recombination during meiosis can lead, through sexual reproduction, to a novel set of genetic information that can be passed on through heredity from the parents to the offspring. Most recombination is naturally occurring. During meiosis in eukaryotes, genetic recombination involves the pairing of homologous chromosomes. This may be followed by information exchange between the chromosomes. The information exchange may occur without physical exchange (a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed) (see SDSA pathway in Figure); or by the breaking and rejoining of DNA strands, which forms new molecules of DNA (see DHJ pathway in Figure). Recombination may also occur during mitosis in eukaryotes, where it ordinarily involves the two sister chromosomes formed after chromosomal replication. In this case, new combinations of alleles are not produced since the sister chromosomes are usually identical. In meiosis and mitosis, recombination occurs between similar molecules of DNA (homologs). In meiosis, non-sister homologous chromosomes pair with each other so that recombination characteristically occurs between non-sister homologues. In both meiotic and mitotic cells, recombination between homologous chromosomes is a common mechanism used in DNA repair.
Genetic recombination and recombinational DNA repair also occurs in bacteria and archaea, which use asexual reproduction.

Recombination can be artificially induced in laboratory (in vitro) settings, producing recombinant DNA for purposes including vaccine development.

V(D)J recombination in organisms with an adaptive immune system is a type of site-specific genetic recombination that helps immune cells rapidly diversify to recognize and adapt to new pathogens.

Synapsis

During meiosis, synapsis (the pairing of homologous chromosomes) ordinarily precedes genetic recombination.

Mechanism

Genetic recombination is catalyzed by many different enzymes. Recombinases are key enzymes that catalyse the strand transfer step during recombination. RecA, the chief recombinase found in Escherichia coli, is responsible for the repair of DNA double strand breaks (DSBs). In yeast and other eukaryotic organisms there are two recombinases required for repairing DSBs. The RAD51 protein is required for mitotic and meiotic recombination, whereas the DNA repair protein, DMC1, is specific to meiotic recombination. In the archaea, the ortholog of the bacterial RecA protein is RadA.

Chromosomal crossover

Thomas Hunt Morgan's illustration of crossing over (1916)

In eukaryotes, recombination during meiosis is facilitated by chromosomal crossover. The crossover process leads to offspring having different combinations of genes from those of their parents, and can occasionally produce new chimeric alleles. The shuffling of genes brought about by genetic recombination produces increased genetic variation. It also allows sexually reproducing organisms to avoid Muller's ratchet, in which the genomes of an asexual population accumulate genetic deletions in an irreversible manner.

Chromosomal crossover involves recombination between the paired chromosomes inherited from each of one's parents, generally occurring during meiosis. During prophase I (pachytene stage) the four available chromatids are in tight formation with one another. While in this formation, homologous sites on two chromatids can closely pair with one another, and may exchange genetic information.[1]

Because recombination can occur with small probability at any location along a chromosome, the frequency of recombination between two locations depends on the distance separating them. Therefore, for genes sufficiently distant on the same chromosome, the amount of crossover is high enough to destroy the correlation between alleles.

Tracking the movement of genes resulting from crossovers has proven quite useful to geneticists. Because two genes that are close together are less likely to become separated than genes that are farther apart, geneticists can deduce roughly how far apart two genes are on a chromosome if they know the frequency of the crossovers. Geneticists can also use this method to infer the presence of certain genes. Genes that typically stay together during recombination are said to be linked. One gene in a linked pair can sometimes be used as a marker to deduce the presence of another gene. This is typically used in order to detect the presence of a disease-causing gene.[2]
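The inference described above reduces to a very small calculation: count the offspring carrying parental allele combinations versus recombinant ones, and convert the recombination frequency into map units (for small distances, 1% recombinants is roughly 1 centimorgan). A minimal Python sketch with invented offspring counts:

    # Minimal sketch: estimate how far apart two linked genes are from the fraction
    # of recombinant offspring. The counts below are invented for illustration.
    def recombination_frequency(parental: int, recombinant: int) -> float:
        return recombinant / (parental + recombinant)

    parental_offspring = 452      # offspring with the parental allele combinations
    recombinant_offspring = 48    # offspring with new allele combinations (crossovers)

    rf = recombination_frequency(parental_offspring, recombinant_offspring)
    print(f"recombination frequency: {rf:.1%}  ~  map distance: {rf * 100:.1f} cM")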

Gene conversion

In gene conversion, a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed. Gene conversion occurs at high frequency at the actual site of the recombination event during meiosis. It is a process by which a DNA sequence is copied from one DNA helix (which remains unchanged) to another DNA helix, whose sequence is altered. Gene conversion has often been studied in fungal crosses[3] where the 4 products of individual meioses can be conveniently observed. Gene conversion events can be distinguished as deviations in an individual meiosis from the normal 2:2 segregation pattern (e.g. a 3:1 pattern).
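Detecting such deviations is straightforward once the four products of a meiosis have been scored. A minimal Python sketch classifying invented tetrads as normal 2:2 segregation or as a gene-conversion event:

    from collections import Counter

    # Minimal sketch: classify the four products of a fungal meiosis as normal 2:2
    # segregation or as gene conversion (e.g. 3:1). "A" and "a" are the two alleles.
    def classify_tetrad(tetrad: list[str]) -> str:
        counts = Counter(tetrad)
        if counts.get("A", 0) == 2 and counts.get("a", 0) == 2:
            return "normal 2:2 segregation"
        return f"gene conversion ({counts.get('A', 0)}:{counts.get('a', 0)})"

    print(classify_tetrad(["A", "A", "a", "a"]))  # normal 2:2 segregation
    print(classify_tetrad(["A", "A", "A", "a"]))  # gene conversion (3:1)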

Nonhomologous recombination

Recombination can occur between DNA sequences that contain no sequence homology. This can cause chromosomal translocations, sometimes leading to cancer.

In B cells

B cells of the immune system perform genetic recombination, called immunoglobulin class switching. It is a biological mechanism that changes an antibody from one class to another, for example, from an isotype called IgM to an isotype called IgG.

Genetic engineering

In genetic engineering, recombination can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA. A prime example of such a use of genetic recombination is gene targeting, which can be used to add, delete or otherwise change an organism's genes. This technique is important to biomedical researchers as it allows them to study the effects of specific genes.
Techniques based on genetic recombination are also applied in protein engineering to develop new proteins of biological interest.

Recombinational repair

During both mitosis and meiosis, DNA damages caused by a variety of exogenous agents (e.g. UV light, X-rays, chemical cross-linking agents) can be repaired by homologous recombinational repair (HRR).[4] These findings suggest that DNA damages arising from natural processes, such as exposure to reactive oxygen species that are byproducts of normal metabolism, are also repaired by HRR. In humans and rodents, deficiencies in the gene products necessary for HRR during meiosis cause infertility.[4] In humans, deficiencies in gene products necessary for HRR, such as BRCA1 and BRCA2, increase the risk of cancer (see DNA repair-deficiency disorder).

In bacteria, transformation is a process of gene transfer that ordinarily occurs between individual cells of the same bacterial species. Transformation involves integration of donor DNA into the recipient chromosome by recombination. This process appears to be an adaptation for repairing DNA damages in the recipient chromosome by HRR.[5] Transformation may provide a benefit to pathogenic bacteria by allowing repair of DNA damage, particularly damages that occur in the inflammatory, oxidizing environment associated with infection of a host.

When two or more viruses, each containing lethal genomic damages, infect the same host cell, the virus genomes can often pair with each other and undergo HRR to produce viable progeny. This process, referred to as multiplicity reactivation, has been studied in bacteriophages T4 and lambda,[6] as well as in several pathogenic viruses. In the case of pathogenic viruses, multiplicity reactivation may be an adaptive benefit to the virus since it allows the repair of DNA damages caused by exposure to the oxidizing environment produced during host infection.[5]

Meiotic recombination

Molecular models of meiotic recombination have evolved over the years as relevant evidence accumulated. A major incentive for developing a fundamental understanding of the mechanism of meiotic recombination is that such understanding is crucial for solving the problem of the adaptive function of sex, a major unresolved issue in biology. A recent model that reflects current understanding was presented by Anderson and Sekelsky,[7] and is outlined in the first figure in this article. The figure shows that two of the four chromatids present early in meiosis (prophase I) are paired with each other and able to interact. Recombination, in this version of the model, is initiated by a double-strand break (or gap) shown in the DNA molecule (chromatid) at the top of the first figure in this article. However, other types of DNA damage may also initiate recombination. For instance, an inter-strand cross-link (caused by exposure to a cross-linking agent such as mitomycin C) can be repaired by HRR.

As indicated in the first figure, above, two types of recombinant product are produced. Indicated on the right side is a “crossover” (CO) type, where the flanking regions of the chromosomes are exchanged, and on the left side, a “non-crossover” (NCO) type where the flanking regions are not exchanged. The CO type of recombination involves the intermediate formation of two “Holliday junctions” indicated in the lower right of the figure by two X shaped structures in each of which there is an exchange of single strands between the two participating chromatids. This pathway is labeled in the figure as the DHJ (double-Holliday junction) pathway.

The NCO recombinants (illustrated on the left in the figure) are produced by a process referred to as “synthesis dependent strand annealing” (SDSA). Recombination events of the NCO/SDSA type appear to be more common than the CO/DHJ type.[4] The NCO/SDSA pathway contributes little to genetic variation since the arms of the chromosomes flanking the recombination event remain in the parental configuration. Thus, explanations for the adaptive function of meiosis that focus exclusively on crossing-over are inadequate to explain the majority of recombination events.

Achiasmy and heterochiasmy

Achiasmy is the phenomenon where autosomal recombination is completely absent in one sex of a species. Achiasmatic chromosomal segregation is well documented in male Drosophila melanogaster. Heterochiasmy is the term used to describe recombination rates which differ between the sexes of a species.[8] This sexual dimorphic pattern in recombination rate has been observed in many species. In mammals, females most often have higher rates of recombination. The "Haldane-Huxley rule" states that achiasmy usually occurs in the heterogametic sex.[8]

Molecular genetics


From Wikipedia, the free encyclopedia

Molecular genetics is the field of biology and genetics that studies the structure and function of genes at a molecular level. Molecular genetics employs the methods of genetics and molecular biology to elucidate molecular function and interactions among genes. It is so called to differentiate it from other subfields of genetics such as ecological genetics and population genetics.

Along with determining patterns of inheritance, molecular genetics helps in understanding developmental biology and the genetic mutations that can cause certain types of disease. Through utilizing the methods of genetics and molecular biology, molecular genetics investigates why traits are passed on and how and why some may mutate.

Forward genetics

One of the first tools available to molecular geneticists is the forward genetic screen. The aim of this technique is to identify mutations that produce a certain phenotype. A mutagen is very often used to accelerate this process. Once mutants have been isolated, the mutated genes can be molecularly identified.

Reverse genetics

While forward genetic screens are productive, a more straightforward approach is to simply determine the phenotype that results from mutating a given gene. This is called reverse genetics. In some organisms, such as yeast and mice, it is possible to induce the deletion of a particular gene, creating what's known as a gene "knockout" - the laboratory origin of so-called "knockout mice" for further study. In other words this process involves the creation of transgenic organisms that do not express a gene of interest. Alternative methods of reverse genetic research include the random induction of DNA deletions and subsequent selection for deletions in a gene of interest, as well as the application of RNA interference.

Gene therapy

A mutation in a gene can result in a severe medical condition. A protein encoded by a mutated gene may malfunction and cells that rely on the protein might therefore fail to function properly. This can cause problems for specific tissues or organs, or for the entire body. This might manifest through the course of development (like a cleft palate) or as an abnormal response to stimuli (like a peanut allergy). Conditions related to gene mutations are called genetic disorders. One way to fix such a physiological problem is gene therapy. By adding a corrected copy of the gene, a functional form of the protein can be produced, and affected cells, tissues, and organs may work properly. As opposed to drug-based approaches, gene therapy repairs the underlying genetic defect.
One form of gene therapy is the process of treating or alleviating diseases by genetically modifying the cells of the affected person with a new gene that functions properly. When a human disease gene has been identified, molecular genetics tools can be used to explore the process of the gene in both its normal and mutant states. From there, geneticists engineer a new gene that works correctly. Then the new gene is transferred either in vivo or ex vivo and the body begins to make proteins according to the instructions in that gene. Gene therapy has to be repeated several times for the affected patient to continue to be relieved, however, as repeated cell division and cell death slowly randomize the body's ratio of functional-to-mutant genes.

Currently, gene therapy is still experimental and products are not approved by the U.S. Food and Drug Administration. There have been several setbacks in the last 15 years that have restricted further developments in gene therapy. Although there have been unsuccessful attempts, a growing number of successful gene therapy transfers have furthered the research.

Major diseases that can be treated with gene therapy include viral infections, cancers, and inherited disorders, including immune system disorders.[citation needed]

Classical gene therapy

Classical gene therapy is the approach which delivers genes, via a modified virus or "vector" to the appropriate target cells with a goal of attaining optimal expression of the new, introduced gene. Once inside the patient, the expressed genes are intended to produce a product that the patient lacks, kill diseased cells directly by producing a toxin, or activate the immune system to help the killing of diseased cells.

Nonclassical gene therapy

Nonclassical gene therapy inhibits the expression of genes related to pathogenesis, or corrects a genetic defect and restores normal gene expression.

In vivo gene transfer

During in vivo gene transfer, the genes are transferred directly into the tissue of the patient; this can be the only possible option in patients with tissues where individual cells cannot be cultured in vitro in sufficient numbers (e.g. brain cells). In vivo gene transfer is also necessary when cultured cells cannot be re-implanted in patients effectively.

Ex vivo gene transfer

During ex vivo gene transfer the cells are cultured outside the body and then the genes are transferred into the cells grown in culture. The cells that have been transformed successfully are expanded by cell culture and then introduced into the patient.

Principles for gene transfer

Classical gene therapies usually require efficient transfer of cloned genes into the disease cells so that the introduced genes are expressed at sufficiently high levels to change the patient's physiology. There are several different physicochemical and biological methods that can be used to transfer genes into human cells. The size of the DNA fragments that can be transferred is very limited, and often the transferred gene is not a conventional gene.
Horizontal gene transfer is the transfer of genetic material from one cell to another that is not its offspring. Artificial horizontal gene transfer is a form of genetic engineering.[1]

Techniques in molecular genetics

There are three general techniques used for molecular genetics: amplification, separation and detection, and expression. Specifically used for amplification is the polymerase chain reaction, which is an "indispensable tool in a great variety of applications".[2] In the separation and detection technique, DNA and mRNA are isolated from their cells. In expression studies, a gene is expressed in cells or organisms at a place or time that is not normal for that specific gene.

Amplification

There are other methods for amplification besides the polymerase chain reaction. Cloning DNA in bacteria is also a way to amplify DNA in genes.

Polymerase chain reaction

The main materials used in the polymerase chain reaction are DNA nucleotides, template DNA, primers and Taq polymerase. DNA nucleotides are the building blocks for the new DNA, the template DNA is the specific sequence being amplified, primers are complementary nucleotides that bind on either side of the template DNA, and Taq polymerase is a heat-stable enzyme that drives the production of new DNA at the high temperatures needed for the reaction.[3] This technique does not need to use living bacteria or cells; all that is needed is the base sequence of the DNA and the materials listed above.
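The power of PCR comes from its exponential arithmetic: in an ideal reaction every cycle doubles the number of copies of the template. The sketch below just computes that doubling; real reactions eventually plateau as primers and nucleotides are used up, which is ignored here.

    # Minimal sketch: ideal PCR doubles the template each cycle, so after n cycles
    # there are roughly initial_copies * 2**n copies (plateau effects ignored).
    def pcr_copies(initial_copies: int, cycles: int) -> int:
        return initial_copies * 2 ** cycles

    for cycles in (10, 20, 30):
        print(f"after {cycles} cycles: {pcr_copies(1, cycles):,} copies from a single template")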

Cloning DNA in bacteria

The word cloning for this type of amplification entails making multiple identical copies of a sequence of DNA. The target DNA sequence is inserted into a cloning vector. Because this vector originates from a self-replicating virus, plasmid, or higher-organism cell, when DNA of the appropriate size is inserted the "target and vector DNA fragments are then ligated"[2] to create a recombinant DNA molecule. The recombinant DNA molecules are then put into a bacterial strain (usually E. coli), which produces several identical copies by transformation. Transformation is the DNA uptake mechanism possessed by bacteria. However, only one recombinant DNA molecule can be cloned within a single bacterial cell, so each clone carries just one DNA insert.
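A minimal sketch of the ligation step described above, under simplifying assumptions: the vector is opened at a single restriction site (cut in the middle of the site rather than at the true staggered cut) and the target fragment is spliced in, giving the recombinant molecule that would then be transformed into E. coli. The sequences are toy placeholders.

    # Minimal sketch: splice a target fragment into a vector at a restriction site,
    # producing a recombinant DNA molecule. Sequences are invented placeholders,
    # and the cut position is simplified (real EcoRI leaves staggered ends).
    def ligate(vector: str, insert: str, site: str = "GAATTC") -> str:
        cut = vector.find(site)
        if cut == -1:
            raise ValueError("restriction site not found in vector")
        cut += len(site) // 2
        return vector[:cut] + insert + vector[cut:]

    vector_dna = "CCCTTTGAATTCAAAGGG"   # hypothetical plasmid fragment containing one EcoRI site
    target_dna = "ATGCATGCATGC"         # hypothetical target sequence to be cloned
    print(ligate(vector_dna, target_dna))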

Separation and detection

In separation and detection, DNA and mRNA are isolated from cells (the separation) and then detected once isolated. Cell cultures are also grown to provide a constant supply of cells ready for isolation.

Cell cultures

A cell culture for molecular genetics is a culture that is grown in artificial conditions. Some cell types, such as skin cells, grow well in culture, but other cells are not as productive in culture. There are different techniques for each type of cell, some only recently developed to foster growth in stem and nerve cells. Cultures for molecular genetics are frozen in order to preserve all copies of the gene specimen and thawed only when needed. This allows for a steady supply of cells.

DNA isolation

DNA isolation extracts DNA from a cell in a pure form. First, the DNA is separated from cellular components such as proteins, RNA, and lipids. This is done by placing the chosen cells in a tube with a solution that mechanically and chemically breaks the cells open. This solution contains enzymes, chemicals, and salts that break down the cells while leaving the DNA intact. It contains enzymes to dissolve proteins, chemicals to destroy all RNA present, and salts to help pull the DNA out of the solution.

Next, the DNA is separated from the solution by being spun in a centrifuge, which allows the DNA to collect in the bottom of the tube. After this cycle in the centrifuge the solution is poured off and the DNA is resuspended in a second solution that makes the DNA easy to work with in the future.

This results in a concentrated DNA sample that contains thousands of copies of each gene. For large scale projects such as sequencing the human genome, all this work is done by robots.

mRNA isolation

Expressed DNA that codes for the synthesis of a protein is the final goal for scientists, and this expressed DNA is obtained by isolating mRNA (messenger RNA). First, laboratories exploit a normal cellular modification of mRNA that adds up to 200 adenine nucleotides to the end of the molecule (the poly(A) tail). Once this has been added, the cell is ruptured and its contents are exposed to synthetic beads that are coated with strings of thymine nucleotides.
Because adenine and thymine pair together in DNA, the poly(A) tail and the synthetic beads are attracted to one another, and once they bind the cell components can be washed away without removing the mRNA. Once the mRNA has been isolated, reverse transcriptase is employed to convert it to single-stranded DNA, from which a stable double-stranded DNA is produced using DNA polymerase. Complementary DNA (cDNA) is much more stable than mRNA and so, once the double-stranded DNA has been produced, it represents the expressed DNA sequence scientists look for.[4]
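The two steps described above, capture by the poly(A) tail and conversion to cDNA, can be mimicked in a few lines. The sketch below keeps only molecules ending in a run of adenines (standing in for oligo-dT bead capture) and produces the complementary first DNA strand, as reverse transcriptase would; the RNA sequences are invented examples.

    # Minimal sketch: select poly(A)-tailed mRNAs (mimicking oligo-dT capture) and
    # reverse-complement them into first-strand cDNA. Sequences are invented.
    COMPLEMENT = str.maketrans("ACGU", "TGCA")

    def has_poly_a_tail(rna: str, min_length: int = 6) -> bool:
        return rna.endswith("A" * min_length)

    def first_strand_cdna(rna: str) -> str:
        """Reverse-complement the mRNA (U pairs with A) to get the first cDNA strand."""
        return rna.translate(COMPLEMENT)[::-1]

    molecules = [
        "AUGGCCAUUGUAAAAAAAA",   # mRNA with a poly(A) tail -> captured
        "AUGCGGCUAG",            # no tail -> washed away
    ]
    for rna in molecules:
        if has_poly_a_tail(rna):
            print("captured:", rna, "-> cDNA:", first_strand_cdna(rna))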

The Human Genome Project

The Human Genome Project is a molecular genetics project that began in the 1990s and was projected to take fifteen years to complete. However, because of technological advances the progress of the project was advanced and the project finished in 2003, taking only thirteen years. The project was started by the U.S. Department of Energy and the National Institutes of Health in an effort to reach six set goals. These goals included:
  1. identifying 20,000 to 25,000 genes in human DNA (although initial estimates were approximately 100,000 genes),
  2. determining sequences of chemical base pairs in human DNA,
  3. storing all found information into databases,
  4. improving the tools used for data analysis,
  5. transferring technologies to private sectors, and
  6. addressing the ethical, legal, and social issues (ELSI) that may arise from the projects.[5]
The project was worked on by eighteen different countries including the United States, Japan, France, Germany, and the United Kingdom. The collaborative effort resulted in the discovery of the many benefits of molecular genetics.
Discoveries such as molecular medicine, new energy sources and environmental applications, DNA forensics, and livestock breeding, are only a few of the benefits that molecular genetics can provide.[5]

The Islamic State, apostasy and ‘real Muslims’

Many people today deny that the violent and fanatical Islamic State or ISIL/ISIS is actually Islamic. Appalled by its horrendous crimes against humanity, political and religious leaders and members of the public alike have been scrambling to claim that “the Islamic State isn’t really Islamic.” Muslims who deny ISIL’s members as “real Muslims” are engaging in what is called in Islam takfir, the act of one Muslim accusing another Muslim of apostasy or being an “infidel.” Those Muslims who make such accusations are called takfiri, as described in the Wiki article on “Takfir“:
In principle the only group authorised to declare a member of an Abrahamic religion a kafir (“infidel”) is the ulema, and this is only done once all the prescribed legal precautions have been taken. However, a growing number of splinter Wahhabist/Salafist groups, classified by some scholars as Salafi-Takfiris, have split from the orthodox method of establishing takfir through the processes of the Sharia law. They have reserved the right to declare apostasy against any Muslim, in addition to non-Muslims.
Takfiris…condone acts of violence as legitimate methods of achieving religious or political goals…. A takfiri’s mission is to re-create the Caliphate according to a literal interpretation of the Qur’an.
The latter paragraph describes the Islamic State (ISIL/ISIS) perfectly, as does the next:
Takfiris believe in Islam strictly according to their interpretation of Muhammad’s and his companions’ actions and statements, and do not accept any deviation from their path; they reject any reform or change to their interpretation of religion as it was revealed in the time of the prophet. Those who change their religion from Islam to any other way of life, or deny any of the fundamental foundations of Islam, or who worship, follow or obey anything other than Islam, become those upon whom the takfiris declare the “takfir”, calling them apostates from Islam and so no longer Muslim.
Will anyone declaring the Islamic State is “not really Islamic” stand before these takfiris and accuse them of apostasy? Will you sentence them all to death?
According to at least one source (Trevor Stanley), the precedent "for the declaration of takfir against a leader" came from Medieval Islamic scholar Taqi al-Din Ibn Taymiyyah, who issued a famous fatwa declaring jihad against invading Mongols. This was not because they were invading but because they were apostates, apostasy from Islam being punishable by death.
Like jihadis, takfiri groups advocate armed struggle against the secular regime.
Moreover, suicide bombing or other violent act that brings about one’s own death is a legitimate practice of takfiris such as the Islamic State members:
Takfiris believe that one who deliberately kills himself whilst attempting to kill enemies is a martyr (shahid) and therefore goes straight to heaven.
So much for “peaceful Islam” and the “greater jihad,” which represents mere “inner struggle.” Takfiri Islamists cannot live peacefully with secularism and nonbelief. They must fight it – and violently, engaging in “lesser jihad” or “struggle against those who do not believe in the Islamic God (Allah) and do not acknowledge the submission to Muslims, and so is often translated as ‘Holy War’…”

The takfiris’ violence is thus not a “defense,” except insofar as they regard secularism or nonbelief itself as an offense. Hence, all it takes for these fanatics to turn violent is the mere existence of nonbelievers.

Being a takfiri (or jihadi) Muslim therefore means a perpetual declaration of war against nonbelievers, regardless of whether the latter have done anything overtly against the ummah, the global Muslim community.

Thus, the “blowback” excuse proffered by “progressive liberals” and “leftists” for Islamist aggression is fallacious. The only “provocation” these violent fanatics require is non-adherence to their literal, fundamentalist brand of Islam, which is why they are currently engaging in wholesale genocide not only of non-Muslims but also of other Muslims, whom these takfiris consider to be kuffar (infidels/apostates) and munafiqun or “hypocrites.”

Baghdadi, the Islamic State caliph, isn’t a “real Muslim”?

Who is a ‘real Muslim?’

If the Islamic State is not really Islamic, we encounter the problem of defining who is a “real Muslim.” It is widely claimed that there are at least 1.5 billion Muslims in the world, a figure often held up by Muslim devotees as a show of strength and for purposes of intimidation. However, over the years I have been told repeatedly by many Muslims and ex-Muslims that MILLIONS of human beings forced to call themselves “Muslim” would like to leave Islam but cannot, for fear of punishment, possibly including the death penalty for apostasy. In any event, we can remove millions of potential apostates from that figure of 1.5 billion.

Some Muslims also claim that the fundamentalist Saudis do not represent “real” Islam and that Saudi Arabia is not “really” a Muslim country, even though its citizenry is 100% Muslim. If the Saudis are not “real” Muslims, then we likewise must remove some 30 million people from the oft-cited figure of 1.5 billion Muslims in the world.

Next come all others influenced by Saudi-style Wahhabism and Salafism, such as the Kuwaitis, Yemenis, UAE citizens and many others globally – that’s hundreds of millions more, potentially most of the Sunnis, in fact. Remove them from the global Muslim total.

How about the Shia Muslims of Syria, Iraq, Iran and other places? That would be possibly 200 million more people removed from the Muslim count, since they are not “real” Muslims according to the Sunnis.

The Ahmadis? Remove their millions from the count. Ditto with Boko Haram, al Qaeda, Jamaat ul-Fuqra, al Shabaab, the Islamic State and thousands of other heretical or extremist groups, totaling millions more.

By the time we arrive at “real” Muslims according to everyone’s standards, there are not many left in the world, far fewer than 1.5 billion.

By these standards, we cannot say that Islam is the second largest religion in the world, since Buddhists and Hindus would dwarf these tens to a few hundred million “real” Muslims.

In any event, declaring all of these Muslims to be apostates is the act of takfir, and one must be prepared to take responsibility for this declaration, which is not supposed to be made lightly or by lay persons and which can bring with it the death penalty.

Further Reading

http://en.wikipedia.org/wiki/Takfiri
What is the ummah?
The Truth About Islam
Tom Holland: We must not deny the religious roots of Islamic State

Reza Aslan on ISIS, ISIL, Islamic State
Islamic State doctrine comes from the Quran

Ex-Muslim imam: ‘I wasted a significant part of my life believing in this load of crap’

Islamic crescent moon, star and sword 
(Editor: An ex-Muslim from Fiji posted the following comments on an article here, The Islamic State, apostasy and ‘real Muslims.’)

I am an ex-Muslim with a seriously long axe to grind. I was born into a Muslim home. As the eldest child I was supposed to carry the religious ideals of the home, and before I knew how to say “Papa,” I was sent to a Quranic teacher who taught me how to read in Arabic and recite the Quran. I was also taught how to pray etc.

I live in Fiji, a largely secular state where Western education is compulsory and widely available. Thank heavens for this fact, as I was also sent to a normal school.

By the time I was 13 years old, I was brainwashed into believing that the sun literally shined out of the butcher of Mecca (Mohammed’s) ass. I was an Imam by then and led about 50 grown men in prayer.

I was taught that Islam was “it,” and everyone else was either to be converted or to be seen as almost subhuman—this, despite the fact that Fiji Muslims are regarded as very moderate.

Fortunately for me, I have an inquiring mind, and, by the time I started high school, many questions regarding my faith started coming to mind. As I started earning for myself and moved away from the controls of my parents, these questions started bothering me more and more.

I eventually sat down and read the Quran and its supporting Hadiths in English to understand them better. By the end of it all, I was so pissed that I had wasted a significant part of my life believing in this load of crap. For the life of me, I could not understand how anyone in his or her right mind could follow someone like Mohammed, let alone treat him as almost a demi-God.

Quite frankly, I was ashamed, and a part of me still is for being such a fool. I soon started researching religion, and it was not long before I found this website and many others like it, which helped to clear a lot of the questions that I had.

While I am not an atheist, as I believe there are too many unexplained phenomena in this world to discount a creator, I will happily admit that I simply do not know. What I do know is that the sooner the organized stupidity some call religion is removed from this world, the better.

“There is nothing rational or peaceful about Islam, and anybody saying there is is deluded and in need of serious help.”

I do hold a particular disdain for the religion of Islam, as I believe that it is as evil as it gets. Islam is, as I like to put it, Christianity on steroids. There is nothing rational or peaceful about it, and anybody saying it is is deluded and in need of serious help.

—Reaaz Ali

Intelligent robots must uphold human rights


The common fear is that intelligent machines will turn against humans. But who will save the robots from each other, and from us, asks Hutan Ashrafian.


There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society.
Microsoft co-founder Bill Gates and physicist Stephen Hawking have in recent months warned of the dangers of intelligent robots becoming too powerful for humans to control. The ethical conundrum of intelligent machines and how they relate to humans has long been a theme of science fiction, and has been vividly portrayed in films such as 1982's Blade Runner and this year's Ex Machina.

Academic and fictional analyses of AIs tend to focus on human–robot interactions, asking questions such as: would robots make our lives easier? Would they be dangerous? And could they ever pose a threat to humankind?

These questions ignore one crucial point. We must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators. For example, if we were to allow sentient machines to commit injustices on one another — even if these 'crimes' did not have a direct impact on human welfare — this might reflect poorly on our own humanity. Such philosophical deliberations have paved the way for the concept of 'machine rights'.

Most discussions on robot development draw on the Three Laws of Robotics devised by science-fiction writer Isaac Asimov: robots may not injure humans (or through inaction allow them to come to harm); robots must obey human orders; and robots must protect their own existence. But these rules say nothing about how robots should treat each other. It would be unreasonable for a robot to uphold human rights and yet ignore the rights of another sentient thinking machine.

Animals that exhibit thinking behaviour are already afforded rights and protection, and civilized society shows contempt for animal fights that are set up for human entertainment. It follows that sentient machines that are potentially much more intelligent than animals should not be made to fight for entertainment.
“Intelligent robots remain science fiction, but it is not too early to take these issues seriously.”
Of course, military robots are already being deployed in conflicts. But outside legitimate warfare, forcing AIs and robots into conflict, or mistreating them, would be detrimental to humankind's moral, ethical and psychological well-being.

Intelligent robots remain science fiction, but it is not too early to take these issues seriously. In the United Kingdom, for example, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council have already introduced a set of principles for robot designers. These reinforce the position that robots are manufactured products, so that “humans, not robots, are responsible agents”.

Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering.

National and international technological policies should introduce AIonAI concepts into current programmes aimed at developing safe AIs. We must engage with educational activities and research, and continue to raise philosophical awareness. There could even be an annual AIonAI prize for the 'most altruistically designed AI'.

Social scientists and philosophers should be linked to cutting-edge robotics and computer research. Technological funders could support ethical studies on AIonAI concepts in addition to funding AI development. Medical funders such as the Wellcome Trust follow this model already: supporting research on both cutting-edge healthcare and medical ethics and history.

Current and future AI and robotic research communities need to have sustained exposure to the ideas of AIonAI. Conferences focused on AIonAI issues could be a hub of research, guidelines and policy statements. The next generation of robotic engineers and AI researchers can also be galvanized to adopt AIonAI principles through hybrid degree courses. For example, many people who hope to get into UK politics take a course in PPE (politics, philosophy and economics) — an equivalent course for students with ambitions in robotics and AI could be CEP (computer science, engineering and philosophy).

We should extend Asimov's Three Laws of Robotics to support work on AIonAI interaction. I suggest a fourth law: all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood.

Do not underestimate the likelihood of artificial thinking machines. Humankind is arriving at the horizon of the birth of a new intelligent race. Whether or not this intelligence is 'artificial' does not detract from the issue that the new digital populace will deserve moral dignity and rights, and a new law to protect them.
Nature 519, 391 (2015). doi:10.1038/519391a

Fact or Fiction?: Dark Matter Killed the Dinosaurs

A new out-of-this-world theory links mass extinctions with exotic astrophysics and galactic architecture
An image of Manicouagan Crater as seen from the International Space Station

The 100-kilometer-wide Manicouagan Crater in Canada was produced by a 5-kilometer-wide space rock smacking into Earth about 215 million years ago. A similar, larger impact some 66 million years ago is thought to have wiped out the dinosaurs. Some researchers believe giant impacts occur cyclically, driven by our solar system's movement through a disk of dark matter in the Milky Way.
Credit: NASA
Every once in a great while, something almost unspeakable happens to Earth. Some terrible force reaches out and tears the tree of life limb from limb. In a geological instant, countless creatures perish and entire lineages simply cease to exist.

The most famous of these mass extinctions happened about 66 million years ago, when the dinosaurs died out in the planet-wide environmental disruption that followed a mountain-sized space rock walloping Earth. We can still see the scar from the impact today as a nearly 200-kilometer-wide crater in the Yucatan Peninsula.

But this is only one of the “Big Five” cataclysmic mass extinctions recognized by paleontologists, and not even the worst. Some 252 million years ago, the Permian-Triassic mass extinction wiped out an estimated nine of every ten species on the planet—scientists call this one “the Great Dying.” In addition to the Big Five, evidence exists for dozens of other mass extinction events that were smaller and less severe. Not all of these are conclusively related to giant impacts; some are linked instead to enormous upticks in volcanic activity worldwide that caused dramatic, disruptive climate change and habitat loss. Researchers suspect that many—perhaps most—mass extinctions come about through the stresses caused by overlapping events, such as a giant impact paired with an erupting supervolcano. Maybe the worst mass extinctions are simply matters of poor timing, cases of planetary bad luck.

Or maybe mass extinctions are not matters of chaotic chance at all. Perhaps they are in fact predictable and certain, like clockwork. Some researchers have speculated as much because of curious patterns they perceive in giant impacts, volcanic activity and biodiversity declines.

In the early 1980s, the University of Chicago paleontologists David Raup and Jack Sepkoski found evidence for a 26-million-year pattern of mass extinction in the fossil record since the Great Dying of the Permian-Triassic. This 26-million-year periodicity overlaps and closely aligns with the Big Five extinctions, as well as several others. In subsequent work over the years, several other researchers examining Earth’s geological record have replicated Raup and Sepkoski’s original conclusions, finding a mass-extinction periodicity of roughly 30 million years that extends back half a billion years. Some of those same researchers have also claimed to detect similar, aligned periodicities in impact cratering and in volcanic activity. Every 30 million years, give or take a few million, it seems the stars align to make all life on Earth suffer. Yet for want of a clear mechanism linking all these different phenomena together, the idea has languished for years at the scientific fringe.
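As a rough illustration of how such a periodicity can be hunted for, the sketch below folds a set of event dates at a range of trial periods and measures how strongly the events cluster in phase (a Rayleigh-style statistic). The dates and the period range are illustrative assumptions only; Raup and Sepkoski worked from a far larger, stage-level record of extinction intensities.

```python
import numpy as np

# Illustrative ages (millions of years ago) of a few extinction pulses.
# These numbers are assumptions for demonstration, not the data set behind
# the published periodicity claims.
event_ages = np.array([66.0, 94.0, 145.0, 183.0, 201.0, 252.0])

trial_periods = np.linspace(20.0, 40.0, 401)  # trial periods in Myr

def rayleigh_power(ages, period):
    """How tightly events cluster in phase when folded at `period` (0 = uniform, 1 = perfectly aligned)."""
    phases = 2.0 * np.pi * ages / period
    return np.abs(np.exp(1j * phases).mean())

power = np.array([rayleigh_power(event_ages, p) for p in trial_periods])
best = trial_periods[np.argmax(power)]
print(f"Best-fitting trial period: {best:.1f} Myr (peak clustering {power.max():.2f})")
```

A real analysis would also have to test how easily such clustering arises by chance, which is exactly where much of the controversy described later in this article lies.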

It may not be a fringe idea much longer. According to a new theory from Michael Rampino, a geoscientist at New York University, dark matter may be the missing link—the mechanism behind Earth’s mysterious multi-million-year cycles of giant impacts, massive volcanism and planetary death.

Dark matter is an invisible substance that scarcely interacts with the rest of the universe through any force other than gravity. Whatever dark matter is, astronomers have inferred there is quite a lot of it by watching how large-scale structures respond to its gravitational pull. Dark matter seems to constitute almost 85 percent of all the mass in the universe, and it is thought to be the cosmic scaffolding upon which galaxies coalesce. Many theories, in fact, call for dark matter concentrating in the central planes of spiral galaxies such as the Milky Way. Our solar system, slowly orbiting the galactic core, periodically moves up and down through this plane like a cork bobbing in water. The period of our bobbing solar system is thought to be roughly 30 million years. Sound familiar?
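Where does that rough figure come from? A back-of-the-envelope sketch: if the disk near the Sun is treated as a uniform slab of density ρ, a star bobbing through it undergoes simple harmonic motion with vertical acceleration z'' = −4πGρz, so the full up-and-down period is T = 2π/√(4πGρ). The density plugged in below is an assumed, commonly quoted ballpark for the local mid-plane density, not a number taken from the article.

```python
import numpy as np

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
PC    = 3.086e16         # m
YEAR  = 3.156e7          # s

# Assumed local mid-plane mass density (stars, gas, plus any dark component).
rho = 0.15 * M_SUN / PC**3

# Uniform-slab approximation: vertical SHM with angular frequency sqrt(4*pi*G*rho).
period_yr   = 2 * np.pi / np.sqrt(4 * np.pi * G * rho) / YEAR
crossing_yr = period_yr / 2    # the mid-plane is crossed twice per full oscillation

print(f"full vertical period     ~{period_yr/1e6:.0f} Myr")
print(f"mid-plane crossing every ~{crossing_yr/1e6:.0f} Myr")
```

With that input the slab model gives a mid-plane crossing roughly every 34 million years, in the same ballpark as the figure quoted above; extra mass concentrated in the plane, such as a dark-matter disk, would shorten it.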

In 2014, the Harvard University physicists Lisa Randall and Matthew Reece published a study showing how the gravitational pull from a thin disk of dark matter in the galactic plane could perturb the orbits of comets as our solar system passed through, periodically peppering Earth with giant impacts. To reliably knock the far-out comets down into Earth-crossing orbits, the dark-matter disk would need to be thin, about one-tenth the thickness of the Milky Way’s visible disk of stars, and with a density of at least one solar mass per square light-year.

Randall and Reece’s theory is broadly consistent with dark matter’s plausible properties, but the researchers only used it to explain the periodicity of impacts. In his new study, published in the Monthly Notices of the Royal Astronomical Society, Rampino suggests dark matter can explain the presumed periodicity of volcanism, too.

If dark matter forms dense clumps rather than being uniformly spread throughout the disk, Rampino says, then Earth could sweep up and capture large numbers of dark-matter particles in its gravitational field as it passes through the disk. The particles would fall to Earth’s core, where they could reach sufficient densities to annihilate each other, heating the core by hundreds of degrees during the solar system’s crossing of the galactic plane. For millions of years, the overheated core would belch gigantic plumes of magma up toward the surface, birthing gigantic volcanic eruptions that rip apart continents, alter sea levels and change the climate. All the while, comets perturbed by the solar system’s passage through the dark-matter disk would still be pounding the planet. Death would come from above and below in a potent one-two punch that would set off waves of mass extinction.

If true, Rampino’s hypothesis would have profound implications not only for the past and future of life on Earth, but for planetary science as a whole. Scientists would be forced to consider the histories of Earth and the solar system’s other rocky worlds in a galactic context, where the Milky Way’s invisible dark-matter architecture is the true cause of key events in a planet’s life. “Most geologists will not like this, as it might mean that astrophysics trumps geology as the underlying driver for geological changes,” Rampino says. “But geology, or let us say planetary science, is really a subfield of astrophysics, isn't it?”

The key question, of course, is whether some of the Milky Way’s dark matter actually exists in a thin, clumpy disk. Fortunately, within a decade researchers should have a wealth of data in hand that could disprove or validate Rampino’s controversial idea. Launched in 2013 to map the motions of a billion stars in the Milky Way, the European Space Agency’s Gaia spacecraft will help pin down the dimensions of any dark-matter disk and how often our solar system oscillates through it. The discovery and study of additional ancient craters could also confirm or refute the postulated periodicity of giant impacts and help determine how many were caused by comets rather than asteroids. If Gaia’s results reveal no signs of a thin, dense dark-matter disk, or if studies show that more craters are caused by rocky asteroids from the inner solar system than by icy comets, Rampino and other researchers will probably have to go back to the drawing board.

Alternatively, evidence for or against dark-matter-driven mass extinctions could come from extragalactic astronomy and even from particle physics itself. Recent observations of small satellite galaxies orbiting Andromeda, the Milky Way’s nearest neighboring spiral galaxy, tentatively support the existence of a dark-matter disk there, suggesting that our galaxy probably has one, too.

Even so, the University of Michigan astrophysicist Katherine Freese, one of the first researchers to rigorously examine how dark-matter annihilation could occur inside Earth, notes that Rampino’s scenario would demand “very special dark matter.” Specifically, the dark matter would have to weakly interact with itself to dissipate enough energy to cool and settle into a placid, very thin disk. According to Lisa Randall, who co-authored the 2014 paper suggesting dark matter might drive extinctions through giant impacts, several plausible theoretical models predict such a disk, but very few allow the dense intra-disk clumps required by Rampino’s hypothesis.

“In most models of dark matter, these clumps don’t exist,” Randall says. “Even if they do exist and are distributed in a disk, we don’t see that they will pass through the Earth sufficiently often. After all, clumps are not space-filling—there is room in between for the solar system to pass through.” Further, Randall notes that if dense clumps do exist in a thin disk of dark matter, the dark matter should occasionally annihilate in the clumps to produce gamma rays. “It’s not clear why we wouldn’t have already observed that gamma-ray signal,” she says.

There is also the possibility that theories of thin, self-interacting dark-matter disks could be swept away entirely if and when one of the many dark-matter detection experiments now underway finally spots its quarry and pins down the particulate identity of this elusive cosmic substance.

Or, perhaps most likely, the purported periodicities of mass extinctions, impacts and flood basalts are not as clear-cut and precisely aligned as might be hoped. Coryn Bailer-Jones, an astrophysicist at the Max Planck Institute for Astronomy in Heidelberg, Germany, who has performed statistical analyses of impact-cratering rates as well as of mass extinctions, is skeptical that either exhibits periodicity at all.

The problem is that the available data are not very good and carry immense uncertainties. Impact-crater statistics for Earth are notoriously variable and suspect. Their supposed periodicity fluctuates greatly depending on the minimum sizes of evaluated craters, and craters can be erased, obscured or even mimicked by a variety of geological processes. According to Bailer-Jones, biodiversity statistics from the fossil record are still more problematic, due to an even greater number of complex mechanisms dictating how, when and where fossils of different varieties of organisms are formed and preserved. Furthermore, Bailer-Jones notes, the solar system’s up-and-down oscillation through the Milky Way still has multi-million-year uncertainties.

While claims of overlapping, aligning periodicities within all this data could be significant and valid, Bailer-Jones says, in all likelihood they are instead the product of an all-too-human tendency to project order and logic onto little more than chaotic noise. Periodicity proponents have strenuously disagreed, and the heated, back-and-forth battle is still ongoing in the scientific literature.

“I think it’s interesting and worthwhile to ask these questions,” Bailer-Jones says. “But we must be careful not to give the impression that we actually have a problem that needs dark matter as a mechanism for mass extinction. It’s fine to talk about the mechanism, but the supposed periodicities—or rather, lack thereof—don’t provide any evidence for it.”

Rampino and others who see periodicities in fossils and craters freely acknowledge that their conclusions are speculative and that some of their statistics are presently underwhelming. Yet the telltale hints of order they glimpse in shattered rocks and scattered fossils still fuel their search for some final puzzle piece, some crucial evidence that will at last make everything cohere and confirm what could be the greatest cycle of life and death ever discovered.

One way or another, time will eventually tell. On geological timescales, our oscillating, bobbing solar system has recently crossed the mid-plane of the Milky Way, passing through the very region where a dark-matter disk would exist. Perhaps the faraway comets feel that gentle tug even now, and Earth’s core is already sizzling with dark-matter annihilation. Confirmation may be as close as the next spate of extinction-level cometary impacts or supervolcanic eruptions. Keep watching the skies—and the ground right beneath your feet.

New Reactor Paves the Way for Efficiently Producing Fuel from Sunlight

Original link:  http://www.caltech.edu/content/new-reactor-paves-way-efficiently-producing-fuel-sunlight

PASADENA, Calif.—Using a common metal most famously found in self-cleaning ovens, Sossina Haile hopes to change our energy future. The metal is cerium oxide—or ceria—and it is the centerpiece of a promising new technology developed by Haile and her colleagues that concentrates solar energy and uses it to efficiently convert carbon dioxide and water into fuels.

Solar energy has long been touted as the solution to our energy woes, but while it is plentiful and free, it can't be bottled up and transported from sunny locations to the drearier—but more energy-hungry—parts of the world. The process developed by Haile—a professor of materials science and chemical engineering at the California Institute of Technology (Caltech)—and her colleagues could make that possible.

The researchers designed and built a two-foot-tall prototype reactor that has a quartz window and a cavity that absorbs concentrated sunlight. The concentrator works "like the magnifying glass you used as a kid" to focus the sun's rays, says Haile.

At the heart of the reactor is a cylindrical lining of ceria. Ceria—a metal oxide that is commonly embedded in the walls of self-cleaning ovens, where it catalyzes reactions that decompose food and other stuck-on gunk—propels the solar-driven reactions. The reactor takes advantage of ceria's ability to "exhale" oxygen from its crystalline framework at very high temperatures and then "inhale" oxygen back in at lower temperatures.

"What is special about the material is that it doesn't release all of the oxygen. That helps to leave the framework of the material intact as oxygen leaves," Haile explains. "When we cool it back down, the material's thermodynamically preferred state is to pull oxygen back into the structure."
The ETH-Caltech solar reactor for producing H2 and CO from H2O and CO2 via the two-step thermochemical cycle with ceria redox reactions.
Specifically, the inhaled oxygen is stripped off of carbon dioxide (CO2) and/or water (H2O) gas molecules that are pumped into the reactor, producing carbon monoxide (CO) and/or hydrogen gas (H2). H2 can be used to fuel hydrogen fuel cells; CO, combined with H2, can be used to create synthetic gas, or "syngas," which is the precursor to liquid hydrocarbon fuels. Adding other catalysts to the gas mixture, meanwhile, produces methane. And once the ceria is oxygenated to full capacity, it can be heated back up again, and the cycle can begin anew.
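Written out schematically, the two-step cycle described above (with δ denoting the small degree of oxygen deficiency in the ceria lattice) looks like this:

Solar reduction (high temperature):  CeO2 → CeO2−δ + (δ/2) O2
Fuel production (lower temperature): CeO2−δ + δ H2O → CeO2 + δ H2
                                     CeO2−δ + δ CO2 → CeO2 + δ CO

The ceria itself is never consumed; it simply shuttles oxygen between the two steps, which is why the same lining can be cycled over and over.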

For all of this to work, the temperatures in the reactor have to be very high—nearly 3,000 degrees Fahrenheit. At Caltech, Haile and her students achieved such temperatures using electrical furnaces. But for a real-world test, she says, "we needed to use photons, so we went to Switzerland." At the Paul Scherrer Institute's High-Flux Solar Simulator, the researchers and their collaborators—led by Aldo Steinfeld of the institute's Solar Technology Laboratory—installed the reactor on a large solar simulator capable of delivering the heat of 1,500 suns.

In experiments conducted last spring, Haile and her colleagues attained the best rates for CO2 dissociation ever achieved, "by orders of magnitude," she says. The efficiency of the reactor was uncommonly high for CO2 splitting, in part, she says, "because we're using the whole solar spectrum, and not just particular wavelengths." And unlike in electrolysis, the rate is not limited by the low solubility of CO2 in water. Furthermore, Haile says, the high operating temperatures of the reactor mean that fast catalysis is possible, without the need for expensive and rare metal catalysts (cerium, in fact, is the most common of the rare earth metals—about as abundant as copper).

In the short term, Haile and her colleagues plan to tinker with the ceria formulation so that the reaction temperature can be lowered, and to re-engineer the reactor, to improve its efficiency. Currently, the system harnesses less than 1% of the solar energy it receives, with most of the energy lost as heat through the reactor's walls or by re-radiation through the quartz window. "When we designed the reactor, we didn't do much to control these losses," says Haile. Thermodynamic modeling by lead author and former Caltech graduate student William Chueh suggests that efficiencies of 15% or higher are possible.
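For reference, the fraction "harnessed" here is the solar-to-fuel efficiency: the chemical energy stored in the fuel divided by the solar energy entering the reactor. The numbers in the snippet below are hypothetical placeholders chosen only to show the bookkeeping, not measurements reported in the paper.

```python
def solar_to_fuel_efficiency(fuel_rate_mol_per_s, fuel_heating_value_j_per_mol, solar_power_w):
    """Fraction of incident solar power that ends up stored as chemical energy in the fuel."""
    return fuel_rate_mol_per_s * fuel_heating_value_j_per_mol / solar_power_w

# Hypothetical example: 0.05 mmol/s of CO (heat of combustion ~283 kJ/mol)
# produced from 1.9 kW of concentrated sunlight.
eta = solar_to_fuel_efficiency(5e-5, 283e3, 1.9e3)
print(f"solar-to-fuel efficiency ~{eta:.1%}")   # ~0.7%
```

Cutting the heat lost through the reactor's walls and the quartz window is what the planned re-engineering targets.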

Ultimately, Haile says, the process could be adopted in large-scale energy plants, allowing solar-derived power to be reliably available during the day and night. The CO2 emitted by vehicles could be collected and converted to fuel, "but that is difficult," she says. A more realistic scenario might be to take the CO2 emissions from coal-powered electric plants and convert them to transportation fuels. "You'd effectively be using the carbon twice," Haile explains. Alternatively, she says, the reactor could be used in a "zero CO2 emissions" cycle: H2O and CO2 would be converted to methane, which would then fuel electricity-producing power plants that generate more CO2 and H2O, keeping the process going.

A paper about the work, "High-Flux Solar-Driven Thermochemical Dissociation of CO2 and H2O Using Nonstoichiometric Ceria," was published in the December 23 issue of Science. The work was funded by the National Science Foundation, the State of Minnesota Initiative for Renewable Energy and the Environment, and the Swiss National Science Foundation.
 
Written by Kathy Svitil

Body language

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Body_lang...