
Monday, March 30, 2015

Genetic recombination


From Wikipedia, the free encyclopedia


A current model of meiotic recombination, initiated by a double-strand break or gap, followed by pairing with a homologous chromosome and strand invasion to initiate the recombinational repair process. Repair of the gap can lead to crossover (CO) or non-crossover (NCO) of the flanking regions. CO recombination is thought to occur by the Double Holliday Junction (DHJ) model, illustrated on the right, above. NCO recombinants are thought to occur primarily by the Synthesis Dependent Strand Annealing (SDSA) model, illustrated on the left, above. Most recombination events appear to be the SDSA type.

Genetic recombination is the production of offspring with combinations of traits that differ from those found in either parent. In eukaryotes, genetic recombination during meiosis can lead, through sexual reproduction, to a novel set of genetic information that can be passed on from parents to offspring. Most recombination is naturally occurring. During meiosis in eukaryotes, genetic recombination involves the pairing of homologous chromosomes, which may be followed by information exchange between them. The exchange may occur without physical exchange (a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed; see the SDSA pathway in the figure), or by the breaking and rejoining of DNA strands, which forms new molecules of DNA (see the DHJ pathway in the figure).

Recombination may also occur during mitosis in eukaryotes, where it ordinarily involves the two sister chromatids formed after chromosomal replication. In this case, new combinations of alleles are not produced, since the sister chromatids are usually identical. In both meiosis and mitosis, recombination occurs between similar molecules of DNA (homologs). In meiosis, non-sister homologous chromosomes pair with each other, so recombination characteristically occurs between non-sister homologues. In both meiotic and mitotic cells, recombination between homologous chromosomes is a common mechanism of DNA repair.
Genetic recombination and recombinational DNA repair also occur in bacteria and archaea, which use asexual reproduction.

Recombination can be artificially induced in laboratory (in vitro) settings, producing recombinant DNA for purposes including vaccine development.

V(D)J recombination in organisms with an adaptive immune system is a type of site-specific genetic recombination that helps immune cells rapidly diversify to recognize and adapt to new pathogens.

Synapsis

During meiosis, synapsis (the pairing of homologous chromosomes) ordinarily precedes genetic recombination.

Mechanism

Genetic recombination is catalyzed by many different enzymes. Recombinases are key enzymes that catalyze the strand transfer step during recombination. RecA, the chief recombinase found in Escherichia coli, is responsible for the repair of DNA double-strand breaks (DSBs). In yeast and other eukaryotic organisms, two recombinases are required for repairing DSBs: the RAD51 protein is required for mitotic and meiotic recombination, whereas the DNA repair protein DMC1 is specific to meiotic recombination. In the archaea, the ortholog of the bacterial RecA protein is RadA.

Chromosomal crossover

Thomas Hunt Morgan's illustration of crossing over (1916)

In eukaryotes, recombination during meiosis is facilitated by chromosomal crossover. The crossover process leads to offspring having different combinations of genes from those of their parents, and can occasionally produce new chimeric alleles. The shuffling of genes brought about by genetic recombination produces increased genetic variation. It also allows sexually reproducing organisms to avoid Muller's ratchet, in which the genomes of an asexual population accumulate deleterious mutations in an irreversible manner.

Chromosomal crossover involves recombination between the paired chromosomes inherited from each of one's parents, generally occurring during meiosis. During prophase I (pachytene stage) the four available chromatids are in tight formation with one another. While in this formation, homologous sites on two chromatids can closely pair with one another, and may exchange genetic information.[1]

Because recombination can occur with small probability at any location along a chromosome, the frequency of recombination between two locations depends on the distance separating them. Therefore, for genes sufficiently distant on the same chromosome, the amount of crossover is high enough to destroy the correlation between alleles.

Tracking the movement of genes resulting from crossovers has proven quite useful to geneticists. Because two genes that are close together are less likely to become separated than genes that are farther apart, geneticists can deduce roughly how far apart two genes are on a chromosome if they know the frequency of the crossovers. Geneticists can also use this method to infer the presence of certain genes. Genes that typically stay together during recombination are said to be linked. One gene in a linked pair can sometimes be used as a marker to deduce the presence of another gene. This is typically used in order to detect the presence of a disease-causing gene.[2]
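The mapping logic above can be sketched numerically. A minimal Python illustration, assuming a hypothetical cross of 1,000 offspring and the simple rule that 1% recombinant offspring corresponds to one map unit (centimorgan); real mapping functions also correct for double crossovers:

```python
# Hypothetical cross: count offspring whose allele combination differs
# from both parents (recombinants) to estimate how far apart two loci are.

def recombination_frequency(recombinant, total):
    """Fraction of offspring carrying a non-parental allele combination."""
    return recombinant / total

def map_distance_cm(rf):
    """Naive map distance: 1% recombinant offspring ~ 1 centimorgan (cM).
    Reasonable only for small rf; at large distances crossovers are undercounted."""
    return rf * 100

rf = recombination_frequency(recombinant=170, total=1000)
print(f"RF = {rf:.2f} -> roughly {map_distance_cm(rf):.0f} cM apart")
```

Two tightly linked genes would give a recombination frequency near zero, which is exactly why one member of a linked pair can serve as a marker for the other.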

Gene conversion

In gene conversion, a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed. Gene conversion occurs at high frequency at the actual site of the recombination event during meiosis. It is a process by which a DNA sequence is copied from one DNA helix (which remains unchanged) to another DNA helix, whose sequence is altered. Gene conversion has often been studied in fungal crosses,[3] where the four products of individual meioses can be conveniently observed. Gene conversion events can be distinguished as deviations in an individual meiosis from the normal 2:2 segregation pattern (e.g. a 3:1 pattern).
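Classifying tetrads this way is easy to sketch. In the hypothetical data below, each tetrad is the four alleles recovered from one meiosis, and the pattern string is just a sorted allele count:

```python
# Sketch: classify tetrads from a fungal cross (hypothetical alleles "A"/"a").
# Normal Mendelian segregation gives a 2:2 allele ratio among the four
# meiotic products; gene conversion shows up as a deviation such as 3:1.
from collections import Counter

def segregation_pattern(tetrad):
    """Return the allele ratio of a four-product tetrad, e.g. '2:2' or '3:1'."""
    counts = sorted(Counter(tetrad).values(), reverse=True)
    return ":".join(str(c) for c in counts)

print(segregation_pattern(["A", "A", "a", "a"]))  # normal segregation
print(segregation_pattern(["A", "A", "A", "a"]))  # gene conversion event
```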

Nonhomologous recombination

Recombination can occur between DNA sequences that contain no sequence homology. This can cause chromosomal translocations, sometimes leading to cancer.

In B cells

B cells of the immune system perform a type of genetic recombination called immunoglobulin class switching. It is a biological mechanism that changes an antibody from one class to another, for example, from an isotype called IgM to an isotype called IgG.

Genetic engineering

In genetic engineering, recombination can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA. A prime example of such a use of genetic recombination is gene targeting, which can be used to add, delete or otherwise change an organism's genes. This technique is important to biomedical researchers as it allows them to study the effects of specific genes.
Techniques based on genetic recombination are also applied in protein engineering to develop new proteins of biological interest.

Recombinational repair

During both mitosis and meiosis, DNA damages caused by a variety of exogenous agents (e.g. UV light, X-rays, chemical cross-linking agents) can be repaired by homologous recombinational repair (HRR).[4] These findings suggest that DNA damages arising from natural processes, such as exposure to reactive oxygen species that are byproducts of normal metabolism, are also repaired by HRR. In humans and rodents, deficiencies in the gene products necessary for HRR during meiosis cause infertility.[4] In humans, deficiencies in gene products necessary for HRR, such as BRCA1 and BRCA2, increase the risk of cancer (see DNA repair-deficiency disorder).

In bacteria, transformation is a process of gene transfer that ordinarily occurs between individual cells of the same bacterial species. Transformation involves integration of donor DNA into the recipient chromosome by recombination. This process appears to be an adaptation for repairing DNA damages in the recipient chromosome by HRR.[5] Transformation may provide a benefit to pathogenic bacteria by allowing repair of DNA damage, particularly damages that occur in the inflammatory, oxidizing environment associated with infection of a host.

When two or more viruses, each containing lethal genomic damages, infect the same host cell, the virus genomes can often pair with each other and undergo HRR to produce viable progeny. This process, referred to as multiplicity reactivation, has been studied in bacteriophages T4 and lambda,[6] as well as in several pathogenic viruses. In the case of pathogenic viruses, multiplicity reactivation may be an adaptive benefit to the virus since it allows the repair of DNA damages caused by exposure to the oxidizing environment produced during host infection.[5]

Meiotic recombination

Molecular models of meiotic recombination have evolved over the years as relevant evidence accumulated. A major incentive for developing a fundamental understanding of the mechanism of meiotic recombination is that such understanding is crucial for solving the problem of the adaptive function of sex, a major unresolved issue in biology. A recent model that reflects current understanding was presented by Anderson and Sekelsky,[7] and is outlined in the first figure in this article. The figure shows that two of the four chromatids present early in meiosis (prophase I) are paired with each other and able to interact. Recombination, in this version of the model, is initiated by a double-strand break (or gap) shown in the DNA molecule (chromatid) at the top of the first figure in this article. However, other types of DNA damage may also initiate recombination. For instance, an inter-strand cross-link (caused by exposure to a cross-linking agent such as mitomycin C) can be repaired by HRR.

As indicated in the first figure, above, two types of recombinant product are produced. Indicated on the right side is a “crossover” (CO) type, where the flanking regions of the chromosomes are exchanged, and on the left side, a “non-crossover” (NCO) type where the flanking regions are not exchanged. The CO type of recombination involves the intermediate formation of two “Holliday junctions” indicated in the lower right of the figure by two X shaped structures in each of which there is an exchange of single strands between the two participating chromatids. This pathway is labeled in the figure as the DHJ (double-Holliday junction) pathway.

The NCO recombinants (illustrated on the left in the figure) are produced by a process referred to as “synthesis dependent strand annealing” (SDSA). Recombination events of the NCO/SDSA type appear to be more common than the CO/DHJ type.[4] The NCO/SDSA pathway contributes little to genetic variation since the arms of the chromosomes flanking the recombination event remain in the parental configuration. Thus, explanations for the adaptive function of meiosis that focus exclusively on crossing-over are inadequate to explain the majority of recombination events.

Achiasmy and heterochiasmy

Achiasmy is the phenomenon where autosomal recombination is completely absent in one sex of a species. Achiasmatic chromosomal segregation is well documented in male Drosophila melanogaster. Heterochiasmy describes recombination rates that differ between the sexes of a species.[8] This sexually dimorphic pattern in recombination rate has been observed in many species. In mammals, females most often have higher rates of recombination. The "Haldane-Huxley rule" states that achiasmy usually occurs in the heterogametic sex.[8]

Molecular genetics


From Wikipedia, the free encyclopedia

Molecular genetics is the field of biology and genetics that studies the structure and function of genes at a molecular level. It employs the methods of genetics and molecular biology to elucidate molecular function and interactions among genes. It is so called to differentiate it from other subfields of genetics, such as ecological genetics and population genetics.

Along with determining patterns of inheritance, molecular genetics helps in understanding developmental biology and the genetic mutations that can cause certain types of disease. By applying the methods of genetics and molecular biology, molecular genetics uncovers why traits are carried on and how and why some may mutate.

Forward genetics

One of the first tools available to molecular geneticists is the forward genetic screen. The aim of this technique is to identify mutations that produce a certain phenotype. A mutagen is very often used to accelerate this process. Once mutants have been isolated, the mutated genes can be molecularly identified.

Reverse genetics

While forward genetic screens are productive, a more straightforward approach is to determine the phenotype that results from mutating a given gene. This is called reverse genetics. In some organisms, such as yeast and mice, it is possible to induce the deletion of a particular gene, creating what is known as a gene "knockout" - the laboratory origin of so-called "knockout mice" for further study. In other words, this process involves the creation of transgenic organisms that do not express a gene of interest. Alternative methods of reverse genetic research include the random induction of DNA deletions with subsequent selection for deletions in a gene of interest, and the application of RNA interference.

Gene therapy

A mutation in a gene can result in a severe medical condition. A protein encoded by a mutated gene may malfunction and cells that rely on the protein might therefore fail to function properly. This can cause problems for specific tissues or organs, or for the entire body. This might manifest through the course of development (like a cleft palate) or as an abnormal response to stimuli (like a peanut allergy). Conditions related to gene mutations are called genetic disorders. One way to fix such a physiological problem is gene therapy. By adding a corrected copy of the gene, a functional form of the protein can be produced, and affected cells, tissues, and organs may work properly. As opposed to drug-based approaches, gene therapy repairs the underlying genetic defect.
One form of gene therapy is the process of treating or alleviating disease by genetically modifying the cells of the affected person with a new, properly functioning gene. When a human disease gene has been identified, molecular genetics tools can be used to explore the behavior of the gene in both its normal and mutant states. From there, geneticists engineer a new gene that works correctly. The new gene is then transferred either in vivo or ex vivo, and the body begins to make proteins according to the instructions in that gene. Gene therapy has to be repeated several times for the affected patient to be continually relieved, however, as repeated cell division and cell death slowly randomize the body's ratio of functional to mutant genes.

Currently, gene therapy is still experimental, and its products are not approved by the U.S. Food and Drug Administration. There have been several setbacks in the last 15 years that have restricted further developments in gene therapy. Alongside the unsuccessful attempts, there continues to be a growing number of successful gene therapy transfers that have furthered the research.

Major diseases that can be treated with gene therapy include viral infections, cancers, and inherited disorders, including immune system disorders.[citation needed]

Classical gene therapy

Classical gene therapy is the approach that delivers genes, via a modified virus or "vector", to the appropriate target cells with the goal of attaining optimal expression of the newly introduced gene. Once inside the patient, the expressed genes are intended to produce a product that the patient lacks, kill diseased cells directly by producing a toxin, or activate the immune system to help kill diseased cells.

Nonclassical gene therapy

Nonclassical gene therapy inhibits the expression of genes related to pathogenesis, or corrects a genetic defect and restores normal gene expression.

In vivo gene transfer

During in vivo gene transfer, genes are transferred directly into the patient's tissue. This can be the only possible option for tissues in which individual cells cannot be cultured in vitro in sufficient numbers (e.g. brain cells). In vivo gene transfer is also necessary when cultured cells cannot be re-implanted in patients effectively.

Ex vivo gene transfer

During ex vivo gene transfer the cells are cultured outside the body and then the genes are transferred into the cells grown in culture. The cells that have been transformed successfully are expanded by cell culture and then introduced into the patient.

Principles for gene transfer

Classical gene therapies usually require efficient transfer of cloned genes into the disease cells so that the introduced genes are expressed at sufficiently high levels to change the patient's physiology. There are several different physicochemical and biological methods that can be used to transfer genes into human cells. The size of the DNA fragments that can be transferred is very limited, and often the transferred gene is not a conventional gene.
Horizontal gene transfer is the transfer of genetic material from one cell to another that is not its offspring. Artificial horizontal gene transfer is a form of genetic engineering.[1]

Techniques in molecular genetics

There are three general techniques used in molecular genetics: amplification, separation and detection, and expression. Amplification commonly relies on the polymerase chain reaction, an "indispensable tool in a great variety of applications".[2] In separation and detection, DNA and mRNA are isolated from their cells. In expression studies, a gene is expressed in cells or organisms at a place or time that is not normal for that gene.

Amplification

There are other methods of amplification besides the polymerase chain reaction; cloning DNA in bacteria is another way to amplify genes.

Polymerase chain reaction

The main materials used in the polymerase chain reaction are DNA nucleotides, template DNA, primers, and Taq polymerase. The DNA nucleotides are the building blocks for the new DNA; the template DNA is the specific sequence being amplified; the primers are complementary nucleotides that anneal on either side of the template DNA; and Taq polymerase is a heat-stable enzyme that drives the production of new DNA at the high temperatures needed for the reaction.[3] This technique does not require living bacteria or cells; all that is needed is the base sequence of the DNA and the materials listed above.
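The amplification itself is simple doubling arithmetic: each thermal cycle ideally copies every template, so n cycles yield roughly 2**n copies. A sketch, where the per-cycle efficiency parameter is an illustrative assumption (real reactions run below perfect efficiency and eventually plateau):

```python
# Sketch: ideal PCR doubles the number of target copies every cycle.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after `cycles` rounds; efficiency 1.0 means perfect doubling,
    lower values model templates that fail to copy in a given cycle."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))        # one template, 30 cycles: ~1.07 billion copies
print(pcr_copies(1, 30, 0.9))   # the same run at 90% per-cycle efficiency
```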

Cloning DNA in bacteria

In this type of amplification, cloning means making multiple identical copies of a sequence of DNA. The target DNA sequence is inserted into a cloning vector, which originates from a self-replicating virus, plasmid, or higher-organism cell. When DNA of the appropriate size is inserted, the "target and vector DNA fragments are then ligated"[2] to create a recombinant DNA molecule. The recombinant DNA molecules are then introduced into a bacterial strain (usually E. coli) by transformation, the DNA-uptake mechanism possessed by bacteria, and the bacteria produce many identical copies. Because only one recombinant DNA molecule can be cloned within a single bacterial cell, each clone carries just one DNA insert.

Separation and detection

In separation and detection, DNA and mRNA are isolated from cells (the separation) and then identified (the detection). Cell cultures are also grown to provide a constant supply of cells ready for isolation.

Cell cultures

A cell culture for molecular genetics is a culture grown under artificial conditions. Some cell types, such as skin cells, grow well in culture, but others are not as productive. There are different techniques for each type of cell; some that foster growth in stem and nerve cells have been found only recently. Cultures for molecular genetics are frozen in order to preserve all copies of the gene specimen and thawed only when needed. This allows for a steady supply of cells.

DNA isolation

DNA isolation extracts DNA from a cell in a pure form. First, the DNA is separated from cellular components such as proteins, RNA, and lipids. This is done by placing the chosen cells in a tube with a solution that mechanically and chemically breaks the cells open. The solution contains enzymes, chemicals, and salts that break down everything in the cells except the DNA: enzymes to dissolve proteins, chemicals to destroy any RNA present, and salts to help pull the DNA out of solution.

Next, the DNA is separated from the solution by spinning in a centrifuge, which collects the DNA at the bottom of the tube. After this centrifugation step, the solution is poured off and the DNA is resuspended in a second solution that makes it easy to work with later.

This results in a concentrated DNA sample that contains thousands of copies of each gene. For large scale projects such as sequencing the human genome, all this work is done by robots.

mRNA isolation

Expressed DNA that codes for the synthesis of a protein is the final goal for scientists, and this expressed DNA is obtained by isolating mRNA (messenger RNA). First, laboratories exploit a normal cellular modification of mRNA: the addition of up to 200 adenine nucleotides to the end of the molecule (the poly(A) tail). Once this tail has been added, the cell is ruptured and its contents are exposed to synthetic beads coated with strings of thymine nucleotides.
Because adenine and thymine pair together, the poly(A) tails bind to the beads, and the other cell components can be washed away without removing the mRNA. Once the mRNA has been isolated, reverse transcriptase converts it to single-stranded DNA, from which DNA polymerase produces a stable double-stranded DNA. This complementary DNA (cDNA) is much more stable than mRNA, and once the double-stranded form has been produced it represents the expressed DNA sequence scientists look for.[4]
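The base-pairing step behind oligo(dT) capture and first-strand cDNA synthesis can be sketched in code; the short mRNA sequence below is a made-up example (real poly(A) tails run to roughly 200 adenines):

```python
# Sketch: reverse transcription pairs each mRNA base with its DNA
# complement (A-T, U-A, G-C, C-G) and reads the new strand in the
# opposite direction, yielding first-strand cDNA.

COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(mrna):
    """First-strand cDNA: DNA complement of the mRNA, reversed (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna))

mrna = "AUGGCCAAAAAA"            # hypothetical message with a short poly(A) tail
print(reverse_transcribe(mrna))  # TTTTTTGGCCAT
```

Note how the poly(A) run becomes a poly(T) run in the cDNA, which is exactly the pairing the thymine-coated beads exploit during capture.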

The Human Genome Project

The Human Genome Project is a molecular genetics project that began in the 1990s and was projected to take fifteen years to complete. However, because of technological advances, progress was faster than expected, and the project finished in 2003, taking only thirteen years. The project was started by the U.S. Department of Energy and the National Institutes of Health in an effort to reach six set goals. These goals included:
  1. identifying 20,000 to 25,000 genes in human DNA (although initial estimates were approximately 100,000 genes),
  2. determining sequences of chemical base pairs in human DNA,
  3. storing all found information into databases,
  4. improving the tools used for data analysis,
  5. transferring technologies to private sectors, and
  6. addressing the ethical, legal, and social issues (ELSI) that may arise from the projects.[5]
The project was worked on by eighteen different countries including the United States, Japan, France, Germany, and the United Kingdom. The collaborative effort resulted in the discovery of the many benefits of molecular genetics.
Discoveries such as molecular medicine, new energy sources and environmental applications, DNA forensics, and livestock breeding, are only a few of the benefits that molecular genetics can provide.[5]

The Islamic State, apostasy and ‘real Muslims’

Many people today deny that the violent and fanatical Islamic State or ISIL/ISIS is actually Islamic. Appalled by its horrendous crimes against humanity, political and religious leaders and members of the public alike have been scrambling to claim that “the Islamic State isn’t really Islamic.” Muslims who deny that ISIL’s members are “real Muslims” are engaging in what is called in Islam takfir, the act of one Muslim accusing another Muslim of apostasy or being an “infidel.” Those Muslims who make such accusations are called takfiri, as described in the Wikipedia article on “Takfir“:
In principle the only group authorised to declare a member of an Abrahamic religion a kafir (“infidel”) is the ulema, and this is only done once all the prescribed legal precautions have been taken. However, a growing number of splinter Wahhabist/Salafist groups, classified by some scholars as Salafi-Takfiris, have split from the orthodox method of establishing takfir through the processes of the Sharia law. They have reserved the right to declare apostasy against any Muslim, in addition to non-Muslims.
Takfiris…condone acts of violence as legitimate methods of achieving religious or political goals…. A takfiri’s mission is to re-create the Caliphate according to a literal interpretation of the Qur’an.
The latter paragraph describes the Islamic State (ISIL/ISIS) perfectly, as does the next:
Takfiris believe in Islam strictly according to their interpretation of Muhammad’s and his companions’ actions and statements, and do not accept any deviation from their path; they reject any reform or change to their interpretation of religion as it was revealed in the time of the prophet. Those who change their religion from Islam to any other way of life, or deny any of the fundamental foundations of Islam, or who worship, follow or obey anything other than Islam, become those upon whom the takfiris declare the “takfir”, calling them apostates from Islam and so no longer Muslim.
Will anyone declaring the Islamic State is “not really Islamic” stand before these takfiris and accuse them of apostasy? Will you sentence them all to death?
According to at least one source (Trevor Stanley), the precedent “for the declaration of takfir against a leader” came from the medieval Islamic scholar Taqi al-Din Ibn Taymiyyah, who issued a famous fatwa declaring jihad against the invading Mongols. This was not because they were invading but because they were apostates, apostasy from Islam being punishable by death.
Like jihadis, takfiri groups advocate armed struggle against the secular regime.
Moreover, suicide bombing or any other violent act that brings about one’s own death is a legitimate practice for takfiris such as the Islamic State’s members:
Takfiris believe that one who deliberately kills himself whilst attempting to kill enemies is a martyr (shahid) and therefore goes straight to heaven.
So much for “peaceful Islam” and the “greater jihad,” which represents mere “inner struggle.” Takfiri Islamists cannot live peacefully with secularism and nonbelief. They must fight it – and violently, engaging in “lesser jihad” or “struggle against those who do not believe in the Islamic God (Allah) and do not acknowledge the submission to Muslims, and so is often translated as ‘Holy War’…”

The takfiris‘ violence thus is not a “defense,” except that they feel secularism or nonbelief is an offense. Hence, all it takes for these fanatics to be violent is the mere existence of nonbelievers.

Being a takfiri (or jihadi) Muslim, therefore, means eternal declaration of war against nonbelievers, regardless of whether or not the latter have done anything overtly to the ummah or global Muslim community.

Thus, the “blowback” excuse proffered by “progressive liberals” and “leftists” for Islamist aggression is fallacious. The only “provocation” these violent fanatics require is non-adherence to their literal, fundamentalist brand of Islam, which is why they are currently engaging in wholesale genocide not only of non-Muslims but also of other Muslims, whom these takfiris consider to be kuffar (infidels/apostates) and munafiqun or “hypocrites.”

Baghdadi, the Islamic State caliph, isn’t a ‘real Muslim’?

Who is a ‘real Muslim?’

If the Islamic State is not really Islamic, we encounter the problem of defining who is a “real Muslim.” It is widely claimed there are at least 1.5 billion Muslims in the world, a figure held up often by Muslim devotees as a show of strength and for purposes of intimidation. However, over the years I have been told repeatedly by many Muslims and ex-Muslims that MILLIONS of human beings forced to call themselves “Muslim” would like to leave Islam but they cannot, under fear of punishment, including possibly the death penalty for apostasy. In any event, we can remove millions of potential apostates from that figure of 1.5 billion.

Some Muslims also claim the fundamentalist Saudis do not represent “real” Islam and that Saudi Arabia is not “really” a Muslim country, even though its citizenship is 100% Muslim. If the Saudis are not “real” Muslims, then we likewise must remove 30 million people from the 1.5 billion figure tossed around of supposed Muslims in the world.

Next come all others influenced by Saudi-style Wahhabism and Salafism, such as the Kuwaitis, Yemenis, UAE citizens and many others globally – that’s hundreds of millions more, potentially most of the Sunnis, in fact. Remove them from the global Muslim total.

How about the Shia Muslims as found in Syria, Iraq, Iran and other places? That would be possibly 200 million people removed from the Muslim count, since they are not “real” Muslims, according to the Sunnis.

The Ahmadis? Remove their millions from the count. Ditto with Boko Haram, al Qaeda, Jamaat ul-Fuqra, al Shabaab, the Islamic State and thousands of other heretical or extremist groups, totaling millions more.

By the time we come up with “real” Muslims according to everyone’s standards, there are not so many left in the world, far less than 1.5 billion.

By these standards, we cannot say that Islam is the second largest religion in the world, since Buddhists and Hindus would dwarf these ten to a few hundred million “real” Muslims.

In any event, declaring all of these Muslims to be apostates is the act of takfir, and one must be prepared to take responsibility for this declaration, which is not supposed to be made lightly or by lay persons and which can bring with it the death penalty.

Further Reading

http://en.wikipedia.org/wiki/Takfiri
What is the ummah?
The Truth About Islam
Tom Holland: We must not deny the religious roots of Islamic State


Ex-Muslim imam: ‘I wasted a significant part of my life believing in this load of crap’

Islamic crescent moon, star and sword 
(Editor: An ex-Muslim from Fiji posted the following
comments on an article here,
The Islamic State, apostasy and ‘real Muslims.’ )

I am an ex-Muslim with a seriously long axe to grind. I was born into a Muslim home. As the eldest child I was supposed to carry the religious ideals of the home, and before I knew how to say “Papa,” I was sent to a Quranic teacher who taught me how to read in Arabic and recite the Quran. I was also taught how to pray etc.

I live in Fiji, and we are a largely secular state where western education is widely taught and compulsory. Thank heavens for this fact, as I was also sent to a normal school.

By the time I was 13 years old, I was brainwashed into believing that the sun literally shined out of the butcher of Mecca (Mohammed’s) ass. I was an Imam by then and led about 50 grown men in prayer.

I was taught that Islam was “it,” and everyone else was either to be converted or to be seen as almost subhuman—this, despite the fact that Fiji Muslims are regarded as very moderate.

Fortunately for me, I have an inquiring mind, and, by the time I started high school, many questions regarding my faith started coming to mind. As I started earning for myself and moved away from the controls of my parents, these questions started bothering me more and more.

I eventually sat down and read the Quran and its supporting Hadiths in English to understand them better. By the end of it all, I was so pissed that I had wasted a significant part of my life believing in this load of crap. For the life of me, I could not understand how anyone in his or her right mind could follow Mohammed, let alone treat him as almost a demigod.

Quite frankly, I was ashamed, and a part of me still is for being such a fool. I soon started researching religion, and it was not long before I found this website and many others like it, which helped to clear a lot of the questions that I had.

While I am not an atheist, as I do believe that there are too many unexplained phenomena in this world to discount a creator, I will happily admit that I simply do not know. What I do know is that the sooner the organized stupidity some call religion is removed from this world, the better.

“There is nothing rational or peaceful about Islam, and anybody saying there is is deluded and in need of serious help.”

I do hold a particular disdain for the religion of Islam, as I believe that it is as evil as it gets. Islam is, as I like to put it, Christianity on steroids. There is nothing rational or peaceful about it, and anybody saying it is is deluded and in need of serious help.

—Reaaz Ali

Intelligent robots must uphold human rights


The common fear is that intelligent machines will turn against humans. But who will save the robots from each other, and from us, asks Hutan Ashrafian.


There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society.
Microsoft co-founder Bill Gates and physicist Stephen Hawking have in recent months warned of the dangers of intelligent robots becoming too powerful for humans to control. The ethical conundrum of intelligent machines and how they relate to humans has long been a theme of science fiction, and has been vividly portrayed in films such as 1982's Blade Runner and this year's Ex Machina.

Academic and fictional analyses of AIs tend to focus on human–robot interactions, asking questions such as: would robots make our lives easier? Would they be dangerous? And could they ever pose a threat to humankind?

These questions ignore one crucial point. We must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators. For example, if we were to allow sentient machines to commit injustices on one another — even if these 'crimes' did not have a direct impact on human welfare — this might reflect poorly on our own humanity. Such philosophical deliberations have paved the way for the concept of 'machine rights'.

Most discussions on robot development draw on the Three Laws of Robotics devised by science-fiction writer Isaac Asimov: robots may not injure humans (or through inaction allow them to come to harm); robots must obey human orders; and robots must protect their own existence. But these rules say nothing about how robots should treat each other. It would be unreasonable for a robot to uphold human rights and yet ignore the rights of another sentient thinking machine.

Animals that exhibit thinking behaviour are already afforded rights and protection, and civilized society shows contempt for animal fights that are set up for human entertainment. It follows that sentient machines that are potentially much more intelligent than animals should not be made to fight for entertainment.
“Intelligent robots remain science fiction, but it is not too early to take these issues seriously.”
Of course, military robots are already being deployed in conflicts. But outside legitimate warfare, forcing AIs and robots into conflict, or mistreating them, would be detrimental to humankind's moral, ethical and psychological well-being.

Intelligent robots remain science fiction, but it is not too early to take these issues seriously. In the United Kingdom, for example, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council have already introduced a set of principles for robot designers. These reinforce the position that robots are manufactured products, so that “humans, not robots, are responsible agents”.

Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering.

National and international technological policies should introduce AIonAI concepts into current programmes aimed at developing safe AIs. We must engage with educational activities and research, and continue to raise philosophical awareness. There could even be an annual AIonAI prize for the 'most altruistically designed AI'.

Social scientists and philosophers should be linked to cutting-edge robotics and computer research. Technological funders could support ethical studies on AIonAI concepts in addition to funding AI development. Medical funders such as the Wellcome Trust follow this model already: supporting research on both cutting-edge healthcare and medical ethics and history.

Current and future AI and robotic research communities need to have sustained exposure to the ideas of AIonAI. Conferences focused on AIonAI issues could be a hub of research, guidelines and policy statements. The next generation of robotic engineers and AI researchers can also be galvanized to adopt AIonAI principles through hybrid degree courses. For example, many people who hope to get into UK politics take a course in PPE (politics, philosophy and economics) — an equivalent course for students with ambitions in robotics and AI could be CEP (computer science, engineering and philosophy).

We should extend Asimov's Three Laws of Robotics to support work on AIonAI interaction. I suggest a fourth law: all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood.

Do not underestimate the likelihood of artificial thinking machines. Humankind is arriving at the horizon of the birth of a new intelligent race. Whether or not this intelligence is 'artificial' does not detract from the issue that the new digital populace will deserve moral dignity and rights, and a new law to protect them.
Nature 519, 391 (2015). doi:10.1038/519391a

Fact or Fiction?: Dark Matter Killed the Dinosaurs

A new out-of-this-world theory links mass extinctions with exotic astrophysics and galactic architecture
An image of Manicouagan Crater as seen from the International Space Station

The 100-kilometer-wide Manicouagan Crater in Canada was produced by a 5-kilometer-wide space rock smacking into Earth about 215 million years ago. A similar, larger impact some 66 million years ago is thought to have wiped out the dinosaurs. Some researchers believe giant impacts occur cyclically, driven by our solar system's movement through a disk of dark matter in the Milky Way.
Credit: NASA
Every once in a great while, something almost unspeakable happens to Earth. Some terrible force reaches out and tears the tree of life limb from limb. In a geological instant, countless creatures perish and entire lineages simply cease to exist.

The most famous of these mass extinctions happened about 66 million years ago, when the dinosaurs died out in the planet-wide environmental disruption that followed a mountain-sized space rock walloping Earth. We can still see the scar from the impact today as a nearly 200-kilometer-wide crater in the Yucatan Peninsula.

But this is only one of the “Big Five” cataclysmic mass extinctions recognized by paleontologists, and not even the worst. Some 252 million years ago, the Permian-Triassic mass extinction wiped out an estimated nine of every ten species on the planet—scientists call this one “the Great Dying.” In addition to the Big Five, evidence exists for dozens of other mass extinction events that were smaller and less severe. Not all of these are conclusively related to giant impacts; some are linked instead to enormous upticks in volcanic activity worldwide that caused dramatic, disruptive climate change and habitat loss. Researchers suspect that many—perhaps most—mass extinctions come about through the stresses caused by overlapping events, such as a giant impact paired with an erupting supervolcano. Maybe the worst mass extinctions are simply matters of poor timing, cases of planetary bad luck.

Or maybe mass extinctions are not matters of chaotic chance at all. Perhaps they are in fact predictable and certain, like clockwork. Some researchers have speculated as much because of curious patterns they perceive in giant impacts, volcanic activity and biodiversity declines.

In the early 1980s, the University of Chicago paleontologists David Raup and Jack Sepkoski found evidence for a 26-million-year pattern of mass extinction in the fossil record since the Great Dying of the Permian-Triassic. This 26-million-year periodicity overlaps and closely aligns with the Big Five extinctions, as well as several others. In subsequent work over the years, several other researchers examining Earth’s geological record have replicated Raup and Sepkoski’s original conclusions, finding a mass-extinction periodicity of roughly 30 million years that extends back half a billion years. Some of those same researchers have also claimed to detect similar, aligned periodicities in impact cratering and in volcanic activity. Every 30 million years, give or take a few million, it seems the stars align to make all life on Earth suffer. Yet for want of a clear mechanism linking all these different phenomena together, the idea has languished for years at the scientific fringe.

It may not be a fringe idea much longer. According to a new theory from Michael Rampino, a geoscientist at New York University, dark matter may be the missing link—the mechanism behind Earth’s mysterious multi-million-year cycles of giant impacts, massive volcanism and planetary death.

Dark matter is an invisible substance that scarcely interacts with the rest of the universe through any force other than gravity. Whatever dark matter is, astronomers have inferred there is quite a lot of it by watching how large-scale structures respond to its gravitational pull. Dark matter seems to constitute almost 85 percent of all the mass in the universe, and it is thought to be the cosmic scaffolding upon which galaxies coalesce. Many theories, in fact, call for dark matter concentrating in the central planes of spiral galaxies such as the Milky Way. Our solar system, slowly orbiting the galactic core, periodically moves up and down through this plane like a cork bobbing in water. The period of our bobbing solar system is thought to be roughly 30 million years. Sound familiar?

In 2014, the Harvard University physicists Lisa Randall and Matthew Reece published a study showing how the gravitational pull from a thin disk of dark matter in the galactic plane could perturb the orbits of comets as our solar system passed through, periodically peppering Earth with giant impacts. To reliably knock the far-out comets down into Earth-crossing orbits, the dark-matter disk would need to be thin, about one-tenth the thickness of the Milky Way’s visible disk of stars, and with a density of at least one solar mass per square light-year.
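For a sense of scale, the quoted minimum surface density of one solar mass per square light-year can be converted into everyday units. This is an illustrative back-of-envelope conversion, not a calculation from Randall and Reece's paper:

```python
# Convert ~1 solar mass per square light-year into SI units.
M_SUN_KG = 1.989e30       # mass of the Sun, in kilograms
LIGHT_YEAR_M = 9.461e15   # one light-year, in meters

sigma = M_SUN_KG / LIGHT_YEAR_M**2   # surface density in kg per square meter
print(f"{sigma:.3f} kg/m^2")
```

The result is roughly 0.02 kg per square meter, about a quarter the areal density of ordinary printer paper, which gives some intuition for how diffuse even a "dense" dark-matter disk would be.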

Randall and Reece’s theory is broadly consistent with dark matter’s plausible properties, but the researchers only used it to explain the periodicity of impacts. In his new study, published in the Monthly Notices of the Royal Astronomical Society, Rampino suggests dark matter can explain the presumed periodicity of volcanism, too.

If dark matter forms dense clumps rather than being uniformly spread throughout the disk, Rampino says, then Earth could sweep up and capture large numbers of dark-matter particles in its gravitational field as it passes through the disk. The particles would fall to Earth’s core, where they could reach sufficient densities to annihilate each other, heating the core by hundreds of degrees during the solar system’s crossing of the galactic plane. For millions of years, the overheated core would belch gigantic plumes of magma up toward the surface, birthing gigantic volcanic eruptions that rip apart continents, alter sea levels and change the climate. All the while, comets perturbed by the solar system’s passage through the dark-matter disk would still be pounding the planet. Death would come from above and below in a potent one-two punch that would set off waves of mass extinction.

If true, Rampino’s hypothesis would have profound implications not only for the past and future of life on Earth, but for planetary science as a whole. Scientists would be forced to consider the histories of Earth and the solar system’s other rocky worlds in a galactic context, where the Milky Way’s invisible dark-matter architecture is the true cause of key events in a planet’s life. “Most geologists will not like this, as it might mean that astrophysics trumps geology as the underlying driver for geological changes,” Rampino says. “But geology, or let us say planetary science, is really a subfield of astrophysics, isn't it?”

The key question, of course, is whether some of the Milky Way’s dark matter actually exists in a thin, clumpy disk. Fortunately, within a decade researchers should have a wealth of data in hand that could disprove or validate Rampino’s controversial idea. Launched in 2013 to map the motions of a billion stars in the Milky Way, the European Space Agency’s Gaia spacecraft will help pin down the dimensions of any dark-matter disk and how often our solar system oscillates through it. The discovery and study of additional ancient craters could also confirm or refute the postulated periodicity of giant impacts and help determine how many were caused by comets rather than asteroids. If Gaia’s results reveal no signs of a thin, dense dark-matter disk, or if studies show that more craters are caused by rocky asteroids from the inner solar system than by icy comets, Rampino and other researchers will probably have to go back to the drawing board.

Alternatively, evidence for or against dark-matter-driven mass extinctions could come from extragalactic astronomy and even from particle physics itself. Recent observations of small satellite galaxies orbiting Andromeda, the Milky Way’s nearest neighboring spiral galaxy, tentatively support the existence of a dark-matter disk there, suggesting that our galaxy probably has one, too.

Even so, the University of Michigan astrophysicist Katherine Freese, one of the first researchers to rigorously examine how dark-matter annihilation could occur inside Earth, notes that Rampino’s scenario would demand “very special dark matter.” Specifically, the dark matter would have to weakly interact with itself to dissipate enough energy to cool and settle into a placid, very thin disk. According to Lisa Randall, who co-authored the 2014 paper suggesting dark matter might drive extinctions through giant impacts, several plausible theoretical models predict such a disk, but very few allow the dense intra-disk clumps required by Rampino’s hypothesis.

“In most models of dark matter, these clumps don’t exist,” Randall says. “Even if they do exist and are distributed in a disk, we don’t see that they will pass through the Earth sufficiently often. After all, clumps are not space-filling—there is room in between for the solar system to pass through.” Further, Randall notes that if dense clumps do exist in a thin disk of dark matter, the dark matter should occasionally annihilate in the clumps to produce gamma rays. “It’s not clear why we wouldn’t have already observed that gamma-ray signal,” she says.

There is also the possibility that theories of thin, self-interacting dark-matter disks could be swept away entirely if and when one of the many dark-matter detection experiments now underway finally spots its quarry and pins down the particulate identity of this elusive cosmic substance.

Or, perhaps most likely, the purported periodicities of mass extinctions, impacts and flood basalts are not as clear-cut and precisely aligned as might be hoped. Coryn Bailer-Jones, an astrophysicist at the Max Planck Institute for Astronomy in Heidelberg, Germany, who has performed statistical analyses of both impact-cratering rates and mass extinctions, is skeptical that either exhibits periodic behavior at all.

The problem is that the available data are not very good and carry immense uncertainties. Impact-crater statistics for Earth are notoriously variable and suspect. Their supposed periodicity fluctuates greatly depending on the minimum sizes of evaluated craters, and craters can be erased, obscured or even mimicked by a variety of geological processes. According to Bailer-Jones, biodiversity statistics from the fossil record are still more problematic, due to an even greater number of complex mechanisms dictating how, when and where fossils of different varieties of organisms are formed and preserved. Furthermore, Bailer-Jones notes, the solar system’s up-and-down oscillation through the Milky Way still has multi-million-year uncertainties.

While claims of overlapping, aligning periodicities within all this data could be significant and valid, Bailer-Jones says, in all likelihood they are instead the product of an all-too-human tendency to project order and logic onto little more than chaotic noise. Periodicity proponents have strenuously disagreed, and the heated, back-and-forth battle is still ongoing in the scientific literature.

“I think it’s interesting and worthwhile to ask these questions,” Bailer-Jones says. “But we must be careful not to give the impression that we actually have a problem that needs dark matter as a mechanism for mass extinction. It’s fine to talk about the mechanism, but the supposed periodicities—or rather, lack thereof—don’t provide any evidence for it.”

Rampino and others who see periodicities in fossils and craters freely acknowledge that their conclusions are speculative and that some of their statistics are presently underwhelming. Yet the telltale hints of order they glimpse in shattered rocks and scattered fossils still fuel their search for some final puzzle piece, some crucial evidence that will at last make everything cohere and confirm what could be the greatest cycle of life and death ever discovered.

One way or another, time will eventually tell. On geological timescales, our oscillating, bobbing solar system has recently crossed the mid-plane of the Milky Way, passing through the very region where a dark-matter disk would exist. Perhaps the faraway comets feel that gentle tug even now, and Earth’s core is already sizzling with dark-matter annihilation. Confirmation may be as close as the next spate of extinction-level cometary impacts or supervolcanic eruptions. Keep watching the skies—and the ground right beneath your feet.

New Reactor Paves the Way for Efficiently Producing Fuel from Sunlight

Original link:  http://www.caltech.edu/content/new-reactor-paves-way-efficiently-producing-fuel-sunlight

PASADENA, Calif.—Using a humble material most famously found in self-cleaning ovens, Sossina Haile hopes to change our energy future. The material is cerium oxide—or ceria—and it is the centerpiece of a promising new technology developed by Haile and her colleagues that concentrates solar energy and uses it to efficiently convert carbon dioxide and water into fuels.

Solar energy has long been touted as the solution to our energy woes, but while it is plentiful and free, it can't be bottled up and transported from sunny locations to the drearier—but more energy-hungry—parts of the world. The process developed by Haile—a professor of materials science and chemical engineering at the California Institute of Technology (Caltech)—and her colleagues could make that possible.

The researchers designed and built a two-foot-tall prototype reactor that has a quartz window and a cavity that absorbs concentrated sunlight. The concentrator works "like the magnifying glass you used as a kid" to focus the sun's rays, says Haile.

At the heart of the reactor is a cylindrical lining of ceria. Ceria—a metal oxide that is commonly embedded in the walls of self-cleaning ovens, where it catalyzes reactions that decompose food and other stuck-on gunk—propels the solar-driven reactions. The reactor takes advantage of ceria's ability to "exhale" oxygen from its crystalline framework at very high temperatures and then "inhale" oxygen back in at lower temperatures.

"What is special about the material is that it doesn't release all of the oxygen. That helps to leave the framework of the material intact as oxygen leaves," Haile explains. "When we cool it back down, the material's thermodynamically preferred state is to pull oxygen back into the structure."
The ETH-Caltech solar reactor for producing H2 and CO from H2O and CO2 via the two-step thermochemical cycle with ceria redox reactions.
Specifically, the inhaled oxygen is stripped off of carbon dioxide (CO2) and/or water (H2O) gas molecules that are pumped into the reactor, producing carbon monoxide (CO) and/or hydrogen gas (H2). H2 can be used to fuel hydrogen fuel cells; CO, combined with H2, can be used to create synthetic gas, or "syngas," which is the precursor to liquid hydrocarbon fuels. Adding other catalysts to the gas mixture, meanwhile, produces methane. And once the ceria is oxygenated to full capacity, it can be heated back up again, and the cycle can begin anew.
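The two-step cycle described above can be written out and checked for atom balance. Note this uses a simplified, fully stoichiometric form (CeO2 reduced all the way to Ce2O3) for illustration; the actual reactor cycles between CeO2 and a slightly oxygen-deficient CeO2−δ:

```python
# Simplified two-step ceria thermochemical cycle (illustrative stoichiometry):
#   high T:  2 CeO2       -> Ce2O3 + 1/2 O2     ("exhale" oxygen)
#   low T:   Ce2O3 + H2O  -> 2 CeO2 + H2        (water splitting)
#   low T:   Ce2O3 + CO2  -> 2 CeO2 + CO        (CO2 splitting)

def atoms(terms):
    """Sum element counts over a list of (coefficient, composition) terms."""
    total = {}
    for coeff, species in terms:
        for el, n in species.items():
            total[el] = total.get(el, 0) + coeff * n
    return total

CeO2  = {"Ce": 1, "O": 2}
Ce2O3 = {"Ce": 2, "O": 3}
O2, H2O, H2 = {"O": 2}, {"H": 2, "O": 1}, {"H": 2}
CO2, CO = {"C": 1, "O": 2}, {"C": 1, "O": 1}

reactions = [
    ([(2, CeO2)],            [(1, Ce2O3), (0.5, O2)]),  # thermal reduction
    ([(1, Ce2O3), (1, H2O)], [(2, CeO2), (1, H2)]),     # water splitting
    ([(1, Ce2O3), (1, CO2)], [(2, CeO2), (1, CO)]),     # CO2 splitting
]

for lhs, rhs in reactions:
    assert atoms(lhs) == atoms(rhs), "unbalanced reaction"
print("all three steps balance")
```

The regenerated CeO2 at the end of each low-temperature step is exactly what allows the cycle to begin anew, as the article notes.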

For all of this to work, the temperatures in the reactor have to be very high—nearly 3,000 degrees Fahrenheit. At Caltech, Haile and her students achieved such temperatures using electrical furnaces. But for a real-world test, she says, "we needed to use photons, so we went to Switzerland." At the Paul Scherrer Institute's High-Flux Solar Simulator, the researchers and their collaborators—led by Aldo Steinfeld of the institute's Solar Technology Laboratory—installed the reactor on a large solar simulator capable of delivering the heat of 1,500 suns.

In experiments conducted last spring, Haile and her colleagues achieved the best rates for CO2 dissociation ever recorded, "by orders of magnitude," she says. The efficiency of the reactor was uncommonly high for CO2 splitting, in part, she says, "because we're using the whole solar spectrum, and not just particular wavelengths." And unlike in electrolysis, the rate is not limited by the low solubility of CO2 in water. Furthermore, Haile says, the high operating temperatures of the reactor mean that fast catalysis is possible, without the need for expensive and rare metal catalysts (cerium, in fact, is the most common of the rare earth metals—about as abundant as copper).

In the short term, Haile and her colleagues plan to tinker with the ceria formulation so that the reaction temperature can be lowered, and to re-engineer the reactor, to improve its efficiency. Currently, the system harnesses less than 1% of the solar energy it receives, with most of the energy lost as heat through the reactor's walls or by re-radiation through the quartz window. "When we designed the reactor, we didn't do much to control these losses," says Haile. Thermodynamic modeling by lead author and former Caltech graduate student William Chueh suggests that efficiencies of 15% or higher are possible.
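The efficiency figure quoted above is simply the chemical energy stored in the product gas divided by the solar energy delivered to the reactor. A minimal sketch of that bookkeeping, using hypothetical placeholder numbers (the production rate and input power below are not from the paper):

```python
# Solar-to-fuel efficiency: chemical energy out over solar energy in.
# All operating numbers here are hypothetical, chosen only to illustrate
# the definition and the "below 1%" regime reported for the prototype.
CO_HEATING_VALUE_J_PER_MOL = 283_000   # heat of combustion of CO, ~283 kJ/mol

def solar_to_fuel_efficiency(co_mol_per_s, solar_power_w):
    return co_mol_per_s * CO_HEATING_VALUE_J_PER_MOL / solar_power_w

# e.g. 1.9 kW of simulated sunlight producing 6e-5 mol of CO per second:
eta = solar_to_fuel_efficiency(6e-5, 1900)
print(f"{eta:.1%}")
```

Raising the CO production rate or cutting wall and re-radiation losses (so that less input power is needed per mole of fuel) is exactly what pushes this ratio toward the modeled 15%.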

Ultimately, Haile says, the process could be adopted in large-scale energy plants, allowing solar-derived power to be reliably available during the day and night. The CO2 emitted by vehicles could be collected and converted to fuel, "but that is difficult," she says. A more realistic scenario might be to take the CO2 emissions from coal-powered electric plants and convert them to transportation fuels. "You'd effectively be using the carbon twice," Haile explains. Alternatively, she says, the reactor could be used in a "zero CO2 emissions" cycle: H2O and CO2 would be converted to methane, which would fuel electricity-producing power plants that generate more CO2 and H2O, keeping the process going.

A paper about the work, "High-Flux Solar-Driven Thermochemical Dissociation of CO2 and H2O Using Nonstoichiometric Ceria," was published in the December 23 issue of Science. The work was funded by the National Science Foundation, the State of Minnesota Initiative for Renewable Energy and the Environment, and the Swiss National Science Foundation.
 
Written by Kathy Svitil

Peel Commission

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Peel_Commission   Report of the Palest...