
Monday, July 16, 2018

Humans With Amplified Intelligence Could Be More Powerful Than AI


Top image: imredesiuk/shutterstock.

With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence, known as intelligence amplification (IA). It's an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.
 
As a species, we've been amplifying our brains for millennia. Or at least we've tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today's nootropics. But none of these compare to what's in store.
 
Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (ASI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of an existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.

Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organiser of the Singularity Summit. He's given this subject considerable thought — and warns that we need to be just as wary of IA as we are AI.

 
Michael, when we speak of Intelligence Amplification, what are we really talking about? Are we looking to create Einsteins? Or is it something significantly more profound?

The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

The first step will be to create a direct neural link to information. Think of it as a "telepathic Google."

The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualisation and manipulation capabilities. Imagine being able to visualise a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of the sensory cortex, like the tactile and auditory cortices.

The third step involves the genuine augmentation of the prefrontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats. For instance, mind-controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone-age human — but the possibility is real.

For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.

Looking ahead to the next few decades, how could IA come about? Is the human brain really that fungible?

The human brain is not really that fungible. It is the product of more than seven million years of evolutionary optimisation and fine-tuning, which is to say that it's already highly optimised given its inherent constraints. Attempts to overclock it usually cause it to break, as demonstrated by the horrific effects of amphetamine addiction.

Chemicals are not targeted enough to produce big gains in human cognitive performance. The evidence for the effectiveness of current "brain-enhancing drugs" is extremely sketchy. To achieve real strides will require brain implants with connections to millions of neurons. This will require millions of tiny electrodes, and a control system to synchronise them all. The current state-of-the-art brain-computer interfaces have around 1,000 connections. So, current devices need to be scaled up by more than 1,000 times to get anywhere interesting. Even if you assume exponential improvement, it will be a while before this is possible — at least 15 to 20 years.
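As a rough sanity check on that timeline, here is a minimal Python sketch of the scaling arithmetic, assuming electrode counts compound at a Moore's-law-style doubling every 18 months; the doubling rate and the target counts are illustrative assumptions, not measured trends:

```python
import math

# Illustrative assumptions: ~1,000 connections in today's best interfaces,
# millions needed, and one doubling every 18 months.
current_connections = 1_000
for needed in (1_000_000, 10_000_000):
    doublings = math.log2(needed / current_connections)
    years = doublings * 1.5  # 18 months per doubling
    print(f"{needed:>10,} connections: ~{years:.0f} years")
```

Roughly 15 years to reach a million connections and 20 years to reach ten million, which lines up with the "at least 15 to 20 years" estimate.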

Improvement in IA rests upon progress in nano-manufacturing. Brain-computer interface engineers, like Ed Boyden at MIT, depend upon improvements in manufacturing to build these devices. Manufacturing is the linchpin on which everything else depends.

Given that there is very little development of atomically-precise manufacturing technologies, nanoscale self-assembly seems like the most likely route to million-electrode brain-computer interfaces. Nanoscale self-assembly is not atomically precise, but it's precise by the standards of bulk manufacturing and photolithography.

What potential psychological side effects might emerge in a radically enhanced human? Would they even be considered human at this point?

One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider "crazy." There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing these people they are insane.

Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.

Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, "An intelligent man is sometimes forced to be drunk to spend time with his fools." What if drunkenness were not enough to instil camaraderie and mutual affection? There could be a clean "empathy break" that leads to psychopathy.

So which will come first? AI or IA?

It's very difficult to predict either. There is a tremendous bias for wanting IA to come first, because of all the fun movies and video games with intelligence-enhanced protagonists. It's important to recognise that this bias in favour of IA does not in fact influence the actual technological difficulty of the approach. My guess is that AI will come first because development is so much cheaper and cleaner.

Both endeavours are extremely difficult. They may not come to pass until the 2060s, 2070s, or later. Eventually, however, they must both come to pass — there's nothing magical about intelligence, and the demand for its enhancement is enormous. It would require nothing less than a global totalitarian Luddite dictatorship to hold either back for the long term.

What are the advantages and disadvantages to the two different developmental approaches?

The primary advantage of the AI route is that it is immeasurably cheaper and easier to do research. AI is developed on paper and in code. Most useful IA research, on the other hand, is illegal. Serious IA would require deep neurosurgery and experimental brain implants. These brain implants may malfunction, causing seizures, insanity, or death. Enhancing human intelligence in a qualitative way is not a matter of popping a few pills — you really need to develop brain implants to get any significant returns.

Most research in that area is heavily regulated and expensive. All animal testing is expensive. Theodore Berger has been working on a hippocampal implant for a number of years — and in 2004 it passed a live tissue test, but there has been very little news since then. Every few years he pops up in the media and says it's just around the corner, but I'm sceptical. Meanwhile, there is a lot of intriguing progress in Artificial Intelligence.

Does IA have the potential to be safer than AI as far as predictability and controllability are concerned? Is it important that we develop IA before super-powerful AGI?

Intelligence Augmentation is much more unpredictable and uncontrollable than AGI has the potential to be. It's actually quite dangerous, in the long term. I recently wrote an article that speculates on global political transformation caused by a large amount of power concentrated in the hands of a small group due to "miracle technologies" like IA or molecular manufacturing. I also coined the term "Maximillian", meaning "the best", to refer to a powerful leader making use of intelligence enhancement technology to put himself in an unassailable position.

Image: The cognitively enhanced Reginald Barclay from the ST:TNG episode, "The Nth Degree".

The problem with IA is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes.

AGI, on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing.

People say, "won't it reject those motivations?" It won't, because those motivations will make up its entire core of values — if it's programmed properly. There will be no "ghost in the machine" to emerge and overthrow its programmed motives. Philosopher Nick Bostrom does an excellent analysis of this in his paper "The Superintelligent Will". The key point is that selfish motivations will not magically emerge if an AI has a goal system that is fundamentally selfless, if the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn't mean we can't code selfless agents de novo.

What roadblocks, be they technological, medical, or ethical, do you see hindering development?

The biggest roadblock is developing the appropriate manufacturing technology. Right now, we aren't even close.

Another roadblock is figuring out what exactly each neuron does, and identifying the exact positions of these neurons in individual people. Again, we're not even close.

Thirdly, we need some way to quickly test extremely fine-grained theories of brain function — what Ed Boyden calls "high throughput circuit screening" of neural circuits. The best way to do this would be to somehow create a human being without consciousness and experiment on them to our heart's content, but I have a feeling that idea might not go over so well with ethics committees.

Absent that, we'd need an extremely high-resolution simulation of the human brain. Contrary to hype surrounding "brain simulation" projects today, such a high-resolution simulation is not likely to be developed until the 2050-2080 timeframe. An Oxford analysis picks a median date of around 2080. That sounds a bit conservative to me, but in the right ballpark.

Regulation of gene expression

From Wikipedia, the free encyclopedia

Regulation of gene expression by a hormone receptor
Diagram showing at which stages in the DNA-mRNA-protein pathway expression can be controlled

Regulation of gene expression includes a wide range of mechanisms that are used by cells to increase or decrease the production of specific gene products (protein or RNA), and is informally termed gene regulation. Sophisticated programs of gene expression are widely observed in biology, for example to trigger developmental pathways, respond to environmental stimuli, or adapt to new food sources. Virtually any step of gene expression can be modulated, from transcriptional initiation, to RNA processing, and to the post-translational modification of a protein. Often, one gene regulator controls another, and so on, in a gene regulatory network.

Gene regulation is essential for viruses, prokaryotes and eukaryotes as it increases the versatility and adaptability of an organism by allowing the cell to express protein when needed. Although as early as 1951, Barbara McClintock showed interaction between two genetic loci, Activator (Ac) and Dissociator (Ds), in the color formation of maize seeds, the first discovery of a gene regulation system is widely considered to be the identification in 1961 of the lac operon, discovered by François Jacob and Jacques Monod, in which some enzymes involved in lactose metabolism are expressed by E. coli only in the presence of lactose and absence of glucose.

In multicellular organisms, gene regulation drives cellular differentiation and morphogenesis in the embryo, leading to the creation of different cell types that possess different gene expression profiles from the same genome sequence. Because changes in where, when, and how much a gene is expressed can profoundly alter development, gene regulation is central to the science of evolutionary developmental biology ("evo-devo").

The initiating event leading to a change in gene expression can be the activation or deactivation of receptors.

Regulated stages of gene expression

Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The following are the stages at which gene expression is regulated; the most extensively used control point is transcription initiation:

Modification of DNA

In eukaryotes, the accessibility of large regions of DNA can depend on its chromatin structure, which can be altered as a result of histone modifications directed by DNA methylation, ncRNAs, or DNA-binding proteins. Hence these modifications may up- or down-regulate the expression of a gene. Some of these modifications that regulate gene expression are heritable and are referred to as epigenetic regulation.

Structural

Transcription of DNA is dictated by its structure. In general, the density of its packing is indicative of the frequency of transcription. Octameric protein complexes called nucleosomes are responsible for the amount of supercoiling of DNA, and these complexes can be temporarily modified by processes such as phosphorylation or more permanently modified by processes such as methylation. Such modifications are considered to be responsible for more or less permanent changes in gene expression levels.[1]

Chemical

Methylation of DNA is a common method of gene silencing. DNA is typically methylated by methyltransferase enzymes on cytosine nucleotides in a CpG dinucleotide sequence (also called "CpG islands" when densely clustered). Analysis of the pattern of methylation in a given region of DNA (which can be a promoter) can be achieved through a method called bisulfite mapping. Methylated cytosine residues are unchanged by the treatment, whereas unmethylated ones are changed to uracil. The differences are analyzed by DNA sequencing or by methods developed to quantify SNPs, such as Pyrosequencing (Biotage) or MassArray (Sequenom), measuring the relative amounts of C/T at the CG dinucleotide. Abnormal methylation patterns are thought to be involved in oncogenesis.[2]
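To make the C/T counting concrete, here is a minimal sketch of the quantification step, assuming the bisulfite-converted reads have already been aligned; the function name and input format are hypothetical:

```python
def methylation_fraction(bases_at_cpg):
    """Estimate methylation at one CpG cytosine from aligned read bases.

    Bisulfite treatment converts unmethylated C to U (read as T after PCR)
    and leaves methylated C unchanged, so C/(C+T) estimates the fraction
    of molecules methylated at this site."""
    c = bases_at_cpg.count("C")
    t = bases_at_cpg.count("T")
    if c + t == 0:
        return None  # no informative reads covering this site
    return c / (c + t)

print(methylation_fraction(list("CCCTTCCT")))  # 0.625 -> ~62% methylated
```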

Histone acetylation is also an important process in transcription. Histone acetyltransferase enzymes (HATs) such as CREB-binding protein also dissociate the DNA from the histone complex, allowing transcription to proceed. Often, DNA methylation and histone deacetylation work together in gene silencing. The combination of the two seems to be a signal for DNA to be packed more densely, lowering gene expression.[citation needed]

Regulation of transcription

1: RNA Polymerase, 2: Repressor, 3: Promoter, 4: Operator, 5: Lactose, 6: lacZ, 7: lacY, 8: lacA. Top: The gene is essentially turned off. There is no lactose to inhibit the repressor, so the repressor binds to the operator, which obstructs the RNA polymerase from binding to the promoter and making lactase. Bottom: The gene is turned on. Lactose is inhibiting the repressor, allowing the RNA polymerase to bind with the promoter, and express the genes, which synthesize lactase. Eventually, the lactase will digest all of the lactose, until there is none to bind to the repressor. The repressor will then bind to the operator, stopping the manufacture of lactase.

Regulation of transcription thus controls when transcription occurs and how much RNA is created. Transcription of a gene by RNA polymerase can be regulated by several mechanisms. Specificity factors alter the specificity of RNA polymerase for a given promoter or set of promoters, making it more or less likely to bind to them (e.g., the sigma factors used in prokaryotic transcription). Repressors bind to the operator, a DNA sequence close to or overlapping the promoter region, impeding RNA polymerase's progress along the strand and thus impeding the expression of the gene. The image to the right demonstrates regulation by a repressor in the lac operon. General transcription factors position RNA polymerase at the start of a protein-coding sequence and then release the polymerase to transcribe the mRNA. Activators enhance the interaction between RNA polymerase and a particular promoter, encouraging the expression of the gene. Activators do this by increasing the attraction of RNA polymerase for the promoter, through interactions with subunits of the RNA polymerase or indirectly by changing the structure of the DNA. Enhancers are sites on the DNA helix that are bound by activators in order to loop the DNA, bringing a specific promoter to the initiation complex. Enhancers are much more common in eukaryotes than in prokaryotes, where only a few examples exist (to date).[3] Silencers are regions of DNA sequence that, when bound by particular transcription factors, can silence expression of the gene.
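The lac operon logic in the figure legend reduces to a small truth table: the genes are expressed only when lactose is present (inactivating the repressor) and glucose is absent (activating the CAP activator, part of catabolite repression). A toy sketch with both controls collapsed into booleans, a deliberate simplification of the real kinetics:

```python
def lac_genes_expressed(lactose_present: bool, glucose_present: bool) -> bool:
    """Toy model: lactose (as allolactose) inactivates the repressor, freeing
    the operator; absence of glucose activates CAP, which helps recruit RNA
    polymerase to the promoter."""
    repressor_on_operator = not lactose_present
    cap_active = not glucose_present
    return (not repressor_on_operator) and cap_active

for lactose in (False, True):
    for glucose in (False, True):
        state = "ON" if lac_genes_expressed(lactose, glucose) else "off"
        print(f"lactose={lactose!s:5} glucose={glucose!s:5} -> {state}")
# Only lactose=True, glucose=False switches the operon fully ON.
```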

Regulation of transcription in cancer

In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites.[4] When many of a gene's promoter CpG sites are methylated the gene becomes silenced.[5] Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations.[6] However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs.[7] In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).

Regulation of transcription in addiction

One of the cardinal features of addiction is its persistence. The persistent behavioral changes appear to be due to long-lasting changes, resulting from epigenetic alterations affecting gene expression, within particular regions of the brain.[8] Drugs of abuse cause three types of epigenetic alteration in the brain. These are (1) histone acetylations and histone methylations, (2) DNA methylation at CpG sites, and (3) epigenetic downregulation or upregulation of microRNAs.[8][9] (See Epigenetics of cocaine addiction for some details.)

Chronic nicotine intake in mice alters brain cell epigenetic control of gene expression through acetylation of histones. This increases expression in the brain of the protein FosB, important in addiction.[10] Cigarette addiction was also studied in about 16,000 humans, including never smokers, current smokers, and those who had quit smoking for up to 30 years.[11] In blood cells, more than 18,000 CpG sites (of the roughly 450,000 analyzed CpG sites in the genome) had frequently altered methylation among current smokers. These CpG sites occurred in over 7,000 genes, or roughly a third of known human genes. The majority of the differentially methylated CpG sites returned to the level of never-smokers within five years of smoking cessation. However, 2,568 CpGs among 942 genes remained differentially methylated in former versus never smokers. Such remaining epigenetic changes can be viewed as “molecular scars”[9] that may affect gene expression.

In rodent models, drugs of abuse, including cocaine,[12] methamphetamine,[13][14] alcohol[15] and tobacco smoke products,[16] all cause DNA damage in the brain. During repair of this DNA damage, some individual repair events can alter the methylation of DNA and/or the acetylation or methylation of histones at the sites of damage, and thus can contribute to leaving an epigenetic scar on chromatin.[17]

Such epigenetic scars likely contribute to the persistent epigenetic changes found in addiction.

Post-transcriptional regulation

After the DNA is transcribed and mRNA is formed, there must be some regulation of how much of the mRNA is translated into protein. Cells do this by modulating capping, splicing, the addition of a poly(A) tail, sequence-specific nuclear export rates, and, in several contexts, sequestration of the RNA transcript. These processes occur in eukaryotes but not in prokaryotes. This modulation is achieved by proteins or transcripts that are themselves regulated and may have an affinity for certain sequences.

Three prime untranslated regions and microRNAs

Three prime untranslated regions (3'-UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression.[18] Such 3'-UTRs often contain binding sites both for microRNAs (miRNAs) and for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA.

The 3'-UTR often contains miRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.

As of 2014, the miRBase web site,[19] an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes).[20] Friedman et al.[20] estimate that >45,000 miRNA target sites within human mRNA 3'-UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.

Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs.[21] Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).[22][23]

The effects of miRNA dysregulation of gene expression seem to be important in cancer.[24] For instance, in gastrointestinal cancers, a 2015 paper identified nine miRNAs as epigenetically altered and effective in down-regulating DNA repair enzymes.[25]

The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depressive disorder, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.[26][27][28]

Regulation of translation

The translation of mRNA can also be controlled by a number of mechanisms, mostly at the level of initiation. Recruitment of the small ribosomal subunit can indeed be modulated by mRNA secondary structure, antisense RNA binding, or protein binding. In both prokaryotes and eukaryotes, a large number of RNA binding proteins exist, which often are directed to their target sequence by the secondary structure of the transcript, which may change depending on certain conditions, such as temperature or presence of a ligand (aptamer). Some transcripts act as ribozymes and self-regulate their expression.

Examples of gene regulation

  • Enzyme induction is a process in which a molecule (e.g., a drug) induces (i.e., initiates or enhances) the expression of an enzyme.
  • The induction of heat shock proteins in the fruit fly Drosophila melanogaster.
  • The Lac operon is an interesting example of how gene expression can be regulated.
  • Viruses, despite having only a few genes, possess mechanisms to regulate their gene expression, typically into an early and late phase, using collinear systems regulated by anti-terminators (lambda phage) or splicing modulators (HIV).
  • Gal4 is a transcriptional activator that controls the expression of GAL1, GAL7, and GAL10 (all of which encode enzymes for the metabolism of galactose in yeast). The GAL4/UAS system has been used in a variety of organisms across various phyla to study gene expression.[29]

Developmental biology

A large number of studied regulatory systems come from developmental biology. Examples include:
  • The colinearity of the Hox gene cluster with their nested antero-posterior patterning
  • Pattern generation of the hand (digits versus interdigits): the gradient of sonic hedgehog (a secreted inducing factor) from the zone of polarizing activity in the limb creates a gradient of active Gli3, which activates Gremlin, which inhibits BMPs also secreted in the limb; the result is an alternating pattern of activity arising from this reaction-diffusion system.
  • Somitogenesis is the creation of segments (somites) from a uniform tissue (Pre-somitic Mesoderm). They are formed sequentially from anterior to posterior. This is achieved in amniotes possibly by means of two opposing gradients, Retinoic acid in the anterior (wavefront) and Wnt and Fgf in the posterior, coupled to an oscillating pattern (segmentation clock) composed of FGF + Notch and Wnt in antiphase.[30]
  • Sex determination in the soma of Drosophila requires sensing the ratio of autosomal genes to sex chromosome-encoded genes, which in females results in the production of the Sex-lethal (Sxl) splicing factor and ultimately in the female isoform of doublesex.[31]

Circuitry

Up-regulation and down-regulation

Up-regulation is a process that occurs within a cell, triggered by a signal (originating internally or externally to the cell), which results in increased expression of one or more genes and, as a result, of the protein(s) encoded by those genes. Conversely, down-regulation is a process resulting in decreased gene and corresponding protein expression.
  • Up-regulation occurs, for example, when a cell is deficient in some kind of receptor. In this case, more receptor protein is synthesized and transported to the membrane of the cell and, thus, the sensitivity of the cell is brought back to normal, reestablishing homeostasis.
  • Down-regulation occurs, for example, when a cell is overstimulated by a neurotransmitter, hormone, or drug for a prolonged period of time, and the expression of the receptor protein is decreased in order to protect the cell (see also tachyphylaxis).

Inducible vs. repressible systems

Gene regulation can be summarized by the response of the respective system:
  • Inducible systems - An inducible system is off unless there is the presence of some molecule (called an inducer) that allows for gene expression. The molecule is said to "induce expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
  • Repressible systems - A repressible system is on except in the presence of some molecule (called a corepressor) that suppresses gene expression. The molecule is said to "repress expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
The GAL4/UAS system is an example of both an inducible and repressible system. Gal4 binds an upstream activation sequence (UAS) to activate the transcription of the GAL1/GAL7/GAL10 cassette. On the other hand, a MIG1 response to the presence of glucose can inhibit GAL4 and therefore stop the expression of the GAL1/GAL7/GAL10 cassette.[32]

Theoretical circuits

  • Repressor/Inducer: an activation of a sensor results in the change of expression of a gene
  • negative feedback: the gene product downregulates its own production directly or indirectly (both feedback motifs are illustrated in the sketch after this list), which can result in
    • keeping transcript levels constant/proportional to a factor
    • inhibition of run-away reactions when coupled with a positive feedback loop
    • creating an oscillator by taking advantage of the time delay of transcription and translation, given that the mRNA and protein half-lives are shorter than that delay
  • positive feedback: the gene product upregulates its own production directly or indirectly, which can result in
    • signal amplification
    • bistable switches when two genes inhibit each other and both have positive feedback
    • pattern generation
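A minimal numerical sketch of the two feedback motifs above, using simple Euler integration; the Hill-type production functions and rate constants are illustrative choices, not taken from any particular system:

```python
def simulate(production, x0, dt=0.01, steps=2000, degradation=1.0):
    """Integrate dx/dt = production(x) - degradation * x by Euler's method."""
    x = x0
    for _ in range(steps):
        x += dt * (production(x) - degradation * x)
    return x

# Negative feedback: the product represses its own synthesis, so trajectories
# from very different starting points converge to one level (homeostasis).
neg = lambda x: 2.0 / (1.0 + x ** 2)
print(simulate(neg, 0.0), simulate(neg, 5.0))   # both settle near 1.0

# Positive feedback with a small basal rate: two stable levels, i.e. a
# bistable switch whose outcome depends on the starting state.
pos = lambda x: 0.1 + 2.0 * x ** 2 / (1.0 + x ** 2)
print(simulate(pos, 0.0), simulate(pos, 5.0))   # low state vs high state
```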

Study methods

In general, most experiments investigating differential expression have used whole-cell extracts of RNA, measuring steady-state transcript levels to determine which genes changed and by how much. Such measurements are, however, not informative about where the regulation has occurred and may actually mask conflicting regulatory processes (see post-transcriptional regulation), but this remains the most common approach (quantitative PCR and DNA microarrays).
When studying gene expression, there are several methods to look at the various stages. In eukaryotes these include:
  • The local chromatin environment of the region can be determined by ChIP-chip analysis by pulling down RNA Polymerase II, Histone 3 modifications, Trithorax-group protein, Polycomb-group protein, or any other DNA-binding element to which a good antibody is available.
  • Epistatic interactions can be investigated by synthetic genetic array analysis
  • Due to post-transcriptional regulation, transcription rates and total RNA levels differ significantly. To measure the transcription rates nuclear run-on assays can be done and newer high-throughput methods are being developed, using thiol labelling instead of radioactivity.[33]
  • Only 5% of the RNA polymerised in the nucleus actually exits,[34] and it is not only introns, abortive products, and nonsense transcripts that are degraded. The differences between nuclear and cytoplasmic levels can therefore be seen by separating the two fractions by gentle lysis.[35]
  • Alternative splicing can be analysed with a splicing array or with a tiling array (see DNA microarray).
  • All in vivo RNA is complexed as RNPs. The quantity of transcripts bound to a specific protein can also be analysed by RIP-Chip. For example, DCP2 will give an indication of sequestered transcripts; ribosome-bound RNA gives an indication of transcripts actively being translated (although a more dated method, called polysome fractionation, is still popular in some labs).
  • Protein levels can be analysed by Mass spectrometry, which can be compared only to quantitative PCR data, as microarray data is relative and not absolute.
  • RNA and protein degradation rates are measured by means of transcription inhibitors (actinomycin D or α-amanitin) or translation inhibitors (Cycloheximide), respectively.

Taming the Multiverse

August 7, 2001 by Marcus Chown
Original link:  http://www.kurzweilai.net/taming-the-multiverse

In Ray Kurzweil’s The Singularity is Near, physicist Sir Roger Penrose is paraphrased as suggesting it is impossible to perfectly replicate a set of quantum states, so therefore perfect downloading (i.e., creating a digital or synthetic replica of the human brain based upon quantum states) is impossible. What would be required to make it possible? A solution to the problem of quantum teleportation, perhaps. But there is a further complication: the multiverse. Do we live in a world of schizophrenic tables? Does free will negate the possibility of perfect replication?

Originally published July 14, 2001 at New Scientist. Published on KurzweilAI.net August 7, 2001. Original article at New Scientist.

The Singularity is Near précis can be read here. The mechanics of quantum teleportation can be found here.

Parallel universes are no longer a figment of our imagination. They’re so real that we can reach out and touch them, and even use them to change our world.

Flicking through New Scientist, you stop at this page, think "that's interesting" and read these words. Another you thinks "what nonsense", and moves on. Yet another lets out a cry, keels over and dies.

Is this an insane vision? Not according to David Deutsch of the University of Oxford. Deutsch believes that our Universe is part of the multiverse, a domain of parallel universes that comprises ultimate reality.

Until now, the multiverse was a hazy, ill-defined concept, little more than a philosophical trick. But in a paper yet to be published, Deutsch has worked out the structure of the multiverse. With it, he claims, he has answered the last criticism of the sceptics. "For 70 years physicists have been hiding from it, but they can hide no longer." If he's right, the multiverse is no trick. It is real. So real that we can mold the fate of the universes and exploit them.

Why believe in something so extraordinary? Because it can explain one of the greatest mysteries of modern science: why the world of atoms behaves so very differently from the everyday world of trees and tables.

The theory that describes atoms and their constituents is quantum mechanics. It is hugely successful. It has led to computers, lasers and nuclear reactors, and it tells us why the Sun shines and why the ground beneath our feet is solid. But quantum theory also tells us something very disturbing about atoms and their like: they can be in many places at once. This isn't just a crazy theory: it has observable consequences (see "Interfering with the multiverse").

But how is it that atoms can be in many places at once whereas big things made out of atoms (tables, trees and pencils) apparently cannot? Reconciling the difference between the microscopic and the macroscopic is the central problem in quantum theory.

The many worlds interpretation is one way to do it. This idea was proposed by Princeton graduate student Hugh Everett III in 1957. According to many worlds, quantum theory doesn’t just apply to atoms, says Deutsch. “The world of tables is exactly the same as the world of atoms.”

But surely this means tables can be in many places at once. Right. But nobody has ever seen such a schizophrenic table. So what gives?

The idea is that if you observe a table that is in two places at once, there are also two versions of you: one that sees the table in one place and one that sees it in another place.

The consequences are remarkable. A universe must exist for every physical possibility. There are Earths where the Nazis prevailed in the Second World War, where Marilyn Monroe married Einstein, and where the dinosaurs survived and evolved into intelligent beings who read New Scientist.

However, many worlds is not the only interpretation of quantum theory. Physicists can choose between half a dozen interpretations, all of which predict identical outcomes for all conceivable experiments.

Deutsch dismisses them all. "Some are gibberish, like the Copenhagen interpretation," he says, and the rest are just variations on the many worlds theme.

For example, according to the Copenhagen interpretation, the act of observing is crucial. Observation forces an atom to make up its mind, and plump for being in only one place out of all the possible places it could be. But the Copenhagen interpretation is itself open to interpretation. What constitutes an observation? For some people, this only requires a large-scale object such as a particle detector. For others it means an interaction with some kind of conscious being.

Worse still, says Deutsch, is that in this type of interpretation you have to abandon the idea of reality. Before observation, the atom doesn’t have a real position. To Deutsch, the whole thing is mysticism-throwing up our hands and saying there are some things we are not allowed to ask.

Some interpretations do try to give the microscopic world reality, but they are all disguised versions of the many worlds idea, says Deutsch. “Their proponents have fallen over backward to talk about the many worlds in a way that makes it appear as if they are not.”

In this category, Deutsch includes David Bohm’s “pilot-wave” interpretation. Bohm’s idea is that a quantum wave guides particles along their trajectories. Then the strange shape of the pilot wave can be used to explain all the odd quantum behaviours, such as interference patterns. In effect, says Deutsch, Bohm’s single universe occupies one groove in an immensely complicated multi-dimensional wave function.

“The question that pilot-wave theorists must address is: what are the unoccupied grooves?” says Deutsch. “It is no good saying they are merely theoretical and do not exist physically, for they continually jostle each other and the occupied groove, affecting its trajectory. What’s really being talked about here is parallel universes. Pilot-wave theories are parallel-universe theories in a state of chronic denial.”

Back and forth

Another disguised many worlds theory, says Deutsch, is John Cramer’s “transactional” interpretation in which information passes backward and forward through time. When you measure the position of an atom, it sends a message back to its earlier self to change its trajectory accordingly.

But as the system gets more complicated, the number of messages explodes. Soon, says Deutsch, it becomes vastly greater than the number of particles in the Universe. The full quantum evolution of a system as big as the Universe consists of an exponentially large number of classical processes, each of which contains the information to describe a whole universe. So Cramer’s idea forces the multiverse on you, says Deutsch.

So do other interpretations, according to Deutsch. “Quantum theory leaves no doubt that other universes exist in exactly the same sense that the single Universe that we see exists,” he says. “This is not a matter of interpretation. It is a logical consequence of quantum theory.”

Yet many physicists still refuse to accept the multiverse. “People say the many worlds is simply too crazy, too wasteful, too mind-blowing,” says Deutsch. “But this is an emotional not a scientific reaction. We have to take what nature gives us.”

A much more legitimate objection is that many worlds is vague and has no firm mathematical basis. Proponents talk of a multiverse that is like a stack of parallel universes. The critics point out that it cannot be that simple: quantum phenomena occur precisely because the universes interact. "What is needed is a precise mathematical model of the multiverse," says Deutsch. And now he's made one.

The key to Deutsch's model sounds peculiar. He treats the multiverse as if it were a quantum computer. Quantum computers exploit the strangeness of quantum systems (their ability to be in many states at once) to do certain kinds of calculation at ludicrously high speed. For example, they could quickly search huge databases that would take an ordinary computer the lifetime of the Universe. Although the hardware is still at a very basic stage, the theory of how quantum computers process information is well advanced.
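The textbook example of such a search is Grover's algorithm, which is not part of Deutsch's paper but gives a flavour of how amplitudes across a superposition can be steered toward an answer. Here is a minimal state-vector simulation in Python with NumPy; the three-qubit register and the marked index are arbitrary illustrative choices:

```python
import numpy as np

n = 3                    # qubits in the register
N = 2 ** n               # size of the search space (8 entries)
target = 5               # the "marked" database entry (arbitrary)

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all entries

oracle = np.eye(N)
oracle[target, target] = -1          # oracle flips the sign of the marked entry

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

iterations = int(np.pi / 4 * np.sqrt(N))  # ~(pi/4) * sqrt(N), i.e. 2 for N = 8
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

print((state ** 2)[target])  # ~0.945: the marked entry dominates after 2 steps
```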

In 1985, Deutsch proved that such a machine can simulate any conceivable quantum system, and that includes the Universe itself. So to work out the basic structure of the multiverse, all you need to do is analyze a general quantum calculation. “The set of all programs that can be run on a quantum computer includes programs that would simulate the multiverse,” says Deutsch. “So we don’t have to include any details of stars and galaxies in the real Universe, we can just analyze quantum computers and look at how information flows inside them.”

If information could flow freely from one part of the multiverse to another, we’d live in a chaotic world where all possibilities would overlap. We really would see two tables at once, and worse, everything imaginable would be happening everywhere at the same time.

Deutsch found that, almost all the time, information flows only within small pieces of the quantum calculation, and not in between those pieces. These pieces, he says, are separate universes. They feel separate and autonomous because all the information we receive through our senses has come from within one universe. As Oxford philosopher Michael Lockwood put it, “We cannot look sideways, through the multiverse, any more than we can look into the future.”

Sometimes universes in Deutsch’s model peel apart only locally and fleetingly, and then slap back together again. This is the cause of quantum interference, which is at the root of everything from the two-slit experiment to the basic structure of atoms.

Other physicists are still digesting what Deutsch has to say. Anton Zeilinger of the University of Vienna remains unconvinced. “The multiverse interpretation is not the only possible one, and it is not even the simplest,” he says. Zeilinger instead uses information theory to come to very different conclusions. He thinks that quantum theory comes from limits on the information we get out of measurements (New Scientist, 17 February, p 26). As in the Copenhagen interpretation, there is no reality to what goes on before the measurement.

But Deutsch insists that his picture is more profound than Zeilinger’s. “I hope he’ll come round, and realize that the many worlds theory explains where the information in his measurements comes from.”

Why are physicists reluctant to accept many worlds? Deutsch blames logical positivism, the idea that science should concern itself only with objects that can be observed. In the early 20th century, some logical positivists even denied the existence of atoms-until the evidence became overwhelming. The evidence for the multiverse, according to Deutsch, is equally overwhelming. “Admittedly, it’s indirect,” he says. “But then, we can detect pterodactyls and quarks only indirectly too. The evidence that other universes exist is at least as strong as the evidence for pterodactyls or quarks.”

Perhaps the sceptics will be convinced by a practical demonstration of the multiverse. And Deutsch thinks he knows how. By building a quantum computer, he says, we can reach out and mold the multiverse.

“One day, a quantum computer will be built which does more simultaneous calculations than there are particles in the Universe,” says Deutsch. “Since the Universe as we see it lacks the computational resources to do the calculations, where are they being done?” It can only be in other universes, he says. “Quantum computers share information with huge numbers of versions of themselves throughout the multiverse.”

Imagine that you have a quantum PC and you set it a problem. What happens is that a huge number of versions of your PC split off from this Universe into their own separate, local universes, and work on parallel strands of the problem. A split second later, the pocket universes recombine into one, and those strands are pulled together to provide the answer that pops up on your screen. “Quantum computers are the first machines humans have ever built to exploit the multiverse directly,” says Deutsch.

At the moment, even the biggest quantum computers can only work their magic on about 6 bits of information, which in Deutsch's view means they exploit copies of themselves in 2^6 universes: just 64 of them. Because the computational feats of such computers are puny, people can choose to ignore the multiverse. "But something will happen when the number of parallel calculations becomes very large," says Deutsch. "If the number is 64, people can shut their eyes but if it's 10^64, they will no longer be able to pretend."

What would it mean for you and me to know there are inconceivably many yous and mes living out all possible histories? Surely, there is no point in making any choices for the better if all possible outcomes happen? We might as well stay in bed or commit suicide.

Deutsch does not agree. In fact, he thinks it could make real choice possible. In classical physics, he says, there is no such thing as “if”; the future is determined absolutely by the past. So there can be no free will. In the multiverse, however, there are alternatives; the quantum possibilities really happen. Free will might have a sensible definition, Deutsch thinks, because the alternatives don’t have to occur within equally large slices of the multiverse. “By making good choices, doing the right thing, we thicken the stack of universes in which versions of us live reasonable lives,” he says. “When you succeed, all the copies of you who made the same decision succeed too. What you do for the better increases the portion of the multiverse where good things happen.”

Let’s hope that deciding to read this article was the right choice.

Interfering with the multiverse

You can see the shadow of other universes using little more than a light source and two metal plates. This is the famous double-slit experiment, the touchstone of quantum weirdness.

Particles from the atomic realm such as photons, electrons or atoms are fired at the first plate, which has two vertical slits in it. The particles that go through hit the second plate on the far side.

Imagine the places that are hit show up black and that the places that are not hit show up white. After the experiment has been running for a while, and many particles have passed through the slits, the plate will be covered in vertical stripes alternating black and white. That is an interference pattern.

To make it, particles that passed through one slit have to interfere with particles that passed through the other slit. The pattern simply does not form if you shut one slit.
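The stripes fall out of adding the two paths' complex amplitudes before squaring, and they vanish when one path is removed. A minimal sketch; the wavelength, slit separation and screen distance are illustrative values:

```python
import numpy as np

wavelength = 500e-9    # illustrative: 500 nm light
slit_sep = 50e-6       # illustrative: 50 micrometre slit separation
screen_dist = 1.0      # illustrative: screen 1 m from the slits

x = np.linspace(-0.02, 0.02, 9)   # positions across the screen (metres)
# Small-angle path difference between the two slits: slit_sep * x / screen_dist.
phase = 2 * np.pi * slit_sep * x / (screen_dist * wavelength)

two_slits = np.abs(1 + np.exp(1j * phase)) ** 2  # amplitudes add, then square
one_slit = np.full_like(x, np.abs(1 + 0j) ** 2)  # one path only: no fringes

print(np.round(two_slits, 2))  # alternates 4, 0, 4, 0, ...: the stripes
print(one_slit)                # flat: shut one slit and the pattern is gone
```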

The strange thing is that the interference pattern forms even if particles come one at a time, with long periods in between. So what is affecting these single particles?

According to the many worlds interpretation, each particle interferes with another particle going through the other slit. What other particle? “Another particle in a neighboring universe,” says David Deutsch. He believes this is a case where two universes split apart briefly, within the experiment, then come back together again. “In my opinion, the argument for the many worlds was won with the double-slit experiment. It reveals interference between neighboring universes, the root of all quantum phenomena.”

Reproduced with permission from New Scientist issue dated July 14, 2001.

Sunday, July 15, 2018

The Paradigms and Paradoxes of Intelligence: Building a Brain

August 6, 2001 by Ray Kurzweil
Original link:  http://www.kurzweilai.net/the-paradigms-and-paradoxes-of-intelligence-building-a-brain

How to build a brain, written for “The Futurecast,” a monthly column in the Library Journal.

Originally published November 1992. Published on KurzweilAI.net August 6, 2001.

In the last two columns, we examined two methods for emulating intelligence in a machine. The recursive paradigm involves the application of massive computing power to analyze the implications of every possible course of action (e.g., every allowable move in a chess game) followed by every possible course of reaction that could follow each course of action, and so on in an exponentially exploding tree of possibilities. The neural net paradigm involves the cascading of networks of neurons, where each neuron simplifies thousands of inputs into a single judgment.

Up until recently, humans and computers have used radically different strategies in their intelligent decision-making. For example, in chess (our quintessential laboratory for studying intelligence), a human chess master memorizes about 50,000 situations and then uses his or her neural net-based pattern recognition capabilities to recognize which of these situations is most applicable to the current board position. In contrast, the computer chess master typically memorizes very few board positions and relies instead on its ability to analyze in depth every possible course of action during the time of play. The computer player will typically analyze between one million and one billion board positions for each move. In contrast, the human player does not have time to consider more than a few dozen board positions, hence the reliance on a memory of previously analyzed situations.

Humans have neither the precise memory nor the mental speed to excel at the recursive paradigm (not without a computer to help). However, while humans will never master the recursive paradigm, machines are not restricted to it. Machines programmed to use the neural net paradigm are displaying a rapidly increasing ability to learn patterns (of speech, handwriting, faces, etc.) in a manner similar to, if still cruder than, that of their human creators.

Computer neural net simulations have been limited by two factors: the number of neural connections that can be simulated in real time and the capacity of computer memories. While human neurons are slow (a million times slower than electronic circuits), every neuron and every interneuronal connection is operating simultaneously. With about 100 billion neurons and an average of 1000 connections per neuron, there are about 100 trillion computations being performed at the same time. At about 200 computations per second, that comes to 20 million billion (2 x 10^16) calculations per second.

So how does that compare to the state-of-the-art in human-created technology? Specialized neural computers have been developed that can simulate neurons directly in hardware. These operate about a thousand times faster than neural networks simulated in software on conventional PCs. One such neural computer, the Ricoh RN100, can process 128 million connection computations per second. This type of computer represents significant progress, but it is still 150 million times slower than the human brain.
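The figures in the last two paragraphs are easy to reproduce; a back-of-envelope check in Python, using the column's own estimates:

```python
neurons = 100e9                # ~100 billion neurons
connections_per_neuron = 1e3   # ~1,000 connections each
ops_per_second = 200           # ~200 computations per connection per second

brain_rate = neurons * connections_per_neuron * ops_per_second
print(f"{brain_rate:.1e}")     # 2.0e+16 connection computations per second

rn100_rate = 128e6             # Ricoh RN100: 128 million connections/second
print(f"{brain_rate / rn100_rate:.2e}")  # ~1.56e+08: ~150 million times slower
```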

Moore speed and memory

So enters Moore’s Law. Moore’s Law is the driving force behind a technological revolution so vast that the entire computer revolution to date represents only a minor ripple of its ultimate implications. Simply stated, Moore’s Law says that computing speeds and densities double every 18 months. In other words, every 18 months we can buy a computer that is twice as fast and has twice as much memory for the same cost. Remarkably, this law has held true since the beginning of this century through numerous changes in underlying methods – from the mechanical card-based computing technology of the 1890 census, to the relay-based computers of the 1940s, to the vacuum tube-based computers of the 1950s, to the transistor-based machines of the 1960s, to the integrated circuits of today. The trend has held for thousands of different calculators and computers over the past 100 years. Computer memory, for example, is about 150 million times more powerful today (for the same unit cost) than it was in 1950.

Moore’s Law will continue unabated for many decades to come. We have not even begun to explore the third dimension in chip design. Chips today are flat, whereas the brain is organized in three dimensions. Improvements in semiconductor materials, including the development of superconducting circuits that do not generate heat, will enable the development of chips (actually cubes) with thousands of layers of circuitry combined with far smaller component geometries for an improvement in computer power of many million-fold. There are more than enough new computing technologies being developed to assure a continuation of Moore’s Law for a very long time.

Moore’s Law does more than double computing power every 18 months. It doubles both the capacity of computation (the number of computing elements) and the speed (the number of calculations per second) of each computing element. Since a neural computer is inherently massively parallel, each doubling of capacity and speed actually multiplies the number of neural connections per second by four. Thus, we can increase the power of our neural computer by a factor of 1000 every 7 1/2 years. To provide just one example among many that this rate of progress is quite reasonable, Ricoh has just announced a new version of its neural computer that is 12 times faster than the one developed two years ago. At this rate, a personal neural computer will match the capacity of the human brain in terms of neuron connections per second (i.e., 2 x 1016 calculations per second) in about 20 years, or in the year 2012. Achieving the memory capacity of the human brain (1014 analog values stored at the synapses) will take a little longer- about 27 years, or in the year 2019.

Reaching this threshold will not cause Moore's Law to slow down. As we go through the 21st century, computer circuits will be grown like crystals, with computing taking place at the molecular level. By the year 2040, your state-of-the-art personal computer will be able to simulate a society of 10,000 brains, each of which would be operating at a speed 10,000 times faster than a human brain. Or, alternatively, it could implement a single mind with 10,000 times the memory capacity of a human brain and 100 million times the speed.

The sources of knowledge

However, raw computing speed and memory capacity, even if implemented in massively parallel neural nets, will not automatically result in human-level intelligence. The architecture and organization of these resources are at least as important as the capacity itself. And then of course, neural net-based systems will need to learn their lessons. After all, the human neural net spends at least a couple of decades learning before it is considered ready for most useful tasks.

There are several sources of such knowledge. One is the extensive array of research efforts (still performed by humans) to understand the algorithms and methods underlying the hundreds of faculties we collectively call human intelligence. Progress in this arena is steady if painstaking, although in many areas – e.g., speech recognition – algorithms already exist that are just waiting for more powerful computers to enable them.

There is, of course, a source of knowledge that we can tap to accelerate greatly our understanding of how to design intelligence in a machine, and that is the human brain itself. By probing the brain's circuits, we can essentially copy a proven design, one that took its original designer several billion years to develop. Just as the Human Genome Project (in which the entire human genetic code is being scanned and recorded) will accelerate the ability to create new treatments and drugs, a similar effort to scan and record the neural organization of the human brain can help provide the templates of intelligence. This effort has already begun. For example, an artificial retina chip created by Synaptics is fundamentally a copy of the neural organization (implemented in silicon, of course) of the human retina and its visual processing layer.

High-speed, high-resolution magnetic resonance imaging (MRI) scanners are already able to resolve individual somas (neuron cell bodies) without disturbing the living tissue being scanned. More powerful MRIs, using larger magnets, would be capable of scanning individual nerve fibers that are only ten microns in diameter. Eventually, we will be able to automatically scan the presynaptic vesicles that are the site of human learning.

Layers of intelligence

This suggests two scenarios. The first is to scan portions of a brain to ascertain the architecture of interneuronal connections in different regions. The exact position of each nerve fiber is not as important as the overall pattern. With this information, we can design artificial neural nets that will operate similarly. This process will be like peeling an onion as each layer of human intelligence is revealed.

A more difficult scenario would be to noninvasively scan someone’s brain to map the locations and interconnections of the somas, axons, dendrites, synapses, presynaptic vesicles, and other neural components. Its entire organization could then be re-created on a neural computer of sufficient capacity, including the contents of its memory. We can peer inside someone’s brain today with MRI scanners, which are increasing their resolution with each new generation of the device.

There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth (i.e., speed of scanning), lack of vibration, and safety. For a number of reasons, it will be easier to scan the brain of someone recently deceased than of someone still living (it is easier to get someone deceased to sit still, for one thing), but noninvasively scanning a living brain will ultimately become feasible as MRI and other scanning technologies continue to improve in resolution and speed.

If people were scanned and then re-created in a neural computer, one might wonder just who those people in the machine are. The answer would depend on whom you ask. If you ask the people in the machine, they would strenuously claim to be the original persons, having lived certain lives, gone into a scanner, and then woken up in the machine. On the other hand, the original people who were scanned would claim that the people in the machine are impostors who appear to share their memories and personality but are definitely different people.

Many other issues are raised by these scenarios. A machine intelligence that was derived from human intelligence would need a body; a disembodied mind would quickly become depressed. While progress will be made in this area as well, building a suitable artificial body will in many ways be more challenging than building an artificial mind. Even partial success in the first and easier of the two scenarios above (scanning portions of a brain to ascertain general principles of construction) will present new dilemmas. If, as seems likely, the next century will produce PCs with memory capacities and computational capabilities vastly outstripping the human brain, even a partial mastery of human cognitive faculties will be formidable. At a minimum, we are likely to see the Luddite issue (i.e., concern over the negative impact of machines on human employment) become of intense interest once again. We will examine these and other issues when we take a look at the impact of machine intelligence on life in the 21st century in an upcoming series of Futurecasts.

Meanwhile, back in the closing days of the 20th century, we all share an intense interest in making the most of our human intelligence and our frail yet marvelous human bodies. I have long been interested in our health and well-being and have discovered a rather unexpected perspective: we actually have the knowledge to virtually eliminate heart disease and cancer. I will share some of these thoughts in the next Futurecasts.

Reprinted with permission from Library Journal, November 1992. Copyright © 1992, Reed Elsevier, USA

RNA interference

From Wikipedia, the free encyclopedia
[Figure: Lentiviral delivery of designed shRNAs and the mechanism of RNA interference in mammalian cells.]

RNA interference (RNAi) is a biological process in which RNA molecules inhibit gene expression or translation by neutralizing targeted mRNA molecules. Historically, RNA interference was known by other names, including co-suppression, post-transcriptional gene silencing (PTGS), and quelling. Detailed study of each of these seemingly different processes revealed that they were all actually RNAi. Andrew Fire and Craig C. Mello shared the 2006 Nobel Prize in Physiology or Medicine for their work on RNA interference in the nematode worm Caenorhabditis elegans, which they published in 1998. Since the discovery of RNAi and its regulatory potential, it has become evident that RNAi holds immense promise for suppressing genes of interest. RNAi is now regarded as more precise, efficient, and stable than antisense technology for gene suppression. Nevertheless, antisense RNA produced intracellularly by an expression vector may still be developed and find utility as a novel therapeutic agent.

Two types of small ribonucleic acid (RNA) molecules – microRNA (miRNA) and small interfering RNA (siRNA) – are central to RNA interference. RNAs are the direct products of genes, and these small RNAs can direct enzyme complexes to degrade messenger RNA (mRNA) molecules and thus decrease their activity by preventing translation, via post-transcriptional gene silencing. Moreover, transcription can be inhibited via the pre-transcriptional silencing mechanism of RNA interference, through which an enzyme complex catalyzes DNA methylation at genomic positions complementary to complexed siRNA or miRNA. RNA interference has an important role in defending cells against parasitic nucleotide sequences – viruses and transposons. It also influences development.

The RNAi pathway is found in many eukaryotes, including animals, and is initiated by the enzyme Dicer, which cleaves long double-stranded RNA (dsRNA) molecules into short double-stranded fragments of ~21 nucleotide siRNAs. Each siRNA is unwound into two single-stranded RNAs (ssRNAs), the passenger strand and the guide strand. The passenger strand is degraded and the guide strand is incorporated into the RNA-induced silencing complex (RISC). The most well-studied outcome is post-transcriptional gene silencing, which occurs when the guide strand pairs with a complementary sequence in a messenger RNA molecule and induces cleavage by Argonaute 2 (Ago2), the catalytic component of the RISC. In some organisms, this process spreads systemically, despite the initially limited molar concentrations of siRNA.
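The logic of this pathway (Dicer chops long dsRNA into short guides; the RISC then cleaves any mRNA carrying a perfectly complementary site) can be expressed as a toy Python sketch. This is purely illustrative; the sequences are invented and no biophysics is modeled:

# Toy model of post-transcriptional silencing; sequences are invented.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def dicer(dsrna, k=21):
    """Cleave a long dsRNA (given as its sense strand) into k-nt siRNAs."""
    return [dsrna[i:i + k] for i in range(0, len(dsrna) - k + 1, k)]

def guide_strand(sirna_sense):
    """The guide strand is the reverse complement of the sense strand."""
    return sirna_sense.translate(COMPLEMENT)[::-1]

def risc_cleave(mrna, guide):
    """Cut the mRNA wherever it pairs perfectly with the guide."""
    target = guide.translate(COMPLEMENT)[::-1]   # site the guide pairs with
    return mrna.split(target)                    # cleavage fragments

dsrna = "AUGGCUACGAUCGUAGCUAGCUAGGCUAUCGAUCGUAGCUA"
mrna = "GGG" + dsrna + "CCC"                     # mRNA containing a target site
for si in dicer(dsrna):
    if len(risc_cleave(mrna, guide_strand(si))) > 1:
        print("cleaved at site from siRNA", si)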

RNAi is a valuable research tool, both in cell culture and in living organisms, because synthetic dsRNA introduced into cells can selectively and robustly induce suppression of specific genes of interest. RNAi may be used for large-scale screens that systematically shut down each gene in the cell, which can help to identify the components necessary for a particular cellular process or an event such as cell division. The pathway is also used as a practical tool in biotechnology, medicine and insecticides.[3]

Cellular mechanism

[Figure: The Dicer protein from Giardia intestinalis, which catalyzes the cleavage of dsRNA to siRNAs. The RNase domains are colored green, the PAZ domain yellow, the platform domain red, and the connector helix blue.[4]]

RNAi is an RNA-dependent gene silencing process that is controlled by the RNA-induced silencing complex (RISC) and is initiated by short double-stranded RNA molecules in a cell's cytoplasm, where they interact with the catalytic RISC component argonaute.[5] When the dsRNA is exogenous (coming from infection by a virus with an RNA genome or from laboratory manipulation), the RNA is imported directly into the cytoplasm and cleaved into short fragments by Dicer. The initiating dsRNA can also be endogenous (originating in the cell), as in pre-microRNAs expressed from RNA-coding genes in the genome. The primary transcripts from such genes are first processed in the nucleus to form the characteristic stem-loop structure of pre-miRNA, then exported to the cytoplasm. Thus, the two dsRNA pathways, exogenous and endogenous, converge at the RISC.[6]

Exogenous dsRNA initiates RNAi by activating the ribonuclease protein Dicer,[7] which binds and cleaves double-stranded RNAs (dsRNAs) in plants, or short hairpin RNAs (shRNAs) in humans, to produce double-stranded fragments of 20–25 base pairs with a 2-nucleotide overhang at the 3' end.[8]

Bioinformatics studies on the genomes of multiple organisms suggest this length maximizes target-gene specificity and minimizes non-specific effects.[9] These short double-stranded fragments are called small interfering RNAs (siRNAs). The siRNAs are then separated into single strands and integrated into an active RISC by the RISC-Loading Complex (RLC). The RLC includes Dicer-2 and R2D2 and is crucial for uniting Ago2 and the RISC.[10] TATA-binding protein-associated factor 11 (TAF11) assembles the RLC by facilitating Dcr-2/R2D2 tetramerization, which increases the binding affinity to siRNA by 10-fold. Association with TAF11 converts the R2D2-Initiator (RDI) complex into the RLC.[11] R2D2 carries tandem double-stranded RNA-binding domains that recognize the thermodynamically more stable terminus of the siRNA duplex, whereas Dicer-2 binds the other, less stable end. Loading is asymmetric: the MID domain of Ago2 recognizes the thermodynamically stable end of the siRNA. Therefore, the "passenger" (sense) strand, whose 5′ end is discarded by MID, is ejected, while the retained "guide" (antisense) strand cooperates with Ago to form the RISC.[10]
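The asymmetry rule described above, by which the strand whose 5′ end sits at the thermodynamically less stable end of the duplex tends to become the guide, can be approximated with a crude heuristic. Using the GC content of a few terminal bases as a proxy for end stability is an assumption of this sketch, not the actual R2D2/MID-domain measurement:

def end_stability(duplex_end):
    """Crude proxy: count G/C bases near one end of the duplex."""
    return sum(base in "GC" for base in duplex_end)

def pick_guide(strand_a, strand_b, n=4):
    """Return the strand whose 5' end lies at the less stable duplex end."""
    if end_stability(strand_a[:n]) <= end_stability(strand_b[:n]):
        return strand_a
    return strand_b

# Hypothetical 21-nt duplex strands (5'->3'); the antisense strand has
# the A/U-rich (less stable) 5' end, so it should be chosen as guide.
sense     = "GCGCGAUCGUAGCUAGCUAAU"
antisense = "UUAGCUAGCUACGAUCGCGCG"
print("guide:", pick_guide(antisense, sense))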

After integration into the RISC, siRNAs base-pair with their target mRNA and cleave it, thereby preventing it from being used as a translation template.[12] Unlike siRNA, a miRNA-loaded RISC scans cytoplasmic mRNAs for potential complementarity. Instead of inducing destructive cleavage (by Ago2), miRNAs target the 3′ untranslated regions (UTRs) of mRNAs, where they typically bind with imperfect complementarity, thus blocking the access of ribosomes for translation.[13]


MicroRNA


MicroRNAs (miRNAs) are genomically encoded non-coding RNAs that help regulate gene expression, particularly during development.[18] The phenomenon of RNA interference, broadly defined, includes the endogenously induced gene silencing effects of miRNAs as well as silencing triggered by foreign dsRNA. Mature miRNAs are structurally similar to siRNAs produced from exogenous dsRNA, but before reaching maturity, miRNAs must first undergo extensive post-transcriptional modification. A miRNA is expressed from a much longer RNA-coding gene as a primary transcript known as a pri-miRNA, which is processed in the cell nucleus to a 70-nucleotide stem-loop structure called a pre-miRNA by the microprocessor complex. This complex consists of an RNase III enzyme called Drosha and the dsRNA-binding protein DGCR8. The dsRNA portion of this pre-miRNA is bound and cleaved by Dicer to produce the mature miRNA molecule that can be integrated into the RISC; thus, miRNA and siRNA share the same downstream cellular machinery.[19] The first virally encoded miRNA was described in EBV.[20] Since then, an increasing number of microRNAs have been described in viruses. VIRmiRNA is a comprehensive catalogue covering viral microRNAs, their targets, and antiviral miRNAs[21] (see also the VIRmiRNA resource: http://crdd.osdd.net/servers/virmirna/).

siRNAs derived from long dsRNA precursors differ from miRNAs in that miRNAs, especially those in animals, typically have incomplete base pairing to a target and inhibit the translation of many different mRNAs with similar sequences. In contrast, siRNAs typically base-pair perfectly and induce mRNA cleavage only in a single, specific target.[22] In Drosophila and C. elegans, miRNA and siRNA are processed by distinct argonaute proteins and dicer enzymes.[23][24]

Three prime untranslated regions and microRNAs

Three prime untranslated regions (3'UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally cause RNA interference. Such 3'-UTRs often contain binding sites both for microRNAs (miRNAs) and for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease the gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR may also have silencer regions that bind repressor proteins that inhibit the expression of an mRNA.

The 3'-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
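Because animal miRNAs rely heavily on pairing between their "seed" (nucleotides 2-8) and sites in the 3'-UTR, candidate MREs are often located by simple string matching. The Python sketch below applies that common seed convention to invented sequences (the miRNA shown is a let-7-family sequence used only for illustration):

COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_sites(mirna, utr3):
    """Return 0-based UTR positions matching the miRNA seed (nt 2-8)."""
    seed = mirna[1:8]                           # nucleotides 2-8
    site = seed.translate(COMPLEMENT)[::-1]     # sequence the seed pairs with
    return [i for i in range(len(utr3) - len(site) + 1)
            if utr3[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"      # let-7 family, for illustration
utr3 = "AAACUACCUCAGGGAAACUACCUCA"    # invented UTR with two candidate MREs
print(seed_sites(mirna, utr3))        # -> [3, 17]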

As of 2014, the miRBase web site,[25] an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes).[26] Friedman et al.[26] estimate that >45,000 miRNA target sites within human mRNA 3'UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.

Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs.[27] Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).[28][29]

The effects of miRNA dysregulation of gene expression seem to be important in cancer.[30] For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes.[31]

The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.[32][33][34]

RISC activation and catalysis

Exogenous dsRNA is detected and bound by an effector protein, known as RDE-4 in C. elegans and R2D2 in Drosophila, that stimulates dicer activity.[14] This protein only binds long dsRNAs, but the mechanism producing this length specificity is unknown.[14] This RNA-binding protein then facilitates the transfer of cleaved siRNAs to the RISC complex.[35]

In C. elegans this initiation response is amplified through the synthesis of a population of 'secondary' siRNAs during which the dicer-produced initiating or 'primary' siRNAs are used as templates.[15] These 'secondary' siRNAs are structurally distinct from dicer-produced siRNAs and appear to be produced by an RNA-dependent RNA polymerase (RdRP).[16][17]
 
[Figure: Small RNA biogenesis. Primary miRNAs (pri-miRNAs) are transcribed in the nucleus and fold back on themselves as hairpins, which are trimmed in the nucleus by the microprocessor complex to form a ~60-70 nt hairpin pre-miRNA. The pre-miRNA is transported through the nuclear pore complex (NPC) into the cytoplasm, where Dicer trims it further to a ~20 nt miRNA duplex (pre-siRNAs also enter the pathway at this step). The duplex is then loaded into Ago to form the pre-RISC (RNA-induced silencing complex), and the passenger strand is released to form the active RISC.]
[Figure: Left: a full-length argonaute protein from the archaeal species Pyrococcus furiosus. Right: the PIWI domain of an argonaute protein in complex with double-stranded RNA.]

The active components of the RNA-induced silencing complex (RISC) are endonucleases called argonaute proteins, which cleave the target mRNA strand complementary to their bound siRNA.[5] As the fragments produced by Dicer are double-stranded, each strand could in theory produce a functional siRNA. However, only one of the two strands, known as the guide strand, binds the argonaute protein and directs gene silencing. The other, the anti-guide or passenger strand, is degraded during RISC activation.[36] Although it was first believed that an ATP-dependent helicase separated these two strands,[37] the process proved to be ATP-independent and performed directly by the protein components of RISC.[38][39] However, an in vitro kinetic analysis of RNAi in the presence and absence of ATP showed that ATP may be required to unwind and remove the cleaved mRNA strand from the RISC complex after catalysis.[40] The guide strand tends to be the one whose 5' end is less stably paired to its complement,[41] but strand selection is unaffected by the direction in which Dicer cleaves the dsRNA before RISC incorporation.[42] Instead, the R2D2 protein may serve as the differentiating factor by binding the more stable 5' end of the passenger strand.[43]

The structural basis for binding of RNA to the argonaute protein was examined by X-ray crystallography of the binding domain of an RNA-bound argonaute protein. Here, the phosphorylated 5' end of the RNA strand enters a conserved basic surface pocket and makes contacts through a divalent cation (an ion carrying two positive charges) such as magnesium, and by aromatic stacking between the 5' nucleotide of the siRNA and a conserved tyrosine residue. This site is thought to form a nucleation site for the binding of the siRNA to its mRNA target.[44] Analysis of the inhibitory effect of mismatches in either the 5' or 3' end of the guide strand has demonstrated that the 5' end of the guide strand is likely responsible for matching and binding the target mRNA, while the 3' end is responsible for physically arranging the target mRNA into a cleavage-favorable region of the RISC.[40]

It is not understood how the activated RISC complex locates complementary mRNAs within the cell. Although the cleavage process has been proposed to be linked to translation, translation of the mRNA target is not essential for RNAi-mediated degradation.[45] Indeed, RNAi may be more effective against mRNA targets that are not translated.[46] Argonaute proteins are localized to specific regions in the cytoplasm called P-bodies (also cytoplasmic bodies or GW bodies), which are regions with high rates of mRNA decay;[47] miRNA activity is also clustered in P-bodies.[48] Disruption of P-bodies decreases the efficiency of RNA interference, suggesting that they are a critical site in the RNAi process.[49]

Transcriptional silencing

[Figure: The enzyme Dicer trims double-stranded RNA to form small interfering RNA or microRNA. These processed RNAs are incorporated into the RNA-induced silencing complex (RISC), which targets messenger RNA to prevent translation.[50]]

Components of the RNAi pathway are used in many eukaryotes in the maintenance of the organization and structure of their genomes. Modification of histones and associated induction of heterochromatin formation serves to downregulate genes pre-transcriptionally;[51] this process is referred to as RNA-induced transcriptional silencing (RITS), and is carried out by a complex of proteins called the RITS complex. In fission yeast this complex contains argonaute, a chromodomain protein Chp1, and a protein called Tas3 of unknown function.[52] As a consequence, the induction and spread of heterochromatic regions requires the argonaute and RdRP proteins.[53] Indeed, deletion of these genes in the fission yeast S. pombe disrupts histone methylation and centromere formation,[54] causing slow or stalled anaphase during cell division.[55] In some cases, similar processes associated with histone modification have been observed to transcriptionally upregulate genes.[56]

The mechanism by which the RITS complex induces heterochromatin formation and organization is not well understood. Most studies have focused on the mating-type region in fission yeast, which may not be representative of activities in other genomic regions/organisms. In maintenance of existing heterochromatin regions, RITS forms a complex with siRNAs complementary to the local genes and stably binds local methylated histones, acting co-transcriptionally to degrade any nascent pre-mRNA transcripts that are initiated by RNA polymerase. The formation of such a heterochromatin region, though not its maintenance, is dicer-dependent, presumably because dicer is required to generate the initial complement of siRNAs that target subsequent transcripts.[57] Heterochromatin maintenance has been suggested to function as a self-reinforcing feedback loop, as new siRNAs are formed from the occasional nascent transcripts by RdRP for incorporation into local RITS complexes.[58] The relevance of observations from fission yeast mating-type regions and centromeres to mammals is not clear, as heterochromatin maintenance in mammalian cells may be independent of the components of the RNAi pathway.[59]

Crosstalk with RNA editing

The type of RNA editing that is most prevalent in higher eukaryotes converts adenosine nucleotides into inosine in dsRNAs via the enzyme adenosine deaminase (ADAR).[60] It was originally proposed in 2000 that the RNAi and A→I RNA editing pathways might compete for a common dsRNA substrate.[61] Some pre-miRNAs do undergo A→I RNA editing[62][63] and this mechanism may regulate the processing and expression of mature miRNAs.[63] Furthermore, at least one mammalian ADAR can sequester siRNAs from RNAi pathway components.[64] Further support for this model comes from studies on ADAR-null C. elegans strains indicating that A→I RNA editing may counteract RNAi silencing of endogenous genes and transgenes.[65]
 
[Figure: Illustration of the major differences between plant and animal gene silencing. Natively expressed microRNA or exogenous small interfering RNA is processed by Dicer and integrated into the RISC complex, which mediates gene silencing.[66]]

Variation among organisms

Organisms vary in their ability to take up foreign dsRNA and use it in the RNAi pathway. The effects of RNA interference can be both systemic and heritable in plants and C. elegans, although not in Drosophila or mammals. In plants, RNAi is thought to propagate by the transfer of siRNAs between cells through plasmodesmata (channels in the cell walls that enable communication and transport).[37] Heritability comes from methylation of promoters targeted by RNAi; the new methylation pattern is copied in each new generation of the cell.[67] A broad general distinction between plants and animals lies in the targeting of endogenously produced miRNAs; in plants, miRNAs are usually perfectly or nearly perfectly complementary to their target genes and induce direct mRNA cleavage by RISC, while animals' miRNAs tend to be more divergent in sequence and induce translational repression.[66] This translational effect may be produced by inhibiting the interactions of translation initiation factors with the messenger RNA's polyadenine tail.[68]

Some eukaryotic protozoa such as Leishmania major and Trypanosoma cruzi lack the RNAi pathway entirely.[69][70] Most or all of the components are also missing in some fungi, most notably the model organism Saccharomyces cerevisiae.[71] However, RNAi is present in other budding yeast species such as Saccharomyces castellii and Candida albicans, and inducing two RNAi-related proteins from S. castellii has been shown to enable RNAi in S. cerevisiae.[72] That certain ascomycetes and basidiomycetes lack RNA interference pathways indicates that the proteins required for RNA silencing have been lost independently from many fungal lineages, possibly due to the evolution of a novel pathway with similar function or to the lack of a selective advantage in certain niches.[73]

Related prokaryotic systems

Gene expression in prokaryotes is influenced by an RNA-based system similar in some respects to RNAi. Here, RNA-encoding genes control mRNA abundance or translation by producing a complementary RNA that anneals to an mRNA. However, these regulatory RNAs are not generally considered analogous to miRNAs because the Dicer enzyme is not involved.[74] It has been suggested that CRISPR interference systems in prokaryotes are analogous to eukaryotic RNA interference systems, although none of the protein components are orthologous.[75]

Biological functions

Immunity

RNA interference is a vital part of the immune response to viruses and other foreign genetic material, especially in plants where it may also prevent the self-propagation of transposons.[76] Plants such as Arabidopsis thaliana express multiple dicer homologs that are specialized to react differently when the plant is exposed to different viruses.[77] Even before the RNAi pathway was fully understood, it was known that induced gene silencing in plants could spread throughout the plant in a systemic effect and could be transferred from stock to scion plants via grafting.[78] This phenomenon has since been recognized as a feature of the plant adaptive immune system and allows the entire plant to respond to a virus after an initial localized encounter.[79] In response, many plant viruses have evolved elaborate mechanisms to suppress the RNAi response.[80] These include viral proteins that bind short double-stranded RNA fragments with single-stranded overhang ends, such as those produced by dicer.[81] Some plant genomes also express endogenous siRNAs in response to infection by specific types of bacteria.[82] These effects may be part of a generalized response to pathogens that downregulates any metabolic process in the host that aids the infection process.[83]

Although animals generally express fewer variants of the dicer enzyme than plants, RNAi in some animals produces an antiviral response. In both juvenile and adult Drosophila, RNA interference is important in antiviral innate immunity and is active against pathogens such as Drosophila X virus.[84][85] A similar role in immunity may operate in C. elegans, as argonaute proteins are upregulated in response to viruses and worms that overexpress components of the RNAi pathway are resistant to viral infection.[86][87]

The role of RNA interference in mammalian innate immunity is poorly understood, and relatively little data is available. However, the existence of viruses that encode genes able to suppress the RNAi response in mammalian cells may be evidence in favour of an RNAi-dependent mammalian immune response,[88][89] although this hypothesis has been challenged as poorly substantiated.[90] Maillard et al.[91] and Li et al.[92] provide evidence for the existence of a functional antiviral RNAi pathway in mammalian cells. Other functions for RNAi in mammalian viruses also exist, such as miRNAs expressed by the herpes virus that may act as heterochromatin organization triggers to mediate viral latency.[93]

Downregulation of genes

Endogenously expressed miRNAs, including both intronic and intergenic miRNAs, are most important in translational repression[66] and in the regulation of development, especially on the timing of morphogenesis and the maintenance of undifferentiated or incompletely differentiated cell types such as stem cells.[94] The role of endogenously expressed miRNA in downregulating gene expression was first described in C. elegans in 1993.[95] In plants this function was discovered when the "JAW microRNA" of Arabidopsis was shown to be involved in the regulation of several genes that control plant shape.[96] In plants, the majority of genes regulated by miRNAs are transcription factors;[97] thus miRNA activity is particularly wide-ranging and regulates entire gene networks during development by modulating the expression of key regulatory genes, including transcription factors as well as F-box proteins.[98] In many organisms, including humans, miRNAs are linked to the formation of tumors and dysregulation of the cell cycle. Here, miRNAs can function as both oncogenes and tumor suppressors.[99]

Upregulation of genes

RNA sequences (siRNA and miRNA) that are complementary to parts of a promoter can increase gene transcription, a phenomenon dubbed RNA activation. Part of the mechanism by which these RNAs upregulate genes is known: Dicer and argonaute are involved, possibly via histone demethylation.[100] miRNAs have also been proposed to upregulate their target genes upon cell cycle arrest, via unknown mechanisms.[101]

Evolution

Based on parsimony-based phylogenetic analysis, the most recent common ancestor of all eukaryotes most likely already possessed an early RNA interference pathway; the absence of the pathway in certain eukaryotes is thought to be a derived characteristic.[102] This ancestral RNAi system probably contained at least one dicer-like protein, one argonaute, one PIWI protein, and an RNA-dependent RNA polymerase that may also have played other cellular roles. A large-scale comparative genomics study likewise indicates that the eukaryotic crown group already possessed these components, which may then have had closer functional associations with generalized RNA degradation systems such as the exosome.[103] This study also suggests that the RNA-binding argonaute protein family, which is shared among eukaryotes, most archaea, and at least some bacteria (such as Aquifex aeolicus), is homologous to and originally evolved from components of the translation initiation system.

The ancestral function of the RNAi system is generally agreed to have been immune defense against exogenous genetic elements such as transposons and viral genomes.[102][104] Related functions such as histone modification may have already been present in the ancestor of modern eukaryotes, although other functions such as regulation of development by miRNA are thought to have evolved later.[102]

RNA interference genes, as components of the antiviral innate immune system in many eukaryotes, are involved in an evolutionary arms race with viral genes. Some viruses have evolved mechanisms for suppressing the RNAi response in their host cells, particularly for plant viruses.[80] Studies of evolutionary rates in Drosophila have shown that genes in the RNAi pathway are subject to strong directional selection and are among the fastest-evolving genes in the Drosophila genome.[105]

Applications

Gene knockdown

The RNA interference pathway is often exploited in experimental biology to study the function of genes in cell culture and in vivo in model organisms.[5] Double-stranded RNA is synthesized with a sequence complementary to a gene of interest and introduced into a cell or organism, where it is recognized as exogenous genetic material and activates the RNAi pathway. Using this mechanism, researchers can cause a drastic decrease in the expression of a targeted gene. Studying the effects of this decrease can reveal the physiological role of the gene product. Since RNAi may not totally abolish expression of the gene, this technique is sometimes referred to as a "knockdown", to distinguish it from "knockout" procedures in which expression of a gene is entirely eliminated.[106] In a recent study, validation of RNAi silencing efficiency using gene array data showed an 18.5% failure rate across 429 independent experiments.[107]

Extensive efforts in computational biology have been directed toward the design of successful dsRNA reagents that maximize gene knockdown but minimize "off-target" effects. Off-target effects arise when an introduced RNA has a base sequence that can pair with and thus reduce the expression of multiple genes. Such problems occur more frequently when the dsRNA contains repetitive sequences. It has been estimated from studying the genomes of humans, C. elegans and S. pombe that about 10% of possible siRNAs have substantial off-target effects.[9] A multitude of software tools have been developed implementing algorithms for the design of general[108][109] mammal-specific,[110] and virus-specific[111] siRNAs that are automatically checked for possible cross-reactivity.
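A minimal version of such a cross-reactivity check, scanning a candidate siRNA guide against a set of transcripts and flagging near-complementary sites by mismatch count, might look like the Python sketch below. The mismatch threshold and the toy "transcriptome" are assumptions for illustration:

COMPLEMENT = str.maketrans("AUGC", "UACG")

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def off_target_hits(guide, transcripts, max_mismatches=3):
    """Flag (gene, position, mismatches) where the guide nearly matches."""
    site = guide.translate(COMPLEMENT)[::-1]    # perfectly matched site
    hits = []
    for gene, seq in transcripts.items():
        for i in range(len(seq) - len(site) + 1):
            d = hamming(seq[i:i + len(site)], site)
            if d <= max_mismatches:
                hits.append((gene, i, d))
    return hits

guide = "UUAGCUAGCUACGAUCGCGCG"                 # invented guide strand
site = guide.translate(COMPLEMENT)[::-1]
transcripts = {"GENE_A": "AAA" + site + "AAA",                        # perfect match
               "GENE_B": "AAA" + site[:5] + "CC" + site[7:] + "AAA"}  # 2 mismatches
print(off_target_hits(guide, transcripts))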

Depending on the organism and experimental system, the exogenous RNA may be a long strand designed to be cleaved by dicer, or short RNAs designed to serve as siRNA substrates. In most mammalian cells, shorter RNAs are used because long double-stranded RNA molecules induce the mammalian interferon response, a form of innate immunity that reacts nonspecifically to foreign genetic material.[112] Mouse oocytes and cells from early mouse embryos lack this reaction to exogenous dsRNA and are therefore a common model system for studying mammalian gene-knockdown effects.[113] Specialized laboratory techniques have also been developed to improve the utility of RNAi in mammalian systems by avoiding the direct introduction of siRNA, for example, by stable transfection with a plasmid encoding the appropriate sequence from which siRNAs can be transcribed,[114] or by more elaborate lentiviral vector systems allowing the inducible activation or deactivation of transcription, known as conditional RNAi.[115][116]
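For instance, the hairpin encoded by such a plasmid is conventionally laid out as a sense copy of the target sequence, a short loop, the antisense copy, and a pol III terminator. The sketch below assembles a DNA insert in that layout; the loop and target sequences are placeholders rather than a validated design:

DNA_COMPLEMENT = str.maketrans("ATGC", "TACG")

def shrna_insert(target_dna, loop="TTCAAGAGA"):
    """Assemble sense-loop-antisense-terminator for an shRNA cassette."""
    antisense = target_dna.translate(DNA_COMPLEMENT)[::-1]
    terminator = "TTTTTT"                # pol III termination signal
    return target_dna + loop + antisense + terminator

# Placeholder 19-nt target site (DNA alphabet) in a hypothetical mRNA.
print(shrna_insert("GCAAGCTGACCCTGAAGTT"))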

Functional genomics

A normal adult Drosophila fly, a common model organism used in RNAi experiments.

Most functional genomics applications of RNAi in animals have used C. elegans[117] and Drosophila,[118] as these are the common model organisms in which RNAi is most effective. C. elegans is particularly useful for RNAi research for two reasons: first, the effects of gene silencing are generally heritable, and second, delivery of the dsRNA is extremely simple. Through a mechanism whose details are poorly understood, bacteria such as E. coli that carry the desired dsRNA can be fed to the worms and will transfer their RNA payload to the worm via the intestinal tract. This "delivery by feeding" is just as effective at inducing gene silencing as more costly and time-consuming delivery methods, such as soaking the worms in dsRNA solution or injecting dsRNA into the gonads.[119] Although delivery is more difficult in most other organisms, efforts are also underway to develop large-scale genomic screening applications for cell culture with mammalian cells.[120]

Approaches to the design of genome-wide RNAi libraries can require more sophistication than the design of a single siRNA for a defined set of experimental conditions. Artificial neural networks are frequently used to design siRNA libraries[121] and to predict their likely efficiency at gene knockdown.[122] Mass genomic screening is widely seen as a promising method for genome annotation and has triggered the development of high-throughput screening methods based on microarrays.[123][124] However, the utility of these screens and the ability of techniques developed on model organisms to generalize to even closely related species has been questioned, for example from C. elegans to related parasitic nematodes.[125][126]
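As a stand-in for those neural-network predictors, a toy scorer over a few simple sequence features (GC content, 5'-end composition, runs of U) shows the shape of the approach. The features are ones commonly discussed in siRNA design, but the weights here are invented, not trained:

def features(sirna):
    """A few sequence features often considered in siRNA design."""
    gc = sum(b in "GC" for b in sirna) / len(sirna)
    au_start = 1.0 if sirna[0] in "AU" else 0.0   # A/U at guide 5' end
    u_run = 1.0 if "UUUU" in sirna else 0.0       # internal poly-U run
    return [gc, au_start, u_run]

def score(sirna):
    """Linear toy model; weights are invented for illustration."""
    weights, bias = [-1.5, 1.0, -2.0], 1.0
    return bias + sum(w * f for w, f in zip(weights, features(sirna)))

candidates = ["UUAGCUAGCUACGAUCGCGCG", "GCGCGCGCGGCCGCGCGCGCG"]
print(max(candidates, key=score))     # pick the better-scoring candidate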

Functional genomics using RNAi is a particularly attractive technique for genomic mapping and annotation in plants because many plants are polyploid, which presents substantial challenges for more traditional genetic engineering methods. For example, RNAi has been successfully used for functional genomics studies in bread wheat (which is hexaploid)[127] as well as more common plant model systems Arabidopsis and maize.[128]

Medicine

History of RNAi use in medicine
[Figure: Timeline of the use of RNAi in medicine, 1996-2017.]

The first instance of RNA silencing in animals was documented in 1996, when Guo and Kemphues observed that introducing sense and antisense RNA complementary to par-1 mRNA in Caenorhabditis elegans caused degradation of the par-1 message.[129] It was thought that this degradation was triggered by single-stranded RNA (ssRNA), but two years later, in 1998, Fire and Mello discovered that the silencing of par-1 expression was actually triggered by double-stranded RNA (dsRNA).[129] They would eventually share the Nobel Prize in Physiology or Medicine for this discovery.[130] Just after Fire and Mello's ground-breaking discovery, Elbashir et al. found that, by using synthetic small interfering RNA (siRNA), it was possible to target the silencing of specific sequences in a gene, rather than silencing the entire gene.[131] Only a year later, McCaffrey and colleagues demonstrated that this sequence-specific silencing had therapeutic applications by targeting a sequence from the hepatitis C virus in transgenic mice.[132] Since then, multiple researchers have attempted to expand the therapeutic applications of RNAi, specifically targeting genes that cause various types of cancer.[133][134] In 2004, this new gene-silencing technology entered a Phase I clinical trial in humans for wet age-related macular degeneration.[131] Six years later, the first in-human Phase I clinical trial using a nanoparticle delivery system to target solid tumors was started.[135] Although most research currently focuses on the applications of RNAi in cancer treatment, the list of possible applications is extensive. RNAi could potentially be used to treat viruses,[136] bacterial diseases,[137] parasites,[138] and maladaptive genetic mutations,[139] as well as to control drug consumption,[140] provide pain relief,[141] and even modulate sleep.[142]

Therapeutic applications

Viral infection
Antiviral treatment is one of the earliest proposed RNAi-based medical applications, and two different types have been developed. The first type targets viral RNAs. Many studies have shown that targeting viral RNAs can suppress the replication of numerous viruses, including HIV,[143] HPV,[144] hepatitis A,[145] hepatitis B,[146] influenza virus,[147] and measles virus.[148] The other strategy is to block initial viral entry by targeting host cell genes. For example, suppression of the chemokine receptors CXCR4 and CCR5 on host cells can prevent HIV viral entry.[149]
Cancer
While traditional chemotherapy can effectively kill cancer cells, its lack of specificity for discriminating between normal cells and cancer cells usually causes severe side effects. Numerous studies have demonstrated that RNAi can provide a more specific approach to inhibiting tumor growth by targeting cancer-related genes (i.e., oncogenes).[150] It has also been proposed that RNAi can enhance the sensitivity of cancer cells to chemotherapeutic agents, providing a combinatorial therapeutic approach with chemotherapy.[151] Another potential RNAi-based treatment is to inhibit cell invasion and migration.[152]
Neurological diseases
RNAi strategies also show potential for treating neurodegenerative diseases. Studies in cells and in mice have shown that specifically targeting amyloid beta-producing genes (e.g., BACE1 and APP) with RNAi can significantly reduce the amount of Aβ peptide, which is correlated with the cause of Alzheimer's disease.[153][154][155] In addition, these silencing-based approaches have provided promising results in the treatment of Parkinson's disease and polyglutamine diseases.[156][157][158]

Difficulties in Therapeutic Application

To achieve its clinical potential, siRNA must be efficiently transported to the cells of target tissues. However, various barriers must be overcome before RNAi can be used clinically. For example, "naked" siRNA is susceptible to several obstacles that reduce its therapeutic efficacy.[159] Once siRNA has entered the bloodstream, naked RNA can be degraded by serum nucleases and can stimulate the innate immune system.[160] Due to its size and highly polyanionic nature (carrying negative charges at several sites), unmodified siRNA cannot readily enter cells through the cell membrane, so chemically modified or nanoparticle-encapsulated siRNA must be used. Even so, transporting siRNA across the cell membrane presents its own unique challenges: if therapeutic doses are not optimized, unintended toxicities can occur, and siRNAs can exhibit off-target effects (e.g., unintended downregulation of genes with partial sequence complementarity).[161] Even after entering the cells, repeated dosing is required, since the effects are diluted at each cell division.

Safety and Uses in Cancer treatment

Compared with chemotherapy or other anti-cancer drugs, siRNA drugs offer several advantages.[162] siRNA acts at the post-transcriptional stage of gene expression, so it does not modify or damage DNA.[162] siRNA can also be used to produce a specific, graded response, such as partial suppression of gene expression.[162] In a single cancer cell, just a few copies of siRNA can cause dramatic suppression of gene expression.[162] This happens by silencing cancer-promoting genes with RNAi and by targeting specific mRNA sequences.[162]

RNAi drugs treat cancer by silencing certain cancer-promoting genes.[162] This is done by designing the RNAi to be complementary to the cancer gene's mRNA sequence.[162] Ideally, the RNAi should be injected and/or chemically modified so that it can reach cancer cells more efficiently.[162] RNAi uptake and regulation are monitored by the kidneys.[162]

Stimulation of immune response

The human immune system is divided into two branches: the innate immune system and the adaptive immune system.[163] The innate immune system is the first defense against infection and responds to pathogens in a generic fashion.[163] The adaptive immune system, which evolved later than the innate system, is composed mainly of highly specialized B and T cells that are trained to react to specific portions of pathogenic molecules.[163]

The continual challenge posed by both old and new pathogens has helped shape this guarded system of cells and molecules.[163] It gives humans an arsenal of defenses that seek out and destroy invading particles such as pathogens, microscopic organisms, parasites, and viruses.[163] The mammalian immune system has evolved to treat dsRNA such as siRNA as an indicator of viral contamination, which means siRNA itself can trigger an intense innate immune response.[163]

Responses to siRNA are controlled by the innate immune system, whose reactions can be divided into acute inflammatory responses and antiviral responses.[163] The inflammatory response is created by signals from small signaling molecules, or cytokines.[163] These include interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-12 (IL-12), and tumor necrosis factor α (TNF-α).[163] The innate immune system generates its inflammatory and antiviral responses through pattern recognition receptors (PRRs).[163] These receptors help label which pathogens are viruses, fungi, or bacteria.[163] Moreover, because PRRs recognize a range of RNA structures, siRNA itself is likely to trigger an immunostimulatory response resembling that raised against a pathogen.[163]

Prospects as a Therapeutic Technique

Clinical Phase I and II studies of siRNA therapies conducted between 2015 and 2017 demonstrated potent and durable gene knockdown in the liver, with some signs of clinical improvement and without unacceptable toxicity.[164] Two Phase III studies are in progress to treat familial neurodegenerative and cardiac syndromes caused by mutations in transthyretin (TTR).[165] Numerous publications have shown that in vivo delivery systems are very promising and diverse in characteristics, allowing numerous applications. The nanoparticle delivery system shows the most promise, yet this method presents additional challenges in scaling up the manufacturing process, such as the need for tightly controlled mixing to achieve consistent quality of the drug product.[166] The table below shows different drugs using RNA interference and their phase and status in clinical trials.[159]
 
Drug | Target | Delivery system | Disease | Phase | Status | Company | Identifier
ALN–VSP02 | KSP and VEGF | LNP | Solid tumours | I | Completed | Alnylam Pharmaceuticals | NCT01158079
siRNA–EphA2–DOPC | EphA2 | LNP | Advanced cancers | I | Recruiting | MD Anderson Cancer Center | NCT01591356
Atu027 | PKN3 | LNP | Solid tumours | I | Completed | Silence Therapeutics | NCT00938574
TKM–080301 | PLK1 | LNP | Cancer | I | Recruiting | Tekmira Pharmaceutical | NCT01262235
TKM–100201 | VP24, VP35, Zaire Ebola L-polymerase | LNP | Ebola-virus infection | I | Recruiting | Tekmira Pharmaceutical | NCT01518881
ALN–RSV01 | RSV nucleocapsid | Naked siRNA | Respiratory syncytial virus infections | II | Completed | Alnylam Pharmaceuticals | NCT00658086
PRO-040201 | ApoB | LNP | Hypercholesterolaemia | I | Terminated | Tekmira Pharmaceutical | NCT00927459
ALN–PCS02 | PCSK9 | LNP | Hypercholesterolaemia | I | Completed | Alnylam Pharmaceuticals | NCT01437059
ALN–TTR02 | TTR | LNP | Transthyretin-mediated amyloidosis | II | Recruiting | Alnylam Pharmaceuticals | NCT01617967
CALAA-01 | RRM2 | Cyclodextrin NP | Solid tumours | I | Active | Calando Pharmaceuticals | NCT00689065
TD101 | K6a (N171K mutation) | Naked siRNA | Pachyonychia congenita | I | Completed | Pachyonychia Congenita Project | NCT00716014
AGN211745 | VEGFR1 | Naked siRNA | Age-related macular degeneration, choroidal neovascularization | II | Terminated | Allergan | NCT00395057
QPI-1007 | CASP2 | Naked siRNA | Optic atrophy, non-arteritic anterior ischaemic optic neuropathy | I | Completed | Quark Pharmaceuticals | NCT01064505
I5NP | p53 | Naked siRNA | Kidney injury, acute renal failure | I | Completed | Quark Pharmaceuticals | NCT00554359
I5NP | p53 | Naked siRNA | Delayed graft function, complications of kidney transplant | I, II | Recruiting | Quark Pharmaceuticals | NCT00802347
PF-655 (PF-04523655) | RTP801 (proprietary target) | Naked siRNA | Choroidal neovascularization, diabetic retinopathy, diabetic macular oedema | II | Active | Quark Pharmaceuticals | NCT01445899
siG12D LODER | KRAS | LODER polymer | Pancreatic cancer | II | Recruiting | Silenseed | NCT01676259
Bevasiranib | VEGF | Naked siRNA | Diabetic macular oedema, macular degeneration | II | Completed | Opko Health | NCT00306904
SYL1001 | TRPV1 | Naked siRNA | Ocular pain, dry-eye syndrome | I, II | Recruiting | Sylentis | NCT01776658
SYL040012 | ADRB2 | Naked siRNA | Ocular hypertension, open-angle glaucoma | II | Recruiting | Sylentis | NCT01739244
CEQ508 | CTNNB1 | Escherichia coli-carried shRNA | Familial adenomatous polyposis | I, II | Recruiting | Marina Biotech | Unknown
RXi-109 | CTGF | Self-delivering RNAi compound | Cicatrix scar prevention | I | Recruiting | RXi Pharmaceuticals | NCT01780077
ALN–TTRsc | TTR | siRNA–GalNAc conjugate | Transthyretin-mediated amyloidosis | I | Recruiting | Alnylam Pharmaceuticals | NCT01814839
ARC-520 | Conserved regions of HBV | DPC | HBV | I | Recruiting | Arrowhead Research | NCT01872065

Biotechnology

RNA interference has been used in a number of biotechnology applications and is nearing commercialization in others. RNAi has been used to develop novel crops such as nicotine-free tobacco, decaffeinated coffee, and nutrient-fortified and hypoallergenic crops. The genetically engineered Arctic apples received FDA approval in 2015.[167] The apples were produced by RNAi suppression of the PPO (polyphenol oxidase) gene, yielding varieties that do not undergo browning after being sliced. PPO-silenced apples are unable to convert chlorogenic acid into the quinone product.[1]

There are several opportunities to apply RNAi in crop science for improvements such as stress tolerance and enhanced nutritional levels. RNAi may also prove useful for inhibiting photorespiration to enhance the productivity of C3 plants. This knockdown technology may be useful in inducing early flowering, delayed ripening, delayed senescence, breaking dormancy, producing stress-tolerant plants, overcoming self-sterility, and so on.[1]

Foods

RNAi has been used to genetically engineer plants to produce lower levels of natural plant toxins. Such techniques take advantage of the stable and heritable RNAi phenotype in plant stocks. Cotton seeds are rich in dietary protein but naturally contain the toxic terpenoid product gossypol, making them unsuitable for human consumption. RNAi has been used to produce cotton stocks whose seeds contain reduced levels of delta-cadinene synthase, a key enzyme in gossypol production, without affecting the enzyme's production in other parts of the plant, where gossypol is itself important in preventing damage from plant pests.[168] Similar efforts have been directed toward the reduction of the cyanogenic natural product linamarin in cassava plants.[169]

Most plant products that use RNAi-based genetic engineering have yet to exit the experimental stage. Development efforts have successfully reduced the levels of allergens in tomato plants[170] and fortified plants such as tomatoes with dietary antioxidants.[171] Previous commercial products, including the Flavr Savr tomato and two cultivars of ringspot-resistant papaya, were originally developed using antisense technology but likely exploited the RNAi pathway.[172][173]

Other crops

Another effort decreased the precursors of likely carcinogens in tobacco plants.[174] Other plant traits that have been engineered in the laboratory include the production of non-narcotic natural products by the opium poppy[175] and resistance to common plant viruses.[176]

Insecticide

RNAi is under development as an insecticide, employing multiple approaches, including genetic engineering and topical application.[3] Cells in the midgut of some insects take up dsRNA molecules in a process referred to as environmental RNAi.[177] In some insects the effect is systemic, as the signal spreads throughout the insect's body (referred to as systemic RNAi).[178]

RNAi technology has been shown to be safe for consumption by mammals, including humans.[179]

RNAi has varying effects in different species of Lepidoptera (butterflies and moths).[180] Possibly because their saliva and gut juices are better at breaking down RNA, the cotton bollworm, the beet armyworm, and the Asiatic rice borer have so far not proven susceptible to RNAi by feeding.[3]

To develop resistance to RNAi, the western corn rootworm would have to change the genetic sequence of its Snf7 gene at multiple sites. Combining multiple strategies, such as engineering the Cry protein, derived from the bacterium Bacillus thuringiensis (Bt), together with RNAi in one plant delays the onset of resistance.[3][181]
Transgenic plants
Transgenic crops have been made to express dsRNA carefully chosen to silence crucial genes in target pests. These dsRNAs are designed to affect only insects that express specific gene sequences. As a proof of principle, a 2009 study showed that dsRNAs could be designed to kill any one of four fruit fly species while not harming the other three.[3]

In 2012 Syngenta bought Belgian RNAi firm Devgen for $522 million and Monsanto paid $29.2 million for the exclusive rights to intellectual property from Alnylam Pharmaceuticals. The International Potato Center in Lima, Peru is looking for genes to target in the sweet potato weevil, a beetle whose larvae ravage sweet potatoes globally. Other researchers are trying to silence genes in ants, caterpillars and pollen beetles. Monsanto will likely be first to market, with a transgenic corn seed that expresses dsRNA based on gene Snf7 from the western corn rootworm, a beetle whose larvae annually cause one billion dollars in damage in the United States alone. A 2012 paper showed that silencing Snf7 stunts larval growth, killing them within days. In 2013 the same team showed that the RNA affects very few other species.[3]
Topical
Alternatively, dsRNA can be supplied without genetic engineering. One approach is to add it to irrigation water; the molecules are absorbed into the plants' vascular system and poison insects feeding on them. Another approach involves spraying dsRNA like a conventional pesticide, which would allow faster adaptation when pests evolve resistance. Such approaches will require low-cost sources of dsRNA that do not currently exist.[3]

Genome-scale screening

Genome-scale RNAi research relies on high-throughput screening (HTS) technology. RNAi HTS technology allows genome-wide loss-of-function screening and is broadly used to identify genes associated with specific phenotypes. This technology has been hailed as the second genomics wave, following the first wave of gene expression microarray and single-nucleotide polymorphism discovery platforms.[182] One major advantage of genome-scale RNAi screening is its ability to interrogate thousands of genes simultaneously. This capacity to generate a large amount of data per experiment has led to an explosion of data generation rates; exploiting such large data sets is a fundamental challenge, requiring suitable statistical and bioinformatics methods. The basic process of cell-based RNAi screening includes the choice of an RNAi library, robust and stable cell types, transfection with RNAi agents, treatment/incubation, signal detection, analysis, and identification of important genes or therapeutic targets.[183]
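One widely used statistic for calling hits in such screens is the robust z-score, which normalizes each readout against the plate median and median absolute deviation (MAD). A minimal Python version over invented readout values:

import statistics

def robust_z(values):
    """Robust z-score: (x - median) / (1.4826 * MAD)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad or 1.0          # guard against zero MAD
    return [(v - med) / scale for v in values]

# Invented viability readouts for one plate, keyed by silenced gene.
plate = {"geneA": 0.98, "geneB": 1.02, "geneC": 0.35, "geneD": 1.01}
z = dict(zip(plate, robust_z(list(plate.values()))))
print([g for g, s in z.items() if abs(s) > 3])   # -> ['geneC']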

History

[Figure: Example petunia plants in which genes for pigmentation are silenced by RNAi. The left plant is wild-type; the right plants contain transgenes that induce suppression of both transgene and endogenous gene expression, giving rise to the unpigmented white areas of the flower.[184]]

The process of RNAi was referred to as "co-suppression" and "quelling" when it was observed prior to knowledge of an RNA-related mechanism. The discovery of RNAi was preceded first by observations of transcriptional inhibition by antisense RNA expressed in transgenic plants,[185] and more directly by reports of unexpected outcomes in experiments performed by plant scientists in the United States and the Netherlands in the early 1990s.[186] In an attempt to alter flower colors in petunias, researchers introduced additional copies of a gene encoding chalcone synthase, a key enzyme for flower pigmentation, into petunia plants of normally pink or violet flower color. The overexpressed gene was expected to result in darker flowers, but instead caused some flowers to have less visible purple pigment, sometimes in variegated patterns, indicating that the activity of chalcone synthase had been substantially decreased or suppressed in a context-specific manner. This would later be explained as the result of the transgene being inserted adjacent to promoters in the opposite direction, in various positions throughout the genomes of some transformants, thus leading to expression of antisense transcripts and gene silencing when these promoters are active. Another early observation of RNAi came from a study of the fungus Neurospora crassa,[187] although it was not immediately recognized as related. Further investigation of the phenomenon in plants indicated that the downregulation was due to post-transcriptional inhibition of gene expression via an increased rate of mRNA degradation.[188] This phenomenon was called co-suppression of gene expression, but the molecular mechanism remained unknown.[189]

Not long after, plant virologists working on improving plant resistance to viral diseases observed a similar unexpected phenomenon. While it was known that plants expressing virus-specific proteins showed enhanced tolerance or resistance to viral infection, it was not expected that plants carrying only short, non-coding regions of viral RNA sequences would show similar levels of protection. Researchers believed that viral RNA produced by transgenes could also inhibit viral replication.[190] The reverse experiment, in which short sequences of plant genes were introduced into viruses, showed that the targeted gene was suppressed in an infected plant.[191] This phenomenon was labeled "virus-induced gene silencing" (VIGS), and the set of such phenomena were collectively called post transcriptional gene silencing.[192]

After these initial observations in plants, laboratories searched for this phenomenon in other organisms.[193][194] Craig C. Mello and Andrew Fire's 1998 Nature paper reported a potent gene-silencing effect after injecting double-stranded RNA into C. elegans.[195] In investigating the regulation of muscle protein production, they observed that neither mRNA nor antisense RNA injections had an effect on protein production, but double-stranded RNA successfully silenced the targeted gene. As a result of this work, they coined the term RNAi. This discovery represented the first identification of the causative agent of the phenomenon. Fire and Mello were awarded the 2006 Nobel Prize in Physiology or Medicine.
