
Monday, July 16, 2018

Friedel–Crafts reaction

From Wikipedia, the free encyclopedia

The Friedel–Crafts reactions are a set of reactions developed by Charles Friedel and James Crafts in 1877 to attach substituents to an aromatic ring. Friedel–Crafts reactions are of two main types: alkylation reactions and acylation reactions. Both proceed by electrophilic aromatic substitution.

Friedel–Crafts alkylation

Friedel–Crafts alkylation involves the alkylation of an aromatic ring with an alkyl halide using a strong Lewis acid catalyst.[6] With anhydrous ferric chloride as a catalyst, the alkyl group attaches at the former site of the chloride ion. The general mechanism is shown below.[7]
Mechanism for the Friedel Crafts alkylation
This reaction suffers from the disadvantage that the product is more nucleophilic than the reactant, so overalkylation occurs. Furthermore, the reaction is only very useful with tertiary and secondary alkylating agents; otherwise the incipient carbocation (R+) will undergo a carbocation rearrangement reaction.[7]

Steric hindrance can be exploited to limit the number of alkylations, as in the t-butylation of 1,4-dimethoxybenzene.[citation needed]
t-butylation of 1,4-dimethoxybenzene
Alkylations are not limited to alkyl halides: Friedel–Crafts reactions are possible with any carbocationic intermediate, such as those derived from alkenes and a protic or Lewis acid, from enones, or from epoxides. An example is the synthesis of neophyl chloride from benzene and methallyl chloride:[8]
H2C=C(CH3)CH2Cl + C6H6 → C6H5C(CH3)2CH2Cl
In one study the electrophile is a bromonium ion derived from an alkene and NBS:[9]
Friedel–Crafts alkylation by an alkene
In this reaction samarium(III) triflate is believed to activate the NBS halogen donor in halonium ion formation.

Friedel–Crafts dealkylation

Friedel–Crafts alkylation has been hypothesized to be reversible. In a reversed Friedel–Crafts reaction, or Friedel–Crafts dealkylation, alkyl groups are removed in the presence of protons or other Lewis acids.

For example, in a multiple addition of ethyl bromide to benzene, ortho and para substitution is expected after the first monosubstitution step because an alkyl group is an activating group. However, the actual reaction product is 1,3,5-triethylbenzene, with all alkyl groups meta to one another.[10] Under thermodynamic reaction control, chemical equilibration favors the thermodynamically preferred meta substitution pattern, which minimizes steric hindrance, over the less favorable ortho and para substitution. The ultimate reaction product is thus the result of a series of alkylations and dealkylations.[citation needed]
synthesis of 1,3,5-triethylbenzene

Friedel–Crafts acylation

Friedel–Crafts acylation involves the acylation of aromatic rings. Typical acylating agents are acyl chlorides. Typical catalysts are Lewis acids such as aluminium trichloride, although Brønsted acids can also be used. Friedel–Crafts acylation is also possible with acid anhydrides.[11] Reaction conditions are similar to those of the Friedel–Crafts alkylation. This reaction has several advantages over the alkylation reaction. Due to the electron-withdrawing effect of the carbonyl group, the ketone product is always less reactive than the original molecule, so multiple acylations do not occur. Also, there are no carbocation rearrangements, as the acylium ion is stabilized by a resonance structure in which the positive charge is on the oxygen.
Friedel–Crafts acylation overview
The viability of the Friedel–Crafts acylation depends on the stability of the acyl chloride reagent. Formyl chloride, for example, is too unstable to be isolated. Thus, synthesis of benzaldehyde via the Friedel–Crafts pathway requires that formyl chloride be synthesized in situ. This is accomplished via the Gattermann–Koch reaction, in which benzene is treated with carbon monoxide and hydrogen chloride under high pressure, catalyzed by a mixture of aluminium chloride and cuprous chloride.

Reaction mechanism

The reaction proceeds via generation of an acylium center:
FC acylation step 1
The reaction is completed by deprotonation of the arenium ion by AlCl4−, regenerating the AlCl3 catalyst:
FC acylation step III
If desired, the resulting ketone can be subsequently reduced to the corresponding alkane substituent by either Wolff–Kishner reduction or Clemmensen reduction. The net result is the same as the Friedel–Crafts alkylation except that rearrangement is not possible.[12]

Friedel–Crafts hydroxyalkylation

Arenes react with certain aldehydes and ketones to form the hydroxyalkylated products, for example in the reaction of the mesityl derivative of glyoxal with benzene:[13]
Friedel–Crafts hydroxyalkylation
As usual, the aldehyde group is a more reactive electrophile than the ketone.

Friedel–Crafts sulfonylation

Under Friedel–Crafts reaction conditions, arenes react with sulfonyl halides and sulfonic acid anhydrides affording sulfones. Commonly used catalysts include AlCl3, FeCl3, GaCl3, BF3, SbCl5, BiCl3 and Bi(OTf)3, among others.[14][15] Intramolecular Friedel–Crafts cyclization occurs with 2-phenyl-1-ethanesulfonyl chloride, 3-phenyl-1-propanesulfonyl chloride and 4-phenyl-1-butanesulfonyl chloride on heating in nitrobenzene with AlCl3.[16] Sulfenyl and sulfinyl chlorides also undergo Friedel–Crafts–type reactions, affording sulfides and sulfoxides, respectively.[17] Both aryl sulfinyl chlorides and diaryl sulfoxides can be prepared from arenes through reaction with thionyl chloride in the presence of catalysts such as BiCl3, Bi(OTf)3, LiClO4 or NaClO4.[18][19]

Scope and variations

This reaction is related to several classic named reactions:
Bogert–Cook synthesis
  • The Darzens–Nenitzescu synthesis of ketones (1910, 1936) involves the acylation of cyclohexene with acetyl chloride to give methyl cyclohexenyl ketone.
  • In the related Nenitzescu reductive acylation (1936), a saturated hydrocarbon is added, making it a reductive acylation that gives methyl cyclohexyl ketone.
  • The Nencki reaction (1881) is the ring acetylation of phenols with acids in the presence of zinc chloride.[33]
  • In a green chemistry variation, aluminium chloride is replaced by graphite in an alkylation of p-xylene with 2-bromobutane. This variation does not work with primary halides, from which less carbocation involvement is inferred.[34]

Dyes

Friedel–Crafts reactions have been used in the synthesis of several triarylmethane and xanthene dyes.[35] Examples are the synthesis of thymolphthalein (a pH indicator) from two equivalents of thymol and phthalic anhydride:
Thymolphthalein Synthesis
A reaction of phthalic anhydride with resorcinol in the presence of zinc chloride gives the fluorophore Fluorescein. Replacing resorcinol by N,N-diethylaminophenol in this reaction gives rhodamine B:
Rhodamine B synthesis

Haworth reactions

The Haworth reaction is a classic method for the synthesis of 1-tetralone.[36] In it, benzene is reacted with succinic anhydride, the intermediate product is reduced, and a second Friedel–Crafts acylation then takes place with addition of acid.[37]
Haworth reaction
In a related reaction, phenanthrene is synthesized from naphthalene and succinic anhydride in a series of steps.
Haworth Phenanthrene synthesis

Friedel–Crafts test for aromatic hydrocarbons

Reaction of chloroform with aromatic compounds using an aluminium chloride catalyst gives triarylmethanes, which are often brightly colored, as is the case in triarylmethane dyes. This is a bench test for aromatic compounds.

Humans With Amplified Intelligence Could Be More Powerful Than AI



With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It's an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.
 
As a species, we've been amplifying our brains for millennia. Or at least we've tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today's nootropics. But none of these compare to what's in store.
 
Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.

Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organiser of the Singularity Summit. He's given this subject considerable thought — and warns that we need to be just as wary of IA as we are AI.

 
Michael, when we speak of Intelligence Amplification, what are we really talking about? Are we looking to create Einsteins? Or is it something significantly more profound?

The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

The first step will be to create a direct neural link to information. Think of it as a "telepathic Google."

The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualisation and manipulation capabilities. Imagine being able to visualise a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of sensory cortex, like tactile cortex and auditory cortex.

The third step involves the genuine augmentation of pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-McGyvers, people who perform apparently impossible intellectual feats. For instance, mind controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.

For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.

Looking ahead to the next few decades, how could IA come about? Is the human brain really that fungible?

The human brain is not really that fungible. It is the product of more than seven million years of evolutionary optimization and fine-tuning, which is to say that it's already highly optimised given its inherent constraints. Attempts to overclock it usually cause it to break, as demonstrated by the horrific effects of amphetamine addiction.

Chemicals are not targeted enough to produce big gains in human cognitive performance. The evidence for the effectiveness of current "brain-enhancing drugs" is extremely sketchy. To achieve real strides will require brain implants with connections to millions of neurons. This will require millions of tiny electrodes, and a control system to synchronise them all. The current state-of-the-art brain-computer interfaces have around 1000 connections. So, current devices need to be scaled up by more than 1000 times to get anywhere interesting. Even if you assume exponential improvement, it will be a while before this is possible — at least 15 to 20 years.

Improvement in IA rests upon progress in nano-manufacturing. Brain-computer interface engineers, like Ed Boyden at MIT, depend upon improvements in manufacturing to build these devices. Manufacturing is the linchpin on which everything else depends.

Given that there is very little development of atomically-precise manufacturing technologies, nanoscale self-assembly seems like the most likely route to million-electrode brain-computer interfaces. Nanoscale self-assembly is not atomically precise, but it's precise by the standards of bulk manufacturing and photolithography.

What potential psychological side-effects may emerge from a radically enhanced human? Would they even be considered a human at this point?

One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider "crazy." There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing these people they are insane.

Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.

Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, "An intelligent man is sometimes forced to be drunk to spend time with his fools." What if drunkenness were not enough to instil camaraderie and mutual affection? There could be a clean "empathy break" that leads to psychopathy.

So which will come first? AI or IA?

It's very difficult to predict either. There is a tremendous bias for wanting IA to come first, because of all the fun movies and video games with intelligence-enhanced protagonists. It's important to recognise that this bias in favour of IA does not in fact influence the actual technological difficulty of the approach. My guess is that AI will come first because development is so much cheaper and cleaner.

Both endeavours are extremely difficult. They may not come to pass until the 2060s, 2070s, or later. Eventually, however, they must both come to pass — there's nothing magical about intelligence, and the demand for its enhancement is enormous. It would require nothing less than a global totalitarian Luddite dictatorship to hold either back for the long term.

What are the advantages and disadvantages to the two different developmental approaches?

The primary advantage of the AI route is that it is immeasurably cheaper and easier to do research. AI is developed on paper and in code. Most useful IA research, on the other hand, is illegal. Serious IA would require deep neurosurgery and experimental brain implants. These brain implants may malfunction, causing seizures, insanity, or death. Enhancing human intelligence in a qualitative way is not a matter of popping a few pills — you really need to develop brain implants to get any significant returns.

Most research in that area is heavily regulated and expensive. All animal testing is expensive. Theodore Berger has been working on a hippocampal implant for a number of years — and in 2004 it passed a live tissue test, but there has been very little news since then. Every few years he pops up in the media and says it's just around the corner, but I'm sceptical. Meanwhile, there is a lot of intriguing progress in Artificial Intelligence.

Does IA have the potential to be safer than AI as far as predictability and controllability are concerned? Is it important that we develop IA before super-powerful AGI?

Intelligence Augmentation is much more unpredictable and uncontrollable than AGI has the potential to be. It's actually quite dangerous, in the long term. I recently wrote an article that speculates on global political transformation caused by a large amount of power concentrated in the hands of a small group due to "miracle technologies" like IA or molecular manufacturing. I also coined the term "Maximillian", meaning "the best", to refer to a powerful leader making use of intelligence enhancement technology to put himself in an unassailable position.

Image: The cognitively enhanced Reginald Barclay from the ST:TNG episode, "The Nth Degree".

The problem with IA is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes.

AGI, on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing.

People say, "won't it reject those motivations?" It won't, because those motivations will make up its entire core of values — if it's programmed properly. There will be no "ghost in the machine" to emerge and overthrow its programmed motives. Philosopher Nick Bostrom does an excellent analysis of this in his paper "The Superintelligent Will". The key point is that selfish motivations will not magically emerge if an AI has a goal system that is fundamentally selfless, if the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn't mean we can't code selfless agents de novo.

What roadblocks, be they technological, medical, or ethical, do you see hindering development?

The biggest roadblock is developing the appropriate manufacturing technology. Right now, we aren't even close.

Another roadblock is figuring out what exactly each neuron does, and identifying the exact positions of these neurons in individual people. Again, we're not even close.

Thirdly, we need some way to quickly test extremely fine-grained theories of brain function — what Ed Boyden calls "high throughput circuit screening" of neural circuits. The best way to do this would be to somehow create a human being without consciousness and experiment on them to our heart's content, but I have a feeling that idea might not go over so well with ethics committees.

Absent that, we'd need an extremely high-resolution simulation of the human brain. Contrary to hype surrounding "brain simulation" projects today, such a high-resolution simulation is not likely to be developed until the 2050-2080 timeframe. An Oxford analysis picks a median date of around 2080. That sounds a bit conservative to me, but in the right ballpark.

Regulation of gene expression

From Wikipedia, the free encyclopedia

Regulation of gene expression by a hormone receptor
Diagram showing at which stages in the DNA-mRNA-protein pathway expression can be controlled

Regulation of gene expression includes a wide range of mechanisms that are used by cells to increase or decrease the production of specific gene products (protein or RNA), and is informally termed gene regulation. Sophisticated programs of gene expression are widely observed in biology, for example to trigger developmental pathways, respond to environmental stimuli, or adapt to new food sources. Virtually any step of gene expression can be modulated, from transcriptional initiation, to RNA processing, and to the post-translational modification of a protein. Often, one gene regulator controls another, and so on, in a gene regulatory network.

Gene regulation is essential for viruses, prokaryotes and eukaryotes as it increases the versatility and adaptability of an organism by allowing the cell to express protein when needed. Although as early as 1951, Barbara McClintock showed interaction between two genetic loci, Activator (Ac) and Dissociator (Ds), in the color formation of maize seeds, the first discovery of a gene regulation system is widely considered to be the identification in 1961 of the lac operon, discovered by François Jacob and Jacques Monod, in which some enzymes involved in lactose metabolism are expressed by E. coli only in the presence of lactose and absence of glucose.

In multicellular organisms, gene regulation drives cellular differentiation and morphogenesis in the embryo, leading to the creation of different cell types that possess different gene expression profiles from the same genome sequence. This explains how evolution actually works at a molecular level, and is central to the science of evolutionary developmental biology ("evo-devo").

The initiating event leading to a change in gene expression includes activation or deactivation of receptors.

Regulated stages of gene expression

Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The following is a list of stages at which gene expression is regulated; the most extensively utilised point is transcription initiation:

Modification of DNA

In eukaryotes, the accessibility of large regions of DNA can depend on its chromatin structure, which can be altered as a result of histone modifications directed by DNA methylation, ncRNA, or DNA-binding protein. Hence these modifications may up- or down-regulate the expression of a gene. Some of these modifications that regulate gene expression are heritable and are referred to as epigenetic regulation.

Structural

Transcription of DNA is dictated by its structure. In general, the density of its packing is indicative of the frequency of transcription. Octameric histone complexes, around which DNA is wound to form nucleosomes, are responsible for the amount of supercoiling of DNA, and these complexes can be temporarily modified by processes such as phosphorylation or more permanently modified by processes such as methylation. Such modifications are considered to be responsible for more or less permanent changes in gene expression levels.[1]

Chemical

Methylation of DNA is a common method of gene silencing. DNA is typically methylated by methyltransferase enzymes on cytosine nucleotides in a CpG dinucleotide sequence (also called "CpG islands" when densely clustered). Analysis of the pattern of methylation in a given region of DNA (which can be a promoter) can be achieved through a method called bisulfite mapping. Methylated cytosine residues are unchanged by the treatment, whereas unmethylated ones are changed to uracil. The differences are analyzed by DNA sequencing or by methods developed to quantify SNPs, such as Pyrosequencing (Biotage) or MassArray (Sequenom), measuring the relative amounts of C/T at the CG dinucleotide. Abnormal methylation patterns are thought to be involved in oncogenesis.[2]
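
To make the bisulfite logic concrete, here is a minimal Python sketch (the reference sequence, the read, and the function name are invented for illustration; real pipelines work on aligned sequencing reads): a cytosine in a CpG context that still reads as C after treatment is called methylated, while one that now reads as T is called unmethylated.

    # Minimal sketch of bisulfite methylation calling (illustrative only).
    # Assumes the read is already aligned to the reference with no indels and
    # that unmethylated C has been converted (via U) to T by the treatment.
    def call_cpg_methylation(reference: str, bisulfite_read: str):
        """Return (position, call) pairs for the CpG sites in the reference."""
        calls = []
        for i in range(len(reference) - 1):
            if reference[i] == "C" and reference[i + 1] == "G":  # a CpG site
                if bisulfite_read[i] == "C":      # survived conversion: was methylated
                    calls.append((i, "methylated"))
                elif bisulfite_read[i] == "T":    # converted to T: was unmethylated
                    calls.append((i, "unmethylated"))
        return calls

    reference      = "ACGTTCGGACGA"
    bisulfite_read = "ACGTTTGGATGA"  # hypothetical read after bisulfite treatment
    print(call_cpg_methylation(reference, bisulfite_read))
    # [(1, 'methylated'), (5, 'unmethylated'), (9, 'unmethylated')]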

Histone acetylation is also an important process in transcription. Histone acetyltransferase enzymes (HATs) such as CREB-binding protein also dissociate the DNA from the histone complex, allowing transcription to proceed. Often, DNA methylation and histone deacetylation work together in gene silencing. The combination of the two seems to be a signal for DNA to be packed more densely, lowering gene expression.[citation needed]

Regulation of transcription

1: RNA Polymerase, 2: Repressor, 3: Promoter, 4: Operator, 5: Lactose, 6: lacZ, 7: lacY, 8: lacA. Top: The gene is essentially turned off. There is no lactose to inhibit the repressor, so the repressor binds to the operator, which obstructs the RNA polymerase from binding to the promoter and making lactase. Bottom: The gene is turned on. Lactose is inhibiting the repressor, allowing the RNA polymerase to bind with the promoter, and express the genes, which synthesize lactase. Eventually, the lactase will digest all of the lactose, until there is none to bind to the repressor. The repressor will then bind to the operator, stopping the manufacture of lactase.

Regulation of transcription thus controls when transcription occurs and how much RNA is created. Transcription of a gene by RNA polymerase can be regulated by several mechanisms. Specificity factors alter the specificity of RNA polymerase for a given promoter or set of promoters, making it more or less likely to bind to them (e.g., the sigma factors used in prokaryotic transcription). Repressors bind to the operator, a DNA sequence close to or overlapping the promoter region, impeding RNA polymerase's progress along the strand and thus impeding the expression of the gene. The image to the right demonstrates regulation by a repressor in the lac operon. General transcription factors position RNA polymerase at the start of a protein-coding sequence and then release the polymerase to transcribe the mRNA. Activators enhance the interaction between RNA polymerase and a particular promoter, encouraging the expression of the gene. Activators do this by increasing the attraction of RNA polymerase for the promoter, through interactions with subunits of the RNA polymerase or indirectly by changing the structure of the DNA. Enhancers are sites on the DNA helix that are bound by activators in order to loop the DNA, bringing a specific promoter to the initiation complex. Enhancers are much more common in eukaryotes than prokaryotes, where only a few examples exist (to date).[3] Silencers are regions of DNA sequences that, when bound by particular transcription factors, can silence expression of the gene.
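
As a rough sketch of the repressor logic described above and in the lac operon figure (a toy Boolean model with illustrative names; it ignores partial induction and the details of catabolite repression), the lac genes are transcribed only when lactose is present to inhibit the repressor and glucose is absent:

    # Toy Boolean model of the lac operon logic described above (illustrative only).
    def lac_operon_expressed(lactose_present: bool, glucose_present: bool) -> bool:
        # Lactose inhibits the repressor, freeing the operator so that RNA
        # polymerase can bind the promoter and transcribe lacZ, lacY and lacA.
        repressor_bound_to_operator = not lactose_present
        # Absence of glucose is also required for strong expression.
        return (not repressor_bound_to_operator) and (not glucose_present)

    for lactose in (False, True):
        for glucose in (False, True):
            expressed = lac_operon_expressed(lactose, glucose)
            print(f"lactose={lactose}, glucose={glucose} -> expressed={expressed}")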

Regulation of transcription in cancer

In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites.[4] When many of a gene's promoter CpG sites are methylated the gene becomes silenced.[5] Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations.[6] However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs.[7] In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).

Regulation of transcription in addiction

One of the cardinal features of addiction is its persistence. The persistent behavioral changes appear to be due to long-lasting changes, resulting from epigenetic alterations affecting gene expression, within particular regions of the brain.[8] Drugs of abuse cause three types of epigenetic alteration in the brain. These are (1) histone acetylations and histone methylations, (2) DNA methylation at CpG sites, and (3) epigenetic downregulation or upregulation of microRNAs.[8][9] (See Epigenetics of cocaine addiction for some details.)

Chronic nicotine intake in mice alters brain cell epigenetic control of gene expression through acetylation of histones. This increases expression in the brain of the protein FosB, important in addiction.[10] Cigarette addiction was also studied in about 16,000 humans, including never smokers, current smokers, and those who had quit smoking for up to 30 years.[11] In blood cells, more than 18,000 CpG sites (of the roughly 450,000 analyzed CpG sites in the genome) had frequently altered methylation among current smokers. These CpG sites occurred in over 7,000 genes, or roughly a third of known human genes. The majority of the differentially methylated CpG sites returned to the level of never-smokers within five years of smoking cessation. However, 2,568 CpGs among 942 genes remained differentially methylated in former versus never smokers. Such remaining epigenetic changes can be viewed as “molecular scars”[9] that may affect gene expression.

In rodent models, drugs of abuse, including cocaine,[12] methamphetamine,[13][14] alcohol[15] and tobacco smoke products,[16] all cause DNA damage in the brain. During repair of DNA damage, some individual repair events can alter the methylation of DNA and/or the acetylations or methylations of histones at the sites of damage, and thus can contribute to leaving an epigenetic scar on chromatin.[17]

Such epigenetic scars likely contribute to the persistent epigenetic changes found in addiction.

Post-transcriptional regulation

After the DNA is transcribed and mRNA is formed, there must be some sort of regulation on how much the mRNA is translated into proteins. Cells do this by modulating the capping, splicing, addition of a Poly(A) Tail, the sequence-specific nuclear export rates, and, in several contexts, sequestration of the RNA transcript. These processes occur in eukaryotes but not in prokaryotes. This modulation is a result of a protein or transcript that, in turn, is regulated and may have an affinity for certain sequences.

Three prime untranslated regions and microRNAs

Three prime untranslated regions (3'-UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression.[18] Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.

The 3'-UTR often contains miRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.

As of 2014, the miRBase web site,[19] an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes).[20] Friedman et al.[20] estimate that >45,000 miRNA target sites within human mRNA 3'-UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.

Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs.[21] Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).[22][23]

The effects of miRNA dysregulation of gene expression seem to be important in cancer.[24] For instance, in gastrointestinal cancers, a 2015 paper identified nine miRNAs as epigenetically altered and effective in down-regulating DNA repair enzymes.[25]

The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depressive disorder, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.[26][27][28]

Regulation of translation

The translation of mRNA can also be controlled by a number of mechanisms, mostly at the level of initiation. Recruitment of the small ribosomal subunit can indeed be modulated by mRNA secondary structure, antisense RNA binding, or protein binding. In both prokaryotes and eukaryotes, a large number of RNA binding proteins exist, which often are directed to their target sequence by the secondary structure of the transcript, which may change depending on certain conditions, such as temperature or presence of a ligand (aptamer). Some transcripts act as ribozymes and self-regulate their expression.

Examples of gene regulation

  • Enzyme induction is a process in which a molecule (e.g., a drug) induces (i.e., initiates or enhances) the expression of an enzyme.
  • The induction of heat shock proteins in the fruit fly Drosophila melanogaster.
  • The Lac operon is an interesting example of how gene expression can be regulated.
  • Viruses, despite having only a few genes, possess mechanisms to regulate their gene expression, typically into an early and late phase, using collinear systems regulated by anti-terminators (lambda phage) or splicing modulators (HIV).
  • Gal4 is a transcriptional activator that controls the expression of GAL1, GAL7, and GAL10 (all of which encode enzymes for the metabolism of galactose in yeast). The GAL4/UAS system has been used in a variety of organisms across various phyla to study gene expression.[29]

Developmental biology

A large number of studied regulatory systems come from developmental biology. Examples include:
  • The colinearity of the Hox gene cluster with their nested antero-posterior patterning
  • Pattern generation of the hand (digits and interdigits): the gradient of sonic hedgehog (a secreted inducing factor) from the zone of polarizing activity in the limb creates a gradient of active Gli3, which activates Gremlin, which in turn inhibits BMPs also secreted in the limb; the result is an alternating pattern of activity formed by this reaction-diffusion system.
  • Somitogenesis is the creation of segments (somites) from a uniform tissue (Pre-somitic Mesoderm). They are formed sequentially from anterior to posterior. This is achieved in amniotes possibly by means of two opposing gradients, Retinoic acid in the anterior (wavefront) and Wnt and Fgf in the posterior, coupled to an oscillating pattern (segmentation clock) composed of FGF + Notch and Wnt in antiphase.[30]
  • Sex determination in the soma of Drosophila requires the sensing of the ratio of autosomal genes to sex chromosome-encoded genes, which results in the production of the Sex-lethal splicing factor in females, leading to the female isoform of doublesex.[31]

Circuitry

Up-regulation and down-regulation

Up-regulation is a process that occurs within a cell, triggered by a signal (originating internally or externally to the cell), which results in increased expression of one or more genes and, as a result, of the protein(s) encoded by those genes. Conversely, down-regulation is a process resulting in decreased gene and corresponding protein expression.
  • Up-regulation occurs, for example, when a cell is deficient in some kind of receptor. In this case, more receptor protein is synthesized and transported to the membrane of the cell and, thus, the sensitivity of the cell is brought back to normal, reestablishing homeostasis.
  • Down-regulation occurs, for example, when a cell is overstimulated by a neurotransmitter, hormone, or drug for a prolonged period of time, and the expression of the receptor protein is decreased in order to protect the cell (see also tachyphylaxis).

Inducible vs. repressible systems

Gene Regulation can be summarized by the response of the respective system:
  • Inducible systems - An inducible system is off unless there is the presence of some molecule (called an inducer) that allows for gene expression. The molecule is said to "induce expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
  • Repressible systems - A repressible system is on except in the presence of some molecule (called a corepressor) that suppresses gene expression. The molecule is said to "repress expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
The GAL4/UAS system is an example of both an inducible and repressible system. Gal4 binds an upstream activation sequence (UAS) to activate the transcription of the GAL1/GAL7/GAL10 cassette. On the other hand, a MIG1 response to the presence of glucose can inhibit GAL4 and therefore stop the expression of the GAL1/GAL7/GAL10 cassette.[32]

Theoretical circuits

  • Repressor/Inducer: an activation of a sensor results in the change of expression of a gene
  • negative feedback: the gene product downregulates its own production directly or indirectly (a minimal simulation sketch follows this list), which can result in
    • keeping transcript levels constant/proportional to a factor
    • inhibition of run-away reactions when coupled with a positive feedback loop
    • creating an oscillator by taking advantage of the time delay of transcription and translation, given that the mRNA and protein half-lives are shorter
  • positive feedback: the gene product upregulates its own production directly or indirectly, which can result in
    • signal amplification
    • bistable switches when two genes inhibit each other and both have positive feedback
    • pattern generation
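
As a minimal illustration of the negative-feedback case above (a toy discrete-time model with made-up rate constants, not a model of any particular gene), a product that represses its own production settles to a steady level instead of running away:

    # Toy simulation of a negative-feedback gene circuit (illustrative parameters).
    def simulate(steps=50, k_production=10.0, k_decay=0.5, K=2.0):
        """Product P represses its own synthesis: rate = k / (1 + P/K); P decays at k_decay * P."""
        P = 0.0
        trajectory = []
        for _ in range(steps):
            production = k_production / (1.0 + P / K)  # repressed by its own product
            P = P + production - k_decay * P
            trajectory.append(P)
        return trajectory

    levels = simulate()
    print(f"final level ~ {levels[-1]:.2f}  (approaches a steady state rather than growing without bound)")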

Study methods

In general, most experiments investigating differential expression have used whole-cell extracts of RNA, called steady-state levels, to determine which genes changed and by how much. These measurements are, however, not informative of where the regulation has occurred and may actually mask conflicting regulatory processes (see post-transcriptional regulation), but steady-state analysis is still the most common approach (quantitative PCR and DNA microarray).
When studying gene expression, there are several methods to look at the various stages. In eukaryotes these include:
  • The local chromatin environment of the region can be determined by ChIP-chip analysis by pulling down RNA Polymerase II, Histone 3 modifications, Trithorax-group protein, Polycomb-group protein, or any other DNA-binding element to which a good antibody is available.
  • Epistatic interactions can be investigated by synthetic genetic array analysis
  • Due to post-transcriptional regulation, transcription rates and total RNA levels differ significantly. To measure the transcription rates nuclear run-on assays can be done and newer high-throughput methods are being developed, using thiol labelling instead of radioactivity.[33]
  • Only 5% of the RNA polymerised in the nucleus actually exits,[34] and not only introns but also abortive products and nonsense transcripts are degraded. Therefore, the differences in nuclear and cytoplasmic levels can be seen by separating the two fractions by gentle lysis.[35]
  • Alternative splicing can be analysed with a splicing array or with a tiling array (see DNA microarray).
  • All in vivo RNA is complexed as RNPs. The quantity of transcripts bound to a specific protein can also be analysed by RIP-Chip. For example, pulling down DCP2 gives an indication of sequestered transcripts, while ribosome-bound RNA gives an indication of transcripts active in translation (although a more dated method, called polysome fractionation, is still popular in some labs).
  • Protein levels can be analysed by Mass spectrometry, which can be compared only to quantitative PCR data, as microarray data is relative and not absolute.
  • RNA and protein degradation rates are measured by means of transcription inhibitors (actinomycin D or α-amanitin) or translation inhibitors (Cycloheximide), respectively.

Taming the Multiverse

August 7, 2001 by Marcus Chown
Original link:  http://www.kurzweilai.net/taming-the-multiverse

In Ray Kurzweil’s The Singularity is Near, physicist Sir Roger Penrose is paraphrased as suggesting it is impossible to perfectly replicate a set of quantum states, so therefore perfect downloading (i.e., creating a digital or synthetic replica of the human brain based upon quantum states) is impossible. What would be required to make it possible? A solution to the problem of quantum teleportation, perhaps. But there is a further complication: the multiverse. Do we live in a world of schizophrenic tables? Does free will negate the possibility of perfect replication?

Originally published July 14, 2001 at New Scientist. Published on KurzweilAI.net August 7, 2001. Original article at New Scientist.

The Singularity is Near précis can be read here. The mechanics of quantum teleportation can be found here.

Parallel universes are no longer a figment of our imagination. They’re so real that we can reach out and touch them, and even use them to change our world.

Flicking through New Scientist, you stop at this page, think “that’s interesting” and read these words. Another you thinks “what nonsense”, and moves on. Yet another lets out a cry, keels over and dies.
Is this an insane vision? Not according to David Deutsch of the University of Oxford. Deutsch believes that our Universe is part of the multiverse, a domain of parallel universes that comprises ultimate reality.

Until now, the multiverse was a hazy, ill-defined concept, little more than a philosophical trick. But in a paper yet to be published, Deutsch has worked out the structure of the multiverse. With it, he claims, he has answered the last criticism of the sceptics. "For 70 years physicists have been hiding from it, but they can hide no longer." If he's right, the multiverse is no trick. It is real. So real that we can mold the fate of the universes and exploit them.

Why believe in something so extraordinary? Because it can explain one of the greatest mysteries of modern science: why the world of atoms behaves so very differently from the everyday world of trees and tables.

The theory that describes atoms and their constituents is quantum mechanics. It is hugely successful. It has led to computers, lasers and nuclear reactors, and it tells us why the Sun shines and why the ground beneath our feet is solid. But quantum theory also tells us something very disturbing about atoms and their like: they can be in many places at once. This isn't just a crazy theory: it has observable consequences (see "Interfering with the multiverse").

But how is it that atoms can be in many places at once whereas big things made out of atoms (tables, trees and pencils) apparently cannot? Reconciling the difference between the microscopic and the macroscopic is the central problem in quantum theory.

The many worlds interpretation is one way to do it. This idea was proposed by Princeton graduate student Hugh Everett III in 1957. According to many worlds, quantum theory doesn’t just apply to atoms, says Deutsch. “The world of tables is exactly the same as the world of atoms.”

But surely this means tables can be in many places at once. Right. But nobody has ever seen such a schizophrenic table. So what gives?

The idea is that if you observe a table that is in two places at once, there are also two versions of you-one that sees the table in one place and one that sees it in another place.

The consequences are remarkable. A universe must exist for every physical possibility. There are Earths where the Nazis prevailed in the Second World War, where Marilyn Monroe married Einstein, and where the dinosaurs survived and evolved into intelligent beings who read New Scientist.

However, many worlds is not the only interpretation of quantum theory. Physicists can choose between half a dozen interpretations, all of which predict identical outcomes for all conceivable experiments.

Deutsch dismisses them all. "Some are gibberish, like the Copenhagen interpretation," he says, and the rest are just variations on the many worlds theme.

For example, according to the Copenhagen interpretation, the act of observing is crucial. Observation forces an atom to make up its mind, and plump for being in only one place out of all the possible places it could be. But the Copenhagen interpretation is itself open to interpretation. What constitutes an observation? For some people, this only requires a large-scale object such as a particle detector. For others it means an interaction with some kind of conscious being.

Worse still, says Deutsch, is that in this type of interpretation you have to abandon the idea of reality. Before observation, the atom doesn’t have a real position. To Deutsch, the whole thing is mysticism-throwing up our hands and saying there are some things we are not allowed to ask.

Some interpretations do try to give the microscopic world reality, but they are all disguised versions of the many worlds idea, says Deutsch. “Their proponents have fallen over backward to talk about the many worlds in a way that makes it appear as if they are not.”

In this category, Deutsch includes David Bohm’s “pilot-wave” interpretation. Bohm’s idea is that a quantum wave guides particles along their trajectories. Then the strange shape of the pilot wave can be used to explain all the odd quantum behaviours, such as interference patterns. In effect, says Deutsch, Bohm’s single universe occupies one groove in an immensely complicated multi-dimensional wave function.

“The question that pilot-wave theorists must address is: what are the unoccupied grooves?” says Deutsch. “It is no good saying they are merely theoretical and do not exist physically, for they continually jostle each other and the occupied groove, affecting its trajectory. What’s really being talked about here is parallel universes. Pilot-wave theories are parallel-universe theories in a state of chronic denial.”

Back and forth

Another disguised many worlds theory, says Deutsch, is John Cramer’s “transactional” interpretation in which information passes backward and forward through time. When you measure the position of an atom, it sends a message back to its earlier self to change its trajectory accordingly.

But as the system gets more complicated, the number of messages explodes. Soon, says Deutsch, it becomes vastly greater than the number of particles in the Universe. The full quantum evolution of a system as big as the Universe consists of an exponentially large number of classical processes, each of which contains the information to describe a whole universe. So Cramer’s idea forces the multiverse on you, says Deutsch.

So do other interpretations, according to Deutsch. “Quantum theory leaves no doubt that other universes exist in exactly the same sense that the single Universe that we see exists,” he says. “This is not a matter of interpretation. It is a logical consequence of quantum theory.”

Yet many physicists still refuse to accept the multiverse. “People say the many worlds is simply too crazy, too wasteful, too mind-blowing,” says Deutsch. “But this is an emotional not a scientific reaction. We have to take what nature gives us.”

A much more legitimate objection is that many worlds is vague and has no firm mathematical basis. Proponents talk of a multiverse that is like a stack of parallel universes. The critics point out that it cannot be that simple: quantum phenomena occur precisely because the universes interact. "What is needed is a precise mathematical model of the multiverse," says Deutsch. And now he's made one.

The key to Deutsch's model sounds peculiar. He treats the multiverse as if it were a quantum computer. Quantum computers exploit the strangeness of quantum systems, their ability to be in many states at once, to do certain kinds of calculation at ludicrously high speed. For example, they could quickly search huge databases that would take an ordinary computer the lifetime of the Universe. Although the hardware is still at a very basic stage, the theory of how quantum computers process information is well advanced.

In 1985, Deutsch proved that such a machine can simulate any conceivable quantum system, and that includes the Universe itself. So to work out the basic structure of the multiverse, all you need to do is analyze a general quantum calculation. “The set of all programs that can be run on a quantum computer includes programs that would simulate the multiverse,” says Deutsch. “So we don’t have to include any details of stars and galaxies in the real Universe, we can just analyze quantum computers and look at how information flows inside them.”

If information could flow freely from one part of the multiverse to another, we’d live in a chaotic world where all possibilities would overlap. We really would see two tables at once, and worse, everything imaginable would be happening everywhere at the same time.

Deutsch found that, almost all the time, information flows only within small pieces of the quantum calculation, and not in between those pieces. These pieces, he says, are separate universes. They feel separate and autonomous because all the information we receive through our senses has come from within one universe. As Oxford philosopher Michael Lockwood put it, “We cannot look sideways, through the multiverse, any more than we can look into the future.”

Sometimes universes in Deutsch’s model peel apart only locally and fleetingly, and then slap back together again. This is the cause of quantum interference, which is at the root of everything from the two-slit experiment to the basic structure of atoms.

Other physicists are still digesting what Deutsch has to say. Anton Zeilinger of the University of Vienna remains unconvinced. “The multiverse interpretation is not the only possible one, and it is not even the simplest,” he says. Zeilinger instead uses information theory to come to very different conclusions. He thinks that quantum theory comes from limits on the information we get out of measurements (New Scientist, 17 February, p 26). As in the Copenhagen interpretation, there is no reality to what goes on before the measurement.

But Deutsch insists that his picture is more profound than Zeilinger’s. “I hope he’ll come round, and realize that the many worlds theory explains where the information in his measurements comes from.”

Why are physicists reluctant to accept many worlds? Deutsch blames logical positivism, the idea that science should concern itself only with objects that can be observed. In the early 20th century, some logical positivists even denied the existence of atoms-until the evidence became overwhelming. The evidence for the multiverse, according to Deutsch, is equally overwhelming. “Admittedly, it’s indirect,” he says. “But then, we can detect pterodactyls and quarks only indirectly too. The evidence that other universes exist is at least as strong as the evidence for pterodactyls or quarks.”

Perhaps the sceptics will be convinced by a practical demonstration of the multiverse. And Deutsch thinks he knows how. By building a quantum computer, he says, we can reach out and mold the multiverse.

“One day, a quantum computer will be built which does more simultaneous calculations than there are particles in the Universe,” says Deutsch. “Since the Universe as we see it lacks the computational resources to do the calculations, where are they being done?” It can only be in other universes, he says. “Quantum computers share information with huge numbers of versions of themselves throughout the multiverse.”

Imagine that you have a quantum PC and you set it a problem. What happens is that a huge number of versions of your PC split off from this Universe into their own separate, local universes, and work on parallel strands of the problem. A split second later, the pocket universes recombine into one, and those strands are pulled together to provide the answer that pops up on your screen. “Quantum computers are the first machines humans have ever built to exploit the multiverse directly,” says Deutsch.

At the moment, even the biggest quantum computers can only work their magic on about 6 bits of information, which in Deutsch's view means they exploit copies of themselves in 2^6 universes: that's just 64 of them. Because the computational feats of such computers are puny, people can choose to ignore the multiverse. "But something will happen when the number of parallel calculations becomes very large," says Deutsch. "If the number is 64, people can shut their eyes but if it's 10^64, they will no longer be able to pretend."

What would it mean for you and me to know there are inconceivably many yous and mes living out all possible histories? Surely, there is no point in making any choices for the better if all possible outcomes happen? We might as well stay in bed or commit suicide.

Deutsch does not agree. In fact, he thinks it could make real choice possible. In classical physics, he says, there is no such thing as “if”; the future is determined absolutely by the past. So there can be no free will. In the multiverse, however, there are alternatives; the quantum possibilities really happen. Free will might have a sensible definition, Deutsch thinks, because the alternatives don’t have to occur within equally large slices of the multiverse. “By making good choices, doing the right thing, we thicken the stack of universes in which versions of us live reasonable lives,” he says. “When you succeed, all the copies of you who made the same decision succeed too. What you do for the better increases the portion of the multiverse where good things happen.”

Let’s hope that deciding to read this article was the right choice.

Interfering with the multiverse

You can see the shadow of other universes using little more than a light source and two metal plates. This is the famous double-slit experiment, the touchstone of quantum weirdness.

Particles from the atomic realm such as photons, electrons or atoms are fired at the first plate, which has two vertical slits in it. The particles that go through hit the second plate on the far side.

Imagine the places that are hit show up black and that the places that are not hit show up white. After the experiment has been running for a while, and many particles have passed through the slits, the plate will be covered in vertical stripes alternating black and white. That is an interference pattern.

To make it, particles that passed through one slit have to interfere with particles that passed through the other slit. The pattern simply does not form if you shut one slit.

The strange thing is that the interference pattern forms even if particles come one at a time, with long periods in between. So what is affecting these single particles?

According to the many worlds interpretation, each particle interferes with another particle going through the other slit. What other particle? “Another particle in a neighboring universe,” says David Deutsch. He believes this is a case where two universes split apart briefly, within the experiment, then come back together again. “In my opinion, the argument for the many worlds was won with the double-slit experiment. It reveals interference between neighboring universes, the root of all quantum phenomena.”

Reproduced with permission from New Scientist issue dated July 14, 2001.

Sunday, July 15, 2018

The Paradigms and Paradoxes of Intelligence: Building a Brain

August 6, 2001 by Ray Kurzweil
Original link:  http://www.kurzweilai.net/the-paradigms-and-paradoxes-of-intelligence-building-a-brain

How to build a brain, written for “The Futurecast,” a monthly column in the Library Journal.

Originally published November 1992. Published on KurzweilAI.net August 6, 2001.

In the last two columns, we examined two methods for emulating intelligence in a machine. The recursive paradigm involves the application of massive computing power to analyze the implications of every possible course of action (e.g., every allowable move in a chess game) followed by every possible course of reaction that could follow each course of action, and so on in an exponentially exploding tree of possibilities. The neural net paradigm involves the cascading of networks of neurons, where each neuron simplifies thousands of inputs into a single judgment.
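
A minimal sketch of the recursive paradigm, stripped of any chess specifics (the toy game tree and its scores are invented for illustration): the value of a position is found by recursively evaluating every position reachable from it, alternating between a maximizing and a minimizing player.

    # Minimal minimax sketch of the recursive paradigm (toy game tree, illustrative only).
    def minimax(node, maximizing=True):
        """node is either a numeric leaf score or a list of child positions."""
        if isinstance(node, (int, float)):  # leaf: a terminal evaluation
            return node
        child_values = [minimax(child, not maximizing) for child in node]
        return max(child_values) if maximizing else min(child_values)

    # A tiny hypothetical tree: two possible moves, each answered by two replies.
    game_tree = [[3, 5],   # move A: opponent's replies lead to scores 3 or 5
                 [2, 9]]   # move B: opponent's replies lead to scores 2 or 9
    print(minimax(game_tree))  # 3: the opponent picks the reply worst for us at each branch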

Up until recently, humans and computers have used radically different strategies in their intelligent decision-making. For example, in chess (our quintessential laboratory for studying intelligence), a human chess master memorizes about 50,000 situations and then uses his or her neural net-based pattern recognition capabilities to recognize which of these situations is most applicable to the current board position. In contrast, the computer chess master typically memorizes very few board positions and relies instead on its ability to analyze in depth every possible course of action during the time of play. The computer player will typically analyze between one million and one billion board positions for each move. In contrast, the human player does not have time to consider more than a few dozen board positions, hence the reliance on a memory of previously analyzed situations.

Humans have neither the precise memory nor the mental speed to excel at using the recursive paradigm (not without a computer to help). However, while humans will never master the recursive paradigm, machines are not restricted to it. Machines programmed to use the neural net paradigm are displaying a rapidly increasing ability to learn patterns (of speech, handwriting, faces, etc.) in a manner similar to, if still cruder than, their human creators.

Computer neural net simulations have been limited by two factors: the number of neural connections that can be simulated in real time and the capacity of computer memories. While human neurons are slow (a million times slower than electronic circuits), every neuron and every interneuronal connection is operating simultaneously. With about 100 billion neurons and an average of 1000 connections per neuron, there are about 100 trillion connections operating at the same time. At about 200 computations per second for each connection, that comes to 20 million billion (2 x 10^16) calculations per second.

So how does that compare to the state-of-the-art in human-created technology? Specialized neural computers have been developed that can simulate neurons directly in hardware. These operate about a thousand times faster than neural networks simulated in software on conventional PCs. One such neural computer, the Ricoh RN100, can process 128 million connection computations per second. This type of computer represents significant progress, but it is still 150 million times slower than the human brain.
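(The arithmetic behind these figures is easy to check; this back-of-the-envelope reconstruction is mine, not the column's.)

# Back-of-the-envelope check of the figures in the text.
neurons = 100e9                  # about 100 billion neurons
connections_per_neuron = 1000    # about 1000 connections per neuron
calcs_per_connection = 200       # about 200 computations per second per connection

brain_rate = neurons * connections_per_neuron * calcs_per_connection
print(f"brain: {brain_rate:.1e} calculations per second")   # about 2e16

rn100_rate = 128e6               # Ricoh RN100: 128 million connection computations per second
print(f"shortfall: {brain_rate / rn100_rate:.2e}x")         # about 1.6e8, i.e. roughly 150 million times slower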

Moore speed and memory

So enters Moore's Law. Moore's Law is the driving force behind a technological revolution so vast that the entire computer revolution to date represents only a minor ripple of its ultimate implications. Simply stated, Moore's Law says that computing speeds and densities double every 18 months. In other words, every 18 months we can buy a computer that is twice as fast and has twice as much memory for the same cost. Remarkably, this trend has held for roughly a century through numerous changes in underlying methods – from the mechanical card-based computing technology of the 1890 census, to the relay-based computers of the 1940s, to the vacuum tube-based computers of the 1950s, to the transistor-based machines of the 1960s, to the integrated circuits of today. It has held across thousands of different calculators and computers over the past 100 years. Computer memory, for example, is about 150 million times more powerful today (for the same unit cost) than it was in 1950.
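(A gloss on the column, not from it: a doubling every 18 months compounds to a factor of 2^(t/1.5) over t years, which over the four decades since 1950 lands in the same hundred-million-fold range as the memory figure cited above.)

# Compounding of Moore's Law: one doubling every 18 months.
def moores_factor(years, doubling_time=1.5):
    return 2 ** (years / doubling_time)

print(moores_factor(7.5))   # 32x in 7.5 years
print(moores_factor(42))    # roughly 2.7e8 from 1950 to 1992, the same order of magnitude as the figure in the text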

Moore’s Law will continue unabated for many decades to come. We have not even begun to explore the third dimension in chip design. Chips today are flat, whereas the brain is organized in three dimensions. Improvements in semiconductor materials, including the development of superconducting circuits that do not generate heat, will enable the development of chips (actually cubes) with thousands of layers of circuitry combined with far smaller component geometries for an improvement in computer power of many million-fold. There are more than enough new computing technologies being developed to assure a continuation of Moore’s Law for a very long time.

Moore's Law does more than double computing power every 18 months. It doubles both the capacity of computation (the number of computing elements) and the speed (the number of calculations per second) of each computing element. Since a neural computer is inherently massively parallel, each doubling of capacity and speed actually multiplies the number of neural connections per second by four. Thus, we can increase the power of our neural computer by a factor of 1000 every 7 1/2 years. To provide just one example among many that this rate of progress is quite reasonable, Ricoh has just announced a new version of its neural computer that is 12 times faster than the one developed two years ago. At this rate, a personal neural computer will match the capacity of the human brain in terms of neuron connections per second (i.e., 2 x 10^16 calculations per second) in about 20 years, or in the year 2012. Achieving the memory capacity of the human brain (10^14 analog values stored at the synapses) will take a little longer – about 27 years, or in the year 2019.
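(The projection can be reconstructed from the same compounding; again, this is my reconstruction in Python, not the original worksheet.)

import math

# Each 18-month generation doubles both the number of elements and their speed: a 4x gain in throughput.
per_generation = 4
generation_years = 1.5

print(per_generation ** 5)        # 4^5 = 1024, i.e. roughly 1000x every 7.5 years

start_rate = 128e6                # neural hardware of 1992 (the RN100 figure above)
target_rate = 2e16                # the human-brain connection rate estimated earlier
generations = math.log(target_rate / start_rate, per_generation)
print(f"{generations * generation_years:.1f} years")   # about 20 years, i.e. around 2012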

Reaching this threshold will not cause Moore's Law to slow down. As we go through the 21st century, computer circuits will be grown like crystals, with computing taking place at the molecular level. By the year 2040, your state-of-the-art personal computer will be able to simulate a society of 10,000 brains, each of which would be operating at a speed 10,000 times faster than a human brain. Or, alternatively, it could implement a single mind with 10,000 times the memory capacity of a human brain and 100 million times the speed.

The sources of knowledge

However, raw computing speed and memory capacity, even if implemented in massively parallel neural nets, will not automatically result in human-level intelligence. The architecture and organization of these resources are at least as important as the capacity itself. And then of course, neural net-based systems will need to learn their lessons. After all, the human neural net spends at least a couple of decades learning before it is considered ready for most useful tasks.

There are several sources of such knowledge. One is the extensive array of research efforts (still performed by humans) to understand the algorithms and methods underlying the hundreds of faculties we collectively call human intelligence. Progress in this arena is steady if painstaking, although in many areas – e.g., speech recognition – algorithms already exist that are just waiting for more powerful computers to enable them.

There is, of course, a source of knowledge that we can tap to accelerate greatly our understanding of how to design intelligence in a machine, and that is the human brain itself. By probing the brain's circuits, we can essentially copy a proven design, one that took its original designer several billion years to develop. Just as the Human Genome Project (in which the entire human genetic code is being scanned and recorded) will accelerate the ability to create new treatments and drugs, a similar effort to scan and record the neural organization of the human brain can help provide the templates of intelligence. This effort has already begun. For example, an artificial retina chip created by Synaptics is fundamentally a copy of the neural organization (implemented in silicon, of course) of the human retina and its visual processing layer.

High-speed, high-resolution magnetic resonance imaging (MRI) scanners are already able to resolve individual somas (neuron cell bodies) without disturbing the living tissue being scanned. More powerful MRIs, using larger magnets, would be capable of scanning individual nerve fibers that are only ten microns in diameter. Eventually, we will be able to automatically scan the presynaptic vesicles that are the site of human learning.

Layers of intelligence

This suggests two scenarios. The first is to scan portions of a brain to ascertain the architecture of interneuronal connections in different regions. The exact position of each nerve fiber is not as important as the overall pattern. With this information, we can design artificial neural nets that will operate similarly. This process will be like peeling an onion as each layer of human intelligence is revealed.

A more difficult scenario would be to noninvasively scan someone’s brain to map the locations and interconnections of the somas, axons, dendrites, synapses, presynaptic vesicles, and other neural components. Its entire organization could then be re-created on a neural computer of sufficient capacity, including the contents of its memory. We can peer inside someone’s brain today with MRI scanners, which are increasing their resolution with each new generation of the device.

There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth (i.e., speed of scanning), freedom from vibration, and safety. For a number of reasons, it will be easier to scan the brain of someone recently deceased than of someone still living (it is easier to get someone deceased to sit still, for one thing), but noninvasively scanning a living brain will ultimately become feasible as MRI and other scanning technologies continue to improve in resolution and speed.

If people were scanned and then re-created in a neural computer, one might wonder just who those people in the machine are. The answer would depend on whom you ask. If you ask the people in the machine, they would strenuously claim to be the original persons, having lived certain lives, gone into a scanner, and then woken up in the machine. On the other hand, the original people who were scanned would claim that the people in the machine are impostors who appear to share their memories and personality but are definitely different people.

Many other issues are raised by these scenarios. A machine intelligence that was derived from human intelligence would need a body; a disembodied mind would quickly become depressed. While progress will be made in this area as well, building a suitable artificial body will in many ways be more challenging than building an artificial mind. Even partial success in the first and easier of the two scenarios above (scanning portions of a brain to ascertain general principles of construction) will present new dilemmas. If, as seems likely, the next century produces PCs with memory capacities and computational capabilities vastly outstripping the human brain, even a partial mastery of human cognitive faculties will be formidable. At a minimum, we are likely to see the Luddite issue (i.e., concern over the negative impact of machines on human employment) become of intense interest once again. We will examine these and other issues when we take a look at the impact of machine intelligence on life in the 21st century in an upcoming series of Futurecasts.

Meanwhile, back in the closing days of the 20th century, we all share an intense interest in making the most of our human intelligence and our frail yet marvelous human bodies. I have long been interested in our health and well-being and have arrived at a rather unexpected perspective: we actually have the knowledge to virtually eliminate heart disease and cancer. I will share some of these thoughts in the next Futurecast.

Reprinted with permission from Library Journal, November 1992. Copyright © 1992, Reed Elsevier, USA

Equality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Equality_...