Adding a molecular structure containing carbon, chromium, and oxygen atoms retains graphene’s superior conductive properties. The metal atoms to be bonded (silver, in this experiment) are then added to the oxygen atoms on top. (credit: Songwei Che et al./Nano Letters)
University of Illinois at Chicago scientists have solved a fundamental problem that has held back the use of the wonder material graphene in a wide variety of electronics applications.
When graphene is bonded (attached) to metal atoms (such as
molybdenum) in devices such as solar cells, graphene’s superior
conduction properties degrade.
The solution: Instead of adding molecules directly to the individual
carbon atoms of graphene, the new method first adds a sort of buffer
(consisting of chromium, carbon, and oxygen atoms) to the graphene, and
then adds the metal atoms to this buffer material instead. That enables
the graphene to retain its unique properties of electrical conduction.
In an experiment, the researchers successfully added silver nanoparticles to graphene with this method. That boosted the power-conversion efficiency of graphene-based solar cells about 11-fold, said Vikas Berry, associate professor and department head of chemical engineering and senior author of a paper on the research, published in Nano Letters.
Researchers at the Indian Institute of Technology and Clemson
University were also involved in the study. The research was funded by
the National Science Foundation.
Abstract of Retained Carrier-Mobility and Enhanced Plasmonic-Photovoltaics of Graphene via ring-centered η6 Functionalization and Nanointerfacing
Binding graphene with auxiliary nanoparticles for plasmonics,
photovoltaics, and/or optoelectronics, while retaining the
trigonal-planar bonding of sp2 hybridized carbons to maintain
its carrier-mobility, has remained a challenge. The conventional
nanoparticle-incorporation route for graphene is to create
nucleation/attachment sites via “carbon-centered” covalent
functionalization, which changes the local hybridization of carbon atoms
from trigonal-planar sp2 to tetrahedral sp3. This
disrupts the lattice planarity of graphene, thus dramatically
deteriorating its mobility and innate superior properties. Here, we show
large-area, vapor-phase, “ring-centered” hexahapto (η6) functionalization of graphene to create nucleation-sites for silver nanoparticles (AgNPs) without disrupting its sp2 character. This is achieved by the grafting of chromium tricarbonyl [Cr(CO)3] with all six carbon atoms (sigma-bonding) in the benzenoid ring on graphene to form an (η6-graphene)Cr(CO)3 complex.
This nondestructive functionalization preserves the lattice continuum
with a retention in charge carrier mobility (9% increase at 10 K); with
AgNPs attached on graphene/n-Si solar cells, we report an ∼11-fold
plasmonic-enhancement in the power conversion efficiency (1.24%).
The gravitational interaction of antimatter with matter or antimatter
has not been conclusively observed by physicists. While the consensus
among physicists is that gravity will attract both matter and antimatter
at the same rate that matter attracts matter, there is a strong desire
to confirm this experimentally.
Antimatter's rarity and its tendency to annihilate when brought into contact with matter make its study a technically demanding task. Most methods for creating antimatter (specifically antihydrogen) result in particles and atoms with high kinetic energy, which are unsuitable for gravity-related study. In recent years, first ALPHA[1][2] and then ATRAP[3]
have trapped antihydrogen atoms at CERN; in 2012 ALPHA used such atoms
to set the first loose free-fall bounds on the gravitational interaction
of antimatter with matter, measured to within ±7500% of ordinary
gravity[4], which was not enough for a clear statement about the sign of gravity
acting on antimatter. Future experiments need to be performed with
higher precision, either with beams of antihydrogen (AEGIS) or with
trapped antihydrogen (ALPHA or GBAR).
Three hypotheses
Thus far, there are three hypotheses about how antimatter gravitationally interacts with normal matter:
Normal gravity: The standard assumption is that gravitational interactions of matter and antimatter are identical.
Antigravity: Some authors argue that antimatter repels matter with the same magnitude as matter attracts itself (see below).
Gravivector and graviscalar: Difficulties in creating quantum gravity theories have led to the idea that antimatter may react with a slightly different magnitude.[5]
Experiments
Supernova 1987A
One source of experimental evidence in favor of normal gravity was the observation of neutrinos from Supernova 1987A. In 1987, three neutrino detectors around the world simultaneously observed a cascade of neutrinos emanating from a supernova in the Large Magellanic Cloud. Although the supernova happened about 164,000 light-years away, neutrinos and antineutrinos appear to have been detected virtually simultaneously.
If both were actually observed, then any difference in the
gravitational interaction would have to be very small. However, neutrino
detectors cannot distinguish perfectly between neutrinos and
antineutrinos; in fact, the two may be identical. Some physicists conservatively estimate that there is less than a 10%
chance that no regular neutrinos were observed at all. Others estimate
even lower probabilities, some as low as 1%.[6] Unfortunately, this accuracy is unlikely to be improved by duplicating the experiment any time soon. The last known supernova to occur at such a close range prior to Supernova 1987A was around 1867.[7]
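For a rough sense of scale only (a back-of-the-envelope figure, not the published bound, which is derived from the gravitational Shapiro delay along the line of sight rather than the total flight time): both species arriving within the roughly 13-second span of the observed burst, after a flight of about 164,000 years, corresponds to a fractional timing difference of

```latex
\frac{\Delta t}{T} \;\lesssim\; \frac{13\ \mathrm{s}}{1.64\times10^{5}\ \mathrm{yr}\times 3.15\times10^{7}\ \mathrm{s/yr}} \;\approx\; 2.5\times10^{-12}.
```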
Fairbank's experiments
Physicist William Fairbank attempted a laboratory experiment to directly measure the gravitational acceleration of both electrons and positrons. However, their charge-to-mass ratio is so large that electromagnetic effects overwhelmed the experiment.
It is difficult to directly observe gravitational forces at the
particle level. For charged particles, the electromagnetic force
overwhelms the much weaker gravitational interaction. Even antiparticles
in neutral antimatter, such as antihydrogen, must be kept separate from
their counterparts in the matter that forms the experimental equipment,
which requires strong electromagnetic fields. These fields, e.g. in the
form of atomic traps, exert forces on these antiparticles which easily
overwhelm the gravitational force of Earth and nearby test masses. Since
all production methods for antiparticles result in high-energy
antimatter particles, the necessary cooling for observation of
gravitational effects in a laboratory environment requires very
elaborate experimental techniques and very careful control of the
trapping fields.
Cold neutral antihydrogen experiments
Since 2010 the production of cold antihydrogen has become possible at the Antiproton Decelerator at CERN.
Antihydrogen, which is electrically neutral, should make it possible to
directly measure the gravitational attraction of antimatter particles
to the matter Earth. In 2013, experiments on antihydrogen atoms
released from the ALPHA trap set direct (free-fall) but coarse limits on
antimatter gravity.[4] With a relative precision of ±100%, these limits were
far from a clear statement even about the sign of gravity acting on
antimatter. Future experiments at CERN with beams of antihydrogen, such
as AEGIS, or with trapped antihydrogen, such as ALPHA and GBAR, will have to
improve the sensitivity to make a clear scientific statement about
gravity on antimatter.[8]
Superconductor-positron interactions
A hypothesis suggested by early experiments on positron interactions with high-temperature superconductors (HTSCs) is that, under certain conditions, the weak hypothetical antigravitational fields of the positrons could form a beam. If so, a relatively simple device could produce such a beam: a YBCO or BSCCO disk acoustically coupled to three or more ultrasonic transducers, arranged so that the vibrational pattern of the Cooper-pair-generating domains rotates or precesses around the central axis under a weak electrical bias. The beam could then be detected with relatively simple Peltier-cooled linear accelerometers of the kind common in cellphones and other devices.[9]
A pair of atomic clocks (e.g., Rb modules used as primary standards), one placed in the beam and the other used as an absolute reference, both powered by independent batteries with no magnetic components (i.e., lead-acid), should over time show a discrepancy well beyond that expected for a magnetic field.
The effect should also scale with distance: at twice the distance, one would expect 1/4 of the effect, following the inverse-square law.
As this would itself be new physics, it is not clear whether it would have significant effects on large amounts of antimatter in nature; if the antiparticles were also entangled, there could be a larger effect on cosmological scales.
Arguments against a gravitational repulsion of matter and antimatter
When
antimatter was first discovered in 1932, physicists wondered about how
it would react to gravity. Initial analysis focused on whether
antimatter should react the same as matter or react oppositely. Several
theoretical arguments arose which convinced physicists that antimatter
would react exactly the same as normal matter. They inferred that a
gravitational repulsion between matter and antimatter was implausible, as
it would violate CPT invariance, violate conservation of energy, lead to vacuum instability, and produce CP violation. It was also theorized that it would be inconsistent with the results of the Eötvös test of the weak equivalence principle. Many of these early theoretical objections were later overturned.[10]
The equivalence principle
The equivalence principle
predicts that the gravitational acceleration of antimatter is the same
as that of ordinary matter. A matter-antimatter gravitational repulsion
is thus excluded from this point of view. Furthermore, photons, which are their own antiparticles in the framework of the Standard Model, have in a large number of astronomical tests (gravitational redshift and gravitational lensing, for example) been observed to interact with the gravitational field of ordinary matter exactly as predicted by the general theory of relativity. This is a feature that has to be explained by any theory predicting that matter and antimatter repel.
CPT theorem
The CPT theorem implies that the difference between the properties of a matter particle and those of its antimatter counterpart is completely
described by C-inversion. Since this C-inversion doesn't affect
gravitational mass, the CPT theorem predicts that the gravitational mass
of antimatter is the same as that of ordinary matter.[11]
A repulsive gravity is then excluded, since that would imply a
difference in sign between the observable gravitational mass of matter
and antimatter.
Morrison's argument
In 1958, Philip Morrison argued that antigravity would violate conservation of energy.
If matter and antimatter responded oppositely to a gravitational field,
then it would take no energy to change the height of a
particle-antiparticle pair. However, when moving through a gravitational
potential, the frequency and energy of light are shifted. Morrison
argued that energy would be created by producing
matter and antimatter at one height and then annihilating it higher up,
since the photons used in production would have less energy than the
photons yielded from annihilation.[12] However, it was later found that antigravity would still not violate the second law of thermodynamics.[13]
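A schematic version of the energy bookkeeping behind Morrison's argument (the symbols m, g, h, and c are introduced here only for illustration): photons of total energy 2mc² create a particle-antiparticle pair at ground level; under the antigravity hypothesis the pair feels no net gravitational force, so raising it to height h costs nothing; annihilating it there and letting the resulting photons fall back down returns

```latex
E_{\mathrm{out}} \;=\; 2mc^{2}\left(1+\frac{gh}{c^{2}}\right) \;=\; 2mc^{2} + 2mgh \;>\; E_{\mathrm{in}} \;=\; 2mc^{2},
```

a net gain of 2mgh per cycle, which is the violation of energy conservation Morrison objected to.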
Schiff's argument
Later in 1958, L. Schiff used quantum field theory to argue that antigravity would be inconsistent with the results of the Eötvös experiment.[14] However, the renormalization technique used in Schiff's analysis is heavily criticized, and his work is seen as inconclusive.[10]
In 2014 the argument was redone by Cabbolet, who concluded however that
it merely demonstrates the incompatibility of the Standard Model and
gravitational repulsion.[15]
Good's argument
In 1961, Myron L. Good argued that antigravity would result in the observation of an unacceptably high amount of CP violation in the anomalous regeneration of kaons.[16]
At the time, CP violation had not yet been observed. However, Good's
argument is criticized for being expressed in terms of absolute
potentials. By rephrasing the argument in terms of relative potentials, Gabriel Chardin found that it resulted in an amount of kaon regeneration which agrees with observation.[17]
He argues that antigravity is in fact a potential explanation for CP violation, based on his models of K mesons; these results date back to 1992. Since then, however, studies of CP-violation mechanisms in B-meson systems have fundamentally invalidated these explanations.
Gerard 't Hooft's argument
According to Gerard 't Hooft,
every physicist recognizes immediately what is wrong with the idea of
gravitational repulsion: if a ball is thrown high up in the air so that
it falls back, then its motion is symmetric under time reversal, and therefore the ball also falls down in the reversed time direction.[18] Since a matter particle in the reversed time direction is an antiparticle, this, according to 't Hooft, proves that antimatter falls down on Earth just like "normal" matter.
However, Cabbolet replied that 't Hooft's argument is false, and only
proves that an anti-ball falls down on an anti-earth – which is not
disputed.[19]
Theories of gravitational repulsion
As
long as repulsive gravity has not been refuted experimentally, one can
speculate about physical principles that would bring about such a
repulsion. Thus far, three radically different theories have been
published:
The first theory of repulsive gravity was a quantum theory published by Kowitt.[20]
In this modified Dirac theory, Kowitt postulated that the positron is
not a hole in the sea of electrons-with-negative-energy as in usual Dirac hole theory,
but instead is a hole in the sea of
electrons-with-negative-energy-and-positive-gravitational-mass: this
yields a modified C-inversion, by which the positron has positive energy
but negative gravitational mass. Repulsive gravity is then described by
adding extra terms (m_gΦ_g and m_gA_g)
to the wave equation. The idea is that the wave function of a positron
moving in the gravitational field of a matter particle evolves such that
in time it becomes more probable to find the positron further away from
the matter particle.
Classical theories of repulsive gravity have been published by Santilli and Villata.[21][22][23][24] Both theories are extensions of General Relativity,
and are experimentally indistinguishable from each other. The general idea remains that
gravity is the deflection of a continuous particle trajectory due to
the curvature of spacetime, but antiparticles now 'live' in an inverted
spacetime. The equation of motion for antiparticles is then obtained
from the equation of motion of ordinary particles by applying the C, P,
and T-operators (Villata) or by applying isodual maps (Santilli),
which amounts to the same thing: the equation of motion for
antiparticles then predicts a repulsion of matter and antimatter. The observed trajectories of antiparticles are taken to be projections onto our spacetime of the true trajectories in the inverted spacetime. However,
it has been argued on methodological and ontological grounds that the
area of application of Villata’s theory cannot be extended to include
the microcosmos.[25] These objections were subsequently dismissed by Villata.[26]
The first non-classical, non-quantum physical principles underlying a
matter-antimatter gravitational repulsion have been published by
Cabbolet.[11][27]
He introduces the Elementary Process Theory, which uses a new language
for physics, i.e. a new mathematical formalism and new physical
concepts, and which is incompatible with both quantum mechanics and
general relativity. The core idea is that nonzero rest mass particles
such as electrons, protons, neutrons and their antimatter counterparts
exhibit stepwise motion as they alternate between a particlelike state
of rest and a wavelike state of motion. Gravitation then takes place in a
wavelike state, and the theory allows, for example, that the wavelike
states of protons and antiprotons interact differently with the earth’s
gravitational field.
Further authors[28][29][30]
have used a matter-antimatter gravitational repulsion to explain
cosmological observations, but these publications do not address the
physical principles of gravitational repulsion.
How
a graphene-based transistor would work. A graphene nanoribbon (GNR) is
created by unzipping (opening up) a portion of a carbon nanotube (CNT)
(the flat area, shown with pink arrows above it). The GNR switching is
controlled by two surrounding parallel CNTs. The magnitudes and relative
directions of the control currents, I_CTRL (blue arrows), in the CNTs
determine the rotation direction of the magnetic fields, B (green). The
magnetic fields then control the GNR magnetization (based on the recent
discovery of negative magnetoresistance), which causes the GNR to switch
from resistive (no current) to conductive, resulting in current flow,
I_GNR (pink arrows), in other words causing the GNR to act as a
transistor gate. The magnitude of the current flow through the GNR
functions as the binary gate output — with binary 1 representing the
current flow of the conductive state and binary 0 representing no
current (the resistive state). (credit: Joseph S. Friedman et al./Nature
Communications)
A future graphene-based transistor using spintronics could lead to tinier computers that are a thousand times faster and use a hundredth of the power of silicon-based computers.
The radical transistor concept, created by a team of researchers at Northwestern University, The University of Texas at Dallas, University of Illinois at Urbana-Champaign, and University of Central Florida, is explained this month in an open-access paper in the journal Nature Communications.
Transistors act as on and off switches. A series of transistors in
different arrangements act as logic gates, allowing microprocessors to
solve complex arithmetic and logic problems. But the speed of computer
microprocessors that rely on silicon transistors has been relatively
stagnant since around 2005, with clock speeds mostly in the 3 to 4
gigahertz range.
Clock speeds approaching the terahertz range
The researchers discovered that by applying a magnetic field to a
graphene ribbon (created by unzipping a carbon nanotube), they could
change the resistance of current flowing through the ribbon. The
magnetic field — controlled by increasing or decreasing the current
through adjacent carbon nanotubes — increased or decreased the flow of
current.
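A toy truth-table sketch of the switch just described. The specific mapping from control-current directions to the GNR's state is an assumption made here purely for illustration (the actual gate mappings are defined in the Nature Communications paper); the code only shows how relative current directions could encode a binary output.

```python
# Toy model of the GNR spintronic switch described above.
# ASSUMPTION (illustration only): antiparallel control currents are taken to
# produce a field orientation that drives the nanoribbon into its conductive
# (low-resistance) state, while parallel currents leave it resistive.
def gnr_output(i_ctrl_a: int, i_ctrl_b: int) -> int:
    """Return the binary gate output: 1 = conductive GNR (current flows), 0 = resistive.

    i_ctrl_a, i_ctrl_b: +1 or -1, the directions of the two control currents.
    """
    antiparallel = (i_ctrl_a * i_ctrl_b) < 0
    return 1 if antiparallel else 0

# Truth table over the four combinations of control-current directions.
for a in (+1, -1):
    for b in (+1, -1):
        print(f"I_a={a:+d}, I_b={b:+d} -> output bit {gnr_output(a, b)}")
```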
A cascading series of graphene transistor-based logic circuits could
produce a massive jump, with clock speeds approaching the terahertz
range — a thousand times faster.* They would also be smaller and
substantially more efficient, allowing device-makers to shrink
technology and squeeze in more functionality, according to Ryan M.
Gelfand, an assistant professor in The College of Optics & Photonics
at the University of Central Florida.
The researchers hope to inspire the fabrication of these cascaded
logic circuits to stimulate a future transformative generation of
energy-efficient computing.
* Unlike other spintronic logic proposals, these new logic gates
can be cascaded directly through the carbon materials without requiring
intermediate circuits and amplification between gates. That would result
in compact circuits with reduced area that are far more efficient than
with CMOS switching, which is limited by charge transfer and
accumulation from RLC (resistance-inductance-capacitance) interconnect delays.
Abstract of Cascaded spintronic logic with low-dimensional carbon
Remarkable breakthroughs have established the functionality of
graphene and carbon nanotube transistors as replacements to silicon in
conventional computing structures, and numerous spintronic logic gates
have been presented. However, an efficient cascaded logic structure that
exploits electron spin has not yet been demonstrated. In this work, we
introduce and analyse a cascaded spintronic computing system composed
solely of low-dimensional carbon materials. We propose a spintronic
switch based on the recent discovery of negative magnetoresistance in
graphene nanoribbons, and demonstrate its feasibility through
tight-binding calculations of the band structure. Covalently connected
carbon nanotubes create magnetic fields through graphene nanoribbons,
cascading logic gates through incoherent spintronic switching. The
exceptional material properties of carbon materials permit Terahertz
operation and two orders of magnitude decrease in power-delay product
compared to cutting-edge microprocessors. We hope to inspire the
fabrication of these cascaded logic circuits to stimulate a
transformative generation of energy-efficient computing.
Summary: Researchers use machine learning technology to identify brain based dimensions of mental health disorders.
Source: University of Pennsylvania.
A
new study using machine learning has identified brain-based dimensions
of mental health disorders, an advance towards much-needed biomarkers to
more accurately diagnose and treat patients. A team at Penn Medicine
led by Theodore D. Satterthwaite, MD, an assistant professor in the
department of Psychiatry, mapped abnormalities in brain networks to four
dimensions of psychopathology: mood, psychosis, fear, and disruptive
externalizing behavior. The research is published in Nature Communications this week.
Currently,
psychiatry relies on patient reporting and physician observations alone
for clinical decision making, while other branches of medicine have
incorporated biomarkers to aid in diagnosis, determination of prognosis,
and selection of treatment for patients. While previous studies using
standard clinical diagnostic categories have found evidence for brain
abnormalities, the high level of diversity within disorders and
comorbidity between disorders have limited how this kind of research may
lead to improvements in clinical care.
“Psychiatry is behind the
rest of medicine when it comes to diagnosing illness,” said
Satterthwaite. “For example, when a patient comes in to see a doctor
with most problems, in addition to talking to the patient, the physician
will recommend lab tests and imaging studies to help diagnose their
condition. Right now, that is not how things work in psychiatry. In most
cases, all psychiatric diagnoses rely on just talking to the patient.
One of the reasons for this is that we don’t understand how
abnormalities in the brain lead to psychiatric symptoms. This research
effort aims to link mental health issues and their associated brain
network abnormalities to psychiatric symptoms using a data-driven
approach.”
To uncover the brain networks associated with
psychiatric disorders, the team studied a large sample of adolescents
and young adults (999 participants, ages 8 to 22). All participants
completed both functional MRI scans and a comprehensive evaluation of
psychiatric symptoms as part of the Philadelphia Neurodevelopmental
Cohort (PNC), an effort led by Raquel E. Gur, MD, PhD, professor of
Psychiatry, Neurology, and Radiology, that was funded by the National
Institute of Mental Health. The brain and symptom data were then jointly
analyzed using a machine learning method called sparse canonical
correlation analysis.
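A minimal sketch of the analysis idea, for orientation only: scikit-learn's plain CCA stands in for the sparse CCA actually used in the study, and all array sizes and variable names here are hypothetical.

```python
# Sketch: linking brain-connectivity features to symptom scores with CCA.
# The study used *sparse* CCA; scikit-learn's CCA is the plain variant and is
# used here only to illustrate the idea of paired brain/symptom dimensions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects = 999        # sample size reported in the article
n_connectivity = 200    # hypothetical number of functional-connectivity features
n_symptoms = 50         # hypothetical number of symptom items

X = rng.standard_normal((n_subjects, n_connectivity))  # brain features per subject
Y = rng.standard_normal((n_subjects, n_symptoms))      # symptom ratings per subject

cca = CCA(n_components=4)                # four linked dimensions, as in the study
X_scores, Y_scores = cca.fit_transform(X, Y)

# Each column pair (X_scores[:, k], Y_scores[:, k]) is one brain/symptom
# dimension; its canonical correlation measures how strongly they covary.
for k in range(4):
    r = np.corrcoef(X_scores[:, k], Y_scores[:, k])[0, 1]
    print(f"dimension {k}: canonical correlation = {r:.2f}")
```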
This analysis revealed patterns of changes
in brain networks that were strongly related to psychiatric symptoms. In
particular, the findings highlighted four distinct dimensions of
psychopathology – mood, psychosis, fear, and disruptive behavior – all
of which were associated with a distinct pattern of abnormal
connectivity across the brain.
The researchers found that each
brain-guided dimension contained symptoms from several different
clinical diagnostic categories. For example, the mood dimension comprised symptoms from three categories: depression (feeling sad), mania (irritability), and obsessive-compulsive disorder (recurrent thoughts of self-harm). Similarly, the disruptive externalizing behavior dimension was driven primarily by symptoms of both Attention Deficit Hyperactivity Disorder (ADHD) and Oppositional Defiant Disorder
(ODD), but also included the irritability item from the depression
domain. These findings suggest that when both brain and symptomatic data
are taken into consideration, psychiatric symptoms do not neatly fall
into established categories. Instead, groups of symptoms emerge from
diverse clinical domains to form dimensions that are linked to specific
patterns of abnormal connectivity in the brain.
“In addition to
these specific brain patterns in each dimension, we also found common
brain connectivity abnormalities that are shared across dimensions,”
said Cedric Xia, an MD-PhD candidate and the paper’s lead author.
“Specifically, a pair of brain networks called default mode network and
frontal-parietal network, whose connections usually grow apart during
brain development, become abnormally integrated in all dimensions.”
These
two brain networks have long intrigued psychiatrists and
neuroscientists because of their crucial role in complex mental
processes such as self-control, memory, and social interactions. The
findings in this study support the theory that many types of psychiatric
illness are related to abnormalities of brain development.
The
team also examined how psychopathology differed across age and sex. They
found that patterns associated with both mood and psychosis became
significantly more prominent with age. Additionally, brain connectivity
patterns linked to mood and fear were both stronger in female
participants than males.
“This study shows that we can start to
use the brain to guide our understanding of psychiatric disorders in a
way that’s fundamentally different than grouping symptoms into clinical
diagnostic categories. By moving away from clinical labels developed
decades ago, perhaps we can let the biology speak for itself,” said
Satterthwaite. “Our ultimate hope is that understanding the biology of
mental illnesses will allow us to develop better treatments for our
patients.”
About this neuroscience research article
Additional
Penn authors include Zongming Ma, Rastko Ciric, Shi Gu, Richard F.
Betzel, Antonia N. Kaczkurkin, Monica E. Calkins, Philip A. Cook, Angel
García de la Garza, Simon N. Vandekar, Zaixu Cui, Tyler M. Moore, David
R. Roalf, Kosha Ruparel, Daniel H. Wolf, Christos Davatzikos, Ruben C.
Gur, Raquel E. Gur, Russell T. Shinohara, and Danielle S. Bassett.
Funding:
This study was supported by grants from the National Institutes of Health
(R01MH107703, R01MH112847, R21MH106799, R01MH107235, R01MH113550,
R01EB022573, P50MH096891, R01MH101111, K01MH102609, K08MH079364,
R01NS085211). The PNC was supported by MH089983 and MH089924. Additional
support was provided by the Penn-CHOP Lifespan Brain Institute and the
Dowshen Program for Neuroscience.
Source: Hannah Messinger – University of Pennsylvania. Original Research: Open-access research
for “Linked dimensions of psychopathology and connectivity in
functional brain networks” by Cedric Huchuan Xia, Zongming Ma, Rastko
Ciric, Shi Gu, Richard F. Betzel, Antonia N. Kaczkurkin, Monica E.
Calkins, Philip A. Cook, Angel García de la Garza, Simon N. Vandekar,
Zaixu Cui, Tyler M. Moore, David R. Roalf, Kosha Ruparel, Daniel H.
Wolf, Christos Davatzikos, Ruben C. Gur, Raquel E. Gur, Russell T.
Shinohara, Danielle S. Bassett & Theodore D. Satterthwaite in Nature Communications. Published August 1 2018. doi:10.1038/s41467-018-05317-y
(Top) Predicted brain activation patterns and semantic features (colors) for two pairs of sentences (left: “The flood damaged the hospital”; right: “The storm destroyed the theater”). (Bottom) Observed activation patterns and semantic features, which are similar. (credit: Jing Wang et al./Human Brain Mapping)
By combining machine-learning algorithms with fMRI brain imaging
technology, Carnegie Mellon University (CMU) scientists have discovered,
in essence, how to “read minds.”
The researchers used functional magnetic resonance imaging (fMRI) to
view how the brain encodes various thoughts (based on blood-flow
patterns in the brain). They discovered that the mind’s building blocks
for constructing complex thoughts are formed, not by words, but by
specific combinations of the brain’s various sub-systems.
Following up on previous research, the findings, published in Human Brain Mapping (open-access preprint here) and funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA), provide new evidence that the neural dimensions of concept representation are universal across people and languages.
“One of the big advances of the human brain was the ability to
combine individual concepts into complex thoughts, to think not just of
‘bananas,’ but ‘I like to eat bananas in evening with my friends,’” said
CMU’s Marcel Just, the D.O. Hebb University Professor of Psychology in
the Dietrich College of Humanities and Social Sciences. “We have
finally developed a way to see thoughts of that complexity in the fMRI
signal. The discovery of this correspondence between thoughts and brain
activation patterns tells us what the thoughts are built of.”
Goal: A brain map of all types of knowledge
(Top)
Specific brain regions associated with the four large-scale semantic
factors: people (yellow), places (red), actions and their consequences
(blue), and feelings (green). (Bottom) Word clouds associated with each
large-scale semantic factor underlying sentence representations. These
word clouds comprise the seven “neurally plausible semantic features”
(such as “high-arousal”) most associated with each of the four semantic
factors. (credit: Jing Wang et al./Human Brain Mapping)
The researchers used 240 specific events (described by sentences such
as “The storm destroyed the theater”) in the study, with seven adult
participants. They measured the brain’s coding of these events using 42
“neurally plausible semantic features” — such as person, setting, size,
social interaction, and physical action (as shown in the word clouds in
the illustration above). By measuring the specific activation of each of
these 42 features in a person’s brain system, the program could tell
what types of thoughts that person was focused on.
The researchers used a computational model to assess how the detected
brain activation patterns (shown in the top illustration, for example)
for 239 of the event sentences corresponded to the detected neurally
plausible semantic features that characterized each sentence. The
program was then able to decode the features of the 240th left-out
sentence. (For “cross-validation,” they did the same for the other 239
sentences.)
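A minimal sketch of this leave-one-out scheme. Everything here is hypothetical stand-in data and a generic ridge regression; the study's own encoding/decoding models and its scoring procedure differ in detail.

```python
# Sketch of leave-one-out decoding: map each sentence's activation pattern to
# its semantic-feature vector, training on the other 239 sentences.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_sentences, n_voxels, n_features = 240, 500, 42   # sizes partly hypothetical

activation = rng.standard_normal((n_sentences, n_voxels))  # fMRI pattern per sentence
features = rng.standard_normal((n_sentences, n_features))  # semantic features per sentence

correct = 0
for left_out in range(n_sentences):
    train = np.delete(np.arange(n_sentences), left_out)

    # Learn activation -> features; the reverse direction (features -> activation)
    # can be trained the same way, as described in the article.
    model = Ridge(alpha=1.0).fit(activation[train], features[train])
    predicted = model.predict(activation[[left_out]])[0]

    # One plausible scoring rule: the prediction should correlate more highly
    # with the left-out sentence's true features than with any other sentence's.
    sims = [np.corrcoef(predicted, features[i])[0, 1] for i in range(n_sentences)]
    correct += int(np.argmax(sims) == left_out)

print(f"identification accuracy: {correct / n_sentences:.1%}")
```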
The model was able to predict the features of the left-out sentence
with 87 percent accuracy, despite never being exposed to its activation
before. It was also able to work in the other direction: to predict the
activation pattern of a previously unseen sentence, knowing only its
semantic features.
“Our method overcomes the unfortunate property of fMRI to smear
together the signals emanating from brain events that occur close
together in time, like the reading of two successive words in a
sentence,” Just explained. “This advance makes it possible for the first
time to decode thoughts containing several concepts. That’s what most
human thoughts are composed of.”
“A next step might be to decode the general type of topic a person is
thinking about, such as geology or skateboarding,” he added. “We are on
the way to making a map of all the types of knowledge in the brain.”
Future possibilities
It’s conceivable that the CMU brain-mapping method might be combined
one day with other “mind reading” methods, such as UC Berkeley’s method
for using fMRI and computational models to decode and reconstruct people’s imagined visual experiences. Plus whatever Neuralink discovers.
Or if the CMU method could be replaced by noninvasive functional near-infrared spectroscopy (fNIRS), Facebook’s Building8 research concept
(proposed by former DARPA head Regina Dugan) might be incorporated (a
filter for creating quasi ballistic photons, avoiding diffusion and
creating a narrow beam for precise targeting of brain areas, combined
with a new method of detecting blood-oxygen levels).
Using fNIRS might also allow for adapting the method to infer thoughts of locked-in paralyzed patients, as in the Wyss Center for Bio and Neuroengineering research. It might even lead to ways to generally enhance human communication.
The CMU research is supported by the Office of the Director of
National Intelligence (ODNI) via the Intelligence Advanced Research
Projects Activity (IARPA) and the Air Force Research Laboratory (AFRL).
CMU has created some of the first cognitive tutors, helped to
develop the Jeopardy-winning Watson, founded a groundbreaking doctoral
program in neural computation, and is the birthplace of artificial
intelligence and cognitive psychology. CMU also launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.
Abstract of Predicting the Brain Activation Pattern Associated
With the Propositional Content of a Sentence: Modeling Neural
Representations of Events and States
Even though much has recently been learned about the neural
representation of individual concepts and categories, neuroimaging
research is only beginning to reveal how more complex thoughts, such as
event and state descriptions, are neurally represented. We present a
predictive computational theory of the neural representations of
individual events and states as they are described in 240 sentences.
Regression models were trained to determine the mapping between 42
neurally plausible semantic features (NPSFs) and thematic roles of the
concepts of a proposition and the fMRI activation patterns of various
cortical regions that process different types of information. Given a
semantic characterization of the content of a sentence that is new to
the model, the model can reliably predict the resulting neural
signature, or, given an observed neural signature of a new sentence, the
model can predict its semantic content. The models were also reliably
generalizable across participants. This computational model provides an
account of the brain representation of a complex yet fundamental unit of
thought, namely, the conceptual content of a proposition. In addition
to characterizing a sentence representation at the level of the semantic
and thematic features of its component concepts, factor analysis was
used to develop a higher level characterization of a sentence,
specifying the general type of event representation that the sentence
evokes (e.g., a social interaction versus a change of physical state)
and the voxel locations most strongly associated with each of the
factors.
I
wouldn’t do Professor Pedro Domingos justice by trying to describe his
entire book in this blog post, but I did want to share one core thought
as I reflect on his book.
Domingos’ core argument is that machine learning needs a unifying theory, not unlike the Standard Model in physics or the Central Dogma in biology. He takes readers through a historical tour of artificial intelligence and machine learning and breaks down the five main schools of machine learning. But he argues that each has its limitations, and that the main goal for current researchers should be to discover/create “The Master Algorithm,” one that has the ability to learn any concept, i.e., act as a general-purpose learner.
As with any great book, it leads to more questions than answers. My main question, as applied to startups, is this:
What’s the speed at which machine learning is improving?
Why is this an important question?
For the past several decades, the category-defining companies, from Intel to Apple to Google to Facebook, have benefited from two core unifying theories of technology.
First, Moore’s Law
created the underlying framework for the speed at which computing power
increases (doubling every two years or so) that has directly enabled a
generation of products. Products that were at first bulky and expensive,
such as room-sized mainframes, were able to ride Moore’s Law and become
smaller and cheaper, leading to mass-market products like phones and smart
watches.
Second, Metcalfe’s Law governed the value of a network of users (proportional to n(n − 1)/2), which has enabled a generation of Internet services to effectively serve the majority of the world’s Internet population. As more users join a network, its value grows quadratically while costs generally grow linearly. This incentivizes even more users to join, and the flywheel is set in motion.
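For concreteness, here is a tiny sketch of the two scaling rules just mentioned (the function names and numbers are illustrative, not from the post):

```python
def moore_capacity(years: float, doubling_period: float = 2.0) -> float:
    """Relative computing capacity after `years`, doubling every ~2 years."""
    return 2 ** (years / doubling_period)

def metcalfe_value(n_users: int) -> int:
    """Possible pairwise connections in a network of n users: n(n - 1)/2."""
    return n_users * (n_users - 1) // 2

print(moore_capacity(10))        # ~32x more capacity after a decade
print(metcalfe_value(1_000))     # 499,500 possible connections
print(metcalfe_value(2_000))     # 1,999,000 -> roughly 4x the value for 2x the users
```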
So now the question is: is there a third “Law” that governs the speed of improvement of machine learning?
Lee Kai-Fu (李开复) hinted at this in his commencement speech at Columbia. Speaking about his investments in the now publicly listed Meitu and two other AI companies, he notes that in all three cases the AI technology underlying the startups went from essentially not useful to indispensable.
The
three software companies I mentioned earlier, when they were first
launched: often made people uglier, lost millions in bad loans, and
thought I was some talk show celebrity. But given time and much more
data, their self-learning made them dramatically better than people. Not
only are they better, they don’t get tired nor emotional. They don’t go
on strike, and they are infinitely scalable. With hardware, software,
and networking costs coming down, all they cost is electricity.
In God we trust; all others bring data…
To put some data behind it, if we look at the ImageNet Challenge,
AI image recognition technology has improved 10X from 2010 to 2016,
catalyzed by the introduction of deep learning methods in 2012.
On the back of this “Law” of machine-learning improvement, we’ll see a Cambrian explosion of new products and services that fit Kai-Fu’s description of products that are at first flawed but, with time and data, become essential and, for all practical purposes, perfect.
Questions, not answers
The ImageNet Challenge and image recognition are just one application of AI, so they don’t give us enough to say what the “Law” of machine-learning improvement is. I can’t say AI is doubling in intelligence every 18 to 24 months or that AI gets exponentially better by a factor of n(n − 1)/2 with each data point.
But
I do think a particular “Law” governing the rate at which AI is
improving exists and I can’t wait for someone in the field to articulate
(or solve) it.
Because
understanding the speed at which artificial intelligence is getting
more intelligent will allow us to understand the third major
foundational wave, in addition to Moore’s and Metcalfe’s Laws, that will
bring us the dominant companies and the brilliant products of the Age of
AI.
Multiwall carbon nanotubes (MWCNTs) could safely help repair damaged
connections between neurons by serving as supporting scaffolds for
growth or as connections between neurons.
That’s the conclusion of an in-vitro (lab) open-access study
with cultured neurons (taken from the hippocampus of neonatal rats) by a
multi-disciplinary team of scientists in Italy and Spain, published in
the journal Nanomedicine: Nanotechnology, Biology, and Medicine.
A multi-walled carbon nanotube (credit: Eric Wieser/CC)
The study addressed whether MWCNTs that are interfaced to neurons
affect synaptic transmission by modifying the lipid (fatty) cholesterol
structure in artificial neural membranes.
Significantly, they found that MWCNTs:
Facilitate the full growth of neurons and the formation of
new synapses. “This growth, however, is not indiscriminate and unlimited
since, as we proved, after a few weeks, a physiological balance is
attained.”
Do not interfere with the composition of lipids (cholesterol in particular), which make up the cellular membrane in neurons.
Do not interfere in the transmission of signals through synapses.
The researchers also noted that they recently reported (in an open access paper) low tissue reaction when multiwall carbon nanotubes were implanted in vivo (in live animals) to reconnect damaged spinal neurons.
The researchers say they proved that carbon nanotubes “perform
excellently in terms of duration, adaptability and mechanical
compatibility with tissue” and that “now we know that their interaction
with biological material, too, is efficient. Based on this evidence, we
are already studying an in vivo application, and preliminary results appear to be quite promising in terms of recovery of lost neurological functions.”
The research team comprised scientists from SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone, and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE.
Abstract of Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces
Carbon nanotube-based biomaterials critically contribute to the
design of many prosthetic devices, with a particular impact in the
development of bioelectronics components for novel neural interfaces.
These nanomaterials combine excellent physical and chemical properties
with peculiar nanostructured topography, thought to be crucial to their
integration with neural tissue as long-term implants. The junction
between carbon nanotubes and neural tissue can be particularly worthy of
scientific attention and has been reported to significantly impact
synapse construction in cultured neuronal networks. In this framework,
the interaction of 2D carbon nanotube platforms with biological
membranes is of paramount importance. Here we study carbon nanotube
ability to interfere with lipid membrane structure and dynamics in
cultured hippocampal neurons. While excluding that carbon nanotubes
alter the homeostasis of neuronal membrane lipids, in particular
cholesterol, we document in aged cultures an unprecedented functional
integration between carbon nanotubes and the physiological maturation of
the synaptic circuits.
The tau (τ), also called the tau lepton, tau particle, or tauon, is an elementary particle similar to the electron, with negative electric charge and a spin of 1/2. Together with the electron, the muon, and the three neutrinos, it is a lepton. Like all elementary particles with half-integer spin, the tau has a corresponding antiparticle of opposite charge but equal mass and spin, which in the tau's case is the antitau (also called the positive tau). Tau particles are denoted by τ− and the antitau by τ+.
Tau leptons have a lifetime of 2.9×10⁻¹³ s and a mass of 1776.82 MeV/c² (compared to 105.7 MeV/c² for muons and 0.511 MeV/c²
for electrons). Since their interactions are very similar to those of
the electron, a tau can be thought of as a much heavier version of the
electron. Because of their greater mass, tau particles do not emit as
much bremsstrahlung radiation as electrons; consequently they are potentially highly penetrating, much more so than electrons.
Because of its short lifetime, the range of the tau is mainly set by its decay length, which is too small for bremsstrahlung to be noticeable. Its penetrating power appears only at ultra-high velocities and energies (above PeV energies), when time dilation extends its path length.[4]
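A rough worked example using the lifetime and mass quoted above (the 1 PeV energy is chosen here purely for illustration): at ultra-relativistic energies the mean decay length is L ≈ γcτ ≈ (E/mτc²)cτ, so

```latex
c\tau \approx (3.0\times10^{8}\ \mathrm{m/s})(2.9\times10^{-13}\ \mathrm{s}) \approx 8.7\times10^{-5}\ \mathrm{m},
\qquad
L \approx \frac{E}{m_{\tau}c^{2}}\,c\tau
  \approx \frac{10^{15}\ \mathrm{eV}}{1.78\times10^{9}\ \mathrm{eV}}\times 8.7\times10^{-5}\ \mathrm{m}
  \approx 50\ \mathrm{m},
```

which is why the tau's penetrating power only becomes noticeable at PeV-scale energies.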
As with the other charged leptons, the tau has an associated tau neutrino, denoted by ν_τ.
History
The tau was anticipated in a 1971 paper by Yung-Su Tsai, which provided the theory for its discovery.[5] The tau was then detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his and Tsai's colleagues in the SLAC-LBL group.[2] Their equipment consisted of SLAC's then-new e+–e− colliding ring, called SPEAR, and the LBL magnetic detector, which could detect and distinguish between leptons, hadrons, and photons. They did not detect the tau directly, but rather discovered anomalous events:
We have discovered 64 events of the form
e+ + e− → e± + μ∓ + at least two undetected particles
for which we have no conventional explanation.
The need for at least two undetected particles was shown by the
inability to conserve energy and momentum with only one. However, no
other muons, electrons, photons, or hadrons were detected. It was
proposed that this event was the production and subsequent decay of a
new particle pair:
e+ + e− → τ+ + τ− → e± + μ∓ + 4 ν
This was difficult to verify, because the energy to produce the τ+ τ− pair is similar to the threshold for D meson production. The mass and spin of the tau were subsequently established by work done at DESY-Hamburg with the Double Arm Spectrometer (DASP) and at SLAC-Stanford with the SPEAR Direct Electron Counter (DELCO).
The symbol τ was derived from the Greek τρίτον (triton, meaning "third" in English), since it was the third charged lepton discovered.[6]
Decays
The tau is the only lepton that can decay into hadrons – the other leptons do not have the necessary mass. Like the other decay modes of the tau, the hadronic decay is through the weak interaction.[7]
The branching fractions of the two purely leptonic decay modes are:
17.82% for decay into a tau neutrino, electron, and electron antineutrino;
17.39% for decay into a tau neutrino, muon, and muon antineutrino.
The similarity of values of the two branching ratios is a consequence of lepton universality.
Exotic atoms
The tau lepton is predicted to form exotic atoms, as other charged subatomic particles do. One such atom, called tauonium by analogy with muonium, consists of an antitau and an electron: τ+ e−.[8]
Another is the onium atom τ+ τ−, called true tauonium, which is difficult to detect because of the tau's extremely short lifetime at the low (non-relativistic) energies needed to form the atom. Its detection would be important for quantum electrodynamics.