Apple’s first paper on artificial intelligence, published Dec. 22 on arXiv (open access), describes a method for improving the ability of a deep neural network to recognize images.
To train neural networks to recognize images, AI researchers have
typically labeled (identified or described) each image in a dataset. For
example, last year, Georgia Institute of Technology researchers
developed a deep-learning method to recognize images taken at regular intervals on a person’s wearable smartphone camera.
Example
images from a dataset of 40,000 egocentric images with their respective
labels (credit: Daniel Castro et al./Georgia Institute of Technology)
The idea was to demonstrate that deep-learning can “understand” human
behavior and the habits of a specific person, and based on that, the AI
system could offer suggestions to the user.
The problem with that method is the huge amount of time required to
manually label the images (40,000 in this case). So AI researchers have
turned to using synthetic images (such as from a video) that are
pre-labeled (in captions, for example).
Creating superrealistic image recognition
But that, in turn, also has limitations. “Synthetic data is often not
realistic enough, leading the network to learn details only present in
synthetic images and fail to generalize well on real images,” the
authors explain.
Simulated+Unsupervised (S+U) learning (credit: Ashish Shrivastava et al./arXiv)
So instead, the researchers have developed a new approach called “Simulated+Unsupervised (S+U) learning.”
The idea is to still use pre-labeled synthetic images (like the
“Synthetic” image in the above illustration), but refine their realism
by matching synthetic images to unlabeled real images (in this case,
eyes) — thus creating a “superrealistic” image (the “Refined” image
above), allowing for more accurate, faster image recognition, while
preserving the labeling.
To do that, the researchers used a relatively new method (introduced in 2014) called Generative Adversarial Networks (GANs), in which two neural networks compete with each other to create a series of superrealistic images.*
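In rough outline, training alternates between two updates: a refiner network tries to fool a classifier while being penalized for drifting too far from its labeled synthetic input, and the classifier tries to tell refined images from real ones (see the footnote below for Schmidhuber's description of this setup). The PyTorch sketch below illustrates that loop under stated assumptions; the tiny network architectures, the L1 "don't drift" penalty, and the weight lam are illustrative choices, not the configuration used in the Apple paper.

```python
# Minimal sketch of the Simulated+Unsupervised (S+U) refinement idea:
# a refiner network is trained against a discriminator (adversarial loss)
# while being penalized for straying too far from its synthetic input.
# Architectures and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Maps a synthetic image to a 'refined', more realistic image."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Classifies image patches as real (1) or refined/fake (0)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)   # per-patch real/fake logits

refiner, disc = Refiner(), Discriminator()
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam = 0.1  # weight of the self-regularization term (assumed value)

def train_step(synthetic, real):
    # 1) Update the refiner: fool the discriminator, but stay close to
    #    the labeled synthetic input so the labels remain valid.
    refined = refiner(synthetic)
    logits = disc(refined)
    adv_loss = bce(logits, torch.ones_like(logits))   # "look real"
    reg_loss = (refined - synthetic).abs().mean()     # "don't drift"
    loss_r = adv_loss + lam * reg_loss
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # 2) Update the discriminator: separate real images from refined ones.
    real_logits = disc(real)
    fake_logits = disc(refined.detach())
    loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_r.item(), loss_d.item()

# Placeholder random tensors stand in for synthetic and real eye images.
_ = train_step(torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32))
```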
A visual Turing test
“To quantitatively evaluate the visual quality of the refined images,
we designed a simple user study where subjects were asked to classify
images as real or refined synthetic. Each subject was shown a random
selection of 50 real images and 50 refined images in a random order, and
was asked to label the images as either real or refined. The subjects
found it very hard to tell the difference between the real images and
the refined images.” — Ashish Shrivastava et al./arXiv
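The subjects' actual numbers are reported in the arXiv paper. As a rough illustration of how such a "visual Turing test" can be scored, the sketch below (assuming SciPy is available for the binomial test) tallies one subject's accuracy over 50 real and 50 refined images and checks whether it differs from the 50% chance level; the responses here are random placeholders, not data from the study.

```python
# Sketch of scoring a real-vs-refined "visual Turing test": each subject
# labels 50 real and 50 refined images; accuracy near 50% means the
# subject cannot tell them apart. Responses below are placeholders.
import random
from scipy.stats import binomtest

random.seed(0)
truth = ["real"] * 50 + ["refined"] * 50
random.shuffle(truth)                          # images shown in random order
guesses = [random.choice(["real", "refined"]) for _ in truth]  # a guessing subject

correct = sum(g == t for g, t in zip(guesses, truth))
accuracy = correct / len(truth)
# Two-sided binomial test against chance performance (p = 0.5).
p_value = binomtest(correct, n=len(truth), p=0.5).pvalue
print(f"accuracy = {accuracy:.2f}, p-value vs. chance = {p_value:.3f}")
```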
So will Siri develop the ability to identify that person whose name
you forgot and whisper it to you in your AirPods, or automatically bring
up that person’s Facebook page and latest tweet? Or is that getting too
creepy?
* Simulated+Unsupervised (S+U) learning is “an interesting
variant of adversarial gradient-based methods,” Jürgen Schmidhuber,
Scientific Director of IDSIA (Swiss AI Lab), told KurzweilAI.
“An image synthesizer’s output is piped into a refiner net whose
output is classified by an adversarial net trained to distinguish real
from fake images. The refiner net tries to convince the classifier that
its output is real, while being punished for deviating too much from
the synthesizer output. Very nice and rather convincing applications!”
Schmidhuber also briefly explained his 1991 paper [1] that
introduced gradient-based adversarial networks for unsupervised learning
“when computers were about 100,000 times slower than today. The method
was called Predictability Minimization (PM).
“An encoder network receives real vector-valued data samples
(such as images) and produces codes thereof across so-called code units.
A predictor network is trained to predict each code component from the
remaining components. The encoder, however, is trained to maximize the
same objective function minimized by the predictor.
“That is, predictor and encoder fight each other, to motivate the
encoder to achieve a ‘holy grail’ of unsupervised learning, namely, a
factorial code of the data, where the code components are statistically
independent, which makes subsequent classification easy. One can attach
an optional autoencoder to the code to reconstruct data from its code.
After perfect training, one can randomly activate the code units in
proportion to their mean values, to read out patterns distributed like
the original training patterns, assuming the code has become factorial
indeed.
“PM and Generative Adversarial Networks (GANs) may be viewed as
symmetric approaches. PM is directly trying to map data to its factorial
code, from which patterns can be sampled that are distributed like the
original data. While GANs start with a random (usually factorial)
distribution of codes, and directly learn to decode the codes into
‘good’ patterns. Both PM and GANs employ gradient-based adversarial nets
that play a minimax game to achieve their goals.”
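Schmidhuber's description of PM translates fairly directly into code. The sketch below is a minimal, illustrative rendering of the minimax game he describes: a predictor is trained to predict each code unit from the remaining units, while the encoder is trained to maximize that same prediction error. The layer sizes, sigmoid code units, and optimizers are assumptions made for illustration, not the 1991 setup.

```python
# Minimal sketch of Predictability Minimization (PM): an encoder produces
# a code for each input; a predictor tries to predict each code unit from
# the other units; the encoder is trained on the negative of the
# predictor's objective, pushing the code units toward statistical
# independence. Layer sizes and activations are illustrative assumptions.
import torch
import torch.nn as nn

n_in, n_code = 64, 8
encoder = nn.Sequential(nn.Linear(n_in, 32), nn.Tanh(),
                        nn.Linear(32, n_code), nn.Sigmoid())
predictor = nn.Sequential(nn.Linear(n_code, 32), nn.Tanh(),
                          nn.Linear(32, n_code), nn.Sigmoid())

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def prediction_error(code):
    """Mean error of predicting each code unit from the remaining units."""
    errors = []
    for i in range(n_code):
        masked = code.clone()
        masked[:, i] = 0.0                      # hide unit i from the predictor
        pred_i = predictor(masked)[:, i]
        errors.append(((pred_i - code[:, i]) ** 2).mean())
    return torch.stack(errors).mean()

def train_step(x):
    # Predictor: minimize the prediction error of the current codes.
    code = encoder(x).detach()
    loss_pred = prediction_error(code)
    opt_pred.zero_grad(); loss_pred.backward(); opt_pred.step()

    # Encoder: maximize the same objective, i.e. make its code units
    # as unpredictable from one another as possible.
    code = encoder(x)
    loss_enc = -prediction_error(code)
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
    return loss_pred.item(), -loss_enc.item()

# Random vectors stand in for real data samples (e.g. flattened images).
_ = train_step(torch.rand(16, n_in))
```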
[1] J. Schmidhuber. Learning factorial codes by predictability
minimization. Technical Report CU-CS-565-91, Dept. of Comp. Sci.,
University of Colorado at Boulder, December 1991. Later also published
in Neural Computation, 4(6):863-879, 1992. More: http://people.idsia.ch/~juergen/ica.html
Aboriginal dwellings in Hermannsburg, Northern Territory, 1923. Image: Herbert Basedow
Aboriginal Australians are legally defined as people who are members "of the Aboriginal race of Australia" (indigenous to mainland Australia or to the island of Tasmania).
A new definition was proposed in the Constitutional Section of the Department of Aboriginal Affairs' Report on a Review of the Administration of the Working Definition of Aboriginal and Torres Strait Islanders (Canberra, 1981):
An Aboriginal or Torres Strait
Islander is a person of Aboriginal or Torres Strait Islander descent who
identifies as an Aboriginal or Torres Strait Islander and is accepted
as such by the community in which he (she) lives.[7]
Membership of the Indigenous people
depends on biological descent from the Indigenous people and on mutual
recognition of a particular person's membership by that person and by
the elders or other persons enjoying traditional authority among those
people.[7]
The category "Aboriginal Australia" was coined by the British
after they began colonising Australia in 1788, to refer collectively to
all people they found already inhabiting the continent, and later to
the descendants of any of those people. Until the 1980s, the sole legal
and administrative criterion for inclusion in this category was race,
classified according to visible physical characteristics or known
ancestors. As in the British slave colonies of North America and the Caribbean, where the principle of partus sequitur ventrem
was adopted from 1662, children's status was determined by that of
their mothers: if born to Aboriginal mothers, children were considered
Aboriginal, regardless of their paternity.[8]
In the era of colonial and post-colonial government, access to basic human rights depended upon a person's race. Anyone classified as a "full blooded Aboriginal native ... [or] any person apparently having an admixture of Aboriginal blood", as a half-caste (the "offspring of an Aboriginal mother and other than Aboriginal father", but not of an Aboriginal father and other than Aboriginal mother), as a "quadroon", or as having a "strain" of Aboriginal blood was forced to live on Reserves or Missions, made to work for rations, given minimal education, and required governmental approval to marry, visit relatives or use electrical appliances.[9]
The Constitution of Australia, in its original form as of 1901, referred to Aboriginals twice, but without definition. Section 51(xxvi)
gave the Commonwealth parliament a power to legislate with respect to
"the people of any race" throughout the Commonwealth, except for people
of "the aboriginal race". The purpose of this provision was to give the
Commonwealth power to regulate non-white immigrant workers, who would
follow work opportunities interstate.[10] The only other reference, Section 127,
provided that "aboriginal natives shall not be counted" in reckoning
the size of the population of the Commonwealth or any part of it. The
purpose of Section 127 was to prevent the inclusion of Aboriginal people
in Section 24 determinations of the distribution of House of Representatives seats amongst the states and territories.[11]
After these references were removed by the 1967 referendum,
the Australian Constitution had no references to Aboriginals. Since
that time, there have been a number of proposals to amend the
constitution to specifically mention Indigenous Australians.[12][13]
The change to Section 51(xxvi) enabled the Commonwealth
parliament to enact laws specifically with respect to Aboriginal peoples
as a "race". In the Tasmanian Dam Case of 1983, the High Court of Australia
was asked to determine whether Commonwealth legislation, whose
application could relate to Aboriginal people—parts of the World
Heritage Properties Conservation Act 1983 (Cth) as well as related
legislation—was supported by Section 51(xxvi) in its new form. The case
concerned an application of legislation that would preserve cultural
heritage of Aboriginal Tasmanians. It was held that Aboriginal
Australians and Torres Strait Islanders, together or separately, and any
part of either, could be regarded as a "race" for this purpose. As to
the criteria for identifying a person as a member of such a "race", the
definition by Justice Deane has become accepted as current law.[9] Deane said:
It is unnecessary, for the purposes of the present case,
to consider the meaning to be given to the phrase "people of any race"
in s. 51(xxvi). Plainly, the words have a wide and non-technical meaning
... . The phrase is, in my view, apposite to refer to all Australian
Aboriginals collectively. Any doubt, which might otherwise exist in that
regard, is removed by reference to the wording of par. (xxvi) in its
original form. The phrase is also apposite to refer to any identifiable
racial sub-group among Australian Aboriginals. By "Australian
Aboriginal" I mean, in accordance with what I understand to be the
conventional meaning of that term, a person of Aboriginal descent,
albeit mixed, who identifies himself as such and who is recognised by
the Aboriginal community as an Aboriginal.[14]
While Deane's three-part definition reaches beyond the biological
criterion to an individual's self-identification, it has been criticised
as continuing to accept the biological criterion as primary.[9]
It has been found difficult to apply, both in each of its parts and as
to the relations among the parts; biological "descent" has been a
fall-back criterion.[15]
Definitions from Aboriginal Australians
Eve Fesl, a Gabi-Gabi woman, wrote in the Aboriginal Law Bulletin describing how she and possibly other Aboriginal people preferred to be identified:
The word 'aborigine' refers to an
indigenous person of any country. If it is to be used to refer to us as a
specific group of people, it should be spelt with a capital 'A', i.e.,
'Aborigine'.[16]
While the term 'indigenous' is being more commonly used by Australian
Government and non-Government organisations to describe Aboriginal
Australians, Lowitja O'Donoghue, commenting on the prospect of possible amendments to Australia's constitution, said:
I really can't tell you of a time
when 'indigenous' became current, but I personally have an objection to
it, and so do many other Aboriginal and Torres Strait Islander
people. ... This has just really crept up on us ... like thieves in the
night. ... We are very happy with our involvement with indigenous
people around the world, on the international forum ... because they're
our brothers and sisters. But we do object to it being used here in
Australia.[17]
O'Donoghue said that the term indigenous robbed the
traditional owners of Australia of an identity because some
non-Aboriginal people now wanted to refer to themselves as indigenous
because they were born there.[17]
Definitions from academia
Dean of Indigenous Research and Education at Charles Darwin University, Professor MaryAnn Bin-Sallik,
has lectured on the ways Aboriginal Australians have been categorised
and labelled over time. Her lecture offered a new perspective on the
terms urban, traditional and of Indigenous descent as used to define and categorise Aboriginal Australians:
Not only are these categories
inappropriate, they serve to divide us. ... Government's insistence on
categorising us with modern words like 'urban', 'traditional' and 'of
Aboriginal descent' are really only replacing old terms 'half-caste' and
'full-blood' – based on our colouring.[18]
She called for a replacement of this terminology by that of "Aborigine" or "Torres Strait Islander" – "irrespective of hue".[18] It could be argued that the indigenous tribes of Papua New Guinea and Western New Guinea (Indonesia) are more closely related to the Aboriginal Australians than to any tribes found in Indonesia; however, due to ongoing conflict in the regions of West Papua, these tribes are being marginalized from their closest relations.[19][20]
Origins
Scholars had disagreed whether the closest kin of Aboriginal
Australians outside Australia were certain South Asian groups or African
groups. The latter would imply a migration pattern in which their
ancestors passed through South Asia to Australia without intermingling
genetically with other populations along the way.[21]
In a 2011 genetic study by Rasmussen et al., researchers took a
DNA sample from an early 20th century lock of an Aboriginal person's
hair with low European admixture. They found that the ancestors of the
Aboriginal population split off from the Eurasian population between
62,000 and 75,000 BP,
whereas the European and Asian populations split only 25,000 to 38,000
years BP, indicating an extended period of Aboriginal genetic isolation.
These Aboriginal ancestors migrated into South Asia and then into
Australia, where they stayed, with the result that, outside of Africa,
the Aboriginal peoples have occupied the same territory continuously
longer than any other human populations. These findings suggest that
modern Aboriginal peoples are the direct descendants of migrants who
left Africa up to 75,000 years ago.[22][23] This finding is compatible with earlier archaeological finds of human remains near Lake Mungo that date to approximately 40,000 years ago.
The same genetic study of 2011 found evidence that Aboriginal peoples carry some of the genes associated with the Denisovan
(a species of human related to but distinct from Neanderthals) peoples
of Asia; the study suggests that there is an increase in allele
sharing between the Denisovan and Aboriginal Australian genomes
compared to other Eurasians and Africans. Examining DNA from a finger
bone excavated in Siberia,
researchers concluded that the Denisovans migrated from Siberia to
tropical parts of Asia and that they interbred with modern humans in
South-East Asia 44,000 years ago, before Australia separated from Papua
New Guinea approximately 11,700 years BP.
They contributed DNA to Aboriginal Australians along with present-day
New Guineans and an indigenous tribe in the Philippines known as Mamanwa.[citation needed]
This study suggests that Aboriginal Australians are one of the oldest living populations in the world, and possibly the oldest outside of Africa, and that they may also have the oldest continuous culture on the planet.[24] The Papuans show more allele sharing than Aboriginal peoples.[clarification needed] The data suggest that modern and archaic humans interbred in Asia before the migration to Australia.[25]
One 2017 paper in Nature evaluated artifacts in Kakadu and concluded "Human occupation began around 65,000 years ago".[26]
A 2013 study by the Max Planck Institute for Evolutionary Anthropology
found that there was a migration of genes from India to Australia
around 2000 BCE. The researchers had two theories for this: either some
Indians had contact with people in Indonesia who eventually transferred
those genes from India to Australian Aborigines, or that a group of
Indians migrated all the way from India to Australia and intermingled
with the locals directly. Their research also shows that these new
arrivals came at a time when dingoes first appeared in the fossil record, and when Aboriginal peoples first used microliths in hunting. In addition, they arrived just as one of the Aboriginal language groups was undergoing a rapid expansion.[27][28]
In a 2001 study, blood samples were collected from some Warlpiri
members of the Northern Territory to study the genetic makeup of the
Warlpiri Tribe of Aboriginal Australians, who are not representative of
all Aboriginal Tribes in Australia. The study concluded that the
Warlpiri are descended from ancient Asians whose DNA is still somewhat
present in Southeastern Asian groups, although greatly diminished. The
Warlpiri DNA also lacks certain information found in modern Asian
genomes, and carries information not found in other genomes, reinforcing
the idea of ancient Aboriginal isolation.[29]
Aboriginal Australians are genetically most similar to the indigenous populations of Papua New Guinea, and more distantly related to groups from East India. They are quite distinct from the indigenous populations of Borneo and Malaysia,
sharing relatively little genomic information as compared to the groups
from Papua New Guinea and India. This indicates that Australia was
isolated for a long time from the rest of Southeast Asia, and remained
untouched by migrations and population expansions into that area.[29]
Australian Aborigines have genetic adaptations that allow them to withstand a wide range of environmental temperatures. They have been observed sleeping naked on the ground at night in below-freezing conditions, in deserts where daytime temperatures easily rise above 40 degrees Celsius. By the same token, Tasmanian Aborigines would sleep in snow drifts with nothing on apart from an animal skin. According to the April 2017 edition of National Geographic magazine, this ability is believed to be due to a beneficial mutation in the genes that regulate the hormones controlling body temperature.[30]
Health
Aboriginal Australians have disproportionately high rates[31]
of severe physical disability, as much as three times that of
non-Aboriginal Australians, possibly due to higher rates of chronic
diseases such as diabetes and kidney disease. In a study comparing Aboriginal Australians to non-Aboriginal Australians, obesity
and smoking rates were higher among Aboriginals, which are contributing
factors or causes of serious health issues. The study also showed that
Aboriginal Australians were more likely to self-report their health as
"excellent/very good" in spite of extant severe physical limitations.
An article on 20 January 2017 in The Lancet describes the suicide rate among Aboriginal Australians as a "catastrophic crisis":
In 2015, more than 150 [Aborigines]
died by suicide, the highest figure ever recorded nationally and double
the rate of [non-Aborigines], according to the Australian Bureau of
Statistics. Additionally, [Aboriginal] children make up one in three
child suicides despite making up a minuscule percentage of the
population. Moreover, in parts of the country such as Kimberley, WA,
suicide rates among [Aborigines] are among the highest in the world.
The report advocates an Aboriginal-led national response to the crisis,
asserting that suicide prevention programmes have failed this segment of
the population.[32] The ex-prisoner population of Australian Aboriginals is particularly at risk of committing suicide; organisations such as Ngalla Maya have been set up to offer assistance.[33]
One study reports that Aboriginal Australians are significantly affected by infectious diseases, particularly in rural areas.[34] These diseases include strongyloidiasis, hookworm caused by Ancylostoma duodenale, scabies, and streptococcal
infections. Because poverty is also prevalent in Aboriginal
populations, the need for medical assistance is even greater in many
Aboriginal Australian communities. The researchers suggested the use of mass drug administration
(MDA) as a method of combating the diseases found commonly among
Aboriginal peoples, while also highlighting the importance of
"sanitation, access to clean water, good food, integrated vector control
and management, childhood immunizations, and personal and family
hygiene".[34]
Another study examining the psychosocial functioning of
high-risk-exposed and low-risk-exposed Aboriginal Australians aged 12–17
found that in high-risk youths, personal well-being was protected by a
sense of solidarity and common low socioeconomic status. However, in
low-risk youths, perceptions of racism caused poor psychosocial
functioning. The researchers suggested that factors such as racism,
discrimination and alienation contributed to physiological health risks
in ethnic minority families. The study also mentioned the effect of
poverty on Aboriginal populations: higher morbidity and mortality rates.[35]
Aboriginal Australians suffer from high rates of heart disease. Cardiovascular diseases are the leading cause of death worldwide and among Aboriginal Australians. Aboriginal people develop atrial fibrillation,
a condition that sharply increases the risk of stroke, much earlier
than non-Aboriginal Australians on average. The life expectancy for
Aboriginal Australians is 10 years lower than non-Aboriginal
Australians. Technologies such as the Wireless ambulatory ECG are being developed to screen at-risk individuals, particularly rural Australians, for atrial fibrillation.[36]
The incidence rate of cancer was lower in Aboriginal Australians than non-Aboriginal Australians in 2005–2009.[37]
However, some cancers, including lung cancer and liver cancer, were
significantly more common in Aboriginal people. The overall mortality
rate of Aboriginal Australians due to cancer was 1.3 times higher than
non-Aboriginals in 2013. This may be because they are less likely to
receive the necessary treatments in time, or because the cancers that
they tend to develop are often more lethal than other cancers.
Tobacco usage
According to the Australian Bureau of Statistics, approximately 41% of Aboriginal Australians aged 15 and up use tobacco.[38] This number has declined in recent years, but remains relatively high. The smoking rate is roughly equal for men and women across all age groups, but is much higher in rural than in urban areas. The prevalence of smoking exacerbates existing health problems
such as cardiovascular diseases and cancer. The Australian government has encouraged its citizens, both Aboriginal and non-Aboriginal, to stop smoking or to not start.
Alcohol usage
In the Northern Territory
(which has the greatest proportion of Aboriginal Australians), per
capita alcohol consumption for adults is 1.5 times the national average.
Nearly half of Aboriginal adults in the Northern Territory reported
alcohol usage. In addition to the inherent risks associated with alcohol
use, its consumption also tends to increase domestic violence.
Aboriginal people account for 60% of the facial fracture victims in the
Northern Territory, though they only constitute approximately 30% of its
population. Due to the complex nature of the alcohol and domestic
violence issue in the Northern Territory, proposed solutions are
contentious. However, there has recently been increased media attention
to this problem.[39]
Diet
Modern
Aboriginal Australians living in rural areas tend to have nutritionally
poor diets, where higher food costs drive people to consume cheaper,
lower quality foods. The average diet is high in refined carbohydrates
and salt, and low in fruit and vegetables. There are several challenges
in improving diets for Aboriginal Australians, such as shorter shelf
lives of fresh foods, resistance to changing existing consumption
habits, and disagreements on how to implement changes. Some suggest the
use of taxes on unhealthy foods and beverages to discourage their
consumption, but this approach is questionable. Providing subsidies for
healthy foods has proven effective in other countries, but has yet to be
proven useful for Aboriginal Australians specifically.[40]
Groups
Dispersing across the Australian continent over time, the ancient
people expanded and differentiated into distinct groups, each with its
own language and culture.[41] More than 400 distinct Australian Aboriginal peoples have been identified, distinguished by names designating their ancestral languages, dialects, or distinctive speech patterns.[42]
Historically, these groups lived in three main cultural areas, the
Northern, Southern, and Central cultural areas. The Northern and
Southern areas, having richer natural marine and woodland resources,
were more densely populated than the Central area.[41]
Highly sensitive magnetic field sensor (credit: ETH Zurich/Peter Rüegg)
Swiss researchers have succeeded in measuring changes in strong
magnetic fields with unprecedented precision, they report in the
open-access journal Nature Communications. The technique may find widespread use in medicine and other areas.
In their experiments, the researchers at the Institute for Biomedical Engineering, which is operated jointly by ETH Zurich and the University of Zurich,
magnetized a water droplet inside a magnetic resonance imaging (MRI)
scanner, a device used for medical imaging. The researchers were able to
detect even the tiniest variations of the magnetic field strength
within the droplet. These changes were as small as one trillionth (10^-12) of the 7 tesla field strength of the MRI scanner used in the experiment.
“Until now, it was possible only to measure such small variations in weak magnetic fields,” says Klaas Prüssmann,
Professor of Bioimaging at ETH Zurich and the University of Zurich. An
example of a weak magnetic field is that of the Earth, where the field
strength is just a few dozen microtesla. For fields of this kind, highly
sensitive measurement methods are already able to detect variations of
about a trillionth of the field strength, says Prüssmann. “Now, we have a
similarly sensitive method for strong fields of more than one tesla,
such as those used … in medical imaging.”
The scientists based the sensing technique on the principle of
nuclear magnetic resonance (NMR), which also serves as the basis for
magnetic resonance imaging and the spectroscopic methods that biologists
use to elucidate the 3D structure of molecules, but with 1000 times
greater sensitivity than current NMR methods.
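As a back-of-the-envelope illustration of why NMR lends itself to field sensing: a proton's precession (Larmor) frequency is proportional to the field it sits in, so a change in field strength shows up as a frequency shift. The short calculation below, using the textbook proton gyromagnetic ratio, estimates the frequency shift corresponding to a part-per-trillion change of a 7 tesla field; the figures are rough illustrative values, not numbers from the paper.

```python
# Back-of-the-envelope NMR field sensing: the proton Larmor frequency is
# f = (gamma / 2*pi) * B, so relative field changes map one-to-one onto
# relative frequency changes. Constants are textbook values; the scenario
# (7 T, part-per-trillion resolution) follows the article, not the paper's data.
GAMMA_PROTON = 42.577e6        # proton gyromagnetic ratio, Hz per tesla (gamma/2pi)

B = 7.0                        # tesla, field of the MRI scanner in the experiment
relative_resolution = 1e-12    # one part per trillion

larmor_hz = GAMMA_PROTON * B
delta_b = B * relative_resolution      # smallest detectable field change
delta_f = GAMMA_PROTON * delta_b       # corresponding frequency shift

print(f"Proton Larmor frequency at {B} T: {larmor_hz/1e6:.1f} MHz")
print(f"Part-per-trillion field change:   {delta_b*1e12:.1f} pT")
print(f"Corresponding frequency shift:    {delta_f*1e3:.2f} mHz")
```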
Ultra-sensitive recordings of heart contractions in an MRI machine
Real-time
magnetic field recordings of cardiac activity. Magnetic field dynamics
generated by the beating human heart in a background of 7 tesla,
recorded at three different positions on the chest and neck, along with
simultaneous electrocardiogram (ECG). (credit: Simon Gross et al./Nature
Communications)
The scientists carried out an experiment in which they positioned
their sensor in front of the chest of a volunteer test subject inside an
MRI scanner. They were able to detect periodic changes in the magnetic
field, which pulsated in time with the heartbeat. The measurement curve
is similar to an electrocardiogram (ECG), but measures a mechanical
process (the contraction of the heart) rather than electrical
conduction.
“We are in the process of analyzing and refining our magnetometer
measurement technique in collaboration with cardiologists and signal
processing experts,” says Prüssmann. “Ultimately, we hope that our
sensor will be able to provide information on heart disease — and do so
non-invasively and in real time.”
The new measurement technique could also be used in the development
of new contrast agents for magnetic resonance imaging and improved
nuclear magnetic resonance (NMR) spectroscopy for applications in
biological and chemical research.
A radiation-free approach to imaging molecules in the brain
Scientists
hoping to see molecules that control brain activity have devised a
probe that lets them image such molecules without using chemical or
radioactive labels. The sensors consist of proteins that detect a
particular target, which causes them to dilate blood vessels, producing a
change in blood flow that can be imaged with magnetic resonance imaging
(MRI) or other techniques. (credit: Mitul Desai et al./ Nature
Communications)
In a related development, MIT scientists hoping to get a glimpse of
molecules that control brain activity have devised a new sensor that
allows them to image these molecules without using any chemical or
radioactive labels (which feature low resolution and can’t be easily
used to watch dynamic events).
The new sensors consist of enzymes called proteases designed to
detect a particular target, which causes them to dilate blood vessels in
the immediate area. This produces a change in blood flow that can be
imaged with magnetic resonance imaging (MRI) or other imaging
techniques.*
A peptide called calcitonin gene-related peptide (CGRP)
acts on a receptor in smooth muscle cells (left) to induce cAMP
production, resulting in relaxation of vascular smooth muscle cells and
consequent vasodilation (middle). That induces haemodynamic effects
visible by MRI and other imaging methods (right). (credit: Mitul Desai
et al./ Nature Communications)
“This is an idea that enables us to detect molecules that are in the
brain at biologically low levels, and to do that with these imaging
agents or contrast agents that can ultimately be used in humans,” says
Alan Jasanoff, an MIT professor of biological engineering and brain and
cognitive sciences. “We can also turn them on and off, and that’s really
key to trying to detect dynamic processes in the brain.”
Monitoring neurotransmitters at 100 times lower levels
In a paper appearing in the Dec. 2 issue of open-access Nature Communications,
Jasanoff and his colleagues explain that they used proteases (sometimes
used as biomarkers to diagnose diseases such as cancer and Alzheimer’s
disease) to demonstrate the validity of their approach. But now they’re
working on adapting these imaging agents to monitor neurotransmitters,
such as dopamine and serotonin, which are critical to cognition and
processing emotions.
“What we want to be able to do is detect levels of neurotransmitter
that are 100-fold lower than what we’ve seen so far. We also want to be
able to use far less of these molecular imaging agents in organisms.
That’s one of the key hurdles to trying to bring this approach into
people,” Jasanoff says.
“Many behaviors involve turning on genes, and you could use this kind
of approach to measure where and when the genes are turned on in
different parts of the brain,” Jasanoff says.
His lab is also working on ways to deliver the peptides without
injecting them, which would require finding a way to get them to pass
through the blood-brain barrier. This barrier separates the brain from
circulating blood and prevents large molecules from entering the brain.
Jeff Bulte, a professor of radiology and radiological science at the
Johns Hopkins School of Medicine, described the technique as “original
and innovative,” while adding that its safety and long-term
physiological effects will require more study.
“It’s interesting that they have designed a reporter without using
any kind of metal probe or contrast agent,” says Bulte, who was not
involved in the research. “An MRI reporter that works really well is the
holy grail in the field of molecular and cellular imaging.”
The research was funded by the National Institutes of Health BRAIN
Initiative, the MIT Simons Center for the Social Brain, and fellowships
from the Boehringer Ingelheim Fonds and the Friends of the McGovern
Institute.
* To make their probes, the researchers modified a naturally
occurring peptide called calcitonin gene-related peptide (CGRP), which
is active primarily during migraines or inflammation. The researchers
engineered the peptides so that they are trapped within a protein cage
that keeps them from interacting with blood vessels. When the peptides
encounter proteases in the brain, the proteases cut the cages open and
the CGRP causes nearby blood vessels to dilate. Imaging this dilation
with MRI allows the researchers to determine where the proteases were
detected.
Another possible application for this type of imaging is to
engineer cells so that the gene for CGRP is turned on at the same time
that a gene of interest is turned on. That way, scientists could use the
CGRP-induced changes in blood flow to track which cells are expressing
the target gene, which could help them determine the roles of those
cells and genes in different behaviors. Jasanoff’s team demonstrated the
feasibility of this approach by showing that implanted cells expressing
CGRP could be recognized by imaging.
Abstract of Dynamic nuclear magnetic resonance field sensing with part-per-trillion resolution
High-field magnets of up to tens of teslas in strength advance
applications in physics, chemistry and the life sciences. However,
progress in generating such high fields has not been matched by
corresponding advances in magnetic field measurement. Based mostly on
nuclear magnetic resonance, dynamic high-field magnetometry is currently
limited to resolutions in the nanotesla range. Here we report a
concerted approach involving tailored materials, magnetostatics and
detection electronics to enhance the resolution of nuclear magnetic
resonance sensing by three orders of magnitude. The relative sensitivity
thus achieved amounts to 1 part per trillion (10^-12). To
exemplify this capability we demonstrate the direct detection and
relaxometry of nuclear polarization and real-time recording of dynamic
susceptibility effects related to human heart function. Enhanced
high-field magnetometry will generally permit a fresh look at magnetic
phenomena that scale with field strength. It also promises to facilitate
the development and operation of high-field magnets.
Abstract of Molecular imaging with engineered physiology
In vivo imaging techniques are powerful tools for evaluating
biological systems. Relating image signals to precise molecular
phenomena can be challenging, however, due to limitations of the
existing optical, magnetic and radioactive imaging probe mechanisms.
Here we demonstrate a concept for molecular imaging which bypasses the
need for conventional imaging agents by perturbing the endogenous
multimodal contrast provided by the vasculature. Variants of the
calcitonin gene-related peptide artificially activate vasodilation
pathways in rat brain and induce contrast changes that are readily
measured by optical and magnetic resonance imaging. CGRP-based agents
induce effects at nanomolar concentrations in deep tissue and can be
engineered into switchable analyte-dependent forms and genetically
encoded reporters suitable for molecular imaging or cell tracking. Such
artificially engineered physiological changes, therefore, provide a
highly versatile means for sensitive analysis of molecular events in
living organisms.
The Paleolithic diet, Paleo diet, caveman diet, or stone-age diet is a modern fad diet
requiring the sole or predominant consumption of foods presumed to have
been the only foods available to or consumed by humans during the Paleolithic era.
The digestive abilities of anatomically modern humans, however, are different from those of Paleolithic humans, which undermines the diet's core premise.[4]
During the 2.6-million-year-long Paleolithic era, the highly variable
climate and worldwide spread of human populations meant that humans
were, by necessity, nutritionally adaptable. Supporters of the diet
mistakenly presuppose that human digestion has remained essentially
unchanged over time.[4][5]
While there is wide variability in the way the paleo diet is interpreted,[6] the diet typically includes vegetables, fruits, nuts, roots, and meat, and typically excludes foods such as dairy products, grains, sugar, legumes, processed oils, salt, alcohol or coffee.[1][additional citation(s) needed] The diet is based on avoiding not just processed foods, but also the foods that humans began eating after the Neolithic Revolution when humans transitioned from hunter-gatherer lifestyles to settled agriculture.[3] The ideas behind the diet can be traced to Walter Voegtlin,[7]:41 and were popularized in the best-selling books of Loren Cordain.[8]
Like other fad diets, the Paleo diet is promoted as a way of improving health.[2]
There is some evidence that following this diet may lead to
improvements in terms of body composition and metabolic effects compared
with the typical Western diet[6] or compared with diets recommended by national nutritional guidelines.[9] There is no good evidence, however, that the diet helps with weight loss, other than through the normal mechanisms of calorie restriction.[10] Following the Paleo diet can lead to an inadequate calcium intake, and side effects can include weakness, diarrhea, and headaches.[3][10]
History and terminology
According to Adrienne Rose Johnson, the idea that the primitive diet
was superior to current dietary habits dates back to the 1890s with such
writers as Dr. Emmet Densmore and Dr. John Harvey Kellogg. Densmore proclaimed that "bread is the staff of death," while Kellogg supported a diet of starchy and grain-based foods.[11] The idea of a Paleolithic diet can be traced to a 1975 book by gastroenterologist Walter Voegtlin;[7]:41 it was further developed in 1985 by Stanley Boyd Eaton and Melvin Konner, and popularized by Loren Cordain in his 2002 book The Paleo Diet.[8] The terms caveman diet and stone-age diet are also used,[12] as is Paleo Diet, trademarked by Cordain.[13]
In 2012 the Paleolithic diet was described as being one of the
"latest trends" in diets, based on the popularity of diet books about
it;[14] in 2013 the diet was Google's most searched-for weight-loss method.[15]
The diet
advises eating only foods presumed to be available to Paleolithic
humans, but there is wide variability in people's understanding of what
foods these were, and an accompanying ongoing debate.[3]
In the original description of the paleo diet in Cordain's 2002
book, he advocated eating as much like Paleolithic people as possible,
which meant:[19]
55% of daily calories from seafood and lean meat, evenly divided
15% of daily calories from each of fruits, vegetables, and nuts and seeds
no dairy, almost no grains (which Cordain described as "starvation food" for Paleolithic people), no added salt, no added sugar
The diet is based on avoiding not just modern processed foods, but also the foods that humans began eating after the Neolithic Revolution.[3]
The scientific literature generally uses the term "Paleo nutrition pattern", which has been variously described as:
"Vegetables, fruits, nuts, roots, meat, and organ meats";[3]
"vegetables (including root vegetables), fruit (including fruit
oils, e.g., olive oil, coconut oil, and palm oil), nuts, fish, meat, and
eggs, and it excluded dairy, grain-based foods, legumes, extra sugar,
and nutritional products of industry (including refined fats and refined
carbohydrates)";[9] and
"avoids processed foods, and emphasizes eating vegetables, fruits, nuts and seeds, eggs, and lean meats".[6]
Health effects
Seeds such as walnuts are eaten as part of the diet.
The aspects of the Paleo diet that advise eating fewer processed
foods and less sugar and salt are consistent with mainstream advice
about diet.[1] Diets with a paleo nutrition pattern have some similarities to traditional ethnic diets such as the Mediterranean diet that have been found to be healthier than the Western diet.[3][6] Following the Paleo diet, however, can lead to nutritional deficiencies such as those of vitamin D and calcium, which in turn could lead to compromised bone health;[1][20] it can also lead to an increased risk of ingesting toxins from high fish consumption.[3]
Research into the weight loss effects of the paleolithic diet has generally been of poor quality.[10] One trial of obese
postmenopausal women found improvements in weight and fat loss after
six months, but the benefits had ceased by 24 months; side effects among
participants included "weakness, diarrhea, and headaches".[10] In general, any weight loss caused by the diet is merely the result of calorie restriction, rather than a special feature of the diet itself.[10]
As of 2016, data on the metabolic effects of a Paleo diet in humans are limited, and come from clinical trials too small to support statistically significant generalizations.[3][6][20][not in citation given]
These preliminary trials have found that participants eating a paleo
nutrition pattern had better measures of cardiovascular and metabolic
health than people eating a standard diet,[3][9] though the evidence is not strong enough to recommend the Paleo diet for treatment of metabolic syndrome.[9] As of 2014 there was no evidence the paleo diet is effective in treating inflammatory bowel disease.[21]
The rationale for the Paleolithic diet derives from proponents' claims relating to evolutionary medicine.[22] Advocates of the diet state that humans were genetically
adapted to eating specifically those foods that were readily available
to them in their local environments. These foods therefore shaped the
nutritional needs of Paleolithic humans. They argue that the physiology and metabolism of modern humans have changed little since the Paleolithic era.[23]
Natural selection is a long process, and the cultural and lifestyle
changes introduced by western culture have occurred quickly. The
argument is that modern humans have therefore not been able to adapt to
the new circumstances.[24] The agricultural revolution brought the addition of grains and dairy to the diet.[25]
According to the model from the evolutionary discordance hypothesis, "[M]any chronic diseases and degenerative conditions evident in modern Western populations have arisen because of a mismatch between Stone Age genes and modern lifestyles."[26]
Advocates of the modern Paleo diet have formed their dietary
recommendations based on this hypothesis. They argue that modern humans
should follow a diet that is nutritionally closer to that of their
Paleolithic ancestors.
The evolutionary discordance hypothesis is incomplete, since it is based mainly on a genetic understanding of the human diet and a single model of human ancestral diets, without taking into account the flexibility and variability of human dietary behaviors over time.[27]
Studies of a variety of populations around the world show that humans
can live healthily with a wide variety of diets, and that in fact,
humans have evolved to be flexible eaters.[28] Lactose tolerance
is an example of how some humans have adapted to the introduction of
dairy into their diet. While the introduction of grains, dairy, and
legumes during the Neolithic revolution
may have had some adverse effects on modern humans, if humans had not
been nutritionally adaptable, these technological developments would
have been dropped.[29]
Evolutionary biologist Marlene Zuk
writes that the idea that our genetic makeup today matches that of our
ancestors is misconceived, and that in debate Cordain was "taken aback"
when told that 10,000 years was "plenty of time" for an evolutionary
change in human digestive abilities to have taken place.[4]:114 On this basis Zuk dismisses Cordain's claim that the paleo diet is "the one and only diet that fits our genetic makeup".[4]
Diseases of affluence
Advocates of the diet argue that the increase in diseases of affluence
after the dawn of agriculture was caused by changes in diet, but others
have countered that it may be that pre-agricultural hunter-gatherers
did not suffer from the diseases of affluence because they did not live
long enough to develop them.[30] Based on the data from hunter-gatherer populations still in existence, it is estimated that at age 15, life expectancy was an additional 39 years, for a total age of 54.[31] At age 45, it is estimated that average life expectancy was an additional 19 years, for a total age of 64 years.[32][33] That is to say, in such societies, most deaths occurred in childhood or
young adulthood; thus, the population of elderly – and the prevalence
of diseases of affluence – was much reduced. Excessive food energy
intake relative to energy expended, rather than the consumption of
specific foods, is more likely to underlie the diseases of affluence.
"The health concerns of the industrial world, where calorie-packed foods
are readily available, stem not from deviations from a specific diet
but from an imbalance between the energy humans consume and the energy
humans spend."[34]
Adoption of the Paleolithic diet assumes that modern humans can reproduce the hunter-gatherer diet. Molecular biologist Marion Nestle
argues that "knowledge of the relative proportions of animal and plant
foods in the diets of early humans is circumstantial, incomplete, and
debatable and that there are insufficient data to identify the
composition of a genetically determined optimal diet. The evidence
related to Paleolithic diets is best interpreted as supporting the idea
that diets based largely on plant foods promote health and longevity, at
least under conditions of food abundance and physical activity."[35] Ideas about Paleolithic diet and nutrition are at best hypothetical.[36]
The data for Cordain's book only came from six contemporary hunter-gatherer groups, mainly living in marginal habitats.[37] One of the studies was on the !Kung, whose diet was recorded for a single month, and one was on the Inuit.[37][38][39] Due to these limitations, the book has been criticized as painting an incomplete picture of the diets of Paleolithic humans.[37] It has been noted that the rationale for the diet does not adequately account for the fact that, due to the pressures of artificial selection,
most modern domesticated plants and animals differ drastically from
their Paleolithic ancestors; likewise, their nutritional profiles are
very different from their ancient counterparts. For example, wild almonds produce potentially fatal levels of cyanide, but this trait has been bred out of domesticated varieties using artificial selection. Many vegetables, such as broccoli, did not exist in the Paleolithic period; broccoli, cabbage, cauliflower, and kale are modern cultivars of the ancient species Brassica oleracea.[29]
Trying to devise an ideal diet by studying contemporary
hunter-gatherers is difficult because of the great disparities that
exist; for example, the animal-derived calorie percentage ranges from
25% for the Gwi people of southern Africa to 99% for the Alaskan Nunamiut.[40]
Researchers have proposed that cooked starches met the energy
demands of an increasing brain size, based on variations in the copy
number of genes encoding for amylase.