
Tuesday, August 7, 2018

Aboriginal Australians

From Wikipedia, the free encyclopedia

Aboriginal Australians
Australian Aboriginal Flag
Total population
606,164 (2011)[1]
2.7% of Australia's population
Regions with significant populations (Aboriginal share of each state or territory's population)
Northern Territory – 29.8%
Queensland – 4.2%
Western Australia – 3.8%
New South Wales – 2.9%
South Australia – 2.3%
Victoria – 0.85%
Languages
Several hundred Indigenous Australian languages, many no longer spoken, Australian English, Australian Aboriginal English, Kriol
Religion
Majority Christian (mainly Anglican and Catholic),[2] large minority no religious affiliation,[2] small numbers of other religions, various local indigenous religions grounded in Australian Aboriginal mythology
Related ethnic groups
Torres Strait Islanders, Tasmanian Aboriginals, Papuans
Aboriginal dwellings in Hermannsburg, Northern Territory, 1923. Image: Herbert Basedow

Aboriginal Australians are legally defined as people who are members "of the Aboriginal race of Australia" (indigenous to mainland Australia or to the island of Tasmania).

Legal and administrative definitions

Aboriginal dancers in 1981
 
Arnhem Land artist at work
 

A new definition was proposed in the Constitutional Section of the Department of Aboriginal Affairs' Report on a Review of the Administration of the Working Definition of Aboriginal and Torres Strait Islanders (Canberra, 1981):
An Aboriginal or Torres Strait Islander is a person of Aboriginal or Torres Strait Islander descent who identifies as an Aboriginal or Torres Strait Islander and is accepted as such by the community in which he (she) lives.[7]
Justice Gerard Brennan in his leading judgment in Mabo v Queensland (No 2) stated:
Membership of the Indigenous people depends on biological descent from the Indigenous people and on mutual recognition of a particular person's membership by that person and by the elders or other persons enjoying traditional authority among those people.[7]
The category "Aboriginal Australia" was coined by the British after they began colonising Australia in 1788, to refer collectively to all people they found already inhabiting the continent, and later to the descendants of any of those people. Until the 1980s, the sole legal and administrative criterion for inclusion in this category was race, classified according to visible physical characteristics or known ancestors. As in the British slave colonies of North America and the Caribbean, where the principle of partus sequitur ventrem was adopted from 1662, children's status was determined by that of their mothers: if born to Aboriginal mothers, children were considered Aboriginal, regardless of their paternity.[8]
In the era of colonial and post-colonial government, access to basic human rights depended upon a person's race. A "full blooded Aboriginal native ... [or] any person apparently having an admixture of Aboriginal blood", a half-caste (the "offspring of an Aboriginal mother and other than Aboriginal father", but not of an Aboriginal father and other than Aboriginal mother), a "quadroon", or anyone with a "strain" of Aboriginal blood was forced to live on reserves or missions, made to work for rations, given minimal education, and required governmental approval to marry, visit relatives or use electrical appliances.[9]
The Constitution of Australia, in its original form as of 1901, referred to Aboriginals twice, but without definition. Section 51(xxvi) gave the Commonwealth parliament a power to legislate with respect to "the people of any race" throughout the Commonwealth, except for people of "the aboriginal race". The purpose of this provision was to give the Commonwealth power to regulate non-white immigrant workers, who would follow work opportunities interstate.[10] The only other reference, Section 127, provided that "aboriginal natives shall not be counted" in reckoning the size of the population of the Commonwealth or any part of it. The purpose of Section 127 was to prevent the inclusion of Aboriginal people in Section 24 determinations of the distribution of House of Representatives seats amongst the states and territories.[11]

After these references were removed by the 1967 referendum, the Australian Constitution had no references to Aboriginals. Since that time, there have been a number of proposals to amend the constitution to specifically mention Indigenous Australians.[12][13]

The change to Section 51(xxvi) enabled the Commonwealth parliament to enact laws specifically with respect to Aboriginal peoples as a "race". In the Tasmanian Dam Case of 1983, the High Court of Australia was asked to determine whether Commonwealth legislation, whose application could relate to Aboriginal people—parts of the World Heritage Properties Conservation Act 1983 (Cth) as well as related legislation—was supported by Section 51(xxvi) in its new form. The case concerned an application of legislation that would preserve cultural heritage of Aboriginal Tasmanians. It was held that Aboriginal Australians and Torres Strait Islanders, together or separately, and any part of either, could be regarded as a "race" for this purpose. As to the criteria for identifying a person as a member of such a "race", the definition by Justice Deane has become accepted as current law.[9] Deane said:
It is unnecessary, for the purposes of the present case, to consider the meaning to be given to the phrase "people of any race" in s. 51(xxvi). Plainly, the words have a wide and non-technical meaning ... . The phrase is, in my view, apposite to refer to all Australian Aboriginals collectively. Any doubt, which might otherwise exist in that regard, is removed by reference to the wording of par. (xxvi) in its original form. The phrase is also apposite to refer to any identifiable racial sub-group among Australian Aboriginals. By "Australian Aboriginal" I mean, in accordance with what I understand to be the conventional meaning of that term, a person of Aboriginal descent, albeit mixed, who identifies himself as such and who is recognised by the Aboriginal community as an Aboriginal.[14]
While Deane's three-part definition reaches beyond the biological criterion to an individual's self-identification, it has been criticised as continuing to accept the biological criterion as primary.[9] It has been found difficult to apply, both in each of its parts and as to the relations among the parts; biological "descent" has been a fall-back criterion.[15]

Definitions from Aboriginal Australians

Eve Fesl, a Gabi-Gabi woman, wrote in the Aboriginal Law Bulletin describing how she and possibly other Aboriginal people preferred to be identified:
The word 'aborigine' refers to an indigenous person of any country. If it is to be used to refer to us as a specific group of people, it should be spelt with a capital 'A', i.e., 'Aborigine'.[16]
While the term 'indigenous' is being more commonly used by Australian Government and non-Government organisations to describe Aboriginal Australians, Lowitja O'Donoghue, commenting on the prospect of possible amendments to Australia's constitution, said:
I really can't tell you of a time when 'indigenous' became current, but I personally have an objection to it, and so do many other Aboriginal and Torres Strait Islander people. ... This has just really crept up on us ... like thieves in the night. ... We are very happy with our involvement with indigenous people around the world, on the international forum ... because they're our brothers and sisters. But we do object to it being used here in Australia.[17]
O'Donoghue said that the term indigenous robbed the traditional owners of Australia of an identity because some non-Aboriginal people now wanted to refer to themselves as indigenous because they were born there.[17]

Definitions from academia

Dean of Indigenous Research and Education at Charles Darwin University, Professor MaryAnn Bin-Sallik, has lectured on the ways Aboriginal Australians have been categorised and labelled over time. Her lecture offered a new perspective on the terms urban, traditional and of Indigenous descent as used to define and categorise Aboriginal Australians:
Not only are these categories inappropriate, they serve to divide us. ... Government's insistence on categorising us with modern words like 'urban', 'traditional' and 'of Aboriginal descent' are really only replacing old terms 'half-caste' and 'full-blood' – based on our colouring.[18]
She called for a replacement of this terminology by that of "Aborigine" or "Torres Strait Islander" – "irrespective of hue".[18] It could be argued that the indigenous tribes of Papua New Guinea and Western New Guinea (Indonesia) are more closely related to Aboriginal Australians than to any tribes found in Indonesia; however, due to ongoing conflict in the West Papua region, these tribes are being marginalized from their closest relations.[19][20]

Origins

Scholars had disagreed whether the closest kin of Aboriginal Australians outside Australia were certain South Asian groups or African groups. The latter would imply a migration pattern in which their ancestors passed through South Asia to Australia without intermingling genetically with other populations along the way.[21]
In a 2011 genetic study by Rasmussen et al., researchers took a DNA sample from an early 20th century lock of an Aboriginal person's hair with low European admixture. They found that the ancestors of the Aboriginal population split off from the Eurasian population between 62,000 and 75,000 years BP, whereas the European and Asian populations split only 25,000 to 38,000 years BP, indicating an extended period of Aboriginal genetic isolation. These Aboriginal ancestors migrated into South Asia and then into Australia, where they stayed, with the result that, outside of Africa, the Aboriginal peoples have occupied the same territory continuously longer than any other human populations. These findings suggest that modern Aboriginal peoples are the direct descendants of migrants who left Africa up to 75,000 years ago.[22][23] This finding is compatible with earlier archaeological finds of human remains near Lake Mungo that date to approximately 40,000 years ago.

The same 2011 genetic study found evidence that Aboriginal peoples carry some of the genes associated with the Denisovan peoples of Asia (a species of human related to but distinct from Neanderthals); the study suggests that there is an increase in allele sharing between the Denisovan genome and the genomes of Aboriginal Australians compared to other Eurasians and Africans. Examining DNA from a finger bone excavated in Siberia, researchers concluded that the Denisovans migrated from Siberia to tropical parts of Asia and that they interbred with modern humans in South-East Asia 44,000 years ago, before Australia separated from Papua New Guinea approximately 11,700 years BP. They contributed DNA to Aboriginal Australians along with present-day New Guineans and an indigenous tribe in the Philippines known as Mamanwa.[citation needed] This study makes Aboriginal Australians one of the oldest living populations in the world and possibly the oldest outside of Africa, confirming that they may also have the oldest continuous culture on the planet.[24] The Papuans share more alleles with Denisovans than Aboriginal peoples do.[clarification needed] The data suggest that modern and archaic humans interbred in Asia before the migration to Australia.[25]

One 2017 paper in Nature evaluated artifacts in Kakadu and concluded "Human occupation began around 65,000 years ago".[26]

A 2013 study by the Max Planck Institute for Evolutionary Anthropology found that there was a migration of genes from India to Australia around 2000 BCE. The researchers had two theories for this: either some Indians had contact with people in Indonesia who eventually transferred those genes from India to Australian Aborigines, or a group of Indians migrated all the way from India to Australia and intermingled with the locals directly. Their research also shows that these new arrivals came at a time when dingoes first appeared in the fossil record, and when Aboriginal peoples first used microliths in hunting. In addition, they arrived just as one of the Aboriginal language groups was undergoing a rapid expansion.[27][28]

In a 2001 study, blood samples were collected from some Warlpiri members of the Northern Territory to study the genetic makeup of the Warlpiri Tribe of Aboriginal Australians, who are not representative of all Aboriginal Tribes in Australia. The study concluded that the Warlpiri are descended from ancient Asians whose DNA is still somewhat present in Southeastern Asian groups, although greatly diminished. The Warlpiri DNA also lacks certain information found in modern Asian genomes, and carries information not found in other genomes, reinforcing the idea of ancient Aboriginal isolation.[29]

Aboriginal Australians are genetically most similar to the indigenous populations of Papua New Guinea, and more distantly related to groups from East India. They are quite distinct from the indigenous populations of Borneo and Malaysia, sharing relatively little genomic information as compared to the groups from Papua New Guinea and India. This indicates that Australia was isolated for a long time from the rest of Southeast Asia, and remained untouched by migrations and population expansions into that area.[29]

Australian Aborigines are genetically adapted to withstand a wide range of environmental temperatures. They have been observed sleeping naked on the ground at night in below-freezing conditions, in deserts where daytime temperatures easily rise above 40 degrees Celsius. By the same token, Tasmanian Aborigines would sleep in snow drifts with nothing on apart from an animal skin. According to the April 2017 edition of National Geographic magazine, this ability of Australian Aborigines is believed to be due to a beneficial mutation in the genes which regulate the hormones that control body temperature.[30]

Health

Aboriginal Australians have disproportionately high rates[31] of severe physical disability, as much as three times those of non-Aboriginal Australians, possibly due to higher rates of chronic diseases such as diabetes and kidney disease. In a study comparing Aboriginal Australians with non-Aboriginal Australians, obesity and smoking rates were higher among Aboriginal people; both are contributing factors to or causes of serious health issues. The study also showed that Aboriginal Australians were more likely to self-report their health as "excellent/very good" despite having severe physical limitations.
An article on 20 January 2017 in The Lancet describes the suicide rate among Aboriginal Australians as a "catastrophic crisis":
In 2015, more than 150 [Aborigines] died by suicide, the highest figure ever recorded nationally and double the rate of [non-Aborigines], according to the Australian Bureau of Statistics. Additionally, [Aboriginal] children make up one in three child suicides despite making up a minuscule percentage of the population. Moreover, in parts of the country such as Kimberley, WA, suicide rates among [Aborigines] are among the highest in the world.
The report advocates an Aboriginal-led national response to the crisis, asserting that suicide prevention programmes have failed this segment of the population.[32] Aboriginal ex-prisoners are at particular risk of suicide; organisations such as Ngalla Maya have been set up to offer assistance.[33]

One study reports that Aboriginal Australians are significantly affected by infectious diseases, particularly in rural areas.[34] These diseases include strongyloidiasis, hookworm caused by Ancylostoma duodenale, scabies, and streptococcal infections. Because poverty is also prevalent in Aboriginal populations, the need for medical assistance is even greater in many Aboriginal Australian communities. The researchers suggested the use of mass drug administration (MDA) as a method of combating the diseases found commonly among Aboriginal peoples, while also highlighting the importance of "sanitation, access to clean water, good food, integrated vector control and management, childhood immunizations, and personal and family hygiene".[34]

Another study examining the psychosocial functioning of high-risk-exposed and low-risk-exposed Aboriginal Australians aged 12–17 found that in high-risk youths, personal well-being was protected by a sense of solidarity and common low socioeconomic status. However, in low-risk youths, perceptions of racism caused poor psychosocial functioning. The researchers suggested that factors such as racism, discrimination and alienation contributed to physiological health risks in ethnic minority families. The study also mentioned the effect of poverty on Aboriginal populations: higher morbidity and mortality rates.[35]

Aboriginal Australians suffer from high rates of heart disease. Cardiovascular diseases are the leading cause of death worldwide and among Aboriginal Australians. Aboriginal people develop atrial fibrillation, a condition that sharply increases the risk of stroke, much earlier on average than non-Aboriginal Australians. The life expectancy of Aboriginal Australians is 10 years lower than that of non-Aboriginal Australians. Technologies such as wireless ambulatory ECGs are being developed to screen at-risk individuals, particularly rural Australians, for atrial fibrillation.[36]

The incidence rate of cancer was lower in Aboriginal Australians than in non-Aboriginal Australians in 2005–2009.[37] However, some cancers, including lung cancer and liver cancer, were significantly more common in Aboriginal people. The overall mortality rate of Aboriginal Australians due to cancer was 1.3 times that of non-Aboriginal Australians in 2013. This may be because they are less likely to receive the necessary treatments in time, or because the cancers that they tend to develop are often more lethal than other cancers.

Tobacco usage

According to the Australian Bureau of Statistics, a large number of Aboriginal Australians use tobacco, perhaps 41% of people aged 15 and up.[38] This number has declined in recent years, but remains relatively high. The smoking rate is roughly equal for men and women across all age groups, but the smoking rate is much higher in rural than in urban areas. The prevalence of smoking exacerbates existing health problems such as cardiovascular diseases and cancer. The Australian government has encouraged its citizens, both Aboriginal and non-Aboriginal, to stop smoking or to not start.

Alcohol usage

In the Northern Territory (which has the greatest proportion of Aboriginal Australians), per capita alcohol consumption for adults is 1.5 times the national average. Nearly half of Aboriginal adults in the Northern Territory reported alcohol usage. In addition to the inherent risks associated with alcohol use, its consumption also tends to increase domestic violence. Aboriginal people account for 60% of the facial fracture victims in the Northern Territory, though they only constitute approximately 30% of its population. Due to the complex nature of the alcohol and domestic violence issue in the Northern Territory, proposed solutions are contentious. However, there has recently been increased media attention to this problem.[39]
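To spell out the arithmetic behind that last comparison, here is a small sketch in Python; the 30% population share is the approximate figure quoted above, and the rate ratio it produces is only as precise as those rounded inputs.

    # Aboriginal people are ~30% of the Northern Territory's population but
    # ~60% of its facial fracture victims. The implied rate ratio compares
    # victims per Aboriginal resident with victims per non-Aboriginal resident.
    population_share = 0.30
    victim_share = 0.60

    rate_ratio = (victim_share / population_share) / (
        (1 - victim_share) / (1 - population_share)
    )

    print(round(rate_ratio, 1))  # 3.5, i.e. roughly 3.5 times the non-Aboriginal rate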

Diet

Modern Aboriginal Australians living in rural areas tend to have nutritionally poor diets, where higher food costs drive people to consume cheaper, lower quality foods. The average diet is high in refined carbohydrates and salt, and low in fruit and vegetables. There are several challenges in improving diets for Aboriginal Australians, such as shorter shelf lives of fresh foods, resistance to changing existing consumption habits, and disagreements on how to implement changes. Some suggest the use of taxes on unhealthy foods and beverages to discourage their consumption, but this approach is questionable. Providing subsidies for healthy foods has proven effective in other countries, but has yet to be proven useful for Aboriginal Australians specifically.[40]

Groups

Dispersing across the Australian continent over time, the ancient people expanded and differentiated into distinct groups, each with its own language and culture.[41] More than 400 distinct Australian Aboriginal peoples have been identified, distinguished by names designating their ancestral languages, dialects, or distinctive speech patterns.[42] Historically, these groups lived in three main cultural areas, the Northern, Southern, and Central cultural areas. The Northern and Southern areas, having richer natural marine and woodland resources, were more densely populated than the Central area.[41]

Names used by Aboriginal Australian people

There are various other names from Australian Aboriginal languages commonly used to identify groups based on geography, including:

Anarchist Analysis

Several anarchists (particularly anarcho-primitivists), such as Harold Barclay and Bob Black, have praised Aboriginal culture as showing the potential long life,[44] high living standards and beauty of an anarchist society.[45] Other anarchists, such as Peter Gelderloos, have been critical of several Aboriginal societies for their violent and patriarchal nature.

MRI breakthroughs include ultra-sensitive MRI magnetic field sensing, more-sensitive monitoring without chemical or radioactive labels

Heart mechanical contractions recorded in MRI machine for first time; hope to monitor neurotransmitters at 100 times lower levels
December 30, 2016
Original link:  http://www.kurzweilai.net/mri-breakthroughs-include-ultra-sensitive-mri-magnetic-field-sensing-more-sensitive-monitoring-without-chemical-or-radioactive-labels
Highly sensitive magnetic field sensor (credit: ETH Zurich/Peter Rüegg)

Swiss researchers have succeeded in measuring changes in strong magnetic fields with unprecedented precision, they report in the open-access journal Nature Communications. The technique may find widespread use in medicine and other areas.

In their experiments, the researchers at the Institute for Biomedical Engineering, which is operated jointly by ETH Zurich and the University of Zurich, magnetized a water droplet inside a magnetic resonance imaging (MRI) scanner, a device used for medical imaging. The researchers were able to detect even the tiniest variations of the magnetic field strength within the droplet. These changes were up to a trillion times (a factor of 10⁻¹²) smaller than the 7 tesla field strength of the MRI scanner used in the experiment.

“Until now, it was possible only to measure such small variations in weak magnetic fields,” says Klaas Prüssmann, Professor of Bioimaging at ETH Zurich and the University of Zurich. An example of a weak magnetic field is that of the Earth, where the field strength is just a few dozen microtesla. For fields of this kind, highly sensitive measurement methods are already able to detect variations of about a trillionth of the field strength, says Prüssmann. “Now, we have a similarly sensitive method for strong fields of more than one tesla, such as those used … in medical imaging.”
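To make these relative sensitivities concrete, here is a minimal back-of-the-envelope sketch in Python. The 7 tesla scanner field and the part-per-trillion relative resolution come from the article and the abstract below; the 50 microtesla Earth-field value is an assumed stand-in for "a few dozen microtesla".

    # Absolute field changes resolvable at a relative resolution of one part
    # per trillion (1e-12), for a 7 T scanner field and an assumed ~50 uT Earth field.
    SCANNER_FIELD_T = 7.0        # strong MRI background field, in tesla
    EARTH_FIELD_T = 50e-6        # "a few dozen microtesla" (assumed 50 uT)
    RELATIVE_RESOLUTION = 1e-12  # one part per trillion

    def smallest_detectable_change(background_field_tesla):
        """Smallest field variation resolvable at part-per-trillion resolution."""
        return background_field_tesla * RELATIVE_RESOLUTION

    print(smallest_detectable_change(SCANNER_FIELD_T))  # 7e-12 T, about 7 picotesla
    print(smallest_detectable_change(EARTH_FIELD_T))    # 5e-17 T, about 50 attotesla

In other words, at 7 tesla the method resolves changes on the order of picotesla, matching in relative terms what weak-field magnetometers already achieve in the Earth's field.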

The scientists based the sensing technique on the principle of nuclear magnetic resonance (NMR), which also serves as the basis for magnetic resonance imaging and the spectroscopic methods that biologists use to elucidate the 3D structure of molecules, but with 1000 times greater sensitivity than current NMR methods.

Ultra-sensitive recordings of heart contractions in an MRI machine

Real-time magnetic field recordings of cardiac activity. Magnetic field dynamics generated by the beating human heart in a background of 7 tesla, recorded at three different positions on the chest and neck, along with simultaneous electrocardiogram (ECG). (credit: Simon Gross et al./Nature Communications)

The scientists carried out an experiment in which they positioned their sensor in front of the chest of a volunteer test subject inside an MRI scanner. They were able to detect periodic changes in the magnetic field, which pulsated in time with the heartbeat. The measurement curve is similar to an electrocardiogram (ECG), but measures a mechanical process (the contraction of the heart) rather than electrical conduction.

“We are in the process of analyzing and refining our magnetometer measurement technique in collaboration with cardiologists and signal processing experts,” says Prüssmann. “Ultimately, we hope that our sensor will be able to provide information on heart disease — and do so non-invasively and in real time.”

The new measurement technique could also be used in the development of new contrast agents for magnetic resonance imaging and improved nuclear magnetic resonance (NMR) spectroscopy for applications in biological and chemical research.

A radiation-free approach to imaging molecules in the brain

Scientists hoping to see molecules that control brain activity have devised a probe that lets them image such molecules without using chemical or radioactive labels. The sensors consist of proteins that detect a particular target, which causes them to dilate blood vessels, producing a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other techniques. (credit: Mitul Desai et al./ Nature Communications)

In a related development, MIT scientists hoping to get a glimpse of molecules that control brain activity have devised a new sensor that allows them to image these molecules without using any chemical or radioactive labels (which feature low resolution and can’t be easily used to watch dynamic events).

The new sensors consist of enzymes called proteases designed to detect a particular target, which causes them to dilate blood vessels in the immediate area. This produces a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other imaging techniques.*

A peptide called calcitonin gene-related peptide (CGRP) acts on a receptor in smooth muscle cells (left) to induce cAMP production, resulting in relaxation of vascular smooth muscle cells and consequent vasodilation (middle). That induces haemodynamic effects visible by MRI and other imaging methods (right). (credit: Mitul Desai et al./ Nature Communications)

“This is an idea that enables us to detect molecules that are in the brain at biologically low levels, and to do that with these imaging agents or contrast agents that can ultimately be used in humans,” says Alan Jasanoff, an MIT professor of biological engineering and brain and cognitive sciences. “We can also turn them on and off, and that’s really key to trying to detect dynamic processes in the brain.”

Monitoring neurotransmitters at 100 times lower levels

In a paper appearing in the Dec. 2 issue of open-access Nature Communications, Jasanoff and his colleagues explain that they used proteases (sometimes used as biomarkers to diagnose diseases such as cancer and Alzheimer’s disease) to demonstrate the validity of their approach. But now they’re working on adapting these imaging agents to monitor neurotransmitters, such as dopamine and serotonin, which are critical to cognition and processing emotions.

“What we want to be able to do is detect levels of neurotransmitter that are 100-fold lower than what we’ve seen so far. We also want to be able to use far less of these molecular imaging agents in organisms. That’s one of the key hurdles to trying to bring this approach into people,” Jasanoff says.

“Many behaviors involve turning on genes, and you could use this kind of approach to measure where and when the genes are turned on in different parts of the brain,” Jasanoff says.

His lab is also working on ways to deliver the peptides without injecting them, which would require finding a way to get them to pass through the blood-brain barrier. This barrier separates the brain from circulating blood and prevents large molecules from entering the brain.

Jeff Bulte, a professor of radiology and radiological science at the Johns Hopkins School of Medicine, described the technique as “original and innovative,” while adding that its safety and long-term physiological effects will require more study.

“It’s interesting that they have designed a reporter without using any kind of metal probe or contrast agent,” says Bulte, who was not involved in the research. “An MRI reporter that works really well is the holy grail in the field of molecular and cellular imaging.”

The research was funded by the National Institutes of Health BRAIN Initiative, the MIT Simons Center for the Social Brain, and fellowships from the Boehringer Ingelheim Fonds and the Friends of the McGovern Institute.

* To make their probes, the researchers modified a naturally occurring peptide called calcitonin gene-related peptide (CGRP), which is active primarily during migraines or inflammation. The researchers engineered the peptides so that they are trapped within a protein cage that keeps them from interacting with blood vessels. When the peptides encounter proteases in the brain, the proteases cut the cages open and the CGRP causes nearby blood vessels to dilate. Imaging this dilation with MRI allows the researchers to determine where the proteases were detected.

Another possible application for this type of imaging is to engineer cells so that the gene for CGRP is turned on at the same time that a gene of interest is turned on. That way, scientists could use the CGRP-induced changes in blood flow to track which cells are expressing the target gene, which could help them determine the roles of those cells and genes in different behaviors. Jasanoff’s team demonstrated the feasibility of this approach by showing that implanted cells expressing CGRP could be recognized by imaging.


Abstract of Dynamic nuclear magnetic resonance field sensing with part-per-trillion resolution

High-field magnets of up to tens of teslas in strength advance applications in physics, chemistry and the life sciences. However, progress in generating such high fields has not been matched by corresponding advances in magnetic field measurement. Based mostly on nuclear magnetic resonance, dynamic high-field magnetometry is currently limited to resolutions in the nanotesla range. Here we report a concerted approach involving tailored materials, magnetostatics and detection electronics to enhance the resolution of nuclear magnetic resonance sensing by three orders of magnitude. The relative sensitivity thus achieved amounts to 1 part per trillion (10−12). To exemplify this capability we demonstrate the direct detection and relaxometry of nuclear polarization and real-time recording of dynamic susceptibility effects related to human heart function. Enhanced high-field magnetometry will generally permit a fresh look at magnetic phenomena that scale with field strength. It also promises to facilitate the development and operation of high-field magnets.

Abstract of Molecular imaging with engineered physiology

In vivo imaging techniques are powerful tools for evaluating biological systems. Relating image signals to precise molecular phenomena can be challenging, however, due to limitations of the existing optical, magnetic and radioactive imaging probe mechanisms. Here we demonstrate a concept for molecular imaging which bypasses the need for conventional imaging agents by perturbing the endogenous multimodal contrast provided by the vasculature. Variants of the calcitonin gene-related peptide artificially activate vasodilation pathways in rat brain and induce contrast changes that are readily measured by optical and magnetic resonance imaging. CGRP-based agents induce effects at nanomolar concentrations in deep tissue and can be engineered into switchable analyte-dependent forms and genetically encoded reporters suitable for molecular imaging or cell tracking. Such artificially engineered physiological changes, therefore, provide a highly versatile means for sensitive analysis of molecular events in living organisms.

Paleolithic diet

From Wikipedia, the free encyclopedia

Wild fruit is an important feature of the diet

The Paleolithic diet, Paleo diet, caveman diet, or stone-age diet is a modern fad diet requiring the sole or predominant consumption of foods presumed to have been the only foods available to or consumed by humans during the Paleolithic era.

The digestive abilities of anatomically modern humans, however, are different from those of Paleolithic humans, which undermines the diet's core premise.[4] During the 2.6-million-year-long Paleolithic era, the highly variable climate and worldwide spread of human populations meant that humans were, by necessity, nutritionally adaptable. Supporters of the diet mistakenly presuppose that human digestion has remained essentially unchanged over time.[4][5]

While there is wide variability in the way the paleo diet is interpreted,[6] the diet typically includes vegetables, fruits, nuts, roots, and meat and typically excludes foods such as dairy products, grains, sugar, legumes, processed oils, salt, alcohol and coffee.[1][additional citation(s) needed] The diet is based on avoiding not just processed foods, but also the foods that humans began eating after the Neolithic Revolution, when humans transitioned from hunter-gatherer lifestyles to settled agriculture.[3] The ideas behind the diet can be traced to Walter Voegtlin,[7]:41 and were popularized in the best-selling books of Loren Cordain.[8]

Like other fad diets, the Paleo diet is promoted as a way of improving health.[2] There is some evidence that following this diet may lead to improvements in terms of body composition and metabolic effects compared with the typical Western diet[6] or compared with diets recommended by national nutritional guidelines.[9] There is no good evidence, however, that the diet helps with weight loss, other than through the normal mechanisms of calorie restriction.[10] Following the Paleo diet can lead to an inadequate calcium intake, and side effects can include weakness, diarrhea, and headaches.[3][10]

History and terminology

According to Adrienne Rose Johnson, the idea that the primitive diet was superior to current dietary habits dates back to the 1890s with such writers as Dr. Emmet Densmore and Dr. John Harvey Kellogg. Densmore proclaimed that "bread is the staff of death," while Kellogg supported a diet of starchy and grain-based foods.[11] The idea of a Paleolithic diet can be traced to a 1975 book by gastroenterologist Walter Voegtlin,[7]:41 which in 1985 was further developed by Stanley Boyd Eaton and Melvin Konner, and popularized by Loren Cordain in his 2002 book The Paleo Diet.[8] The terms caveman diet and stone-age diet are also used,[12] as is Paleo Diet, trademarked by Cordain.[13]

In 2012 the Paleolithic diet was described as being one of the "latest trends" in diets, based on the popularity of diet books about it;[14] in 2013 the diet was Google's most searched-for weight-loss method.[15]

Like other fad diets, the paleo diet is marketed with an appeal to nature and a narrative of conspiracy theories about how nutritional research, which does not support the supposed benefits of the paleo diet, is controlled by a malign food industry.[2][16] A Paleo lifestyle and ideology have developed around the diet.[17][18]

Foods

The diet advises eating only foods presumed to be available to Paleolithic humans, but there is wide variability in people's understanding of what foods these were, and an accompanying ongoing debate.[3]

In the original description of the paleo diet in Cordain's 2002 book, he advocated eating as much like Paleolithic people as possible, which meant:[19]
  • 55% of daily calories from seafood and lean meat, evenly divided
  • 15% of daily calories from each of fruits, vegetables, and nuts and seeds
  • no dairy, almost no grains (which Cordain described as "starvation food" for Paleolithic people), no added salt, no added sugar
The diet is based on avoiding not just modern processed foods, but also the foods that humans began eating after the Neolithic Revolution.[3]
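As a rough illustration of how those proportions divide up a day's energy intake, here is a minimal sketch in Python; the 2,000 kcal reference intake is an assumed illustrative figure, not a number from Cordain's book.

    # Split an assumed 2,000 kcal daily intake according to the proportions above:
    # 55% from seafood and lean meat (evenly divided), 15% each from fruits,
    # vegetables, and nuts and seeds.
    DAILY_KCAL = 2000

    shares = {
        "seafood": 0.275,      # half of the 55% animal-food share
        "lean meat": 0.275,    # the other half
        "fruits": 0.15,
        "vegetables": 0.15,
        "nuts and seeds": 0.15,
    }

    for food, share in shares.items():
        print(f"{food:>15}: {share:6.1%} = {share * DAILY_KCAL:6.0f} kcal")

    # The shares sum to 100%, leaving no allowance for dairy, grains,
    # added salt or added sugar, which the diet excludes.
    assert abs(sum(shares.values()) - 1.0) < 1e-9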

The scientific literature generally uses the term "Paleo nutrition pattern", which has been variously described as:
  • "Vegetables, fruits, nuts, roots, meat, and organ meats";[3]
  • "vegetables (including root vegetables), fruit (including fruit oils, e.g., olive oil, coconut oil, and palm oil), nuts, fish, meat, and eggs, and it excluded dairy, grain-based foods, legumes, extra sugar, and nutritional products of industry (including refined fats and refined carbohydrates)";[9] and
  • "avoids processed foods, and emphasizes eating vegetables, fruits, nuts and seeds, eggs, and lean meats".[6]

Health effects

Seeds such as walnuts are eaten as part of the diet.

The aspects of the Paleo diet that advise eating fewer processed foods and less sugar and salt are consistent with mainstream advice about diet.[1] Diets with a paleo nutrition pattern have some similarities to traditional ethnic diets such as the Mediterranean diet that have been found to be healthier than the Western diet.[3][6] Following the Paleo diet, however, can lead to nutritional deficiencies such as those of vitamin D and calcium, which in turn could lead to compromised bone health;[1][20] it can also lead to an increased risk of ingesting toxins from high fish consumption.[3]

Research into the weight loss effects of the paleolithic diet has generally been of poor quality.[10] One trial of obese postmenopausal women found improvements in weight and fat loss after six months, but the benefits had ceased by 24 months; side effects among participants included "weakness, diarrhea, and headaches".[10] In general, any weight loss caused by the diet is merely the result of calorie restriction, rather than a special feature of the diet itself.[10]

As of 2016 there are limited data on the metabolic effects on humans eating a Paleo diet, but the data are based on clinical trials that have been too small to have a statistical significance sufficient to allow the drawing of generalizations.[3][6][20][not in citation given] These preliminary trials have found that participants eating a paleo nutrition pattern had better measures of cardiovascular and metabolic health than people eating a standard diet,[3][9] though the evidence is not strong enough to recommend the Paleo diet for treatment of metabolic syndrome.[9] As of 2014 there was no evidence the paleo diet is effective in treating inflammatory bowel disease.[21]

Rationale and counter-arguments

Paleolithic carving of a mammoth

Adaptation

The rationale for the Paleolithic diet derives from proponents' claims relating to evolutionary medicine.[22] Advocates of the diet state that humans were genetically adapted to eating specifically those foods that were readily available to them in their local environments. These foods therefore shaped the nutritional needs of Paleolithic humans. They argue that the physiology and metabolism of modern humans have changed little since the Paleolithic era.[23] Natural selection is a long process, and the cultural and lifestyle changes introduced by western culture have occurred quickly. The argument is that modern humans have therefore not been able to adapt to the new circumstances.[24] The agricultural revolution brought the addition of grains and dairy to the diet.[25]

According to the model from the evolutionary discordance hypothesis, "[M]any chronic diseases and degenerative conditions evident in modern Western populations have arisen because of a mismatch between Stone Age genes and modern lifestyles."[26] Advocates of the modern Paleo diet have formed their dietary recommendations based on this hypothesis. They argue that modern humans should follow a diet that is nutritionally closer to that of their Paleolithic ancestors.

The evolutionary discordance hypothesis is incomplete, since it is based mainly on the genetic understanding of the human diet and a unique model of human ancestral diets, without taking into account the flexibility and variability of human dietary behaviors over time.[27] Studies of a variety of populations around the world show that humans can live healthily with a wide variety of diets, and that in fact, humans have evolved to be flexible eaters.[28] Lactose tolerance is an example of how some humans have adapted to the introduction of dairy into their diet. While the introduction of grains, dairy, and legumes during the Neolithic revolution may have had some adverse effects on modern humans, if humans had not been nutritionally adaptable, these technological developments would have been dropped.[29]

Evolutionary biologist Marlene Zuk writes that the idea that our genetic makeup today matches that of our ancestors is misconceived, and that in debate Cordain was "taken aback" when told that 10,000 years was "plenty of time" for an evolutionary change in human digestive abilities to have taken place.[4]:114 On this basis Zuk dismisses Cordain's claim that the paleo diet is "the one and only diet that fits our genetic makeup".[4]

Diseases of affluence

Advocates of the diet argue that the increase in diseases of affluence after the dawn of agriculture was caused by changes in diet, but others have countered that it may be that pre-agricultural hunter-gatherers did not suffer from the diseases of affluence because they did not live long enough to develop them.[30] Based on the data from hunter-gatherer populations still in existence, it is estimated that at age 15, life expectancy was an additional 39 years, for a total age of 54.[31] At age 45, it is estimated that average life expectancy was an additional 19 years, for a total age of 64 years.[32][33] That is to say, in such societies, most deaths occurred in childhood or young adulthood; thus, the population of elderly – and the prevalence of diseases of affluence – was much reduced. Excessive food energy intake relative to energy expended, rather than the consumption of specific foods, is more likely to underlie the diseases of affluence. "The health concerns of the industrial world, where calorie-packed foods are readily available, stem not from deviations from a specific diet but from an imbalance between the energy humans consume and the energy humans spend."[34]

Historical diet

Brassica oleracea, an edible wild plant

Adoption of the Paleolithic diet assumes that modern humans can reproduce the hunter-gatherer diet. Molecular biologist Marion Nestle argues that "knowledge of the relative proportions of animal and plant foods in the diets of early humans is circumstantial, incomplete, and debatable and that there are insufficient data to identify the composition of a genetically determined optimal diet. The evidence related to Paleolithic diets is best interpreted as supporting the idea that diets based largely on plant foods promote health and longevity, at least under conditions of food abundance and physical activity."[35] Ideas about Paleolithic diet and nutrition are at best hypothetical.[36]

The data for Cordain's book only came from six contemporary hunter-gatherer groups, mainly living in marginal habitats.[37] One of the studies was on the !Kung, whose diet was recorded for a single month, and one was on the Inuit.[37][38][39] Due to these limitations, the book has been criticized as painting an incomplete picture of the diets of Paleolithic humans.[37] It has been noted that the rationale for the diet does not adequately account for the fact that, due to the pressures of artificial selection, most modern domesticated plants and animals differ drastically from their Paleolithic ancestors; likewise, their nutritional profiles are very different from their ancient counterparts. For example, wild almonds produce potentially fatal levels of cyanide, but this trait has been bred out of domesticated varieties using artificial selection. Many vegetables, such as broccoli, did not exist in the Paleolithic period; broccoli, cabbage, cauliflower, and kale are modern cultivars of the ancient species Brassica oleracea.[29]

Trying to devise an ideal diet by studying contemporary hunter-gatherers is difficult because of the great disparities that exist; for example, the animal-derived calorie percentage ranges from 25% for the Gwi people of southern Africa to 99% for the Alaskan Nunamiut.[40]

Researchers have proposed that cooked starches met the energy demands of an increasing brain size, based on variations in the copy number of genes encoding for amylase.

Monday, August 6, 2018

Prehistoric medicine

From Wikipedia, the free encyclopedia

A skull showing evidence of trepanning

Prehistoric medicine is any use of medicine from before the invention of writing and the documented history of medicine. Because the timing of the invention of writing varies per culture and region, the term "prehistoric medicine" encompasses a wide range of time periods and dates.

The study of prehistoric medicine relies heavily on artifacts and human remains, and on anthropology. Previously uncontacted peoples and certain indigenous peoples who live in a traditional way have been the subject of anthropological studies in order to gain insight into both contemporary and ancient practices.

Disease and mortality

Different diseases and ailments were more common in prehistory than today; there is evidence that many people suffered from osteoarthritis, probably caused by the lifting of heavy objects, which would have been a daily and necessary task in their societies.[citation needed] For example, the transport of latte stones (a practice which only began during the Neolithic era) involved hyperextension and torque of the lower back while dragging the stones, and may have contributed to the development of microfractures in the spine and subsequent spondylolysis. Without antiseptics, proper facilities, or knowledge of germs, injuries such as cuts, bruises, and broken bones could become very serious if infected, as people did not have sufficient ways to treat infection.[3][unreliable source?] There is also evidence of rickets, bone deformity and bone wastage (osteomalacia),[4] which is caused by a lack of vitamin D.

The life expectancy in prehistoric times was low, 25–40 years,[5] with men living longer than women; archaeological evidence of women and babies found together suggests that many women died in childbirth, perhaps accounting for the lower life expectancy of women compared to men. Another possible explanation for the shorter life spans of prehistoric humans may be malnutrition; also, men, as hunters, may have sometimes received better food than the women, who would consequently have been less resistant to disease.[6]

Treatments for disease

Plant materials

Herbs such as rosemary may have been used for medicinal purposes by prehistoric peoples.[which?][7] 
 
Plant materials (herbs and substances derived from natural sources)[10] were among the treatments for diseases in prehistoric cultures.[which?] Since plant materials quickly rot under most conditions, historians are unlikely to fully understand which species were used in prehistoric medicine. A speculative view can be obtained by researching the climate of the respective society and then checking which species continue to grow in similar conditions today[11] and through anthropological studies of existing indigenous peoples.[12][13] Unlike the ancient civilisations which could source plant materials internationally, prehistoric societies would have been restricted to localised areas, though nomadic tribes may have had a greater variety of plant materials at their disposal than more stationary societies.

The effects of different plant materials could have been found through trial and error. Gathering and dispensing of plant materials was in most cultures handled by women, who cared for the health of their family.[15] Plant materials were an important cure for diseases throughout history.[16] This fund of knowledge would have been passed down orally through the generations.

The birch polypore fungus, commonly found in alpine environments, may have been used as a laxative by prehistoric peoples living in Northern Europe, since it is known to bring on short bouts of diarrhoea when ingested, and was found among the possessions of a mummified man.[17]

The use of earth and clays

Earths and clays may have provided prehistoric peoples with some of their first medicines. This is related to geophagy, which is extremely widespread among animals in the wild as well as among domesticated animals. In particular, geophagy is widespread among contemporary non-human primates.[18] Also, early humans could have learned about the use of various healing clays by observing animal behaviour. Such clay is used both internally and externally, such as for treating wounds, and after surgery (see below).[citation needed] Geophagy, and the external use of clay are both still quite widespread among aboriginal peoples around the world, as well as among pre-industrial populations.

Surgery

Trepanning (sometimes trephining) was a basic surgical operation carried out in prehistoric societies across the world,[19][20] although evidence shows a concentration of the practice in Peru.[16][19][21] There are several theories as to the reasoning behind trepanning; it could have been used to cure certain conditions such as headaches and epilepsy.[22][23] Evidence has been discovered of bone tissue partially growing back around the surgical hole, showing that patients at least sometimes survived the procedure.[16]

Many prehistoric peoples,[which?] where applicable (geographically and technologically), were able to set broken or fractured bones using clay materials. An injured area was covered in clay, which then set hard so that the bone could heal properly without interference.[1] Also, primarily in the Americas, the pincers of certain ant species were used to close up wounds and guard them from infection; the ant was held above the wound until it bit, at which point its head was removed, leaving the pincers in place to hold the wound closed.[24]

Magic and medicine men

Yup'ik shaman exorcising evil spirits from a sick boy.[25]

Medicine men (also witch-doctors, shamans) maintained the health of their tribe by gathering and distributing herbs, performing minor surgical procedures,[26] providing medical advice, and supernatural treatments such as charms, spells, and amulets to ward off evil spirits.[27] In Apache society, as would likely have been the case in many others, the medicine men initiate a ceremony over the patient, which is attended by family and friends. It consists of magic formulas, prayers, and drumming. The medicine man then, from patients' recalling of their past and possible offenses against their religion or tribal rules, reveals the nature of the disease and how to treat it.

They were believed by the tribe to be able to contact spirits or gods and use their supernatural powers to cure the patient, and, in the process, remove evil spirits. If neither this method nor trepanning worked, the spirit was considered too powerful to be driven out of the person.[citation needed] Medicine men would likely have been central figures in the tribal system, because of their medical knowledge and because they could seemingly contact the gods. Their religious and medical training were, necessarily, passed down orally.[28]

Dentistry

Archaeologists in Mehrgarh, in Balochistan province in present-day Pakistan, discovered that the people of the Indus Valley Civilization from the early Harappan periods (c. 3300 BC) had knowledge of medicine and dentistry. The physical anthropologist who carried out the examinations, Professor Andrea Cucina from the University of Missouri, made the discovery when he was cleaning the teeth of one of the men. Later research in the same area found evidence of teeth having been drilled, dating to 7,000 BCE.[29]

The problem of evidence

By definition, there is no written evidence that can be used to investigate the prehistoric period of history. Historians must use other sources, such as human remains and anthropological studies of societies living under similar conditions. A variety of problems arise when these sources are used.

Human remains from this period are rare and many have undoubtedly been destroyed by burial rituals or made useless by damage.[30][31] The most informative archaeological evidence is that of mummies, remains which have been preserved either by freezing or in peat bogs;[32][33] no evidence exists to suggest that prehistoric people mummified the dead for religious reasons, as Ancient Egyptians did. These bodies can provide scientists with information about the subjects at the time of death: their weight, illnesses, height, diet, age, and bone conditions,[34] which grant vital indications of how developed prehistoric medicine was.

Although not technically classed as 'written evidence', prehistoric people left many kinds of paintings on the walls of caves, using paints made of minerals such as lime, clay and charcoal, and brushes made from feathers, animal fur, or twigs. While many of these paintings are thought to have a spiritual or religious purpose,[35] some, such as a man with antlers (thought to be a medicine man), have revealed some part of prehistoric medicine. Many cave paintings of human hands show missing fingers (none have been shown without thumbs), which suggests that these were cut off for sacrificial or practical purposes, as is the case among the Pygmies and Khoikhoi.[36]

The writings of certain cultures (such as the Romans) can be used as evidence in discovering how their contemporary prehistoric cultures practiced medicine. People who live a similar nomadic existence today have been used as a source of evidence too, but obviously there are distinct differences in the environments in which nomadic people lived; prehistoric people who once lived in Britain for example, cannot be effectively compared to aboriginal peoples in Australia, because of the geographical differences.

Why Aren't There More Women Futurists?

Vasily Fedosenko /Reuters
In the future, everyone’s going to have a robot assistant. That’s the story, at least. And as part of that long-running narrative, Facebook just launched its virtual assistant. They’re calling it Moneypenny—the secretary from the James Bond films. Which means the symbol of our march forward, once again, ends up being a nod back. In this case, Moneypenny is a throwback to an age when Bond’s womanizing was a symbol of manliness and many women were, no matter what they wanted to be doing, secretaries.

Why can’t people imagine a future without falling into the sexist past? Why does the road ahead keep leading us back to a place that looks like the Tomorrowland of the 1950s? Well, when it comes to Moneypenny, here’s a relevant datapoint: More than two thirds of Facebook employees are men. That’s a ratio reflected among another key group: futurists.

Both the World Future Society and the Association of Professional Futurists are headed by women right now. And both of those women talked to me about their desire to bring more women to the field. Cindy Frewen, the head of the Association of Professional Futurists, estimates that about a third of their members are women. Amy Zalman, the CEO of the World Future Society, says that 23 percent of her group’s members identify as female. But most lists of “top futurists” perhaps include one female name. Often, that woman is no longer working in the field.

Somehow, I’ve become a person who reports on futurists. I produce and host a podcast about what might happen in the future called Meanwhile in the Future. I write a column about people living cutting-edge lives for BBC Future. And one thing I’ve noticed is how overwhelmingly male and white they are.

It turns out that what makes someone a futurist, and what makes something futurism, isn’t well defined. When you ask those who are part of official futurist societies, like the APF and the WFS, they often struggle to answer. There are some possible credentials—namely: a degree in foresight, an emerging specialty that often intersects with studies of technology and business. But the discipline isn’t well established—there’s no foresight degree at Yale, or Harvard. And there are plenty of people who practice futurology who don’t have one.

Zalman defines a futurist as a person who embraces a certain way of thinking. “Being a futurist these days means that you take seriously a worldview and a set of activities and the recognition that foresight, with a capital F, isn’t just thinking about what are the top 10 things this year, what are the trends unfolding.”

Frewen says that futurism won’t ever be like architecture or medicine, in that “it’s never going to be a licensed field.” But there are still things that many futurists agree people in their field shouldn’t do. “We think of things now as more systems-based and more uncertain, you don’t know what the future is, and that’s a basic concept, so we try to avoid the people who think they can always know this is going to get better.”

Some people think of science fiction authors as futurists, while others don’t. Some members of the APF include singularity researchers in the category; others don’t want to. Some people lump transhumanists into a broader category of futurists. Others don’t. Here are some of the people popularly known as futurists: Aubrey de Grey, the chief researcher at the Strategies for Engineered Negligible Senescence Research Foundation; Elon Musk, the head of SpaceX; Sergey Brin, the co-founder of Google; Ray Kurzweil, the director of engineering at Google. They don’t necessarily belong to a particular society—they might not even self-identify as futurists!—but they are driving the conversation about the future—very often on stages, in public, backed by profitable corporations or well-heeled investors.

Which means the media ends up turning to Brin and Musk and de Grey and Kurzweil to explain what is going to happen, why it matters, and ultimately whether it’s all going to be okay. The thing is: The futures that get imagined depend largely on the person or people doing the imagining.
* * *
Why are there so few women? Much of it comes down to the same reasons there are so few women in science and technology, fields with direct links to futurism (which has a better ring to it than “strategic foresight,” the term some futurists prefer).

Zalman says futurism has actually fought to present itself in a certain way. When the field was founded in the 1960s, it came with a reputation that still lingers a bit today, she says. “Like magicians, crystal ball gazers, sort of flakey, that’s the reputation that followed the WFS for awhile. Because the field itself had to struggle to be taken seriously, that put more pressure on folks to demonstrate that they were scientific. And it was coded masculine.” While futurism covers far more than the future of gadgets, the field found itself pushing away some of the perceived “softer” elements of foresight—social change, family structures, cultural impacts—in favor of mathematical modeling and technology.

Madeline Ashby, a futurist with a degree in strategic foresight who has worked for organizations like Intel Labs, the Institute for the Future, SciFutures, and Nesta, says that another big part of the gender imbalance has to do with optimism. “If you ask me, the one reason why futurism as a discipline is so white and male, is because white males have the ability to offer the most optimistic vision,” she says. They can get up on stage and tell us that the world will be okay, that technology will fix all our problems, that we’ll live forever. Mark Stevenson wrote a book called An Optimist’s Tour of the Future. TED speakers always seem to end their talk, no matter how dire, on an upward-facing note.

Ashby says that any time she speaks in front of a crowd, and offers a grim view of the future, someone (almost always a man) invariably asks why she can’t be more positive. “Why is this so depressing, why is this so dystopian,” they ask. “Because when you talk about the future you don’t get rape threats, that’s why,” she says. “For a long time the future has belonged to people who have not had to struggle, and I think that will still be true. But as more and more systems collapse, currency, energy, the ability to get water, the ability to work, the future will increasingly belong to those who know how to hustle, and those people are not the people who are producing those purely optimistic futures.”

“I don’t know if I kind of pick up on the optimism as I pick up on the utter absurdity,” said Sarah Kember, a professor of technology at the University of London who’s applied feminist theory to futurism for years. “And that’s great for me in some ways, it’s been a traditional feminist strategy to expose absurdity. It’s a key critique.” She points out that as someone whose job it is to take a step back and analyze things like futurism from an outside view, a lot of the mainstream futurism starts to look pretty silly. “You’ve got smart bras and vibrating pants and talking kitchen worktops and augmented-reality bedroom mirrors that read the tags on your clothing and tell you what not to wear, and there’s no reflection on any of this at all,” she says.

Both Frewen of the APF and Zalman of the WFS told me that they were concerned about the gender imbalance in their field, and that they are hoping to help change it. But they also both reminded me that, compared to a lot of fields, futurism is a tiny speciality. And it’s homogeneous in other ways, too. The majority of the WFS members are white, and most of them are 55 to 65 years old. “It is not okay for the WFS, although we care about them, to have only men from North America between the ages of 55 and 65,” Zalman says. “We need all those other voices because they represent an experience.”
* * *
Any time someone points out a gender or racial imbalance in a field (or, most often, the combination of the two) a certain set of people ask: Who cares? The future belongs to all of us—or, ultimately, none of us—so why does it matter if the vast majority of futurists are white men? It matters for the same reasons diversity drives market growth: because when only one type of person is engaged in asking key questions about a specialty—envisioning the future or otherwise—they miss entire frameworks for identifying and solving problems. The relative absence of women at Apple is why Apple’s HealthKit didn’t have period tracking until a few months ago, and why a revolutionary artificial heart can be deemed a success even when it doesn’t fit 80 percent of women.

Which brings us back to Moneypenny, and all the other virtual assistants of the future. There are all sorts of firms and companies working to build robotic servants. Chrome butlers, chefs, and housekeepers. But the fantasy of having an indentured servant is a peculiar one to some. “That whole idea of creating robots that are in service to us has always bothered me,” says Nnedi Okorafor, a science fiction author. “I’ve always sided with the robots. That whole idea of creating these creatures that are human-like and then have them be in servitude to us, that is not my fantasy and I find it highly problematic that it would be anyone’s.”

Or take longevity, for example. The idea that people could, or even should, push to lengthen lifespans as far as possible is popular. The life-extension movement, with Aubrey de Grey as one (very bearded) spokesman, has raised millions of dollars to investigate how to extend the lifespan of humans. But this is arguably only an ideal future if you’re in as comfortable a position as his. “Living forever only works if you’re a rich vampire from an Anne Rice novel, which is to say that you have compound interest,” jokes Ashby. “It really only works if you have significant real-estate investments and fast money and slow money.” (Time travel, as the comedian Louis C.K. has pointed out, is another thing that is a distinctly white male preoccupation—going back in time, for marginalized groups, means giving up more of their rights.)

Beyond the particular futures that get funded and developed, there’s also a broader issue with the ways in which people think about what forces actually shape the future. “We get some really ready-made easy ways of thinking about the future by thinking that the future is shapeable by tech development,” said Kember, the professor of technology at the University of London.

In the 1980s, two futurists (a man and a woman) wrote a book that invited key members of the futurist community to write essays on what they saw coming. The book was called What Futurists Believe, and it included profiles of 17 futurists, including Arthur C. Clarke and Peter Schwartz. All seventeen people profiled were men. And in some ways, they were very close to predicting the future. They seemed to grasp the importance of the cell phone and the trajectory of the personal computer. But they completely missed a huge set of other things. “What they never got right was the social side, they never saw flattened organizations, social media, the uprisings in the Middle East, ISIS using Twitter,” says Frewen.

Terry Grim, a professor in the Studies of the Future program at the University of Houston, recalls a video she saw from the 1960s depicting the office of the future. “It had everything pretty much right, they had envisioned the computer and fax machine and forward-looking technology products.” But there was something missing: “There were no women in the office,” she said.

Okorafor says that she’s gotten so used to not seeing anybody like herself in visions of the future that it’s not really surprising to her when it happens. “I feel like more of a tourist when I experience these imaginings, this isn’t even a place where I would exist in the first place,” she says. “The type of setting, the environment, and the way everything is set up just doesn’t feel like it would be my future at all, and this is something that I experience regularly when I read or watch imagined futures, and this is part of what made me start writing my own.”

This is also perhaps why futurists often don’t talk about some of the issues and problems that many people face every day—harassment, child care, work-life balance, water rights, immigration, police brutality. “When you lose out on women’s voices you lose out on the issues that they have to deal with,” Ashby says. She was recently at a futures event where people presented on a global trends report, and there was nothing in the slides on the future of law enforcement. The questions that many people face about their futures are lost in the futures being imagined.
* * *
In the 1970s, Alvin Toffler’s book Future Shock argued that the world needed three types of futurism: a science of futurism that could talk about the probability of things happening, an art of futurism that could explore what is possible, and a politics of futurism that could investigate what is preferable. Futurism has done well at developing the first, building devices and technologies and frameworks through which to see technical advances. But Zalman says that it has fallen down a bit on the other two. “Arts and humanities are given short shrift.”

In some ways, the art and politics of futurism are the harder pieces of the pie. Technology is often predictable. Humans, less so. “The solution to make things better is a really messy policy solution that has to be negotiated, it’s not pulling the sword from the stone or implanting the alien saucers with your stupid Mac virus or killing the shark, it’s getting people in a room with free coffee and doughnuts and getting them to talk,” said Ashby.

In order to understand what those who have never really felt welcome in the field of futurism think, I called someone who writes and talks about the future, but who doesn’t call themselves a futurist: Monica Byrne. Byrne is a science-fiction author and opinion writer who often tackles questions of how we see the future, and what kinds of futures we deem preferable. But when she thinks about “futurism” as a field, she doesn’t see herself. “I think the term futurist itself is something I see white men claiming for themselves, and isn’t something that would occur to me to call myself even though I functionally am one,” she says.

Okorafor says that she too has never really called herself a futurist, even though much of what she does is use her writing to explore what’s possible. “When you sent me your email and you mentioned futurism I think that’s really the first time I started thinking about that label for myself. And it fits. It feels comfortable.”

When Byrne thinks about the term futurists, she thinks about a power struggle. “What I see is a bid for control over what the future will look like. And it is a future that is, that to me doesn’t look much different from Asimov science fiction covers. Which is not a future I’m interested in.”

The futurism that involves glass houses and 400-year-old men doesn’t interest her. “When I think about the kind of future I want to build, it’s very soft and human, it’s very erotic, and I feel like so much of what I identify as futurism is very glossy, chrome painted science fiction covers, they’re sterile.” She laughs. “Who cares about your jetpack? How does technology enable us to keep loving each other?”


How to form the world’s smallest self-assembling nanowires — just 3 atoms wide

December 30, 2016
Original link:  http://www.kurzweilai.net/how-to-form-the-worlds-smallest-self-assembling-nanowires-just-3-atoms-wide
This animation shows molecular building blocks joining the tip of a growing self-assembling nanowire. Each block consists of a diamondoid — the smallest possible bit of diamond — attached to sulfur and copper atoms (yellow and brown spheres). Like LEGO blocks, they only fit together in certain ways that are determined by their size and shape. The copper and sulfur atoms form a conductive wire in the middle, and the diamondoids form an insulating outer shell. (credit: SLAC National Accelerator Laboratory)

Scientists at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have discovered a way to use diamondoids* — the smallest possible bits of diamond — to self-assemble atoms, LEGO-style, into the thinnest possible electrical wires, just three atoms wide.

The new technique could potentially be used to build tiny wires for a wide range of applications, including fabrics that generate electricity, optoelectronic devices that employ both electricity and light, and superconducting materials that conduct electricity without any loss. The scientists reported their results last week in Nature Materials.

The researchers started with the smallest possible diamondoids — interlocking cages of carbon and hydrogen — and attached a sulfur atom to each. Floating in a solution, each sulfur atom bonded with a single copper ion — creating a semiconducting combination of copper and sulfur known as a chalcogenide.

A conventional insulated electrical copper wire (credit: Alibaba)

That created the basic nanowire building blocks, which then drifted toward each other, drawn by “unusually strong” van der Waals attraction between the diamondoids, and attached themselves to the growing tip of the nanowire. The attached diamondoids formed an insulating shell — creating the nanoscale equivalent of a conventional insulated electrical wire.

Although there are other ways to get materials to self-assemble, this is the first one shown to make a nanowire with a solid, crystalline core that has good electronic properties, said study co-author Nicholas Melosh, an associate professor at SLAC and Stanford and investigator with SIMES, the Stanford Institute for Materials and Energy Sciences at SLAC.

The team also included researchers from Lawrence Berkeley National Laboratory, the National Autonomous University of Mexico (UNAM) and Justus-Liebig University in Germany. The work was funded by the DOE Office of Science and the German Research Foundation.

* Found naturally in petroleum fluids, they are extracted and separated by size and geometry in a SLAC laboratory.



Citation: Yan et al., Nature Materials, 26 December 2016 (10.1038/nmat4823)


Abstract of Hybrid metal–organic chalcogenide nanowires with electrically conductive inorganic core through diamondoid-directed assembly

Controlling inorganic structure and dimensionality through structure-directing agents is a versatile approach for new materials synthesis that has been used extensively for metal–organic frameworks and coordination polymers. However, the lack of ‘solid’ inorganic cores requires charge transport through single-atom chains and/or organic groups, limiting their electronic properties. Here, we report that strongly interacting diamondoid structure-directing agents guide the growth of hybrid metal–organic chalcogenide nanowires with solid inorganic cores having three-atom cross-sections, representing the smallest possible nanowires. The strong van der Waals attraction between diamondoids overcomes steric repulsion leading to a cis configuration at the active growth front, enabling face-on addition of precursors for nanowire elongation. These nanowires have band-like electronic properties, low effective carrier masses and three orders-of-magnitude conductivity modulation by hole doping. This discovery highlights a previously unexplored regime of structure-directing agents compared with traditional surfactant, block copolymer or metal–organic framework linkers.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...