
Sunday, February 11, 2024

Race and genetics

From Wikipedia, the free encyclopedia
 
Researchers have investigated the relationship between race and genetics as part of efforts to understand how biology may or may not contribute to human racial categorization. Today, the consensus among scientists is that race is a social construct, and that using it as a proxy for genetic differences among populations is misleading.

Many constructions of race are associated with phenotypical traits and geographic ancestry, and scholars like Carl Linnaeus have proposed scientific models for the organization of race since at least the 18th century. Following the discovery of Mendelian genetics and the mapping of the human genome, questions about the biology of race have often been framed in terms of genetics. A wide range of research methods have been employed to examine patterns of human variation and their relations to ancestry and racial groups, including studies of individual traits, studies of large populations and genetic clusters, and studies of genetic risk factors for disease.

Research into race and genetics has also been criticized as emerging from, or contributing to, scientific racism. Genetic studies of traits and populations have been used to justify social inequalities associated with race, despite the fact that patterns of human variation have been shown to be mostly clinal, with human genetic code being approximately 99.6%-99.9% identical between individuals, and with no clear boundaries between groups.

Some researchers have argued that race can act as a proxy for genetic ancestry because individuals of the same racial category may share a common ancestry, but this view has fallen increasingly out of favor among experts. The mainstream view is that it is necessary to distinguish between biology and the social, political, cultural, and economic factors that contribute to conceptions of race.

Overview

The concept of race

The concept of "race" as a classification system of humans based on visible physical characteristics emerged over the last five centuries, influenced by European colonialism. However, there is widespread evidence of what would be described in modern terms as racial consciousness throughout the entirety of recorded history. For example, in Ancient Egypt there were four broad racial divisions of human beings: Egyptians, Asiatics, Libyans, and Nubians. There was also Aristotle of Ancient Greece, who once wrote: "The peoples of Asia... lack spirit, so that they are in continuous subjection and slavery." The concept has manifested in different forms based on social conditions of a particular group, often used to justify unequal treatment. Early influential attempts to classify humans into discrete races include 4 races in Carl Linnaeus's Systema Naturae (Homo europaeus, asiaticus, americanus, and afer) and 5 races in Johann Friedrich Blumenbach's On the Natural Variety of Mankind. Notably, over the next centuries, scholars argued for anywhere from 3 to more than 60 race categories. Race concepts have changed within a society over time; for example, in the United States social and legal designations of "White" have been inconsistently applied to Native Americans, Arab Americans, and Asian Americans, among other groups (See main article: Definitions of whiteness in the United States). Race categories also vary worldwide; for example, the same person might be perceived as belonging to a different category in the United States versus Brazil. Because of the arbitrariness inherent in the concept of race, it is difficult to relate it to biology in a straightforward way.

Race and human genetic variation

There is broad consensus across the biological and social sciences that race is a social construct, not an accurate representation of human genetic variation. Humans are remarkably genetically similar, sharing approximately 99.6%-99.9% of their genetic code with one another. We nonetheless see wide individual variation in phenotype, which arises from both genetic differences and complex gene-environment interactions. The vast majority of this genetic variation occurs within groups; very little genetic variation differentiates between groups. Crucially, the between-group genetic differences that do exist do not map onto socially recognized categories of race. Furthermore, although human populations show some genetic clustering across geographic space, human genetic variation is "clinal", or continuous. This, in addition to the fact that different traits vary on different clines, makes it impossible to draw discrete genetic boundaries around human groups. Finally, insights from ancient DNA are revealing that no human population is "pure" – all populations represent a long history of migration and mixing.

Sources of human genetic variation

Genetic variation arises from mutations, from natural selection, from migration between populations (gene flow), and from the reshuffling of genes through sexual reproduction. Mutations change the DNA sequence by altering the order of bases, which can result in different polypeptides being coded. Some mutations may be beneficial and help the individual survive more effectively in its environment. Mutation is counteracted by natural selection and by genetic drift; note too the founder effect, in which a small number of founders establish a population that therefore starts with a correspondingly small degree of genetic variation. Epigenetic inheritance involves heritable changes in phenotype (appearance) or gene expression caused by mechanisms other than changes in the DNA sequence.

Human phenotypes are highly polygenic (dependent on the interaction of many genes) and are influenced by environment as well as by genetics.

Nucleotide diversity is based on single mutations, single nucleotide polymorphisms (SNPs). The nucleotide diversity between humans is about 0.1 percent (one difference per one thousand nucleotides between two humans chosen at random). This amounts to approximately three million SNPs (since the human genome has about three billion nucleotides). There are an estimated ten million SNPs in the human population.
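As a quick consistency check on these figures (simple arithmetic, not taken from the cited sources):

$$0.001 \times 3\times10^{9}\ \text{nucleotides} \approx 3\times10^{6}\ \text{SNP differences between two randomly chosen individuals.}$$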

Research has shown that non-SNP (structural) variation accounts for more human genetic variation than single nucleotide diversity. Structural variation includes copy-number variation and results from deletions, inversions, insertions and duplications. It is estimated that approximately 0.4 to 0.6 percent of the genomes of unrelated people differ.

Genetic basis for race

Much scientific research has been organized around the question of whether or not there is a genetic basis for race. In his 1994 book "The History and Geography of Human Genes", Luigi Luca Cavalli-Sforza writes, "From a scientific point of view, the concept of race has failed to obtain any consensus; none is likely, given the gradual variation in existence. It may be objected that the racial stereotypes have a consistency that allows even the layman to classify individuals. However, the major stereotypes, all based on skin color, hair color and form, and facial traits, reflect superficial differences that are not confirmed by deeper analysis with more reliable genetic traits and whose origin dates from recent evolution mostly under the effect of climate and perhaps sexual selection".

A more up-to-date and comprehensive book authored by geneticist David Reich (2018) reaffirms the conclusion that the traditional views which assert a biological basis for race are wrong:

Today, many people assume that humans can be grouped biologically into "primeval" groups, corresponding to our notion of "races"... But this long-held view about "race" has just in the last years been proven wrong.

— David Reich, Who We Are and How We Got Here (Introduction, p. xxiv).

Research methods

Scientists investigating human variation have used a series of methods to characterize how different populations vary.

Early studies of traits, proteins, and genes

Early racial classification attempts measured surface traits, particularly skin color, hair color and texture, eye color, and head size and shape. (Measurements of the latter through craniometry were repeatedly discredited in the late 19th and mid-20th centuries because phenotypic traits did not correlate with racial categorization.) Biological adaptation, in fact, plays the biggest role in these bodily features and skin type. A relative handful of genes accounts for the inherited factors shaping a person's appearance. Humans have an estimated 19,000–20,000 protein-coding genes. Richard Sturm and David Duffy describe 11 genes that affect skin pigmentation and explain most variation in human skin color, the most significant of which are MC1R, ASIP, OCA2, and TYR. There is evidence that as many as 16 different genes could be responsible for eye color in humans; however, the two main genes associated with eye color variation are OCA2 and HERC2, both located on chromosome 15.

Analysis of blood proteins and between-group genetics

Geographic distribution of blood group A
Geographic distribution of blood group B

Before the discovery of DNA, scientists used blood proteins (the human blood group systems) to study human genetic variation. Research by Ludwik and Hanka Herschfeld during World War I found that the incidence of blood groups A and B differed by region; for example, among Europeans 15 percent were group B and 40 percent group A. Eastern Europeans and Russians had a higher incidence of group B, and people from India had the greatest incidence. The Herschfelds concluded that humans comprised two "biochemical races" with separate origins, which later mixed to produce the observed patterns of groups A and B. This was one of the first theories of racial differences to include the idea that human variation did not correlate with genetic variation. It was expected that groups with similar proportions of the blood groups would be more closely related, but instead it was often found that groups separated by great distances (such as those from Madagascar and Russia) had similar incidences. It was later discovered that the ABO blood group system is not unique to humans but is shared with other primates, and likely predates all human groups.

In 1972, Richard Lewontin performed an FST statistical analysis using 17 markers (including blood-group proteins). He found that the majority of genetic differences between humans (85.4 percent) were found within a population, 8.3 percent were found between populations within a race, and 6.3 percent were found to differentiate races (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines in his study). Since then, other analyses have found FST values of 6–10 percent between continental human groups, 5–15 percent between different populations on the same continent and 75–85 percent within populations. This view has since been affirmed by the American Anthropological Association and the American Association of Physical Anthropologists.

Critiques of blood protein analysis

While acknowledging Lewontin's observation that humans are genetically homogeneous, A. W. F. Edwards in his 2003 paper "Human Genetic Diversity: Lewontin's Fallacy" argued that information distinguishing populations from each other is hidden in the correlation structure of allele frequencies, making it possible to classify individuals using mathematical techniques. Edwards argued that even if the probability of misclassifying an individual based on a single genetic marker is as high as 30 percent (as Lewontin reported in 1972), the misclassification probability nears zero if enough genetic markers are studied simultaneously. Edwards saw Lewontin's argument as based on a political stance, denying biological differences to argue for social equality. Edwards' paper is reprinted, commented upon by experts such as Noah Rosenberg, and given further context in an interview with philosopher of science Rasmus Grønfeldt Winther in a recent anthology.
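To make Edwards' point concrete, here is a minimal simulation sketch (illustrative only: the allele frequencies, frequency gap, and likelihood-based classifier are invented for this example, not taken from Lewontin's or Edwards' data):

```python
import numpy as np

rng = np.random.default_rng(0)

def misclassification_rate(n_markers, n_individuals=2000, freq_gap=0.1):
    """Simulate two populations whose allele frequencies differ slightly at each
    marker, then classify individuals by which population's frequencies make
    their genotype more likely."""
    # Hypothetical allele frequencies in population A and population B
    p_a = rng.uniform(0.3, 0.7, n_markers)
    p_b = np.clip(p_a + freq_gap, 0.01, 0.99)

    # Diploid genotypes (0, 1, or 2 copies of the allele) for each population
    geno_a = rng.binomial(2, p_a, size=(n_individuals, n_markers))
    geno_b = rng.binomial(2, p_b, size=(n_individuals, n_markers))

    def loglik(geno, p):
        # Binomial log-likelihood of the genotypes given allele frequencies
        return (geno * np.log(p) + (2 - geno) * np.log(1 - p)).sum(axis=1)

    # Misclassify when the "wrong" population's frequencies fit better
    errors_a = (loglik(geno_a, p_b) > loglik(geno_a, p_a)).mean()
    errors_b = (loglik(geno_b, p_a) > loglik(geno_b, p_b)).mean()
    return (errors_a + errors_b) / 2

for n in (1, 10, 100, 1000):
    print(n, round(misclassification_rate(n), 3))
# With one marker the misclassification rate is high; with hundreds of markers
# combined, it approaches zero.
```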

As noted above, Edwards criticised Lewontin's paper because it took 17 different traits and analysed them independently, without looking at them in conjunction with any other protein; on Edwards' argument, this made it convenient for Lewontin to conclude that racial naturalism is not tenable. Sesardic reinforced Edwards' view, using an illustration involving squares and triangles to show that a single trait examined in isolation will most likely be a bad predictor of which group an individual belongs to. In contrast, in a 2014 paper, reprinted in the 2018 Edwards Cambridge University Press volume, Rasmus Grønfeldt Winther argues that "Lewontin's Fallacy" is effectively a misnomer, as there really are two different sets of methods and questions at play in studying the genomic population structure of our species: "variance partitioning" and "clustering analysis." According to Winther, they are "two sides of the same mathematics coin" and neither "necessarily implies anything about the reality of human groups."

Current studies of population genetics

Researchers currently use genetic testing, which may involve hundreds (or thousands) of genetic markers or the entire genome.

Structure

Principal component analysis of fifty populations, color-coded by region, illustrates the differentiation and overlap of populations found using this method of analysis.
Individuals mostly have genetic variants which are found in multiple regions of the world. Based on data from "A unified genealogy of modern and ancient genomes".

Several methods to examine and quantify genetic subgroups exist, including cluster and principal components analysis. Genetic markers from individuals are examined to find a population's genetic structure. While subgroups overlap when the variants of only one marker are examined, different subgroups show different average genetic structure when a number of markers are examined together. An individual may be described as belonging to several subgroups. These subgroups may be more or less distinct, depending on how much overlap there is with other subgroups.

In cluster analysis, the number of clusters to search for, K, is determined in advance; how distinct the resulting clusters are varies.

The results obtained from cluster analyses depend on several factors:

  • A large number of genetic markers studied facilitates finding distinct clusters.
  • Some genetic markers vary more than others, so fewer are required to find distinct clusters. Ancestry-informative markers exhibit substantially different frequencies between populations from different geographical regions. Using AIMs, scientists can determine a person's ancestral continent of origin based solely on their DNA. AIMs can also be used to determine someone's admixture proportions.
  • The more individuals studied, the easier it becomes to detect distinct clusters (statistical noise is reduced).
  • Low genetic variation makes it more difficult to find distinct clusters. Greater geographic distance generally increases genetic variation, making identifying clusters easier.
  • A similar cluster structure is seen with different genetic markers when the number of markers included is sufficiently large, and the clustering structure obtained with different statistical techniques is similar. A subsample of the original sample also yields a similar cluster structure to the full sample.
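As a rough illustration of these points, here is a minimal sketch on synthetic genotypes (the three populations, allele frequencies, and marker counts are invented; real studies use far larger panels such as those discussed below):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_markers, n_per_group = 500, 100

# Three hypothetical populations with slightly different allele frequencies
base = rng.uniform(0.2, 0.8, n_markers)
freqs = [np.clip(base + rng.normal(0, 0.05, n_markers), 0.01, 0.99) for _ in range(3)]

# Diploid genotype matrix: rows are individuals, columns are markers (0/1/2)
genotypes = np.vstack([rng.binomial(2, f, size=(n_per_group, n_markers)) for f in freqs])

# Principal component analysis: the number of groups need not be fixed in advance
pcs = PCA(n_components=2).fit_transform(genotypes)

# Cluster analysis: the number of clusters K must be chosen beforehand
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(genotypes)

print(pcs[:3])              # first individuals' coordinates on the first two PCs
print(np.bincount(labels))  # cluster sizes found by K-means
```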

Recent studies have been published using an increasing number of genetic markers.

Focus on the study of structure has been criticized for giving the general public a misleading impression of human genetic variation, obscuring the general findings that genetic variants limited to one region tend to be rare within that region, that variants common within a region tend to be shared across the globe, and that most differences between individuals, whether they come from the same region or different regions, are due to global variants.

Distance

Genetic distance is the genetic divergence between species or between populations within a species. It may be used to compare the genetic similarity of related species, such as humans and chimpanzees. Within a species, genetic distance measures divergence between subgroups. Genetic distance correlates significantly with geographic distance between populations, a phenomenon sometimes known as "isolation by distance". Genetic distance may result from physical boundaries restricting gene flow, such as islands, deserts, mountains or forests. Genetic distance is measured by the fixation index (FST). FST is the correlation of randomly chosen alleles in a subgroup to a larger population. It is often expressed as a proportion of genetic diversity. This comparison of genetic variability within (and between) populations is used in population genetics. The values range from 0 to 1; zero indicates the two populations are interbreeding freely, and one would indicate that the two populations are completely separate.
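As a worked illustration of how a fixation index can be computed from allele frequencies, here is a textbook-style sketch using Wright's definition FST = (HT − HS)/HT with invented frequencies (not values from any study cited here):

```python
# Wright's FST at one locus for two equally sized subpopulations
# (hypothetical allele frequencies, for illustration only).
p1, p2 = 0.2, 0.6        # allele frequency in each subpopulation
p_bar = (p1 + p2) / 2    # frequency in the pooled population

h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-subpopulation heterozygosity
h_t = 2 * p_bar * (1 - p_bar)                      # total expected heterozygosity

fst = (h_t - h_s) / h_t
print(round(fst, 3))  # 0 when the subpopulations share frequencies; approaches 1
                      # when they are fixed for different alleles
```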

Many studies place the average FST distance between human races at about 0.125. Henry Harpending argued that this value implies, on a world scale, that "kinship between two individuals of the same human population is equivalent to kinship between grandparent and grandchild or between half siblings". In fact, the formulas derived in the "Kinship in a subdivided population" section of Harpending's paper imply that two unrelated individuals of the same race have a higher coefficient of kinship (0.125) than an individual and their mixed-race half-sibling (0.109).
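For context, the standard coefficients of kinship for the relationships Harpending mentions are (textbook values, not his derivation):

$$f_{\text{grandparent--grandchild}} = f_{\text{half-siblings}} = \tfrac{1}{8} = 0.125,$$

which is why an FST of about 0.125 is read, on his interpretation, as kinship comparable to these relationships.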

Critiques of FST

While acknowledging that FST remains useful, a number of scientists have written about other approaches to characterizing human genetic variation. Long & Kittles (2009) stated that FST failed to identify important variation and that when the analysis includes only humans, FST = 0.119, but adding chimpanzees increases it only to FST = 0.183. Mountain & Risch (2004) argued that an FST estimate of 0.10–0.15 does not rule out a genetic basis for phenotypic differences between groups and that a low FST estimate implies little about the degree to which genes contribute to between-group differences. Pearse & Crandall (2004) wrote that FST figures cannot distinguish between a situation of high migration between populations with a long divergence time, and one of a relatively recent shared history but no ongoing gene flow. In their 2015 article, Keith Hunley, Graciela Cabana, and Jeffrey Long (who had previously criticized Lewontin's statistical methodology with Rick Kittles) recalculate the apportionment of human diversity using a more complex model than Lewontin and his successors. They conclude: "In sum, we concur with Lewontin's conclusion that Western-based racial classifications have no taxonomic significance, and we hope that this research, which takes into account our current understanding of the structure of human diversity, places his seminal finding on firmer evolutionary footing."

Anthropologists (such as C. Loring Brace), philosopher Jonathan Kaplan and geneticist Joseph Graves have argued that while it is possible to find biological and genetic variation roughly corresponding to race, this is true for almost all geographically distinct populations: the cluster structure of genetic data is dependent on the initial hypotheses of the researcher and the populations sampled. When one samples continental groups, the clusters become continental; with other sampling patterns, the clusters would be different. Weiss and Fullerton note that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form; all other populations would be composed of genetic admixtures of Maori, Icelandic and Mayan material. Kaplan therefore concludes that, while differences in particular allele frequencies can be used to identify populations that loosely correspond to the racial categories common in Western social discourse, the differences are of no more biological significance than the differences found between any human populations (e.g., the Spanish and Portuguese).

Historical and geographical analyses

Current-population genetic structure does not imply that differing clusters or components indicate only one ancestral home per group; for example, a genetic cluster in the US comprises Hispanics with European, Native American and African ancestry.

Geographic analyses attempt to identify places of origin, their relative importance and possible causes of genetic variation in an area. The results can be presented as maps showing genetic variation. Cavalli-Sforza and colleagues argue that if genetic variations are investigated, they often correspond to population migrations due to new sources of food, improved transportation or shifts in political power. For example, in Europe the most significant direction of genetic variation corresponds to the spread of agriculture from the Middle East to Europe between 10,000 and 6,000 years ago. Such geographic analysis works best in the absence of recent large-scale, rapid migrations.

Historic analyses use differences in genetic variation (measured by genetic distance) as a molecular clock indicating the evolutionary relation of species or groups, and can be used to create evolutionary trees reconstructing population separations.

Results of genetic-ancestry research are supported if they agree with research results from other fields, such as linguistics or archeology. Cavalli-Sforza and colleagues have argued that there is a correspondence between the language families found in linguistic research and the population tree found in their 1994 study. There are generally shorter genetic distances between populations using languages from the same language family. Exceptions to this rule are also found, for example the Sami, who are genetically associated with populations speaking languages from other language families. The Sami speak a Uralic language but are genetically primarily European, which is argued to have resulted from migration (and interbreeding) with Europeans while they retained their original language. Agreement also exists between dates established in archeology and those calculated using genetic distance.

Self-identification studies

Jorde and Wooding found that while clusters from genetic markers were correlated with some traditional concepts of race, the correlations were imperfect and imprecise due to the continuous and overlapping nature of genetic variation, noting that ancestry, which can be accurately determined, is not equivalent to the concept of race.

A 2005 study by Tang and colleagues used 326 genetic markers to determine genetic clusters. The 3,636 subjects, from the United States and Taiwan, self-identified as belonging to white, African American, East Asian or Hispanic ethnic groups. The study found "nearly perfect correspondence between genetic cluster and SIRE for major ethnic groups living in the United States, with a discrepancy rate of only 0.14 percent". Paschou et al. found "essentially perfect" agreement between 51 self-identified populations of origin and the population's genetic structure, using 650,000 genetic markers. Selecting for informative genetic markers allowed a reduction to less than 650, while retaining near-total accuracy.

Correspondence between genetic clusters in a population (such as the current US population) and self-identified race or ethnic groups does not mean that such a cluster (or group) corresponds to only one ethnic group. African Americans have an estimated 20–25-percent European genetic admixture; Hispanics have European, Native American and African ancestry. In Brazil there has been extensive admixture between Europeans, Amerindians and Africans. As a result, skin color differences within the population are not gradual, and there are relatively weak associations between self-reported race and African ancestry. Ethnoracial self-classification in Brazilians is certainly not random with respect to genomic individual ancestry, but the strength of the association between the phenotype and the median proportion of African ancestry varies largely across populations.

Critique of genetic-distance studies and clusters

A change in a gene pool may be abrupt or clinal.

Genetic distances generally increase continually with geographic distance, which makes a dividing line arbitrary. Any two neighboring settlements will exhibit some genetic difference from each other, which could be defined as a race. Therefore, attempts to classify races impose an artificial discontinuity on a naturally occurring phenomenon. This explains why studies on population genetic structure yield varying results, depending on methodology.

Rosenberg and colleagues (2005) have argued, based on cluster analysis of the 52 populations in the Human Genetic Diversity Panel, that populations do not always vary continuously and a population's genetic structure is consistent if enough genetic markers (and subjects) are included.

Examination of the relationship between genetic and geographic distance supports a view in which the clusters arise not as an artifact of the sampling scheme, but from small discontinuous jumps in genetic distance for most population pairs on opposite sides of geographic barriers, in comparison with genetic distance for pairs on the same side. Thus, analysis of the 993-locus dataset corroborates our earlier results: if enough markers are used with a sufficiently large worldwide sample, individuals can be partitioned into genetic clusters that match major geographic subdivisions of the globe, with some individuals from intermediate geographic locations having mixed membership in the clusters that correspond to neighboring regions.

They also wrote, regarding a model with five clusters corresponding to Africa, Eurasia (Europe, Middle East, and Central/South Asia), East Asia, Oceania, and the Americas:

For population pairs from the same cluster, as geographic distance increases, genetic distance increases in a linear manner, consistent with a clinal population structure. However, for pairs from different clusters, genetic distance is generally larger than that between intracluster pairs that have the same geographic distance. For example, genetic distances for population pairs with one population in Eurasia and the other in East Asia are greater than those for pairs at equivalent geographic distance within Eurasia or within East Asia. Loosely speaking, it is these small discontinuous jumps in genetic distance—across oceans, the Himalayas, and the Sahara—that provide the basis for the ability of STRUCTURE to identify clusters that correspond to geographic regions.

This applies to populations in their ancestral homes when migrations and gene flow were slow; large, rapid migrations exhibit different characteristics. Tang and colleagues (2004) wrote, "we detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant of genetic structure in the U.S. population".

Gene clusters from Rosenberg (2006) for K=7 clusters. (Cluster analysis divides a dataset into any prespecified number of clusters.) Individuals have genes from multiple clusters. The cluster prevalent only among the Kalash people (yellow) only splits off at K=7 and greater.

Cluster analysis has been criticized because the number of clusters to search for is decided in advance, with different values possible (although with varying degrees of probability). Principal component analysis does not require deciding in advance how many components to search for.

The 2002 study by Rosenberg et al. exemplifies why the meaning of these clusterings is disputed. The study shows that in the K=5 cluster analysis, the genetic clusterings roughly map onto each of the five major geographical regions. Similar results were obtained in further studies in 2005.

Critique of ancestry-informative markers

Ancestry-informative markers (AIMs) are a genealogy-tracing technology that has come under much criticism due to its reliance on reference populations. In a 2015 article, Troy Duster outlines how contemporary technology allows the tracing of ancestral lineage, but only along one maternal and one paternal line. That is, of 64 total great-great-great-great-grandparents, only one from each parent is identified, meaning the other 62 ancestors are ignored in tracing efforts. Furthermore, the 'reference populations' used as markers for membership of a particular group are designated arbitrarily and contemporarily: using populations who currently reside in given places as references for certain races and ethnic groups is unreliable because of the demographic changes those places have undergone over many centuries. Moreover, because ancestry-informative markers are widely shared among the whole human population, it is their frequency that is tested, not their mere absence or presence. A threshold of relative frequency therefore has to be set. According to Duster, the criteria for setting such thresholds are a trade secret of the companies marketing the tests, so nothing conclusive can be said about whether they are appropriate. Results of AIMs are extremely sensitive to where this bar is set. Given that many genetic traits are found to be very similar across many different populations, the designated threshold frequencies are very important. This can also lead to mistakes, given that many populations may share the same patterns, if not exactly the same genes. "This means that someone from Bulgaria whose ancestors go back to the fifteenth century could (and sometime does) map as partly 'Native American'". This happens because AIMs rely on a '100% purity' assumption of reference populations: they assume that a pattern of traits would ideally be a necessary and sufficient condition for assigning an individual to an ancestral reference population.
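A toy sketch of the thresholding issue Duster describes (the reference panels, allele frequencies, scoring rule, and cut-offs below are entirely hypothetical; commercial tests use proprietary panels and criteria):

```python
# Toy ancestry assignment from a handful of ancestry-informative markers (AIMs).
# All numbers are invented; the point is only that the reported ancestry changes
# when the reporting threshold is moved.
reference_freqs = {                 # allele frequency of each AIM in two reference panels
    "panel_X": [0.8, 0.7, 0.9, 0.6],
    "panel_Y": [0.3, 0.4, 0.2, 0.5],
}
observed = [1, 1, 1, 1]             # whether the tested person carries each allele

def support(panel):
    """Crude likelihood-style score: product of frequencies for carried alleles
    and of their complements for non-carried ones."""
    score = 1.0
    for carries, freq in zip(observed, reference_freqs[panel]):
        score *= freq if carries else (1 - freq)
    return score

scores = {panel: support(panel) for panel in reference_freqs}
total = sum(scores.values())
proportions = {panel: s / total for panel, s in scores.items()}

for threshold in (0.02, 0.05, 0.10):    # different reporting cut-offs
    reported = [p for p, share in proportions.items() if share >= threshold]
    print(threshold, {p: round(s, 2) for p, s in proportions.items()}, "reported:", reported)
# At the lowest cut-off both panels are reported as part of the ancestry estimate;
# raising the cut-off silently drops the minor component.
```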

Race, genetics, and medicine

There are certain statistical differences between racial groups in susceptibility to certain diseases. Gene frequencies change in response to local disease pressures; for example, people who are Duffy-negative tend to have a higher resistance to malaria. The Duffy-negative phenotype is highly frequent in central Africa, and its frequency decreases with distance from Central Africa, with higher frequencies in global populations with high degrees of recent African immigration. This suggests that the Duffy-negative genotype evolved in Sub-Saharan Africa and was subsequently positively selected for in the malaria-endemic zone. A number of genetic conditions prevalent in malaria-endemic areas may provide genetic resistance to malaria, including sickle cell disease, thalassaemias and glucose-6-phosphate dehydrogenase deficiency. Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European ancestry; a hypothesized heterozygote advantage, providing resistance to diseases earlier common in Europe, has been challenged. Scientists Michael Yudell, Dorothy Roberts, Rob DeSalle, and Sarah Tishkoff argue that using these associations in the practice of medicine has led doctors to overlook or misidentify disease: "For example, hemoglobinopathies can be misdiagnosed because of the identification of sickle-cell as a 'Black' disease and thalassemia as a 'Mediterranean' disease. Cystic fibrosis is underdiagnosed in populations of African ancestry, because it is thought of as a 'White' disease."

Information about a person's population of origin may aid in diagnosis, and adverse drug responses may vary by group. Because of the correlation between self-identified race and genetic clusters, medical treatments influenced by genetics have varying rates of success between self-defined racial groups. For this reason, some physicians consider a patient's race in choosing the most effective treatment, and some drugs are marketed with race-specific instructions. Jorde and Wooding (2004) have argued that because of genetic variation within racial groups, when "it finally becomes feasible and available, individual genetic assessment of relevant genes will probably prove more useful than race in medical decision making". However, race continues to be a factor when examining groups (such as epidemiologic research). Some doctors and scientists such as geneticist Neil Risch argue that using self-identified race as a proxy for ancestry is necessary to be able to get a sufficiently broad sample of different ancestral populations, and in turn to be able to provide health care that is tailored to the needs of minority groups.

Usage in scientific journals

Some scientific journals have addressed previous methodological errors by requiring more rigorous scrutiny of population variables. Since 2000, Nature Genetics has required its authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved". Its editors say that "[they] hope that this will raise awareness and inspire more rigorous designs of genetic and epidemiological studies".

A 2021 study that examined over 11,000 papers published from 1949 to 2018 in The American Journal of Human Genetics found that "race" was used in only 5% of papers published in the last decade, down from 22% in the first. Together with increased use of the terms "ethnicity," "ancestry," and location-based terms, this suggests that human geneticists have mostly abandoned the term "race."

Gene-environment interactions

Lorusso and Bacchini argue that self-identified race is of greater use in medicine as it correlates strongly with risk-related exposomes that are potentially heritable when they become embodied in the epigenome. They summarise evidence of the link between racial discrimination and health outcomes due to poorer food quality, access to healthcare, housing conditions, education, access to information, exposure to infectious agents and toxic substances, and material scarcity. They also cite evidence that this process can work positively – for example, the psychological advantage of perceiving oneself at the top of a social hierarchy is linked to improved health. However they caution that the effects of discrimination do not offer a complete explanation for differential rates of disease and risk factors between racial groups, and the employment of self-identified race has the potential to reinforce racial inequalities.

Objections to racial naturalism

Racial naturalism is the view that racial classifications are grounded in objective patterns of genetic similarities and differences. Proponents of this view have justified it using the scientific evidence described above. However, this view is controversial and philosophers of race have put forward four main objections to it.

Semantic objections, such as the discreteness objection, argue that the human populations picked out in population-genetic research are not races and do not correspond to what "race" means in the United States. "The discreteness objection does not require there to be no genetic admixture in the human species in order for there to be US 'racial groups' ... rather ... what the objection claims is that membership in US racial groups is different from membership in continental populations. ... Thus, strictly speaking, Blacks are not identical to Africans, Whites are not identical to Eurasians, Asians are not identical to East Asians and so forth." Therefore, it could be argued that scientific research is not really about race.

The next two objections are metaphysical objections, which argue that even if the semantic objections fail, human genetic clustering results do not support the biological reality of race. The 'very important objection' stipulates that races in the US definition fail to be important to biology, in the sense that continental populations do not form biological subspecies. The 'objectively real objection' states that "US racial groups are not biologically real because they are not objectively real in the sense of existing independently of human interest, belief, or some other mental state of humans." Racial naturalists, such as Quayshawn Spencer, have responded to each of these objections with counter-arguments. There are also methodological critics who reject racial naturalism because of concerns relating to the experimental design, execution, or interpretation of the relevant population-genetic research.

Another semantic objection is the visibility objection, which disputes the claim that there are US racial groups in human population structures. Philosophers such as Joshua Glasgow and Naomi Zack believe that US racial groups cannot be defined by visible traits, such as skin colour and physical attributes: "The ancestral genetic tracking material has no effect on phenotypes, or biological traits of organisms, which would include the traits deemed racial, because the ancestral tracking genetic material plays no role in the production of proteins it is not the kind of material that 'codes' for protein production." Spencer contends that certain racial discourses require visible groups, but disagrees that this is a requirement in all US racial discourse.

A different objection states that US racial groups are not biologically real because they are not objectively real in the sense of existing independently of some mental state of humans. Proponents of this second metaphysical objection include Naomi Zack and Ron Sundstrom. Spencer argues that an entity can be both biologically real and socially constructed. Spencer states that in order to accurately capture real biological entities, social factors must also be considered.

It has been argued that knowledge of a person's race is limited in value, since people of the same race vary from one another. David J. Witherspoon and colleagues have argued that when individuals are assigned to population groups, two randomly chosen individuals from different populations can resemble each other more than a randomly chosen member of their own group. They found that many thousands of genetic markers had to be used for the answer to "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "Never". This assumed three population groups, separated by large geographic distances (European, African and East Asian). The global human population is more complex, and studying a large number of groups would require an increased number of markers for the same answer. They conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes", and "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population".
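A rough simulation sketch of the quantity Witherspoon and colleagues examine, i.e. how often a cross-population pair is more similar than a within-population pair (synthetic allele frequencies and a simple allele-sharing distance; not their data or exact method):

```python
import numpy as np

rng = np.random.default_rng(2)

def dissimilarity_fraction(n_loci, n=200, gap=0.1):
    """Fraction of cross-population pairs that are more similar than a
    within-population pair, under a simple allele-count distance."""
    p1 = rng.uniform(0.3, 0.6, n_loci)
    p2 = np.clip(p1 + gap, 0.01, 0.99)
    a = rng.binomial(2, p1, size=(n, n_loci))   # population 1 genotypes
    b = rng.binomial(2, p2, size=(n, n_loci))   # population 2 genotypes

    def dist(x, y):
        return np.abs(x - y).sum()

    count = hits = 0
    for i in range(0, n, 10):                   # subsample pairs to keep the loop short
        for j in range(0, n, 10):
            within = dist(a[i], a[(i + 1) % n])
            between = dist(a[i], b[j])
            count += 1
            hits += between < within
    return hits / count

for loci in (10, 100, 1000, 10000):
    print(loci, round(dissimilarity_fraction(loci), 3))
# With few loci, cross-population pairs are often the more similar ones;
# with many thousands of loci the fraction approaches zero.
```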

This is similar to the conclusion reached by anthropologist Norman Sauer in a 1992 article on the ability of forensic anthropologists to assign "race" to a skeleton, based on craniofacial features and limb morphology. Sauer said, "the successful assignment of race to a skeletal specimen is not a vindication of the race concept, but rather a prediction that an individual, while alive was assigned to a particular socially constructed 'racial' category. A specimen may display features that point to African ancestry. In this country that person is likely to have been labeled Black regardless of whether or not such a race actually exists in nature".

Criticism of race-based medicines

Troy Duster points out that genetics is often not the predominant determinant of disease susceptibilities, even though these susceptibilities might correlate with specific socially defined categories. This is because research in this area often fails to control for a multiplicity of socio-economic factors. He cites data collected by King and Rewers indicating that dietary differences play a significant role in explaining variations in diabetes prevalence between populations.

Duster elaborates with the example of the Pima of Arizona, a population suffering from disproportionately high rates of diabetes. The reason, he argues, was not necessarily the prevalence of the FABP2 gene, which is associated with insulin resistance. Rather, he argues that scientists often discount the lifestyle implications of specific socio-historical contexts. Near the end of the 19th century, the Pima economy was predominantly agriculture-based; as European Americans settled into traditionally Pima territory, Pima lifestyles became heavily Westernised. Within three decades, the incidence of diabetes increased severalfold. Governmental provision of free, relatively high-fat food to alleviate poverty in the population has been noted as an explanation of this phenomenon.

Lorusso and Bacchini argue against the assumption that "self-identified race is a good proxy for a specific genetic ancestry" on the basis that self-identified race is complex: it depends on a range of psychological, cultural and social factors, and is therefore "not a robust proxy for genetic ancestry". They explain that an individual's self-identified race is also shaped by further, collectively arbitrary factors: personal opinions about what race is and the extent to which it should be taken into consideration in everyday life. Moreover, individuals who share a genetic ancestry may differ in their racial self-identification across historical or socioeconomic contexts. From this, Lorusso and Bacchini conclude that the accuracy of predicting genetic ancestry on the basis of self-identification is low, specifically in racially admixed populations born out of complex ancestral histories.

Meat alternative

From Wikipedia, the free encyclopedia
A tempeh burger
Chinese style tofu from Buddhist cuisine is prepared as an alternative to meat.
Two slices of vegetarian bacon

A meat alternative or meat substitute (also called plant-based meat, mock meat, or, sometimes pejoratively, fake meat) is a food product made from vegetarian or vegan ingredients, eaten as a replacement for meat. Meat alternatives typically approximate qualities of specific types of meat, such as mouthfeel, flavor, appearance, or chemical characteristics. Plant- and fungus-based substitutes are frequently made with soy (e.g. tofu, tempeh, and textured vegetable protein), but may also be made from wheat gluten as in seitan, pea protein as in the Beyond Burger, or mycoprotein as in Quorn.

Meat alternatives are typically consumed as a source of dietary protein by vegetarians, vegans, and people following religious and cultural dietary laws. However, global demand for sustainable diets has also increased their popularity among non-vegetarians and flexitarians seeking to reduce the environmental impact of meat production.

Meat substitution has a long history. Tofu was invented in China as early as 200 BCE, and in the Middle Ages, chopped nuts and grapes were used as a substitute for mincemeat during Lent. Since the 2010s, startup companies such as Impossible Foods and Beyond Meat have popularized pre-made plant-based substitutes for ground beef, patties, and vegan chicken nuggets as commercial products.

History

A nut and lentil roast from the Good Health journal, in 1902
Advert for John Harvey Kellogg's Protose meat substitute
The vegan Beyond Burger from Beyond Meat
Cheeseburger made with a vegan patty from Impossible Burger

Tofu, a meat alternative made from soybeans, was invented in China by the Han dynasty (206 BCE–220 CE). Drawings of tofu production have been discovered in a Han dynasty tomb. Its use as a meat alternative is recorded in a document written by Tao Gu (simplified Chinese: 陶谷; traditional Chinese: 陶穀; pinyin: Táo Gǔ, 903–970). Tao describes how tofu was popularly known as "small mutton" (Chinese: 小宰羊; pinyin: xiǎo zǎiyáng), which shows that the Chinese valued tofu as an imitation meat. Tofu was widely consumed during the Tang dynasty (618–907), and likely spread to Japan during the later Tang or early Song dynasty.

In the third century CE, Athenaeus describes a preparation of mock anchovy in his work Deipnosophistae:

He took a female turnip, shred it fine
Into the figure of the delicate fish;
Then did he pour on oil and savoury salt
With careful hand in due proportion.
On that he strew'd twelve grains of poppy seed,
Food which the Scythians love; then boil'd it all.
And when the turnip touch'd the royal lips,
Thus spake the king to the admiring guests:
"A cook is quite as useful as a poet,
And quite as wise, and these anchovies show it."

Wheat gluten has been documented in China since the sixth century. The oldest reference to wheat gluten appears in the Qimin Yaoshu, a Chinese agricultural encyclopedia written by Jia Sixie in 535. The encyclopedia mentions noodles prepared from wheat gluten called bo duo. Wheat gluten was known as mian jin by the Song dynasty (960–1279).

Prior to the arrival of Buddhism, northern China was predominantly a meat-consuming culture. The vegetarian dietary laws of Buddhism led to development of meat substitutes as a replacement for the meat-based dishes that the Chinese were no longer able to consume as Buddhists. Meat alternatives such as tofu and wheat gluten are still associated with Buddhist cuisine in China and other parts of East Asia. Meat alternatives were also popular in Medieval Europe during Lent, which prohibited the consumption of warm-blooded animals, eggs, and dairy products. Chopped almonds and grapes were used as a substitute for mincemeat. Diced bread was made into imitation cracklings and greaves.

John Harvey Kellogg developed meat replacements variously from nuts, grains, and soy, starting around 1877, to feed patients in his vegetarian Battle Creek Sanitarium. Kellogg's Sanitas Nut Food Company sold his meat substitute Protose, made from peanuts and wheat gluten. It became Kellogg's most popular meat substitute; several thousand tons had been consumed by 1930.

There was an increased interest in meat substitutes during the late 19th century and first half of the 20th century. Prior to 1950, interest in plant-based meat substitutes came from vegetarians searching for alternatives to meat protein for ethical reasons, and regular meat-eaters who were confronted with food shortages during World War I and World War II.

Henrietta Latham Dwight authored a vegetarian cookbook, The Golden Age Cook-Book, in 1898, which included meat-substitute recipes such as a "mock chicken" made from breadcrumbs, eggs, lemon juice and walnuts and a "mock clam soup" made from marrowfat beans and cream. Dietitian Sarah Tyson Rorer authored the cookbook Mrs. Rorer's Vegetable Cookery and Meat Substitutes in 1909, which includes a mock veal roast recipe made from lentils, breadcrumbs and peanuts. In 1943, Kellogg made his first soy-based meat analog, called Soy Protose, which contained 32% soy. In 1945, Mildred Lager commented that soybeans "are the best meat substitute from the vegetable kingdom, they will always be used to a great extent by the vegetarian in place of meat."

In July 2016, Impossible Foods launched the Impossible Burger, a beef substitute claimed to offer appearance, taste and cooking properties similar to meat. In April 2019, Burger King partnered with Impossible Foods to launch the plant-based Impossible Whopper, which was released nationwide later that year, becoming one of the most successful product launches in Burger King's history. By October 2019, restaurant chains such as Carl's Jr., Hardee's, A&W, Dunkin' Donuts, and KFC were selling plant-based meat products. Nestlé entered the plant-based burger market in 2019 with the introduction of the "Awesome Burger". Kellogg's Morningstar Farms brand tested its Incogmeato line of plant-based protein products in early September 2019, with plans for a US-wide rollout in early 2020.

Types

A vegan faux-meat pie, containing soy protein and mushrooms, from an Australian bakery

Some vegetarian meat alternatives are based on centuries-old recipes for seitan (wheat gluten), rice, mushrooms, legumes, tempeh, yam flour or pressed tofu, with flavoring added to make the finished product taste like chicken, beef, lamb, ham, sausage, seafood, etc. Other alternatives use modified defatted peanut flour, yuba and textured vegetable protein (TVP); yuba and TVP are both soy-based meat alternatives, the former made by layering the thin skin which forms on top of boiled soy milk, and the latter a dry bulk commodity derived from soy and soy protein concentrate. Some meat alternatives are based on mycoprotein, such as Quorn, which usually uses egg white as a binder. Another single-cell-protein approach, based on bacteria rather than fungi, is produced by Calysta.

Production and composition

To produce meat alternatives with a meat-like texture, two approaches can be followed: bottom-up and top-down. With bottom-up structuring, individual fibers are made separately and then assembled into larger products. An example of a meat alternative made using a bottom-up strategy is cultured meat. The top-down approach, on the other hand, induces a fibrous structure by deforming the material, resulting in fibrousness on a larger length scale. An example of a top-down technique is food extrusion.

The range of ingredients that can be used to create meat substitutes is expanding, from companies like Plentify, which use high-protein bacteria found in the human microbiome, to companies like Meati Foods, which cultivate the mycelium of fungi—in this case, Neurospora crassa—to form steaks, chicken breasts, or fish.

Soy protein isolates or soybean flour and gluten are usually used as the foundation for most meat substitutes available on the market. Soy protein isolate is a highly pure form of soy protein with a minimum protein content of 90%. The process of extracting the protein from the soybeans starts with the dehulling, or decortication, of the seeds. The seeds are then treated with solvents such as hexane to extract the oil. The oil-free soybean meal is then suspended in water and treated with alkali to dissolve the protein while leaving behind the carbohydrates. The alkaline solution is then treated with acidic substances to precipitate the protein, which is then washed and dried. The removal of fats and carbohydrates results in a product that has a relatively neutral flavor. Soy protein is also considered a "complete protein", as it contains all of the essential amino acids that are crucial for proper human growth and development.

After the textured base material is obtained, a number of flavorings can be used to give the product a meaty flavor. A recipe for a basic vegan chicken flavor has been known since 1972, exploiting the Maillard reaction to produce aromas from simple chemicals. Later understanding of the sources of aroma in cooked meat also found lipid oxidation and thiamine breakdown to be important processes. By using more complex starting materials such as yeast extract (considered a natural flavoring in the EU), hydrolyzed vegetable protein, various fermented foods, and spices, these reactions are also replicated during cooking to produce richer and more convincing meat flavors.

Commerce

Average price of meat substitutes worldwide from 2013 to 2021 with projections to 2026. Includes vegetarian and vegan meat substitutes. Projection made in October 2021. Prices were converted to USD using average exchange rates of the first year.

Meat substitutes represented around 11% of the world's meat and substitutes market in 2020, though this market share differs from region to region. From 2013 to 2021, the world average price of meat substitutes fell continuously, by 33% overall; the only exception was a 0.3% increase in 2020 compared with 2019. According to projections by Statista, the price will continue to decrease (see the average price graph above).

The motivation for seeking out meat substitutes varies among consumers. The market for meat alternatives is highly dependent on "meat-reducers", who are primarily motivated by health consciousness and weight management. Consumers who identify as vegan, vegetarian or pescetarian are more likely to cite concerns regarding animal welfare and/or environmentalism as primary motivators. Additionally, some cultural and religious traditions, including Hinduism, Judaism, Islam, Christianity, Jainism, and Buddhism, place prohibitions on consuming some or all animal products.

Vegan meats are consumed in restaurants, grocery stores, bakeries, vegan school meals, and in homes. The sector for plant-based meats grew by 37% in North America over 2017–18. In 2018–19, sales of plant-based meats in the United States were $895 million, with the global market for meat alternatives forecast to reach $140 billion by 2029. Seeking a healthy alternative to meat, curiosity, and trends toward veganism were drivers for the meat alternative market in 2019. Sales of plant-based meats increased during the 2020 COVID-19 pandemic. The book The End of Animal Farming by Jacy Reese Anthis argues that plant-based food and cultured meat will completely replace animal-based food by 2100.

Impact

Environmental

Besides ethical and health motivations, developing better meat alternatives has the potential to reduce the environmental impact of meat production, an important concern given that global demand for meat products is predicted to increase by 15 percent by 2031. Research comparing meats and meat substitutes suggests that non-meat products can offer substantial benefits over the production of beef, and to a lesser extent pork and chicken, in terms of greenhouse gas production, water and land use. A 2022 report from the Boston Consulting Group found that investment in improving and scaling up the production of meat and dairy alternatives leads to larger greenhouse gas reductions than comparable investments elsewhere.

According to The Good Food Institute, improving the efficiency of the Western diet is crucial for achieving sustainability, and as the global population grows, the way land is used will have to be reconsidered. 33% of the habitable land on Earth is used to support animals. Of all the land used for agriculture, 77% is used for animal agriculture, even though this sector supplies only 17% of the total food supply. Plant-based meat can use 47–99% less land than conventional meat, freeing up land for other uses. Of the total water used in global agriculture, 33% goes to animal agriculture, water which could otherwise be used for drinking or for growing other crops. Plant-based meat uses 72–99% less water than conventional meat production.

Pollution is the next-largest contributor to wasted water. Pesticides used in animal feed production, as well as waste runoff into reservoirs, can cause ecological damage and even human illness, in addition to taking water directly out of the usable supply. Animal agriculture is the main contributor to the food sector's greenhouse gas emissions; production of plant-based meat alternatives emits 30–90% less than conventional meat production. Beyond contributing less to this pollution, much of the land currently used for animal feed could instead be used to mitigate the damage already done to the planet through carbon recycling, soil conservation, and renewable energy production. In addition to the ecological harm caused by the current industry, excess antibiotics given to animals breed resistant microbes that may render some of the life-saving drugs used in human medicine useless. Plant-based meat requires no antibiotics and would greatly reduce antibiotic resistance in microbes.

A 2023 study published in Nature Communications found that replacing just half of the beef, chicken, dairy and pork products consumed by the global population with plant-based alternatives could reduce the amount of land used by agriculture by almost a third, bring deforestation for agriculture nearly to a halt, help restore biodiversity through rewilding the land and reduce GHG emissions from agriculture by 31% in 2050, paving a clearer path to achieving both climate and biodiversity goals.

Health

In 2021, the American Heart Association stated that there is "limited evidence on the short- and long-term health effects" of plant-based meat alternatives. The same year, the World Health Organization stated that there are "significant knowledge gaps in the nutritional composition" of meat alternatives and that more research is needed to investigate their health impacts. A 2023 systematic review, however, concluded that replacing red and highly processed meat with a variety of meat alternatives improved quality-adjusted life years, led to significant health system savings, and reduced greenhouse gas emissions. Replacement of meat with minimally processed vegetarian alternatives such as legumes had the greatest effect.

Criticism

Companies producing plant-based meat alternatives, including Beyond Meat and Impossible Foods, have been criticized for their marketing, the composition of their products, and their use of animal testing. Dietitians have claimed the products are not necessarily healthier than meat because of their highly processed nature and sodium content.

John Mackey, co-founder and CEO of Whole Foods, and Brian Niccol, CEO of Chipotle Mexican Grill, have criticized meat alternatives as ultra-processed foods. Chipotle has said it will not carry these products at its restaurants due to their highly processed nature. CNBC wrote in 2019 of Chipotle joining "the likes of Taco Bell ... and Arby's in committing to excluding meatless meats on its menu." In response, Beyond Meat invited Niccol to visit its manufacturing site to see the production process. Chipotle later developed its own "plant-based chorizo". In September 2022, Taco Bell also began adding plant-based meat alternatives to its menu.

Some consulting firms and analysts have demanded more transparency about the environmental impact of plant-based meat. In a survey, analysts from Deloitte found that some consumers negatively associated meat alternatives with being "woke" and politically left-leaning. These associations emerged in response to Cracker Barrel's introduction of Impossible Sausages in its restaurants in August 2022. In 2021, 68% of consumers who purchased plant-based meats believed they were healthier than animal meat; this figure dropped to 60% in 2022, indicating a decline in consumer belief in the healthiness of these products.

Some states have enacted legislation prohibiting meat alternatives from being labeled as "meat". In Louisiana, the so-called "Truth in Labeling of Food Products Act" was challenged by Tofurky on free-speech grounds, and the challenge was successful.

Alternative meat companies Beyond Meat and Impossible Foods have attempted to appeal to meat eaters. University of Oregon marketing professor Steffen Jahn thinks that this has run afoul of human psychology, saying "the mimicking of real meat introduces that comparison of authenticity." Jahn argues that marketing plant-based meats alongside traditional meats highlights an artificiality that many consumers dislike. Consumer psychologists split foods into categories of "virtue" and "vice" foods, which ultimately guide how products are marketed and sold. Virtue foods are those that are less gratifying in the short term but typically healthier, whereas vice foods are the opposite, offering short-term appeal with greater long-term consequences. Many ready-made meat alternatives straddle these categories because of their long lists of ingredients. Consumers who want to be "virtuous" by avoiding harm to the environment or animals are also likely to want "virtuous" food in the form of simple ingredients.

Feature (computer vision)

In computer vision and image processing, a feature is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a general neighborhood operation or feature detection applied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.

More broadly a feature is any piece of information which is relevant for solving the computational task related to a certain application. This is the same sense as feature in machine learning and pattern recognition generally, though image processing has a very sophisticated collection of features. The feature concept is very general and the choice of features in a particular computer vision system may be highly dependent on the specific problem at hand.

Definition

There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Nevertheless, a feature is typically defined as an "interesting" part of an image, and features are used as a starting point for many computer vision algorithms.

Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, the desirable property for a feature detector is repeatability: whether or not the same feature will be detected in two or more different images of the same scene.

Feature detection is a low-level image processing operation. That is, it is usually performed as the first operation on an image, and examines every pixel to see if there is a feature present at that pixel. If this is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in prerequisite to feature detection, the input image is usually smoothed by a Gaussian kernel in a scale-space representation and one or several feature images are computed, often expressed in terms of local image derivative operations.
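As an illustration of this pipeline, the following is a minimal sketch (using NumPy and SciPy; the array `image`, the scale parameter, and the synthetic test image are assumptions for the example) that smooths an image at a chosen scale and computes a gradient-magnitude feature image from local derivatives.

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude_feature_image(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Smooth with a Gaussian kernel (scale-space) and return the gradient-magnitude feature image."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    # Local image derivatives along rows (y) and columns (x).
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy)

# Example: a synthetic image with a bright square gives strong responses along its edges.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
feature_image = gradient_magnitude_feature_image(image, sigma=1.5)
```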

Occasionally, when feature detection is computationally expensive and there are time constraints, a higher level algorithm may be used to guide the feature detection stage, so that only certain parts of the image are searched for features.

There are many computer vision algorithms that use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of feature detected, the computational complexity and the repeatability.

When features are defined in terms of local neighborhood operations applied to an image, a procedure commonly referred to as feature extraction, one can distinguish between feature detection approaches that produce local decisions as to whether there is a feature of a given type at a given image point, and those that produce non-binary data as a result. The distinction becomes relevant when the resulting detected features are relatively sparse. Although local decisions are made, the output from a feature detection step does not need to be a binary image. The result is often represented in terms of sets of (connected or unconnected) coordinates of the image points where features have been detected, sometimes with subpixel accuracy.

When feature extraction is done without local decision making, the result is often referred to as a feature image. Consequently, a feature image can be seen as an image in the sense that it is a function of the same spatial (or temporal) variables as the original image, but where the pixel values hold information about image features instead of intensity or color. This means that a feature image can be processed in a similar way as an ordinary image generated by an image sensor. Feature images are also often computed as an integrated step in algorithms for feature detection.

Feature vectors and feature spaces

In some applications, it is not sufficient to extract only one type of feature to obtain the relevant information from the image data. Instead two or more different features are extracted, resulting in two or more feature descriptors at each image point. A common practice is to organize the information provided by all these descriptors as the elements of one single vector, commonly referred to as a feature vector. The set of all possible feature vectors constitutes a feature space.

A common example of feature vectors appears when each image point is to be classified as belonging to a specific class. Assuming that each image point has a corresponding feature vector based on a suitable set of features, such that each class is well separated in the corresponding feature space, the classification of each image point can be done using a standard classification method.
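As a hedged sketch of this idea (NumPy only; the choice of per-pixel features and the class prototypes are invented for illustration), the fragment below stacks several per-pixel features into a feature vector and assigns each pixel to the nearest class centroid in feature space.

```python
import numpy as np

def classify_pixels(feature_images: list[np.ndarray], centroids: np.ndarray) -> np.ndarray:
    """Stack per-pixel features into vectors and label each pixel by its nearest centroid.

    feature_images: list of H x W arrays, one per feature.
    centroids: (n_classes, n_features) array of class prototypes in feature space.
    """
    # Feature space: one vector of length n_features per pixel.
    vectors = np.stack(feature_images, axis=-1)                 # H x W x n_features
    flat = vectors.reshape(-1, vectors.shape[-1])               # (H*W) x n_features
    # Euclidean distance from every pixel's feature vector to every centroid.
    dists = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    return labels.reshape(vectors.shape[:2])
```

Any standard classifier could replace the nearest-centroid rule; the essential point is that classification operates on the feature vectors, not on raw intensities.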

Simplified example of training a neural network in object detection: The network is trained by multiple images that are known to depict starfish and sea urchins, which are correlated with "nodes" that represent visual features. The starfish match with a ringed texture and a star outline, whereas most sea urchins match with a striped texture and oval shape. However, the instance of a ring textured sea urchin creates a weakly weighted association between them.
 
Subsequent run of the network on an input image (left): The network correctly detects the starfish. However, the weakly weighted association between ringed texture and sea urchin also confers a weak signal to the latter from one of two features. In addition, a shell that was not included in the training gives a weak signal for the oval shape, also resulting in a weak signal for the sea urchin output. These weak signals may result in a false positive result for sea urchin.
In reality, textures and outlines would not be represented by single nodes, but rather by associated weight patterns of multiple nodes.

Another and related example occurs when neural network-based processing is applied to images. The input data fed to the neural network is often given in terms of a feature vector from each image point, where the vector is constructed from several different features extracted from the image data. During a learning phase, the network can itself find which combinations of different features are useful for solving the problem at hand.
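A hedged sketch of this setup (using scikit-learn's MLPClassifier, with synthetic feature vectors and labels standing in for real image data) lets a small network learn which combinations of features separate the classes:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins: 200 image points, each described by a 3-element feature vector
# (e.g. gradient magnitude, local variance, hue), with a known class label per point.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # labels from an arbitrary feature combination

# A small multilayer perceptron learns which combination of features predicts the class.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)
labels = clf.predict(X)                           # per-point class decisions
```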

Types

Edges

Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can be of almost arbitrary shape, and may include junctions. In practice, edges are usually defined as sets of points in the image which have a strong gradient magnitude. Furthermore, some common algorithms will then chain high gradient points together to form a more complete description of an edge. These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value.

Locally, edges have a one-dimensional structure.
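A minimal edge-detection sketch along these lines (NumPy/SciPy; the relative threshold is a hand-picked value that would need tuning in practice) marks pixels whose gradient magnitude is strong:

```python
import numpy as np
from scipy import ndimage

def edge_points(image: np.ndarray, sigma: float = 1.0, threshold: float = 0.2) -> np.ndarray:
    """Return a boolean map of pixels with strong gradient magnitude (candidate edge points)."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    gy, gx = np.gradient(smoothed)
    magnitude = np.hypot(gx, gy)
    # Keep points whose gradient magnitude exceeds a fraction of the maximum response.
    return magnitude > threshold * magnitude.max()
```

Full detectors such as Canny add non-maximum suppression and hysteresis thresholding on top of this, chaining the surviving points into curves.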

Corners / interest points

The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image, which have a local two dimensional structure. The name "Corner" arose since early algorithms first performed edge detection, and then analysed the edges to find rapid changes in direction (corners). These algorithms were then developed so that explicit edge detection was no longer required, for instance by looking for high levels of curvature in the image gradient. It was then noticed that the so-called corners were also being detected on parts of the image which were not corners in the traditional sense (for instance a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition.
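The following sketch (NumPy/SciPy; the constant k = 0.04 is conventional but adjustable, and the smoothing scales are illustrative) computes a Harris-style corner response from the smoothed structure tensor of the image gradients, which is one way to find such points without explicit edge detection:

```python
import numpy as np
from scipy import ndimage

def harris_response(image: np.ndarray, sigma: float = 1.0, k: float = 0.04) -> np.ndarray:
    """Harris corner measure: det(M) - k * trace(M)^2 of the local structure tensor M."""
    # Image gradients after light Gaussian smoothing.
    gy, gx = np.gradient(ndimage.gaussian_filter(image.astype(float), 1.0))
    # Structure tensor components, averaged over a Gaussian window of width sigma.
    Sxx = ndimage.gaussian_filter(gx * gx, sigma)
    Syy = ndimage.gaussian_filter(gy * gy, sigma)
    Sxy = ndimage.gaussian_filter(gx * gy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2    # large positive values indicate corner-like points
```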

Blobs / regions of interest or interest points

Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image which are too smooth to be detected by a corner detector.

Consider shrinking an image and then performing corner detection. The detector will respond to points which are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoH blob detectors are also mentioned in the article on corner detection.
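As a hedged illustration of the role of scale in blob detection, the sketch below (NumPy/SciPy; the set of scales is arbitrary) applies a scale-normalized Laplacian of Gaussian at several scales and keeps the strongest response per pixel; real detectors additionally search for extrema across the scale dimension.

```python
import numpy as np
from scipy import ndimage

def log_blob_response(image: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)) -> np.ndarray:
    """Scale-normalized Laplacian-of-Gaussian response, maximized over a set of scales."""
    img = image.astype(float)
    # sigma**2 normalizes the operator so responses are comparable across scales.
    responses = [sigma ** 2 * np.abs(ndimage.gaussian_laplace(img, sigma)) for sigma in sigmas]
    return np.max(np.stack(responses), axis=0)
```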

Ridges

For elongated objects, the notion of ridges is a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of a medial axis. From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry, and in addition has an attribute of local ridge width associated with each ridge point. Unfortunately, however, it is algorithmically harder to extract ridge features from general classes of grey-level images than edge-, corner- or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images—see ridge detection.
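A rough, hedged sketch of one common ridge-strength measure (NumPy/SciPy; the smoothing scale is illustrative) uses the eigenvalues of the local Hessian: a bright, elongated structure gives one strongly negative principal curvature and one near zero.

```python
import numpy as np
from scipy import ndimage

def ridge_strength(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Ridge measure from the Hessian: magnitude of the most negative principal curvature."""
    img = ndimage.gaussian_filter(image.astype(float), sigma)
    # Second derivatives via repeated first differences.
    Hyy, Hyx = np.gradient(np.gradient(img, axis=0))
    Hxy, Hxx = np.gradient(np.gradient(img, axis=1))
    # Eigenvalues of the 2x2 symmetric Hessian [[Hxx, Hxy], [Hxy, Hyy]].
    mean = 0.5 * (Hxx + Hyy)
    diff = np.sqrt(((Hxx - Hyy) * 0.5) ** 2 + Hxy ** 2)
    lambda_min = mean - diff
    return np.maximum(-lambda_min, 0.0)    # strong for bright, line-like structures
```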

Detection

Feature detection includes methods for computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions.

Feature extraction is sometimes performed over several scales. One such method is the scale-invariant feature transform (SIFT).
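For reference, a minimal usage sketch of SIFT through OpenCV (assuming OpenCV 4.4 or later, where SIFT is included in the main module; the file name is illustrative):

```python
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # illustrative input image
sift = cv2.SIFT_create()
# Keypoints carry position, scale and orientation; descriptors are 128-dimensional vectors.
keypoints, descriptors = sift.detectAndCompute(gray, None)
```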

Common feature detectors and their classification:
Feature detector                    Edge   Corner   Blob   Ridge
Canny                               Yes    No       No     No
Sobel                               Yes    No       No     No
Harris & Stephens / Plessey         Yes    Yes      No     No
SUSAN                               Yes    Yes      No     No
Shi & Tomasi                        No     Yes      No     No
Level curve curvature               No     Yes      No     No
FAST                                No     Yes      Yes    No
Laplacian of Gaussian               No     Yes      Yes    No
Difference of Gaussians             No     Yes      Yes    No
Determinant of Hessian              No     Yes      Yes    No
Hessian strength feature measures   No     Yes      Yes    No
MSER                                No     No       Yes    No
Principal curvature ridges          No     No       No     Yes
Grey-level blobs                    No     No       Yes    No

Extraction

Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector. Among the approaches used for feature description are N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection.
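As a hedged sketch of a local-histogram descriptor (far simpler than SIFT's; the patch radius and bin count are arbitrary choices, and the point is assumed to lie at least `radius` pixels from the image border), the fragment below builds a gradient-orientation histogram, weighted by gradient magnitude, around a detected feature point.

```python
import numpy as np
from scipy import ndimage

def orientation_histogram(image: np.ndarray, row: int, col: int,
                          radius: int = 8, bins: int = 8) -> np.ndarray:
    """Gradient-orientation histogram (magnitude-weighted) around the point (row, col)."""
    img = ndimage.gaussian_filter(image.astype(float), 1.0)
    gy, gx = np.gradient(img)
    patch = (slice(row - radius, row + radius), slice(col - radius, col + radius))
    angles = np.arctan2(gy[patch], gx[patch]).ravel()     # orientations in [-pi, pi]
    weights = np.hypot(gx[patch], gy[patch]).ravel()      # gradient magnitudes as weights
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=weights)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist              # normalized descriptor vector
```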

Low-level

Curvature

Image motion

Shape based

Flexible methods

  • Deformable, parameterized shapes
  • Active contours (snakes)

Representation

A specific image feature, defined in terms of a specific structure in the image data, can often be represented in different ways. For example, an edge can be represented as a boolean variable in each image point that describes whether an edge is present at that point. Alternatively, one can instead use a representation which provides a certainty measure instead of a boolean statement of the edge's existence, and combine this with information about the orientation of the edge. Similarly, the color of a specific region can either be represented in terms of the average color (three scalars) or a color histogram (three functions).

When a computer vision system or computer vision algorithm is designed, the choice of feature representation can be a critical issue. In some cases, a higher level of detail in the description of a feature may be necessary for solving the problem, but this comes at the cost of having to deal with more data and more demanding processing. Below, some of the factors which are relevant for choosing a suitable representation are discussed. In this discussion, an instance of a feature representation is referred to as a feature descriptor, or simply descriptor.

Certainty or confidence

Two examples of image features are local edge orientation and local velocity in an image sequence. In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood. Local velocity is undefined if the corresponding image region does not contain any spatial variation. As a consequence of this observation, it may be relevant to use a feature representation which includes a measure of certainty or confidence related to the statement about the feature value. Otherwise, it is a typical situation that the same descriptor is used to represent feature values of low certainty and feature values close to zero, with a resulting ambiguity in the interpretation of this descriptor. Depending on the application, such an ambiguity may or may not be acceptable.

In particular, if a feature image will be used in subsequent processing, it may be a good idea to employ a feature representation that includes information about certainty or confidence. This enables a new feature descriptor to be computed from several descriptors, for example computed at the same image point but at different scales, or from different but neighboring points, in terms of a weighted average where the weights are derived from the corresponding certainties. In the simplest case, the corresponding computation can be implemented as a low-pass filtering of the feature image. The resulting feature image will, in general, be more stable to noise.
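A minimal sketch of such certainty-weighted smoothing (sometimes called normalized averaging; NumPy/SciPy, with an arbitrary Gaussian width) divides a low-pass filtered, certainty-weighted feature image by the low-pass filtered certainties:

```python
import numpy as np
from scipy import ndimage

def normalized_average(feature: np.ndarray, certainty: np.ndarray,
                       sigma: float = 2.0, eps: float = 1e-9) -> np.ndarray:
    """Smooth a feature image using per-pixel certainties as weights."""
    weighted = ndimage.gaussian_filter(feature * certainty, sigma)
    weight_sum = ndimage.gaussian_filter(certainty, sigma)
    # Pixels with low certainty borrow values from more certain neighbors.
    return weighted / (weight_sum + eps)
```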

Averageability

In addition to having certainty measures included in the representation, the representation of the corresponding feature values may itself be suitable for an averaging operation or not. Most feature representations can be averaged in practice, but only in certain cases can the resulting descriptor be given a correct interpretation in terms of a feature value. Such representations are referred to as averageable.

For example, if the orientation of an edge is represented in terms of an angle, this representation must have a discontinuity where the angle wraps from its maximal value to its minimal value. Consequently, it can happen that two similar orientations are represented by angles which have a mean that does not lie close to either of the original angles and, hence, this representation is not averageable. There are other representations of edge orientation, such as the structure tensor, which are averageable.
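The difference can be illustrated with a small sketch (NumPy; the two orientations are example values chosen near the wrap-around): averaging the raw angles gives a misleading result, while averaging a double-angle, structure-tensor-like representation recovers a sensible mean orientation.

```python
import numpy as np

# Two similar edge orientations, expressed as angles in [0 deg, 180 deg), near the wrap-around.
angles = np.deg2rad(np.array([175.0, 5.0]))

naive_mean = np.rad2deg(angles.mean())        # 90 deg: perpendicular to both, misleading

# Double-angle representation: orientation theta maps to the unit vector (cos 2theta, sin 2theta),
# so theta and theta + 180 deg coincide and the representation becomes averageable.
v = np.stack([np.cos(2 * angles), np.sin(2 * angles)]).mean(axis=1)
averaged = np.rad2deg(0.5 * np.arctan2(v[1], v[0])) % 180   # ~0 deg (== 180 deg): close to both inputs
```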

Another example relates to motion, where in some cases only the normal velocity relative to some edge can be extracted. If two such features have been extracted and they can be assumed to refer to the same true velocity, this velocity is not given as the average of the normal velocity vectors. Hence, normal velocity vectors are not averageable. Instead, there are other representations of motion, using matrices or tensors, that give the true velocity in terms of an averaging operation over the normal velocity descriptors.

Matching

Features detected in each image can be matched across multiple images to establish corresponding features such as corresponding points.

A typical matching algorithm compares and analyzes point correspondences between a reference image and a target image. If any part of a cluttered scene shares correspondences with the reference above a chosen threshold, that part of the scene image is considered to contain the reference object.
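A hedged sketch of brute-force descriptor matching (NumPy; the Lowe-style ratio of 0.75 and the minimum match count are illustrative values, and the scene is assumed to contain at least two descriptors) that decides whether the reference object appears in a scene:

```python
import numpy as np

def match_descriptors(ref: np.ndarray, scene: np.ndarray, ratio: float = 0.75) -> list[tuple[int, int]]:
    """Match each reference descriptor to its nearest scene descriptor, keeping distinctive matches."""
    matches = []
    for i, d in enumerate(ref):
        dists = np.linalg.norm(scene - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Ratio test: accept only if the best match is clearly better than the second best.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

def object_present(ref: np.ndarray, scene: np.ndarray, min_matches: int = 10) -> bool:
    """Declare the reference object present if enough correspondences survive the ratio test."""
    return len(match_descriptors(ref, scene)) >= min_matches
```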

Agricultural robot

From Wikipedia, the free encyclopedia
Autonomous agricultural robot

An agricultural robot is a robot deployed for agricultural purposes. The main area of application of robots in agriculture today is at the harvesting stage. Emerging applications of robots or drones in agriculture include weed control, cloud seeding, planting seeds, harvesting, environmental monitoring and soil analysis. According to Verified Market Research, the agricultural robots market is expected to reach $11.58 billion by 2025.

General

Fruit picking robots, driverless tractors/sprayers, and sheep shearing robots are designed to replace human labor. In most cases, many factors have to be considered (e.g., the size and color of the fruit to be picked) before the commencement of a task. Robots can be used for other horticultural tasks such as pruning, weeding, spraying and monitoring. Robots can also be used in livestock applications (livestock robotics) such as automatic milking, washing and castrating. Robots like these have many benefits for the agricultural industry, including a higher quality of fresh produce, lower production costs, and a decreased need for manual labor. They can also be used to automate manual tasks, such as weed or bracken spraying, where the use of tractors and other human-operated vehicles is too dangerous for the operators.

Designs

Fieldwork robot

The mechanical design consists of an end effector, manipulator, and gripper. Several factors must be considered in the design of the manipulator, including the task, economic efficiency, and required motions. The end effector influences the market value of the fruit and the gripper's design is based on the crop that is being harvested.

End effector

An end effector in an agricultural robot is the device found at the end of the robotic arm, used for various agricultural operations. Several different kinds of end effectors have been developed. In an agricultural operation involving grapes in Japan, end effectors are used for harvesting, berry-thinning, spraying, and bagging. Each was designed according to the nature of the task and the shape and size of the target fruit. For instance, the end effectors used for harvesting were designed to grasp, cut, and push the bunches of grapes.

Berry thinning is another operation performed on the grapes, and is used to enhance the market value of the grapes, increase the grapes' size, and facilitate the bunching process. For berry thinning, an end effector consists of an upper, middle, and lower part. The upper part has two plates and a rubber that can open and close. The two plates compress the grapes to cut off the rachis branches and extract the bunch of grapes. The middle part contains a plate of needles, a compression spring, and another plate which has holes spread across its surface. When the two plates compress, the needles punch holes through the grapes. Next, the lower part has a cutting device which can cut the bunch to standardize its length.

For spraying, the end effector consists of a spray nozzle that is attached to a manipulator. In practice, producers want to ensure that the chemical liquid is evenly distributed across the bunch. Thus, the design allows for an even distribution of the chemical by making the nozzle move at a constant speed while keeping distance from the target.

The final step in grape production is the bagging process. The bagging end effector is designed with a bag feeder and two mechanical fingers. In the bagging process, the bag feeder is composed of slits which continuously supply bags to the fingers in an up and down motion. While the bag is being fed to the fingers, two leaf springs that are located on the upper end of the bag hold the bag open. The bags are produced to contain the grapes in bunches. Once the bagging process is complete, the fingers open and release the bag. This shuts the leaf springs, which seal the bag and prevent it from opening again.

Gripper

The gripper is a grasping device that is used for harvesting the target crop. Design of the gripper is based on simplicity, low cost, and effectiveness. Thus, the design usually consists of two mechanical fingers that are able to move in synchrony when performing their task. Specifics of the design depend on the task that is being performed. For example, in a procedure that required plants to be cut for harvesting, the gripper was equipped with a sharp blade.

Manipulator

The manipulator allows the gripper and end effector to navigate through their environment. The manipulator consists of four-bar parallel links that maintain the gripper's position and height. The manipulator can also use one, two, or three pneumatic actuators. Pneumatic actuators are motors that produce linear and rotary motion by converting compressed air into energy. The pneumatic actuator is the most effective actuator for agricultural robots because of its high power-to-weight ratio. The most cost-efficient design for the manipulator is the single-actuator configuration, yet this is the least flexible option.

Development

The first developments of robotics in agriculture can be dated as early as the 1920s, when research to incorporate automatic vehicle guidance into agriculture began to take shape. This research led to advancements in autonomous agricultural vehicles between the 1950s and 1960s. The concept was not perfect, however, as the vehicles still needed a cable system to guide their path. Robots in agriculture continued to develop as technologies in other sectors advanced as well. It was not until the 1980s, following the development of the computer, that machine vision guidance became possible.

Other developments over the years included the harvesting of oranges using a robot both in France and the US.

While robots have been incorporated in indoor industrial settings for decades, outdoor robots for the use of agriculture are considered more complex and difficult to develop. This is due to concerns over safety, but also over the complexity of picking crops subject to different environmental factors and unpredictability.

Demand in the market

There are concerns over the amount of labor the agricultural sector needs. With an aging population, Japan is unable to meet the demands of the agricultural labor market. Similarly, the United States currently depends on a large number of immigrant workers, but between the decrease in seasonal farmworkers and increased government efforts to curb immigration, it too is unable to meet demand. Businesses are often forced to let crops rot due to an inability to pick them all by the end of the season. Additionally, there are concerns over the growing population that will need to be fed in the coming years. Because of this, there is a strong desire to improve agricultural machinery to make it more cost-efficient and viable for continued use.

Current applications and trends

Unmanned tractor "Uralets-224"

Much of the current research continues to work towards autonomous agricultural vehicles. This research is based on the advancements made in driver-assist systems and self-driving cars.

While robots have already been incorporated in many areas of agricultural farm work, they are still largely missing in the harvest of various crops. This has started to change as companies begin to develop robots that complete more specific tasks on the farm. The biggest concern over robots harvesting crops comes from harvesting soft crops such as strawberries which can easily be damaged or missed entirely. Despite these concerns, progress in this area is being made. According to Gary Wishnatzki, the co-founder of Harvest Croo Robotics, one of their strawberry pickers currently being tested in Florida can "pick a 25-acre field in just three days and replace a crew of about 30 farm workers". Similar progress is being made in harvesting apples, grapes, and other crops. In the case of apple harvesting robots, current developments have been too slow to be commercially viable. Modern robots are able to harvest apples at a rate of one every five to ten seconds while the average human harvests at a rate of one per second.

Another goal being set by agricultural companies involves the collection of data. There are rising concerns over the growing population and the decreasing labor available to feed them. Data collection is being developed as a way to increase productivity on farms. AgriData is currently developing new technology to do just this and help farmers better determine the best time to harvest their crops by scanning fruit trees.

Applications

Robots have many fields of application in agriculture. Some examples and prototypes of robots include the Merlin Robot Milker, Rosphere, Harvest Automation, Orange Harvester, lettuce bot, and weeder.

According to David Gardner, chief executive of the Royal Agricultural Society of England, a robot can complete a complicated task if it is repetitive and the robot is allowed to sit in a single place. Furthermore, robots that work on repetitive tasks (e.g. milking) fulfill their role to a consistent and particular standard.

  • One case of large-scale use of robots in farming is the milking robot. It is widespread among British dairy farms because of its efficiency and because it does not need to move.
  • Another field of application is horticulture. One horticultural application is the development of the RV100 by Harvest Automation Inc. The RV100 is designed to transport potted plants in a greenhouse or outdoor setting. Its functions in handling and organizing potted plants include spacing, collection, and consolidation. The benefits of using the RV100 for this task include high placement accuracy, autonomous outdoor and indoor operation, and reduced production costs.

Benefits of many applications may include ecosystem and environmental benefits, and reduced labor costs (which may translate to reduced food costs), which may be of special importance for food production in regions where there are labor shortages (see above) or where labor is relatively expensive. Benefits also include the general advantages of automation, such as improved productivity and availability, and freeing human labor for other tasks or for more engaging work.

Examples and further applications

  • Weed control using lasers (e.g. LaserWeeder by Carbon Robotics)
  • Precision agriculture robots applying low amounts of herbicides and fertilizers with precision while mapping plant locations
  • Picking robots are under development
  • Vinobot and Vinoculer
  • LSU's AgBot
  • Burro, a carrying and path following robot with the potential to expand into picking and phytopathology
  • Harvest Automation is a company founded by former iRobot employees to develop robots for greenhouses
  • Root AI has made a tomato-picking robot for use in greenhouses
  • Strawberry picking robot from Robotic Harvesting and Agrobot
  • Small Robot Company developed a range of small agricultural robots, each one being focused on a particular task (weeding, spraying, drilling holes, ...) and controlled by an AI system
  • Agreenculture 
  • ecoRobotix has made a solar-powered weeding and spraying robot
  • Blue River Technology has developed a farm implement for a tractor which only sprays plants that require spraying, reducing herbicide use by 90%
  • Casmobot next generation slope mower
  • Fieldrobot Event is a competition in mobile agricultural robotics
  • HortiBot - A Plant Nursing Robot
  • Lettuce Bot - Organic Weed Elimination and Thinning of Lettuce
  • Rice planting robot developed by the Japanese National Agricultural Research Centre
  • ROS Agriculture - Open source software for agricultural robots using the Robot Operating System
  • The IBEX autonomous weed spraying robot for extreme terrain, under development
  • FarmBot, Open Source CNC Farming
  • VAE, under development by an Argentinean ag-tech startup, aims to become a universal platform for multiple agricultural applications, from precision spraying to livestock handling.
  • ACFR RIPPA: for spot spraying
  • ACFR SwagBot; for livestock monitoring
  • ACFR Digital Farmhand: for spraying, weeding and seeding
  • Thorvald - an autonomous modular multi-purpose agricultural robot developed by Saga Robotics.
Copper in biology

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cop...