
Sunday, August 19, 2018

Cultural evolution

From Wikipedia, the free encyclopedia

Cultural evolution is an evolutionary theory of social change. It follows from the definition of culture as "information capable of affecting individuals' behavior that they acquire from other members of their species through teaching, imitation and other forms of social transmission". Cultural evolution is the change of this information over time.

Cultural evolution, historically also known as sociocultural evolution, was originally developed in the 19th century by anthropologists building on Charles Darwin's research on evolution. Today, cultural evolution has become the basis for a growing field of scientific research in the social sciences, including anthropology, economics, psychology and organizational studies. Previously, it was believed that social change resulted from biological adaptations, but anthropologists now commonly accept that social changes arise from a combination of social, evolutionary and biological influences.[3][4]

There have been a number of different approaches to the study of cultural evolution, including dual inheritance theory, sociocultural evolution, memetics, cultural evolutionism and other variants on cultural selection theory. The approaches differ not just in the history of their development and discipline of origin but in how they conceptualize the process of cultural evolution and the assumptions, theories and methods that they apply to its study. In recent years these related theories have converged towards treating cultural evolution as a unified discipline in its own right.[5][6] In 2017, the Cultural Evolution Society held its inaugural meeting in Jena, Germany.

History

Aristotle thought that the development of cultural forms (such as poetry) stops when they reach maturity.[7] An 1873 article in Harper's New Monthly Magazine observed: "By the principle which Darwin describes as natural selection short words are gaining the advantage over long words, direct forms of expression are gaining the advantage over indirect, words of precise meaning the advantage of the ambiguous, and local idioms are everywhere in disadvantage".[8]

Cultural evolution, in the Darwinian sense of variation and selective inheritance, could be said to trace back to Darwin himself.[9] He argued for both customs (1874 p. 239) and "inherited habits" as contributing to human evolution, grounding both in the innate capacity for acquiring language.

Darwin's ideas, along with those of writers such as Comte and Quetelet, influenced a number of what would now be called social scientists in the late nineteenth and early twentieth centuries. Hodgson and Knudsen[12] single out David George Ritchie and Thorstein Veblen, crediting the former with anticipating both dual inheritance theory and universal Darwinism. Despite the stereotypical image of social Darwinism that developed later in the century, neither Ritchie nor Veblen was on the political right.

The early years of the 20th century, and particularly the First World War, saw biological concepts and metaphors shunned by most social sciences. Even uttering the word evolution carried "serious risk to one's intellectual reputation." Darwinian ideas were also in decline following the rediscovery of Mendelian genetics, but they were revived, especially by Fisher, Haldane and Wright, who developed the first population-genetic models and, with them, what became known as the modern synthesis.

Cultural evolutionary concepts, or even metaphors, revived more slowly. If there was one influential individual in the revival it was probably Donald T. Campbell. In 1960[13] he drew on Wright to make a parallel between genetic evolution and the "blind variation and selective retention" of creative ideas, work that was developed into a full theory of "socio-cultural evolution" in 1965[14] (a work that includes references to other works in the then-current revival of interest in the field). Campbell (1965, p. 26) was clear that he understood cultural evolution not as an analogy "from organic evolution per se, but rather from a general model for quasiteleological processes for which organic evolution is but one instance".

Others pursued more specific analogies, notably the anthropologist F. T. (Ted) Cloak, who argued in 1975[15] for the existence of learnt cultural instructions (cultural corpuscles or i-culture) resulting in material artefacts (m-culture) such as wheels.[16] The argument this introduced, over whether cultural evolution requires neurological instructions, continues to the present day.

Unilinear theory

In the 19th century cultural evolution was thought to follow a unilinear pattern whereby all cultures progressively develop over time. The underlying assumption was that cultural evolution itself led to the growth and development of civilization.[3][17][18]

In the 17th century, Thomas Hobbes declared that indigenous culture had "no arts, no letters, no society" and described life in that condition as "solitary, poor, nasty, brutish, and short." He, like other scholars of his time, reasoned that everything positive and esteemed resulted from the slow development away from this lowly state of being.[3]

Under the theory of unilinear cultural evolution, all societies and cultures develop on the same path. The first to present a general unilinear theory was Herbert Spencer. Spencer suggested that humans develop into more complex beings as culture progresses: people originally lived in "undifferentiated hordes", and culture then progresses to the point where civilization develops hierarchies. The concept behind unilinear theory is that the steady accumulation of knowledge and culture leads to the separation of the various modern-day sciences and the build-up of cultural norms present in modern-day society.[3][17]

In his book Ancient Society (1877), Lewis H. Morgan labels seven differing stages of human culture: lower, middle, and upper savagery; lower, middle, and upper barbarism; and civilization. He justifies this classification by referencing societies whose cultural traits resembled those of each stage in the cultural progression. Morgan gave no example of lower savagery, as even at the time of writing few examples remained of this cultural type. At the time Morgan expounded his theory, his work was highly respected and became a foundation for much of the anthropological study that was to follow.

Cultural particularism

Widespread condemnation of unilinear theory began in the late 19th century. Unilinear cultural evolution implicitly assumes that culture was born out of the United States and Western Europe. That was seen by many to be racist, as it assumed that some individuals and cultures were more evolved than others.[3]

Franz Boas, a German-born anthropologist, was the instigator of the movement known as 'cultural particularism', in which the emphasis shifted to a multilinear approach to cultural evolution. This differed from the previously favoured unilinear approach in that cultures were no longer compared, but assessed on their own terms. Boas, along with several of his pupils, notably A.L. Kroeber, Ruth Benedict and Margaret Mead, changed the focus of anthropological research so that instead of generalizing about cultures, the attention was now on collecting empirical evidence of how individual cultures change and develop.[3]

Multilinear theory

Cultural particularism dominated popular thought for the first half of the 20th century before American anthropologists, including Leslie A. White, Julian H. Steward, Marshall D. Sahlins, and Elman R. Service, revived the debate on cultural evolution. These theorists were the first to introduce the idea of multilinear cultural evolution.[3]

Under multilinear theory, there are no fixed stages (as in unilinear theory) of cultural development. Instead, there are several stages of differing lengths and forms. Although individual cultures develop differently and cultural evolution proceeds differently in each, multilinear theory acknowledges that cultures and societies do tend to develop and move forward.[3][19]

Leslie A. White focused on the idea that different cultures harness differing amounts of 'energy'; he argued that with greater energy, societies could possess greater levels of social differentiation. He rejected the separation of modern societies from primitive societies. In contrast, Steward argued, much as in Darwin's theory of evolution, that culture adapts to its surroundings. 'Evolution and Culture' by Sahlins and Service is an attempt to condense the views of White and Steward into a universal theory of multilinear evolution.[3]

Memetics

Richard Dawkins' 1976 book The Selfish Gene proposed the concept of the "meme", which is analogous to that of the gene. A meme is an idea-replicator that can reproduce itself by jumping from mind to mind as one human learns from another through imitation. Along with the "virus of the mind" image, the meme might be thought of as a "unit of culture" (an idea, belief, pattern of behaviour, etc.) which spreads among the individuals of a population. The variation and selection in the copying process enable Darwinian evolution among memeplexes and are therefore a candidate mechanism for cultural evolution. As memes are "selfish" in that they are "interested" only in their own success, they could well be in conflict with their biological host's genetic interests.

Consequently, a "meme's eye" view might account for certain evolved cultural traits, such as suicide terrorism, that are successful at spreading meme of martyrdom, but fatal to their hosts and often other people.

Evolutionary epistemology

"Evolutionary epistemology" can also refer to a theory that applies the concepts of biological evolution to the growth of human knowledge and argues that units of knowledge themselves, particularly scientific theories, evolve according to selection. In that case, a theory, like the germ theory of disease, becomes more or less credible according to changes in the body of knowledge surrounding it.

Evolutionary epistemology is a naturalistic approach to epistemology, which emphasizes the importance of natural selection in two primary roles. In the first role, selection is the generator and maintainer of the reliability of our senses and cognitive mechanisms, as well as the "fit" between those mechanisms and the world. In the second role, trial and error learning and the evolution of scientific theories are construed as selection processes.

One of the hallmarks of evolutionary epistemology is the notion that empirical testing alone does not justify the pragmatic value of scientific theories, but rather that social and methodological processes select those theories with the closest "fit" to a given problem. The mere fact that a theory has survived the most rigorous empirical tests available does not, in the calculus of probability, predict its ability to survive future testing. Karl Popper used Newtonian physics as an example of a body of theories so thoroughly confirmed by testing as to be considered unassailable, yet it was nevertheless overturned by Einstein's bold insights into the nature of space-time. For the evolutionary epistemologist, all theories are true only provisionally, regardless of the degree of empirical testing they have survived.

Popper is considered by many to have given evolutionary epistemology its first comprehensive treatment, but Donald T. Campbell had coined the phrase in 1974.[20]

Dual Inheritance Theory

Taken from the main page:

Dual inheritance theory (DIT), also known as gene–culture coevolution or biocultural evolution, was developed in the 1960s through early 1980s to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. Genes and culture continually interact in a feedback loop: changes in genes can lead to changes in culture, which can then influence genetic selection, and vice versa. One of the theory's central claims is that culture evolves partly through a Darwinian selection process, which dual inheritance theorists often describe by analogy to genetic evolution.

Criticism and controversy

As a relatively new and growing scientific field, cultural evolution is undergoing much formative debate. Some of the prominent debates revolve around Universal Darwinism,[14][21] dual inheritance theory,[22] and memetics.[23][24][25][26]

More recently, cultural evolution has drawn multidisciplinary interest, with movement towards a unified view spanning the natural and social sciences. There remain accusations of biological reductionism, as opposed to cultural naturalism, and scientific efforts are often mistakenly associated with Social Darwinism. However, useful parallels between biological and social evolution can still be found.[27]

Criticism of historic approaches to cultural evolution

Cultural evolution has been criticized throughout the two centuries over which it developed into the form it holds today. Morgan's theory of evolution implies that all cultures follow the same basic pattern. Human culture is not linear: different cultures develop in different directions and at differing paces, and it is neither satisfactory nor productive to assume that cultures develop in the same way.[28]

A further key critique of cultural evolutionism is what is known as "armchair anthropology". The name results from the fact that many of the anthropologists advancing theories had not seen first hand the cultures they were studying. The research and data collection were carried out by explorers and missionaries rather than by the anthropologists themselves. Edward Tylor was the epitome of that and did very little of his own research.[25][28] Cultural evolution is also criticized for being ethnocentric: cultures are still seen as attempting to emulate Western civilization. Under ethnocentricity, primitive societies are said to be not yet at the cultural level of other Western societies.[28][29]

Much of the criticism aimed at cultural evolution is focused on the unilinear approach to social change. Broadly speaking, in the second half of the 20th century the criticisms of cultural evolution were answered by multilinear theory. Ethnocentricity, for example, is more prevalent under unilinear theory.[28][25][29]

Some recent approaches, such as Dual Inheritance Theory, make use of empirical methods including psychological and animal studies, field site research, and computational models.

Dual inheritance theory

From Wikipedia, the free encyclopedia

Dual inheritance theory (DIT), also known as gene–culture coevolution or biocultural evolution, was developed in the 1960s through early 1980s to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. Genes and culture continually interact in a feedback loop: changes in genes can lead to changes in culture, which can then influence genetic selection, and vice versa. One of the theory's central claims is that culture evolves partly through a Darwinian selection process, which dual inheritance theorists often describe by analogy to genetic evolution.

'Culture', in this context, is defined as 'socially learned behavior', and 'social learning' is defined as copying behaviors observed in others or acquiring behaviors through being taught by others. Most of the modelling done in the field relies on the first dynamic (copying), though it can be extended to teaching. Social learning at its simplest involves blind copying of behaviors from a model (someone observed behaving), though it is also understood to have many potential biases, including success bias (copying from those who are perceived to be better off), status bias (copying from those with higher status), homophily (copying from those most like ourselves), conformist bias (disproportionately picking up behaviors that more people are performing), etc. Understanding that social learning is a system of pattern replication, and that different socially learned cultural variants have different rates of survival, sets up, by definition, an evolutionary structure: cultural evolution.[4]

Because genetic evolution is relatively well understood, most of DIT examines cultural evolution and the interactions between cultural evolution and genetic evolution.

Theoretical basis

DIT holds that genetic and cultural evolution interacted in the evolution of Homo sapiens. DIT recognizes that the natural selection of genotypes is an important component of the evolution of human behavior and that cultural traits can be constrained by genetic imperatives. However, DIT also recognizes that genetic evolution has endowed the human species with a parallel evolutionary process of cultural evolution. DIT makes three main claims:[5]

Culture capacities are adaptations

The human capacity to store and transmit culture arose from genetically evolved psychological mechanisms. This implies that at some point during the evolution of the human species a type of social learning leading to cumulative cultural evolution was evolutionarily advantageous.

Culture evolves

Social learning processes give rise to cultural evolution. Cultural traits are transmitted differently from genetic traits and, therefore, result in different population-level effects on behavioral variation.

Genes and culture co-evolve

Cultural traits alter the social and physical environments under which genetic selection operates. For example, the cultural adoptions of agriculture and dairying have, in humans, caused genetic selection for the ability to digest starch and lactose, respectively.[6][7][8][9][10][11] As another example, it is likely that once culture became adaptive, genetic selection caused a refinement of the cognitive architecture that stores and transmits cultural information. This refinement may have further influenced the way culture is stored and the biases that govern its transmission.

DIT also predicts that, under certain situations, cultural evolution may select for traits that are genetically maladaptive. An example of this is the demographic transition, which describes the fall of birth rates within industrialized societies. Dual inheritance theorists hypothesize that the demographic transition may be a result of a prestige bias, where individuals that forgo reproduction to gain more influence in industrial societies are more likely to be chosen as cultural models.[12][13]

View of culture

People have defined the word "culture" to describe a large set of different phenomena.[14][15] A definition that sums up what is meant by "culture" in DIT is:
Culture is socially learned information stored in individuals' brains that is capable of affecting behavior.[16][17]
This view of culture emphasizes population thinking by focusing on the process by which culture is generated and maintained. It also views culture as a dynamic property of individuals, as opposed to a view of culture as a superorganic entity to which individuals must conform.[18] This view's main advantage is that it connects individual-level processes to population-level outcomes.[19]

Genetic influence on cultural evolution

Genes affect cultural evolution via psychological predispositions on cultural learning.[20] Genes encode much of the information needed to form the human brain. Genes constrain the brain's structure and, hence, the ability of the brain to acquire and store culture. Genes may also endow individuals with certain types of transmission bias (described below).

Cultural influences on genetic evolution

Culture can profoundly influence gene frequencies in a population.

Lactase persistence

One of the best known examples is the prevalence of the genotype for adult lactose absorption in human populations, such as Northern Europeans and some African societies, with a long history of raising cattle for milk. Until around 7,500 years ago,[21] lactase production stopped shortly after weaning,[22] and in societies which did not develop dairying, such as East Asians and Amerindians, this is still true today.[23][24] In areas with lactase persistence, it is believed that the domestication of animals made milk available to adults, so that strong selection for lactase persistence could occur;[21][25] in a Scandinavian population the estimated selection coefficient was 0.09–0.19.[25] This implies that the cultural practice of raising cattle first for meat and later for milk led to selection for genetic traits for lactose digestion.[26] Recently, analysis of natural selection on the human genome suggests that civilization has accelerated genetic change in humans over the past 10,000 years.[27]
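To give a rough sense of what a selection coefficient of this size means, the following is a minimal sketch rather than a model from the cited studies: it assumes a simple one-locus haploid model in which the lactase-persistence allele has relative fitness 1 + s, and the 5% starting frequency is purely illustrative.

```python
# Minimal illustrative sketch (assumptions noted above, not taken from the cited studies):
# iterate the standard haploid selection recursion p' = p(1 + s) / (1 + s*p)
# for selection coefficients in the range estimated for lactase persistence.

def generations_to_reach(p0, target, s):
    """Count generations for the favoured allele to rise from frequency p0 to target."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + s * p)   # frequency after one generation of selection
        gens += 1
    return gens

for s in (0.09, 0.19):
    gens = generations_to_reach(0.05, 0.5, s)
    print(f"s = {s}: about {gens} generations for the allele to go from 5% to 50%")
```

Under these assumptions the allele climbs from 5% to 50% in roughly 17 to 35 generations, i.e. within several centuries to a millennium, which is why coefficients of this size are regarded as strong selection on a human timescale.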

Food processing

Culture has driven changes to the human digestive system, making many digestive organs, such as our teeth or stomach, smaller than expected for primates of a similar size,[28] and this has been cited as one of the reasons why humans have such large brains compared to other great apes.[29][30] This is due to food processing. Early examples of food processing include pounding, marinating and most notably cooking. Pounding meat breaks down the muscle fibres, taking away some of the work from the mouth, teeth and jaw.[31][32] Marinating emulates the action of the stomach with high acid levels. Cooking partially breaks down food, making it more easily digestible. Food enters the body effectively partly digested, and as such food processing reduces the work that the digestive system has to do. This means that there is selection for smaller digestive organs, as the tissue is energetically expensive;[28] those with smaller digestive organs can process their food at a lower energetic cost than those with larger organs.[33] Cooking is notable because the energy available from food increases when it is cooked, which also means less time is spent looking for food.[29][34][35]

Humans living on cooked diets spend only a fraction of their day chewing compared to other extant primates living on raw diets. American girls and boys spend on average 8 and 7 percent of their day chewing, respectively, compared to chimpanzees, who spend more than 6 hours a day chewing.[36] This frees up time which can be used for hunting. A raw diet constrains hunting, since time spent hunting is time not spent eating and chewing plant material, but cooking reduces the time required to obtain the day's energy requirements, allowing for more subsistence activities.[37] The digestibility of cooked carbohydrates is approximately 30% higher on average than that of uncooked carbohydrates.[34][38] This increased energy intake, more free time and savings made on tissue used in the digestive system allowed for the selection of genes for larger brain size.

Despite its benefits, brain tissue requires a large amount of calories, so a main constraint on selection for larger brains is calorie intake. A greater calorie intake can support greater quantities of brain tissue. This is argued to explain why human brains can be much larger than those of other apes, since humans are the only ape to engage in food processing.[29] The cooking of food has influenced genes to the extent that, research suggests, humans cannot live without cooking.[39][29] A study of 513 individuals consuming long-term raw diets found that as the percentage of their diet made up of raw food and/or the length of time they had been on a raw diet increased, their BMI decreased.[39] This is despite access to many non-thermal processing methods, such as grinding, pounding or heating to 48 °C (118 °F).[39] With approximately 86 billion neurons in the human brain and a 60–70 kg body mass, an exclusively raw diet close to that of extant primates would not be viable: when modelled, it is argued that it would require an infeasible level of more than nine hours of feeding every day.[29] However, this is contested, with alternative modelling showing that enough calories could be obtained within 5–6 hours per day.[40]

Some scientists and anthropologists point to evidence that brain size in the Homo lineage started to increase well before the advent of cooking, due to increased consumption of meat,[28][40][41] and that basic food processing (slicing) accounts for the size reduction in organs related to chewing.[42] Cornélio et al. argue that improving cooperative abilities and a shift in diet towards more meat and seeds improved foraging and hunting efficiency, and that it is this which allowed for the brain expansion, independent of cooking, which they argue came much later as a consequence of the complex cognition that developed.[40] Yet this is still an example of a cultural shift in diet and the resulting genetic evolution. Further criticism comes from the controversy over the available archaeological evidence. Some claim there is a lack of evidence of fire control when brain sizes first started expanding.[40][43] Wrangham argues that anatomical evidence from around the time of the origin of Homo erectus (1.8 million years ago) indicates that the control of fire, and hence cooking, had occurred.[34] At this time, the largest reductions in tooth size in the entirety of human evolution occurred, indicating that softer foods became prevalent in the diet. Also at this time there was a narrowing of the pelvis, indicating a smaller gut, and there is evidence of a loss of the ability to climb, which Wrangham argues indicates the control of fire, since sleeping on the ground needs fire to ward off predators.[44]

The proposed increases in brain size from food processing would have led to a greater mental capacity for further cultural innovation in food processing, which would have increased digestive efficiency further, providing more energy for further gains in brain size.[45] This positive feedback loop is argued to have led to the rapid brain size increases seen in the Homo lineage.[46][40]

Mechanisms of cultural evolution

In DIT, the evolution and maintenance of cultures is described by five major mechanisms: natural selection of cultural variants, random variation, cultural drift, guided variation and transmission bias.

Natural selection

Cultural differences among individuals can lead to differential survival of individuals. The patterns of this selective process depend on transmission biases and can result in behavior that is more adaptive to a given environment.

Random variation

Random variation arises from errors in the learning, display or recall of cultural information, and is roughly analogous to the process of mutation in genetic evolution.

Cultural drift

Cultural drift is a process roughly analogous to genetic drift in evolutionary biology.[47][48][49] In cultural drift, the frequency of cultural traits in a population may be subject to random fluctuations due to chance variations in which traits are observed and transmitted (sometimes called "sampling error").[50] These fluctuations might cause cultural variants to disappear from a population. This effect should be especially strong in small populations.[51] A model by Hahn and Bentley shows that cultural drift gives a reasonably good approximation to changes in the popularity of American baby names.[50] Drift processes have also been suggested to explain changes in archaeological pottery and in technology patent applications.[49] Changes in the songs of songbirds are also thought to arise from drift processes, where distinct dialects in different groups occur due to errors in song production and acquisition by successive generations.[52] Cultural drift was also observed in an early computer model of cultural evolution.[53]
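As a rough illustration of the sampling-error mechanism described above, the following is a minimal sketch (the model and its parameters are illustrative assumptions, not taken from the cited studies) of neutral cultural drift: each generation, every individual adopts the variant of a randomly chosen member of the previous generation, so variants are lost by chance alone, and more quickly in small populations.

```python
import random

# Minimal illustrative sketch of neutral cultural drift (unbiased copying),
# analogous to a Wright-Fisher model: no variant has any advantage, yet the
# population eventually fixes on a single variant through sampling error.

def generations_until_fixation(pop_size, n_variants, seed=0):
    rng = random.Random(seed)
    population = [i % n_variants for i in range(pop_size)]  # variants start evenly spread
    generations = 0
    while len(set(population)) > 1:          # run until only one variant remains
        population = [rng.choice(population) for _ in range(pop_size)]
        generations += 1
    return generations

for n in (20, 200, 2000):
    print(f"population {n}: one variant fixed after {generations_until_fixation(n, 5)} generations")
```

With these purely neutral rules, the smallest population typically fixes on a single variant within a few dozen generations, while the largest takes far longer, mirroring the claim that drift is strongest in small populations.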

Guided variation

Cultural traits may be gained in a population through the process of individual learning. Once an individual learns a novel trait, it can be transmitted to other members of the population. The process of guided variation depends on an adaptive standard that determines what cultural variants are learned.

Biased transmission

Understanding the different ways that cultural traits can be transmitted between individuals has been an important part of DIT research since the 1970s.[54][55] Transmission biases occur when some cultural variants are favored over others during the process of cultural transmission.[56] Boyd and Richerson (1985)[56] defined and analytically modeled a number of possible transmission biases. The list of biases has been refined over the years, especially by Henrich and McElreath.[57]

Content bias

Content biases result from situations where some aspect of a cultural variant's content makes it more likely to be adopted.[58] Content biases can result from genetic preferences, preferences determined by existing cultural traits, or a combination of the two. For example, food preferences can result from genetic preferences for sugary or fatty foods and from socially learned eating practices and taboos.[58] Content biases are sometimes called "direct biases."[56]

Context bias

Context biases result from individuals using clues about the social structure of their population to determine what cultural variants to adopt. This determination is made without reference to the content of the variant. There are two major categories of context biases: model-based biases, and frequency-dependent biases.

Model-based biases

Model-based biases result when an individual is biased to choose a particular "cultural model" to imitate. There are four major categories of model-based biases: prestige bias, skill bias, success bias, and similarity bias.[5][59] A "prestige bias" results when individuals are more likely to imitate cultural models that are seen as having more prestige. A measure of prestige could be the amount of deference shown to a potential cultural model by other individuals. A "skill bias" results when individuals can directly observe different cultural models performing a learned skill and are more likely to imitate cultural models that perform better at the specific skill. A "success bias" results from individuals preferentially imitating cultural models that they determine are most generally successful (as opposed to successful at a specific skill, as in the skill bias). A "similarity bias" results when individuals are more likely to imitate cultural models that are perceived as being similar to the individual based on specific traits.

Frequency-dependent biases

Frequency-dependent biases result when an individual is biased to choose particular cultural variants based on their perceived frequency in the population. The most explored frequency-dependent bias is the "conformity bias." Conformity biases result when individuals attempt to copy the mean or the mode cultural variant in the population. Another possible frequency-dependent bias is the "rarity bias." The rarity bias results when individuals preferentially choose cultural variants that are less common in the population. The rarity bias is also sometimes called a "nonconformist" or "anti-conformist" bias.
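A minimal sketch of how conformist transmission can be written down, using the standard Boyd and Richerson style recursion p' = p + D p(1 - p)(2p - 1); the choice of D and the starting frequencies below are illustrative assumptions, not values from the sources cited above.

```python
# Minimal illustrative sketch of conformist (frequency-dependent) bias:
# p is the frequency of one cultural variant and D (0 < D <= 1) is the
# strength of the conformity bias. Variants starting above 50% are pushed
# toward fixation; variants starting below 50% are pushed toward loss.

def conformist_step(p, D=0.3):
    return p + D * p * (1 - p) * (2 * p - 1)

for p0 in (0.45, 0.55):
    p = p0
    for _ in range(30):
        p = conformist_step(p)
    print(f"starting frequency {p0:.2f}: after 30 generations p = {p:.3f}")
```

The same recursion with a negative D behaves like the "rarity" or anti-conformist bias described above, since it then pushes common variants down and rare variants up.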

Social learning and cumulative cultural evolution

In DIT, the evolution of culture is dependent on the evolution of social learning. Analytic models show that social learning becomes evolutionarily beneficial when the environment changes frequently enough that genetic inheritance cannot track the changes, but not so fast that individual learning is more efficient.[60] For environments that have very little variability, social learning is not needed, since genes can adapt fast enough to the changes that occur and innate behaviour is able to deal with the constant environment.[61] In fast-changing environments cultural learning would not be useful, because what the previous generation knew is now outdated and will provide no benefit in the changed environment; hence individual learning is more beneficial. It is only in moderately changing environments that cultural learning becomes useful, since each generation shares a mostly similar environment but genes have insufficient time to track changes in the environment.[62]

While other species have social learning, and thus some level of culture, only humans, some birds and chimpanzees are known to have cumulative culture.[63] Boyd and Richerson argue that the evolution of cumulative culture depends on observational learning and is uncommon in other species because it is ineffective when it is rare in a population. They propose that the environmental changes occurring in the Pleistocene may have provided the right environmental conditions.[62] Michael Tomasello argues that cumulative cultural evolution results from a ratchet effect that began when humans developed the cognitive architecture to understand others as mental agents.[64] Furthermore, Tomasello proposed in the 1980s that there are some disparities between the observational learning mechanisms found in humans and great apes, which go some way towards explaining the observable difference between great ape traditions and human types of culture.

Cultural group selection

Although group selection is commonly thought to be nonexistent or unimportant in genetic evolution,[65][66][67] DIT predicts that, due to the nature of cultural inheritance, it may be an important force in cultural evolution. Group selection occurs in cultural evolution because conformist biases make it difficult for novel cultural traits to spread through a population (see above section on transmission biases). Conformist bias also helps maintain variation between groups. These two properties, rare in genetic transmission, are necessary for group selection to operate.[68] Based on an earlier model by Cavalli-Sforza and Feldman,[69] Boyd and Richerson show that conformist biases are almost inevitable when traits spread through social learning,[70] implying that group selection is common in cultural evolution. Analysis of small groups in New Guinea implies that cultural group selection might be a good explanation for slowly changing aspects of social structure, but not for rapidly changing fads.[71] The ability of cultural evolution to maintain intergroup diversity is what allows for the study of cultural phylogenetics.[72]

Historical development

The idea that human cultures undergo a similar evolutionary process to genetic evolution goes back at least to Darwin.[73] In the 1960s, Donald T. Campbell published some of the first theoretical work that adapted principles of evolutionary theory to the evolution of cultures.[74] In 1976, two developments in cultural evolutionary theory set the stage for DIT. In that year Richard Dawkins's The Selfish Gene introduced ideas of cultural evolution to a popular audience. Although it became one of the best-selling science books of all time, its lack of mathematical rigor meant it had little effect on the development of DIT. Also in 1976, geneticists Marcus Feldman and Luigi Luca Cavalli-Sforza published the first dynamic models of gene–culture coevolution.[75] These models were to form the basis for subsequent work on DIT, heralded by the publication of three seminal books in the 1980s.

The first was Charles Lumsden and E.O. Wilson's Genes, Mind and Culture (1981).[76] This book outlined a series of mathematical models of how genetic evolution might favor the selection of cultural traits and how cultural traits might, in turn, affect the speed of genetic evolution. While it was the first book published describing how genes and culture might coevolve, it had relatively little effect on the further development of DIT.[77] Some critics felt that its models depended too heavily on genetic mechanisms at the expense of cultural mechanisms.[78] Controversy surrounding Wilson's sociobiological theories may also have decreased the lasting effect of this book.[77]

The second 1981 book was Cavalli-Sforza and Feldman's Cultural Transmission and Evolution: A Quantitative Approach.[48] Borrowing heavily from population genetics and epidemiology, this book built a mathematical theory concerning the spread of cultural traits. It describes the evolutionary implications of vertical transmission, passing cultural traits from parents to offspring; oblique transmission, passing cultural traits from any member of an older generation to a younger generation; and horizontal transmission, passing traits between members of the same population.

The next significant DIT publication was Robert Boyd and Peter Richerson's 1985 Culture and the Evolutionary Process.[56] This book presents the now-standard mathematical models of the evolution of social learning under different environmental conditions, the population effects of social learning, various forces of selection on cultural learning rules, different forms of biased transmission and their population-level effects, and conflicts between cultural and genetic evolution. The book's conclusion also outlined areas for future research that are still relevant today.[79]

Current and future research

In their 1985 book, Boyd and Richerson outlined an agenda for future DIT research. This agenda, outlined below, called for the development of both theoretical models and empirical research. DIT has since built a rich tradition of theoretical models over the past two decades.[80] However, there has not been a comparable level of empirical work.

In a 2006 interview Harvard biologist E. O. Wilson expressed disappointment at the little attention afforded to DIT:
"...for some reason I haven't fully fathomed, this most promising frontier of scientific research has attracted very few people and very little effort."[81]
Kevin Laland and Gillian Brown attribute this lack of attention to DIT's heavy reliance on formal modeling.
"In many ways the most complex and potentially rewarding of all approaches, [DIT], with its multiple processes and cerebral onslaught of sigmas and deltas, may appear too abstract to all but the most enthusiastic reader. Until such a time as the theoretical hieroglyphics can be translated into a respectable empirical science most observers will remain immune to its message."[82]
Economist Herbert Gintis disagrees with this critique, citing empirical work as well as more recent work using techniques from behavioral economics.[83] These behavioral economic techniques have been adapted to test predictions of cultural evolutionary models in laboratory settings[84][85][86] as well as studying differences in cooperation in fifteen small-scale societies in the field.[87]

Since one of the goals of DIT is to explain the distribution of human cultural traits, ethnographic and ethnologic techniques may also be useful for testing hypotheses stemming from DIT. Although findings from traditional ethnologic studies have been used to buttress DIT arguments,[88][89] thus far there has been little ethnographic fieldwork designed to explicitly test these hypotheses.[71][87][90]

Herb Gintis has named DIT one of the two major conceptual theories with potential for unifying the behavioral sciences, including economics, biology, anthropology, sociology, psychology and political science. Because it addresses both the genetic and cultural components of human inheritance, Gintis sees DIT models as providing the best explanations for the ultimate cause of human behavior and the best paradigm for integrating those disciplines with evolutionary theory.[91] In a review of competing evolutionary perspectives on human behavior, Laland and Brown see DIT as the best candidate for uniting the other evolutionary perspectives under one theoretical umbrella.[92]

Relation to other fields

Sociology and cultural anthropology

Two major topics of study in both sociology and cultural anthropology are human cultures and cultural variation. However, Dual Inheritance theorists charge that both disciplines too often treat culture as a static superorganic entity that dictates human behavior.[93][94] Cultures are defined by a suite of common traits shared by a large group of people. DIT theorists argue that this doesn't sufficiently explain variation in cultural traits at the individual level. By contrast, DIT models human culture at the individual level and views culture as the result of a dynamic evolutionary process at the population level.[93][95]

Human sociobiology and evolutionary psychology

Evolutionary psychologists study the evolved architecture of the human mind. They see it as composed of many different programs that process information, each with assumptions and procedures that were specialized by natural selection to solve a different adaptive problem faced by our hunter-gatherer ancestors (e.g., choosing mates, hunting, avoiding predators, cooperating, using aggression).[96] These evolved programs contain content-rich assumptions about how the world and other people work. As ideas are passed from mind to mind, they are changed by these evolved inference systems (much like messages get changed in a game of telephone). But the changes are not random. Evolved programs add and subtract information, reshaping the ideas in ways that make them more "intuitive", more memorable, and more attention-grabbing. In other words, "memes" (ideas) are not like genes. Genes are copied faithfully as they are replicated, but ideas are not. It's not just that ideas mutate every once in a while, like genes do. Ideas are transformed every time they are passed from mind to mind, because the sender's message is being interpreted by evolved inference systems in the receiver.[97][98] There is no necessary contradiction between evolutionary psychology and DIT, but evolutionary psychologists argue that the psychology implicit in many DIT models is too simple; evolved programs have a rich inferential structure not captured by the idea of a "content bias". They also argue that some of the phenomena DIT models attribute to cultural evolution are cases of "evoked culture"—situations in which different evolved programs are activated in different places, in response to cues in the environment.[99]

Human sociobiologists try to understand how maximizing genetic fitness, in either the modern era or past environments, can explain human behavior. When faced with a trait that seems maladaptive, some sociobiologists try to determine how the trait actually increases genetic fitness (maybe through kin selection or by speculating about early evolutionary environments). Dual inheritance theorists, in contrast, will consider a variety of genetic and cultural processes in addition to natural selection on genes.

Human behavioral ecology

Human behavioral ecology (HBE) and DIT have a relationship similar to that between ecology and evolutionary biology in the biological sciences: HBE is more concerned with ecological processes and DIT more focused on historical ones.[100] One difference is that human behavioral ecologists often assume that culture is a system that produces the most adaptive outcome in a given environment. This implies that similar behavioral traditions should be found in similar environments. However, this is not always the case. A study of African cultures showed that cultural history was a better predictor of cultural traits than local ecological conditions.[101]

Memetics

Memetics, which comes from the meme idea described in Dawkins's The Selfish Gene, is similar to DIT in that it treats culture as an evolutionary process that is distinct from genetic transmission. However, there are some philosophical differences between memetics and DIT.[102] One difference is that memetics focuses on the selection potential of discrete replicators (memes), whereas DIT allows for the transmission of both non-replicators and non-discrete cultural variants. DIT does not assume that replicators are necessary for cumulative adaptive evolution. DIT also more strongly emphasizes the role of genetic inheritance in shaping the capacity for cultural evolution. But perhaps the biggest difference is a difference in academic lineage: memetics as a label is more influential in popular culture than in academia. Critics of memetics argue that it lacks empirical support or is conceptually ill-founded, and question whether the memetic research program can succeed. Proponents point out that many cultural traits are discrete, and that many existing models of cultural inheritance assume discrete cultural units, and hence involve memes.[103]

Criticisms

A number of criticisms of DIT have been put forward.[104][105][106] From some points of view, use of the term 'dual inheritance' to refer to both what is transmitted genetically and what is transmitted culturally is technically misleading.[citation needed] Many opponents argue that horizontal transmission of ideas is so "different" from the typical vertical transmission (reproduction) of genetic evolution that it cannot be considered evolution. However, 1) even genetic evolution uses non-vertical transmission, through the environmental alteration of the genome during life by acquired circumstance: epigenetics; and 2) genetic evolution is also affected by direct horizontal transmission between separate species of plants and strains of bacteria: horizontal gene transfer. Other critics argue that there can be no "dual" inheritance without cultural inheritance being "sequestered" by the biotic genome.[citation needed] Evidence for this process is scarce and controversial. Why critics demand this, however, is unclear, as it refutes none of the central claims laid down by proponents of DIT.

More serious criticisms of DIT arise from the choice of Darwinian selection as an explanatory framework for culture. Some argue that cultural evolution does not possess the algorithmic structure of a process that can be modeled in a Darwinian framework as characterized by John von Neumann[107] and used by John Holland to design the genetic algorithm.[108] On this view, forcing culture into a Darwinian framework gives a distorted picture of the process for several reasons. First, some argue that Darwinian selection only works as an explanatory framework when variation is randomly generated.[citation needed] To the extent that transmission biases are operative in culture, they mitigate the effect of Darwinian change, i.e. change in the distribution of variants over generations of exposure to selective pressures.[citation needed] Second, since acquired change can accumulate orders of magnitude faster than inherited change, if it is not regularly discarded each generation it quickly overwhelms the population-level mechanism of change identified by Darwin; it 'swamps the phylogenetic signal'.[citation needed] DIT proponents reply that the theory includes a very important role for decision-making forces.[109] As a point of history, Darwin had a rather sophisticated theory of human cultural evolution that depended on natural selection "to a subordinate degree" compared to "laws, customs, and traditions" supported by public opinion.[110] When critics complain that DIT is too "Darwinian", they are falsely claiming that it is too dependent on ideas related to the neo-Darwinian synthesis, which dropped Darwin's own belief that the inheritance of acquired variation is important and ignored his ideas on cultural evolution in humans.[111]

Another discord in opinion stems from DIT opponents' assertion that there exists some "creative force" that is applied to each idea as it is received and before it is passed on, and that this agency is so powerful that it can be stronger than the selective system of other individuals assessing what to teach and whether your idea has merit.[citation needed] But if this criticism were valid, it would be comparatively much easier to argue for an unpopular or incorrect concept than it actually is. In addition, nothing about DIT runs counter to the idea that an internally selective process (some would call it creativity) also determines the fitness of ideas received and sent. In fact, this decision-making is a large part of the territory embraced by DIT proponents, but it is poorly understood due to limitations in neurobiology.

Related criticisms of the effort to frame culture in Darwinian terms have been leveled by Richard Lewontin,[112] Niles Eldredge,[113] and Stuart Kauffman.

Saturday, August 18, 2018

Futures studies

From Wikipedia, the free encyclopedia

Moore's law is an example of futures studies; it is a statistical collection of past and present trends with the goal of accurately extrapolating future trends.
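As a minimal sketch of the kind of trend extrapolation described in that example (the data points below are rough, illustrative transistor counts, not an authoritative dataset), one can fit a straight line to the base-2 logarithm of a quantity versus time and read off a doubling period and a projection:

```python
import math

# Illustrative sketch of statistical trend extrapolation in the spirit of Moore's law.
# The year/count pairs are rough, order-of-magnitude figures used only as an example.
years  = [1971, 1980, 1990, 2000, 2010]
counts = [2.3e3, 3.0e4, 1.2e6, 4.2e7, 1.2e9]

xs = years
ys = [math.log2(c) for c in counts]          # work in doublings
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))       # least-squares slope
intercept = y_mean - slope * x_mean

print(f"implied doubling time: about {1 / slope:.1f} years")
print(f"trend extrapolated to 2020: about {2 ** (slope * 2020 + intercept):.1e} transistors")
```

The point of the example is the method, collecting past observations, fitting the pattern, and projecting it forward, together with the caveat (stressed by practitioners of the field) that such an extrapolation describes a possible future, not a guaranteed one.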

Futures studies (also called futurology) is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. In general, it can be considered a branch of the social sciences and parallel to the field of history. Futures studies (colloquially called "futures" by many of the field's practitioners) seeks to understand what is likely to continue and what could plausibly change. Part of the discipline thus seeks a systematic and pattern-based understanding of the past and present, and aims to determine the likelihood of future events and trends.

Unlike the physical sciences, where a narrower, more specified system is studied, futures studies concerns a much bigger and more complex world system.[3] Its methodology and knowledge are much less well established than those of the natural sciences or even social sciences like sociology and economics. There is debate as to whether the discipline is an art or a science, and it is sometimes described by scientists as a pseudoscience.[4][5]

Overview

Futures studies is an interdisciplinary field, studying past and present changes, and aggregating and analyzing both lay and professional strategies and opinions with respect to the future. It includes analyzing the sources, patterns, and causes of change and stability in an attempt to develop foresight and to map possible futures.[6] Around the world the field is variously referred to as futures studies, strategic foresight, futuristics, futures thinking, futuring, and futurology. Futures studies and strategic foresight are the academic field's most commonly used terms in the English-speaking world.

Foresight was the original term and was first used in this sense by H.G. Wells in 1932.[7] "Futurology" is a term common in encyclopedias, though it is used almost exclusively by nonpractitioners today, at least in the English-speaking world. "Futurology" is defined as the "study of the future."[8] The term was coined by German professor Ossip K. Flechtheim in the mid-1940s, who proposed it as a new branch of knowledge that would include a new science of probability. This term may have fallen from favor in recent decades because modern practitioners stress the importance of alternative and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures.[9]

Three factors usually distinguish futures studies from the research conducted by other disciplines (although all of these disciplines overlap, to differing degrees)[10]. First, futures studies often examines not only possible but also probable, preferable, and "wild card" futures. Second, futures studies typically attempts to gain a holistic or systemic view based on insights from a range of different disciplines, generally focusing on the STEEP[11] categories of Social, Technological, Economic, Environmental and Political. Third, futures studies challenges and unpacks the assumptions behind dominant and contending views of the future. The future thus is not empty but fraught with hidden assumptions. For example, many people expect the collapse of the Earth's ecosystem in the near future, while others believe the current ecosystem will survive indefinitely. A foresight approach would seek to analyze and highlight the assumptions underpinning such views.
As a field, futures studies expands on the research component by emphasizing the communication of a strategy and the actionable steps needed to implement the plan or plans leading to the preferable future. It is in this regard that futures studies evolves from an academic exercise to a more traditional business-like practice, looking to better prepare organizations for the future.[12]

Futures studies does not generally focus on short-term predictions, such as interest rates over the next business cycle, or the concerns of managers or investors with short-term time horizons. Most strategic planning, which develops operational plans for preferred futures with time horizons of one to three years, is also not considered futures. Plans and strategies with longer time horizons that specifically attempt to anticipate possible future events are definitely part of the field. As a rule, futures studies is generally concerned with changes of transformative impact, rather than those of an incremental or narrow scope.

The futures field also excludes those who make future predictions through professed supernatural means.

History

Origins

Sir Thomas More, originator of the 'Utopian' ideal.

Johan Galtung and Sohail Inayatullah[13] argue in Macrohistory and Macrohistorians that the search for grand patterns of social change goes all the way back to Ssu-Ma Chien (145–90 BC) and his theory of the cycles of virtue, although the work of Ibn Khaldun (1332–1406), such as The Muqaddimah,[14] would be an example that is perhaps more intelligible to modern sociology. Early western examples include Sir Thomas More’s “Utopia,” published in 1516 and based upon Plato’s “Republic,” in which a future society has overcome poverty and misery to create a perfect model for living. This work was so powerful that utopias have come to represent positive and fulfilling futures in which everyone’s needs are met.[15]

Some intellectual foundations of futures studies appeared in the mid-19th century. Auguste Comte, considered the father of scientific philosophy, was heavily influenced by the work of the utopian socialist Henri Saint-Simon, and his discussion of the metapatterns of social change presages futures studies as a scholarly dialogue.[16]

The first works that attempted to make systematic predictions about the future were written in the 18th century. Memoirs of the Twentieth Century, written by Samuel Madden in 1733, takes the form of a series of diplomatic letters written in 1997 and 1998 from British representatives in the foreign cities of Constantinople, Rome, Paris, and Moscow.[17] However, the technology of the 20th century is identical to that of Madden's own era; the focus is instead on the political and religious state of the world in the future. Madden went on to write The Reign of George VI, 1900 to 1925, where (in the context of the boom in canal construction at the time) he envisioned a large network of waterways that would radically transform patterns of living - "Villages grew into towns and towns became cities".[18]

In 1845, Scientific American, the oldest continuously published magazine in the U.S., began publishing articles about scientific and technological research, with a focus upon the future implications of such research. It would be followed in 1872 by the magazine Popular Science, which was aimed at a more general readership.[15]

The genre of science fiction became established towards the end of the 19th century, with notable writers, including Jules Verne and H. G. Wells, setting their stories in an imagined future world.

Early 20th Century

H. G. Wells first advocated for 'future studies' in a lecture delivered in 1902.

According to W. Warren Wagar, the founder of future studies was H. G. Wells. His Anticipations of the Reaction of Mechanical and Scientific Progress Upon Human Life and Thought: An Experiment in Prophecy was first serially published in The Fortnightly Review in 1901.[19] Anticipating what the world would be like in the year 2000, the book is interesting both for its hits (trains and cars resulting in the dispersion of population from cities to suburbs; moral restrictions declining as men and women seek greater sexual freedom; the defeat of German militarism, the existence of a European Union, and a world order maintained by "English-speaking peoples" based on the urban core between Chicago and New York[20]) and its misses (he did not expect successful aircraft before 1950, and averred that "my imagination refuses to see any sort of submarine doing anything but suffocate its crew and founder at sea").[21][22]

Moving from narrow technological predictions, Wells envisioned the eventual collapse of the capitalist world system after a series of destructive total wars. From this havoc would ultimately emerge a world of peace and plenty, controlled by competent technocrats.[19]

The work was a bestseller, and Wells was invited to deliver a lecture at the Royal Institution in 1902, entitled The Discovery of the Future. The lecture was well-received and was soon republished in book form. He advocated for the establishment of a new academic study of the future that would be grounded in scientific methodology rather than just speculation. He argued that a scientifically ordered vision of the future "will be just as certain, just as strictly science, and perhaps just as detailed as the picture that has been built up within the last hundred years to make the geological past." Although conscious of the difficulty in arriving at entirely accurate predictions, he thought that it would still be possible to arrive at a "working knowledge of things in the future".[19]

In his fictional works, Wells predicted the invention and use of the atomic bomb in The World Set Free (1914).[23] The Shape of Things to Come (1933) depicted the impending World War and cities destroyed by aerial bombardment.[24] He nevertheless continued to advocate for the establishment of a futures science. In a 1933 BBC broadcast he called for the establishment of "Departments and Professors of Foresight", foreshadowing the development of modern academic futures studies by approximately 40 years.[7]

At the beginning of the 20th century, futures works were often shaped by political forces and turmoil. The WWI era led to the adoption of futures thinking in institutions throughout Europe. The Russian Revolution led to the 1921 establishment of the Soviet Union's Gosplan, or State Planning Committee, which was active until the dissolution of the Soviet Union. Gosplan was responsible for economic planning and created plans in five-year increments to govern the economy. One of the first Soviet dissidents, Yevgeny Zamyatin, published the first dystopian novel, We, in 1921. The work of science fiction and political satire featured a future police state and was the first work censored by the Soviet censorship board, leading to Zamyatin's political exile.[15]

In the United States, President Hoover created the Research Committee on Social Trends, which produced a report in 1933. The head of the committee, William F. Ogburn, analyzed the past to chart trends and project those trends into the future, with a focus on technology. A similar technique was used during the Great Depression, with the addition of alternative futures and a set of likely outcomes, resulting in the creation of Social Security and the Tennessee Valley development project.[15]

The WWII era emphasized the growing need for foresight. The Nazis used strategic plans to unify and mobilize their society with a focus on creating a fascist utopia. This planning and the subsequent war forced global leaders to create their own strategic plans in response. The post-war era saw the creation of numerous nation states with complex political alliances and was further complicated by the introduction of nuclear power.

Project RAND was created in 1946 as a joint project between the United States Army Air Forces and the Douglas Aircraft Company, and was later incorporated as the non-profit RAND Corporation. Its objective was the study of the future of weapons and long-range planning to meet future threats. Its work has formed the basis of US strategy and policy in regard to nuclear weapons, the Cold War, and the space race.[15]

Mid-Century Emergence

Futures studies truly emerged as an academic discipline in the mid-1960s.[25] First-generation futurists included Herman Kahn, an American Cold War strategist for the RAND Corporation who wrote On Thermonuclear War (1960), Thinking About the Unthinkable (1962) and The Year 2000: A Framework for Speculation on the Next Thirty-Three Years (1967); Bertrand de Jouvenel, a French economist who founded Futuribles International in 1960; and Dennis Gabor, a Hungarian-British scientist who wrote Inventing the Future (1963) and The Mature Society: A View of the Future (1972).[16]

Future studies had a parallel origin with the birth of systems science in academia, and with the idea of national economic and political planning, most notably in France and the Soviet Union.[16][26] In the 1950s, the people of France were continuing to reconstruct their war-torn country. In the process, French scholars, philosophers, writers, and artists searched for what could constitute a more positive future for humanity. The Soviet Union similarly participated in postwar rebuilding, but did so in the context of an established national economic planning process, which also required a long-term, systemic statement of social goals. Future studies was therefore primarily engaged in national planning, and the construction of national symbols.

Rachel Carson, author of Silent Spring, which helped launch the environmental movement and a new direction for futures research.

By contrast, in the United States, futures studies as a discipline emerged from the successful application of the tools and perspectives of systems analysis, especially with regard to quartermastering the war-effort. The Society for General Systems Research, founded in 1955, sought to understand cybernetics and the practical application of systems sciences, greatly influencing the U.S. foresight community.[15] These differing origins account for an initial schism between futures studies in America and futures studies in Europe: U.S. practitioners focused on applied projects, quantitative tools and systems analysis, whereas Europeans preferred to investigate the long-range future of humanity and the Earth, what might constitute that future, what symbols and semantics might express it, and who might articulate these.[27][28]

By the 1960s, academics, philosophers, writers and artists across the globe had begun to explore enough future scenarios so as to fashion a common dialogue. Several of the most notable writers to emerge during this era include: sociologist Fred L. Polak, whose work Images of the Future (1961) discusses the importance of images to society's creation of the future; Marshall McLuhan, whose The Gutenberg Galaxy (1962) and Understanding Media: The Extensions of Man (1964) put forth his theories on how technologies change our cognitive understanding; and Rachel Carson, whose Silent Spring (1962) was hugely influential not only on futures studies but also on the creation of the environmental movement.[15]

Inventors such as Buckminster Fuller also began highlighting the effect technology might have on global trends as time progressed.

By the 1970s there was an obvious shift in the use and development of futures studies; its focus was no longer exclusive to governments and militaries. Instead, it embraced a wide array of technologies, social issues, and concerns. This discussion of the intersection of population growth, resource availability and use, economic growth, quality of life, and environmental sustainability – referred to as the "global problematique" – came to wide public attention with the publication of Limits to Growth, a study sponsored by the Club of Rome which detailed the results of a computer simulation of the future based on economic and population growth.[22] Public interest in the future was further enhanced by the publication of Alvin Toffler's bestseller Future Shock (1970), and its exploration of how great amounts of change can overwhelm people and create a social paralysis due to "information overload."[15]

Further development

International dialogue became institutionalized in the form of the World Futures Studies Federation (WFSF), founded in 1967, with the noted sociologist, Johan Galtung, serving as its first president. In the United States, the publisher Edward Cornish, concerned with these issues, started the World Future Society, an organization focused more on interested laypeople.

The first doctoral program on the Study of the Future was founded in 1969 at the University of Massachusetts by Christopher Dede and Billy Rojas. The next graduate program (a master's degree) was also founded by Christopher Dede, in 1975 at the University of Houston–Clear Lake.[29] Oliver Markley of SRI (now SRI International) was hired in 1978 to move the program in a more applied and professional direction. The program moved to the University of Houston in 2007, and the degree was renamed Foresight.[30] The program has remained focused on preparing professional futurists and providing high-quality foresight training for individuals and organizations in business, government, education, and non-profits.[31] In 1976, the M.A. Program in Public Policy in Alternative Futures at the University of Hawaii at Manoa was established.[32] The Hawaii program locates futures studies within a pedagogical space defined by neo-Marxism, critical political economic theory, and literary criticism. In the years following the foundation of these two programs, single courses in futures studies at all levels of education have proliferated, but complete programs remain rare. In 2012, the Finland Futures Research Centre started a master's degree programme in Futures Studies at Turku School of Economics, a business school which is part of the University of Turku in Turku, Finland.[33]

As a transdisciplinary field, futures studies attracts generalists. Its transdisciplinary nature can also cause problems: the field sometimes falls between the cracks of disciplinary boundaries, and it has had some difficulty achieving recognition within the traditional curricula of the sciences and the humanities. In contrast to "futures studies" at the undergraduate level, some graduate programs in strategic leadership or management offer masters or doctorate programs in "strategic foresight" for mid-career professionals, some even online. Nevertheless, comparatively few new PhDs graduate in futures studies each year.

The field currently faces the great challenge of creating a coherent conceptual framework, codified into a well-documented curriculum (or curricula) featuring widely accepted and consistent concepts and theoretical paradigms linked to quantitative and qualitative methods, exemplars of those research methods, and guidelines for their ethical and appropriate application within society. As an indication that previously disparate intellectual dialogues have in fact started converging into a recognizable discipline,[34] at least six solidly researched and well-accepted first attempts to synthesize a coherent framework for the field have appeared: Eleonora Masini's Why Futures Studies?,[35] James Dator's Advancing Futures Studies,[36] Ziauddin Sardar's Rescuing All of Our Futures,[37] Sohail Inayatullah's Questioning the Future,[38] Richard A. Slaughter's The Knowledge Base of Futures Studies,[39] a collection of essays by senior practitioners, and Wendell Bell's two-volume work, The Foundations of Futures Studies.[40]

Probability and predictability

Some aspects of the future, such as celestial mechanics, are highly predictable, and may even be described by relatively simple mathematical models. At present, however, science has yielded only a small minority of such "easy to predict" physical processes. Theories such as chaos theory, nonlinear science and standard evolutionary theory have allowed us to understand many complex systems as contingent (sensitively dependent on complex environmental conditions) and stochastic (random within constraints), making the vast majority of future events unpredictable in any specific case.
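To make the notion of sensitive dependence concrete, here is a minimal Python sketch, not drawn from the article itself, using the logistic map, a standard textbook example from chaos theory. Two trajectories that begin almost identically diverge completely within a few dozen steps, even though the governing rule is fully deterministic.

# Sensitive dependence on initial conditions, illustrated with the
# logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime
# (r = 4.0). A standard chaos-theory example, not a model from the article.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)   # "true" initial condition
b = logistic_trajectory(0.300001)   # measurement error of one part in a million

for t in range(0, 51, 10):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}  (gap {abs(a[t] - b[t]):.6f})")
# By roughly step 30 the two "futures" bear no resemblance to each other.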

Not surprisingly, the tension between predictability and unpredictability is a source of controversy and conflict among futures studies scholars and practitioners. Some argue that the future is essentially unpredictable, and that "the best way to predict the future is to create it." Others, such as Flechtheim, believe that advances in science, probability, modeling and statistics will allow us to continue to improve our understanding of probable futures, although this area presently remains less well developed than methods for exploring possible and preferable futures.

As an example, consider the process of electing the president of the United States. At one level we observe that any natural-born U.S. citizen over 35 may run for president, so this process may appear too unconstrained for useful prediction. Yet further investigation demonstrates that only certain public individuals (current and former presidents and vice presidents, senators, state governors, popular military commanders, mayors of very large cities, etc.) receive the appropriate "social credentials" that are historical prerequisites for election. Thus with a minimum of effort at formulating the problem for statistical prediction, a much reduced pool of candidates can be described, improving our probabilistic foresight. Applying further statistical intelligence to this problem, we can observe that in certain election prediction markets such as the Iowa Electronic Markets, reliable forecasts have been generated over long spans of time and conditions, with results superior to individual experts or polls. Such markets, which may be operated publicly or as an internal market, are just one of several promising frontiers in predictive futures research.
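As a hedged illustration of how such problem formulation sharpens a probability estimate, the following Python sketch is purely hypothetical: the population figure, the list of "credentialed" candidates, and the market price are made-up assumptions, used only to show the arithmetic of narrowing the reference class and of reading a prediction-market price as an implied probability.

# Hypothetical sketch: narrowing the candidate pool improves a naive
# probability estimate. All numbers and names below are illustrative
# assumptions, not real data.

eligible_population = 150_000_000          # assumed count of age-eligible citizens

credentialed_pool = [                      # hypothetical "socially credentialed" candidates
    "sitting vice president", "governor A", "governor B",
    "senator A", "senator B", "senator C", "big-city mayor",
]

# Naive model: every eligible citizen treated as equally likely to be elected.
p_naive = 1 / eligible_population

# Reformulated model: only credentialed candidates are realistically viable,
# and (for simplicity) each is treated as equally likely.
p_credentialed = 1 / len(credentialed_pool)

print(f"naive per-person probability:  {p_naive:.2e}")
print(f"within the credentialed pool:  {p_credentialed:.2f}")

# A prediction market refines this further: a contract paying $1 if a
# candidate wins and trading at $0.38 implies a market probability of
# roughly 0.38 for that candidate (assumed price for illustration).
market_price = 0.38
print(f"implied market probability:    {market_price:.2f}")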

From a complexity-theory viewpoint, however, such improvements in the predictability of individual events do not address the unpredictability inherent in dealing with entire systems, which emerge from the interaction among multiple individual events.

Futurology is sometimes described by scientists as pseudoscience.[4][5]

Methodologies

In terms of methodology, futures practitioners employ a wide range of approaches, models and methods, in both theory and practice, many of which are derived from or informed by other academic or professional disciplines,[1] including social sciences such as economics, psychology, sociology, religious studies, cultural studies, history, geography, and political science; physical and life sciences such as physics, chemistry, astronomy, and biology; mathematics, including statistics, game theory and econometrics; and applied disciplines such as engineering, computer science, and business management (particularly strategy).

The largest internationally peer-reviewed collection of futures research methods (1,300 pages) is Futures Research Methodology 3.0. For each of its 37 methods or groups of methods, the collection provides an executive overview of the method's history, a description of the method, its primary and alternative usages, strengths and weaknesses, uses in combination with other methods, and speculation about the future evolution of the method. Some also contain appendixes with applications, links to software, and sources for further information.

Given its unique objectives and material, the practice of futures studies only rarely features employment of the scientific method in the sense of controlled, repeatable and verifiable experiments with highly standardized methodologies. However, many futurists are informed by scientific techniques or work primarily within scientific domains. Borrowing from history, the futurist might project patterns observed in past civilizations upon present-day society to model what might happen in the future, or borrowing from technology, the futurist may model possible social and cultural responses to an emerging technology based on established principles of the diffusion of innovation. In short, the futures practitioner enjoys the synergies of an interdisciplinary laboratory.
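As a concrete, hedged illustration of the diffusion-of-innovation style of modeling mentioned above, the sketch below uses the Bass diffusion model, a standard formalization from the marketing-science literature rather than a method prescribed by this article; the market size and the innovation and imitation coefficients are illustrative assumptions.

# Minimal sketch of modeling technology adoption with the Bass diffusion
# model, one common formalization of diffusion-of-innovation ideas.
# Parameter values are illustrative assumptions, not estimates for any
# real technology.

def bass_adoption(m=1_000_000, p=0.03, q=0.38, periods=20):
    """Return cumulative adopters per period under the Bass model.

    m: total market potential; p: coefficient of innovation (external
    influence); q: coefficient of imitation (word of mouth).
    """
    cumulative = 0.0
    history = []
    for _ in range(periods):
        new_adopters = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new_adopters
        history.append(cumulative)
    return history

for year, total in enumerate(bass_adoption(), start=1):
    print(f"year {year:2d}: {total:10.0f} cumulative adopters")

The resulting S-shaped curve is what a futurist might use to reason about how quickly a hypothetical emerging technology could spread through a society, given assumed rates of external promotion and word-of-mouth imitation.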

As the plural term "futures" suggests, one of the fundamental assumptions in futures studies is that the future is plural, not singular.[2] That is, the future consists not of one inevitable future that is to be "predicted," but rather of multiple alternative futures of varying likelihood which may be derived and described, and about which it is impossible to say with certainty which one will occur. The primary effort in futures studies, then, is to identify and describe alternative futures in order to better understand the driving forces of the present or the structural dynamics of a particular subject or subjects. The exercise of identifying alternative futures includes collecting quantitative and qualitative data about the possibility, probability, and desirability of change. The plural term "futures" thus denotes both the rich variety of alternative futures that can be studied, including the subset of preferable (normative) futures, and the tenet that the future is many.

At present, the general futures studies model has been summarized as being concerned with "three Ps and a W": possible, probable, and preferable futures, plus wildcards, which are low-probability but high-impact events (positive or negative). Many futurists, however, do not use the wild card approach. Rather, they use a methodology called Emerging Issues Analysis, which searches for the drivers of change: issues likely to move from the unknown to the known, and from low impact to high impact.
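A minimal sketch of how the "three Ps and a W" taxonomy might be represented in practice is shown below; the scenario names, likelihood figures, and field names are invented for illustration and are not taken from any published framework.

# Toy sketch of tagging scenarios under the "three Ps and a W" scheme
# (possible, probable, preferable, wildcard). The scenarios, categories,
# and the dataclass itself are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    category: str        # "possible", "probable", "preferable", or "wildcard"
    likelihood: float    # rough subjective probability, 0..1
    impact: str          # e.g. "low", "medium", "high"

scenarios = [
    Scenario("Gradual automation of routine work",  "probable",   0.60, "medium"),
    Scenario("Universal access to low-cost energy", "preferable", 0.20, "high"),
    Scenario("Breakdown of a major supply chain",   "wildcard",   0.05, "high"),
]

# Wildcards are exactly the low-likelihood, high-impact entries.
wildcards = [s for s in scenarios if s.likelihood < 0.1 and s.impact == "high"]
print([s.name for s in wildcards])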

In terms of technique, futures practitioners originally concentrated on extrapolating present technological, economic or social trends, or on attempting to predict future trends. Over time, the discipline has come to put more and more focus on the examination of social systems and uncertainties, to the end of articulating scenarios. The practice of scenario development facilitates the examination of worldviews and assumptions through the causal layered analysis method (and others), the creation of preferred visions of the future, and the use of exercises such as backcasting to connect the present with alternative futures. Apart from extrapolation and scenarios, many dozens of methods and techniques are used in futures research (see below).

The general practice of futures studies also sometimes includes the articulation of normative or preferred futures, and a major thread of practice involves connecting both extrapolated (exploratory) and normative research to assist individuals and organizations to model preferred futures amid shifting social changes. Practitioners use varying proportions of collaboration, creativity and research to derive and define alternative futures, and to the degree that a “preferred” future might be sought, especially in an organizational context, techniques may also be deployed to develop plans or strategies for directed future shaping or implementation of a preferred future.

While some futurists are not concerned with assigning probability to future scenarios, other futurists find probabilities useful in certain situations, such as when probabilities stimulate thinking about scenarios within organizations [3]. When dealing with the three Ps and a W model, estimates of probability are involved with two of the four central concerns (discerning and classifying both probable and wildcard events), while considering the range of possible futures, recognizing the plurality of existing alternative futures, characterizing and attempting to resolve normative disagreements on the future, and envisioning and creating preferred futures are other major areas of scholarship. Most estimates of probability in futures studies are normative and qualitative, though significant progress on statistical and quantitative methods (technology and information growth curves, cliometrics, predictive psychology, prediction markets, crowdvoting forecasts,[31][better source needed] etc.) has been made in recent decades.

Futures techniques

Futures techniques or methodologies may be viewed as "frameworks for making sense of data generated by structured processes to think about the future".[41] There is no single set of methods that is appropriate for all futures research. Different futures researchers intentionally or unintentionally promote the use of favored techniques over a more structured approach. Selection of methods for futures research projects has so far been dominated by the intuition and insight of practitioners; a more balanced selection of techniques can be achieved by acknowledging foresight as a process and by becoming familiar with the fundamental attributes of the most commonly used methods.[42]
Futurists use a diverse range of forecasting and foresight methods.

Shaping alternative futures

Futurists use scenarios – alternative possible futures – as an important tool. To some extent, people can determine what they consider probable or desirable using qualitative and quantitative methods. By looking at a variety of possibilities one comes closer to shaping the future, rather than merely predicting it. Shaping alternative futures starts by establishing a number of scenarios. Setting up scenarios takes place as a process with many stages. One of those stages involves the study of trends. A trend persists long-term and long-range; it affects many societal groups, grows slowly and appears to have a profound basis. In contrast, a fad operates in the short term, shows the vagaries of fashion, affects particular societal groups, and spreads quickly but superficially.

Sample predicted futures range from ecological catastrophes, through a utopian future in which the poorest human being lives in what present-day observers would regard as wealth and comfort, through the transformation of humanity into a posthuman life-form, to the destruction of all life on Earth in, say, a nanotechnological disaster.

Futurists have a decidedly mixed reputation and a patchy track record at successful prediction. For reasons of convenience, they often extrapolate present technical and societal trends and assume they will develop at the same rate into the future; but technical progress and social upheavals, in reality, take place in fits and starts and in different areas at different rates.

Many 1950s futurists predicted commonplace space tourism by the year 2000, but ignored the possibilities of ubiquitous, cheap computers. On the other hand, many forecasts have portrayed the future with some degree of accuracy. Current futurists often present multiple scenarios that help their audience envision what "may" occur instead of merely "predicting the future". They claim that understanding potential scenarios helps individuals and organizations prepare with flexibility.

Many corporations use futurists as part of their risk management strategy, for horizon scanning and emerging issues analysis, and to identify wild cards – low probability, potentially high-impact risks.[44] Every successful and unsuccessful business engages in futuring to some degree – for example in research and development, innovation and market research, anticipating competitor behavior and so on.[45][46]

Weak signals, the future sign and wild cards

In futures research, "weak signals" may be understood as advanced, noisy and socially situated indicators of change in trends and systems that constitute raw informational material for enabling anticipatory action. There is some confusion about the definition of weak signal among various researchers and consultants: sometimes it is referred to as future-oriented information, sometimes more as an emerging issue. The confusion has been partly clarified by the concept of "the future sign", which separates the signal, the issue, and the interpretation of the future sign.[47]

A weak signal can be an early indicator of coming change, and an example might also help clarify the confusion. On May 27, 2012, hundreds of people gathered for a “Take the Flour Back” demonstration at Rothamsted Research in Harpenden, UK, to oppose a publicly funded trial of genetically modified wheat. This was a weak signal for a broader shift in consumer sentiment against genetically modified foods. When Whole Foods mandated the labeling of GMOs in 2013, this non-GMO idea had already become a trend and was about to be a topic of mainstream awareness.

"Wild cards" refer to low-probability and high-impact events, such as existential risks. This concept may be embedded in standard foresight projects and introduced into anticipatory decision-making activity in order to increase the ability of social groups adapt to surprises arising in turbulent business environments. Such sudden and unique incidents might constitute turning points in the evolution of a certain trend or system. Wild cards may or may not be announced by weak signals, which are incomplete and fragmented data from which relevant foresight information might be inferred. Sometimes, mistakenly, wild cards and weak signals are considered as synonyms, which they are not.[48] One of the most often cited examples of a wild card event in recent history is 9/11. Nothing had happened in the past that could point to such a possibility and yet it had a huge impact on everyday life in the United States, from simple tasks like how to travel via airplane to deeper cultural values. Wild card events might also be natural disasters, such as Hurricane Katrina, which can force the relocation of huge populations and wipe out entire crops to completely disrupt the supply chain of many businesses. Although wild card events can’t be predicted, after they occur it is often easy to reflect back and convincingly explain why they happened.

Near-term predictions

A long-running tradition in various cultures, and especially in the media, involves various spokespersons making predictions for the upcoming year at the beginning of the year. These predictions sometimes base themselves on current trends in culture (music, movies, fashion, politics); sometimes they make hopeful guesses as to what major events might take place over the course of the next year.

Some of these predictions come true as the year unfolds, though many fail. When predicted events fail to take place, the authors of the predictions often state that misinterpretation of the "signs" and portents may explain the failure of the prediction.

Marketers have increasingly started to embrace futures studies, in an effort to gain an edge in an increasingly competitive marketplace with fast production cycles, using such techniques as trendspotting as popularized by Faith Popcorn.[dubious ]

Trend analysis and forecasting

Mega-trends

Trends come in different sizes. A mega-trend extends over many generations, and in cases of climate, mega-trends can cover periods prior to human existence. They describe complex interactions between many factors. The increase in population from the palaeolithic period to the present provides an example.

Potential trends

Possible new trends emerge from innovations, projects, beliefs or actions that have the potential to grow and eventually go mainstream.

Branching trends

Very often, trends relate to one another the same way as a tree-trunk relates to branches and twigs. For example, a well-documented movement toward equality between men and women might represent a branch trend. The trend toward reducing differences in the salaries of men and women in the Western world could form a twig on that branch.

Life-cycle of a trend

When a potential trend gets enough confirmation in the various media, surveys or questionnaires to show that it has an increasingly accepted value, behavior or technology, it becomes accepted as a bona fide trend. Trends can also gain confirmation by the existence of other trends perceived as springing from the same branch. Some commentators claim that when 15% to 25% of a given population integrates an innovation, project, belief or action into their daily life then a trend becomes mainstream.
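The 15% to 25% rule of thumb lends itself to a simple check against adoption data. The following sketch uses made-up adoption shares to show how one might flag the year a potential trend crosses the lower bound of that range.

# Small sketch of the "15%-25% of the population" rule of thumb for
# when a potential trend is considered mainstream. The adoption series
# below is made-up illustrative data.

adoption_share_by_year = {            # fraction of population, assumed values
    2015: 0.02, 2016: 0.05, 2017: 0.09,
    2018: 0.14, 2019: 0.19, 2020: 0.27,
}

MAINSTREAM_THRESHOLD = 0.15           # lower bound of the cited 15%-25% range

crossing_year = next(
    (year for year, share in sorted(adoption_share_by_year.items())
     if share >= MAINSTREAM_THRESHOLD),
    None,
)
print(f"trend would be considered mainstream from {crossing_year}")  # -> 2019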

General Hype Cycle used to visualize technological life stages of maturity, adoption, and social application.

Life cycle of technologies

Because new advances in technology have the potential to reshape our society, one of the jobs of a futurist is to follow these developments and consider their implications. However, the latest innovations take time to make an impact. Every new technology goes through its own life cycle of maturity, adoption, and social application that must be taken into consideration before a probable vision of the future can be created.

Gartner created its Hype Cycle to illustrate the phases a technology moves through as it grows from research and development to mainstream adoption. The unrealistic expectations and subsequent disillusionment that virtual reality experienced in the 1990s and early 2000s are an example of the middle phases encountered before a technology can begin to be integrated into society.[49]

Education

Education in the field of futures studies has taken place for some time. Beginning in the United States of America in the 1960s, it has since developed in many different countries. Futures education encourages the use of concepts, tools and processes that allow students to think long-term, consequentially, and imaginatively. It generally helps students to:
  1. conceptualize more just and sustainable human and planetary futures.
  2. develop knowledge and skills of methods and tools used to help people understand, map, and influence the future by exploring probable and preferred futures.
  3. understand the dynamics and influence that human, social and ecological systems have on alternative futures.
  4. conscientize responsibility and action on the part of students toward creating better futures.
Thorough documentation of the history of futures education exists, for example in the work of Richard A. Slaughter (2004),[50] David Hicks, and Ivana Milojević,[51] to name a few.

While futures studies remains a relatively new academic tradition, numerous tertiary institutions around the world teach it. These vary from small programs, or universities with just one or two classes, to programs that offer certificates and incorporate futures studies into other degrees (for example in planning, business, environmental studies, economics, development studies, science and technology studies). Various formal master's-level programs exist on six continents. Finally, doctoral dissertations around the world have incorporated futures studies. A recent survey documented approximately 50 cases of futures studies at the tertiary level.[52]

The largest Futures Studies program in the world is at Tamkang University, Taiwan.[citation needed] Futures Studies is a required course at the undergraduate level, with between three and five thousand students taking classes on an annual basis. Housed in the Graduate Institute of Futures Studies is an MA Program. Only ten students are accepted annually in the program. Associated with the program is the Journal of Futures Studies.[53]

The longest-running futures studies program in North America was established in 1975 at the University of Houston–Clear Lake.[54] It moved to the University of Houston in 2007, and the degree was renamed Foresight. The program was established on the belief that if history is studied and taught in an academic setting, then so should the future be. Its mission is to prepare professional futurists. The curriculum incorporates a blend of the essential theory, a framework and methods for doing the work, and a focus on application for clients in business, government, nonprofits, and society in general.[55]

As of 2003, over 40 tertiary education establishments around the world were delivering one or more courses in futures studies. The World Futures Studies Federation[56] has a comprehensive survey of global futures programs and courses. The Acceleration Studies Foundation maintains an annotated list of primary and secondary graduate futures studies programs.[57]

Organizations such as Teach The Future also aim to promote future studies in the secondary school curriculum in order to develop structured approaches to thinking about the future in public school students. The rationale is that a sophisticated approach to thinking about, anticipating, and planning for the future is a core skill requirement that every student should have, similar to literacy and math skills.

Applications of foresight and specific fields

General applicability and use of foresight products

Several corporations and government agencies utilize foresight products to both better understand potential risks and prepare for potential opportunities. Several government agencies publish material for internal stakeholders and also make that material available to the broader public. Examples include the US Congressional Budget Office's long-term budget projections,[58] the National Intelligence Council,[59] and the United Kingdom Government Office for Science.[60] Much of this material is used by policy makers to inform policy decisions and by government agencies to develop long-term plans. Several corporations, particularly those with long product development lifecycles, utilize foresight and futures studies products and practitioners in the development of their business strategies. The Shell Corporation is one such entity.[61] Foresight professionals and their tools are increasingly being utilized in both the private and public sectors to help leaders deal with an increasingly complex and interconnected world.

Design

Design and futures studies have many synergies as interdisciplinary fields with a natural orientation towards the future. Both incorporate studies of human behavior, global trends, strategic insights, and anticipatory solutions.

Designers have adopted futures methodologies including scenarios, trend forecasting, and futures research. Design thinking and specific techniques including ethnography, rapid prototyping, and critical design have been incorporated into futures work as well. In addition to borrowing techniques from one another, futurists and designers have joined to form agencies marrying both competencies to positive effect. The continued interrelation of the two fields is an encouraging trend that has spawned much interesting work.

The Association for Professional Futurists has also held meetings discussing the ways in which Design Thinking and Futures Thinking intersect and benefit one another.

Imperial cycles and world order

Imperial cycles represent an "expanding pulsation" of a "mathematically describable" macro-historic trend.[62] The List of largest empires contains the imperial record progression in terms of territory or percentage of world population under single imperial rule.

In the late 19th century, the Chinese philosopher K'ang Yu-wei and the French demographer Georges Vacher de Lapouge were the first to stress that the trend cannot proceed indefinitely on the finite surface of the globe and is bound to culminate in a world empire. K'ang Yu-wei estimated that the matter would be decided in a contest between Washington and Berlin; Vacher de Lapouge foresaw this contest as between the United States and Russia and judged the chances of the United States to be higher.[63] Both published their futures studies before H. G. Wells introduced the science of the future in his Anticipations (1901).

Four later anthropologists—Hornell Hart, Raoul Naroll, Louis Morano, and Robert Carneiro—researched the expanding imperial cycles. They reached the same conclusion that a world empire is not only pre-determined but close at hand and attempted to estimate the time of its appearance.[64]

Education

As foresight has expanded to include a broader range of social concerns, all levels and types of education have been addressed, including formal and informal education. Many countries are beginning to implement foresight in their education policy. A few programs are listed below:
  • Finland's FinnSight 2015[65] - Implementation began in 2006, and although the effort was not referred to as "foresight" at the time, it displays the characteristics of a foresight program.
  • Singapore's Ministry of Education Masterplan for Information Technology in Education[66] - This third Masterplan continues what was built in the first and second plans, transforming learning environments to equip students to compete in a knowledge economy.
  • The World Future Society, founded in 1966, is the largest and longest-running community of futurists in the world. WFS established and built futurism from the ground up—through publications, global summits, and advisory roles to world leaders in business and government.[25]
By the early 2000s, educators began to independently institute futures studies (sometimes referred to as futures thinking) lessons in K-12 classroom environments.[67] To meet the need, non-profit futures organizations designed curriculum plans to supply educators with materials on the topic. Many of the curriculum plans were developed to meet common core standards. Futures studies education methods for youth typically include age-appropriate collaborative activities, games, systems thinking and scenario building exercises.[68]

Science fiction

Wendell Bell and Ed Cornish acknowledge science fiction as a catalyst to futures studies, conjuring up visions of tomorrow.[69] Science fiction's potential to provide an "imaginative social vision" is its contribution to futures studies and public perspective. Productive sci-fi presents plausible, normative scenarios.[69] Jim Dator attributes the foundational concepts of "images of the future" to Wendell Bell, for clarifying Fred Polak's concept in Images of the Future as it applies to futures studies. Similar to the scenario thinking of futures studies, empirically supported visions of the future are a window into what the future could be. Pamela Sargent states, "Science fiction reflects attitudes typical of this century." She gives a brief history of impactful sci-fi publications, such as the Foundation trilogy by Isaac Asimov and Starship Troopers by Robert A. Heinlein.[72] Alternate perspectives validate sci-fi as part of the fuzzy "images of the future."[71] However, the challenge is the lack of consistent literature frameworks grounded in futures research.[72] Ian Miles reviewed The New Encyclopedia of Science Fiction, identifying ways in which science fiction and futures studies "cross-fertilize, as well as the ways in which they differ distinctly." Science fiction cannot simply be considered fictionalized futures studies: it may have aims other than "prediction, and be no more concerned with shaping the future than any other genre of literature",[73] and, because of its inconsistent grounding in futures research, it is not to be understood as an explicit pillar of futures studies. Additionally, Dennis Livingston, a literature critic for the journal Futures, says, "The depiction of truly alternative societies has not been one of science fiction's strong points", especially with regard to preferred, normative visions.[74]

Government agencies

Several governments have formalized strategic foresight agencies to encourage long-range strategic societal planning; the most notable are the governments of Singapore, Finland, and the United Arab Emirates. Other governments with strategic foresight agencies include Canada, with Policy Horizons Canada, and Malaysia, with the Malaysian Foresight Institute.

The Singapore government's Centre for Strategic Futures (CSF) is part of the Strategy Group within the Prime Minister's Office. Its mission is to position the Singapore government to navigate emerging strategic challenges and harness potential opportunities.[75] Singapore's early formal efforts in strategic foresight began in 1991 with the establishment of the Risk Detection and Scenario Planning Office in the Ministry of Defence.[76] In addition to the CSF, the Singapore government has established the Strategic Futures Network, which brings together deputy secretary-level officers and foresight units across the government to discuss emerging trends that may have implications for Singapore.[76]

Since the 1990s, Finland has integrated strategic foresight within the parliament and Prime Minister’s Office.[77] The government is required to present a “Report of the Future” each parliamentary term for review by the parliamentary Committee for the Future. Led by the Prime Minister’s Office, the Government Foresight Group coordinates the government’s foresight efforts.[78] Futures research is supported by the Finnish Society for Futures Studies (established in 1980), the Finland Futures Research Centre (established in 1992), and the Finland Futures Academy (established in 1998) in coordination with foresight units in various government agencies.[78]

In the United Arab Emirates, Sheikh Mohammed bin Rashid, Vice President and Ruler of Dubai, announced in September 2016 that all government ministries were to appoint Directors of Future Planning. Sheikh Mohammed described the UAE Strategy for the Future as an "integrated strategy to forecast our nation's future, aiming to anticipate challenges and seize opportunities".[79] The Ministry of Cabinet Affairs and Future (MOCAF) is mandated with crafting the UAE Strategy for the Future and is responsible for the portfolio of the future of the UAE.[80]

Risk analysis and management

Foresight is also applied when studying potential risks to society and how to effectively deal with them.[81][82] These risks may arise from the development and adoption of emerging technologies and/or social change. Special interest lies in hypothetical future events that have the potential to damage human well-being on a global scale - global catastrophic risks.[83] Such events may cripple or destroy modern civilization or, in the case of existential risks, even cause human extinction.[84] Potential global catastrophic risks include, but are not limited to, hostile artificial intelligence, nanotechnology weapons, climate change, nuclear warfare, total war, and pandemics.

Research centers

Futurists

Several authors have become recognized as futurists.[88] They research trends, particularly in technology, and write their observations, conclusions, and predictions. In earlier eras, many futurists worked at academic institutions; John McHale, author of The Future of the Future, published a 'Futures Directory' and directed a think tank called the Centre for Integrative Studies at a university. Other futurists have started consulting groups or earn money as speakers, with examples including Alvin Toffler, John Naisbitt and Patrick Dixon. Frank Feather is a business speaker who presents himself as a pragmatic futurist. Some futurists have commonalities with science fiction, and some science-fiction writers, such as Arthur C. Clarke, are known as futurists.[citation needed] In the introduction to The Left Hand of Darkness, Ursula K. Le Guin distinguished futurists from novelists, writing that prediction is the business of prophets, clairvoyants, and futurists. In her words, "a novelist's business is lying".

A survey of 108 futurists found that they share a variety of assumptions, including in their description of the present as a critical moment in an historical transformation, in their recognition and belief in complexity, and in their being motivated by change and having a desire for an active role bringing change (versus simply being involved in forecasting).[89]

Notable futurists

Books

APF's list of most significant futures works

The Association for Professional Futurists recognizes the Most Significant Futures Works for the purpose of identifying and rewarding the work of foresight professionals and others whose work illuminates aspects of the future.[94]

Other notable foresight books
