
Monday, March 16, 2015

Nature versus nurture


From Wikipedia, the free encyclopedia

Scholarly and popular discussion about nature and nurture relates to the relative importance of an individual's innate qualities ("nature" in the sense of nativism or innatism) as compared to an individual's personal experiences ("nurture" in the sense of empiricism or behaviorism) in causing individual differences in physical and behavioral traits.

The phrase "nature and nurture" in its modern sense was coined[1][2][3] by the English Victorian polymath Francis Galton in discussion of the influence of heredity and environment on social advancement, although the terms had been contrasted previously, for example by Shakespeare (in his play, The Tempest: 4.1). Galton was influenced[4] by the book On the Origin of Species written by his half-cousin, Charles Darwin. The concept embodied in the phrase has been criticized[3][4] for its binary simplification of two tightly interwoven parameters, as for example an environment of wealth, education, and social privilege are often historically passed to genetic offspring, even though wealth, education, and social privilege are not part of the human biological system, and so cannot be directly attributed to genetics.

The view that humans acquire all or almost all their behavioral traits from "nurture" was termed tabula rasa ("blank slate") by philosopher John Locke. The blank-slate view proposes that humans develop only from environmental influences. The nature–nurture question was once considered an appropriate way to divide developmental influences, but since both types of factors are known to play interacting roles in development, most modern psychologists and other scholars of human development consider the question naive, representing an outdated state of knowledge.[5][6][7][8]

One may also refer to the concepts of innatism and empiricism as genetic determinism and environmentalism respectively. These two conflicting approaches have influenced research agendas for a century. While genetic determinism holds that development is primarily influenced by the genetic code of a person, environmentalism emphasises the influence of experiences and social factors. In the twenty-first century, a consensus is developing that both genetic and environmental agents influence development interactively.[9]:85 In the social and political sciences, the nature versus nurture debate may be contrasted with the structure versus agency debate (that is, socialization versus individual autonomy). For a discussion of nature versus nurture in language and other human universals, see also psychological nativism.

In a 2014 survey of scientists, many respondents wrote that the familiar distinction between nature and nurture has outlived its usefulness and should be retired. One reason is the explosion of work in the field of epigenetics. Scientists believe that there is a long and circuitous route, with many feedback loops, from a particular set of genes to a feature of the adult organism. Culture is itself a biological phenomenon: a set of abilities and practices that allow members of one generation to learn and change and to pass the results of that learning on to the next generation.[10][11]

Scientific approach

To disentangle the effects of genes and environment, behavioral geneticists perform adoption and twin studies. These studies provide ways to decompose the variance in a population into genetic and environmental components. This move from individuals to populations makes a critical difference in the way people think about nature and nurture. The difference is highlighted in a quote attributed to psychologist Donald Hebb, who is said to have once answered a journalist's question of "which, nature or nurture, contributes more to personality?" by asking in response, "Which contributes more to the area of a rectangle, its length or its width?"[12] For a particular rectangle, its area is indeed the product of its length and width. Moving to a population, however, this analogy masks the fact that there are many individuals, and that it is meaningful to talk about their differences.[13]

Scientific approaches also seek to break down variance beyond these two categories of nature and nurture. Thus rather than "nurture", behavior geneticists distinguish shared family factors (i.e., those shared by siblings, making them more similar) and nonshared factors (i.e., those that uniquely affect individuals, making siblings different). To express the portion of the variance due to the "nature" component, behavioral geneticists generally refer to the heritability of a trait.
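To make the decomposition concrete, a minimal sketch is given below. The component values are invented purely for illustration and are not estimates from any study; the point is only how heritability, shared-environment, and nonshared-environment effects are expressed as proportions of total phenotypic variance.

def variance_components(var_a: float, var_c: float, var_e: float):
    """Return (h2, c2, e2): the proportions of phenotypic variance
    attributed to additive genetic (A), shared-environment (C),
    and nonshared-environment (E) components."""
    total = var_a + var_c + var_e
    return var_a / total, var_c / total, var_e / total

# Illustrative component variances (made up for demonstration):
h2, c2, e2 = variance_components(var_a=0.5, var_c=0.2, var_e=0.3)
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")  # h2 = 0.50, c2 = 0.20, e2 = 0.30

Reported heritabilities are therefore statements about variance in a population, not about how much of any single person's trait "comes from" genes.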

With regard to personality traits and adult IQ in the general U.S. population, the portion of the overall variance that can be attributed to shared family effects is often negligible.[14]

In her Pulitzer Prize-nominated book The Nurture Assumption, author Judith Harris argues that "nurture," as traditionally defined in terms of family upbringing, does not effectively explain the variance for most traits (such as adult IQ and the Big Five personality traits) in the general population of the United States. On the contrary, Harris suggests that either peer groups or random environmental factors (i.e., those that are independent of family upbringing) are more important than family environmental effects.[15][16]

Although "nurture" has historically been referred to as the care given to children by the parents, with the mother playing a role of particular importance, this term is now regarded by some as any environmental factor in the contemporary nature versus nurture debate. Thus the definition of "nurture" has expanded to include influences on development arising from prenatal, parental, extended family, and peer experiences, and extending to influences such as media, marketing, and socio-economic status. Indeed, a substantial source of environmental input to human nature may arise from stochastic variations in prenatal development.[17][18]

Heritability estimates

This chart illustrates three patterns one might see when studying the influence of genes and environment on traits in individuals. Trait A shows a high sibling correlation, but little heritability (i.e. high shared environmental variance c2; low heritability h2). Trait B shows a high heritability since correlation of trait rises sharply with degree of genetic similarity. Trait C shows low heritability, but also low correlations generally; this means Trait C has a high nonshared environmental variance e2. In other words, the degree to which individuals display Trait C has little to do with either genes or broadly predictable environmental factors—roughly, the outcome approaches random for an individual. Notice also that even identical twins raised in a common family rarely show 100% trait correlation.

It is important to note that the term heritability refers only to the degree of genetic variation between people on a trait. It does not refer to the degree to which a trait of a particular individual is due to environmental or genetic factors. The traits of an individual are always a complex interweaving of both.[19] For an individual, even strongly genetically influenced, or "obligate" traits, such as eye color, assume the inputs of a typical environment during ontogenetic development (e.g., certain ranges of temperatures, oxygen levels, etc.).

In contrast, the "heritability index" statistically quantifies the extent to which variation between individuals on a trait is due to variation in the genes those individuals carry. In animals where breeding and environments can be controlled experimentally, heritability can be determined relatively easily. Such experiments would be unethical for human research. This problem can be overcome by finding existing populations of humans that reflect the experimental setting the researcher wishes to create.

One way to determine the contribution of genes and environment to a trait is to study twins. In one kind of study, identical twins reared apart are compared to randomly selected pairs of people. The twins share identical genes, but different family environments. In another kind of twin study, identical twins reared together (who share family environment and genes) are compared to fraternal twins reared together (who also share family environment but only share half their genes). Another condition that permits the disassociation of genes and environment is adoption. In one kind of adoption study, biological siblings reared together (who share the same family environment and half their genes) are compared to adoptive siblings (who share their family environment but none of their genes).
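One simple way such twin comparisons are turned into numbers is Falconer's method, which contrasts identical (MZ) and fraternal (DZ) twin correlations. The sketch below is a simplified illustration with invented correlations, not a reproduction of any particular study's analysis.

def falconer_estimates(r_mz: float, r_dz: float):
    """Rough ACE estimates from twin correlations (Falconer's method).
    r_mz: correlation between identical twins reared together
    r_dz: correlation between fraternal twins reared together"""
    h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = r_mz - h2           # shared (family) environment
    e2 = 1 - r_mz            # nonshared environment plus measurement error
    return h2, c2, e2

# Hypothetical correlations, chosen only for illustration:
h2, c2, e2 = falconer_estimates(r_mz=0.80, r_dz=0.50)
print(h2, c2, e2)  # approximately 0.6, 0.2, 0.2

The same basic logic underlies the more elaborate structural-equation (ACE) models that fit all the kinship groups at once.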

In many cases, genes have been found to make a substantial contribution to traits, including psychological traits such as intelligence and personality.[20] Yet heritability may differ in other circumstances, for instance environmental deprivation. Examples of low, medium, and high heritability traits include:

Low heritability: specific language, specific religion
Medium heritability: weight, religiosity
High heritability: blood type, eye color

Twin and adoption studies have their methodological limits. For example, both are limited to the range of environments and genes which they sample. Almost all of these studies are conducted in Western, first-world countries, and therefore cannot be extrapolated globally to include poorer, non-western populations. Additionally, both types of studies depend on particular assumptions, such as the equal environments assumption in the case of twin studies, and the lack of pre-adoptive effects in the case of adoption studies.

Interaction of genes and environment

Heritability refers to the origins of differences between people. Individual development, even of highly heritable traits, such as eye color, depends on a range of environmental factors, from the other genes in the organism, to physical variables such as temperature, oxygen levels etc. during its development or ontogenesis.

The variability of a trait can be meaningfully spoken of as being due in certain proportions to genetic differences ("nature") or to environments ("nurture"). For highly penetrant Mendelian genetic disorders such as Huntington's disease, virtually all the incidence of the disease is due to genetic differences. Huntington's animal models live much longer or shorter lives depending on how they are cared for[citation needed].

At the other extreme, traits such as native language are environmentally determined: linguists have found that any child (if capable of learning a language at all) can learn any human language with equal facility.[21] With virtually all biological and psychological traits, however, genes and environment work in concert, communicating back and forth to create the individual.

At a molecular level, genes interact with signals from other genes and from the environment. While there are many thousands of single-gene-locus traits, so-called complex traits are due to the additive effects of many (often hundreds) of small gene effects. A good example of this is height, where variance appears to be spread across many hundreds of loci.[22]
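As a rough illustration of how many small additive effects can produce a smooth, continuous trait such as height, the simulation below sums several hundred hypothetical loci, each with a tiny effect, plus an environmental term. All parameter values are invented for demonstration.

import numpy as np

rng = np.random.default_rng(0)
n_people, n_loci = 10_000, 500

# Each person carries 0, 1, or 2 copies of the "effect" allele at each locus,
# and each locus has a small additive effect on the trait.
genotypes = rng.binomial(2, 0.5, size=(n_people, n_loci))
effects = rng.normal(0, 0.1, size=n_loci)

genetic_value = genotypes @ effects                 # additive polygenic score
environment = rng.normal(0, genetic_value.std(), size=n_people)
trait = genetic_value + environment                 # e.g. height, arbitrary units

# With genetic and environmental variance roughly equal, about half the
# trait variance is genetic (heritability near 0.5 in this toy model).
print(genetic_value.var() / trait.var())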

Extreme genetic or environmental conditions can predominate in rare circumstances—if a child is born mute due to a genetic mutation, it will not learn to speak any language regardless of the environment; similarly, someone who is practically certain to eventually develop Huntington's disease according to their genotype may die in an unrelated accident (an environmental event) long before the disease will manifest itself.

The "two buckets" view of heritability.

More realistic "homogenous mudpie" view of heritability.
Steven Pinker likewise described several examples:[23]
"concrete behavioral traits that patently depend on content provided by the home or culture—which language one speaks, which religion one practices, which political party one supports—are not heritable at all. But traits that reflect the underlying talents and temperaments—how proficient with language a person is, how religious, how liberal or conservative—are partially heritable."
When traits are determined by a complex interaction of genotype and environment it is possible to measure the heritability of a trait within a population. However, many non-scientists who encounter a report of a trait having a certain percentage heritability imagine non-interactional, additive contributions of genes and environment to the trait. As an analogy, some laypeople may think of the degree of a trait being made up of two "buckets," genes and environment, each able to hold a certain capacity of the trait. But even for intermediate heritabilities, a trait is always shaped by both genetic dispositions and the environments in which people develop, merely with greater and lesser plasticities associated with these heritability measures.

Heritability measures always refer to the degree of variation between individuals in a population. That is, as these statistics cannot be applied at the level of the individual, it would be incorrect to say that while the heritability index of personality is about 0.6, 60% of one's personality is obtained from one's parents and 40% from the environment. To help to understand this, imagine that all humans were genetic clones. The heritability index for all traits would be zero (all variability between clonal individuals must be due to environmental factors). And, contrary to erroneous interpretations of the heritability index, as societies become more egalitarian (everyone has more similar experiences) the heritability index goes up (as environments become more similar, variability between individuals is due more to genetic factors).
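The clone thought experiment, and the claim that heritability rises as environments become more uniform, can be checked with a toy simulation. All effect sizes and variances below are invented purely to illustrate the logic, not to model any real trait.

import numpy as np

rng = np.random.default_rng(1)

def heritability(genetic_sd: float, env_sd: float, n: int = 100_000) -> float:
    """Fraction of trait variance attributable to genetic differences."""
    g = rng.normal(0, genetic_sd, n)   # genetic differences between people
    e = rng.normal(0, env_sd, n)       # environmental differences between people
    trait = g + e
    return g.var() / trait.var()

print(heritability(genetic_sd=1.0, env_sd=1.0))  # ~0.5: mixed influences
print(heritability(genetic_sd=0.0, env_sd=1.0))  # 0.0: a population of clones
print(heritability(genetic_sd=1.0, env_sd=0.1))  # ~0.99: near-identical environments

Shrinking the environmental spread pushes the heritability toward 1, even though nothing about any individual's biology has changed.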

One should also take into account that heritability and environmentality estimates are not precise and vary within a chosen population and across cultures. It would be more accurate to say that the degree of heritability and environmentality is measured with reference to a particular phenotype, in a chosen group of a population, in a given period of time. The accuracy of the calculations is further limited by the number of coefficients taken into consideration, age being one such variable. The influence of heritability and environmentality differs markedly across age groups: the older the subjects studied, the more noticeable the heritability factor becomes; the younger the subjects, the more strongly environmental factors appear to dominate.

Some have pointed out that environmental inputs affect the expression of genes (see the article on epigenetics). This is one explanation of how environment can influence the extent to which a genetic disposition will actually manifest.[citation needed] The interactions of genes with environment, called gene–environment interactions, are another component of the nature–nurture debate. A classic example of gene–environment interaction is the ability of a diet low in the amino acid phenylalanine to partially suppress the genetic disease phenylketonuria. Yet another complication to the nature–nurture debate is the existence of gene-environment correlations. These correlations indicate that individuals with certain genotypes are more likely to find themselves in certain environments. Thus, it appears that genes can shape (the selection or creation of) environments. Even using experiments like those described above, it can be very difficult to determine convincingly the relative contribution of genes and environment.

An influential study by T. J. Bouchard, Jr. and colleagues, testing middle-aged twins reared together and reared apart, provided significant evidence for the importance of genes, and correspondingly weak evidence for family environment, in determining traits such as happiness. In the Minnesota study of twins reared apart, the correlation for monozygotic twins reared apart (.52) was actually higher than for monozygotic twins reared together (.44). Also highlighting the importance of genes, these correlations were much higher for monozygotic than for dizygotic twins, who showed correlations of .08 when reared together and -.02 when reared apart (Lykken & Tellegen, 1996).

Obligate vs. facultative adaptations

Traits may be considered to be adaptations (such as the umbilical cord), byproducts of adaptations (the belly button) or due to random variation (convex or concave belly button shape).[24] An alternative to contrasting nature and nurture focuses on "obligate vs. facultative" adaptations.[24] Adaptations may be generally more obligate (robust in the face of typical environmental variation) or more facultative (sensitive to typical environmental variation). For example, the rewarding sweet taste of sugar and the pain of bodily injury are obligate psychological adaptations—typical environmental variability during development does not much affect their operation.[25] On the other hand, facultative adaptations are somewhat like "if-then" statements.[26] An example of a facultative psychological adaptation may be adult attachment style. The attachment style of adults (for example, a "secure attachment style," the propensity to develop close, trusting bonds with others) is proposed to be conditional on whether an individual's early childhood caregivers could be trusted to provide reliable assistance and attention. An example of a facultative physiological adaptation is tanning of skin on exposure to sunlight (to prevent skin damage).

Advanced techniques

The power of quantitative studies of heritable traits has been expanded by the development of new techniques. Developmental genetic analysis examines the effects of genes over the course of a human lifespan. For example, early studies of intelligence, which mostly examined young children, found heritability measures of 40–50%. Subsequent developmental genetic analyses found that variance attributable to additive environmental effects is less apparent in older individuals,[27][28][29] with the estimated heritability of IQ being higher in adulthood.

Another advanced technique, multivariate genetic analysis, examines the genetic contribution to several traits that vary together. For example, multivariate genetic analysis has demonstrated that the genetic determinants of all specific cognitive abilities (e.g., memory, spatial reasoning, processing speed) overlap greatly, such that the genes associated with any specific cognitive ability will affect all others. Similarly, multivariate genetic analysis has found that genes that affect scholastic achievement completely overlap with the genes that affect cognitive ability.

Extremes analysis examines the link between normal and pathological traits. For example, it is hypothesized that a given behavioral disorder may represent an extreme of a continuous distribution of a normal behavior and hence an extreme of a continuous distribution of genetic and environmental variation. Depression, phobias, and reading disabilities have been examined in this context.

For a few highly heritable traits, some studies have identified loci associated with variance in that trait in some individuals. For example, research groups have identified loci that are associated with schizophrenia in subsets of patients with that diagnosis.[30]

Nature and nurture

"Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors." – John B. Watson

The nature side of this debate emphasizes how much of an organism reflects biological factors. Genes are activated at appropriate times during development and are the basis for protein production. Proteins include a wide range of molecules, such as hormones and enzymes, that act in the body as signaling and structural molecules to direct development. With respect to the influence of genes in the nature versus nurture debate, variation has been found in the promoter region of the serotonin transporter gene (5-HTTLPR); the discovery of this inherited "happiness gene" has been cited as promising evidence for the nature side of the debate in measures of life satisfaction (Jan-Emmanuel De Neve, 2010).

The nurture side, on the other hand, emphasizes how much of an organism reflects environmental factors. In reality, it is most likely an interaction of both genes and environment, nature and nurture, that affects the development of a person. Even in the womb, genes interact with hormones in the environment to signal the start of a new developmental phase. The hormonal environment, likewise, does not act independently of the genes and cannot correct lethal errors in the genetic makeup of a fetus. The genes and the environment must be in sync for normal development. Similarly, even if a person has inherited genes for taller than average height, the person may not grow to be as tall as is genetically possible if proper nutrition is not provided. Here too the interaction of genes and the environment is blurred. It has been suggested that the key to understanding complex human behaviour and diseases is to study genes, the environment, and the interactions between the two equally.[31]

IQ debate

Evidence suggests that family environmental factors may have an effect upon childhood IQ, accounting for up to a quarter of the variance. The American Psychological Association's report "Intelligence: Knowns and Unknowns" (1995) states that there is no doubt that normal child development requires a certain minimum level of responsible care. Here, environment plays a role in what is often believed to be fully genetic (intelligence): severely deprived, neglectful, or abusive environments have highly negative effects on many aspects of children's intellectual development. Beyond that minimum, however, the role of family experience is in serious dispute. By late adolescence, this correlation disappears, such that adoptive siblings no longer have similar IQ scores.[14]
Moreover, adoption studies indicate that, by adulthood, adoptive siblings are no more similar in IQ than strangers (IQ correlation near zero), while full siblings show an IQ correlation of 0.6. Twin studies reinforce this pattern: monozygotic (identical) twins raised separately are highly similar in IQ (0.74), more so than dizygotic (fraternal) twins raised together (0.6) and much more than adoptive siblings (~0.0).[32] Recent adoption studies also found that supportive parents can have a positive effect on the development of their children.[33]
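Under the usual simplifying assumptions of twin and adoption designs, the correlations quoted above translate fairly directly into variance components: the correlation between identical twins reared apart (shared genes, no shared family environment) is itself an estimate of heritability, and the adult adoptive-sibling correlation (shared family environment, no shared genes) estimates the shared-environment component. A back-of-the-envelope sketch using those figures:

# Rough estimates from the correlations cited in the text, under standard
# (simplifying) twin/adoption-study assumptions.
r_mz_apart = 0.74        # identical twins reared apart: shared genes only
r_adoptive = 0.0         # adult adoptive siblings: shared family environment only

h2 = r_mz_apart          # heritability estimate for adult IQ
c2 = r_adoptive          # shared family environment estimate
e2 = 1 - h2 - c2         # nonshared environment plus measurement error

print(h2, c2, e2)        # 0.74, 0.0, 0.26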

Personality traits

Personality is a frequently cited example of a heritable trait that has been studied in twins and adoptees. The best-known categorical organization of heritable personality traits was created by Goldberg (1990), who had college students rate their personalities on 1,400 dimensions and then narrowed these down into "The Big Five" factors of personality: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The close genetic relationship between positive personality traits and, for example, happiness traits is the mirror image of comorbidity in psychopathology (Kendler et al., 2006, 2007). These personality factors were consistent across cultures, and many experiments have also tested their heritability.

Identical twins reared apart are far more similar in personality than randomly selected pairs of people. Likewise, identical twins are more similar than fraternal twins, and biological siblings are more similar in personality than adoptive siblings. Each observation suggests that personality is heritable to a certain extent. A supporting study on the heritability of personality (estimated to be around 50% for subjective well-being) used a representative sample of 973 twin pairs and found that heritable differences in subjective well-being were fully accounted for by the genetic model of the Five-Factor Model's personality domains.[34]

However, these same study designs allow for the examination of environment as well as genes. Adoption studies also directly measure the strength of shared family effects: adopted siblings share only family environment. Most adoption studies indicate that by adulthood the personalities of adopted siblings are little or no more similar than random pairs of strangers, which would mean that shared family effects on personality are zero by adulthood. As is the case with personality, non-shared environmental effects are often found to outweigh shared environmental effects. That is, environmental effects that are typically thought to be life-shaping (such as family life) may have less of an impact than non-shared effects, which are harder to identify. One possible source of non-shared effects is the environment of prenatal development; random variations in the genetic program of development may be a substantial source of non-shared environment. These results suggest that "nurture" may not be the predominant factor in "environment". Environment and circumstances do in fact impact our lives, but not so much the way in which we react to those environmental factors: we are preset with personality traits that form the basis for how we react to situations. An example would be how extraverted prisoners become less happy than introverted prisoners and react to their incarceration more negatively due to their preset extraverted personality.[19]:Ch 19

Genetics

Genomics

The relationship between personality and people's own well-being is influenced and mediated by genes (Weiss, Bates, & Luciano, 2008). There appears to be a stable set point for happiness that is characteristic of the individual and largely determined by the individual's genes. Happiness fluctuates around that set point (again, genetically determined) based on whether good things or bad things are happening ("nurture"), but the fluctuations are of small magnitude in a normal human. The midpoint of these fluctuations is determined by the "great genetic lottery" that people are born with, leading some researchers to conclude that how happy people feel at a given moment or over time is largely due to the luck of the draw, or genes. This fluctuation was also not due to educational attainment, which accounted for less than 2% of the variance in well-being for women and less than 1% of the variance for men (Lykken & Tellegen, 1996).
With the advent of genomic sequencing, it has become possible to search for and identify specific gene polymorphisms that affect traits such as IQ and personality. These techniques work by tracking the association of differences in a trait of interest with differences in specific molecular markers or functional variants. An example of a visible human trait for which the precise genetic basis of differences is relatively well known is eye color. For traits with many genes affecting the outcome, a smaller portion of the variance is currently understood: for instance, for height, known gene variants account for around 5–10% of the variance at present.[citation needed] Regarding the role of genetic heritability in one's level of happiness, it has been found that 44% to 52% of the variance in well-being is associated with genetic variation. Based on retesting smaller samples from twin studies after 4, 5, and 10 years, it is estimated that the heritability of the stable genetic component of subjective well-being approaches 80% (Lykken & Tellegen, 1996). Other studies have found that genes account for a large share of the variance in happiness measures, roughly 35–50% (Roysamb et al., 2002; Stubbe et al., 2005; Nes et al., 2005, 2006).

Linkage and association studies

In their attempts to locate the genes responsible for configuring certain phenotypes, researchers resort to two different techniques. Linkage studies facilitate the process of determining the specific location at which a gene of interest resides. This methodology is applied only among individuals who are related and does not serve to pinpoint specific genes; it does, however, narrow down the area of search, making it easier to locate one or several genes in the genome that contribute to a specific trait.

Association studies, on the other hand, are more hypothesis-driven and seek to verify whether a particular genetic variant really influences the phenotype of interest. Association studies commonly use a case-control approach, comparing subjects who show relatively higher or lower levels of the hereditary determinant with control subjects.
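The case-control logic can be sketched with a toy example: test whether a genetic variant is more common among affected individuals (cases) than among unaffected controls. The allele counts below are invented, and the variant and disease are left unnamed; this is an illustration of the approach, not an analysis of real data.

from scipy.stats import chi2_contingency

# Hypothetical allele counts at a candidate variant:
#            [risk allele, other allele]
cases    = [360, 640]    # 1,000 alleles sampled from affected individuals
controls = [300, 700]    # 1,000 alleles sampled from unaffected controls

chi2, p_value, _, _ = chi2_contingency([cases, controls])

# Odds ratio: how much more common the risk allele is among cases
odds_ratio = (cases[0] / cases[1]) / (controls[0] / controls[1])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")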

Philosophical difficulties

Philosophical questions regarding nature and nurture include the question of the nature of the trait itself, questions of determinism, and whether the question is well posed.

As well as asking whether a trait such as IQ is heritable, one can ask what it is about "intelligence" that is being inherited. Similarly, if in a broad set of environments genes account for almost all observed variation in a trait, this raises the notion of genetic determinism and/or biological determinism, and the question of the level of analysis appropriate for the trait. Finally, as early as 1951, Calvin Hall[35] suggested that discussion opposing nature and nurture was fruitless. If environments can be varied in ways that affect development, then the heritability of the character will change as well. Conversely, if the genetic composition of a population changes, then heritability may also change.

The example of phenylketonuria (PKU) is informative. Untreated, this is a completely penetrant genetic disorder causing brain damage and progressive mental retardation. PKU can be treated by the elimination of phenylalanine from the diet. Hence, a character (PKU) that used to have a virtually perfect heritability is no longer heritable when modern medicine is available (the actual allele causing PKU would still be inherited, but the phenotype PKU would no longer be expressed). It is useful then to think of what is inherited as a mechanism for breaking down phenylalanine. Separately from this we can consider whether the organism has other mechanisms (for instance, a drug that breaks down this amino acid) or does not need the mechanism (due to dietary exclusion).

Similarly, within, say, an inbred strain of mice, no genetic variation is present and every character will have a zero heritability. If the complications of gene–environment interactions and correlations (see above) are added, then it appears to many that heritability, the epitome of the nature–nurture opposition, is "a station passed."[36]

Relatedly, the idea that either nature or nurture by itself explains a creature's behavior is an example of the single-cause fallacy.

Free will

Some believe that evolutionary explanations describe factors which limit our free will, in that they can be seen to imply that we behave in ways to which we are ‘naturally inclined'. J. Mizzoni wrote: “There are some moral philosophers (such as Thomas Nagel) who believe that evolutionary considerations are irrelevant to a full understanding of the foundations of ethics. Other moral philosophers (such as J. L. Mackie) tell quite a different story. They hold that the admission of the evolutionary origins of human beings compels us to concede that there are no foundations for ethics.”[37]

Critics of this ethical view point out that whether or not a behavioral trait is inherited does not affect whether it can be changed by one's culture or independent choice,[38] and that evolutionary inclinations could be discarded in ethical and political discussions regardless of whether they exist or not.[39]

Leda Cosmides and John Tooby noted that William James (1842–1910) argued that humans have more "instincts" than animals, and that greater freedom of action is the result of having more psychological instincts, not fewer.[40] Daniel C. Dennett explores this idea in his 2003 book Freedom Evolves.

In popular culture

The nature vs. nurture debate has come up in fiction in various ways. An obvious example can be seen in science-fiction stories involving cloning, where genetically identical people experience different lives growing up and are consequently shaped into different people despite beginning with the same potential. One example is The Boys from Brazil, where multiple clones of Hitler are created as part of a Nazi scientist's experiments to recreate the Führer by arranging for the clones to experience the same defining moments that Hitler did. Another is Star Trek: Nemesis, where the film's villain is a clone of protagonist Captain Jean-Luc Picard who regularly 'defends' his actions by arguing that Picard would do the same thing if he had lived the clone's life (although Picard's crew argue that Picard has shown a desire to better himself from the beginning that the clone has never displayed). Other, more complex examples include the film Trading Places, where two wealthy brothers make a bet over the debate by ruining the life of one of their employees and promoting a street bum to replace him to see what happens; the series Orphan Black, featuring multiple clones of one woman shaped in various ways by their upbringing (ranging from a cop to a con artist to a religious assassin); and the differences between the Marvel Comics characters Nate Grey and Cable, two people who are essentially alternate versions of each other from two different timelines, beginning with the same power potential but developing very differently afterwards due to Cable's military training contrasting with Nate's reliance on his powers.

Implications in law

Recently, the nature versus nurture debate has entered the realm of law and criminal defense. In some cases, lawyers for violent offenders have begun to argue that an individual’s genes, rather than their rational decision-making processes, can cause criminal activity.[41] Early attempts to employ a genetic defense were concerned with XYY syndrome – a genetic abnormality in which men have a second Y chromosome. However, several critiques argue that XYY individuals are not predisposed to aggression or violence, discrediting the theory as a plausible criminal defense.[42]

Evidence supporting the genetic defense includes H.G. Brunner's 1993 discovery of what is now known as Brunner syndrome, and a series of studies on mice. Proponents of the defense suggest that individuals cannot be held accountable for their genes and, as a result, should not be held responsible for their dispositions and resulting actions.[42] If a single gene mutation is the reason for aggressive behaviour, there is a possibility that aggressive offenders would have a genetic excuse for their crimes.[42] Genetic determinism has been debated for its plausibility of use in the judicial system for some time, yet "the use of a 'genetic defense' in the courtroom is a fairly new phenomenon."[42] In the United States, genetic evidence has been used in court defenses and has succeeded in reducing sentences for violent offenders, for "certain predispositions may reduce the blameworthiness of the offender."[42]

Canadian Justice McLachlin noted: "I can find no support in criminal theory for the conclusion that protection of the morally innocent requires a general consideration of individual excusing conditions. The principle comes into play only at the point where the person is shown to lack the capacity to appreciate the nature and quality or consequences of his or her acts."[43]

Several mitigating factors already shape the purposes and principles of sentencing in relation to genetics such as impairment of judgment and a disadvantaged background. A 'genetic predisposition to violence' could fall similarly under the same statute laws 718c in the USA as a mitigating factor in crime if the science behind genetic determinants can be found conclusive.[42] However, Duke University researcher Laura Baker disagrees, as "although genes may increase the propensity for criminality, for example, they do not determine it."[43]

Futures studies


From Wikipedia, the free encyclopedia


Moore's law is an example of futures studies; it is a statistical collection of past and present trends with the goal of accurately extrapolating future trends.
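The kind of trend extrapolation described in the caption can be sketched very simply: fit an exponential growth curve to historical data points and project it forward. The data points below are illustrative placeholders rather than a careful reconstruction of Moore's observations, and the projection inherits all the usual risks of assuming a past trend continues unchanged.

import numpy as np

# Illustrative (not historical) data: year and transistor count
years = np.array([1971, 1975, 1980, 1985, 1990, 1995, 2000])
counts = np.array([2.3e3, 6.5e3, 5.5e4, 5.0e5, 1.2e6, 9.0e6, 4.2e7])

# Fitting a line to log(count) vs. year gives an exponential growth model
slope, intercept = np.polyfit(years, np.log(counts), 1)
print(f"implied doubling time: {np.log(2) / slope:.1f} years")

# Extrapolate the fitted trend to a future year
year = 2010
print(f"projected count in {year}: {np.exp(intercept + slope * year):.2e}")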

Futures studies (also called futurology and futurism) is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. There is a debate as to whether this discipline is an art or a science. In general, it can be considered a branch of the social sciences and parallel to the field of history: history studies the past, while futures studies considers the future. Futures studies (colloquially called "futures" by many of the field's practitioners) seeks to understand what is likely to continue and what could plausibly change. Part of the discipline thus seeks a systematic and pattern-based understanding of past and present, and seeks to determine the likelihood of future events and trends.[1] Unlike the physical sciences, where a narrower, more specified system is studied, futures studies concerns a much bigger and more complex world system. Its methodology and knowledge are much less proven than those of natural science or even social sciences such as sociology, economics, and political science.

Overview

Futures studies is an interdisciplinary field, studying yesterday's and today's changes, and aggregating and analyzing both lay and professional strategies and opinions with respect to tomorrow. It includes analyzing the sources, patterns, and causes of change and stability in an attempt to develop foresight and to map possible futures. Around the world the field is variously referred to as futures studies, strategic foresight, futuristics, futures thinking, futuring, futurology, and futurism. Futures studies and strategic foresight are the academic field's most commonly used terms in the English-speaking world.

Foresight was the original term and was first used in this sense by H.G. Wells in 1932.[2] "Futurology" is a term common in encyclopedias, though it is used almost exclusively by nonpractitioners today, at least in the English-speaking world. "Futurology" is defined as the "study of the future."[3] The term was coined in the mid-1940s by German professor Ossip K. Flechtheim,[citation needed] who proposed it as a new branch of knowledge that would include a new science of probability. This term may have fallen from favor in recent decades because modern practitioners stress the importance of alternative and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures.[citation needed]

Three factors usually distinguish futures studies from the research conducted by other disciplines (although all of these disciplines overlap, to differing degrees). First, futures studies often examines not only possible but also probable, preferable, and "wild card" futures. Second, futures studies typically attempts to gain a holistic or systemic view based on insights from a range of different disciplines. Third, futures studies challenges and unpacks the assumptions behind dominant and contending views of the future. The future thus is not empty but fraught with hidden assumptions. For example, many people expect the collapse of the Earth's ecosystem in the near future, while others believe the current ecosystem will survive indefinitely. A foresight approach would seek to analyse and so highlight the assumptions underpinning such views.

Futures studies does not generally focus on short-term predictions such as interest rates over the next business cycle, or the concerns of managers or investors with short-term time horizons. Most strategic planning, which develops operational plans for preferred futures with time horizons of one to three years, is also not considered futures studies. Plans and strategies with longer time horizons that specifically attempt to anticipate possible future events are definitely part of the field.

The futures field also excludes those who make future predictions through professed supernatural means. At the same time, it does seek to understand the models such groups use and the interpretations they give to these models.

History

Johan Galtung and Sohail Inayatullah[4] argue in Macrohistory and Macrohistorians that the search for grand patterns of social change goes all the way back to Ssu-Ma Chien (145-90BC) and his theory of the cycles of virtue, although the work of Ibn Khaldun (1332–1406) such as The Muqaddimah[5] would be an example that is perhaps more intelligible to modern sociology. Some intellectual foundations of futures studies appeared in the mid-19th century; according to Wendell Bell, Comte's discussion of the metapatterns of social change presages futures studies as a scholarly dialogue.[6]

The first works that attempt to make systematic predictions about the future were written in the 18th century. Memoirs of the Twentieth Century, written by Samuel Madden in 1733, takes the form of a series of diplomatic letters written in 1997 and 1998 from British representatives in the foreign cities of Constantinople, Rome, Paris, and Moscow.[7] However, the technology of Madden's imagined twentieth century is identical to that of his own era; the focus is instead on the political and religious state of the world in the future. Madden went on to write The Reign of George VI, 1900 to 1925, where (in the context of the boom in canal construction at the time) he envisioned a large network of waterways that would radically transform patterns of living: "Villages grew into towns and towns became cities".[8]

The genre of science fiction became established towards the end of the 19th century, with notable writers, including Jules Verne and H. G. Wells, setting their stories in an imagined future world.

Origins


H. G. Wells first advocated for 'future studies' in a lecture delivered in 1902.

According to W. Warren Wagar, the founder of future studies was H. G. Wells. His Anticipations of the Reaction of Mechanical and Scientific Progress Upon Human Life and Thought: An Experiment in Prophecy, was first serially published in The Fortnightly Review in 1901.[9] Anticipating what the world would be like in the year 2000, the book is interesting both for its hits (trains and cars resulting in the dispersion of population from cities to suburbs; moral restrictions declining as men and women seek greater sexual freedom; the defeat of German militarism, and the existence of a European Union) and its misses (he did not expect successful aircraft before 1950, and averred that "my imagination refuses to see any sort of submarine doing anything but suffocate its crew and founder at sea").[10][11]

Moving from narrow technological predictions, Wells envisioned the eventual collapse of the capitalist world system after a series of destructive total wars. From this havoc would ultimately emerge a world of peace and plenty, controlled by competent technocrats.[9]

The work was a bestseller, and Wells was invited to deliver a lecture at the Royal Institution in 1902, entitled The Discovery of the Future. The lecture was well-received and was soon republished in book form. He advocated for the establishment of a new academic study of the future that would be grounded in scientific methodology rather than just speculation. He argued that a scientifically ordered vision of the future "will be just as certain, just as strictly science, and perhaps just as detailed as the picture that has been built up within the last hundred years to make the geological past." Although conscious of the difficulty in arriving at entirely accurate predictions, he thought that it would still be possible to arrive at a "working knowledge of things in the future".[9]

In his fictional works, Wells predicted the invention and use of the atomic bomb in The World Set Free (1914).[12] In The Shape of Things to Come (1933) he depicted the impending world war, with cities destroyed by aerial bombardment.[13] He nevertheless continued to advocate for the establishment of a futures science: in a 1933 BBC broadcast he called for the establishment of "Departments and Professors of Foresight", foreshadowing the development of modern academic futures studies by approximately 40 years.[2]

Emergence

Futures studies emerged as an academic discipline in the mid-1960s. First-generation futurists included Herman Kahn, an American Cold War strategist who wrote On Thermonuclear War (1960), Thinking about the unthinkable (1962) and The Year 2000: a framework for speculation on the next thirty-three years (1967); Bertrand de Jouvenel, a French economist who founded Futuribles International in 1960; and Dennis Gabor, a Hungarian-British scientist who wrote Inventing the Future (1963) and The Mature Society. A View of the Future (1972).[6]

Futures studies had a parallel origin with the birth of systems science in academia and with the idea of national economic and political planning, most notably in France and the Soviet Union.[6][14] In the 1950s, France was continuing to reconstruct its war-torn country. In the process, French scholars, philosophers, writers, and artists searched for what could constitute a more positive future for humanity. The Soviet Union similarly participated in postwar rebuilding, but did so in the context of an established national economic planning process, which also required a long-term, systemic statement of social goals. Futures studies in these countries was therefore primarily engaged in national planning and the construction of national symbols.

By contrast, in the United States of America, futures studies as a discipline emerged from the successful application of the tools and perspectives of systems analysis, especially with regard to quartermastering the war-effort. These differing origins account for an initial schism between futures studies in America and futures studies in Europe: U.S. practitioners focused on applied projects, quantitative tools and systems analysis, whereas Europeans preferred to investigate the long-range future of humanity and the Earth, what might constitute that future, what symbols and semantics might express it, and who might articulate these.[15][16]

By the 1960s, academics, philosophers, writers and artists across the globe had begun to explore enough future scenarios so as to fashion a common dialogue. Inventors such as Buckminster Fuller also began highlighting the effect technology might have on global trends as time progressed. This discussion on the intersection of population growth, resource availability and use, economic growth, quality of life, and environmental sustainability – referred to as the "global problematique" – came to wide public attention with the publication of Limits to Growth, a study sponsored by the Club of Rome.[17]

Further development

International dialogue became institutionalized in the form of the World Futures Studies Federation (WFSF), founded in 1967, with the noted sociologist, Johan Galtung, serving as its first president. In the United States, the publisher Edward Cornish, concerned with these issues, started the World Future Society, an organization focused more on interested laypeople.

1975 saw the founding of the first graduate program in futures studies in the United States, the M.S. program in Studies of the Future at the University of Houston at Clear Lake City;[18] it was followed a year later by the M.A. Program in Public Policy in Alternative Futures at the University of Hawaii at Manoa.[19] The Hawaii program is of particular interest in light of the schism in perspective between European and U.S. futurists; it bridges that schism by locating futures studies within a pedagogical space defined by neo-Marxism, critical political economic theory, and literary criticism. In the years following the foundation of these two programs, single courses in futures studies at all levels of education have proliferated, but complete programs remain rare.

As a transdisciplinary field, futures studies attracts generalists. This transdisciplinary nature can also cause problems, owing to it sometimes falling between the cracks of disciplinary boundaries; it also has caused some difficulty in achieving recognition within the traditional curricula of the sciences and the humanities. In contrast to "Futures Studies" at the undergraduate level, some graduate programs in strategic leadership or management offer masters or doctorate programs in "strategic foresight" for mid-career professionals, some even online. Nevertheless, comparatively few new PhDs graduate in Futures Studies each year.

The field currently faces the great challenge of creating a coherent conceptual framework, codified into a well-documented curriculum (or curricula) featuring widely accepted and consistent concepts and theoretical paradigms linked to quantitative and qualitative methods, exemplars of those research methods, and guidelines for their ethical and appropriate application within society. As an indication that previously disparate intellectual dialogues have in fact started converging into a recognizable discipline,[20] at least six solidly-researched and well-accepted first attempts to synthesize a coherent framework for the field have appeared: Eleonora Masini's Why Futures Studies,[21] James Dator's Advancing Futures Studies,[22] Ziauddin Sardar's Rescuing all of our Futures,[23] Sohail Inayatullah's Questioning the future,[24] Richard A. Slaughter's The Knowledge Base of Futures Studies,[25] a collection of essays by senior practitioners, and Wendell Bell's two-volume work, The Foundations of Futures Studies.[26]

Probability and predictability

Some aspects of the future, such as celestial mechanics, are highly predictable, and may even be described by relatively simple mathematical models. At present however, science has yielded only a special minority of such "easy to predict" physical processes. Theories such as chaos theory, nonlinear science and standard evolutionary theory have allowed us to understand many complex systems as contingent (sensitively dependent on complex environmental conditions) and stochastic (random within constraints), making the vast majority of future events unpredictable, in any specific case.

Not surprisingly, the tension between predictability and unpredictability is a source of controversy and conflict among futures studies scholars and practitioners. Some argue that the future is essentially unpredictable, and that "the best way to predict the future is to create it." Others believe, as Flechtheim did, that advances in science, probability, modeling and statistics will allow us to continue to improve our understanding of probable futures, although this area presently remains less well developed than methods for exploring possible and preferable futures.

As an example, consider the process of electing the president of the United States. At one level we observe that any U.S. citizen over 35 may run for president, so this process may appear too unconstrained for useful prediction. Yet further investigation demonstrates that only certain public individuals (current and former presidents and vice presidents, senators, state governors, popular military commanders, mayors of very large cities, etc.) receive the appropriate "social credentials" that are historical prerequisites for election. Thus with a minimum of effort at formulating the problem for statistical prediction, a much reduced pool of candidates can be described, improving our probabilistic foresight. Applying further statistical intelligence to this problem, we can observe that in certain election prediction markets such as the Iowa Electronic Markets, reliable forecasts have been generated over long spans of time and conditions, with results superior to individual experts or polls. Such markets, which may be operated publicly or as an internal market, are just one of several promising frontiers in predictive futures research.
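A toy sketch of the two steps described above: first narrow the field using the historical "credential" prerequisites, then turn prediction-market prices for the remaining contenders into a probability forecast. The candidate names, credentials, and prices below are all invented for illustration.

# Hypothetical candidates with invented prediction-market prices
# (cents per $1 winner-take-all share).
candidates = {
    "Candidate A": {"credential": "governor",       "market_price": 42},
    "Candidate B": {"credential": "senator",        "market_price": 35},
    "Candidate C": {"credential": "none",           "market_price": 4},
    "Candidate D": {"credential": "vice president", "market_price": 15},
}
qualifying = {"governor", "senator", "vice president", "general", "big-city mayor"}

# Step 1: restrict the pool to historically credentialed contenders.
pool = {name: c for name, c in candidates.items() if c["credential"] in qualifying}

# Step 2: normalize market prices within the pool into a probability forecast.
total = sum(c["market_price"] for c in pool.values())
forecast = {name: round(c["market_price"] / total, 2) for name, c in pool.items()}
print(forecast)  # e.g. {'Candidate A': 0.46, 'Candidate B': 0.38, 'Candidate D': 0.16}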

Such improvements in the predictability of individual events do not, however, from a complexity theory viewpoint, address the unpredictability inherent in dealing with entire systems, which emerge from the interaction between multiple individual events.

Methodologies

Futures practitioners use a wide range of models and methods (theory and practice), many of which come from other academic disciplines, including economics, sociology, geography, history, engineering, mathematics, psychology, technology, tourism, physics, biology, astronomy, and aspects of theology (specifically, the range of future beliefs).

One of the fundamental assumptions in futures studies is that the future is plural not singular, that is, that it consists of alternative futures of varying likelihood but that it is impossible in principle to say with certainty which one will occur. The primary effort in Futures studies, therefore, is to identify and describe alternative futures. This effort includes collecting quantitative and qualitative data about the possibility, probability, and desirability of change. The plurality of the term "futures" in futures studies denotes the rich variety of alternative futures, including the subset of preferable futures (normative futures), that can be studied.

Practitioners of the discipline previously concentrated on extrapolating present technological, economic or social trends, or on attempting to predict future trends, but more recently they have started to examine social systems and uncertainties and to build scenarios, question the worldviews behind such scenarios via the causal layered analysis method (and others), create preferred visions of the future, and use backcasting to derive alternative implementation strategies. Apart from extrapolation and scenarios, many dozens of methods and techniques are used in futures research (see below).

Futures studies also includes normative or preferred futures, but a major contribution involves connecting both extrapolated (exploratory) and normative research to help individuals and organisations to build better social futures amid a (presumed) landscape of shifting social changes. Practitioners use varying proportions of inspiration and research. Futures studies only rarely uses the scientific method in the sense of controlled, repeatable and falsifiable experiments with highly standardized methodologies, given that environmental conditions for repeating a predictive scheme are usually quite hard to control. However, many futurists are informed by scientific techniques. Some historians project patterns observed in past civilizations upon present-day society to anticipate what will happen in the future. Oswald Spengler's "Decline of the West" argued, for instance, that western society, like imperial Rome, had reached a stage of cultural maturity that would inexorably lead to decline, in measurable ways.

Futures studies is often summarized as being concerned with "three Ps and a W", or possible, probable, and preferable futures, plus wildcards, which are low probability but high impact events (positive or negative), should they occur. Many futurists, however, do not use the wild card approach. Rather, they use a methodology called Emerging Issues Analysis. It searches for the seeds of change, issues that are likely to move from unknown to the known, from low impact to high impact.

Estimates of probability are involved in two of the four central concerns of foresight professionals: discerning and classifying both probable and wildcard events. Considering the range of possible futures, recognizing the plurality of existing alternative futures, characterizing and attempting to resolve normative disagreements about the future, and envisioning and creating preferred futures are other major areas of scholarship. Most estimates of probability in futures studies are normative and qualitative, though significant progress on statistical and quantitative methods (technology and information growth curves, cliometrics, predictive psychology, prediction markets, etc.) has been made in recent decades.

Futures techniques

While forecasting – i.e., attempts to predict future states from current trends – is a common methodology, professional scenarios often rely on "backcasting": asking what changes in the present would be required to arrive at envisioned alternative future states. For example, the Policy Reform and Eco-Communalism scenarios developed by the Global Scenario Group rely on the backcasting method. Practitioners of futures studies classify themselves as futurists (or foresight practitioners).
Futurists use a diverse range of forecasting and foresight methods in addition to these.
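To make the forecasting/backcasting contrast above concrete, here is a minimal sketch in Python with invented figures: forecasting projects a present growth rate forward, while backcasting starts from a desired end state and solves for the annual change needed to reach it.

    # Sketch contrasting forecasting with backcasting; all numbers are invented.

    def forecast(current_value, annual_growth, years):
        """Forecasting: extrapolate the present trend into the future."""
        return current_value * (1 + annual_growth) ** years

    def backcast(current_value, target_value, years):
        """Backcasting: given a desired future state, solve for the annual
        rate of change required to reach it from the present."""
        return (target_value / current_value) ** (1 / years) - 1

    # Forecast: a quantity at 100 units growing 2% per year, 20 years out
    print(round(forecast(100, 0.02, 20), 1))                     # about 148.6

    # Backcast: what yearly change takes the quantity from 100 to 30 in 20 years?
    print(round(backcast(100, 30, 20) * 100, 1), "% per year")   # about -5.8

The design point is the direction of reasoning: the forecast takes the present as given, while the backcast takes the envisioned future as given.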

Shaping alternative futures

Futurists use scenarios – alternative possible futures – as an important tool. To some extent, people can determine what they consider probable or desirable using qualitative and quantitative methods. By looking at a variety of possibilities one comes closer to shaping the future, rather than merely predicting it. Shaping alternative futures starts by establishing a number of scenarios. Setting up scenarios takes place as a process with many stages. One of those stages involves the study of trends. A trend persists long-term and long-range; it affects many societal groups, grows slowly and appears to have a profound basis. In contrast, a fad operates in the short term, shows the vagaries of fashion, affects particular societal groups, and spreads quickly but superficially.

Sample predicted futures range from predicted ecological catastrophes, through a utopian future where the poorest human being lives in what present-day observers would regard as wealth and comfort, through the transformation of humanity into a posthuman life-form, to the destruction of all life on Earth in, say, a nanotechnological disaster.

Futurists have a decidedly mixed reputation and a patchy track record at successful prediction. For reasons of convenience, they often extrapolate present technical and societal trends and assume they will develop at the same rate into the future; but technical progress and social upheavals, in reality, take place in fits and starts and in different areas at different rates.

Many 1950s futurists predicted commonplace space tourism by the year 2000, but ignored the possibilities of ubiquitous, cheap computers, while Marxist expectations have failed to materialise to date. On the other hand, many forecasts have portrayed the future with some degree of accuracy. Current futurists often present multiple scenarios that help their audience envision what "may" occur instead of merely "predicting the future". They claim that understanding potential scenarios helps individuals and organizations prepare with flexibility.

Many corporations use futurists as part of their risk management strategy, for horizon scanning and emerging issues analysis, and to identify wild cards – low probability, potentially high-impact risks.[27] Every successful and unsuccessful business engages in futuring to some degree – for example in research and development, innovation and market research, anticipating competitor behavior and so on.[28][29]

Weak signals, the future sign and wild cards

In futures research, "weak signals" may be understood as advanced, noisy and socially situated indicators of change in trends and systems that constitute raw informational material for enabling anticipatory action. Researchers and consultants define the term in differing ways: sometimes it is treated as future-oriented information, sometimes as something closer to emerging issues. Elina Hiltunen (2007) sought to clarify this confusion with her concept of the future sign, which combines signal, issue and interpretation into a single construct that describes change more holistically.[30]

"Wild cards" refer to low-probability and high-impact events, such as existential risks. This concept may be embedded in standard foresight projects and introduced into anticipatory decision-making activity in order to increase the ability of social groups adapt to surprises arising in turbulent business environments. Such sudden and unique incidents might constitute turning points in the evolution of a certain trend or system. Wild cards may or may not be announced by weak signals, which are incomplete and fragmented data from which relevant foresight information might be inferred. Sometimes, mistakenly, wild cards and weak signals are considered as synonyms, which they are not.[31]

Near-term predictions

A long-running tradition in various cultures, and especially in the media, involves various spokespersons making predictions for the upcoming year at the beginning of the year. These predictions sometimes base themselves on current trends in culture (music, movies, fashion, politics); sometimes they make hopeful guesses as to what major events might take place over the course of the next year.

Some of these predictions come true as the year unfolds, though many fail. When predicted events fail to take place, the authors of the predictions often state that misinterpretation of the "signs" and portents may explain the failure of the prediction.

Marketers have increasingly begun to embrace futures studies in an effort to stay ahead in an increasingly competitive marketplace with fast production cycles, using such techniques as trendspotting, as popularized by Faith Popcorn.[dubious ]

Trend analysis and forecasting

Mega-trends

Trends come in different sizes. A mega-trend extends over many generations, and in the case of climate, mega-trends can cover periods prior to human existence. They describe complex interactions between many factors. The increase in population from the palaeolithic period to the present provides an example.

Potential trends

Possible new trends grow from innovations, projects, beliefs or actions that have the potential to grow and eventually go mainstream in the future. For example, just a few years ago alternative medicine remained an outcast from modern medicine. Now it has links with big business and has achieved a degree of respectability in some circles and even in the marketplace. This increasing level of acceptance illustrates a potential societal trend of moving away from the sciences, extending even beyond the scope of medicine.

Branching trends

Very often, trends relate to one another the same way as a tree-trunk relates to branches and twigs. For example, a well-documented movement toward equality between men and women might represent a branch trend. The trend toward reducing differences in the salaries of men and women in the Western world could form a twig on that branch.

Life-cycle of a trend

When does a potential trend gain acceptance as a bona fide trend? When it gets enough confirmation in the various media, surveys or questionnaires to show it has an increasingly accepted value, behavior or technology. Trends can also gain confirmation by the existence of other trends perceived as springing from the same branch. Some commentators claim that when 15% to 25% of a given population integrates an innovation, project, belief or action into their daily life then a trend becomes mainstream.
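As a hedged illustration of that 15% to 25% threshold claim, the sketch below (Python, using a made-up logistic adoption curve with invented parameters) reports when adoption of a hypothetical innovation first crosses each threshold.

    # Sketch: when does a hypothetical innovation cross the 15% and 25%
    # "mainstream" thresholds? The growth rate and midpoint are invented.
    import math

    def adoption_share(year, growth_rate=0.6, midpoint=2030):
        """Logistic adoption curve: share of the population using the innovation."""
        return 1.0 / (1.0 + math.exp(-growth_rate * (year - midpoint)))

    for threshold in (0.15, 0.25):
        year = next(y for y in range(2020, 2041) if adoption_share(y) >= threshold)
        print(f"crosses {threshold:.0%} of the population around {year}")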

Education

Education in the field of futures studies has taken place for some time. Beginning in the United States of America in the 1960s, it has since developed in many different countries. Futures education can encourage the use of concepts, tools and processes that allow students to think long-term, consequentially, and imaginatively. It generally helps students to:
  1. conceptualise more just and sustainable human and planetary futures.
  2. develop knowledge and skills in exploring probable and preferred futures.
  3. understand the dynamics and influence that human, social and ecological systems have on alternative futures.
  4. conscientize responsibility and action on the part of students toward creating better futures.
Thorough documentation of the history of futures education exists, for example in the work of Richard A. Slaughter (2004),[32] David Hicks, Ivana Milojević[33] and Jennifer Gidley[34][35][36] to name a few.

While futures studies remains a relatively new academic tradition, numerous tertiary institutions around the world teach it. These vary from small programs, or universities with just one or two classes, to programs that incorporate futures studies into other degrees (for example in planning, business, environmental studies, economics, development studies, science and technology studies). Various formal Masters-level programs exist on six continents. Finally, doctoral dissertations around the world have incorporated futures studies. A recent survey documented approximately 50 cases of futures studies at the tertiary level.[37]

The largest Futures Studies program in the world is at Tamkang University, Taiwan.[citation needed] Futures Studies is a required course at the undergraduate level, with between three and five thousand students taking classes annually. The Graduate Institute of Futures Studies houses an MA program, which accepts only ten students annually. Associated with the program is the Journal of Futures Studies.[38]

As of 2003, over 40 tertiary education establishments around the world were delivering one or more courses in futures studies. The World Futures Studies Federation[39] has a comprehensive survey of global futures programs and courses. The Acceleration Studies Foundation maintains an annotated list of primary and secondary graduate futures studies programs.[40]

Futurists

Several authors have become recognized as futurists. They research trends, particularly in technology, and write up their observations, conclusions, and predictions. In earlier eras, many futurists were at academic institutions. John McHale, author of The Future of the Future, published a 'Futures Directory' and directed a think tank called The Centre For Integrative Studies at a university. Other futurists have started consulting groups or earn money as speakers, with examples including Alvin Toffler, John Naisbitt and Patrick Dixon. Frank Feather is a business speaker who presents himself as a pragmatic futurist. Some futurists have commonalities with science fiction, and some science-fiction writers, such as Arthur C. Clarke, are known as futurists.[citation needed] In the introduction to The Left Hand of Darkness, Ursula K. Le Guin distinguished futurists from novelists, writing of the study as the business of prophets, clairvoyants, and futurists. In her words, "a novelist's business is lying".
A survey of 108 futurists[41] found the following shared assumptions:
  1. We are in the midst of a historical transformation. Current times are not just part of normal history.
  2. Multiple perspectives are at the heart of futures studies, including unconventional thinking, internal critique, and cross-cultural comparison.
  3. Consideration of alternatives. Futurists do not see themselves as value-free forecasters, but instead as aware of multiple possibilities.
  4. Participatory futures. Futurists generally see their role as liberating the future in each person, and creating enhanced public ownership of the future. This is true worldwide.[clarification needed]
  5. Long term policy transformation. While some are more policy-oriented than others, almost all believe that the work of futurism is to shape public policy, so it consciously and explicitly takes into account the long term.
  6. Part of the process of creating alternative futures and of influencing public (corporate, or international) policy is internal transformation. At international meetings, structural and individual factors are considered equally important.
  7. Complexity. Futurists believe that a simple one-dimensional or single-discipline orientation is not satisfactory. Trans-disciplinary approaches that take complexity seriously are necessary. Systems thinking, particularly in its evolutionary dimension, is also crucial.
  8. Futurists are motivated by change. They are not content merely to describe or forecast. They desire an active role in world transformation.
  9. They are hopeful for a better future as a "strange attractor".
  10. Most believe they are pragmatists in this world, even as they imagine and work for another. Futurists have a long term perspective.
  11. Sustainable futures, understood as making decisions that do not reduce future options, including policies on nature, gender and other accepted paradigms. This applies to corporate futurists and to NGOs. Environmental sustainability is reconciled with technological, spiritual and post-structural ideals. Sustainability is not a "back to nature" ideal, but rather inclusive of technology and culture.

Applications of foresight and specific fields

General applicability and use of foresight products

Several corporations and government agencies utilize foresight products both to better understand potential risks and to prepare for potential opportunities. Several government agencies publish material for internal stakeholders as well as make that material available to the broader public. Examples of this include the US Congressional Budget Office long-term budget projections,[42] the National Intelligence Council,[43] and the United Kingdom Government Office for Science.[44] Much of this material is used by policy makers to inform policy decisions and by government agencies to develop long-term plans. Several corporations, particularly those with long product development lifecycles, utilize foresight and futures studies products and practitioners in the development of their business strategies. The Shell Corporation is one such entity.[45] Foresight professionals and their tools are increasingly being utilized in both the private and public sectors to help leaders deal with an increasingly complex and interconnected world.

Fashion and design

Fashion is one area of trend forecasting. The industry typically works 18 months ahead of the current selling season.[citation needed] Large retailers look at the obvious impact of everything from the weather forecast to runway fashion in order to gauge consumer tastes. Consumer behavior and statistics are also important for a long-range forecast.

Artists and conceptual designers, by contrast, may feel that consumer trends are a barrier to creativity. Many of these ‘startists’ start micro trends but do not follow trends themselves.[citation needed]

Design

Foresight and futures thinking are rapidly being adopted by the design industry to ensure more sustainable, robust and humanistic products. Design, much like futures studies, is an interdisciplinary field that considers global trends, challenges and opportunities to foster innovation. Designers are thus adopting futures methodologies including scenarios, trend forecasting, and futures research.

Holistic thinking that incorporates strategic, innovative and anticipatory solutions gives designers the tools necessary to navigate complex problems and develop novel, future-enhancing and visionary solutions.

The Association for Professional Futurists has also held meetings discussing the ways in which Design Thinking and Futures Thinking intersect.

Energy and alternative sources

 While the price of oil probably will go down and up, the basic price trajectory is sharply up. Market forces will play an important role, but there are not enough new sources of oil in the Earth to make up for escalating demands from China, India, and the Middle East, and to replace declining fields. And while many alternative sources of energy exist in principle, none exists in fact in quality or quantity sufficient to make up for the shortfall of oil soon enough. A growing gap looms between the effective end of the Age of Oil and the possible emergence of new energy sources.[46]

Education

As foresight has expanded to include a broader range of social concerns, all levels and types of education have been addressed, including formal and informal education. Many countries are beginning to implement foresight in their education policy. A few programs are listed below:
  • Finland's FinnSight 2015[47] - Implementation began in 2006; although the program was not referred to as "foresight" at the time, it displays the characteristics of a foresight program.
  • Singapore's Ministry of Education Master plan for Information Technology in Education[48] - This third Masterplan builds on the first and second plans to transform learning environments and equip students to compete in a knowledge economy.


