
Thursday, July 29, 2021

Futures studies

From Wikipedia, the free encyclopedia

A rough approximation of Pangaea Proxima, a potential supercontinent that may exist in about 250 million years according to the early model on the Paleomap Project website
 
Moore's law is an example of futurology: a statistical regularity drawn from past and present trends, used with the goal of accurately extrapolating future trends.

Futures studies, futures research or futurology is the systematic, interdisciplinary and holistic study of social and technological advancement, and other environmental trends, often for the purpose of exploring how people will live and work in the future. Predictive techniques, such as forecasting, can be applied, but contemporary futures studies scholars emphasize the importance of systematically exploring alternatives. In general, it can be considered as a branch of the social sciences and parallel to the field of history. Futures studies (colloquially called "futures" by many of the field's practitioners) seeks to understand what is likely to continue and what could plausibly change. Part of the discipline thus seeks a systematic and pattern-based understanding of past and present, and aims to explore the possibility of future events and trends.

Unlike the physical sciences where a narrower, more specified system is studied, futurology concerns a much bigger and more complex world system. The methodology and knowledge are much less proven than in natural science and social sciences like sociology and economics. There is a debate as to whether this discipline is an art or science, and it is sometimes described as pseudoscience; nevertheless, the Association of Professional Futurists was formed in 2002, a Foresight Competency Model was developed in 2017, and it is now possible to study the subject academically, for example at the FU Berlin in its master's course in Zukunftsforschung.

Overview

Futurology is an interdisciplinary field that aggregates and analyzes trends, with both lay and professional methods, to compose possible futures. It includes analyzing the sources, patterns, and causes of change and stability in an attempt to develop foresight. Around the world the field is variously referred to as futures studies, futures research, strategic foresight, futuristics, futures thinking, futuring, and futurology. Futures studies and strategic foresight are the academic field's most commonly used terms in the English-speaking world.

Foresight was the original term and was first used in this sense by H.G. Wells in 1932. "Futurology" is a term common in encyclopedias, though it is used almost exclusively by nonpractitioners today, at least in the English-speaking world. "Futurology" is defined as the "study of the future." The term was coined by German professor Ossip K. Flechtheim in the mid-1940s, who proposed it as a new branch of knowledge that would include a new science of probability. This term has fallen from favor in recent decades because modern practitioners stress the importance of alternative, plausible, preferable and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures.

Three factors usually distinguish futures studies from the research conducted by other disciplines (although all of these disciplines overlap, to differing degrees). First, futures studies often examines trends to compose possible, probable, and preferable futures along with the role "wild cards" can play in future scenarios. Second, futures studies typically attempts to gain a holistic or systemic view based on insights from a range of different disciplines, generally focusing on the STEEP categories of Social, Technological, Economic, Environmental and Political. Third, futures studies challenges and unpacks the assumptions behind dominant and contending views of the future. The future thus is not empty but fraught with hidden assumptions. For example, many people expect the collapse of the Earth's ecosystem in the near future, while others believe the current ecosystem will survive indefinitely. A foresight approach would seek to analyze and highlight the assumptions underpinning such views.

As a field, futures studies expands on the research component by emphasizing the communication of a strategy and the actionable steps needed to implement the plan or plans leading to the preferable future. It is in this regard that futures studies evolves from an academic exercise to a more traditional business-like practice, looking to better prepare organizations for the future.

Futures studies does not generally focus on short-term predictions such as interest rates over the next business cycle, or the concerns of managers or investors with short-term time horizons. Most strategic planning, which develops goals and objectives with time horizons of one to three years, is also not considered futures work. Plans and strategies with longer time horizons that specifically attempt to anticipate possible future events are definitely part of the field. Learning about medium- and long-term developments may at times be observed from their early signs. As a rule, futures studies is generally concerned with changes of transformative impact, rather than those of an incremental or narrow scope.

The futures field also excludes those who make future predictions through professed supernatural means.

To complete a futures study, a domain is selected for examination. The domain is the main idea of the project, or what the outcome of the project seeks to determine. Domains can have a strategic or exploratory focus, and defining the domain narrows the scope of the research by establishing what will, and more importantly, will not be discussed. Futures practitioners then study trends along the STEEP (Social, Technological, Economic, Environmental and Political) categories. Baseline exploration examines the current STEEP environments to determine the normal trends, called baselines. Next, practitioners use scenarios to explore how the future could be different:

  1. Collapse scenarios ask what happens if the STEEP baselines fall into ruin and no longer exist, and how that would affect the STEEP categories.
  2. Transformation scenarios explore futures in which society transitions to a "new" state: how are the STEEP categories affected if society takes on a whole new structure?
  3. New Equilibrium scenarios ask what happens if the baseline shifts to a "new" baseline within the same structure of society.

(Hines, Andy; Bishop, Peter (2006). Thinking about the Future: Guidelines for Strategic Foresight.)
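As an illustration only, the following Python sketch shows one way the framing above (a domain, its STEEP baselines, and the archetypal scenario families) might be organized as data. The class, the labels, and the urban-mobility entries are hypothetical assumptions for this sketch, not part of Hines and Bishop's method.

```python
from dataclasses import dataclass, field

# The five STEEP categories used to organize baseline trends.
STEEP = ("Social", "Technological", "Economic", "Environmental", "Political")

# The archetypal scenario families described above.
ARCHETYPES = ("Baseline", "Collapse", "Transformation", "New Equilibrium")

@dataclass
class FuturesStudy:
    """Minimal framing of a futures study: a domain plus STEEP baselines
    and a set of alternative scenarios built from the archetypes."""
    domain: str                                    # what the project will (and will not) examine
    baselines: dict = field(default_factory=dict)  # current "normal" trends per STEEP category
    scenarios: dict = field(default_factory=dict)  # archetype -> narrative sketch

    def add_baseline(self, category: str, trend: str) -> None:
        assert category in STEEP, f"unknown STEEP category: {category}"
        self.baselines.setdefault(category, []).append(trend)

    def add_scenario(self, archetype: str, narrative: str) -> None:
        assert archetype in ARCHETYPES, f"unknown archetype: {archetype}"
        self.scenarios[archetype] = narrative

# Hypothetical example: framing a study of urban mobility to 2050.
study = FuturesStudy(domain="Urban mobility to 2050")
study.add_baseline("Technological", "Gradual adoption of electric vehicles")
study.add_baseline("Environmental", "Tightening urban emissions rules")
study.add_scenario("Collapse", "Energy shortages make private car use unaffordable")
study.add_scenario("Transformation", "Autonomous shared fleets replace car ownership entirely")
study.add_scenario("New Equilibrium", "Ownership persists, but most trips shift to public transit")
print(study.scenarios)
```

Structuring the framing this way simply makes the scoping decisions explicit; the work of exploring each scenario remains a qualitative exercise.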

History

Origins

Sir Thomas More, originator of the 'Utopian' ideal.

Johan Galtung and Sohail Inayatullah argue in Macrohistory and Macrohistorians that the search for grand patterns of social change goes all the way back to Sima Qian (145–90 BC) and his theory of the cycles of virtue, although the work of Ibn Khaldun (1332–1406), such as The Muqaddimah, would be an example that is perhaps more intelligible to modern sociology. Early western examples include Sir Thomas More's Utopia, published in 1516 and based upon Plato's Republic, in which a future society has overcome poverty and misery to create a perfect model for living. This work was so powerful that utopias, a word originally meaning "nowhere", have come to represent positive and fulfilling futures in which everyone's needs are met.

Some intellectual foundations of futures studies appeared in the mid-19th century. Auguste Comte, considered the father of scientific philosophy, was heavily influenced by the work of utopian socialist Henri Saint-Simon, and his discussion of the metapatterns of social change presages futures studies as a scholarly dialogue.

The first works that attempted to make systematic predictions for the future were written in the 18th century. Memoirs of the Twentieth Century, written by Samuel Madden in 1733, takes the form of a series of diplomatic letters written in 1997 and 1998 from British representatives in the foreign cities of Constantinople, Rome, Paris, and Moscow. However, the technology of the 20th century is identical to that of Madden's own era - the focus is instead on the political and religious state of the world in the future. Madden went on to write The Reign of George VI, 1900 to 1925, where (in the context of the boom in canal construction at the time) he envisioned a large network of waterways that would radically transform patterns of living - "Villages grew into towns and towns became cities".

In 1845, Scientific American, the oldest continuously published magazine in the U.S., began publishing articles about scientific and technological research, with a focus upon the future implications of such research. It would be followed in 1872 by the magazine Popular Science, which was aimed at a more general readership.

The genre of science fiction became established towards the end of the 19th century, with notable writers, including Jules Verne and H. G. Wells, setting their stories in an imagined future world.

Early 20th Century

H. G. Wells first advocated for 'future studies' in a lecture delivered in 1902.

According to W. Warren Wagar, the founder of future studies was H. G. Wells. His Anticipations of the Reaction of Mechanical and Scientific Progress Upon Human Life and Thought: An Experiment in Prophecy, was first serially published in The Fortnightly Review in 1901. Anticipating what the world would be like in the year 2000, the book is interesting both for its hits (trains and cars resulting in the dispersion of population from cities to suburbs; moral restrictions declining as men and women seek greater sexual freedom; the defeat of German militarism, the existence of a European Union, and a world order maintained by "English-speaking peoples" based on the urban core between Chicago and New York) and its misses (he did not expect successful aircraft before 1950, and averred that "my imagination refuses to see any sort of submarine doing anything but suffocate its crew and founder at sea").

Moving from narrow technological predictions, Wells envisioned the eventual collapse of the capitalist world system after a series of destructive total wars. From this havoc would ultimately emerge a world of peace and plenty, controlled by competent technocrats.

The work was a bestseller, and Wells was invited to deliver a lecture at the Royal Institution in 1902, entitled The Discovery of the Future. The lecture was well-received and was soon republished in book form. He advocated for the establishment of a new academic study of the future that would be grounded in scientific methodology rather than just speculation. He argued that a scientifically ordered vision of the future "will be just as certain, just as strictly science, and perhaps just as detailed as the picture that has been built up within the last hundred years to make the geological past." Although conscious of the difficulty in arriving at entirely accurate predictions, he thought that it would still be possible to arrive at a "working knowledge of things in the future".

In his fictional works, Wells predicted the invention and use of the atomic bomb in The World Set Free (1914). In The Shape of Things to Come (1933) he depicted the impending world war and cities destroyed by aerial bombardment. He also continued to advocate for the establishment of a futures science: in a 1933 BBC broadcast he called for the establishment of "Departments and Professors of Foresight", foreshadowing the development of modern academic futures studies by approximately 40 years.

At the beginning of the 20th century, works about the future were often shaped by political forces and turmoil. The WWI era led to the adoption of futures thinking in institutions throughout Europe. The Russian Revolution led to the 1921 establishment of the Soviet Union's Gosplan, or State Planning Committee, which was active until the dissolution of the Soviet Union. Gosplan was responsible for economic planning and created plans in five-year increments to govern the economy. One of the first Soviet dissidents, Yevgeny Zamyatin, published the first dystopian novel, We, in 1921. The work of science fiction and political satire featured a future police state and was the first work censored by the Soviet censorship board, leading to Zamyatin's political exile.

In the United States, President Hoover created the Research Committee on Social Trends, which produced a report in 1933. The head of the committee, William F. Ogburn, analyzed the past to chart trends and project those trends into the future, with a focus on technology. A similar technique was used during the Great Depression, with the addition of alternative futures and a set of likely outcomes that resulted in the creation of Social Security and the Tennessee Valley development project.

The WWII era emphasized the growing need for foresight. The Nazis used strategic plans to unify and mobilize their society with a focus on creating a fascist utopia. This planning and the subsequent war forced global leaders to create their own strategic plans in response. The post-war era saw the creation of numerous nation states with complex political alliances and was further complicated by the introduction of nuclear power.

Project RAND was created in 1946 as a joint project between the United States Army Air Forces and the Douglas Aircraft Company, and was later incorporated as the non-profit RAND Corporation. Its objective was to study the future of weapons and to conduct long-range planning to meet future threats. Its work has formed the basis of US strategy and policy in regard to nuclear weapons, the Cold War, and the space race.

Mid-Century Emergence

Futures studies truly emerged as an academic discipline in the mid-1960s. First-generation futurists included Herman Kahn, an American Cold War strategist for the RAND Corporation who wrote On Thermonuclear War (1960), Thinking About the Unthinkable (1962) and The Year 2000: A Framework for Speculation on the Next Thirty-Three Years (1967); Bertrand de Jouvenel, a French economist who founded Futuribles International in 1960; and Dennis Gabor, a Hungarian-British scientist who wrote Inventing the Future (1963) and The Mature Society: A View of the Future (1972).

Futures studies had a parallel origin with the birth of systems science in academia, and with the idea of national economic and political planning, most notably in France and the Soviet Union. In the 1950s, the people of France were continuing to reconstruct their war-torn country. In the process, French scholars, philosophers, writers, and artists searched for what could constitute a more positive future for humanity. The Soviet Union similarly participated in postwar rebuilding, but did so in the context of an established national economic planning process, which also required a long-term, systemic statement of social goals. Futures studies in both countries was therefore primarily engaged in national planning and the construction of national symbols.

Rachel Carson, author of Silent Spring, which helped launch the environmental movement and a new direction for futures research.

By contrast, in the United States, futures studies as a discipline emerged from the successful application of the tools and perspectives of systems analysis, especially with regard to quartermastering the war effort. The Society for General Systems Research, founded in 1955, sought to understand cybernetics and the practical application of systems sciences, greatly influencing the U.S. foresight community. These differing origins account for an initial schism between futures studies in America and "futurology" in Europe: U.S. practitioners focused on applied projects, quantitative tools and systems analysis, whereas Europeans preferred to investigate the long-range future of humanity and the Earth, what might constitute that future, what symbols and semantics might express it, and who might articulate these.

By the 1960s, academics, philosophers, writers and artists across the globe had begun to explore enough future scenarios so as to fashion a common dialogue. Several of the most notable writers to emerge during this era include sociologist Fred L. Polak, whose work Images of the Future (1961) discusses the importance of images to society's creation of the future; Marshall McLuhan, whose The Gutenberg Galaxy (1962) and Understanding Media: The Extensions of Man (1964) put forth his theories on how technologies change our cognitive understanding; and Rachel Carson, whose Silent Spring (1962) was hugely influential not only on futures studies but also on the creation of the environmental movement.

Inventors such as Buckminster Fuller also began highlighting the effect technology might have on global trends as time progressed.

By the 1970s there was an obvious shift in the use and development of futures studies; its focus was no longer exclusive to governments and militaries. Instead, it embraced a wide array of technologies, social issues, and concerns. This discussion on the intersection of population growth, resource availability and use, economic growth, quality of life, and environmental sustainability – referred to as the "global problematique" – came to wide public attention with the publication of The Limits to Growth by Donella Meadows and her co-authors, a study sponsored by the Club of Rome which detailed the results of a computer simulation of the future based on economic and population growth. Public interest in the future was further heightened by the publication of Alvin and Heidi Toffler's bestseller Future Shock (1970), and its exploration of how great amounts of change can overwhelm people and create a social paralysis due to "information overload."

Further development

International dialogue became institutionalized in the form of the World Futures Studies Federation (WFSF), founded in 1967, with the noted sociologist Johan Galtung serving as its first president. In the United States, the publisher Edward Cornish, concerned with these issues, started the World Future Society, an organization focused more on interested laypeople. The Association of Professional Futurists was founded in 2002 and spans 40 countries with more than 400 members. Its mission is to promote professional excellence by "demonstrating the value of strategic foresight and futures studies."

The first doctoral program on the Study of the Future was founded in 1969 at the University of Massachusetts by Christopher Dede and Billy Rojas. The next graduate program (a master's degree) was also founded by Christopher Dede, in 1975 at the University of Houston–Clear Lake. Oliver Markley of SRI (now SRI International) was hired in 1978 to move the program in a more applied and professional direction. The program moved to the University of Houston in 2007, where the degree was renamed Foresight. The program has remained focused on preparing professional futurists and providing high-quality foresight training for individuals and organizations in business, government, education, and non-profits. In 1976, the M.A. Program in Public Policy in Alternative Futures at the University of Hawaii at Manoa was established. The Hawaii program locates futures studies within a pedagogical space defined by neo-Marxism, critical political economic theory, and literary criticism. In the years following the foundation of these two programs, single courses in futures studies at all levels of education have proliferated, but complete programs remain rare.

In 2010, the Free University of Berlin initiated a master's degree programme in Futures Studies, the first of its kind in Germany. In 2012, the Finland Futures Research Centre started a master's degree programme in Futures Studies at Turku School of Economics, a business school that is part of the University of Turku in Finland.

Foresight and futures work cover any domain a company considers important; therefore, a futurist must be able to cross domains and industries in their work. There is continued discussion by people in the profession on how to advance it, with some preferring to keep the field open to anyone interested in the future and others arguing to make the credentialing more rigorous. There are approximately 23 graduate and PhD programs in foresight globally, and many other certification courses.

The field currently faces the challenge of creating a coherent conceptual framework, codified into a well-documented curriculum (or curricula) featuring widely accepted and consistent concepts and theoretical paradigms linked to quantitative and qualitative methods, exemplars of those research methods, and guidelines for their ethical and appropriate application within society. As an indication that previously disparate intellectual dialogues have in fact started converging into a recognizable discipline, at least seven solidly researched and well-accepted attempts to synthesize a coherent framework for the field have appeared: Eleonora Masini's Why Futures Studies?, James Dator's Advancing Futures Studies, Ziauddin Sardar's Rescuing All Our Futures, Sohail Inayatullah's Questioning the Future, Richard A. Slaughter's The Knowledge Base of Futures Studies (a collection of essays by senior practitioners), Wendell Bell's two-volume work The Foundations of Futures Studies, and Andy Hines and Peter Bishop's Thinking about the Future.

Probability and predictability

While understanding the difference between the concepts of probability and predictability is very important to understanding the future, the field of futures studies is generally more focused on long-term futures, in which the concept of plausibility becomes the greater concern. The usefulness of probability and predictability to the field lies more in analyzing the quantifiable trends and drivers which influence future change than in predicting future events.
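As a minimal illustration of analyzing a quantifiable trend, the sketch below fits a log-linear (exponential) model to a short historical series and extrapolates it a decade ahead, in the spirit of Moore's-law-style trend extrapolation. The data points, horizon, and variable names are invented for illustration; in practice such an extrapolation is one input among many, not a prediction.

```python
import numpy as np

# Hypothetical historical series: (year, observed value), e.g. a capability
# or adoption metric that appears to grow roughly exponentially.
years = np.array([2000, 2005, 2010, 2015, 2020])
values = np.array([1.0, 2.1, 4.3, 8.8, 17.5])

# Fit a straight line to log(value) vs. year: log y = a*year + b.
a, b = np.polyfit(years, np.log(values), deg=1)
doubling_time = np.log(2) / a  # years needed for the value to double

# Extrapolate the fitted trend a decade ahead; uncertainty grows with the horizon.
future_years = np.arange(2021, 2031)
projected = np.exp(a * future_years + b)

print(f"Estimated doubling time: {doubling_time:.1f} years")
for yr, val in zip(future_years, projected):
    print(yr, round(float(val), 1))
```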

Some aspects of the future, such as celestial mechanics, are highly predictable, and may even be described by relatively simple mathematical models. At present however, science has yielded only a special minority of such "easy to predict" physical processes. Theories such as chaos theory, nonlinear science and standard evolutionary theory have allowed us to understand many complex systems as contingent (sensitively dependent on complex environmental conditions) and stochastic (random within constraints), making the vast majority of future events unpredictable, in any specific case.

Not surprisingly, the tension between predictability and unpredictability is a source of controversy and conflict among futures studies scholars and practitioners. Some argue that the future is essentially unpredictable, and that "the best way to predict the future is to create it." Others believe, as Flechtheim, that advances in science, probability, modeling and statistics will allow us to continue to improve our understanding of probable futures, as this area presently remains less well developed than methods for exploring possible and preferable futures.

As an example, consider the process of electing the president of the United States. At one level we observe that any U.S. citizen over 35 may run for president, so this process may appear too unconstrained for useful prediction. Yet further investigation demonstrates that only certain public individuals (current and former presidents and vice presidents, senators, state governors, popular military commanders, mayors of very large cities, celebrities, etc.) receive the appropriate "social credentials" that are historical prerequisites for election. Thus, with a minimum of effort at formulating the problem for statistical prediction, a much-reduced pool of candidates can be described, improving our probabilistic foresight. Applying further statistical intelligence to this problem, we can observe that in certain election prediction markets such as the Iowa Electronic Markets, reliable forecasts have been generated over long spans of time and conditions, with results superior to individual experts or polls. Such markets, which may be operated publicly or as an internal market, are just one of several promising frontiers in predictive futures research.
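In winner-take-all prediction markets such as those mentioned above, the price of a contract that pays one dollar if its candidate wins is commonly read as a rough probability. The short sketch below shows that reading under assumed, hypothetical prices; it is not based on any real market data.

```python
# Hypothetical winner-take-all market: each contract pays $1 if its candidate wins.
# Contract prices (in dollars) are commonly read as rough probabilities.
prices = {
    "Candidate A": 0.55,
    "Candidate B": 0.38,
    "Candidate C": 0.09,
}

# Prices can sum to slightly more or less than 1 (fees, illiquidity),
# so normalize them to obtain an implied probability distribution.
total = sum(prices.values())
implied = {name: price / total for name, price in prices.items()}

for name, prob in sorted(implied.items(), key=lambda item: -item[1]):
    print(f"{name}: {prob:.1%}")
```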

Such improvements in the predictability of individual events do not, however, from a complexity theory viewpoint, address the unpredictability inherent in dealing with entire systems, which emerge from the interaction between multiple individual events.

Futurology is sometimes described by scientists as pseudoscience. Science exists in the realm of the certain and builds knowledge through attempting to falsify predictions.  Futures studies, however, exists in the realm of the uncertain but also builds knowledge through attempting to falsify predictions and exposing uncertainty.  So in a sense, both science and futures studies share the same goal. The difference is that futures studies attempts to understand, mitigate, and utilize uncertainty.

Methodologies

In terms of methodology, futures practitioners employ a wide range of approaches, models and methods, in both theory and practice, many of which are derived from or informed by other academic or professional disciplines, including social sciences such as economics, psychology, sociology, religious studies, cultural studies, history, geography, and political science; physical and life sciences such as physics, chemistry, astronomy, biology; mathematics, including statistics, game theory and econometrics; applied disciplines such as engineering, computer sciences, and business management (particularly strategy).

The largest internationally peer-reviewed collection of futures research methods (1,300 pages) is Futures Research Methodology 3.0. Each of the 37 methods or groups of methods contains: an executive overview of each method's history, description of the method, primary and alternative usages, strengths and weaknesses, uses in combination with other methods, and speculation about future evolution of the method. Some also contain appendixes with applications, links to software, and sources for further information. More recent method books, such as "How Do We Explore Our Futures?" have also been published.

Given its unique objectives and material, the practice of futures studies only rarely features employment of the scientific method in the sense of controlled, repeatable and verifiable experiments with highly standardized methodologies. However, many futurists are informed by scientific techniques or work primarily within scientific domains. Borrowing from history, the futurist might project patterns observed in past civilizations upon present-day society to model what might happen in the future, or borrowing from technology, the futurist may model possible social and cultural responses to an emerging technology based on established principles of the diffusion of innovation. In short, the futures practitioner enjoys the synergies of an interdisciplinary laboratory.

As the plural term “futures” suggests, one of the fundamental assumptions in futures studies is that the future is plural not singular. That is, the future consists not of one inevitable future that is to be “predicted,” but rather of multiple alternative futures of varying likelihood which may be derived and described, and about which it is impossible to say with certainty which one will occur. The primary effort in futures studies, then, is to identify and describe alternative futures in order to better understand the driving forces of the present or the structural dynamics of a particular subject or subjects. The exercise of identifying alternative futures includes collecting quantitative and qualitative data about the possibility, probability, and desirability of change. The plural term "futures" in futures studies denotes both the rich variety of alternative futures, including the subset of preferable futures (normative futures), that can be studied, as well as the tenet that the future is many.

At present, the general futures studies model has been summarized as being concerned with "three Ps and a W", or possible, probable, and preferable futures, plus wildcards, which are unexpected, seemingly low-probability but high-impact events (positive or negative). Many futurists do not use the wild card approach. Rather, they use a methodology called Emerging Issues Analysis, which searches for the drivers of change: issues that are likely to move from the unknown to the known and from low impact to high impact.

In terms of technique, futures practitioners originally concentrated on extrapolating present technological, economic or social trends, or on attempting to predict future trends. Over time, the discipline has come to put more and more focus on the examination of social systems and uncertainties, to the end of articulating scenarios. The practice of scenario development facilitates the examination of worldviews and assumptions through the causal layered analysis method (and others), the creation of preferred visions of the future, and the use of exercises such as backcasting to connect the present with alternative futures. Apart from extrapolation and scenarios, many dozens of methods and techniques are used in futures research (see below).

Therefore, the general practice of futures studies also sometimes includes the articulation of normative or preferred futures, and a major thread of practice involves connecting both extrapolated (exploratory) and normative research to assist individuals and organizations to model preferred futures amid shifting social changes. For instance, despite many wicked, global challenges in today's world from climate change to extreme poverty, the aspect of preferability or "what should happen" can at times be overlooked. Practitioners use varying proportions of collaboration, creativity and research to derive and define alternative futures, and to the degree that a “preferred” future might be sought, especially in an organizational context, techniques may also be deployed to develop plans or strategies for directed future shaping or implementation of a preferred future.

While some futurists are not concerned with assigning probability to future scenarios, other futurists find probabilities useful in certain situations, such as when probabilities stimulate thinking about scenarios within organizations. When dealing with the three Ps and a W model, estimates of probability are involved with two of the four central concerns (discerning and classifying both probable and wildcard events), while considering the range of possible futures, recognizing the plurality of existing alternative futures, characterizing and attempting to resolve normative disagreements on the future, and envisioning and creating preferred futures are other major areas of scholarship. Most estimates of probability in futures studies are normative and qualitative, though significant progress on statistical and quantitative methods (technology and information growth curves, cliometrics, predictive psychology, prediction markets, crowd-voting forecasts, etc.) has been made in recent decades.

Futures techniques

Futures techniques or methodologies may be viewed as "frameworks for making sense of data generated by structured processes to think about the future". There is no single set of methods that is appropriate for all futures research. Different futures researchers intentionally or unintentionally promote the use of favored techniques over a more structured approach. Selection of methods for futures research projects has so far been dominated by the intuition and insight of practitioners; a more balanced selection of techniques can be achieved by acknowledging foresight as a process and by becoming familiar with the fundamental attributes of the most commonly used methods.

Scenarios are a central technique in Futures Studies and are often confused with other techniques. The flowchart to the right provides a process for classifying a phenomenon as a scenario in the intuitive logics tradition.

Process for classifying a phenomenon as a scenario in the Intuitive Logics tradition.

Futurists use a diverse range of forecasting and foresight methods, several of which are outlined in the sections below.

Shaping alternative futures

Futurists use scenarios – alternative possible futures – as an important tool. To some extent, people can determine what they consider probable or desirable using qualitative and quantitative methods. By looking at a variety of possibilities one comes closer to shaping the future, rather than merely predicting it. Shaping alternative futures starts by establishing a number of scenarios. Setting up scenarios is a multi-stage process and can be done in an evidence-based manner. Scenarios can also study unlikely and improbable developments that would otherwise be ignored; however, for credibility, they should not be entirely utopian or dystopian. One of those stages involves the study of emerging issues, such as megatrends, trends and weak signals. Megatrends illustrate major, long-term phenomena that change slowly, are often interlinked and cannot be transformed in an instant. Trends express an increase or a decrease in a phenomenon, and there are many ways to spot trends. Some argue that a trend persists long-term and long-range; affects many societal groups; grows slowly; and appears to have a profound basis. A fad operates in the short term, shows the vagaries of fashion, affects particular societal groups, and spreads quickly but superficially.

Futurists have a decidedly mixed reputation and a patchy track record at successful prediction. Many 1950s futurists predicted commonplace space tourism by the year 2000, but ignored the possibilities of ubiquitous, cheap computers. On the other hand, many forecasts have portrayed the future with some degree of accuracy. Sample predicted futures range from ecological catastrophes, through a utopian future where the poorest human being lives in what present-day observers would regard as wealth and comfort, through the transformation of humanity into a posthuman life-form, to the destruction of all life on Earth in, say, a nanotechnological disaster. For reasons of convenience, futurists have often extrapolated present technical and societal trends and assumed they will develop at the same rate into the future; but technical progress and social upheavals, in reality, take place in fits and starts and in different areas at different rates.

Therefore, to some degree, the field has aimed to move away from prediction. Current futurists often present multiple scenarios that help their audience envision what "may" occur instead of merely "predicting the future". They claim that understanding potential scenarios helps individuals and organizations prepare with flexibility.

Many corporations use futurists as part of their risk management strategy, for horizon scanning and emerging issues analysis, and to identify wild cards – low probability, potentially high-impact risks. Understanding a range of possibilities can enhance the recognition of opportunities and threats. Every successful and unsuccessful business engages in futuring to some degree – for example in research and development, innovation and market research, anticipating competitor behavior and so on.

Weak signals, the future sign and wild cards

In futures research "weak signals" may be understood as advanced, noisy and socially situated indicators of change in trends and systems that constitute raw informational material for enabling anticipatory action. There is some confusion about the definition of weak signal among various researchers and consultants: sometimes it is referred to as future-oriented information, sometimes more like an emerging issue. The confusion has been partly clarified with the concept of 'the future sign', which separates the signal, the issue, and the interpretation of the future sign.

A weak signal can be an early indicator of coming change, and an example might also help clarify the confusion. On May 27, 2012, hundreds of people gathered for a “Take the Flour Back” demonstration at Rothamsted Research in Harpenden, UK, to oppose a publicly funded trial of genetically modified wheat. This was a weak signal for a broader shift in consumer sentiment against genetically modified foods. When Whole Foods mandated the labeling of GMOs in 2013, this non-GMO idea had already become a trend and was about to be a topic of mainstream awareness.

"Wild cards" refer to low-probability and high-impact events "that happen quickly" and "have huge sweeping consequences," and materialize too quickly for social systems to effectively respond. Elina Hultunen notes that wild cards are not new, though they have become more prevalent. One reason for this may be the increasingly fast pace of change. Oliver Markley proposed four types of wild cards:

  • Type I Wild Card: low probability, high impact, high credibility
  • Type II Wild Card: high probability, high impact, low credibility
  • Type III Wild Card: high probability, high impact, disputed credibility
  • Type IV Wild Card: high probability, high impact, high credibility

He posits that it is important to track the emergence of Type II wild cards, which have a high probability of occurring but low credibility. This focus is especially important because it is often difficult to persuade people to accept something they do not believe is happening until they see the wild card materialize. An example is climate change. The hypothesis has moved from Type I (low probability, high impact, high credibility: the science was accepted, but the outcome was thought unlikely) to Type II (high probability, high impact, low credibility, as policy makers and lobbyists pushed back against the science) to Type III (high probability, high impact, disputed credibility): most people now accept the science, but some will probably not accept it until the Greenland ice sheet has completely melted and sea level has risen by the estimated seven meters.
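Markley's typology amounts to a lookup on three attributes (probability, impact, credibility). The Python sketch below is an illustrative restatement of the list above, not code from Markley or any other cited source; the climate-change calls simply replay the progression described in the preceding paragraph.

```python
def classify_wild_card(probability: str, impact: str, credibility: str) -> str:
    """Map (probability, impact, credibility) onto Markley's four wild card types.

    All four types assume high impact; they differ on probability and on how
    credible the event is perceived to be ('low', 'disputed', or 'high')."""
    if impact != "high":
        return "not a wild card in this typology"
    if probability == "low" and credibility == "high":
        return "Type I"
    if probability == "high":
        return {"low": "Type II",
                "disputed": "Type III",
                "high": "Type IV"}.get(credibility, "unclassified")
    return "unclassified"

# The climate-change progression described above, restated step by step:
print(classify_wild_card("low", "high", "high"))       # Type I: accepted science, thought unlikely
print(classify_wild_card("high", "high", "low"))       # Type II: likely, but credibility contested
print(classify_wild_card("high", "high", "disputed"))  # Type III: credibility still disputed by some
```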

This concept may be embedded in standard foresight projects and introduced into anticipatory decision-making activity in order to increase the ability of social groups to adapt to surprises arising in turbulent business environments. Such sudden and unique incidents might constitute turning points in the evolution of a certain trend or system. Wild cards may or may not be announced by weak signals, which are incomplete and fragmented data from which relevant foresight information might be inferred. Sometimes, mistakenly, wild cards and weak signals are treated as synonyms, which they are not. One of the most often cited examples of a wild card event in recent history is 9/11. Nothing had happened in the past that could point to such a possibility, and yet it had a huge impact on everyday life in the United States, from simple tasks like how to travel via airplane to deeper cultural values. Wild card events might also be natural disasters, such as Hurricane Katrina, which can force the relocation of huge populations and wipe out entire crops or completely disrupt the supply chain of many businesses. Although wild card events cannot be predicted, after they occur it is often easy to reflect back and convincingly explain why they happened.

Near-term predictions

A long-running tradition in various cultures, and especially in the media, involves various spokespersons making predictions for the upcoming year at the beginning of the year. These predictions are thought-provokers, which are sometimes based on current trends in culture (music, movies, fashion, politics); sometimes they make hopeful guesses as to what major events might take place over the course of the next year. Evidently, some of these predictions may come true as the year unfolds, though many fail. When predicted events fail to take place, the authors of the predictions may state that misinterpretation of the "signs" and portents may explain the failure of the prediction.

Marketers have increasingly started to embrace futures studies, in an effort to benefit from an increasingly competitive marketplace with fast production cycles, using such techniques as trendspotting as popularized by Faith Popcorn.

Trend analysis and forecasting

Megatrends

Trends come in different sizes. A megatrend extends over many generations, and in cases of climate, megatrends can cover periods prior to human existence. They describe complex interactions between many factors. The increase in population from the palaeolithic period to the present provides an example. Current megatrends are likely to produce greater change than previous ones, because technology is causing trends to unfold at an accelerating pace. The concept was popularized by the 1982 book Megatrends by futurist John Naisbitt.

Potential trends

Possible new trends grow from innovations, projects, beliefs or actions and activism that have the potential to grow and eventually go mainstream in the future.

Branching trends

Very often, trends relate to one another the same way as a tree-trunk relates to branches and twigs. For example, a well-documented movement toward equality between men and women might represent a branch trend. The trend toward reducing differences in the salaries of men and women in the Western world could form a twig on that branch.

Life cycle of a trend

Understanding the technology adoption cycle helps futurists monitor trend development. Trends start as weak signals: small mentions in fringe media outlets, discussion forums or blog posts, often by innovators. As these ideas, projects, beliefs or technologies gain acceptance, they move into the phase of early adopters. In the beginning of a trend's development, it is difficult to tell whether it will become a significant trend that creates changes or merely a trendy fad that fades into forgotten history. Trends emerge as initially unconnected dots but eventually coalesce into persistent change.

A trend becomes accepted as a bona fide trend when enough confirmation occurs in the various media, surveys or questionnaires to show that it represents an increasingly accepted value, behavior or technology. Trends can also gain confirmation from the existence of other trends perceived as springing from the same branch. Some commentators claim that a trend becomes mainstream when 15% to 25% of a given population has integrated the innovation, project, belief or action into their daily life.
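The adoption life cycle and the 15% to 25% "mainstream" threshold mentioned above can be illustrated with a generic logistic (S-curve) adoption model. This is a textbook curve, not a method prescribed by the commentators cited here, and the midpoint and growth rate below are arbitrary assumptions.

```python
import math

def logistic_adoption(year: float, midpoint: float = 2030.0,
                      growth_rate: float = 0.4, ceiling: float = 1.0) -> float:
    """Fraction of the population that has adopted an innovation by `year`,
    using a logistic S-curve with an assumed midpoint and growth rate."""
    return ceiling / (1.0 + math.exp(-growth_rate * (year - midpoint)))

# Find the (illustrative) years in which adoption crosses the 15%-25% band
# that some commentators treat as the threshold for a mainstream trend.
for year in range(2020, 2041):
    share = logistic_adoption(year)
    marker = " <- entering the 15-25% 'mainstream' band" if 0.15 <= share <= 0.25 else ""
    print(f"{year}: {share:.0%}{marker}")
```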

General Hype Cycle used to visualize technological life stages of maturity, adoption, and social application.

Life cycle of technologies

Gartner created their Hype Cycle to illustrate the phases a technology moves through as it grows from research and development to mainstream adoption. The unrealistic expectations and subsequent disillusionment that virtual reality experienced in the 1990s and early 2000s is an example of the middle phases encountered before a technology can begin to be integrated into society.

Education

Education in the field of futures studies has taken place for some time. Beginning in the United States in the 1960s, it has since developed in many different countries. Futures education encourages the use of concepts, tools and processes that allow students to think long-term, consequentially, and imaginatively. It generally helps students to:

  1. conceptualize more just and sustainable human and planetary futures.
  2. develop knowledge and skills of methods and tools used to help people understand, map, and influence the future by exploring probable and preferred futures.
  3. understand the dynamics and influence that human, social and ecological systems have on alternative futures.
  4. conscientize responsibility and action on the part of students toward creating better futures.

Thorough documentation of the history of futures education exists, for example in the work of Richard A. Slaughter (2004), David Hicks, and Ivana Milojević, among others.

While futures studies remains a relatively new academic tradition, numerous tertiary institutions around the world teach it. These vary from small programs, or universities with just one or two classes, to programs that offer certificates and incorporate futures studies into other degrees (for example in planning, business, environmental studies, economics, development studies, science and technology studies). Various formal master's-level programs exist on six continents. Finally, doctoral dissertations around the world have incorporated futures studies (see e.g. Rohrbeck, 2010; von der Gracht, 2008; Hines, 2012). A recent survey documented approximately 50 cases of futures studies at the tertiary level.

A Futures Studies program is offered at Tamkang University, Taiwan. Futures Studies is a required course at the undergraduate level, with between three and five thousand students taking classes on an annual basis. An MA program is housed in the Graduate Institute of Futures Studies, which accepts only ten students annually. Associated with the program is the Journal of Futures Studies.

The longest-running futures studies program in North America was established in 1975 at the University of Houston–Clear Lake. It moved to the University of Houston in 2007, where the degree was renamed Foresight. The program was established on the belief that if history is studied and taught in an academic setting, then so should the future. Its mission is to prepare professional futurists. The curriculum incorporates a blend of the essential theory, a framework and methods for doing the work, and a focus on application for clients in business, government, nonprofits, and society in general.

As of 2003, over 40 tertiary education establishments around the world were delivering one or more courses in futures studies. The World Futures Studies Federation has a comprehensive survey of global futures programs and courses. The Acceleration Studies Foundation maintains an annotated list of primary and secondary graduate futures studies programs.

An MA program in Futures Studies has been offered at the Free University of Berlin since 2010.

An MSocSc and a PhD program in Futures Studies are offered at the University of Turku, Finland.

Applications of foresight and specific fields

General applicability and use of foresight products

Several corporations and government agencies utilize foresight products to better understand potential risks and to prepare for potential opportunities, as an anticipatory approach. Several government agencies publish material for internal stakeholders as well as make that material available to the broader public. Examples of this include the US Congressional Budget Office's long-term budget projections, the National Intelligence Council, and the United Kingdom Government Office for Science. Much of this material is used by policy makers to inform policy decisions and by government agencies to develop long-term plans. Several corporations, particularly those with long product development lifecycles, utilize foresight and futures studies products and practitioners in the development of their business strategies. The Shell Corporation is one such entity. Foresight professionals and their tools are increasingly being used in both the private and public sectors to help leaders deal with an increasingly complex and interconnected world.

Imperial cycles and world order

Imperial cycles represent an "expanding pulsation" of a "mathematically describable" macro-historic trend.

Chinese philosopher K'ang Yu-wei and French demographer Georges Vacher de Lapouge stressed in the late 19th century that this trend cannot proceed indefinitely on the finite surface of the globe and is bound to culminate in a world empire. K'ang Yu-wei predicted that the matter would be decided in a contest between Washington and Berlin; Vacher de Lapouge foresaw this contest as being between the United States and Russia and wagered that the odds were in the United States' favour. Both published their futures studies before H. G. Wells introduced the science of the future in his Anticipations (1901).

Four later anthropologists—Hornell Hart, Raoul Naroll, Louis Morano, and Robert Carneiro—researched the expanding imperial cycles. They reached the same conclusion that a world empire is not only pre-determined but close at hand and attempted to estimate the time of its appearance.

Education

As foresight has expanded to include a broader range of social concerns, all levels and types of education have been addressed, including formal and informal education. Many countries are beginning to implement foresight in their education policy. A few programs are listed below:

  • Finland's FinnSight 2015 - implementation began in 2006, and though it was not referred to as "foresight" at the time, it displays the characteristics of a foresight program.
  • Singapore's Ministry of Education Masterplan for Information Technology in Education - the third Masterplan builds on the first and second plans to transform learning environments and equip students to compete in a knowledge economy.
  • The World Future Society, founded in 1966, is the largest and longest-running community of futurists in the world. WFS established and built futurism from the ground up—through publications, global summits, and advisory roles to world leaders in business and government.

By the early 2000s, educators began to independently institute futures studies (sometimes referred to as futures thinking) lessons in K-12 classroom environments. To meet the need, non-profit futures organizations designed curriculum plans to supply educators with materials on the topic. Many of the curriculum plans were developed to meet common core standards. Futures studies education methods for youth typically include age-appropriate collaborative activities, games, systems thinking and scenario building exercises.

There are several organizations devoted to furthering the advancement of foresight and futures studies worldwide. Teach the Future emphasizes foresight educational practices appropriate for K-12 schools. The University of Houston has a Master's (MS) level graduate program through the College of Technology as well as a certificate program for those interested in advanced studies. The Department of Political Science at the University of Hawaii at Manoa houses the Hawaii Research Center for Futures Studies, which offers a Master's (MA) in addition to a Doctorate (Ph.D.).

Science fiction

Wendell Bell and Ed Cornish acknowledge science fiction as a catalyst to futures studies, conjuring up visions of tomorrow. Science fiction's potential to provide an "imaginative social vision" is its contribution to futures studies and public perspective. Productive sci-fi presents plausible, normative scenarios. Jim Dator attributes the foundational concepts of "images of the future" to Wendell Bell, for clarifying Fred Polak's concept in Images of the Future as it applies to futures studies. Similar to the scenario thinking of futures studies, empirically supported visions of the future are a window into what the future could be. However, unlike in futures studies, most science fiction works present a single alternative, unless the narrative deals with multiple timelines or alternative realities, as in the works of Philip K. Dick and a multitude of small- and big-screen works. Pamela Sargent states, "Science fiction reflects attitudes typical of this century." She gives a brief history of impactful sci-fi publications, such as The Foundation Trilogy by Isaac Asimov and Starship Troopers by Robert A. Heinlein. Alternate perspectives validate sci-fi as part of the fuzzy "images of the future."

Brian David Johnson is a futurist and author who uses science fiction to help build the future. He has been a futurist at Intel, and is now the resident futurist at Arizona State University. “His work is called ‘future casting’—using ethnographic field studies, technology research, trend data, and even science fiction to create a pragmatic vision of consumers and computing.” Brian David Johnson has developed a practical guide to utilizing science fiction as a tool for futures studies. Science Fiction Prototyping combines the past with the present, including interviews with notable science fiction authors to provide the tools needed to “design the future with science fiction.”

Science Fiction Prototyping has five parts:

  1. Pick your science concept and build an imaginative world
  2. The scientific inflection point
  3. The consequences, for better, or worse, or both, of the science or technology on the people and your world
  4. The human inflection point
  5. Reflection: what did we learn?

"A full Science Fiction Prototyping (SFP) is 6-12 pages long, with a popular structure being: an introduction, background work, the fictional story (the bulk of the SFP), a short summary and a summary (reflection). Most often science fiction prototypes extrapolate current science forward and, therefore, include a set of references at the end."

Ian Miles reviews The New Encyclopedia of Science Fiction, identifying ways in which science fiction and futures studies "cross-fertilize, as well as the ways in which they differ distinctly." Science fiction cannot simply be considered fictionalized futures studies. It may have aims other than foresight or "prediction", and may be no more concerned with shaping the future than any other genre of literature. It is not to be understood as an explicit pillar of futures studies, owing to its inconsistent integration of futures research. Additionally, Dennis Livingston, a critic for the journal Futures, says, "The depiction of truly alternative societies has not been one of science fiction's strong points", especially with regard to preferred, normative futures. The strengths of the genre as a form of futurist thinking are discussed by Tom Lombardo, who argues that select science fiction "combines a highly detailed and concrete level of realism with theoretical speculation on the future", "addresses all the main dimensions of the future and synthesizes all these dimensions into integrative visions of the future", and "reflects contemporary and futurist thinking", and therefore "can be viewed as the mythology of the future."

It is notable that although there are no hard limits on horizons in futures studies and foresight efforts, typical future horizons explored are within the realm of the practical and do not span more than a few decades. Nevertheless, there are hard science fiction works that can be applicable as visioning exercises spanning longer periods of time when the topic is of a significant time scale, such as in the case of Kim Stanley Robinson's Mars Trilogy, which deals with the terraforming of Mars and extends two centuries forward through the early 23rd century. In fact, there is some overlap between science fiction writers and professional futurists, as in the case of David Brin. Arguably, the work of science fiction authors has seeded many ideas that have later been developed (be they technological or social in nature) - from the early works of Jules Verne and H.G. Wells to the later Arthur C. Clarke and William Gibson. Beyond literary works, futures studies and futurists have influenced film and TV works. The 2002 movie adaptation of Philip K. Dick's short story Minority Report used a group of consultants, including futurist Peter Schwartz, to build a realistic vision of the future. TV shows such as HBO's Westworld and Channel 4/Netflix's Black Mirror follow many of the rules of futures studies to build the world, the scenery and the storytelling in the way futurists would in experiential scenarios and works.

Science Fiction novels for Futurists:

  • William Gibson, Neuromancer, Ace Books, 1984. (Pioneering cyberpunk novel)
  • Kim Stanley Robinson, Red Mars, Spectra, 1993. (Story about the founding of a colony on Mars)
  • Bruce Sterling, Heavy Weather, Bantam, 1994. (Story about a world with drastically altered climate and weather)
  • Iain M. Banks' Culture novels (Space operas set in the distant future with thoughtful treatments of advanced AI)

Government agencies

Several governments have formalized strategic foresight agencies to encourage long-range strategic societal planning; the most notable are the governments of Singapore, Finland, and the United Arab Emirates. Other governments with strategic foresight agencies include Canada, with Policy Horizons Canada, and Malaysia, with the Malaysian Foresight Institute.

The Singapore government's Centre for Strategic Futures (CSF) is part of the Strategy Group within the Prime Minister's Office. Its mission is to position the Singapore government to navigate emerging strategic challenges and harness potential opportunities. Singapore's early formal efforts in strategic foresight began in 1991 with the establishment of the Risk Detection and Scenario Planning Office in the Ministry of Defence. In addition to the CSF, the Singapore government has established the Strategic Futures Network, which brings together deputy secretary-level officers and foresight units across the government to discuss emerging trends that may have implications for Singapore.

Since the 1990s, Finland has integrated strategic foresight within the parliament and Prime Minister's Office. The government is required to present a "Report of the Future" each parliamentary term for review by the parliamentary Committee for the Future. Led by the Prime Minister's Office, the Government Foresight Group coordinates the government's foresight efforts. Futures research is supported by the Finnish Society for Futures Studies (established in 1980), the Finland Futures Research Centre (established in 1992), and the Finland Futures Academy (established in 1998) in coordination with foresight units in various government agencies.

In the United Arab Emirates, Sheikh Mohammed bin Rashid, Vice President and Ruler of Dubai, announced in September 2016 that all government ministries were to appoint Directors of Future Planning. Sheikh Mohammed described the UAE Strategy for the Future as an "integrated strategy to forecast our nation’s future, aiming to anticipate challenges and seize opportunities". The Ministry of Cabinet Affairs and Future (MOCAF) is mandated with crafting the UAE Strategy for the Future and is responsible for the UAE's portfolio of the future.

In 2018, the United States Government Accountability Office (GAO) created the Center for Strategic Foresight to enhance its ability to “serve as the agency’s principal hub for identifying, monitoring, and analyzing emerging issues facing policymakers.” The Center is composed of non-resident Fellows who are considered leading experts in foresight, planning and future thinking. In September 2019 it hosted a conference on space policy and “deep fake” synthetic media, which can be used to manipulate online and real-world interactions.

Risk analysis and management

Foresight is a framework or lens which could be used in risk analysis and management in a medium- to long-term time range. A typical formal foresight project would identify key drivers and uncertainties relevant to the scope of analysis. It would also analyze how the drivers and uncertainties could interact to create the most probable scenarios of interest and what risks they might contain. An additional step would be identifying actions to avoid or minimize these risks.
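
As a toy illustration of that workflow, the following Python sketch enumerates scenarios from two critical uncertainties and attaches candidate risks to each combination, in the spirit of a simple 2x2 scenario matrix. Every driver name, outcome and risk label below is hypothetical, invented purely for illustration.

    from itertools import product

    # Hypothetical critical uncertainties for a medium-term energy-sector analysis.
    # Each uncertainty has two plausible extreme outcomes (a classic 2x2 matrix).
    uncertainties = {
        "carbon price": ["low", "high"],
        "grid-scale storage cost": ["stays expensive", "falls sharply"],
    }

    # Hypothetical risks associated with particular outcomes of each uncertainty.
    risks = {
        ("carbon price", "high"): "stranded fossil-fuel assets",
        ("carbon price", "low"): "missed decarbonisation targets",
        ("grid-scale storage cost", "stays expensive"): "reliance on peaker plants",
        ("grid-scale storage cost", "falls sharply"): "utility revenue disruption",
    }

    # Enumerate every combination of outcomes to form candidate scenarios,
    # then list the risks each scenario would contain.
    for outcome in product(*uncertainties.values()):
        scenario = dict(zip(uncertainties.keys(), outcome))
        scenario_risks = [risks[(k, v)] for k, v in scenario.items() if (k, v) in risks]
        print(scenario, "->", scenario_risks)

A real foresight project would of course weigh far more drivers, judge the plausibility and impact of each combination qualitatively, and go on to identify actions that avoid or mitigate the risks each scenario contains.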

One classic example of such work was how foresight work at the Royal Dutch Shell international oil company led the company to envision the turbulent oil prices of the 1970s as a possibility and to embed that possibility in its planning. Yet the practice at Shell focuses on stretching the company's thinking rather than on making predictions. Its planning is meant to link and embed scenarios in “organizational processes such as strategy making, innovation, risk management, public affairs, and leadership development.”

Foresight studies can also consider the possibility of “wild card” events – events that many consider impossible to envision – although such events can often be imagined as remote possibilities as part of foresight work. Another possible area of focus for a foresight lens is identifying the conditions for potential scenarios of high-level risks to society.

These risks may arise from the development and adoption of emerging technologies and/or social change. Special interest lies on hypothetical future events that have the potential to damage human well-being on a global scale - global catastrophic risks. Such events may cripple or destroy modern civilization or, in the case of existential risks, even cause human extinction. Potential global catastrophic risks include but are not limited to climate change, hostile artificial intelligence, nanotechnology weapons, nuclear warfare, total war, and pandemics. The aim of a professional futurist would be to identify conditions that could lead to these events in order to create “pragmatically feasible roads to alternative futures.”

Academic programs and research centers

Futurists

Futurists are practitioners of the foresight profession, which seeks to provide organizations and individuals with images of the future to help them prepare for contingencies and to maximize opportunities. A foresight project begins with a question that ponders the future of a given subject area, such as technology, medicine, government or business. Futurists engage in environmental scanning to search for drivers of change and emerging trends that may have an effect on the focus topic. The scanning process includes reviewing social media platforms, researching existing reports, engaging in Delphi studies, reading articles and other sources of relevant information, and preparing and analyzing data extrapolations. Then, through one of a number of highly structured methods, futurists organize this information and use it to create multiple future scenarios for the topic, also known as a domain. The value of preparing many different versions of the future, rather than a single prediction, is that they allow a client to prepare long-range plans that will hold up under, and take advantage of, a variety of possible contexts.
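
The "data extrapolations" step can be as simple as fitting a trend to a scanned indicator and projecting it forward. Below is a minimal Python sketch using entirely made-up indicator values; in practice a futurist would scan many indicators and treat any single extrapolation as one input into scenario building rather than as a forecast.

    import numpy as np

    # Hypothetical yearly values of some scanned indicator (e.g. adoption of a
    # technology, in millions of users); purely invented for illustration.
    years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021])
    values = np.array([1.2, 1.8, 2.9, 4.1, 6.3, 9.8, 14.7])

    # Fit a linear trend to the logarithm of the values (i.e. assume roughly
    # exponential growth), then extrapolate a decade ahead.
    slope, intercept = np.polyfit(years, np.log(values), deg=1)
    future_years = np.arange(2022, 2032)
    projection = np.exp(intercept + slope * future_years)

    for year, value in zip(future_years, projection):
        print(f"{year}: ~{value:.1f} (extrapolated, not a prediction)")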

Books

APF's list of most significant futures works

The Association for Professional Futurists recognizes the Most Significant Futures Works for the purpose of identifying and rewarding the work of foresight professionals and others whose work illuminates aspects of the future.

Author – Title, Year
Bertrand de Jouvenel – L’Art de la conjecture (The Art of Conjecture), 2008
Donella Meadows – The Limits to Growth, 2008
Peter Schwartz – The Art of the Long View, 2008
Ray Kurzweil – The Age of Spiritual Machines: When Computers Exceed Human Intelligence, 2008
Jerome C. Glenn & Theodore J. Gordon – Futures Research Methodology Version 2.0, 2008
Jerome C. Glenn & Theodore J. Gordon – The State of the Future, 2008
Jared Diamond – Collapse: How Societies Choose to Fail or Succeed, 2008
Richard Slaughter – The Biggest Wake up Call in History, 2012
Richard Slaughter – The Knowledge Base of Futures Studies, 2008
Worldwatch Institute – State of the World (book series), 2008
Nassim Nicholas Taleb – The Black Swan: The Impact of the Highly Improbable, 2012
Tim Jackson (economist) – Prosperity Without Growth, 2012
Jørgen Randers – 2052: A Global Forecast for the Next Forty Years, 2013
Stroom den Haag – Food for the City, 2013
Andy Hines & Peter C. Bishop – Teaching About the Future, 2014
James A. Dator – Advancing Futures - Futures Studies in Higher Education
Ziauddin Sardar – Future: All that Matters, 2014
Emma Marris – Rambunctious Garden: Saving Nature in a Post-Wild World, 2014
Sohail Inayatullah – What Works: Case Studies in the Practice of Foresight, 2016
Dougal Dixon – After Man: A Zoology of the Future

Other notable foresight books

For further suggestions, please visit A Resource Bibliography by Dr. Peter Bishop

Periodicals and journals

Organizations

Foresight professional networks

Public-sector foresight organizations

Non-governmental foresight organizations

Technological unemployment

From Wikipedia, the free encyclopedia
 
In the 21st century, robots are beginning to perform roles not just in manufacturing, but also in the service sector – in healthcare, for example.

Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment.

Technological change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation), and humans' role in these processes is minimized. Just as horses were gradually made obsolete as transport by the automobile and as labourers by the tractor, humans' jobs have also been affected throughout modern history.

Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. During World War II, Alan Turing's Bombe machine compressed and decoded thousands of man-years worth of encrypted data in a matter of hours. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills.

That technological change can cause short-term job losses is widely accepted. The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs. Pessimists, by contrast, contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". Yet the issue of machines displacing human labour has been discussed since at least Aristotle's time.

Prior to the 18th century, both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it became increasingly apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment.

The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century.

In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may be increasing worldwide. Oxford professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation. However, their findings have frequently been misinterpreted, and on the PBS NewsHour they again made clear that their findings do not necessarily imply future technological unemployment. While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again. A 2017 report in Wired quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". Recent technological innovations have the potential to render humans obsolete in professional, white-collar, low-skilled and creative fields, and other "mental jobs". The World Bank's World Development Report 2019 argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance.

Issues within the debates

Long term effects on employment

There are more sectors losing jobs than creating jobs. And the general-purpose aspect of software technology means that even the industries and jobs that it creates are not forever.

Lawrence Summers

All participants in the technological unemployment debates agree that temporary job losses can result from technological innovation. Similarly, there is no dispute that innovation sometimes has positive effects on workers. Disagreement focuses on whether it is possible for innovation to have a lasting negative impact on overall employment. Levels of persistent unemployment can be quantified empirically, but the causes are subject to debate. Optimists accept that short-term unemployment may be caused by innovation, yet claim that after a while, compensation effects will always create at least as many jobs as were originally destroyed. While this optimistic view has been continually challenged, it was dominant among mainstream economists for most of the 19th and 20th centuries. For example, labor economists Jacob Mincer and Stephan Danninger developed an empirical study using micro-data from the Panel Study of Income Dynamics, and found that although in the short run technological progress seems to have unclear effects on aggregate unemployment, it reduces unemployment in the long run. When they included a five-year lag, however, the evidence supporting a short-run employment effect of technology seemed to disappear as well, suggesting that technological unemployment "appears to be a myth".

The concept of structural unemployment, a lasting level of joblessness that does not disappear even at the high point of the business cycle, became popular in the 1960s. For pessimists, technological unemployment is one of the factors driving the wider phenomenon of structural unemployment. Since the 1980s, even optimistic economists have increasingly accepted that structural unemployment has indeed risen in advanced economies, but they have tended to blame this on globalisation and offshoring rather than technological change. Others claim a chief cause of the lasting increase in unemployment has been the reluctance of governments to pursue expansionary policies since the displacement of Keynesianism that occurred in the 1970s and early 80s. In the 21st century, and especially since 2013, pessimists have been arguing with increasing frequency that lasting worldwide technological unemployment is a growing threat.

Compensation effects

John Kay, Inventor of the Fly Shuttle A.D. 1753, by Ford Madox Brown, depicting the inventor John Kay kissing his wife goodbye as men carry him away from his home to escape a mob angry about his labour-saving invention. Compensation effects were not widely understood at this time.

Compensation effects are labour-friendly consequences of innovation which "compensate" workers for job losses initially caused by new technology. In the 1820s, several compensation effects were described by Say in response to Ricardo's statement that long-term technological unemployment could occur. Soon after, a whole system of effects was developed by Ramsay McCulloch. The system was labelled "compensation theory" by Marx, who proceeded to attack the ideas, arguing that none of the effects were guaranteed to operate. Disagreement over the effectiveness of compensation effects has remained a central part of academic debates on technological unemployment ever since.

Compensation effects include:

  1. By new machines. (The labour needed to build the new equipment that the innovation requires.)
  2. By new investments. (Enabled by the cost savings and therefore increased profits from the new technology.)
  3. By changes in wages. (In cases where unemployment does occur, this can cause a lowering of wages, thus allowing more workers to be re-employed at the now lower cost. On the other hand, sometimes workers will enjoy wage increases as their profitability rises. This leads to increased income and therefore increased spending, which in turn encourages job creation.)
  4. By lower prices. (Which then lead to more demand, and therefore more employment.) Lower prices can also help offset wage cuts, as cheaper goods will increase workers' buying power.
  5. By new products. (Where innovation directly creates new jobs.)

The "by new machines" effect is now rarely discussed by economists; it is often accepted that Marx successfully refuted it. Even pessimists often concede that product innovation associated with the "by new products" effect can sometimes have a positive effect on employment. An important distinction can be drawn between 'process' and 'product' innovations. Evidence from Latin America seems to suggest that product innovation significantly contributes to the employment growth at the firm level, more so than process innovation. The extent to which the other effects are successful in compensating the workforce for job losses has been extensively debated throughout the history of modern economics; the issue is still not resolved. One such effect that potentially complements the compensation effect is job multiplier. According to research developed by Enrico Moretti, with each additional skilled job created in high tech industries in a given city, more than two jobs are created in the non-tradable sector. His findings suggest that technological growth and the resulting job-creation in high-tech industries might have a more significant spillover effect than we have anticipated. Evidence from Europe also supports such a job multiplier effect, showing local high-tech jobs could create five additional low-tech jobs.

Many economists now pessimistic about technological unemployment accept that compensation effects did largely operate as the optimists claimed through most of the 19th and 20th century. Yet they hold that the advent of computerisation means that compensation effects are now less effective. An early example of this argument was made by Wassily Leontief in 1983. He conceded that after some disruption, the advance of mechanization during the Industrial Revolution actually increased the demand for labour as well as increasing pay, due to effects that flow from increased productivity. While early machines lowered the demand for muscle power, they were unintelligent and needed large armies of human operators to remain productive. Yet since the introduction of computers into the workplace, there is now less need not just for muscle power but also for human brain power. Hence even as productivity continues to rise, the lower demand for human labour may mean less pay and employment. However, this argument is not fully supported by more recent empirical studies. One study by Erik Brynjolfsson and Lorin M. Hitt in 2003 presents direct evidence of a positive short-term effect of computerization on firm-level measured productivity and output growth. In addition, they find the long-term productivity contribution of computerization and technological change might even be greater.

The Luddite fallacy

If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.

Alex Tabarrok

The term "Luddite fallacy" is sometimes used to express the view that those concerned about long term technological unemployment are committing a fallacy, as they fail to account for compensation effects. People who use the term typically expect that technological progress will have no long-term impact on employment levels, and eventually will raise wages for all workers, because progress helps to increase the overall wealth of society. The term is based on the early 19th century example of the Luddites. During the 20th century and the first decade of the 21st century, the dominant view among economists has been that belief in long term technological unemployment was indeed a fallacy. More recently, there has been increased support for the view that the benefits of automation are not equally distributed.

There are two underlying premises for why long-term difficulty could develop. The one that has traditionally been deployed is that ascribed to the Luddites (whether or not it is a truly accurate summary of their thinking), which is that there is a finite amount of work available and if machines do that work, there can be no other work left for humans to do. Economists may call this the lump of labour fallacy, arguing that in reality no such limitation exists. However, the other premise is that it is possible for long-term difficulty to arise that has nothing to do with any lump of labour. In this view, the amount of work that can exist is infinite, but (1) machines can do most of the "easy" work, (2) the definition of what is "easy" expands as information technology progresses, and (3) the work that lies beyond "easy" (the work that requires more skill, talent, knowledge, and insightful connections between pieces of knowledge) may require greater cognitive faculties than most humans are able to supply, as point 2 continually advances. This latter view is the one supported by many modern advocates of the possibility of long-term, systemic technological unemployment.

Skill levels and technological unemployment

A common view among those discussing the effect of innovation on the labour market has been that it mainly hurts those with low skills, while often benefiting skilled workers. According to scholars such as Lawrence F. Katz, this may have been true for much of the twentieth century, yet in the 19th century, innovations in the workplace largely displaced costly skilled artisans, and generally benefited the low skilled. While 21st century innovation has been replacing some unskilled work, other low skilled occupations remain resistant to automation, while white collar work requiring intermediate skills is increasingly being performed by autonomous computer programs.

Some recent studies however, such as a 2015 paper by Georg Graetz and Guy Michaels, found that at least in the area they studied – the impact of industrial robots – innovation is boosting pay for highly skilled workers while having a more negative impact on those with low to medium skills. A 2015 report by Carl Benedikt Frey, Michael Osborne and Citi Research agreed that innovation had been disruptive mostly to middle-skilled jobs, yet predicted that in the next ten years the impact of automation would fall most heavily on those with low skills.

Geoff Colvin at Fortune argued that predictions about the kinds of work a computer will never be able to do have proven inaccurate. A better approach to anticipating the skills on which humans will provide value would be to identify activities where we will insist that humans remain accountable for important decisions, such as with judges, CEOs, bus drivers and government leaders, or where human nature can only be satisfied by deep interpersonal connections, even if those tasks could be automated.

In contrast, others see even skilled human laborers becoming obsolete. Oxford academics Carl Benedikt Frey and Michael A. Osborne have predicted that computerization could make nearly half of jobs redundant; of the 702 professions assessed, they found that education and income correlated strongly with a job's susceptibility to automation, with office jobs and service work being some of the more at-risk occupations. In 2012, Sun Microsystems co-founder Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine-learning medical diagnostic software.

The issue of redundant jobs is elaborated in a 2019 paper by Natalya Kozlova, according to which over 50% of workers in Russia perform work that requires low levels of education and can be replaced by applying digital technologies. Only 13% of those people possess education that exceeds the level of the intelligent computer systems present today or expected within the following decade.

Empirical findings

There has been substantial empirical research attempting to quantify the impact of technological unemployment, mostly at the microeconomic level. Most existing firm-level research has found technological innovations to be labor-friendly. For example, German economists Stefan Lachenmaier and Horst Rottmann find that both product and process innovation have a positive effect on employment. They also find that process innovation has a more significant job-creation effect than product innovation. This result is supported by evidence from the United States as well, which shows that manufacturing-firm innovations have a positive effect on the total number of jobs, an effect not limited to firm-specific behavior.

At the industry level, however, researchers have found mixed results with regard to the employment effect of technological changes. A 2017 study on manufacturing and service sectors in 11 European countries suggests that positive employment effects of technological innovations only exist in the medium- and high-tech sectors. There also seems to be a negative correlation between employment and capital formation, which suggests that technological progress could potentially be labor-saving given that process innovation is often incorporated in investment.

Limited macroeconomic analysis has been done to study the relationship between technological shocks and unemployment. The small amount of existing research, however, suggests mixed results. Italian economist Marco Vivarelli finds that the labor-saving effect of process innovation seems to have affected the Italian economy more negatively than the United States. On the other hand, the job creating effect of product innovation could only be observed in the United States, not Italy. Another study in 2013 finds a more transitory, rather than permanent, unemployment effect of technological change.

Measures of technological innovation

There have been four main approaches that attempt to capture and document technological innovation quantitatively. The first one, proposed by Jordi Gali in 1999 and further developed by Neville Francis and Valerie A. Ramey in 2005, is to use long-run restrictions in a vector autoregression (VAR) to identify technological shocks, assuming that only technology affects long-run productivity.
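
A stylized version of that identification scheme can be sketched in a few lines of Python. The sketch below uses synthetic data and the statsmodels VAR estimator; it is only a simplified illustration of the long-run-restriction idea applied to a bivariate system of productivity growth and hours, not a reproduction of the cited studies.

    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for productivity growth and (detrended) hours worked.
    # Purely illustrative; a real application would use measured series.
    T = 400
    data = np.column_stack([
        0.4 * rng.standard_normal(T) + 0.02,   # productivity growth
        0.6 * rng.standard_normal(T),          # hours worked, detrended
    ])

    # Estimate a reduced-form VAR(4).
    res = VAR(data).fit(4)
    sigma_u = np.asarray(res.sigma_u)      # covariance of reduced-form residuals
    A_sum = res.coefs.sum(axis=0)          # sum of the lag coefficient matrices

    # Long-run cumulative impact of reduced-form shocks: C(1) = (I - sum_i A_i)^-1
    C1 = np.linalg.inv(np.eye(2) - A_sum)

    # Long-run restriction: only the technology shock moves productivity in the
    # long run, so the long-run impact matrix C(1) @ B must be lower triangular.
    # Take the Cholesky factor of the long-run covariance and back out B.
    P = np.linalg.cholesky(C1 @ sigma_u @ C1.T)
    B = np.linalg.inv(C1) @ P

    # Recover the structural shocks; the first one is labelled "technology".
    eps = np.linalg.inv(B) @ res.resid.T
    print("Impact matrix B:\n", B)
    print("Std. dev. of identified technology shock:", round(float(eps[0].std()), 3))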

The second approach is from Susanto Basu, John Fernald and Miles Kimball. They create a measure of aggregate technology change with augmented Solow residuals, controlling for aggregate, non-technological effects such as non-constant returns and imperfect competition.
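
The building block of this second approach is the Solow residual, the part of output growth not explained by growth in capital and labour inputs. A minimal Python sketch of the unadjusted residual is shown below; the figures are invented, and the corrections for utilization, non-constant returns and imperfect competition that Basu, Fernald and Kimball apply are deliberately omitted.

    import numpy as np

    # Hypothetical log levels of output, capital and labour over five periods.
    log_Y = np.array([4.60, 4.65, 4.72, 4.76, 4.83])
    log_K = np.array([5.10, 5.13, 5.17, 5.20, 5.24])
    log_L = np.array([3.90, 3.92, 3.93, 3.95, 3.96])

    alpha = 0.33  # assumed capital share of income

    # Solow residual (TFP growth): dlnY - alpha*dlnK - (1 - alpha)*dlnL.
    dY, dK, dL = np.diff(log_Y), np.diff(log_K), np.diff(log_L)
    solow_residual = dY - alpha * dK - (1 - alpha) * dL
    print("Period-by-period technology growth estimate:", solow_residual.round(4))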

The third method, initially developed by John Shea in 1999, takes a more direct approach and employs observable indicators such as research and development (R&D) spending and the number of patent applications. This measure of technological innovation is very widely used in empirical research, since it does not rely on the assumption that only technology affects long-run productivity, and it fairly accurately captures the output variation based on input variation. However, there are limitations with direct measures such as R&D. For example, since R&D only measures the input into innovation, the output is unlikely to be perfectly correlated with the input. In addition, R&D fails to capture the indeterminate lag between developing a new product or service and bringing it to market.

The fourth approach, constructed by Michelle Alexopoulos, looks at the number of new titles published in the fields of technology and computer science to reflect technological progress, which turns out to be consistent with R&D expenditure data. Compared with R&D, this indicator captures the lag between changes in technology.

History

Pre-16th century

Roman Emperor Vespasian, who refused a low-cost method of transport of heavy goods that would put laborers out of work

According to author Gregory Woirol, the phenomenon of technological unemployment is likely to have existed since at least the invention of the wheel. Ancient societies had various methods for relieving the poverty of those unable to support themselves with their own labour. Ancient China and ancient Egypt may have had various centrally run relief programmes in response to technological unemployment dating back to at least the second millennium BC. Ancient Hebrews and adherents of the ancient Vedic religion had decentralised responses in which aiding the poor was encouraged by their faiths. In ancient Greece, large numbers of free labourers could find themselves unemployed due both to the effects of ancient labour-saving technology and to competition from slaves ("machines of flesh and blood"). Sometimes these unemployed workers would starve to death or be forced into slavery themselves, although in other cases they were supported by handouts. Pericles responded to perceived technological unemployment by launching public works programmes to provide paid work to the jobless. Conservatives criticized Pericles' programmes for wasting public money but were defeated.

Perhaps the earliest example of a scholar discussing the phenomenon of technological unemployment occurs with Aristotle, who speculated in Book One of Politics that if machines could become sufficiently advanced, there would be no more need for human labour.

Similar to the Greeks, ancient Romans responded to the problem of technological unemployment by relieving poverty with handouts (such as the Cura Annonae). Several hundred thousand families were sometimes supported like this at once. Less often, jobs were directly created with public works programmes, such as those launched by the Gracchi. Various emperors even went as far as to refuse or ban labour-saving innovations. In one instance, the introduction of a labor-saving invention was blocked when Emperor Vespasian refused to allow a new method of low-cost transportation of heavy goods, saying "You must allow my poor hauliers to earn their bread." Labour shortages began to develop in the Roman empire towards the end of the second century AD, and from this point mass unemployment in Europe appears to have largely receded for over a millennium.

The medieval and early renaissance period saw the widespread adoption of newly invented technologies as well as older ones which had been conceived yet barely used in the Classical era. The Black Death left fewer workers across Europe. Mass unemployment began to reappear in Europe in the 15th century, partly as a result of population growth, and partly due to changes in the availability of land for subsistence farming caused by early enclosures. As a result of the threat of unemployment, there was less tolerance for disruptive new technologies. European authorities would often side with groups representing subsections of the working population, such as Guilds, banning new technologies and sometimes even executing those who tried to promote or trade in them.

16th to 18th century

Elizabeth I, who refused to patent a knitting machine invented by William Lee, saying "Consider thou what the invention could do to my poor subjects. It would assuredly bring them to ruin by depriving them of employment, thus making them beggars."

In Great Britain, the ruling elite began to take a less restrictive approach to innovation somewhat earlier than in much of continental Europe, which has been cited as a possible reason for Britain's early lead in driving the Industrial Revolution. Yet concern over the impact of innovation on employment remained strong through the 16th and early 17th century. A famous example of new technology being refused occurred when the inventor William Lee invited Queen Elizabeth I to view a labour saving knitting machine. The Queen declined to issue a patent on the grounds that the technology might cause unemployment among textile workers. After moving to France and also failing to achieve success in promoting his invention, Lee returned to England but was again refused by Elizabeth's successor James I for the same reason.

Especially after the Glorious Revolution, authorities became less sympathetic to workers' concerns about losing their jobs due to innovation. An increasingly influential strand of Mercantilist thought held that introducing labour-saving technology would actually reduce unemployment, as it would allow British firms to increase their market share against foreign competition. From the early 18th century, workers could no longer rely on support from the authorities against the perceived threat of technological unemployment. They would sometimes take direct action, such as machine breaking, in attempts to protect themselves from disruptive innovation. Schumpeter notes that as the 18th century progressed, thinkers would raise the alarm about technological unemployment with increasing frequency, with von Justi being a prominent example. Yet Schumpeter also notes that the prevailing view among the elite solidified on the position that technological unemployment would not be a long-term problem.

19th century

It was only in the 19th century that debates over technological unemployment became intense, especially in Great Britain, where many economic thinkers of the time were concentrated. Building on the work of Dean Tucker and Adam Smith, political economists began to create what would become the modern discipline of economics. While rejecting much of mercantilism, members of the new discipline largely agreed that technological unemployment would not be an enduring problem. In the first few decades of the 19th century, however, several prominent political economists did argue against the optimistic view, claiming that innovation could cause long-term unemployment. These included Sismondi, Malthus, J. S. Mill, and, from 1821, Ricardo himself. As Ricardo was arguably the most respected political economist of his age, his view posed a challenge to others in the discipline. The first major economist to respond was Jean-Baptiste Say, who argued that no one would introduce machinery if it were going to reduce the amount of product, and that, as Say's Law holds that supply creates its own demand, any displaced workers would automatically find work elsewhere once the market had had time to adjust. Ramsay McCulloch expanded and formalised Say's optimistic views on technological unemployment, and was supported by others such as Charles Babbage, Nassau Senior and many other lesser-known political economists. Towards the middle of the 19th century, Karl Marx joined the debates. Building on the work of Ricardo and Mill, Marx went much further, presenting a deeply pessimistic view of technological unemployment; his views attracted many followers and founded an enduring school of thought, but mainstream economics was not dramatically changed. By the 1870s, at least in Great Britain, technological unemployment had faded both as a popular concern and as an issue for academic debate. It had become increasingly apparent that innovation was increasing prosperity for all sections of British society, including the working class. As the classical school of thought gave way to neoclassical economics, mainstream thinking was tightened to take into account and refute the pessimistic arguments of Mill and Ricardo.

20th century

Critics of the view that innovation causes lasting unemployment argue that technology is used by workers and does not replace them on a large scale.

For the first two decades of the 20th century, mass unemployment was not the major problem it had been in the first half of the 19th century. While the Marxist school and a few other thinkers still challenged the optimistic view, technological unemployment was not a significant concern for mainstream economic thinking until the mid to late 1920s. In the 1920s mass unemployment re-emerged as a pressing issue within Europe. At this time the U.S. was generally more prosperous, but even there urban unemployment had begun to increase from 1927. Rural American workers had been suffering job losses from the start of the 1920s; many had been displaced by improved agricultural technology, such as the tractor. The centre of gravity for economic debates had by this time moved from Great Britain to the United States, and it was here that the 20th century's two great periods of debate over technological unemployment largely occurred.

The peak periods for the two debates were the 1930s and the 1960s. According to economic historian Gregory R. Woirol, the two episodes share several similarities. In both cases academic debates were preceded by an outbreak of popular concern, sparked by recent rises in unemployment. In both cases the debates were not conclusively settled, but faded away as unemployment was reduced by the outbreak of war – World War II for the debate of the 1930s, and the Vietnam War for the 1960s episode. In both cases, the debates were conducted within the prevailing paradigm of the time, with little reference to earlier thought. In the 1930s, optimists based their arguments largely on neo-classical beliefs in the self-correcting power of markets to automatically reduce any short-term unemployment via compensation effects. In the 1960s, faith in compensation effects was less strong, but the mainstream Keynesian economists of the time largely believed government intervention would be able to counter any persistent technological unemployment that was not cleared by market forces. Another similarity was the publication of a major federal study towards the end of each episode, which broadly found that long-term technological unemployment was not occurring (though the studies did agree innovation was a major factor in the short-term displacement of workers, and advised government action to provide assistance).

As the golden age of capitalism came to a close in the 1970s, unemployment once again rose, and this time generally remained relatively high for the rest of the century, across most advanced economies. Several economists once again argued that this may be due to innovation, with perhaps the most prominent being Paul Samuelson. Overall, the closing decades of the 20th century saw most concern expressed over technological unemployment in Europe, though there were several examples in the U.S. A number of popular works warning of technological unemployment were also published. These included James S. Albus's 1976 book titled Peoples' Capitalism: The Economics of the Robot Revolution; David F. Noble with works published in 1984 and 1993; Jeremy Rifkin and his 1995 book The End of Work; and the 1996 book The Global Trap. Yet for the most part, other than during the periods of intense debate in the 1930s and 60s, the consensus in the 20th century among both professional economists and the general public remained that technology does not cause long-term joblessness.

21st century

Opinions

There is a prevailing opinion that we are in an era of technological unemployment – that technology is increasingly making skilled workers obsolete.

Prof. Mark MacCarthy (2014)

The general consensus that innovation does not cause long-term unemployment held strong for the first decade of the 21st century although it continued to be challenged by a number of academic works, and by popular works such as Marshall Brain's Robotic Nation and Martin Ford's The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.

Since the publication of their 2011 book Race Against the Machine, MIT professors Andrew McAfee and Erik Brynjolfsson have been prominent among those raising concern about technological unemployment. The two professors remain relatively optimistic, however, stating "the key to winning the race is not to compete against machines but to compete with machines".

Concern about technological unemployment grew in 2013 due in part to a number of studies predicting substantially increased technological unemployment in forthcoming decades and empirical evidence that, in certain sectors, employment is falling worldwide despite rising output, thus discounting globalization and offshoring as the only causes of increasing unemployment.

In 2013, professor Nick Bloom of Stanford University stated there had recently been a major change of heart concerning technological unemployment among his fellow economists. In 2014 the Financial Times reported that the impact of innovation on jobs has been a dominant theme in recent economic discussion. According to the academic and former politician Michael Ignatieff writing in 2014, questions concerning the effects of technological change have been "haunting democratic politics everywhere". Concerns have included evidence showing worldwide falls in employment across sectors such as manufacturing; falls in pay for low and medium skilled workers stretching back several decades even as productivity continues to rise; the increase in often precarious platform mediated employment; and the occurrence of "jobless recoveries" after recent recessions. The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.

Former U.S. Treasury Secretary and Harvard economics professor Lawrence Summers stated in 2014 that he no longer believed automation would always create new jobs and that "This isn't some hypothetical future possibility. This is something that's emerging before us right now." Summers noted that already, more labor sectors were losing jobs than creating new ones. While himself doubtful about technological unemployment, professor Mark MacCarthy stated in the fall of 2014 that it is now the "prevailing opinion" that the era of technological unemployment has arrived.

At the 2014 Davos meeting, Thomas Friedman reported that the link between technology and unemployment seemed to have been the dominant theme of that year's discussions. A survey at Davos 2014 found that 80% of 147 respondents agreed that technology was driving jobless growth. At the 2015 Davos, Gillian Tett found that almost all delegates attending a discussion on inequality and technology expected an increase in inequality over the next five years, and gave the technological displacement of jobs as the reason. 2015 saw Martin Ford win the Financial Times and McKinsey Business Book of the Year Award for his Rise of the Robots: Technology and the Threat of a Jobless Future, and saw the first world summit on technological unemployment, held in New York. In late 2015, further warnings of potential worsening for technological unemployment came from Andy Haldane, the Bank of England's chief economist, and from Ignazio Visco, the governor of the Bank of Italy. In an October 2016 interview, US President Barack Obama said that due to the growth of artificial intelligence, society would be debating "unconditional free money for everyone" within 10 to 20 years. In 2019, computer scientist and artificial intelligence expert Stuart J. Russell stated that "in the long run nearly all current jobs will go away, so we need fairly radical policy changes to prepare for a very different future economy." In a book he authored, Russell claims that "One rapidly emerging picture is that of an economy where far fewer people work because work is unnecessary." However, he predicted that employment in healthcare, home care, and construction would increase.

Other economists have argued that long-term technological unemployment is unlikely. In 2014, Pew Research canvassed 1,896 technology professionals and economists and found a split of opinion: 48% of respondents believed that new technologies would displace more jobs than they would create by the year 2025, while 52% maintained that they would not. Economics professor Bruce Chapman from Australian National University has advised that studies such as Frey and Osborne's tend to overstate the probability of future job losses, as they don't account for new employment likely to be created, due to technology, in what are currently unknown areas.

General public surveys have often found an expectation that automation would impact jobs widely, but not the jobs held by those particular people surveyed.

Studies

A number of studies have predicted that automation will take a large proportion of jobs in the future, but estimates of the level of unemployment this will cause vary. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School showed that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement. The study, published in 2013, shows that automation can affect both skilled and unskilled work and both high and low-paying occupations; however, low-paid physical occupations are most at risk. It estimated that 47% of US jobs were at high risk of automation. In 2014, the economic think tank Bruegel released a study, based on the Frey and Osborne approach, claiming that across the European Union's 28 member states, 54% of jobs were at risk of automation. The countries where jobs were least vulnerable to automation were Sweden, with 46.69% of jobs vulnerable, the UK at 47.17%, the Netherlands at 49.50%, and France and Denmark, both at 49.54%. The countries where jobs were found to be most vulnerable were Romania at 61.93%, Portugal at 58.94%, Croatia at 57.9%, and Bulgaria at 56.56%. A 2015 report by the Taub Center found that 41% of jobs in Israel were at risk of being automated within the next two decades. In January 2016, a joint study by the Oxford Martin School and Citibank, based on previous studies on automation and data from the World Bank, found that the risk of automation in developing countries was much higher than in developed countries. It found that 77% of jobs in China, 69% of jobs in India, 85% of jobs in Ethiopia, and 55% of jobs in Uzbekistan were at risk of automation. The World Bank similarly employed the methodology of Frey and Osborne. A 2016 study by the International Labour Organization found 74% of salaried electrical & electronics industry positions in Thailand, 75% of salaried electrical & electronics industry positions in Vietnam, 63% of salaried electrical & electronics industry positions in Indonesia, and 81% of salaried electrical & electronics industry positions in the Philippines were at high risk of automation. A 2016 United Nations report stated that 75% of jobs in the developing world were at risk of automation, and predicted that more jobs might be lost when corporations stop outsourcing to developing countries after automation in industrialized countries makes it less lucrative to outsource to countries with lower labor costs.

The Council of Economic Advisers, a US government agency tasked with providing economic research for the White House, in the 2016 Economic Report of the President, used the data from the Frey and Osborne study to estimate that 83% of jobs with an hourly wage below $20, 31% of jobs with an hourly wage between $20 and $40, and 4% of jobs with an hourly wage above $40 were at risk of automation. A 2016 study by Ryerson University found that 42% of jobs in Canada were at risk of automation, dividing them into two categories - "high risk" jobs and "low risk" jobs. High risk jobs were mainly lower-income jobs that required lower education levels than average. Low risk jobs were on average more skilled positions. The report found a 70% chance that high risk jobs and a 30% chance that low risk jobs would be affected by automation in the next 10–20 years. A 2017 study by PricewaterhouseCoopers found that up to 38% of jobs in the US, 35% of jobs in Germany, 30% of jobs in the UK, and 21% of jobs in Japan were at high risk of being automated by the early 2030s. A 2017 study by Ball State University found about half of American jobs were at risk of automation, many of them low-income jobs. A September 2017 report by McKinsey & Company found that as of 2015, 478 billion out of 749 billion working hours per year dedicated to manufacturing, or $2.7 trillion out of $5.1 trillion in labor, were already automatable. In low-skill areas, 82% of labor in apparel goods, 80% of agriculture processing, 76% of food manufacturing, and 60% of beverage manufacturing were subject to automation. In mid-skill areas, 72% of basic materials production and 70% of furniture manufacturing was automatable. In high-skill areas, 52% of aerospace and defense labor and 50% of advanced electronics labor could be automated. In October 2017, a survey of information technology decision makers in the US and UK found that a majority believed that most business processes could be automated by 2022. On average, they said that 59% of business processes were subject to automation. A November 2017 report by the McKinsey Global Institute that analyzed around 800 occupations in 46 countries estimated that between 400 million and 800 million jobs could be lost due to robotic automation by 2030. It estimated that jobs were more at risk in developed countries than developing countries due to a greater availability of capital to invest in automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist and protectionist politics in the US, UK and France, among other countries.

However, not all recent empirical studies have found evidence to support the idea that automation will cause widespread unemployment. A study released in 2015, examining the impact of industrial robots in 17 countries between 1993 and 2007, found no overall reduction in employment was caused by the robots, and that there was a slight increase in overall wages. According to a study published in McKinsey Quarterly in 2015, the impact of computerization in most cases is not replacement of employees but automation of portions of the tasks they perform. A 2016 OECD study found that among the 21 OECD countries surveyed, on average only 9% of jobs were in foreseeable danger of automation, but this varied greatly among countries: for example in South Korea the figure of at-risk jobs was 6% while in Austria it was 12%. In contrast to other studies, the OECD study does not primarily base its assessment on the tasks that a job entails, but also includes demographic variables, including sex, education and age. It is not clear, however, why a job should be more or less automatable just because it is performed by a woman. In 2017, Forrester estimated that automation would result in a net loss of about 7% of jobs in the US by 2027, replacing 17% of jobs while creating new jobs equivalent to 10% of the workforce. Another study argued that the risk of US jobs to automation had been overestimated because factors such as the heterogeneity of tasks within occupations and the adaptability of jobs had been neglected. The study found that once this was taken into account, the number of occupations at risk to automation in the US drops, ceteris paribus, from 38% to 9%. A 2017 study on the effect of automation in Germany found no evidence that automation caused total job losses, but found that it does affect the jobs people are employed in; losses in the industrial sector due to automation were offset by gains in the service sector. Manufacturing workers were also not at risk from automation and were in fact more likely to remain employed, though not necessarily doing the same tasks. However, automation did result in a decrease in labour's income share as it raised productivity but not wages.

A 2018 Brookings Institution study that analyzed 28 industries in 18 OECD countries from 1970 to 2018 found that automation was responsible for holding down wages. Although it concluded that automation did not reduce the overall number of jobs available and even increased them, it found that from the 1970s to the 2010s, it had reduced the share of human labor in the value added to the work, and thus had helped to slow wage growth. In April 2018, Adair Turner, former Chairman of the Financial Services Authority and head of the Institute for New Economic Thinking, stated that it would already be possible to automate 50% of jobs with current technology, and that it will be possible to automate all jobs by 2060.

Premature deindustrialization

Premature deindustrialization occurs when developing nations deindustrialize without first becoming rich, as happened with the advanced economies. The concept was popularized by Dani Rodrik in 2013, who went on to publish several papers showing the growing empirical evidence for the phenomenon. Premature deindustrialization adds to concerns over technological unemployment for developing countries, as the traditional compensation effects that advanced-economy workers enjoyed, such as being able to get well-paid work in the service sector after losing their factory jobs, may not be available. Some commentators, such as Carl Benedikt Frey, argue that with the right responses, the negative effects of further automation on workers in developing economies can still be avoided.

Artificial intelligence

Since about 2017, a new wave of concern over technological unemployment has become prominent, this time over the effects of artificial intelligence (AI). Commentators including Calum Chace and Daniel Hulme have warned that, if unchecked, AI threatens to cause an "economic singularity", with job churn too rapid for humans to adapt to, leading to widespread technological unemployment. They also advise, however, that with the right responses by business leaders, policy makers and society, the impact of AI could be a net positive for workers.

Morgan R. Frank et al. caution that there are several barriers preventing researchers from making accurate predictions of the effects AI will have on future job markets. Marian Krakovsky has argued that the jobs most likely to be completely replaced by AI are in middle-class areas, such as professional services. Often, the practical solution is to find another job, but workers may not have the qualifications for high-level jobs and so must drop to lower-level jobs. However, Krakovsky (2018) predicts that AI will largely take the route of "complementing people" rather than "replicating people", suggesting that the goal of those implementing AI is to improve the lives of workers, not replace them. Studies have also shown that rather than solely destroying jobs, AI can also create work, albeit low-skill jobs training AI in low-income countries.

Following President Putin's 2017 statement that whichever country first achieves mastery in AI "will become the ruler of the world", various national and supranational governments have announced AI strategies. Concerns about not falling behind in the AI arms race have been more prominent than worries over AI's potential to cause unemployment. Several strategies suggest that achieving a leading role in AI should help their citizens get more rewarding jobs. Finland has aimed to help the citizens of other EU nations acquire the skills they need to compete in the post-AI jobs market, making a free course on "The Elements of AI" available in multiple European languages.

Solutions

Preventing net job losses

Banning/refusing innovation

"What I object to, is the craze for machinery, not machinery as such. The craze is for what they call labour-saving machinery. Men go on 'saving labour', till thousands are without work and thrown on the open streets to die of starvation." — Gandhi, 1924

Historically, innovations were sometimes banned due to concerns about their impact on employment. Since the development of modern economics, however, this option has generally not even been considered as a solution, at least not for the advanced economies. Even commentators who are pessimistic about long-term technological unemployment invariably consider innovation to be an overall benefit to society, with J. S. Mill being perhaps the only prominent western political economist to have suggested prohibiting the use of technology as a possible solution to unemployment.

Gandhian economics called for a delay in the uptake of labour-saving machines until unemployment was alleviated; however, this advice was largely rejected by Nehru, who was to become prime minister once India achieved its independence. The policy of slowing the introduction of innovation so as to avoid technological unemployment was, however, implemented in the 20th century within China under Mao's administration.

Shorter working hours

In 1870, the average American worker clocked up about 75 hours per week. Just prior to World War II working hours had fallen to about 42 per week, and the fall was similar in other advanced economies. According to Wassily Leontief, this was a voluntary increase in technological unemployment. The reduction in working hours helped share out available work, and was favoured by workers who were happy to reduce hours to gain extra leisure, as innovation was at the time generally helping to increase their rates of pay.

Further reductions in working hours have been proposed as a possible solution to unemployment by economists including John R. Commons, Lord Keynes and Luigi Pasinetti. Yet once working hours reached about 40 hours per week, workers became less enthusiastic about further reductions, both to prevent loss of income and because many value engaging in work for its own sake. Twentieth-century economists generally argued against further reductions as a solution to unemployment, saying they reflect a lump of labour fallacy. In 2014, Google's co-founder Larry Page suggested a four-day workweek, so that as technology continues to displace jobs, more people can find employment.

Public works

Programmes of public works have traditionally been used as a way for governments to directly boost employment, though this has often been opposed by some, but not all, conservatives. Jean-Baptiste Say, although generally associated with free market economics, advised that public works could be a solution to technological unemployment. Some commentators, such as Professor Mathew Forstater, have advised that public works and guaranteed jobs in the public sector may be the ideal solution to technological unemployment, as, unlike welfare or guaranteed income schemes, they provide people with the social recognition and meaningful engagement that come with work.

For less developed economies, public works may be an easier solution to administer than universal welfare programmes. As of 2015, calls for public works in the advanced economies have been less frequent, even from progressives, due to concerns about sovereign debt. A partial exception is spending on infrastructure, which has been recommended as a solution to technological unemployment even by economists previously associated with a neoliberal agenda, such as Larry Summers.

Education

Improved access to quality education, including skills training for adults and other active labour market policies, is a solution that, in principle at least, is not opposed by any side of the political spectrum, and is welcomed even by those who are optimistic about long-term technological employment. Improved education paid for by government tends to be especially popular with industry.

Proponents of this brand of policy assert that higher-level, more specialized learning is a way to capitalize on the growing technology industry. The leading technology research university MIT published an open letter to policymakers advocating the "reinvention of education", namely a shift "away from rote learning" and towards STEM disciplines. Similar statements released by the U.S. President's Council of Advisors on Science and Technology (PCAST) have also been used to support this STEM emphasis in higher-education enrollment choices. Education reform is also part of the U.K. government's "Industrial Strategy", a plan announcing the nation's intent to invest millions in a "technical education system". The proposal includes the establishment of a retraining programme for workers who wish to adapt their skill sets. These proposals aim to address concerns over automation through policy choices designed to meet the emerging skill needs of society. Academics who applaud such moves often note the gap between economic security and formal education, a disparity exacerbated by the rising demand for specialized skills, and education's potential to reduce it.

However, several academics have also argued that improved education alone will not be sufficient to solve technological unemployment, pointing to recent declines in the demand for many intermediate skills, and suggesting that not everyone is capable of becoming proficient in the most advanced skills. Kim Taipale has said that "The era of bell curve distributions that supported a bulging social middle class is over... Education per se is not going to make up the difference." In a 2011 op-ed piece, Paul Krugman, an economics professor and columnist for The New York Times, argued that better education would be an insufficient solution to technological unemployment, as technology "actually reduces the demand for highly educated workers".

Living with technological unemployment

Welfare payments

The use of various forms of subsidies has often been accepted as a solution to technological unemployment, even by conservatives and by those who are optimistic about the long-term effect on jobs. Welfare programmes have historically tended to be more durable once established, compared with other solutions to unemployment such as directly creating jobs with public works. Even Ramsay McCulloch, the first person to create a formal system describing compensation effects, along with most other classical economists, advocated government aid for those suffering from technological unemployment, as they understood that market adjustment to new technology was not instantaneous and that those displaced by labour-saving technology would not always be able to immediately obtain alternative employment through their own efforts.

Basic income

Several commentators have argued that traditional forms of welfare payment may be inadequate as a response to the future challenges posed by technological unemployment, and have suggested a basic income as an alternative. Advocates of some form of basic income as a solution to technological unemployment include Martin Ford, Erik Brynjolfsson, Robert Reich, Andrew Yang, Elon Musk, Zoltan Istvan, and Guy Standing. Reich has gone as far as to say that the introduction of a basic income, perhaps implemented as a negative income tax, is "almost inevitable", while Standing has said he considers that a basic income is becoming "politically essential". Since late 2015, new basic income pilots have been announced in Finland, the Netherlands, and Canada. Further recent advocacy for basic income has come from a number of technology entrepreneurs, the most prominent being Sam Altman, president of Y Combinator.

Skepticism about basic income comes from both the right and the left, and proposals for different forms of it have come from all segments of the political spectrum. For example, while the best-known proposed forms (with taxation and distribution) are usually thought of as left-leaning ideas that right-leaning people tend to oppose, other forms have been proposed even by libertarians such as von Hayek and Friedman. Republican president Nixon's Family Assistance Plan (FAP) of 1969, which had much in common with basic income, passed in the House but was defeated in the Senate.

One objection to basic income is that it could be a disincentive to work, but evidence from older pilots in India, Africa, and Canada indicates that this does not happen and that a basic income encourages low-level entrepreneurship and more productive, collaborative work. Another objection is that funding it sustainably is a huge challenge. While new revenue-raising ideas have been proposed, such as Martin Ford's wage recapture tax, how to fund a generous basic income remains a debated question, and skeptics have dismissed it as utopian. Even from a progressive viewpoint, there are concerns that a basic income set too low may not help the economically vulnerable, especially if it is financed largely from cuts to other forms of welfare.

To address both the funding concerns and concerns about government control, one alternative model would distribute the cost and control across the private sector instead of the public sector. Companies across the economy would be required to employ humans, but the job descriptions would be left to private innovation, and individuals would have to compete to be hired and retained. This would be a for-profit-sector analogue of basic income, that is, a market-based form of basic income. It differs from a job guarantee in that the government is not the employer (companies are) and there is no aspect of having employees who "cannot be fired", a problem that interferes with economic dynamism. The economic salvation in this model is not that every individual is guaranteed a job, but rather that enough jobs exist that mass unemployment is avoided and employment is no longer the privilege of only the very smartest or most highly trained 20% of the population. Another option for a market-based form of basic income has been proposed by the Center for Economic and Social Justice (CESJ) as part of "a Just Third Way" (a Third Way with greater justice) based on widely distributed power and liberty. Called the Capital Homestead Act, it is reminiscent of James S. Albus's Peoples' Capitalism in that money creation and securities ownership are widely and directly distributed to individuals rather than flowing through, or being concentrated in, centralized or elite mechanisms.

Broadening the ownership of technological assets

Several solutions have been proposed which do not fall easily into the traditional left-right political spectrum. These include broadening the ownership of robots and other productive capital assets. Enlarging the ownership of technologies has been advocated by people including James S. Albus, John Lanchester, Richard B. Freeman, and Noah Smith. Jaron Lanier has proposed a somewhat similar solution: a mechanism whereby ordinary people receive "nano payments" for the big data they generate through their regular surfing and other aspects of their online presence.

Structural changes towards a post-scarcity economy

The Zeitgeist Movement (TZM), The Venus Project (TVP), and various individuals and organizations propose structural changes towards a form of post-scarcity economy in which people are 'freed' from their automatable, monotonous jobs instead of 'losing' them. In the system proposed by TZM, all jobs are either automated, abolished for bringing no true value to society (such as ordinary advertising), rationalized through more efficient, sustainable and open processes and collaboration, or carried out on the basis of altruism and social relevance, as opposed to compulsion or monetary gain. The movement also speculates that the free time made available to people will permit a renaissance of creativity, invention, community and social capital, as well as reducing stress.

Other approaches

The threat of technological unemployment has occasionally been used by free market economists as a justification for supply-side reforms that make it easier for employers to hire and fire workers. Conversely, it has also been used as a reason to justify an increase in employee protection.

Economists including Larry Summers have advised that a package of measures may be needed. Summers has advised vigorous cooperative efforts to address the "myriad devices" – such as tax havens, bank secrecy, money laundering, and regulatory arbitrage – that enable the holders of great wealth to avoid paying taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return. He has suggested more vigorous enforcement of anti-monopoly laws; reductions in "excessive" protection for intellectual property; greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation; strengthening of collective bargaining arrangements; improvements in corporate governance; strengthening of financial regulation to eliminate subsidies to financial activity; easing of land-use restrictions that may cause estates to keep rising in value; better training for young people and retraining for displaced workers; and increased public and private investment in infrastructure development, such as energy production and transportation.

Michael Spence has advised that responding to the future impact of technology will require a detailed understanding of the global forces and flows technology has set in motion. Adapting to them "will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution".
