Technological change (TC), technological development, technological achievement, or technological progress is the overall process of invention, innovation and diffusion of technology or processes. In essence, technological change covers the invention of technologies (including processes) and their commercialization or release as open source via research and development (producing emerging technologies), the continual improvement
of technologies (in which they often become less expensive), and the
diffusion of technologies throughout industry or society (which
sometimes involves disruption and convergence). In short, technological change is based on both better and more technology.
Original model of the three phases of the process of technological change.
Modeling technological change
In its earlier days, technological change was illustrated with the 'Linear Model of Innovation', which has since been largely discarded and replaced with a model of technological change that involves innovation at all stages of research, development, diffusion, and use. When speaking about "modeling technological change," this often means the process of innovation. This process of continuous improvement is often modeled as a curve depicting decreasing costs over time (for instance, fuel cells, which have become cheaper every year). TC is also often modelled using a learning curve, e.g. C_t = C_0 · X_t^(−b), where C_t is the unit cost at time t, C_0 the initial cost, X_t cumulative output, and b the learning elasticity.
Technological change itself is often included in other models (e.g. climate change models) and was long taken as an exogenous factor. These days TC is more often included as an endogenous factor, that is, as something that can be influenced. It is now widely argued that policy can influence the speed and direction of technological change. For instance, proponents of the Induced Technological Change hypothesis state that policymakers can steer the direction of technological advances by influencing relative factor prices, as demonstrated by the way climate policies affect the use of fossil fuel energy, specifically by making it relatively more expensive.
So far, however, empirical evidence for the existence of policy-induced innovation effects is still lacking, which may be attributed to a variety of reasons beyond the sparsity of models (e.g. long-term policy uncertainty and exogenous drivers of (directed) innovation).
A related concept is the notion of Directed Technical Change, which places more emphasis on price-induced directional effects than on policy-induced scale effects.
Invention
The creation of something new, or a "breakthrough" technology. This is often included in the process of product development and relies on research. The invention of spreadsheet software is one example. Newly invented technologies are conventionally patented.
Diffusion
Diffusion pertains to the spread of a technology through a society or industry. The diffusion of a technology generally follows an S-shaped curve: early versions of a technology are rather unsuccessful, followed by a period of successful innovation with high levels of adoption, and finally a dropping off in adoption as the technology reaches its maximum potential in a market. The personal computer, for example, made its way beyond homes and into business settings, such as office workstations and server machines to host websites.
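The S-shaped diffusion path described above is commonly modelled with a logistic function. A minimal sketch, with illustrative parameters (k controls how steep the take-off is; t0 is the inflection point):

```python
import math

def adoption_share(t: float, k: float = 1.0, t0: float = 10.0) -> float:
    """Logistic S-curve: fraction of the market that has adopted by time t."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

# Slow start, rapid mid-period take-up, then saturation near 100%.
for t in range(0, 21, 5):
    print(f"t={t:>2}: {adoption_share(t):6.1%} adopted")
```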
Technological change as a social process
Underpinning
the idea of technological change as a social process is general
agreement on the importance of social context and communication.
According to this model, technological change is seen as a social
process involving producers and adopters and others (such as government)
who are profoundly affected by cultural setting, political institutions
and marketing strategies.
In free market
economies, the maximization of profits is a powerful driver of
technological change. Generally, only those technologies that promise to
maximize profits for the owners of income-producing capital are developed and reach the market. Any technological product that fails to meet this criterion - even though it may satisfy very important societal needs - is eliminated. Therefore, technological change is a social process strongly biased in favor of the financial interests of capital. There are currently no well-established democratic processes,
such as voting on the social or environmental desirability of a new
technology prior to development and marketing, that would allow average
citizens to direct the course of technological change.
Elements of diffusion
Emphasis
has been on four key elements of the technological change process: (1)
an innovative technology (2) communicated through certain channels (3)
to members of a social system (4) who adopt it over a period of time.
These elements are derived from Everett M. Rogers' diffusion of innovations theory, using a communications-type approach.
Innovation
Rogers
proposed that there are five main attributes of innovative technologies
which influence acceptance. He called these criteria ACCTO, which
stands for Advantage, Compatibility, Complexity, Trialability, and
Observability. Relative advantage may be economic or
non-economic, and is the degree to which an innovation is seen as
superior to prior innovations fulfilling the same needs. It is
positively related to acceptance (e.g. the higher the relative
advantage, the higher the adoption level, and vice versa). Compatibility
is the degree to which an innovation appears consistent with existing
values, past experiences, habits and needs to the potential adopter; a
low level of compatibility will slow acceptance. Complexity is
the degree to which an innovation appears difficult to understand and
use; the more complex an innovation, the slower its acceptance. Trialability
is the perceived degree to which an innovation may be tried on a
limited basis, and is positively related to acceptance. Trialability can
accelerate acceptance because small-scale testing reduces risk. Observability is the perceived degree to which results of innovating are visible to others and is positively related to acceptance.
Communication channels
Communication
channels are the means by which a source conveys a message to a
receiver. Information may be exchanged through two fundamentally
different, yet complementary, channels of communication. Awareness is
more often obtained through the mass media, while uncertainty reduction that leads to acceptance mostly results from face-to-face communication.
Social system
The
social system provides a medium through which and boundaries within
which, innovation is adopted. The structure of the social system affects
technological change in several ways. Social norms, opinion leaders,
change agents, government and the consequences of innovations are all
involved. Also involved are cultural setting, nature of political
institutions, laws, policies and administrative structures.
Time
Time
enters into the acceptance process in many ways. The time dimension
relates to the innovativeness of an individual or other adopter, which
is the relative earliness or lateness with which an innovation is
adopted.
Emerging technologies are technologies that are perceived as
capable of changing the status quo. These technologies are generally new
but include older technologies that are still controversial and
relatively undeveloped in potential, such as preimplantation genetic diagnosis and gene therapy, which date to 1989 and 1990 respectively.
Emerging technologies are characterized by radical novelty,
relatively fast growth, coherence, prominent impact, and uncertainty and
ambiguity. In other words, an emerging technology can be defined as "a
radically novel and relatively fast growing technology characterised by a
certain degree of coherence persisting over time and with the potential
to exert a considerable impact on the socio-economic domain(s) which is
observed in terms of the composition of actors, institutions and
patterns of interactions among those, along with the associated
knowledge production processes. Its most prominent impact, however, lies
in the future and so in the emergence phase is still somewhat uncertain
and ambiguous."
New technological fields may result from the technological convergence
of different systems evolving towards similar goals. Convergence brings
previously separate technologies such as voice (and telephony
features), data (and productivity applications) and video together so
that they share resources and interact with each other, creating new
efficiencies.
Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage;
converging technologies represent previously distinct fields which are
in some way moving towards stronger inter-connection and similar goals.
However, opinions differ on the degree of impact, the status, and the economic viability of several emerging and converging technologies.
History of emerging technologies
In the history of technology, emerging technologies are contemporary advances and innovation in various fields of technology.
Over the centuries, innovative methods and new technologies have been developed and opened up. Some of these technologies arise from theoretical research, others from commercial research and development.
Technological growth includes incremental developments and disruptive technologies. An example of the former was the gradual roll-out of the DVD (digital video disc), a development intended to follow on from the previous optical technology, the compact disc.
By contrast, disruptive technologies are those where a new method
replaces the previous technology and makes it redundant, for example,
the replacement of horse-drawn carriages by automobiles and other
vehicles.
Emerging technology debates
Many writers, including computer scientist Bill Joy,
have identified clusters of technologies that they consider critical
to humanity's future. Joy warns that the technology could be used by
elites for good or evil.
They could use it as "good shepherds" for the rest of humanity, or
decide everyone else is superfluous and push for mass extinction of
those made unnecessary by technology.
Advocates of the benefits of technological change typically see emerging and converging technologies as offering hope for the betterment of the human condition. Cyberphilosophers Alexander Bard and Jan Söderqvist argue in The Futurica Trilogy that while Man himself is basically constant throughout human history (genes change very slowly), all relevant change is rather a direct or indirect result of technological innovation (memes change very fast) since new ideas always emanate from technology use and not the other way around.
Man should consequently be regarded as history's main constant and
technology as its main variable. However, critics of the risks of
technological change, and even some advocates such as transhumanist philosopher Nick Bostrom, warn that some of these technologies could pose dangers, perhaps even contribute to the extinction of humanity itself; i.e., some of them could involve existential risks.
Some analysts such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs.
As robotics and artificial intelligence develop further, even many
skilled jobs may be threatened. Technologies such as machine learning
may ultimately allow computers to do many knowledge-based jobs that
require significant education. This may result in substantial
unemployment at all skill levels, stagnant or falling wages for most
workers, and increased concentration of income and wealth as the owners
of capital capture an ever-larger fraction of the economy. This in turn
could lead to depressed consumer spending and economic growth as the
bulk of the population lacks sufficient discretionary income to purchase
the products and services produced by the economy.
Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science
that develops machines and software with animal-like intelligence.
Major AI researchers and textbooks define the field as "the study and
design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1942, defines it as "the study of making intelligent machines".
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence (or "strong AI")
is still among the field's long-term goals. Currently popular
approaches include deep learning, statistical methods, computational
intelligence and traditional symbolic AI. There are an enormous number
of tools used in AI, including versions of search and mathematical
optimization, logic, methods based on probability and economics, and
many others.
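The "intelligent agent" abstraction in the definition above can be sketched as a simple perceive-then-act loop; the environment and scoring function here are invented purely for illustration:

```python
def perceive(environment: dict) -> dict:
    """Sense the (toy) environment; here the state is fully observable."""
    return environment

def expected_success(state: dict, action: str) -> float:
    """Toy evaluation: how promising an action looks in the current state."""
    return state["payoffs"].get(action, 0.0)

def agent_step(environment: dict, actions: list[str]) -> str:
    """Perceive the environment, then take the action that maximizes
    the agent's (estimated) chances of success."""
    state = perceive(environment)
    return max(actions, key=lambda a: expected_success(state, a))

env = {"payoffs": {"explore": 0.4, "exploit": 0.7, "wait": 0.1}}
print(agent_step(env, ["explore", "exploit", "wait"]))  # -> "exploit"
```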
Combined with Internet technology, 3D printing would allow for
digital blueprints of virtually any material product to be sent
instantly to another person to be produced on the spot, making
purchasing a product online almost instantaneous.
Although this technology is still too crude to produce most products, it is rapidly developing, and in 2013 it created a controversy around the issue of 3D printed guns.
Gene therapy
Gene therapy was first successfully demonstrated in late 1990/early 1991 for adenosine deaminase deficiency,
though the treatment was somatic – that is, did not affect the
patient's germ line and thus was not heritable. This led the way to
treatments for other genetic diseases and increased interest in germ line gene therapy – therapy affecting the gametes and descendants of patients.
Between September 1990 and January 2014 there were around 2,000 gene therapy trials conducted or approved.
Cancer vaccines
A cancer vaccine is a vaccine that treats existing cancer or prevents the development of cancer in certain high-risk individuals. Vaccines that treat existing cancer are known as therapeutic cancer vaccines. There are currently no vaccines able to prevent cancer in general.
On April 14, 2009, Dendreon Corporation announced that their Phase III clinical trial of Provenge, a cancer vaccine designed to treat prostate cancer, had demonstrated an increase in survival. It received U.S. Food and Drug Administration (FDA) approval for use in the treatment of advanced prostate cancer patients on April 29, 2010. The approval of Provenge has stimulated interest in this type of therapy.
In vitro meat
In vitro meat, also called cultured meat, clean meat, cruelty-free meat, shmeat, and test-tube meat, is an animal-flesh product that has never been part of a living animal, with the exception of the fetal calf serum taken from a slaughtered cow. In the 21st century, several research projects have worked on in vitro meat in the laboratory. The first in vitro beefburger, created by a Dutch team, was eaten at a demonstration for the press in London in August 2013. There remain difficulties to be overcome before in vitro meat becomes commercially available.
Cultured meat is prohibitively expensive, but it is expected that the
cost could be reduced to compete with that of conventionally obtained
meat as technology improves. In vitro
meat is also an ethical issue. Some argue that it is less objectionable
than traditionally obtained meat because it doesn't involve killing and
reduces the risk of animal cruelty, while others disagree with eating
meat that has not developed naturally.
Nanotechnology
Nanotechnology (sometimes shortened to nanotech) is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology
referred to the particular technological goal of precisely
manipulating atoms and molecules for fabrication of macroscale products,
also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm
scale, and so the definition shifted from a particular technological
goal to a research category inclusive of all types of research and
technologies that deal with the special properties of matter that occur
below the given size threshold.
Robotics
Robotics is the branch of technology that deals with the design, construction, operation, and application of robots,
as well as computer systems for their control, sensory feedback, and
information processing. These technologies deal with automated machines
that can take the place of humans in dangerous environments or
manufacturing processes, or resemble humans in appearance, behavior,
and/or cognition. A good example of a robot that resembles humans is Sophia, a social humanoid robot developed by the Hong Kong-based company Hanson Robotics, which was activated on April 19, 2015. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
Stem cell therapy
Stem cell therapy is an intervention strategy that introduces new adult stem cells into damaged tissue in order to treat disease or injury. Many medical researchers believe that stem cell treatments have the potential to change the face of human disease and alleviate suffering. The ability of stem cells to self-renew and give rise to subsequent generations with variable degrees of differentiation capacity offers significant potential for generation of tissues that can
potentially replace diseased and damaged areas in the body, with minimal
risk of rejection and side effects.
Distributed ledger technology
Distributed ledger or blockchain technology
is a technology which provides transparent and immutable lists of
transactions. Blockchains can enable autonomous transactions through the
use of smart contracts.
Smart contracts are self-executing transactions which occur when
pre-defined conditions are met. The original idea of a smart contract
was conceived by Nick Szabo in 1994
but these original theories about how these smart contracts could work
remained unrealised because there was no technology to support
programmable agreements and transactions between parties. His example of
a smart contract was the vending machine that holds goods until money
has been received and then the goods are released to the buyer. The
machine holds the property and is able to enforce the contract. There
were two main issues that needed to be addressed before smart contracts
could be used in the real world. Firstly, smart contracts needed control of physical assets in order to enforce agreements. Secondly, there was a lack of trustworthy computers that were reliable and trusted to execute the contract between two or more parties. It is only with the advent of cryptocurrency and encryption
that the technology for smart contracts has come to fruition. Many
potential applications of smart contracts have been suggested that go
beyond the transfer of value from one party to another, such as supply
chain management, electronic voting, law and the internet of things.
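Szabo's vending-machine analogy can be illustrated as a tiny state machine in which the "contract" executes automatically once its pre-defined condition (full payment) is met. This is a toy sketch, not an actual blockchain contract:

```python
class VendingContract:
    """Toy smart contract: holds goods and self-executes when paid in full."""

    def __init__(self, price: float):
        self.price = price
        self.paid = 0.0

    def deposit(self, amount: float) -> str:
        self.paid += amount
        if self.paid >= self.price:  # pre-defined condition met
            change, self.paid = self.paid - self.price, 0.0
            return f"goods released, change {change:.2f}"  # enforcement is automatic
        return f"awaiting {self.price - self.paid:.2f} more"

contract = VendingContract(price=1.50)
print(contract.deposit(1.00))  # awaiting 0.50 more
print(contract.deposit(1.00))  # goods released, change 0.50
```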
Development of emerging technologies
As
innovation drives economic growth, and large economic rewards come from
new inventions, considerable resources (funding and effort) go into
the development of emerging technologies. Some of the sources of these
resources are described below.
Research and development
Research and development is directed towards the advancement of technology in general, and therefore includes development of emerging technologies. See also List of countries by research and development spending. Applied research
is a form of systematic inquiry involving the practical application of
science. It accesses and uses some part of the research community's (academia's) accumulated theories, knowledge, methods, and techniques, for a specific, often state-, business-, or client-driven
purpose.
Science policy
is the area of public policy which is concerned with the policies that
affect the conduct of the science and research enterprise, including the
funding of science, often in pursuance of other national policy goals
such as technological innovation to promote commercial product
development, weapons development, health care and environmental
monitoring.
DARPA
The Defense Advanced Research Projects Agency (DARPA)
is an agency of the U.S. Department of Defense responsible for the
development of emerging technologies for use by the military.
DARPA was created in 1958 as the Advanced Research Projects
Agency (ARPA) by President Dwight D. Eisenhower. Its purpose was to
formulate and execute research and development projects to expand the
frontiers of technology and science, with the aim of reaching beyond
immediate military requirements.
Projects funded by DARPA have provided significant technologies that influenced many non-military fields, such as the Internet and Global Positioning System technology.
Technology competitions and awards
There
are awards that provide incentive to push the limits of technology
(generally synonymous with emerging technologies). Note that while some
of these awards reward achievement after-the-fact via analysis of the
merits of technological breakthroughs, others provide incentive via
competitions for awards offered for goals yet to be achieved.
The Orteig Prize was a $25,000 award offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis. In total, nine teams spent $400,000 in pursuit of the Orteig Prize.
The XPRIZE series of awards, public competitions designed and managed by the non-profit organization called the X Prize Foundation,
are intended to encourage technological development that could benefit
mankind. The most high-profile XPRIZE to date was the $10,000,000 Ansari
XPRIZE relating to spacecraft development, which was awarded in 2004
for the development of SpaceShipOne.
The Turing Award is an annual prize given by the Association for Computing Machinery
(ACM) to "an individual selected for contributions of a technical
nature made to the computing community". It is stipulated that "The
contributions should be of lasting and major technical importance to the
computer field". The Turing Award is generally recognized as the
highest distinction in computer science, and in 2014 grew to $1,000,000.
In 2003, David Gobel seed-funded the Methuselah Mouse Prize
(Mprize) to encourage the development of new life extension therapies
in mice, which are genetically similar to humans. So far, three Mouse
Prizes have been awarded: one for breaking longevity records to Dr.
Andrzej Bartke of Southern Illinois University; one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California; and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin.
Role of science fiction
Science fiction has criticized developing and future technologies, but also inspires innovation
and new technology. This topic has been more often discussed in
literary and sociological than in scientific forums. Cinema and media
theorist Vivian Sobchack
examines the dialogue between science fiction films and technological
imagination. Technology impacts artists and how they portray their
fictionalized subjects, but the fictional world gives back to science by
broadening imagination. How William Shatner Changed the World
is a documentary that gave a number of real-world examples of
actualized technological imaginations. While more prevalent in the early
years of science fiction with writers like Arthur C. Clarke, new authors still find ways to make currently impossible technologies seem closer to being realized.
In
the 21st century, robots are beginning to perform roles not just in
manufacturing, but in the service sector; e.g. in healthcare.
Technological unemployment is the loss of jobs caused by technological change.
Such change typically includes the introduction of labour-saving
"mechanical-muscle" machines or more efficient "mechanical-mind"
processes (automation).
Just as horses employed as prime movers were gradually made obsolete by
the automobile, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. During World War II, Alan Turing's Bombe
machine compressed and decoded thousands of man-years' worth of
encrypted data in a matter of hours. A contemporary example of
technological unemployment is the displacement of retail cashiers by self-service tills.
That technological change can cause short-term job losses is
widely accepted. The view that it can lead to lasting increases in
unemployment has long been controversial. Participants in the
technological unemployment debates can be broadly divided into optimists
and pessimists. Optimists agree that innovation may be
disruptive to jobs in the short term, yet hold that various compensation
effects ensure there is never a long-term negative impact on jobs,
whereas pessimists contend that at least in some circumstances,
new technologies can lead to a lasting decline in the total number of
workers in employment. The phrase "technological unemployment" was
popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". Yet the issue of machines displacing human labour has been discussed since at least Aristotle's time.
Prior to the 18th century both the elite and common people
would generally take the pessimistic view on technological
unemployment, at least in cases where the issue arose. Due to generally
low unemployment in much of pre-modern history, the topic was rarely a
prominent concern. In the 18th century fears over the impact of
machinery on jobs intensified with the growth of mass unemployment,
especially in Great Britain which was then at the forefront of the Industrial Revolution.
Yet some economic thinkers began to argue against these fears, claiming
that overall innovation would not have negative effects on jobs. These
arguments were formalised in the early 19th century by the classical economists.
During the second half of the 19th century, it became increasingly
apparent that technological progress was benefiting all sections of
society, including the working class. Concerns over the negative impact
of innovation diminished. The term "Luddite fallacy" was coined to
describe the thinking that innovation would have lasting harmful effects
on employment.
The view that technology is unlikely to lead to long term
unemployment has been repeatedly challenged by a minority of economists.
In the early 1800s these included Ricardo
himself. There were dozens of economists warning about technological
unemployment during brief intensifications of the debate that spiked in
the 1930s and 1960s. Especially in Europe, there were further warnings
in the closing two decades of the twentieth century, as commentators
noted an enduring rise in unemployment suffered by many industrialised
nations since the 1970s. Yet a clear majority of both professional
economists and the interested general public held the optimistic view
through most of the 20th century.
In the second decade of the 21st century, a number of studies
have been released suggesting that technological unemployment may be
increasing worldwide. Oxford Professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation.
However, their findings have frequently been misinterpreted, and on the
PBS NewsHour they again made clear that their findings do not
necessarily imply future technological unemployment.
While many economists and commentators still argue such fears are
unfounded, as was widely accepted for most of the previous two
centuries, concern over technological unemployment is growing once
again. A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". Regarding a recent claim by Treasury Secretary Steve Mnuchin that automation is not "going to have any kind of big effect on the economy for the next 50 or 100 years", McAfee said, "I don't talk to anyone in the field who believes that."
Recent technological innovations have the potential to render humans obsolete within professional, white-collar, low-skilled, and creative fields, and other "mental jobs".
Issues within the debates
Long term effects on employment
There are more
sectors losing jobs than creating jobs. And the general-purpose aspect
of software technology means that even the industries and jobs that it
creates are not forever.
Lawrence Summers
All participants in the technological employment debates agree that
temporary job losses can result from technological innovation.
Similarly, there is no dispute that innovation sometimes has positive
effects on workers. Disagreement focuses on whether it is possible for
innovation to have a lasting negative impact on overall employment.
Levels of persistent unemployment can be quantified empirically, but the
causes are subject to debate. Optimists accept short term unemployment
may be caused by innovation, yet claim that after a while, compensation effects
will always create at least as many jobs as were originally destroyed.
While this optimistic view has been continually challenged, it was
dominant among mainstream economists for most of the 19th and 20th
centuries. For example, labor economists Jacob Mincer and Stephan Danninger develop an empirical study using micro-data from the Panel Study of Income Dynamics,
and find that although in the short run, technological progress seems
to have unclear effects on aggregate unemployment, it reduces
unemployment in the long run. When they include a 5-year lag, however,
the evidence supporting a short-run employment effect of technology
seems to disappear as well, suggesting that technological unemployment
"appears to be a myth".
The concept of structural unemployment, a lasting level of joblessness that does not disappear even at the high point of the business cycle,
became popular in the 1960s. For pessimists, technological unemployment
is one of the factors driving the wider phenomena of structural
unemployment. Since the 1980s, even optimistic economists have
increasingly accepted that structural unemployment has indeed risen in
advanced economies, but they have tended to blame this on globalisation and offshoring
rather than technological change. Others claim a chief cause of the
lasting increase in unemployment has been the reluctance of governments
to pursue expansionary policies since the displacement of Keynesianism that occurred in the 1970s and early 80s.
In the 21st century, and especially since 2013, pessimists have been
arguing with increasing frequency that lasting worldwide technological
unemployment is a growing threat.
Compensation effects
John Kay, Inventor of the Fly Shuttle, A.D. 1753, by Ford Madox Brown, depicting the inventor John Kay kissing his wife goodbye as men carry him away from his home to escape a mob angry about his labour-saving mechanical loom. Compensation effects were not widely understood at this time.
Compensation effects are labour-friendly consequences of innovation
which "compensate" workers for job losses initially caused by new
technology. In the 1820s, several compensation effects were described by
Say
in response to Ricardo's statement that long term technological
unemployment could occur. Soon after, a whole system of effects was
developed by Ramsay McCulloch. The system was labelled "compensation theory" by Marx,
who proceeded to attack the ideas, arguing that none of the effects
were guaranteed to operate. Disagreement over the effectiveness of
compensation effects has remained a central part of academic debates on
technological unemployment ever since.
Compensation effects include:
By new machines. (The labour needed to build the new equipment that applied innovation requires.)
By new investments. (Enabled by the cost savings and therefore increased profits from the new technology.)
By changes in wages. (In cases where unemployment does occur, this
can cause a lowering of wages, thus allowing more workers to be
re-employed at the now lower cost. On the other hand, sometimes workers
will enjoy wage increases as their profitability rises. This leads to
increased income and therefore increased spending, which in turn
encourages job creation.)
By lower prices. (Which then lead to more demand, and therefore
more employment.) Lower prices can also help offset wage cuts, as
cheaper goods will increase workers' buying power.
By new products. (Where innovation directly creates new jobs.)
The "by new machines" effect is now rarely discussed by economists; it is often accepted that Marx successfully refuted it.
Even pessimists often concede that product innovation associated with
the "by new products" effect can sometimes have a positive effect on
employment. An important distinction can be drawn between 'process' and
'product' innovations.
Evidence from Latin America seems to suggest that product innovation
significantly contributes to the employment growth at the firm level,
more so than process innovation.
The extent to which the other effects are successful in compensating
the workforce for job losses has been extensively debated throughout the
history of modern economics; the issue is still not resolved. One such effect that potentially complements the compensation effect is the job multiplier.
According to research developed by Enrico Moretti, with each additional
skilled job created in high tech industries in a given city, more than
two jobs are created in the non-tradable sector. His findings suggest
that technological growth and the resulting job-creation in high-tech
industries might have a more significant spillover effect than previously anticipated. Evidence from Europe also supports such a job multiplier effect,
showing local high-tech jobs could create five additional low-tech jobs.
Many economists now pessimistic about technological unemployment
accept that compensation effects did largely operate as the optimists
claimed through most of the 19th and 20th century. Yet they hold that
the advent of computerisation means that compensation effects are now
less effective. An early example of this argument was made by Wassily Leontief in 1983. He conceded that after some disruption, the advance of mechanization
during the Industrial Revolution actually increased the demand for
labour as well as increasing pay due to effects that flow from increased
productivity.
While early machines lowered the demand for muscle power, they were
unintelligent and needed large armies of human operators to remain
productive. Yet since the introduction of computers into the workplace,
there is now less need not just for muscle power but also for human
brain power. Hence even as productivity continues to rise, the lower
demand for human labour may mean less pay and employment. However, this argument is not fully supported by more recent empirical studies. One study by Erik Brynjolfsson and Lorin M. Hitt
in 2003 presents direct evidence that suggests a positive short-term
effect of computerization on firm-level measured productivity and output
growth. In addition, they find the long-term productivity contribution
of computerization and technological changes might even be greater.
The Luddite fallacy
If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.
The term "Luddite fallacy" is sometimes used to express the view that
those concerned about long term technological unemployment are
committing a fallacy, as they fail to account for compensation effects.
People who use the term typically expect that technological progress
will have no long term impact on employment levels, and eventually will
raise wages for all workers, because progress helps to increase the
overall wealth of society. The term is based on the early 19th century
example of the Luddites.
During the 20th century and the first decade of the 21st century, the
dominant view among economists has been that belief in long term
technological unemployment was indeed a fallacy. More recently, there has been increased support for the view that the benefits of automation are not equally distributed.
There are two underlying premises for why long-term difficulty
could develop. The one that has traditionally been deployed is that
ascribed to the Luddites (whether or not it is a truly accurate summary
of their thinking), which is that there is a finite amount of work
available and if machines do that work, there can be no other work left
for humans to do. Economists call this the lump of labour fallacy,
arguing that in reality no such limitation exists. However, the other
premise is that it is possible for long-term difficulty to arise that
has nothing to do with any lump of labour. In this view, the amount of
work that can exist is infinite, but (1) machines can do most of the
"easy" work, (2) the definition of what is "easy" expands as information
technology progresses, and (3) the work that lies beyond "easy" (the
work that requires more skill, talent, knowledge, and insightful
connections between pieces of knowledge) may require greater cognitive
faculties than most humans are able to supply, as point 2 continually
advances. This latter view is the one supported by many modern advocates
of the possibility of long-term, systemic technological unemployment.
Skill levels and technological unemployment
A
common view among those discussing the effect of innovation on the
labour market has been that it mainly hurts those with low skills, while
often benefiting skilled workers. According to scholars such as Lawrence F. Katz,
this may have been true for much of the twentieth century, yet in the
19th century, innovations in the workplace largely displaced costly
skilled artisans, and generally benefited the low skilled. While 21st
century innovation has been replacing some unskilled work, other low
skilled occupations remain resistant to automation, while white collar
work requiring intermediate skills is increasingly being performed by
autonomous computer programs.
Some recent studies however, such as a 2015 paper by Georg Graetz
and Guy Michaels, found that at least in the area they studied – the
impact of industrial robots – innovation is boosting pay for highly
skilled workers while having a more negative impact on those with low to
medium skills. A 2015 report by Carl Benedikt Frey, Michael Osborne and Citi Research,
agreed that innovation had been disruptive mostly to middle-skilled
jobs, yet predicted that in the next ten years the impact of automation
would fall most heavily on those with low skills.
Geoff Colvin at Fortune
argued that predictions on the kind of work a computer will never be
able to do have proven inaccurate. A better approach to anticipate the
skills on which humans will provide value would be to find out
activities where we will insist that humans remain accountable for
important decisions, such as with judges, CEOs,
bus drivers and government leaders, or where human nature can only be
satisfied by deep interpersonal connections, even if those tasks could
be automated.
In contrast, others see even skilled human laborers being
obsolete. Oxford academics Carl Benedikt Frey and Michael A Osborne have
predicted computerization could make nearly half of jobs redundant;
of the 702 professions assessed, they found a strong correlation
between education and income with ability to be automated, with office
jobs and service work being some of the more at risk. In 2012, Sun Microsystems co-founder Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next
two decades to automated machine learning medical diagnostic software.
Empirical findings
There
has been a lot of empirical research that attempts to quantify the
impact of technological unemployment, mostly done at the microeconomic
level. Most existing firm-level research has found technological innovations to be labor-friendly in nature. For example, German economists
Stefan Lachenmaier and Horst Rottmann find that both product and process
innovation have a positive effect on employment. They also find that
process innovation has a more significant job creation effect than
product innovation.
This result is supported by evidence in the United States as well,
which shows that manufacturing firm innovations have a positive effect
on the total number of jobs, not just limited to firm-specific behavior.
At the industry level, however, researchers have found mixed
results with regard to the employment effect of technological changes. A
2017 study on manufacturing and service sectors in 11 European
countries suggests that positive employment effects of technological
innovations only exist in the medium- and high-tech sectors. There also
seems to be a negative correlation between employment and capital
formation, which suggests that technological progress could potentially
be labor-saving given that process innovation is often incorporated in
investment.
Limited macroeconomic analysis has been done to study the
relationship between technological shocks and unemployment. The small
amount of existing research, however, suggests mixed results. Italian
economist Marco Vivarelli finds that the labor-saving effect of process
innovation seems to have affected the Italian economy more negatively
than the United States. On the other hand, the job creating effect of
product innovation could only be observed in the United States, not
Italy. Another study in 2013 finds a more transitory, rather than permanent, unemployment effect of technological change.
Measures of technological innovation
There
have been four main approaches that attempt to capture and document
technological innovation quantitatively. The first one, proposed by
Jordi Gali in 1999 and further developed by Neville Francis and Valerie
A. Ramey in 2005, is to use long-run restrictions in a Vector
Autoregression (VAR) to identify technological shocks, assuming that
only technology affects long-run productivity.
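A rough sketch of this first approach, in the spirit of a long-run restriction (the identifying assumption being that only the technology shock moves productivity in the long run). The bivariate data here are simulated stand-ins, and the steps are simplified relative to the published papers:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Simulated stand-ins for [productivity growth, hours growth].
data = rng.standard_normal((400, 2)) @ np.array([[1.0, 0.3], [0.0, 1.0]])

res = VAR(data).fit(maxlags=4, ic="aic")
A1 = np.eye(2) - res.coefs.sum(axis=0)      # I - A(1) from the fitted VAR
psi1 = np.linalg.inv(A1)                     # long-run moving-average matrix Psi(1)
long_run_cov = psi1 @ res.sigma_u @ psi1.T   # long-run covariance
C = np.linalg.cholesky(long_run_cov)         # lower-triangular: the 2nd shock has
                                             # no long-run effect on the 1st series
impact = A1 @ C                              # contemporaneous impact matrix
print("impact of the identified 'technology' shock:", impact[:, 0])
```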
The second approach is from Susanto Basu, John Fernald and Miles Kimball. They create a measure of aggregate technology change with augmented Solow residuals, controlling for aggregate, non-technological effects such as non-constant returns and imperfect competition.
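The starting point for such a measure is the standard Solow residual, which attributes to technology the part of output growth not explained by share-weighted input growth; the augmented versions then adjust this for utilization, non-constant returns, and imperfect competition:

\[
\Delta \ln A_t = \Delta \ln Y_t - s_K \, \Delta \ln K_t - s_L \, \Delta \ln L_t
\]

where Y is output, K capital, L labour, and s_K and s_L are the factor income shares.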
The third method, initially developed by John Shea in 1999, takes
a more direct approach and employs observable indicators such as
Research and Development (R&D) spending, and number of patent
applications.
This measure of technological innovation is very widely used in
empirical research, since it does not rely on the assumption that only
technology affects long-run productivity, and fairly accurately captures
the output variation based on input variation. However, there are
limitations with direct measures such as R&D. For example, since
R&D only measures the input in innovation, the output is unlikely to
be perfectly correlated with the input. In addition, R&D fails to
capture the indeterminate lag between developing a new product or
service, and bringing it to market.
The fourth approach, constructed by Michelle Alexopoulos, looks
at the number of new titles published in the fields of technology and
computer science to reflect technological progress, which turns out to
be consistent with R&D expenditure data. Compared with R&D, this indicator captures the lag between changes in technology.
History
Pre-16th century
Roman Emperor Vespasian, who refused a low-cost method of transport of heavy goods that would put laborers out of work.
According to author Gregory Woirol, the phenomenon of technological
unemployment is likely to have existed since at least the invention of
the wheel. Ancient societies had various methods for relieving the poverty of those unable to support themselves with their own labour. Ancient China and ancient Egypt
may have had various centrally run relief programmes in response to
technological unemployment dating back to at least the second millennium
BC. Ancient Hebrews and adherents of the ancient Vedic religion had decentralised responses where aiding the poor was encouraged by their faiths. In ancient Greece, large numbers of free labourers could find themselves unemployed due to both the effects of ancient labour saving technology and to competition from slaves ("machines of flesh and blood").
Sometimes, these unemployed workers would starve to death or were
forced into slavery themselves although in other cases they were
supported by handouts. Pericles responded to perceived technological unemployment by launching public works
programmes to provide paid work to the jobless. Conservatives
criticized Pericles' programmes for wasting public money but were
defeated.
Perhaps the earliest example of a scholar discussing the
phenomenon of technological unemployment occurs with Aristotle, who
speculated in Book One of Politics that if machines could become sufficiently advanced, there would be no more need for human labour.
Similar to the Greeks, ancient Romans
responded to the problem of technological unemployment by relieving
poverty with handouts. Several hundred thousand families were sometimes
supported like this at once. Less often, jobs were directly created with public works programmes, such as those launched by the Gracchi. Various emperors even went as far as to refuse or ban labour saving innovations. In one instance, the introduction of a labor-saving invention was blocked, when Emperor Vespasian
refused to allow a new method of low-cost transportation of heavy
goods, saying "You must allow my poor hauliers to earn their bread."
Labour shortages began to develop in the Roman empire towards the end
of the second century AD, and from this point mass unemployment in
Europe appears to have largely receded for over a millennium.
The medieval and early renaissance period
saw the widespread adoption of newly invented technologies as well as
older ones which had been conceived yet barely used in the Classical
era. Mass unemployment began to reappear in Europe in the 15th century,
partly as a result of population growth, and partly due to changes in
the availability of land for subsistence farming caused by early enclosures.
As a result of the threat of unemployment, there was less tolerance for
disruptive new technologies. European authorities would often side with
groups representing subsections of the working population, such as guilds, banning new technologies and sometimes even executing those who tried to promote or trade in them.
16th to 18th century
Elizabeth I, who refused to patent a knitting machine invented by William Lee,
saying "Consider thou what the invention could do to my poor subjects.
It would assuredly bring them to ruin by depriving them of employment,
thus making them beggars."
In Great Britain, the ruling elite began to take a less restrictive
approach to innovation somewhat earlier than in much of continental
Europe, which has been cited as a possible reason for Britain's early
lead in driving the Industrial Revolution.
Yet concern over the impact of innovation on employment remained
strong through the 16th and early 17th century. A famous example of new
technology being refused occurred when the inventor William Lee
invited Queen Elizabeth I to view a labour saving knitting machine. The
Queen declined to issue a patent on the grounds that the technology
might cause unemployment among textile workers. After moving to France
and also failing to achieve success in promoting his invention, Lee
returned to England but was again refused by Elizabeth's successor James I for the same reason.
Especially after the Glorious Revolution,
authorities became less sympathetic to workers' concerns about losing
their jobs due to innovation. An increasingly influential strand of Mercantilist
thought held that introducing labour saving technology would actually
reduce unemployment, as it would allow British firms to increase their
market share against foreign competition. From the early 18th century
workers could no longer rely on support from the authorities against the
perceived threat of technological unemployment. They would sometimes
take direct action, such as machine breaking, in attempts to protect themselves from disruptive innovation. Schumpeter
notes that as the 18th century progressed, thinkers would raise the
alarm about technological unemployment with increasing frequency, with von Justi being a prominent example.
Yet Schumpeter also notes that the prevailing view among the elite
solidified on the position that technological unemployment would not be a
long term problem.
19th century
It
was only in the 19th century that debates over technological
unemployment became intense, especially in Great Britain where many
economic thinkers of the time were concentrated. Building on the work of
Dean Tucker and Adam Smith, political economists began to create what would become the modern discipline of economics.
While rejecting much of mercantilism, members of the new discipline
largely agreed that technological unemployment would not be an enduring
problem. In the first few decades of the 19th century, several
prominent political economists did, however, argue against the
optimistic view, claiming that innovation could cause long-term
unemployment. These included Sismondi, Malthus, J S Mill, and from 1821, Ricardo himself.
As arguably the most respected political economist of his age,
Ricardo's view was challenging to others in the discipline. The first
major economist to respond was Jean-Baptiste Say, who argued that no one would introduce machinery if they were going to reduce the amount of product, and that as Say's Law
states that supply creates its own demand, any displaced workers would
automatically find work elsewhere once the market had had time to
adjust.
Ramsay McCulloch expanded and formalised Say's optimistic views on technological unemployment, and was supported by others such as Charles Babbage, Nassau Senior and many other lesser-known political economists. Towards the middle of the 19th century, Karl Marx
joined the debates. Building on the work of Ricardo and Mill, Marx went
much further, presenting a deeply pessimistic view of technological
unemployment; his views attracted many followers and founded an enduring
school of thought but mainstream economics was not dramatically
changed. By the 1870s, at least in Great Britain, technological
unemployment faded both as a popular concern and as an issue for
academic debate. It had become increasingly apparent that innovation was
increasing prosperity for all sections of British society, including
the working class. As the classical school of thought gave way to neoclassical economics, mainstream thinking was tightened to take into account and refute the pessimistic arguments of Mill and Ricardo.
20th century
Critics
of the view that innovation causes lasting unemployment argue that
technology is used by workers and does not replace them on a large
scale.
For the first two decades of the 20th century, mass unemployment was
not the major problem it had been in the first half of the 19th century. While
the Marxist school
and a few other thinkers still challenged the optimistic view,
technological unemployment was not a significant concern for mainstream
economic thinking until the mid to late 1920s. In the 1920s mass
unemployment re-emerged as a pressing issue within Europe. At this time
the U.S. was generally more prosperous, but even there urban
unemployment had begun to increase from 1927. Rural American workers had been suffering job losses from the start of the 1920s; many had been displaced by improved agricultural technology, such as the tractor.
The centre of gravity for economic debates had by this time moved from
Great Britain to the United States, and it was here that the 20th
century's two great periods of debate over technological unemployment
largely occurred.
The peak periods for the two debates were in the 1930s and the
1960s. According to economic historian Gregory R Woirol, the two
episodes share several similarities.
In both cases academic debates were preceded by an outbreak of popular
concern, sparked by recent rises in unemployment. In both cases the
debates were not conclusively settled, but faded away as unemployment
was reduced by an outbreak of war – World War II for the debate of the
1930s, and the Vietnam war
for the 1960s episode. In both cases, the debates were conducted
within the prevailing paradigm at the time, with little reference to
earlier thought. In the 1930s, optimists based their arguments largely
on neo-classical beliefs in the self-correcting power of markets to
automatically reduce any short-term unemployment via compensation
effects. In the 1960s, faith in compensation effects was less strong,
but the mainstream Keynesian economists
of the time largely believed government intervention would be able to
counter any persistent technological unemployment that was not cleared
by market forces. Another similarity was the publication of a major
Federal study towards the end of each episode, which broadly found that
long-term technological unemployment was not occurring (though the
studies did agree innovation was a major factor in the short term
displacement of workers, and advised government action to provide
assistance).
As the golden age of capitalism
came to a close in the 1970s, unemployment once again rose, and this
time generally remained relatively high for the rest of the century,
across most advanced economies. Several economists once again argued
that this may be due to innovation, with perhaps the most prominent
being Paul Samuelson. A number of popular works warning of technological unemployment were also published. These included James S. Albus's 1976 book titled Peoples' Capitalism: The Economics of the Robot Revolution;
David F. Noble with works published in 1984 and 1993; Jeremy Rifkin and his 1995 book The End of Work; and the 1996 book The Global Trap.
In general, the closing decades of the 20th century saw much more
concern expressed over technological unemployment in Europe, compared
with the U.S.
For the most part, other than during the periods of intense debate in
the 1930s and 60s, the consensus in the 20th century among both
professional economists and the general public remained that technology
does not cause long-term joblessness.
21st century
Opinions
There is a
prevailing opinion that we are in an era of technological unemployment –
that technology is increasingly making skilled workers obsolete.
Prof. Mark MacCarthy (2014)
The general consensus that innovation does not cause long-term
unemployment held strong for the first decade of the 21st century
although it continued to be challenged by a number of academic works, and by popular works such as Marshall Brain's Robotic Nation and Martin Ford's The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.
Since the publication of their 2011 book Race Against The Machine, MIT professors Andrew McAfee and Erik Brynjolfsson
have been prominent among those raising concern about technological
unemployment. The two professors remain relatively optimistic however,
stating "the key to winning the race is not to compete against machines but to compete with machines".
Concern about technological unemployment grew in 2013 due in part
to a number of studies predicting substantially increased technological
unemployment in forthcoming decades and empirical evidence that, in
certain sectors, employment is falling worldwide despite rising output,
thus discounting globalization and offshoring as the only causes of
increasing unemployment.
In 2013, professor Nick Bloom of Stanford University stated there had recently been a major change of heart concerning technological unemployment among his fellow economists.
In 2014 the Financial Times reported that the impact of innovation on jobs has been a dominant theme in recent economic discussion. According to the academic and former politician Michael Ignatieff writing in 2014, questions concerning the effects of technological change have been "haunting democratic politics everywhere".
Concerns have included evidence showing worldwide falls in employment
across sectors such as manufacturing; falls in pay for low and medium
skilled workers stretching back several decades even as productivity
continues to rise; the increase in often precarious platform-mediated
employment; and the occurrence of "jobless recoveries" after recent
recessions. The 21st century has seen a variety of skilled tasks
partially taken over by machines, including translation, legal research
and even low level journalism. Care work, entertainment, and other tasks
requiring empathy, previously thought safe from automation, have also
begun to be performed by robots.
Former U.S. Treasury Secretary and Harvard economics professor Lawrence Summers
stated in 2014 that he no longer believed automation would always
create new jobs and that "This isn't some hypothetical future
possibility. This is something that's emerging before us right now."
Summers noted that already, more labor sectors were losing jobs than
creating new ones.
While himself doubtful about technological unemployment, professor
Mark MacCarthy stated in the fall of 2014 that it is now the "prevailing
opinion" that the era of technological unemployment has arrived.
At the 2014 Davos meeting, Thomas Friedman
reported that the link between technology and unemployment seemed to
have been the dominant theme of that year's discussions. A survey at
Davos 2014 found that 80% of 147 respondents agreed that technology was
driving jobless growth. At Davos 2015, Gillian Tett found that almost all delegates attending a discussion on inequality and technology expected an increase in inequality over the next five years, attributing this to the technological displacement of jobs.
2015 saw Martin Ford win the Financial Times and McKinsey Business Book of the Year Award for his Rise of the Robots: Technology and the Threat of a Jobless Future,
and saw the first world summit on technological unemployment, held in
New York. In late 2015, further warnings of potential worsening for
technological unemployment came from Andy Haldane, the Bank of England's chief economist, and from Ignazio Visco, the governor of the Bank of Italy. In an October 2016 interview, US President Barack Obama
said that due to the growth of artificial intelligence, society would
be debating "unconditional free money for everyone" within 10 to 20
years.
Other economists, however, have argued that long-term
technological unemployment is unlikely. In 2014, Pew Research canvassed
1,896 technology professionals and economists and found a split of
opinion: 48% of respondents believed that new technologies would
displace more jobs than they would create by the year 2025, while 52%
maintained that they would not. Economics professor Bruce Chapman from Australian National University
has advised that studies such as Frey and Osborne's tend to overstate the probability of future job losses, as they do not account for new employment that technology is likely to create in currently unknown areas.
Surveys of the general public have often found an expectation that automation would affect jobs widely, but not the respondents' own jobs.
Studies
A
number of studies have predicted that automation will take a large
proportion of jobs in the future, but estimates of the level of
unemployment this will cause vary. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School
showed that employees engaged in "tasks following well-defined
procedures that can easily be performed by sophisticated algorithms" are
at risk of displacement. The study, published in 2013, shows that
automation can affect both skilled and unskilled work and both high and
low-paying occupations; however, low-paid physical occupations are most
at risk. It estimated that 47% of US jobs were at high risk of
automation. In 2014, the economic think tank Bruegel released a study, based on the Frey and Osborne approach, claiming that across the European Union's 28 member states, 54% of jobs were at risk of automation. The countries where jobs were least vulnerable to automation were Sweden, with 46.69% of jobs vulnerable, the UK at 47.17%, the Netherlands at 49.50%, and France and Denmark, both at 49.54%. The countries where jobs were found to be most vulnerable were Romania at 61.93%, Portugal at 58.94%, Croatia at 57.9%, and Bulgaria at 56.56%. A 2015 report by the Taub Center found that 41% of jobs in Israel were at risk of being automated within the next two decades. In January 2016, a joint study by the Oxford Martin School and Citibank, based on previous studies on automation and data from the World Bank,
found that the risk of automation in developing countries was much
higher than in developed countries. It found that 77% of jobs in China, 69% of jobs in India, 85% of jobs in Ethiopia, and 55% of jobs in Uzbekistan were at risk of automation. The World Bank similarly employed the methodology of Frey and Osborne. A 2016 study by the International Labour Organization found 74% of salaried jobs in Thailand, 75% of salaried jobs in Vietnam, 63% of salaried jobs in Indonesia, and 81% of salaried jobs in the Philippines were at high risk of automation. A 2016 United Nations
report stated that 75% of jobs in the developing world were at risk of
automation, and predicted that more jobs might be lost when corporations
stop outsourcing
to developing countries after automation in industrialized countries
makes it less lucrative to outsource to countries with lower labor
costs.
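Headline figures such as the 47% estimate are typically produced by assigning each occupation a probability of computerisation and then summing employment across occupations above a "high risk" threshold (Frey and Osborne used 0.7). The sketch below illustrates that aggregation; the occupation names, probabilities, and employment counts are entirely hypothetical and not taken from any of the studies.

    # Illustrative sketch of how a Frey-and-Osborne-style headline figure is built.
    # All occupation data below is hypothetical, for demonstration only.
    occupations = [
        # (occupation, probability of computerisation, employment in thousands)
        ("telemarketer",       0.99,  200),
        ("accounting clerk",   0.98, 1500),
        ("truck driver",       0.79, 1900),
        ("registered nurse",   0.01, 3000),
        ("software developer", 0.04, 1400),
    ]
    HIGH_RISK = 0.70  # Frey and Osborne classed occupations with p > 0.7 as high risk

    total = sum(emp for _, _, emp in occupations)
    at_risk = sum(emp for _, p, emp in occupations if p > HIGH_RISK)
    print(f"Share of employment at high risk: {at_risk / total:.1%}")  # 45.0% here

Much of the variation in headline numbers across studies comes from how these probabilities are estimated (per occupation versus per task) and where the threshold is drawn, rather than from the underlying employment data.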
The Council of Economic Advisers, a US government agency tasked with providing economic research for the White House, in the 2016 Economic Report of the President,
used the data from the Frey and Osborne study to estimate that 83% of
jobs with an hourly wage below $20, 31% of jobs with an hourly wage
between $20 and $40, and 4% of jobs with an hourly wage above $40 were
at risk of automation. A 2016 study by Ryerson University found that 42% of jobs in Canada
were at risk of automation, dividing them into two categories: "high risk" jobs and "low risk" jobs. High-risk jobs were mainly lower-income jobs that required lower education levels than average; low-risk jobs were on average more skilled positions. The report found that high-risk jobs had a 70% chance, and low-risk jobs a 30% chance, of being
affected by automation in the next 10–20 years. A 2017 study by PricewaterhouseCoopers found that up to 38% of jobs in the US, 35% of jobs in Germany, 30% of jobs in the UK, and 21% of jobs in Japan were at high risk of being automated by the early 2030s. A 2017 study by Ball State University found about half of American jobs were at risk of automation, many of them low-income jobs. A September 2017 report by McKinsey & Company
found that as of 2015, 478 billion out of 749 billion working hours per
year dedicated to manufacturing, or $2.7 trillion out of $5.1 trillion
in labor, were already automatable. In low-skill areas, 82% of labor in
apparel goods, 80% of agriculture processing, 76% of food manufacturing,
and 60% of beverage manufacturing were subject to automation. In
mid-skill areas, 72% of basic materials production and 70% of furniture
manufacturing were automatable. In high-skill areas, 52% of aerospace and
defense labor and 50% of advanced electronics labor could be automated. In October 2017, a survey of information technology
decision makers in the US and UK found that a majority believed that
most business processes could be automated by 2022. On average, they
said that 59%
of business processes were subject to automation.
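Read as shares, the September 2017 McKinsey figures imply that roughly 64% of manufacturing hours, but only about 53% of the manufacturing wage bill, were automatable, which suggests the automatable work is concentrated in lower-paid activities. A quick check of that arithmetic:

    # Shares implied by the September 2017 McKinsey manufacturing figures.
    hours_automatable, hours_total = 478e9, 749e9    # working hours per year
    wages_automatable, wages_total = 2.7e12, 5.1e12  # labor cost in US dollars
    print(f"Automatable share of hours: {hours_automatable / hours_total:.1%}")  # ~63.8%
    print(f"Automatable share of wages: {wages_automatable / wages_total:.1%}")  # ~52.9%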
A November 2017 report by the McKinsey Global Institute that analyzed
around 800 occupations in 46 countries estimated that between 400
million and 800 million jobs could be lost due to robotic automation by
2030. It estimated that jobs were more at risk in developed countries
than developing countries due to a greater availability of capital to
invest in automation. Job losses and downward mobility blamed on automation have been cited as one of many factors behind the resurgence of nationalist and protectionist politics in the US, UK, and France, among other countries.
However, not all recent empirical studies have found evidence to
support the idea that automation will cause widespread unemployment. A
study released in 2015, examining the impact of industrial robots in 17
countries between 1993 and 2007, found no overall reduction in
employment was caused by the robots, and that there was a slight
increase in overall wages. According to a study published in McKinsey Quarterly
in 2015, the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform. A 2016 OECD
study found that among the 21 OECD countries surveyed, on average only
9% of jobs were in foreseeable danger of automation, but this varied
greatly among countries: for example in South Korea the figure of at-risk jobs was 6% while in Austria it was 12%.
In contrast to other studies, the OECD study does not base its assessment solely on the tasks that a job entails, but also includes demographic variables such as sex, education, and age. It is not clear, however, why a job should be more or less automatable simply because it is performed by a woman. In 2017, Forrester
estimated that automation would result in a net loss of about 7% of
jobs in the US by 2027, replacing 17% of jobs while creating new jobs
equivalent to 10% of the workforce.
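The Forrester estimate is a net figure, with jobs created offset against jobs displaced; the arithmetic is simply:

    # Net employment effect implied by the 2017 Forrester estimate (US, by 2027).
    displaced = 0.17  # existing jobs replaced by automation, as a share of the workforce
    created = 0.10    # new jobs created, as a share of the workforce
    print(f"Net change in jobs: {created - displaced:+.0%}")  # -7%, the headline figure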
Another study argued that the automation risk of US jobs had been overestimated because factors such as the heterogeneity of tasks within occupations and the adaptability of jobs had been neglected. The study found that once these were taken into account, the share of occupations at risk of automation in the US drops, ceteris paribus, from 38% to 9%.
A 2017 study on the effect of automation in Germany found no evidence that automation caused net job losses, but found that it does affect the jobs people are employed in; losses in the industrial sector due to automation were offset by gains in the service sector. Manufacturing
workers were also not at risk from automation and were in fact more
likely to remain employed, though not necessarily doing the same tasks.
However, automation did result in a decrease in labour's income share as
it raised productivity but not wages.
A 2018 Brookings Institution
study that analyzed 28 industries in 18 OECD countries from 1970 to
2018 found that automation was responsible for holding down wages.
Although it concluded that automation did not reduce the overall number
of jobs available and even increased them, it found that from the 1970s to the 2010s automation had reduced human labor's share of the value added, and had thus helped to slow wage growth. In April 2018, Adair Turner, former Chairman of the Financial Services Authority and head of the Institute for New Economic Thinking,
stated that it would already be possible to automate 50% of jobs with current technology, and that all jobs could be automated by 2060.
Policy
In 2017, South Korea became the most automated country in the world, with one robot for every 19 employed humans. This prompted the government to consider changing its tax laws to slow further increases in automation.
Solutions
Preventing net job losses
Banning/refusing innovation
"What
I object to, is the craze for machinery, not machinery as such. The
craze is for what they call labour-saving machinery. Men go on 'saving
labour', till thousands are without work and thrown on the open streets
to die of starvation." — Gandhi, 1924.
Historically, innovations were sometimes banned due to concerns about
their impact on employment. Since the development of modern economics,
however, this option has generally not even been considered as a
solution, at least not for the advanced economies. Even commentators who
are pessimistic about long-term technological unemployment invariably
consider innovation to be an overall benefit to society, with JS Mill
being perhaps the only prominent western political economist to have
suggested prohibiting the use of technology as a possible solution to
unemployment.
Gandhian economics
called for a delay in the uptake of labour-saving machines until unemployment was alleviated; however, this advice was largely rejected by Nehru, who became prime minister once India achieved its independence.
The policy of slowing the introduction of innovation so as to avoid
technological unemployment was, however, implemented in the 20th century
within China under Mao's administration.
Shorter working hours
In
1870, the average American worker clocked up about 75 hours per week.
Just prior to World War II, working hours had fallen to about 42 per
week, and the fall was similar in other advanced economies. According to
Wassily Leontief,
this was a voluntary increase in technological unemployment. The
reduction in working hours helped share out available work, and was
favoured by workers who were happy to reduce hours to gain extra
leisure, as innovation was at the time generally helping to increase
their rates of pay.
Further reductions in working hours have been proposed as a possible solution to unemployment by economists including John R. Commons, Lord Keynes and Luigi Pasinetti.
Yet once working hours have reached about 40 hours per week, workers
have been less enthusiastic about further reductions, both to prevent
loss of income and because many value engaging in work for its own sake. Generally, 20th-century economists argued against further reductions as a solution to unemployment, saying the idea reflects a lump of labour fallacy.
In 2014, Google's co-founder Larry Page suggested a four-day workweek so that, as technology continues to displace jobs, more people can find employment.
Public works
Programmes of public works have traditionally been used as a way for governments to directly boost employment, though this has often been opposed by some, but not all,
conservatives. Jean-Baptiste Say,
although generally associated with free market economics, advised that
public works could be a solution to technological unemployment.
Some commentators, such as professor Mathew Forstater, have advised
that public works and guaranteed jobs in the public sector may be the
ideal solution to technological unemployment, as unlike welfare or
guaranteed income schemes they provide people with the social
recognition and meaningful engagement that comes with work.
For less developed economies, public works may be an easier solution to administer than universal welfare programmes.
As of 2015, calls for public works in the advanced economies have been
less frequent even from progressives, due to concerns about sovereign debt.
A partial exception is for spending on infrastructure, which has been
recommended as a solution to technological unemployment even by
economists previously associated with a neoliberal agenda, such as Larry Summers.
Education
Improved access to quality education, including skills training for adults, is a solution that in principle at least is not opposed by any side of the political spectrum, and is welcomed even by those who are optimistic about the long-term employment effects of technology. Improved education paid for by government tends to be especially popular with industry. Proponents of this brand of policy assert that higher-level, more specialized learning is a way to capitalize on the growing technology industry. The leading technology research university MIT published an open letter to policymakers advocating the "reinvention of education", namely a shift "away from rote learning" and towards STEM disciplines. Similar statements released by the U.S. President's Council of Advisors on Science and Technology (PCAST) have also been used to support this emphasis on STEM enrollment in higher education.
Education reform is also part of the U.K. government's "Industrial Strategy", a plan announcing the nation's intent to invest millions in a "technical education system". The proposal includes the establishment of a retraining programme for workers who wish to adapt their skill sets. These suggestions address concerns over automation through policy choices that aim to meet the emerging skill needs of society. Academics who applaud such moves often note a gap between economic security and formal education, a disparity exacerbated by the rising demand for specialized skills, and point to education's potential to reduce it.
However, several academics have also argued that improved
education alone will not be sufficient to solve technological
unemployment, pointing to recent declines in the demand for many
intermediate skills, and suggesting that not everyone is capable of becoming proficient in the most advanced skills. Kim Taipale has said that "The era of bell curve distributions that supported a bulging social middle class is over... Education per se is not going to make up the difference." In a 2011 op-ed piece, Paul Krugman, an economics professor and columnist for the New York Times, argued that better education would be an insufficient solution to technological unemployment, as technology "actually reduces the demand for highly educated workers".
Living with technological unemployment
Welfare payments
The
use of various forms of subsidies has often been accepted as a solution
to technological unemployment even by conservatives and by those who
are optimistic about the long term effect on jobs. Welfare programmes
have historically tended to be more durable once established, compared
with other solutions to unemployment such as directly creating jobs with
public works. Despite being the first person to create a formal system
describing compensation effects, John Ramsay McCulloch and most other
classical economists advocated government aid for those suffering from
technological unemployment, as they understood that market adjustment to
new technology was not instantaneous and that those displaced by
labour-saving technology would not always be able to immediately obtain
alternative employment through their own efforts.
Basic income
Several commentators have argued that traditional forms of welfare
payment may be inadequate as a response to the future challenges posed
by technological unemployment, and have suggested a basic income
as an alternative. People advocating some form of basic income as a
solution to technological unemployment include Martin Ford, Erik Brynjolfsson, Robert Reich and Guy Standing. Reich has gone as far as to say the introduction of a basic income, perhaps implemented as a negative income tax, is "almost inevitable", while Standing has said he considers that a basic income is becoming "politically essential".
Since late 2015, new basic income pilots have been announced in Finland,
the Netherlands, and Canada. Further recent advocacy for basic income
has arisen from a number of technology entrepreneurs, the most prominent
being Sam Altman, president of Y Combinator.
Skepticism about basic income comes from both the right and the left, and proposals for different forms of it have come from all segments of the political spectrum. For example, while the best-known proposed forms (funded by taxation and distributed universally) are usually thought of as left-leaning ideas that right-leaning people tend to oppose,
other forms have been proposed even by libertarians, such as von Hayek and Friedman. Republican president Nixon's Family Assistance Plan (FAP) of 1969, which had much in common with basic income, passed in the House but was defeated in the Senate.
One objection to basic income is that it could be a disincentive to work, but evidence from older pilots in India, Africa, and Canada indicates that this does not happen and that a basic income encourages low-level entrepreneurship
and more productive, collaborative work. Another objection is that
funding it sustainably is a huge challenge. While new revenue-raising
ideas have been proposed such as Martin Ford's wage recapture tax, how
to fund a generous basic income remains a debated question, and skeptics
have dismissed it as utopian. Even from a progressive viewpoint, there
are concerns that a basic income set too low may not help the
economically vulnerable, especially if financed largely from cuts to
other forms of welfare.
To better address both the funding concerns and concerns about
government control, one alternative model is that the cost and control
would be distributed across the private sector instead of the public
sector. Companies across the economy would be required to employ humans,
but the job descriptions would be left to private innovation, and
individuals would have to compete to be hired and retained. This would
be a for-profit sector analog of basic income, that is, a market-based
form of basic income. It differs from a job guarantee
in that the government is not the employer (rather, companies are) and
there is no aspect of having employees who "cannot be fired", a problem
that interferes with economic dynamism. The economic salvation in this
model is not that every individual is guaranteed a job, but rather just
that enough jobs exist that massive unemployment is avoided and
employment is no longer the privilege of only the very smartest or most highly trained 20% of the population. Another option for a
market-based form of basic income has been proposed by the Center for Economic and Social Justice (CESJ) as part of "a Just Third Way" (a Third Way with greater justice) through widely distributed power and liberty. Called the Capital Homestead Act, it is reminiscent of James S. Albus's Peoples' Capitalism in that money creation and securities ownership
are widely and directly distributed to individuals rather than flowing
through, or being concentrated in, centralized or elite mechanisms.
Broadening the ownership of technological assets
Several solutions have been proposed which do not fall easily within the traditional left-right political spectrum.
This includes broadening the ownership of robots and other productive
capital assets. Enlarging the ownership of technologies has been
advocated by people including James S. Albus, John Lanchester, Richard B. Freeman, and Noah Smith.
Jaron Lanier has proposed a somewhat similar solution: a mechanism where ordinary people receive "nano payments" for the big data they generate by their regular surfing and other aspects of their online presence.
Structural changes towards a post-scarcity economy
The Zeitgeist Movement (TZM), The Venus Project (TVP) as well as various individuals and organizations propose structural changes towards a form of a post-scarcity economy
in which people are 'freed' from their automatable, monotonous jobs,
instead of 'losing' their jobs. In the system proposed by TZM, all jobs are either automated, abolished for bringing no true value to society (such as ordinary advertising), rationalized through more efficient, sustainable and open processes and collaboration, or carried out on the basis of altruism and social relevance (see also: Whuffie), as opposed to compulsion or monetary gain.
The movement also speculates that the free time made available to
people will permit a renaissance of creativity, invention, community and
social capital as well as reducing stress.
Other approaches
The
threat of technological unemployment has occasionally been used by free
market economists as a justification for supply side reforms, to make
it easier for employers to hire and fire workers. Conversely, it has
also been used as a reason to justify an increase in employee
protection.
Economists including Larry Summers
have advised a package of measures may be needed. He advised vigorous
cooperative efforts to address the "myriad devices" – such as tax
havens, bank secrecy, money laundering, and regulatory arbitrage – which
enable the holders of great wealth to avoid paying taxes, and to make
it more difficult to accumulate great fortunes without requiring "great
social contributions" in return. Summers suggested more vigorous
enforcement of anti-monopoly laws; reductions in "excessive" protection
for intellectual property; greater encouragement of profit-sharing
schemes that may benefit workers and give them a stake in wealth
accumulation; strengthening of collective bargaining arrangements;
improvements in corporate governance; strengthening of financial
regulation to eliminate subsidies to financial activity; easing of
land-use restrictions that may cause estates to keep rising in value;
better training for young people and retraining for displaced workers;
and increased public and private investment in infrastructure
development, such as energy production and transportation.
Michael Spence
has advised that responding to the future impact of technology will
require a detailed understanding of the global forces and flows
technology has set in motion. Adapting to them "will require shifts in
mindsets, policies, investments (especially in human capital), and quite
possibly models of employment and distribution".