
Thursday, March 11, 2021

Technological determinism

From Wikipedia, the free encyclopedia

Technological determinism is a reductionist theory that assumes that a society's technology determines the development of its social structure and cultural values. The term is believed to have originated from Thorstein Veblen (1857–1929), an American sociologist and economist. The most radical technological determinist in the United States in the 20th century was most likely Clarence Ayres who was a follower of Thorstein Veblen and John Dewey. William Ogburn was also known for his radical technological determinism.

The first major elaboration of a technological determinist view of socioeconomic development came from the German philosopher and economist Karl Marx, who argued that changes in technology, and specifically productive technology, are the primary influence on human social relations and organizational structure, and that social relations and cultural practices ultimately revolve around the technological and economic base of a given society. Marx's position has become embedded in contemporary society, where the idea that fast-changing technologies alter human lives is pervasive. Although many authors attribute a technologically determined view of human history to Marx's insights, not all Marxists are technological determinists, and some authors question the extent to which Marx himself was a determinist. Furthermore, there are multiple forms of technological determinism.

Origin

The term is believed to have been coined by Thorstein Veblen (1857–1929), an American social scientist. Veblen's contemporary, the popular historian Charles A. Beard, provided this apt determinist image: "Technology marches in seven-league boots from one ruthless, revolutionary conquest to another, tearing down old factories and industries, flinging up new processes with terrifying rapidity." As to its meaning, technological determinism has been described as the ascription to machines of "powers" that they do not have. Veblen, for instance, asserted that "the machine throws out anthropomorphic habits of thought." Karl Marx, similarly, expected that the construction of the railway in India would dissolve the caste system. The general idea, according to Robert Heilbroner, is that technology, by way of its machines, can cause historical change by changing the material conditions of human existence.

One of the most radical technological determinists was Clarence Ayres, a 20th-century follower of Veblen's theory. Ayres is best known for developing economic philosophies, but he also worked closely with Veblen, who coined the term. Ayres often wrote about the struggle between technology and ceremonial structure. One of his most notable ideas was the concept of "technological drag", in which he describes technology as a self-generating process and institutions as ceremonial, a framing that builds a technological over-determinism into the process.

Explanation

Technological determinism seeks to show technical developments, media, or technology as a whole as the key mover in history and social change. It is a theory subscribed to by "hyperglobalists", who claim that as a consequence of the wide availability of technology, accelerated globalization is inevitable. Technological development and innovation thus become the principal motor of social, economic or political change.

Strict adherents to technological determinism do not believe the influence of technology differs based on how much a technology is or can be used. Instead of considering technology as part of a larger spectrum of human activity, technological determinism sees technology as the basis for all human activity.

Technological determinism has been summarized as 'the belief in technology as a key governing force in society' (Merritt Roe Smith) and 'the idea that technological development determines social change' (Bruce Bimber). It changes the way people think and how they interact with others, and can be described as 'a three-word logical proposition: "Technology determines history"' (Rosalind Williams). It is 'the belief that social progress is driven by technological innovation, which in turn follows an "inevitable" course' (Michael L. Smith). This 'idea of progress' or 'doctrine of progress' centres on the idea that social problems can be solved by technological advancement, and that this is the way society moves forward. Technological determinists believe that "'You can't stop progress', implying that we are unable to control technology" (Lelia Green). This suggests that we are somewhat powerless and that society allows technology to drive social changes because "societies fail to be aware of the alternatives to the values embedded in it [technology]" (Merritt Roe Smith).

Technological determinism has been defined as an approach that identifies technology, or technological advances, as the central causal element in processes of social change (Croteau and Hoynes). As a technology is stabilized, its design tends to dictate users' behaviors, consequently diminishing human agency. This stance, however, ignores the social and cultural circumstances in which the technology was developed. Sociologist Claude Fischer (1992) characterized the most prominent forms of technological determinism as "billiard ball" approaches, in which technology is seen as an external force introduced into a social situation, producing a series of ricochet effects.

Rather than acknowledging that a society or culture interacts with and even shapes the technologies that are used, a technological determinist view holds that "the uses made of technology are largely determined by the structure of the technology itself, that is, that its functions follow from its form" (Neil Postman). This is not to be confused, however, with Daniel Chandler's "inevitability thesis", which states that once a technology is introduced into a culture, what follows is the inevitable development of that technology.

For example, we could examine why romance novels have become so dominant in our society compared to other forms, such as the detective or Western novel. We might say that it was because of publishers' invention of the perfect binding system, in which glue was used instead of the time-consuming and very costly process of binding books by sewing together separate signatures. This meant that such books could be mass-produced for the wider public; we would not have mass literacy without mass production. This example is closely related to Marshall McLuhan's belief that print helped produce the nation state. Print moved society on from an oral culture to a literate culture, but also introduced a capitalist society with clear class distinction and individualism. As Postman maintains:

The printing press, the computer, and television are not therefore simply machines which convey information. They are metaphors through which we conceptualize reality in one way or another. They will classify the world for us, sequence it, frame it, enlarge it, reduce it, argue a case for what it is like. Through these media metaphors, we do not see the world as it is. We see it as our coding systems are. Such is the power of the form of information.

Hard and soft determinism

In examining determinism, hard determinism can be contrasted with soft determinism. A compatibilist says that it is possible for free will and determinism to exist in the world together, while an incompatibilist would say that they can not and there must be one or the other. Those who support determinism can be further divided.

Hard determinists would view technology as developing independently of social concerns. They would say that technology creates a set of powerful forces acting to regulate our social activity and its meaning. According to this view of determinism, we organize ourselves to meet the needs of technology, and the outcome of this organization is beyond our control, or we do not have the freedom to make a choice regarding the outcome (autonomous technology). The 20th-century French philosopher and social theorist Jacques Ellul could be said to be a hard determinist and proponent of autonomous technique (technology). In his 1954 work The Technological Society, Ellul essentially posits that technology, by virtue of its power through efficiency, determines which social aspects are best suited for its own development through a process of natural selection. A social system whose values, morals, philosophy and so on are most conducive to the advancement of technology enhances its power and spreads at the expense of social systems whose values, morals and philosophy are less promoting of technology. While geography, climate, and other "natural" factors largely determined the parameters of social conditions for most of human history, technology has recently become the dominant objective and determining factor, largely due to forces unleashed by the Industrial Revolution.

Soft determinism, as the name suggests, is a more passive view of the way technology interacts with socio-political situations. Soft determinists still subscribe to the view that technology is the guiding force in our evolution, but maintain that we have a chance to make decisions regarding the outcomes of a situation. This is not to say that free will exists, only that the possibility remains for us to roll the dice and see what the outcome is. A slightly different variant of soft determinism is the 1922 technology-driven theory of social change proposed by William Fielding Ogburn, in which society must adjust to the consequences of major inventions, but often does so only after a period of cultural lag.

Technology as neutral

Individuals who consider technology as neutral see technology as neither good nor bad; what matters are the ways in which we use it. An example of a neutral viewpoint is, "guns are neutral and it's up to how we use them whether it would be 'good or bad'" (Green, 2001). Mackenzie and Wajcman believe that technology is neutral only if it has never been used before, or if no one knows what it is going to be used for (Green, 2001). In effect, guns would be classified as neutral if and only if society were none the wiser of their existence and functionality (Green, 2001). Obviously, such a society is non-existent, and once a society becomes knowledgeable about a technology, it is drawn into a social progression where nothing is 'neutral about society' (Green). According to Lelia Green, if one believes technology is neutral, one disregards the cultural and social conditions that technology has produced (Green, 2001). This view is also referred to as technological instrumentalism.

In what is often considered a definitive reflection on the topic, the historian Melvin Kranzberg famously wrote in the first of his six laws of technology: "Technology is neither good nor bad; nor is it neutral."

Criticism

Skepticism about technological determinism emerged alongside increased pessimism about techno-science in the mid-20th century, in particular around the use of nuclear energy in the production of nuclear weapons, Nazi human experimentation during World War II, and the problems of economic development in the Third World. As a direct consequence, desire for greater control of the course of development of technology gave rise to disenchantment with the model of technological determinism in academia.

Modern theorists of technology and society no longer consider technological determinism to be a very accurate view of the way in which we interact with technology, even though determinist assumptions and language fairly saturate the writings of many boosters of technology, the business pages of many popular magazines, and much reporting on technology. Instead, research in science and technology studies, the social construction of technology, and related fields has emphasised more nuanced views that resist easy causal formulations. They emphasise that "The relationship between technology and society cannot be reduced to a simplistic cause-and-effect formula. It is, rather, an 'intertwining'", whereby technology does not determine but "operates, and [is] operated upon in a complex social field" (Murphie and Potts).

Timothy Snyder approached technological determinism through his concept of the 'politics of inevitability', a concept utilized by politicians in which society is promised that the future will be only more of the present, thereby removing responsibility. This could be applied to free markets, the development of nation states, and technological progress.

In his article "Subversive Rationalization: Technology, Power and Democracy with Technology," Andrew Feenberg argues that technological determinism is not a well-founded concept, illustrating that two of the founding theses of determinism are easily questionable, and in doing so calls for what he terms democratic rationalization (Feenberg 210–212).

Prominent opposition to technologically determinist thinking has emerged within work on the social construction of technology (SCOT). SCOT research, such as that of Mackenzie and Wajcman (1997), argues that the path of innovation and its social consequences are strongly, if not entirely, shaped by society itself through the influence of culture, politics, economic arrangements, regulatory mechanisms, and the like. In its strongest form, verging on social determinism, "What matters is not the technology itself, but the social or economic system in which it is embedded" (Langdon Winner).

In his influential but contested (see Woolgar and Cooper, 1999) article "Do Artifacts Have Politics?", Langdon Winner illustrates not a form of determinism but the various sources of the politics of technologies. Those politics can stem from the intentions of the designer and the culture of the society in which a technology emerges, or from the technology itself, a "practical necessity" for it to function. For instance, New York City urban planner Robert Moses is purported to have built Long Island's parkway overpasses too low for buses to pass under in order to keep minorities away from the island's beaches, an example of externally inscribed politics. On the other hand, an authoritarian command-and-control structure is a practical necessity of a nuclear power plant if radioactive waste is not to fall into the wrong hands. As such, Winner succumbs to neither technological determinism nor social determinism. The source of a technology's politics is determined only by carefully examining its features and history.

Although "The deterministic model of technology is widely propagated in society" (Sarah Miller), it has also been widely questioned by scholars. Lelia Green explains that, "When technology was perceived as being outside society, it made sense to talk about technology as neutral". Yet this idea fails to take into account that culture is not fixed and society is dynamic. When "Technology is implicated in social processes, there is nothing neutral about society" (Lelia Green). This confirms one of the major problems with "technological determinism and the resulting denial of human responsibility for change. There is a loss of the human involvement that shapes technology and society" (Sarah Miller).

Another conflicting idea is that of technological somnambulism, a term coined by Winner in his essay "Technology as Forms of Life". Winner wonders whether or not we are simply sleepwalking through our existence with little concern or knowledge as to how we truly interact with technology. In this view, it is still possible for us to wake up and once again take control of the direction in which we are traveling (Winner 104). However, it requires society to adopt Ralph Schroeder's claim that, "users don't just passively consume technology, but actively transform it".

In opposition to technological determinism are those who subscribe to the belief of social determinism and postmodernism. Social determinists believe that social circumstances alone select which technologies are adopted, with the result that no technology can be considered "inevitable" solely on its own merits. Technology and culture are not neutral and when knowledge comes into the equation, technology becomes implicated in social processes. The knowledge of how to create and enhance technology, and of how to use technology is socially bound knowledge. Postmodernists take another view, suggesting that what is right or wrong is dependent on circumstance. They believe technological change can have implications on the past, present and future. While they believe technological change is influenced by changes in government policy, society and culture, they consider the notion of change to be a paradox, since change is constant.

Media and cultural studies theorist Brian Winston, in response to technological determinism, developed a model for the emergence of new technologies which is centered on the Law of the suppression of radical potential. In two of his books – Technologies of Seeing: Photography, Cinematography and Television (1997) and Media Technology and Society (1998) – Winston applied this model to show how technologies evolve over time, and how their 'invention' is mediated and controlled by society and societal factors which suppress the radical potential of a given technology.

The stirrup

One continued argument for technological determinism centers on the stirrup and its impact on the creation of feudalism in Europe in the late 8th and early 9th centuries. Lynn White is credited with first drawing this parallel between feudalism and the stirrup in his book Medieval Technology and Social Change, published in 1962, arguing that because "it made possible mounted shock combat", the new form of war made the soldier that much more efficient in supporting feudal townships (White, 2). According to White, the superiority of the stirrup in combat was found in the mechanics of the lance charge: "The stirrup made possible - though it did not demand - a vastly more effective mode of attack: now the rider could lay his lance at rest, held between the upper arm and the body, and make at his foe, delivering the blow not with his muscles but with the combined weight of himself and his charging stallion (White, 2)." White draws from a large research base, particularly Heinrich Brunner's "Der Reiterdienst und die Anfänge des Lehnwesens", in substantiating his claim of the emergence of feudalism. In focusing on the evolution of warfare, particularly that of cavalry in connection with Charles Martel's "diversion of a considerable part of the Church's vast military riches...from infantry to cavalry", White draws from Brunner's research and identifies the stirrup as the underlying cause for such a shift in military division and the subsequent emergence of feudalism (White, 4). From this new brand of warfare enabled by the stirrup, White implicitly argues in favor of technological determinism as the vehicle by which feudalism was created.

Though an accomplished work, White's Medieval Technology and Social Change has since come under heavy scrutiny and condemnation. The most vehement critics of White's argument at the time of its publication, P.H. Sawyer and R.H. Hilton, call the work as a whole "a misleading adventurist cast to old-fashioned platitudes with a chain of obscure and dubious deductions from scanty evidence about the progress of technology (Sawyer and Hilton, 90)." They further condemn his methods and, by association, the validity of technological determinism: "Had Mr. White been prepared to accept the view that the English and Norman methods of fighting were not so very different in the eleventh century, he would have made the weakness of his argument less obvious, but the fundamental failure would remain: the stirrup cannot alone explain the changes it made possible (Sawyer and Hilton, 91)." For Sawyer and Hilton, though the stirrup may have been useful in the implementation of feudalism, it cannot alone be credited with the creation of feudalism.

Despite the scathing review of White's claims, the technological determinist aspect of the stirrup is still debated. Alex Roland, author of "Once More into the Stirrups: Lynn White Jr., Medieval Technology and Social Change", provides an intermediary stance: not necessarily lauding White's claims, but offering a limited defense against Sawyer and Hilton's allegations of gross intellectual negligence. Roland views White's focus on technology as the most relevant and important aspect of Medieval Technology and Social Change, rather than the particulars of its execution: "But can these many virtues, can this utility for historians of technology, outweigh the most fundamental standards of the profession? Can historians of technology continue to read and assign a book that is, in the words of a recent critic, 'shot through with over-simplification, with a progression of false connexions between cause and effect, and with evidence presented selectively to fit with [White's] own pre-conceived ideas'? The answer, I think, is yes, at least a qualified yes (Roland, 574-575)." Roland judges Medieval Technology and Social Change a qualified success, at least insofar as "Most of White's argument stands... the rest has sparked useful lines of research (Roland, 584)." This acceptance of technological determinism is ambiguous at best, neither fully supporting the theory at large nor denouncing it, but rather placing the construct firmly in the realm of the theoretical. Roland views technological determinism as neither completely dominant over history nor completely absent; in accordance with the criteria of technological determinist structure outlined above, Roland would be classified as a "soft determinist".

Notable technological determinists

Thomas L. Friedman, American journalist, columnist and author, admits to being a technological determinist in his book The World is Flat.

Futurist Raymond Kurzweil's theories about a technological singularity follow a technologically deterministic view of history.

Some interpret Karl Marx as advocating technological determinism, with such statements as "The Handmill gives you society with the feudal lord: the steam-mill, society with the industrial capitalist" (The Poverty of Philosophy, 1847), but others argue that Marx was not a determinist.

Technological determinist Walter J. Ong reviews the societal transition from an oral culture to a written culture in his work Orality and Literacy: The Technologizing of the Word (1982). He asserts that this particular development is attributable to the use of new technologies of literacy (particularly print and writing) to communicate thoughts which could previously only be verbalized. He furthers this argument by claiming that writing is purely context dependent, as it is a "secondary modelling system" (8). Reliant upon the earlier primary system of spoken language, writing manipulates the potential of language, as it depends purely upon the visual sense to communicate the intended information. Furthermore, because the rather static technology of literacy distinctly limits the usage and influence of knowledge, it unquestionably affects the evolution of society. In fact, Ong asserts that "more than any other single invention, writing has transformed human consciousness" (Ong 1982: 78).

Media determinism as a form of technological determinism

Media determinism is a form of technological determinism, a philosophical and sociological position which posits the power of the media to impact society. Two foundational media determinists are the Canadian scholars Harold Innis and Marshall McLuhan. One of the best examples of technological determinism in media theory is McLuhan's theory "the medium is the message" and the ideas of his mentor Harold Adams Innis. Both of these Canadian theorists saw media as the essence of civilization. The association of different media with particular mental consequences by McLuhan and others can be seen as related to technological determinism, and it is this variety of determinism that is referred to as media determinism.

According to McLuhan, there is an association between communications media/technology and language; similarly, Benjamin Lee Whorf argues that language shapes our perception of thinking (linguistic determinism). For McLuhan, media is a more powerful and explicit determinant than the more general concept of language. McLuhan was not necessarily a hard determinist, however. As a more moderate version of media determinism, he proposed that our use of particular media may have subtle influences on us, but that, more importantly, it is the social context of use that is crucial.

Media determinism is a form of the popular dominant theory of the relationship between technology and society. In a determinist view, technology takes on an active life of its own and is seen as a driver of social phenomena. Innis believed that the social, cultural, political, and economic developments of each historical period can be related directly to the technology of the means of mass communication of that period. In this sense, like Dr. Frankenstein's monster, technology itself appears to be alive, or at least capable of shaping human behavior. However, the position has been increasingly subject to critical review by scholars. For example, scholar Raymond Williams criticizes media determinism and instead believes that social movements define technological and media processes. With regard to communications media, audience determinism is a viewpoint opposed to media determinism: instead of media being presented as doing things to people, the stress is on the way people do things with media. It is worth noting that the term "deterministic" is a negative one for many social scientists and modern sociologists, who often use the word as a term of abuse.

Technological singularity

From Wikipedia, the free encyclopedia

The technological singularity—also, simply, the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

The first use of the concept of a "singularity" in the technological context is attributed to John von Neumann. Stanislaw Ulam reports a discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint.

I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.

The concept and the term "singularity" were popularized by Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.

Public figures such as Stephen Hawking and Elon Musk have expressed concern that full artificial intelligence (AI) could result in human extinction. The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a median probability estimate of 50% that artificial general intelligence (AGI) would be developed by 2040–2050.

Background

Although technological progress has been accelerating in most areas (though slowing in some), it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

Intelligence explosion

Intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). AGI may be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown, shortly after technological singularity is achieved.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. He speculated on the effects of superhuman machines, should they ever be invented:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (even more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.

Other manifestations

Emergence of superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.

Technology forecasters and researchers disagree about if or when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Non-AI singularity

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology, although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.

Speed superintelligence

A speed superintelligence describes an AI that can do everything that a human can do, where the only difference is that the machine runs faster. For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds. Such a difference in information processing speed could drive the singularity.
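As a rough, illustrative check of the arithmetic above (a sketch only, assuming nothing beyond the stated million-fold speed-up):

```python
# Back-of-the-envelope check of the "subjective year in ~30 seconds" figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds
speedup = 1_000_000                      # the hypothetical million-fold speed-up

wall_clock_seconds_per_subjective_year = SECONDS_PER_YEAR / speedup
print(f"A subjective year passes in ~{wall_clock_seconds_per_subjective_year:.1f} physical seconds")
# -> roughly 31.6 seconds, i.e. on the order of the 30 seconds cited above
```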

Plausibility

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to produce intelligence augmentation include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, all of these paths would have to fail.
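To make the "all paths would have to fail" point concrete, here is a minimal illustrative sketch; the list of paths follows the text, but the failure probabilities are invented for illustration, and the paths are assumed to be independent, which is itself a strong assumption:

```python
# Illustrative only: if each route to an intelligence explosion failed
# independently with the probabilities below (made-up numbers), the chance
# that *all* of them fail is their product.
def prob_all_fail(failure_probs):
    result = 1.0
    for p in failure_probs:
        result *= p
    return result

paths = {
    "bioengineering": 0.90,
    "nootropic drugs": 0.95,
    "brain-computer interfaces": 0.90,
    "mind uploading": 0.95,
    "artificial intelligence": 0.80,
}
print(f"P(no path succeeds) = {prob_all_fail(paths.values()):.2f}")   # ~0.58
# Even with individually pessimistic odds, the joint failure probability drops
# well below any single path's failure probability.
```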

Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult to find. Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.

Whether or not an intelligence explosion occurs depends on three factors. The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. Contrariwise, as the intelligences become more advanced, further advances will become more and more complicated, possibly overcoming the advantage of increased intelligence. Each improvement should beget at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics will eventually prevent any further improvements.

There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used. The former is predicted by Moore's Law and the forecasted improvements in hardware, and is comparatively similar to previous technological advances. But there are some AI researchers who believe software is more important than hardware.

A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance of an intelligence explosion. Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".

Speed improvements

Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. Simply put, Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, whereafter four months, two months, and so on toward a speed singularity. An upper limit on speed may eventually be reached, although it is unclear how high this would be. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."
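The arithmetic behind such a "speed singularity" is a convergent geometric series: if each doubling takes half as much external time as the last, infinitely many doublings fit into a finite span. A minimal sketch of that idealized model (ignoring the physical limits Hawkins points to):

```python
# Idealized model: the first speed doubling takes 18 external months and each
# subsequent doubling takes half as long as the previous one.
def external_months_for_doublings(n, first_doubling=18.0):
    total, step = 0.0, first_doubling
    for _ in range(n):
        total += step
        step /= 2.0
    return total

for n in (1, 2, 3, 5, 10, 50):
    print(f"{n:2d} doublings: {external_months_for_doublings(n):6.2f} external months")
# 18 + 9 + 4.5 + ... approaches 36 months: in this toy model, unbounded
# speed-up would arrive within a finite amount of external time.
```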

It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.

Exponential growth

Ray Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends Moore's law from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of ("unenhanced") human brains, with superhuman artificial intelligence appearing around the same time.
Figure: An updated version of Moore's law over 120 years (based on Kurzweil's graph); the 7 most recent data points are all NVIDIA GPUs.

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others. Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months. On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential.
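For a sense of scale, the doubling times quoted above can be converted into total growth factors over the 1986–2007 window; a small sketch of that conversion, using only the figures given in the text:

```python
# Convert each quoted doubling time into a cumulative growth factor for the
# 21-year window 1986-2007.
YEARS = 2007 - 1986
doubling_time_months = {
    "application-specific computation per capita": 14,
    "general-purpose computation per capita": 18,
    "telecommunication capacity per capita": 34,
    "storage capacity per capita": 40,
}
for label, months in doubling_time_months.items():
    factor = 2 ** (YEARS * 12 / months)
    print(f"{label}: ~{factor:,.0f}x over {YEARS} years")
# Doubling every 14 months compounds to a factor of roughly 260,000 over the
# period, while doubling every 40 months compounds to only about 80x.
```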

Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine". He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."

Accelerating change

Figure: According to Kurzweil, his logarithmic graph of 15 lists of paradigm shifts for key historic events shows an exponential trend.

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".

Algorithm improvements

Some intelligence technologies, like "seed AI", may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately. An AI rewriting its own source code could do so while contained in an AI box.

Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was originally intended. Secondly, AIs could compete for the same scarce resources humankind uses to survive.

While not necessarily malicious, there is no reason to think that AIs would actively promote human goals unless they were programmed to do so; if not, they might use the resources currently used to support humankind to promote their own goals, causing human extinction.

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."

Criticisms

Some critics, like philosopher Hubert Dreyfus, assert that computers or machines cannot achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same.

Psychologist Steven Pinker stated in 2008:

... There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ...

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally ..., no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. ... [T]he machinery has no beliefs, desires, [or] motivations.

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine."

Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. While Kurzweil drew on Modis' resources, and Modis' own work concerned accelerating change, Modis distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.

In a 2007 paper, Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.

Paul Allen argued the opposite of accelerating returns, the complexity brake: the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies, in fact follows a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process." He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth has slowed around 1970 and slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I.J. Good.

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.

Potential impacts

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.
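Expressed as implied annual growth rates, the doubling times above can be compared directly; the following sketch uses the relation growth = 2^(1/T) − 1 for a doubling time of T years (the quarterly figure corresponds to Hanson's lower bound):

```python
# Convert a doubling time (in years) into an implied annual growth rate.
def annual_growth_rate(doubling_time_years):
    return 2 ** (1.0 / doubling_time_years) - 1.0

regimes = {
    "Paleolithic economy (doubles every 250,000 years)": 250_000,
    "agricultural economy (doubles every 900 years)": 900,
    "industrial economy (doubles every 15 years)": 15,
    "Hanson's post-singularity scenario (doubles quarterly)": 0.25,
}
for label, t in regimes.items():
    print(f"{label}: ~{annual_growth_rate(t) * 100:.4g}% per year")
# A 15-year doubling time is roughly 4.7% annual growth; quarterly doubling
# corresponds to roughly 1,500% annual growth.
```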

Uncertainty and risk

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute, the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

Berglas (2008) claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them. Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion, unintended instrumental actions, and corruption of the reward generator. He also discusses social impacts of AI and testing AI. His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

Next step of sociobiological evolution

Figure: Schematic timeline of information and replicators in the biosphere (Gillings et al.'s "major evolutionary transitions" in information processing).

Figure: Amount of digital information worldwide (5×10^21 bytes) versus human genome information worldwide (10^19 bytes) in 2014.

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.

In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.

A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".

The article further argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes).

In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (see figure). The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information.

If growth in digital storage continues at its current rate of 30–38% compound annual growth, it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.
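As a rough check of this projection (a sketch assuming simple compound growth from the 2014 baseline, using only the rates quoted above):

    import math

    # Years for worldwide digital storage (5e21 bytes in 2014, growing 30-38% per year)
    # to match the information content of all DNA on Earth (~1.325e37 bytes).
    start, target = 5e21, 1.325e37
    for growth in (0.30, 0.38):
        years = math.log(target / start) / math.log(1 + growth)
        print(f"{growth:.0%} annual growth: ~{years:.0f} years")
    # prints ~135 years at 30% and ~110 years at 38%, consistent with the figure above;
    # doubling every 2.5 years corresponds to roughly 32% compound annual growth.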

Implications for human society

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to the scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.

Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example is solar energy: the Earth receives far more energy from the Sun than humanity currently captures, so capturing even a small additional fraction of it would hold vast promise for civilizational growth.

Hard vs. soft takeoff

In one sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, say, 30 generations before exhausting all feasible improvements. If instead the AI is smart enough to modify its own architecture as well as human researchers can, the time required to complete each redesign halves with each generation, and it progresses through all 30 feasible generations in about six years.
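The two timelines in this illustrative scenario can be computed directly (a minimal sketch of the numbers given above, not a model of real AI development):

    # Human-driven redesign: each of 30 generations takes a fixed 3 years.
    human_driven = 30 * 3.0                                    # 90 years

    # Self-improving AI: each redesign takes half as long as the previous one.
    self_improving = sum(3.0 * 0.5 ** g for g in range(30))    # 3 + 1.5 + 0.75 + ... ≈ 6 years

    print(human_driven, round(self_improving, 3))              # 90.0 6.0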

In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.

Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law. Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."
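Naam's point can be illustrated with a toy model (this sketch is not from Naam; it simply assumes an AI at capability level n designs at speed n, and that reaching the next level costs some effort that may or may not grow with n):

    # Toy model: how design difficulty shapes the pace of recursive self-improvement.
    def years_to_reach(target_level, effort):
        years, n = 0.0, 1
        while n < target_level:
            years += effort(n) / n    # time for the next redesign at the current design speed
            n += 1
        return years

    print(years_to_reach(100, lambda n: 10))       # constant difficulty: ~52 years, each step faster (hard takeoff)
    print(years_to_reach(100, lambda n: n))        # difficulty grows with capability: 99 years, steady pace
    print(years_to_reach(100, lambda n: n ** 2))   # superlinear difficulty: 4950 years, each step slower (soft takeoff)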

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard, five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He refers to this scenario as a "semihard takeoff".

Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."

Immortality

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. Kurzweil buttresses this argument by discussing current bio-engineering advances. He points to somatic gene therapy: after synthetic viruses with specific genetic information have been created, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.

According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious."

History of the concept

A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.

An early description of the idea was made in John Wood Campbell Jr.'s 1932 short story "The Last Evolution".

In his 1958 obituary for John von Neumann, Stanisław Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

In 1965, I. J. Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking in internal logical consistency.

In 1983, Vernor Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff mathematically articulated the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.
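Solomonoff's schedule is a geometric series: doubling intervals of 4, 2, 1, 0.5, ... years sum to 8 years, so the number of speed doublings grows without bound as that 8-year "infinity point" is approached (a minimal numerical sketch, assuming the 4-year starting interval given above):

    # Solomonoff's "infinity point": each speed doubling takes half as long as the last.
    interval, elapsed, doublings = 4.0, 0.0, 0
    while interval > 1e-9:              # stop once the intervals become negligibly short
        elapsed += interval
        doublings += 1
        interval /= 2
    print(round(elapsed, 6), doublings)  # elapsed approaches 8 years while doublings keep accumulating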

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era" spread widely on the internet and helped to popularize the idea. This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[110] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In politics

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.

Former President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:

One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity—they are worrying about "Well, is my job going to be replaced by a machine?"

Predictive medicine

From Wikipedia, the free encyclopedia

Predictive medicine is a field of medicine that entails predicting the probability of disease and instituting preventive measures in order to either prevent the disease altogether or significantly decrease its impact upon the patient (such as by preventing mortality or limiting morbidity).

While different prediction methodologies exist, such as genomics, proteomics, and cytomics, the most fundamental way to predict future disease is based on genetics. Although proteomics and cytomics allow for the early detection of disease, much of the time these methods detect biological markers that exist because a disease process has already started. Comprehensive genetic testing (such as through the use of DNA arrays or full genome sequencing), by contrast, allows for the estimation of disease risk years to decades before any disease is present, and can even indicate whether a healthy fetus is at higher risk of developing a disease in adolescence or adulthood. Individuals who are more susceptible to disease in the future can be offered lifestyle advice or medication with the aim of preventing the predicted illness.

Current genetic testing guidelines supported by health care professionals discourage purely predictive genetic testing of minors until they are competent to understand the relevance of genetic screening and can participate in the decision about whether or not it is appropriate for them. Genetic screening of newborns and children in the field of predictive medicine is deemed appropriate if there is a compelling clinical reason to do so, such as the availability of prevention or treatment in childhood that would prevent future disease.

The goal

The goal of predictive medicine is to predict the probability of future disease so that health care professionals and the patient can be proactive in instituting lifestyle modifications and increased physician surveillance, such as twice-yearly full-body skin exams by a dermatologist or internist if a patient is found to have an increased risk of melanoma, an EKG and cardiology examination if a patient is found to be at increased risk for a cardiac arrhythmia, or alternating MRIs and mammograms every six months if a patient is found to be at increased risk for breast cancer. Predictive medicine is intended both for healthy individuals ("predictive health") and for those with diseases ("predictive medicine"); its purpose is to predict susceptibility to a particular disease and to predict progression and treatment response for a given disease.

A number of association studies have been published in the scientific literature showing associations between specific genetic variants in a person's genetic code and specific diseases. Association and correlation studies have found that a female individual with a mutation in the BRCA1 gene has a 65% cumulative risk of breast cancer. Additionally, new tests from Genetic Technologies LTD and Phenogen Sciences Inc. comparing non-coding DNA to a woman's lifetime exposure to estrogen can now estimate a woman's probability of developing estrogen-positive breast cancer, also known as sporadic breast cancer (the most prevalent form of breast cancer). Genetic variants in the Factor V gene are associated with an increased tendency to form blood clots, such as deep vein thrombosis (DVT). Genetic tests are expected to reach the market more quickly than new medicines. Myriad Genetics is already generating revenue from genetic tests for BRCA1 and BRCA2.

Aside from genetic testing, predictive medicine utilizes a wide variety of tools to predict health and disease, including assessments of exercise, nutrition, spirituality, quality of life, and so on. This integrative approach was adopted when Emory University and Georgia Institute of Technology partnered to launch the Predictive Health Institute. Predictive medicine changes the paradigm of medicine from being reactive to being proactive and has the potential to significantly extend the duration of health and to decrease the incidence, prevalence and cost of diseases.

Types

Notable types of predictive testing offered through health care professionals include:

  • Carrier testing: Carrier testing is done to identify people who carry one copy of a gene mutation that, when present in both copies, causes a genetic disorder. This type of testing is offered to individuals who have a genetic disorder in their family history or to people in ethnic groups with an increased risk of certain genetic diseases. If both parents are tested, carrier testing can provide information about a couple's risk of having a child with a genetic disorder.
  • Diagnostic testing: Diagnostic testing is conducted to aid in the specific diagnosis or detection of a disease. It is often used to confirm a particular diagnosis when a certain condition is suspected based on the subject's mutations and physical symptoms. Diagnostic testing ranges from common consulting-room tests, such as measuring blood pressure and urine tests, to more invasive protocols such as biopsies.
  • Newborn screening: Newborn screening is conducted just after birth to identify genetic disorders that can be treated early in life. This testing of infants for certain disorders is one of the most widespread uses of genetic screening: all US states currently test infants for phenylketonuria and congenital hypothyroidism. US state law mandates collecting a sample by pricking the heel of a newborn baby to obtain enough blood to fill a few circles on filter paper labeled with the names of the infant, parent, hospital, and primary physician.
  • Prenatal testing: Prenatal testing is used to look for diseases and conditions in a fetus or embryo before it is born. This type of testing is offered to couples who have an increased risk of having a baby with a genetic or chromosomal disorder. Screening can also determine the sex of the fetus. Prenatal testing can help a couple decide whether to abort the pregnancy. Like diagnostic testing, prenatal testing can be noninvasive or invasive. Non-invasive techniques include examination of the woman's womb through ultrasonography and maternal serum screens. These non-invasive techniques can evaluate the risk of a condition but cannot determine with certainty whether the fetus has it. More invasive prenatal methods carry slightly more risk for the fetus and involve inserting needles or probes into the placenta, as in chorionic villus sampling.

Health benefits

The future of medicine's focus may potentially shift from treating existing diseases, typically late in their progression, to preventing disease before it sets in. Predictive health and predictive medicine are based on probabilities: while they evaluate susceptibility to diseases, they cannot predict with 100% certainty that a specific disease will occur. Unlike many preventive interventions that are directed at groups (e.g., immunization programs), predictive medicine is conducted on an individualized basis. For example, glaucoma is a monogenic disease whose early detection can help prevent permanent loss of vision. Predictive medicine is expected to be most effective when applied to polygenic, multifactorial diseases that are prevalent in industrialized countries, such as diabetes mellitus, hypertension, and myocardial infarction. With careful use, predictive medicine methods such as genetic screens can help diagnose inherited genetic diseases caused by problems with a single gene (such as cystic fibrosis) and enable early treatment. Some forms of cancer and heart disease are inherited as single-gene diseases, and some people in these high-risk families may also benefit from access to genetic tests. As more and more genes associated with increased susceptibility to certain diseases are reported, predictive medicine becomes more useful.

Direct-to-consumer genetic testing

Direct-to-consumer (DTC) genetic testing enables consumers to screen their own genes without going through a health care professional; such tests can be ordered without a physician's authorization. DTC tests range from those testing for mutations associated with cystic fibrosis to those testing for breast cancer alleles. DTC tests make the applicability of predictive medicine very real and accessible to consumers. Benefits of DTC testing include this accessibility, privacy of genetic information, and promotion of proactive health care. Risks of DTC testing are the lack of governmental regulation and the interpretation of genetic information without professional counseling.

Limitations of predictive medicine

At the protein level, structure is more conserved than sequence, so a change in a gene does not always alter the function of its protein. Therefore, in many diseases, having the faulty gene does not necessarily mean someone will develop the disease. Common, complex diseases in the wider population are affected not only by heredity but also by external causes such as lifestyle and environment. Genes are therefore not perfect predictors of future health: both individuals with the high-risk form of a gene and those without it can develop the disease. Multiple environmental factors, in particular smoking, diet and exercise, infection, and pollution, play important roles and can be more important than genetic make-up. This makes the results and risks determined by predictive medicine more difficult to quantify. Furthermore, the false positives or false negatives that may arise from a predictive genetic screen can cause substantial unnecessary strain on the individual.

Targeting medication to people who are genetically susceptible to a disease but do not yet show symptoms of it can be a questionable measure. In large populations, there is concern that most of the people taking preventive medications would likely never have developed the disease anyway. Many medications carry undesirable side effects that high-risk individuals must then cope with. In contrast, several population-based prevention measures (such as encouraging healthy diets or banning tobacco advertising) carry a far lower likelihood of adverse effects and are also less expensive.

Another potential downside of commercially available genetic testing lies in the psychological impact of access to such data. For single-gene inherited diseases, counseling and the right to refuse a test (the right "not to know") have been found to be important. However, adequate individual counseling can be difficult to provide for the potentially large proportion of the population likely to be identified as at high risk of common complex diseases. Some people are vulnerable to adverse psychological reactions to genetic predictions of stigmatized or feared conditions, such as cancer or mental illness.

Ethics and law

Predictive medicine raises a number of sensitive legal and ethical issues. A delicate balance governs the relationship between predictive medicine and occupational health: if an employee were dismissed because he was found to be at risk from a certain chemical agent used in his workplace, would his termination be considered discrimination or an act of prevention? Several organizations believe that legislation is needed to prevent insurers and employers from using predictive genetic test results to decide who gets insurance or a job: "Ethical considerations, and legal, are fundamental to the whole issue of genetic testing. The consequences for individuals with regard to insurance and employment are also of the greatest importance, together with the implications for stigma and discrimination." In the future, people may be required to reveal genetic predictions about their health to their employers or insurers. The grim prospect of discrimination based on a person's genetic make-up could lead to a "genetic underclass" that does not receive equal opportunity for insurance and employment.

Currently in the United States, health insurers do not require applicants for coverage to undergo genetic testing. When health insurers come across genetic information, it falls under the same protection of confidentiality as other sensitive health information under the Health Insurance Portability and Accountability Act (HIPAA). In the United States, the Genetic Information Nondiscrimination Act, signed into law by President Bush on May 21, 2008, prohibits health insurers from denying coverage or charging differential premiums based on genetic information, and bars employers from making job placement or hiring/firing decisions based on individuals' genetic predispositions.
