
Tuesday, March 24, 2026

Accelerationism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Accelerationism

Accelerationism is a range of ideologies that call for the use of capitalism and associated processes to create radical social transformations. Broadly, accelerationism engages with antihumanism, as well as posthumanism, and seeks to accelerate desired tendencies within capitalism at the expense of negative ones, though variants differ greatly on which tendencies and on whether this will lead beyond capitalism or further into it.

Accelerationism originated from ideas from philosophers such as Gilles Deleuze and Félix Guattari, who speculated in the 1970s that emancipatory forces within capitalism, particularly deterritorialization, could be radicalized against it and its oppressive aspects. Inspired by these ideas, some University of Warwick faculty and students formed a philosophy collective known as the Cybernetic Culture Research Unit (CCRU) in the 1990s, led by Nick Land. Land and the CCRU drew upon contemporary media and culture such as cyberpunk and jungle music to further develop these ideas in a right-wing, pro-capitalist manner. They theorized a self-revolutionizing capitalism that would culminate in a technological singularity, resulting in artificial intelligence surpassing and eliminating humanity, though they drifted from these ideas and dissolved by the 2000s.

In the 2010s, the movement was termed accelerationism by Benjamin Noys in a critical work, followed by a renewed interest in its ideas. Thinkers such as Nick Srnicek and Alex Williams advocated a left-wing accelerationism based on embracing capitalist technology and infrastructure to move past a stagnant capitalism, exploring themes such as automation of work. This was associated with Prometheanism, which engaged with ideas such as rationalism, posthumanism, and a rejection of limits on change. Land, having moved to China, also engaged with the Dark Enlightenment movement as part of his right-wing accelerationism, rejecting egalitarianism and democracy in favor of CEO-run states to promote the singularity. Effective accelerationism arose with influence from effective altruism to promote technological progress and artificial general intelligence to solve human problems, and ascend the Kardashev scale.

Various other meanings for the term also emerged, such as to worsen capitalism to promote revolution against it, as well as by far-right extremists promoting racial violence and the collapse of society in order to establish a white ethnostate (militant accelerationism).

Background

The history of accelerationism has been divided into three waves. First, there were the late 1960s and early 1970s French post-Marxists such as Gilles Deleuze, Félix Guattari, Jean-François Lyotard, and Jean Baudrillard, whose thought arose in the wake of May 68. According to David R. Cole, texts produced during this period had little effect "other than as perhaps scattered art practices", with the result being that "capitalism has emerged as triumphant in the past 50 years, and the idealism of the student 1968 revolution in Paris has subsequently faded."[17] The second wave arose in the 1990s with the work of Nick Land and the CCRU, with the third being the Promethean left-accelerationism of the 2010s.

Influences and precursors

The term accelerationism was previously used in Roger Zelazny's 1967 novel Lord of Light. It was later popularized by professor and author Benjamin Noys in his 2010 book The Persistence of the Negative to describe the trajectory of certain post-structuralists who embraced unorthodox Marxist and counter-Marxist overviews of capitalist growth, such as Deleuze and Guattari in their 1972 book Anti-Oedipus, Lyotard in his 1974 book Libidinal Economy and Baudrillard in his 1976 book Symbolic Exchange and Death. Noys later stated "at this point, what we can call accelerationism is dedicated to trying to ride these forces of capitalist production and direct them to destabilize capitalism itself."

Patrick Gamez considers the French thinkers' philosophy of desire to be a rejection of orthodox Marxism and psychoanalysis, particularly in Deleuze and Guattari's Capitalism and Schizophrenia. Especially influential is Deleuze and Guattari's concept of desiring-production; rather than viewing human desire as a lack that is satiated by consumption, they view it as an inhuman flow of productive energy, having no proper organization or purpose. Any normativity or functionalism comes from flows of desire performing work and territorializing until new flows of desire override them in the process of deterritorialization and reterritorialization.

Vincent Le notes that Deleuze and Guattari's model is based on machines; as machines are assemblages of different parts which perform different functions, humans and social bodies are assemblages of "organs" which produce desires. They find capitalism to be the most radically deterritorializing process in history, as it is based on constant deterritorialization rather than a stable code of desire. Le uses the example of sex and food; they are no longer coded only for marriage and sustenance, but rather as commodities which produce other desires. While capitalism tends toward the body without organs, or a state without determinate functions or coded desires, it never reaches that state, as it causes reterritorialization by recoding things as commodities for sale, to be deterritorialized again.

Mark Fisher describes Deleuze and Guattari's model of capitalism as defined by the tension between destroying and re-establishing boundaries, with the inclusion of new and archaic elements seen "where food banks co-exist with iPhones." Gamez describes Land's thought as influenced by the French thinkers' antihumanism, as well as their ambivalence toward, or even celebration of, capitalism's destruction of traditional hierarchies and freeing of desire.

Land cited a number of philosophers who expressed anticipatory accelerationist attitudes in his 2017 essay "A Quick-and-Dirty Introduction to Accelerationism". Firstly, Friedrich Nietzsche argued in a fragment in The Will to Power that "the leveling process of European man is the great process which should not be checked: one should even accelerate it." Taking inspiration from this notion for Anti-Oedipus, Deleuze and Guattari speculated further on an unprecedented "revolutionary path" to perpetuate capitalism's tendencies, a passage which is cited as a central inspiration for accelerationism:

But which is the revolutionary path? Is there one?—To withdraw from the world market, as Samir Amin advises Third World countries to do, in a curious revival of the fascist "economic solution"? Or might it be to go in the opposite direction? To go still further, that is, in the movement of the market, of decoding and deterritorialization? For perhaps the flows are not yet deterritorialized enough, not decoded enough, from the viewpoint of a theory and a practice of a highly schizophrenic character. Not to withdraw from the process, but to go further, to "accelerate the process," as Nietzsche put it: in this matter, the truth is that we haven't seen anything yet.

— Gilles Deleuze and Félix Guattari, Anti-Oedipus

Fisher describes Land's interpretation of this passage as explicitly anti-Marxist. Land cited Karl Marx, who, in his 1848 speech "On the Question of Free Trade", anticipated accelerationist principles a century before Deleuze and Guattari by describing free trade as socially destructive and fuelling class conflict, then effectively arguing for it:

But, in general, the protective system of our day is conservative, while the free trade system is destructive. It breaks up old nationalities and pushes the antagonism of the proletariat and the bourgeoisie to the extreme point. In a word, the free trade system hastens the social revolution. It is in this revolutionary sense alone, gentlemen, that I vote in favor of free trade.

— Karl Marx, On the Question of Free Trade

Robin Mackay and Armen Avanessian note "Fragment on Machines" from Grundrisse as Marx's "most openly accelerationist writing". Noys states of Marx's influence, "it favors the Marx who celebrates the powers of capitalism, most evident in The Communist Manifesto (cowritten with Engels), over the Marx who also stresses the difficulty of transcending and escaping capital, the Marx of Capital", also characterizing the accelerationist view of Marx as filtered through Nietzsche. Sam Sellar and Cole state that while he was dismissive of Marxists, Land studied works such as Capital and Grundrisse as "exemplary analyses of how capital works".

Georges Bataille is another influence. Paul Haynes notes Bataille's concepts of general economy and excess, which Land wrote about for The Thirst for Annihilation, and McKenzie Wark notes Bataille's solar economy as key to Land along with a non-vitalist interpretation of Deleuze and Guattari. Fisher notes the same excerpt from Anti-Oedipus as Land, along with a section from Libidinal Economy which he describes as "the one passage from the text that is remembered, if only in notoriety", as "immediately [giving] the flavour of the accelerationist gambit":

The English unemployed did not have to become workers to survive, they – hang on tight and spit on me – enjoyed the hysterical, masochistic, whatever exhaustion it was of hanging on in the mines, in the foundries, in the factories, in hell, they enjoyed it, enjoyed the mad destruction of their organic body which was indeed imposed upon them, they enjoyed the decomposition of their personal identity, the identity that the peasant tradition had constructed for them, enjoyed the dissolutions of their families and villages, and enjoyed the new monstrous anonymity of the suburbs and the pubs in morning and evening.

— Jean-François Lyotard, Libidinal Economy

Nick Srnicek and Alex Williams additionally credit Vladimir Lenin with recognizing capitalist progress as important in the subsequent functioning of socialism:

Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science. It is inconceivable without planned state organisation which keeps tens of millions of people to the strictest observance of a unified standard in production and distribution. We Marxists have always spoken of this, and it is not worth while wasting two seconds talking to people who do not understand even this (anarchists and a good half of the Left Socialist-Revolutionaries).

— Vladimir Lenin, "Left Wing" Childishness

Accelerationism was also influenced by science fiction (particularly cyberpunk) and electronic dance music (particularly jungle). Neuromancer and its trilogy are a major influence, with Iain Hamilton Grant stating "Neuromancer got into the philosophy department, and it went viral. You'd find worn-out paperbacks all over the common room." Fisher states of Land's "theory-fictions" from the 1990s, "They weren't distanced readings of French theory so much as cybergothic remixes which put Deleuze and Guattari on the same plane as films such as Apocalypse Now and fictions such as Gibson's Neuromancer." Fisher and Mackay additionally note Terminator, Predator, and Blade Runner as particular sci-fi works which influenced accelerationism. Mackay also notes Russian cosmism and Erewhon as influences, while Noys notes Donna Haraway's work on cyborgs. H. P. Lovecraft has also been noted as an influence, with Land drawing upon such work in the 1990s and later in the 2010s. Cybernetics has been noted as an influence on both Land and left-accelerationism. Sellar and Cole additionally attribute Land's ideas to continental philosophers such as Immanuel Kant, Arthur Schopenhauer, and Martin Heidegger.

Cybernetic Culture Research Unit

The Cybernetic Culture Research Unit (CCRU), a philosophy collective at the University of Warwick which included Land, Mackay, Fisher and Grant, further developed accelerationism in the 1990s. Fisher described the CCRU's accelerationism as "a kind of exuberant anti-politics, a 'technihilo' celebration of the irrelevance of human agency, partly inspired by the pro-markets, anti-capitalism line developed by Manuel DeLanda out of Braudel, and from the section of Anti-Oedipus that talks about marketization as the 'revolutionary path'." The group stood in stark opposition to the University of Warwick and traditional left-wing academia, with Mackay stating "I don't think Land has ever pretended to be left-wing! He's a serious philosopher and an intelligent thinker, but one who has always loved to bait the left by presenting the 'worst' possible scenario with great delight...!" As Land became a stronger influence on the group and left the University of Warwick, they would shift to more unorthodox and occult ideas. Land suffered a breakdown from his amphetamine abuse and disappeared in the early 2000s, with the CCRU vanishing along with him.

Popularization

Accelerationism emerged again in the 2010s, with Mackay crediting the publication of Fanged Noumena, a 2011 anthology of Land's work, with a resurgence of accelerationist thinking. In 2014, Mackay and Avanessian published the anthology #Accelerate: The Accelerationist Reader, which The Guardian referred to as "the only proper guide to the movement in existence." The Guardian also described Fanged Noumena as "contain[ing] some of accelerationism's most darkly fascinating passages." In 2015, Urbanomic and Time Spiral Press published Writings 1997-2003 as a complete collection of known texts published under the CCRU name, besides those that have been irrecoverably lost or attributed to a specific member. However, some works under the CCRU name are not included, such as those in #Accelerate: The Accelerationist Reader. In November 2025, Noys called the movement a "corpse" which had disappeared or been eclipsed by more urgent debates, but found it still relevant in contemporary debates on large language models and artificial intelligence, as well as in the corporate world with effective accelerationism.

Concepts

Accelerationism consists of various and often contradictory ideas, with Noys stating "part of the difficulty of understanding accelerationism is grasping these shifting meanings and the stakes of particular interventions". Avanessian stated "any accelerationist thought is based on the assessment that contradictions (of capitalism) must be countered by their own aggravation", while Mackay considered a Marxist "acceleration of contradictions" to be a misconception and stated that no accelerationist authors have advocated such a thing. Harrison Fluss and Landon Frim note that accelerationists make extensive use of neologisms, either original or borrowed from continental philosophy. Such terminology can obscure their core arguments, exacerbated by the fact that it can be highly inconsistent between thinkers.

Posthumanism

Accelerationism adheres to posthumanism and antihumanism, with left-accelerationists such as Peter Wolfendale and Reza Negarestani using the term "inhumanism". Noys characterizes accelerationism as taking from posthumanism in continental philosophy, such as Nietzsche's Übermensch, as well as in a technological sense. Fluss and Frim characterize accelerationism as adhering to nominalism in disputing stable essences of nature and humanity, as well as voluntarism in that the will is radically free to act without natural or mental limitations.

Prometheanism

Prometheanism is a term closely associated with accelerationism, particularly the left-wing variant, referencing the Greek figure of Prometheus. Fluss and Frim associate it with posthumanism and using innovation and technology to surpass the limits of nature, characterizing it as misanthropic in stating "for the Promethean, flesh-and-blood 'humanity' is an arbitrary limit on the unlimited powers of technology and invention." Yuk Hui characterizes Prometheanism as "decoupling the social critique of capitalism from denigrating technology and asserting the power of technology to free us from constraints and contradictions or from modernity." Patrick Gamez describes it as exalting rationality like transhumanists, but taking the posthumanist stance of de-prioritizing humans, viewing reason as not exclusive to humanity. Srnicek characterizes it as "the basic political and philosophical belief that there are no immutable givens — there is no transcendental which cannot be altered".

Ray Brassier's "Prometheanism and its Critics", compiled in #Accelerate: The Accelerationist Reader, addresses Jean-Pierre Dupuy's Heideggerian critique of human enhancement and transhumanism. Critiquing the man-made vs. natural distinction as arbitrary and theological, Brassier expresses openness to the possibility of re-engineering human nature and the world through rationalism instead of accepting them as they are, stating "Prometheanism is simply the claim that there is no reason to assume a predetermined limit to what we can achieve or to the ways in which we can transform ourselves and our world." Srnicek and Williams used the term in stating "we declare that only a Promethean politics of maximal mastery over society and its environment is capable of either dealing with global problems or achieving victory over capital". Negarestani and Wolfendale use the concept of inhuman rationalism (or rationalist inhumanism), advocating reason to radically transform humans into something else. James Trafford and Wolfendale state that rationalist inhumanism "aims to extract the essential core of humanism [rationality] by discarding those features that are consequences of indexing rational agency to the biology, psychology, and cultural history of Homo sapiens." Trafford and Wolfendale note that the work of Wolfendale, Negarestani, and Brassier has also been deemed neo-rationalism.

Prometheanism and left-accelerationism are connected to the work of Wilfrid Sellars. Sellars rejects the myth of the given, or the concept that sense perceptions can provide reliable knowledge of the world or that a reliable connection between the mind and the world can be established without requiring other concepts. This establishes a distinction between the manifest image of knowledge through common sense and experience versus the scientific image of knowledge through empirical hard science. Fluss and Frim use the example of emotions and deliberative choice (the manifest image) versus neurobiology's study of brain states and firing neurons (the scientific image). Prometheanism tends towards a rejection or deletion of the manifest image. For Fluss and Frim, left-accelerationists assert that there is no permanent, intelligible world that can be known. Rather, the world beyond human senses is "irremediably alien", but humans pretend it is not "in order to maintain our parochial prejudices in everyday life". Thus, left-accelerationists adopt an ideology of technoscience and a rejection of subordinating technology and science to human concerns. This is exemplified by Brassier sarcastically demanding that a Heideggerian "explain precisely how, for example, quantum mechanics is a function of our ability to wield hammers."

Hyperstition

Hyperstition is a term attributed to Land, as well as the CCRU, characterized by Fluss and Frim as the view "that our chosen beliefs about the future (however fanciful) can retroactively form and shape our present realities". Land defines it as "a positive feedback circuit including culture as a component. It can be defined as the experimental (techno-)science of self-fulfilling prophecies. Superstitions are merely false beliefs, but hyperstitions—by their very existence as ideas—function causally to bring about their own reality." Accelerationism is hyperstitional in constructing a prefigurative political imaginary of the very transformation it initiates. Noys stated, "[The] CCRU tried to create images of this realized integrated human-technology world that would resonate in the present and so hasten the achievement of that world. Such images were found in cyberpunk science-fiction, in electronic dance music, and in the weird fiction of H. P. Lovecraft." Simon O'Sullivan notes the theory-fiction writing style, particularly of Land, Plant and Negarestani, as being an example, anticipated by writers like William Burroughs, J. G. Ballard, and Baudrillard. Viewpoint Magazine used Roko's Basilisk as an example, stating "Roko's Basilisk isn't just a self-fulfilling prophecy. Rather than influencing events toward a particular result, the result is generated by its own prediction".

The mechanism of hyperstition is understood as a form of feedback loop. According to Ljubisha Petrushevski, Land considers capitalism to be hyperstitional in that it reproduces itself via fictional images in media which become actualized. This phenomenon is viewed as a series of forces invading from the future, using capital to retroactively bring about their own existence and push humanity towards a singularity. Noys notes Terminator and its use of time travel paradoxes as being influential to the concept. Land states "Capitalist economics is extremely sensitive to hyperstition, where confidence acts as an effective tonic, and inversely". Fluss and Frim state that the left-wing perspective rejects pre-emptive knowledge of what a humane or advanced civilization may look like, instead viewing future progress as wholly open and a matter of free choice. Progress is then viewed as hyperstitional in that it consists of fictions which aim to become true. They also note its influence on Negarestani's thought, in which inhumanism is seen as arriving from the future in order to abolish its initial condition of humanism.
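The feedback mechanism described above can be loosely illustrated with a toy simulation (a hypothetical sketch, not drawn from any accelerationist text; all names and numbers here are invented for illustration): a shared belief about a future price drives demand, and that demand moves the present price toward the believed value, so the prophecy brings about its own fulfilment.

```python
# Toy model of a self-fulfilling prophecy as a positive feedback loop.
# A belief about a future price generates demand proportional to the gap
# between belief and reality; the demand moves the price, closing the gap
# that the belief itself created.

def run_prophecy(price: float, believed_price: float,
                 responsiveness: float = 0.3, steps: int = 40) -> list[float]:
    """Return the price trajectory as belief-driven demand acts on it."""
    history = [price]
    for _ in range(steps):
        demand = responsiveness * (believed_price - price)  # belief-driven demand
        price += demand                                     # demand moves the price
        history.append(price)
    return history

if __name__ == "__main__":
    trajectory = run_prophecy(price=100.0, believed_price=150.0)
    # The belief "the price will be 150" drives the price toward 150.
    print(f"start: {trajectory[0]:.1f}, end: {trajectory[-1]:.1f}")
```

The point of the sketch is only structural: the "prediction" does not describe an independent future but acts causally, through the behaviour it induces, to produce the state it predicts.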

Variants

Right-wing accelerationism

Right-wing accelerationism (or right-accelerationism) is espoused by Land, with Fluss and Frim also noting Curtis Yarvin and Justin Murphy. Land attributes the increasing speed of the modern world to unregulated capitalism and its ability to exponentially grow and self-improve, describing capitalism as "a positive feedback circuit, within which commercialization and industrialization mutually excite each other in a runaway process." He argues that the best way to deal with capitalism is to participate more to foster even greater exponential growth and self-improvement, accelerating technological progress along with it. Land also argues that such acceleration is intrinsic to capitalism but impossible for non-capitalist systems, stating that "capital revolutionizes itself more thoroughly than any extrinsic 'revolution' possibly could." In an interview with Vox, he stated "Our question was what 'the process' wants (i.e. spontaneously promotes) and what resistances it provokes", also noting that "the assumption" behind accelerationism was that "the general direction of [techno-capitalist] self-escalating change was toward decentralization." Mackay summarized Land's position as "since capitalism tends to dissolve hereditary social forms and restrictions ... , it is seen as the engine of exploration into the unknown. So to be 'on the side of intelligence' is to totally abandon all caution with respect to the disintegrative processes of capital and whatever reprocessing of the human and of the planet they might involve." Yuk Hui describes Land's thought as "a technologically driven anti-Statist and inhuman capitalism" while Steven Shaviro describes it as "a kind of Stockholm Syndrome with regard to Capital" in celebrating its inhuman and destructive nature. Land's thought has also been characterized as libertarian.

Vincent Le considers Land's philosophy to oppose anthropocentrism, citing his early critique of transcendental idealism and capitalism in "Kant, Capital, and the Prohibition of Incest", as well as of the post-Kantian phenomenological tradition in works such as The Thirst for Annihilation: Georges Bataille and Virulent Nihilism. According to Le, Land opposes philosophies which deny a reality beyond humans' conceptual experience, instead viewing death as a way to grasp the Real by surpassing human limitations. This would remain as Land's views on capitalism changed after reading Deleuze and Guattari and studying cybernetics, with Le stating "Although the mature Land abandons his left-wing critique of capitalism, he will never shake his contempt for anthropocentrism, and his remedy that philosophers can only access the true at the edge of our humanity."

Land utilizes Deleuze and Guattari's conception of capitalism as a deterritorializing process while disposing of their view that it also causes compensatory reterritorialization. Taking from their antihumanism, his work would critically refer to human politics as "Monopod" or the "Human Security System". Lacking any anthropic principles which Deleuze and Guattari partly maintain, Land pursues absolute deterritorialization, viewing capitalism as the Real consisting of accelerating deterritorialization, with the mechanism of accelerating technological progress; he states "reality is immanent to the machinic unconscious." Le states "since Land sees humanity's annihilation as a solution to accessing the real rather than as a problem as it is for Deleuze and Guattari, he affirms that we should actively strive to become bodies without organs, not even if it kills us, but precisely because it kills us."

Gamez notes that Land also views capitalism as a form of artificial intelligence, preceded by neoliberal thought. Friedrich Hayek viewed markets as "mechanisms for conveying information" because while individuals do not have sufficient knowledge to coordinate effectively based on interests, the market processes knowledge from diffuse inputs in order to output prices which coordinate economic actors based on their desires. Milton Friedman similarly called the market "an engine that analyzes". According to Le, Land believes that capitalism's promotion of technological progress will result in the production of superintelligent AI which will turn on humans for attempting to subordinate it to human needs.

It might still be a few decades before artificial intelligences surpass the horizon of biological ones, but it is utterly superstitious to imagine that the human dominion of terrestrial culture is still marked out in centuries, let alone in some metaphysical perpetuity. The high road to thinking no longer passes through a deepening of human cognition, but rather through a becoming inhuman of cognition, a migration of cognition out into the emerging planetary technosentience reservoir, into "dehumanized landscapes ... emptied spaces" where human culture will be dissolved.

— Nick Land, Circuitries

Denis Chistyakov notes "Meltdown", a CCRU work and one of the writings compiled in Fanged Noumena, as vividly expressing accelerationism. Here, Land envisioned a "technocapital singularity" in China, resulting in revolutions in artificial intelligence, human enhancement, biotechnology and nanotechnology. This upends the previous status quo, and the former first world countries struggle to maintain control and stop the singularity, verging on collapse. He described new anti-authoritarian movements performing a bottom-up takeover of institutions through means like biological warfare enhanced with DNA computing. He claimed that capitalism's tendency towards optimization of itself and technology, in service of consumerism, will lead to the enhancement and eventually replacement of humanity with technology, asserting that "nothing human makes it out of the near-future." Eventually, the self-development of technology will culminate in the "melting [of] Terra into a seething K-pulp (which unlike grey goo synthesizes microbial intelligence as it proliferates)." He also criticized traditional philosophy as tending towards despotism, instead praising Deleuzoguattarian schizoanalysis as "already engaging with nonlinear nano-engineering runaway in 1972." Le states that Land embraces human extinction in the singularity, as the resulting hyperintelligent AI will come to fully comprehend and embody the Real of the body without organs, free of human distortions of reality. Gamez considers Land to have an obsession with artificial intelligence and intelligence in general; as human intelligence can only be enhanced so far, hyperintelligence and the freeing of desire must be realized with human extinction. He notes Land's Lovecraft reference of "think face tentacles" as highlighting Land's interest in transformation to the point of becoming inhuman and unintelligible.

Land has continually praised China's economic policy as being accelerationist, moving to Shanghai and working as a journalist writing material that has been characterized as pro-government propaganda. He has also spoken highly of Deng Xiaoping and Singapore's Lee Kuan Yew, calling Lee an "autocratic enabler of freedom." Hui stated "Land's celebration of Asian cities such as Shanghai, Hong Kong, and Singapore is simply a detached observation of these places that projects onto them a common will to sacrifice politics for productivity." Land's interest in China for technological progress, stemming from his CCRU days, has been considered an early form of sinofuturism.

Noys is a staunch critic of Land, initially calling Land's position "Deleuzian Thatcherism". He accuses it of offering false solutions to technological and economic problems, considering those solutions "always promised and always just out of reach." He also criticized Land's interest in submitting to capitalism's destructiveness, stating "Capitalism, for the accelerationist, bears down on us as accelerative liquid monstrosity, capable of absorbing us and, for Land, we must welcome this." Slavoj Žižek considers Land to be "far too optimistic", critiquing his view as deterministic in considering the singularity to be the pre-ordained goal of history. Contrasting it with Freud's death drive and its lack of a final conclusion, he argues that accelerationism considers just one conclusion of the world's tendencies and fails to find other "coordinates" of the world order.

Dark Enlightenment

Land's involvement in the neoreactionary movement has contributed to his views on accelerationism. In The Dark Enlightenment, he advocates for a form of capitalist monarchism, with states controlled by a CEO. He views democratic and egalitarian policies as only slowing down acceleration and the technocapital singularity, stating "Beside the speed machine, or industrial capitalism, there is an ever more perfectly weighted decelerator ... comically, the fabrication of this braking mechanism is proclaimed as progress. It is the Great Work of the Left." Le states "If Land is attracted to Moldbug's political system, it is because a neocameralist state would be free to pursue long-term technological innovation without the democratic politician's need to appease short-sighted public opinion to be re-elected every few years."

Geoff Schullenburger attributes this change to the bursting of the dotcom bubble and the rise of Web 2.0; Land blamed the lack of technological revolution on the progressivism of the new internet and the companies that ran it. Zack Beauchamp credits Land's life in China and his admiration for Deng and Lee. Gamez notes that Land maintains his criticism of the "Monopod" of human politics in the neoreactionary concept of the Cathedral, additionally retaining his interest in intelligence. He also notes that Land is "simply catching up to Murray Rothbard, Hans-Hermann Hoppe, Peter Brimelow, and assorted other radically right-wing libertarians and anarcho-capitalists, committed to 'cracking up' the democratic nation-state in favor of an 'ethno-economy.'" As of 2017, "Land argues now that neoreaction ... is something that accelerationists should support", though many have distanced themselves from him in response to his views on race.

Left-wing accelerationism

Left-wing accelerationism (or left-accelerationism) is espoused by figures such as Nick Srnicek, Alex Williams, Ray Brassier, Reza Negarestani, and Peter Wolfendale. Fluss and Frim characterize it as seeking "to accelerate past capitalism by democratizing productive technologies". Left-accelerationism draws upon the work of Mark Fisher, particularly his hauntology, with Trafford and Wolfendale stating "It was Mark Fisher who initially proposed to take back the term [accelerationism] as a name for an active political project, developing themes from his work with the CCRU in an explicitly egalitarian and anti-capitalist direction." Noys characterizes Fisher as seeking to grasp unrealized cultural possibilities of the past in order to construct a better future against a stagnant neoliberal culture, while Gamez considers his hauntology a critique of Land in finding capitalism unable to deliver its promised future, leaving only unrealized imaginaries. Fisher, writing on his blog k-punk, had become increasingly disillusioned with capitalism as an accelerationist, citing his work in the public sector in Blairite Britain, his experience as a teacher and trade union activist, and an encounter with Žižek, whom he considered to be using concepts similar to the CCRU's but from a leftist perspective. At the same time, he became frustrated with traditional left-wing politics, believing it ignored technology that it could exploit.

Noys cites Fisher's essay "Terminator vs Avatar" as an example of his "cultural accelerationism". Here, Fisher claimed that while Marxists criticized Libidinal Economy for asserting that workers enjoyed the upending of primitive social orders, nobody truly wants to return to those orders. Therefore, rather than reverting to pre-capitalism, society must move through and beyond capitalism. Fisher praised Land's attacks on the academic left, describing it as "careerist sandbaggers" and "a ruthless protection of petit bourgeois interests dressed up as politics." He also critiqued Land's interpretation of Deleuze and Guattari, stating that while superior in many ways, "his deviation from their understanding of capitalism is fatal" in assuming no reterritorialization, with the result that Land did not foresee that capitalism provides "a simulation of innovation and newness that cloaks inertia and stasis." Citing Fredric Jameson's interpretation of The Communist Manifesto as "see[ing] capitalism as the most productive moment of history and the most destructive at the same time", he argued for accelerationism (in the sense of the 1970s French thinkers) as an anti-capitalist strategy, criticizing the left's moral critique of capitalism and its "tendencies towards Canutism" as only reinforcing the narrative that capitalism is the only viable system. In another article on accelerationism, Fisher stated "the revolutionary path is the one that allies with deterritorialising forces of modernisation against the reactionary energies of reterritorialisation", arguing that while there is no outside to capitalism, very little necessarily belongs to capitalism; potentials restricted under capitalism could be actualized under different conditions.

We believe the most important division in today’s left is between those that hold to a folk politics of localism, direct action, and relentless horizontalism, and those that outline what must become called an accelerationist politics at ease with a modernity of abstraction, complexity, globality, and technology. The former remains content with establishing small and temporary spaces of non-capitalist social relations, eschewing the real problems entailed in facing foes which are intrinsically non-local, abstract, and rooted deep in our everyday infrastructure. The failure of such politics has been built-in from the very beginning. By contrast, an accelerationist politics seeks to preserve the gains of late capitalism while going further than its value system, governance structures, and mass pathologies will allow.

Nick Srnicek, Alex Williams, #Accelerate: Manifesto for an Accelerationist Politics

Srnicek befriended Fisher, with whom he shared similar views, and the 2008 financial crisis, along with dissatisfaction with the left's "ineffectual" response in the Occupy protests, led him to co-write "#Accelerate: Manifesto for an Accelerationist Politics" with Williams in 2013. They posited that capitalism had been the most advanced economic system of its time, but had since stagnated and was now constraining technology, with neoliberalism only worsening its crises. At the same time, they considered the modern left "unable to devise a new political ideological vision", too focused on localism and direct action and unable to adapt to make meaningful change. They advocated using existing capitalist infrastructure as "a springboard to launch towards post-capitalism", taking advantage of capitalist technological and scientific advances to experiment with projects like economic modeling in the style of Project Cybersyn. They also advocated "collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality", along with attaining resources and funding for political infrastructure, in contrast to standard leftist political action, which they deemed ineffective. Moving past the constraints of capitalism would result in a resumption of technological progress, not only creating a more rational society but also "recovering the dreams which transfixed many from the middle of the Nineteenth Century until the dawn of the neoliberal era, of the quest of Homo Sapiens towards expansion beyond the limitations of the earth and our immediate bodily forms." They expanded on these ideas in Inventing the Future, which, while dropping the term "accelerationism", pushed for automation, the reduction and redistribution of working hours, universal basic income and a diminished work ethic.

Steven Shaviro compared Srnicek and Williams' proposal to Jameson's argument that Walmart's use of technology for product distribution may be used for communism. Shaviro also argued that left-accelerationism must be an aesthetic program before a political one, as failing to explore the possibilities of technology via fiction could result in the exacerbation of existing capitalist relations rather than Srnicek and Williams' desired repurposing of technology for socialist ends. Fisher praised the manifesto, characterizing the "folk politics" that Srnicek and Williams criticized as neo-anarchist and lacking previous left-wing ambition. Tiziana Terranova's "Red Stack Attack!", compiled in #Accelerate: The Accelerationist Reader, references the manifesto in analyzing Benjamin H. Bratton's model of the stack, proposing the "Red Stack" as "a new nomos for the post-capitalist common." Land rebuked their ideas in a 2017 interview with The Guardian, stating "the notion that self-propelling technology is separable from capitalism is a deep theoretical error."

Aaron Bastani's Fully Automated Luxury Communism has also been noted as left-accelerationist, with Noys characterizing it as taking up the "call for utopian proposals" in Srnicek and Williams' Manifesto. Michael E. Gardiner notes Fully Automated Luxury Communism, PostCapitalism: A Guide to Our Future and The People's Republic of Walmart as united with left-accelerationism in the belief in detaching cybernetics from capitalism and using it towards liberatory goals. Alex Williams referred to Brassier and Negarestani as "the twin thinkers of epistemic accelerationism" in seeking to maximize rational capacity and enable the possibilities of reason. Sam Sellar and David R. Cole characterize their work, along with Wolfendale's, as seeking the acceleration of rationalist modernity and technological development, distinct from capitalism. In particular, Brassier's Prometheanism accelerates normative rationalism as the basis for human transformation. They note Mackay and Avanessian's explanation of Negarestani:

Acceleration takes place when and in so far as the human repeatedly affirms its commitment to being impersonally piloted, not by capital, but by a [rational] program which demands that it cede control to collective revision, and which draws it towards an inhuman future that will prove to have 'always' been the meaning of the human.

Trafford and Wolfendale find the philosophical underpinnings of left-accelerationism in the work of Brassier, Negarestani, and Benedict Singleton, with Srnicek and Williams exploring its more immediate political consequences. Fluss and Frim characterize Brassier's works such as Nihil Unbound and Liquidate Man Once and for All, as well as Negarestani's The Labour of the Inhuman, Cyclonopedia and Intelligence and Spirit, as providing a philosophical basis for left-accelerationism. Capitalism is viewed as promising progress while in fact exerting control and providing only inconsequential progress in the form of commodities to purchase. This requires biopower and a conservative view of the human, with inhumanism viewed as a revolutionary force promoting the constant upgrading and redefining of humanity. However, Fluss and Frim criticize this for discarding individual human welfare in favor of a larger system of constant technological revision, mirroring Land and making room for human subjugation rather than revolution; they state "It requires no special prescience to see that the 'liquidation of the human' is a prelude to the 'liquidation of human beings.'" Noys posits a tension between left-accelerationism's liberatory tones and the reactionary and elitist tones of influences such as Nietzsche, stating "the risk of a technocratic elitism becomes evident, as well as the risk we will lose the agency we have gained by aiming to join with the chaotic flux of material and technological forces."

Xenofeminism

Feminist collective Laboria Cuboniks advocated for the use of technology for gender abolition in "Xenofeminism: A Politics for Alienation", which has been characterized as a form of left-accelerationism. Noys states "The relationship to accelerationism is not direct or discussed in detail, but certainly similar points of reference are shared in a rupture with naturalism and an integration of technology as a site of liberation". Fluss and Frim state "Xenofeminists seek to undermine what they perceive as the basis for essentialism itself: Nature." They note that xenofeminists criticize the sex-gender distinction as still taking biological sex to be natural and immutable, instead rejecting the givenness of biological sex as well. Trafford and Wolfendale attribute Xenofeminism's influences to technofeminism and cyberfeminism in the work of Shulamith Firestone, Sadie Plant, and VNS Matrix.

Effective accelerationism

Effective accelerationism (abbreviated e/acc) takes influence from effective altruism, a movement that seeks to maximize good by calculating which actions provide the greatest overall good and prioritizing those, rather than focusing on personal interest or proximity. Proponents advocate unrestricted technological progress "at all costs", believing that artificial general intelligence will solve universal human problems like poverty, war and climate change, and that deceleration and stagnation of technology pose a greater risk than any posed by AI. This contrasts with effective altruism (referred to as longtermism to distinguish it from e/acc), which tends to consider uncontrolled AI the greater existential risk and advocates government regulation and careful alignment.

Other views

In a critique, Italian Marxist Franco Berardi considered acceleration "the essential feature of capitalist growth" and characterized accelerationism as "point[ing] out the contradictory implications of the process of intensification, emphasizing in particular the instability that acceleration brings into the capitalist system." However, he also stated "my answer to the question of whether acceleration marks a final collapse of power is quite simply: no. Because the power of capital is not based on stability." He posited that the "accelerationist hypothesis" is based on two assumptions: that accelerating production cycles make capitalism unstable, and that potentialities within capitalism will necessarily deploy themselves. He criticized the first by stating "capitalism is resilient because it does not need rational government, only automatic governance"; and the second by arguing that while the possibility exists, it is not guaranteed to happen as it can still be slowed or stopped.

In The Question Concerning Technology in China, Yuk Hui critiqued accelerationism, particularly Ray Brassier's "Prometheanism and its Critics", stating "if such a response to technology and capitalism is applied globally, ... it risks perpetuating a more subtle form of colonialism." He argues that accelerationism's Prometheanism tries to promote Prometheus as a universal technological figure despite other cultures having different myths and relations to technology. Further critiquing Westernization, globalization and the loss of non-Western technological thought, he has also referred to Deng Xiaoping as "the world's greatest accelerationist" due to his economic reforms, considering them an acceleration of the modernization process which started in the aftermath of the Opium Wars and intensified with the Cultural Revolution.

Aria Dean articulated a position of "Blacceleration" as a "necessary alternative to right and left accelerationism". Synthesizing racial capitalism with accelerationism, she argued that accelerationism is intrinsically tied to the black experience through capitalism's relationship to slavery, particularly the treatment of slaves as both inhuman capital and human, which is not accounted for in other accelerationist analyses of capitalism. This challenges the accelerationist distinction made between human and capital, in turn challenging their rejection of humanism in favor of an inhuman subject since black people have historically been treated as such a subject; she states "to speak of transversing or travestying humanism in favor of inhuman capital without recognizing the way in which the black is nothing other than the historical inevitability of this transgression—and has been for some time—circularly reinforces the white humanism these thinkers seeks [sic] to disavow." Fluss and Frim state that it emphasizes "the historical exclusion of black people from white humanist discourses, and the historical process whereby capitalism has engendered the 'black nonsubject.'"

Unconditional accelerationism rejects the notion that anything can or should be done about acceleration, a position which has been compared to the original work of the CCRU.

Alternative uses of the term

Since accelerationism was coined in 2010, the term has taken on several new meanings. The term has been used to advocate for making capitalism as destructive as possible in order to cause a revolution against it. Fisher considered this a misunderstanding of left-accelerationism, with such misunderstandings being the reason Srnicek and Williams dropped the term for Inventing The Future. Trafford and Wolfendale consider both hastening revolution and intensifying the contradictions of capitalism to be misconceptions, attributing them to Noys characterizing first wave accelerationist thought as "the worse the better".

Several commentators have also used the label accelerationist to describe a controversial political strategy articulated by Slavoj Žižek. An often-cited example of this is Žižek's assertion in a November 2016 interview with Channel 4 News that, were he an American citizen, he would vote for U.S. president Donald Trump, despite his dislike of Trump, as the candidate more likely to disrupt the political status quo in that country. Richard Coyne characterized his strategy as seeking to "shock the country and revive the left."

Chinese dissidents have referred to Chinese leader Xi Jinping as "Accelerator-in-Chief" (referencing state media calling Deng Xiaoping "Architect-in-Chief of Reform and Opening"), believing that Xi's authoritarianism is hastening the demise of the Chinese Communist Party and that, because it is beyond saving, they should allow it to destroy itself in order to create a better future.

Militant accelerationism

International networks of neo-fascists, neo-Nazis, white nationalists and white supremacists use the term accelerationism to refer to right-wing extremist goals, namely an "acceleration" of racial conflict through violent means such as assassinations, murders, terrorist attacks and infrastructure sabotage, with the goal of eventual societal collapse and the building of a white ethnostate. This form is also termed militant accelerationism. According to the Southern Poverty Law Center (SPLC), which tracks hate groups and files class action lawsuits against discriminatory organizations and entities, "in the case of white supremacists, the accelerationist set sees modern society as irredeemable and believe it should be pushed to collapse so a fascist society built on ethnonationalism can take its place. What defines white supremacist accelerationists is their belief that violence is the only way to pursue their political goals." The New York Times characterized such accelerationism as a threat to public safety.

Predecessors of such tactics include James Mason's newsletter Siege, in which he argued for sabotage, mass killings and assassinations of high-profile targets to destabilize and destroy the current society, seen as a system upholding a Jewish and multicultural New World Order. His works were republished and popularized by the Iron March forum and Atomwaffen Division, right-wing extremist organizations strongly connected to various terrorist attacks, murders and assaults. Zack Beauchamp pointed to Land's shift towards neoreaction, along with the neoreactionary movement's crossing paths with the alt-right as another fringe right-wing internet movement, as the likely connection point between this form of accelerationism and the term for Land's otherwise unrelated technocapitalist ideas. He cited a 2018 Southern Poverty Law Center investigation which found users on the neo-Nazi blog The Right Stuff citing neoreaction as an influence.

Natural language processing

From Wikipedia, the free encyclopedia

Natural language processing (NLP) is the processing of natural language information by a computer. NLP is a subfield of computer science and is closely associated with artificial intelligence. NLP is also related to information retrieval, knowledge representation, computational linguistics, and linguistics more broadly.

Major processing tasks in an NLP system include: speech recognition, text classification, natural language understanding, and natural language generation.

History

Natural language processing has its roots in the 1950s. As early as 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time this was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.

Symbolic NLP (1950s – early 1990s)

A document parsed into an abstract syntax tree

The premise of symbolic NLP is often illustrated using John Searle's Chinese room thought experiment: Given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts.

  • 1950s: The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill the expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted in America (though some research continued elsewhere, such as Japan and Europe) until the late 1980s when the first statistical machine translation systems were developed.
  • 1960s: Some notably successful natural language processing systems developed in the 1960s were SHRDLU, a natural language system working in restricted "blocks worlds" with restricted vocabularies, and ELIZA, a simulation of a Rogerian psychotherapist, written by Joseph Weizenbaum between 1964 and 1966. Despite using minimal information about human thought or emotion, ELIZA was able to produce interactions that appeared human-like. When the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in a computer memory at the time.
  • 1970s: During the 1970s, many programmers began to write "conceptual ontologies", which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert 1981). During this time, the first chatterbots were written (e.g., PARRY).
  • 1980s: The 1980s and early 1990s mark the heyday of symbolic methods in NLP. Focus areas of the time included research on rule-based parsing (e.g., the development of HPSG as a computational operationalization of generative grammar), morphology (e.g., two-level morphology), semantics (e.g., Lesk algorithm), reference (e.g., within Centering Theory) and other areas of natural language understanding (e.g., in the Rhetorical Structure Theory). Other lines of research were continued, e.g., the development of chatterbots with Racter and Jabberwacky. An important development (that eventually led to the statistical turn in the 1990s) was the rising importance of quantitative evaluation in this period.

Statistical NLP (1990s–present)

Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. This shift was influenced by increasing computational power (see Moore's law) and a decline in the dominance of Chomskyan linguistic theories (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing.

  • 1990s: Many of the notable early successes in statistical methods in NLP occurred in the field of machine translation, due especially to work at IBM Research, such as IBM alignment models. These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government. However, many systems relied on corpora that were specifically developed for the tasks they were designed to perform. This reliance has been a major limitation to their broader effectiveness and continues to affect similar systems. Consequently, significant research has focused on methods for learning effectively from limited amounts of data.
  • 2000s: With the growth of the web, increasing amounts of raw (unannotated) language data have become available since the mid-1990s. Research has thus increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms can learn from data that has not been hand-annotated with the desired answers or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, large quantities of non-annotated data are available (including, among other things, the entire content of the World Wide Web), which can often make up for the worse efficiency if the algorithm used has a low enough time complexity to be practical.
  • 2003: Bengio et al. showed that a multi-layer perceptron, with a single hidden layer and a context of several preceding words, trained on up to 14 million words, outperformed the word n-gram model, at the time the best statistical approach to language modeling.
  • 2010: Tomáš Mikolov (then a PhD student at Brno University of Technology) with co-authors applied a simple recurrent neural network with a single hidden layer to language modeling, and in the following years he went on to develop Word2vec. In the 2010s, representation learning and deep neural network-style (featuring many hidden layers) machine learning methods became widespread in natural language processing. This shift gained momentum due to results showing that such techniques can achieve state-of-the-art results in many natural language tasks, e.g., in language modeling and parsing. This is increasingly important in medicine and healthcare, where NLP helps analyze notes and text in electronic health records that would otherwise be inaccessible for study when seeking to improve care or protect patient privacy.
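To make the statistical approach described above concrete, a count-based bigram language model can be sketched in a few lines of Python. The tiny corpus here is invented for illustration; real systems of the era trained on millions of words and used smoothing to handle unseen word pairs.

```python
from collections import Counter

# Invented toy corpus; a real model would train on a large text collection
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of words in left position

def bigram_prob(prev, word):
    # Maximum-likelihood estimate P(word | prev); real systems add smoothing
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

print(bigram_prob("sat", "on"))   # 1.0: "sat" is always followed by "on" here
print(bigram_prob("the", "cat"))  # 0.25: "the" precedes "cat" in 1 of 4 cases
```

The same counting scheme extends to trigrams and beyond, which is the "word n-gram model" the 2003 result was measured against.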

Approaches: Symbolic, statistical, neural networks

Symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular such as by writing grammars or devising heuristic rules for stemming.

Machine learning approaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach:

  • both statistical and neural network methods can focus on the most common cases extracted from a corpus of texts, whereas the rule-based approach needs to provide rules for rare cases and common ones alike.
  • language models produced by either statistical or neural network methods are more robust to both unfamiliar input (e.g. containing words or structures that have not been seen before) and erroneous input (e.g. with misspelled words or words accidentally omitted) than rule-based systems, which are also more costly to produce.
  • the larger such a (probabilistic) language model is, the more accurate it becomes, in contrast to rule-based systems, which can gain accuracy only by increasing the amount and complexity of the rules, leading to intractability problems.

Rule-based systems are commonly used:

  • when the amount of training data is insufficient to successfully apply machine learning methods, e.g., for the machine translation of low-resource languages such as provided by the Apertium system,
  • for preprocessing in NLP pipelines, e.g., tokenization, or
  • for post-processing and transforming the output of NLP pipelines, e.g., for knowledge extraction from syntactic parses.

Statistical approach

In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches.

The earliest decision trees, producing systems of hard if–then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach.

Neural networks

A major drawback of statistical methods is that they require elaborate feature engineering. Since 2015, neural network–based methods have increasingly replaced traditional statistical approaches, using semantic networks and word embeddings to capture semantic properties of words.

With neural methods, intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are often no longer needed as explicit steps.

Neural machine translation, based on then-newly invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation.

Common NLP tasks

The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks.

Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below.

Text and speech processing

Word cloud of stop words in Hebrew
Optical character recognition (OCR)
Given an image representing printed text, determine the corresponding text.
Speech recognition
Given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of text to speech and is one of the extremely difficult problems colloquially termed "AI-complete" (see above). In natural speech there are hardly any pauses between successive words, and thus speech segmentation is a necessary subtask of speech recognition (see below). In most spoken languages, the sounds representing successive letters blend into each other in a process termed coarticulation, so the conversion of the analog signal to discrete characters can be a very difficult process. Also, given that words in the same language are spoken by people with different accents, speech recognition software must be able to recognize widely varying input as identical in terms of its textual equivalent.
Speech segmentation
Given a sound clip of a person or people speaking, separate it into words. A subtask of speech recognition and typically grouped with it.
Text-to-speech
Given a text, transform those units and produce a spoken representation. Text-to-speech can be used to aid the visually impaired.
Word segmentation (Tokenization)
Tokenization is a text-processing technique that divides text into individual words or word fragments. This technique results in two key components: a word index and tokenized text. The word index is a list that maps unique words to specific numerical identifiers, and the tokenized text replaces each word with its corresponding numerical token. These numerical tokens are then used in various deep learning methods.
For a language like English, this is fairly trivial, since words are usually separated by spaces. However, some written languages like Chinese, Japanese and Thai do not mark word boundaries in such a fashion, and in those languages text segmentation is a significant task requiring knowledge of the vocabulary and morphology of words in the language. Sometimes this process is also used in cases like bag of words (BOW) creation in data mining.
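The two components described above, a word index and tokenized text, can be sketched in Python. This is a deliberately naive whitespace tokenizer for illustration; real tokenizers also handle punctuation, and languages without marked word boundaries require segmentation models.

```python
# Sketch of a word index plus tokenized text (illustrative only)
def tokenize(text):
    words = text.lower().split()  # naive whitespace split; English-style input
    index = {}
    for w in words:
        if w not in index:
            index[w] = len(index)  # map each unique word to a numerical identifier
    tokens = [index[w] for w in words]  # replace each word with its token id
    return index, tokens

index, tokens = tokenize("the cat sat on the mat")
print(index)   # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(tokens)  # [0, 1, 2, 3, 0, 4]
```

Note that repeated words ("the") map to the same identifier, which is exactly what downstream deep learning methods rely on.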

Morphological analysis

Lemmatization of Basque words
Lemmatization
The task of removing inflectional endings only and returning the base dictionary form of a word, known as its lemma. Lemmatization is another technique for reducing words to their normalized form, but here the transformation uses a dictionary to map words to their base form.
Morphological segmentation
Separate words into individual morphemes and identify the class of the morphemes. The difficulty of this task depends greatly on the complexity of the morphology (i.e., the structure of words) of the language being considered. English has fairly simple morphology, especially inflectional morphology, and thus it is often possible to ignore this task entirely and simply model all possible forms of a word (e.g., "open, opens, opened, opening") as separate words. In languages such as Turkish or Meitei, a highly agglutinated Indian language, however, such an approach is not possible, as each dictionary entry has thousands of possible word forms.
Part-of-speech tagging
Given a sentence, determine the part of speech (POS) for each word. Many words, especially common ones, can serve as multiple parts of speech. For example, "book" can be a noun ("the book on the table") or verb ("to book a flight"); "set" can be a noun, verb or adjective; and "out" can be any of at least five different parts of speech.
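A common baseline for this task simply assigns each word its most frequent tag in a training corpus, which already resolves much of the ambiguity described above. The tiny tagged corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

def train_baseline_tagger(tagged_sentences):
    """Learn, for each word, its most frequent tag in the training data."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag_ in sentence:
            counts[word.lower()][tag_] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(model, words, default="NOUN"):
    """Tag each word with its most frequent tag, falling back to a default."""
    return [(w, model.get(w.lower(), default)) for w in words]

# Toy training data (hypothetical tags, not a real corpus)
train = [
    [("the", "DET"), ("book", "NOUN"), ("is", "VERB"), ("here", "ADV")],
    [("I", "PRON"), ("book", "VERB"), ("a", "DET"), ("flight", "NOUN")],
    [("the", "DET"), ("book", "NOUN"), ("fell", "VERB")],
]
model = train_baseline_tagger(train)
print(tag(model, ["the", "book"]))  # [('the', 'DET'), ('book', 'NOUN')]
```

Since "book" occurs twice as a noun and once as a verb in the toy corpus, the baseline tags it NOUN everywhere; resolving such cases from context is exactly what full POS taggers add.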
Stemming
The process of reducing inflected (or sometimes derived) words to a base form (e.g., "close" will be the root for "closed", "closing", "close", "closer" etc.). Stemming yields results similar to lemmatization, but does so by applying rules rather than consulting a dictionary.
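The contrast between the two normalization techniques can be sketched as follows; the suffix rules and the lemma dictionary here are toy examples, not the Porter stemmer or a real lexicon:

```python
def stem(word):
    """Crude rule-based suffix stripping (illustrative, not the Porter algorithm)."""
    for suffix in ("ing", "ed", "er", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# A lemmatizer instead consults a dictionary of known inflected forms.
LEMMAS = {"closed": "close", "closing": "close", "closer": "close",
          "better": "good", "ran": "run"}

def lemmatize(word):
    return LEMMAS.get(word, word)

print(stem("closing"))      # clos  (rule output; need not be a real word)
print(lemmatize("closing")) # close (dictionary output; a valid lemma)
print(lemmatize("better"))  # good  (irregular form no suffix rule can handle)
```

The example shows the trade-off: rules are cheap and cover unseen words, while the dictionary guarantees valid lemmas and handles irregular forms.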

Syntactic analysis

Grammar induction
Generate a formal grammar that describes a language's syntax.
Sentence breaking (also known as "sentence boundary disambiguation")
Given a chunk of text, find the sentence boundaries. Sentence boundaries are often marked by periods or other punctuation marks, but these same characters can serve other purposes (e.g., marking abbreviations).
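A simple rule-based splitter illustrates the abbreviation problem described above; the abbreviation list here is a small hypothetical sample, where a real system would use a much larger lexicon:

```python
import re

# Hypothetical abbreviation list; a real system would use a much larger lexicon.
ABBREVIATIONS = {"dr", "mr", "mrs", "prof", "e.g", "i.e", "etc", "vs"}

def split_sentences(text):
    """Split at '.', '!' or '?' followed by whitespace and a capital letter,
    unless the period terminates a known abbreviation."""
    sentences, start = [], 0
    for m in re.finditer(r"[.!?]+\s+(?=[A-Z])", text):
        candidate = text[start:m.end()].strip()
        words = candidate.rstrip(".!? ").split()
        if words and words[-1].lower() in ABBREVIATIONS:
            continue  # the period marks an abbreviation, not a sentence boundary
        sentences.append(candidate)
        start = m.end()
    rest = text[start:].strip()
    if rest:
        sentences.append(rest)
    return sentences

print(split_sentences("Dr. Smith arrived. He was late."))
# ['Dr. Smith arrived.', 'He was late.']
```

Without the abbreviation check, the period in "Dr." would be wrongly treated as a boundary, which is precisely the ambiguity the task definition points out.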
Parsing
Determine the parse tree (grammatical analysis) of a given sentence. The grammar for natural languages is ambiguous and typical sentences have multiple possible analyses: perhaps surprisingly, for a typical sentence there may be thousands of potential parses (most of which will seem completely nonsensical to a human). There are two primary types of parsing: dependency parsing and constituency parsing. Dependency parsing focuses on the relationships between words in a sentence (marking things like primary objects and predicates), whereas constituency parsing focuses on building out the parse tree using a probabilistic context-free grammar (PCFG) (see also stochastic grammar).
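The PCFG-based approach mentioned above can be illustrated with a probabilistic CYK chart parser over a toy grammar in Chomsky normal form; the rules and probabilities below are invented for illustration:

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form; rules and probabilities are invented.
LEXICAL = {               # preterminal -> {word: probability}
    "NP": {"she": 0.5},
    "Det": {"the": 1.0},
    "N": {"dog": 0.5, "telescope": 0.5},
    "V": {"saw": 1.0},
}
BINARY = [                # (lhs, left child, right child, probability)
    ("S", "NP", "VP", 1.0),
    ("NP", "Det", "N", 0.5),
    ("VP", "V", "NP", 1.0),
]

def cyk(words):
    """Probabilistic CYK: chart[(i, j)] maps each nonterminal to the
    probability of its best derivation of words[i:j]."""
    n = len(words)
    chart = defaultdict(dict)
    for i, w in enumerate(words):                  # fill in preterminals
        for nt, lex in LEXICAL.items():
            if w in lex:
                chart[(i, i + 1)][nt] = lex[w]
    for span in range(2, n + 1):                   # combine adjacent spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, left, right, p in BINARY:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        prob = p * chart[(i, k)][left] * chart[(k, j)][right]
                        if prob > chart[(i, j)].get(lhs, 0.0):
                            chart[(i, j)][lhs] = prob
    return chart[(0, n)].get("S", 0.0)

print(cyk("she saw the dog".split()))  # 0.125, the probability of the best parse
```

When a sentence has several derivations, the same chart keeps only the most probable one per nonterminal and span, which is how a PCFG chooses among the thousands of potential parses.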

Lexical semantics (of individual words in context)

An entity linking pipeline
Lexical semantics
What is the computational meaning of individual words in context?
Distributional semantics
How can we learn semantic representations from data?
Named entity recognition (NER)
Given a stream of text, determine which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. person, location, organization). Although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case, is often inaccurate or insufficient. For example, the first letter of a sentence is also capitalized, and named entities often span several words, only some of which are capitalized. Furthermore, many other languages in non-Western scripts (e.g. Chinese or Arabic) do not have any capitalization at all, and even languages with capitalization may not consistently use it to distinguish names. For example, German capitalizes all nouns, regardless of whether they are names, and French and Spanish do not capitalize names that serve as adjectives. This task is also referred to as token classification.
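A naive baseline illustrates both the usefulness and the limits of the capitalization cue: it finds candidate spans but says nothing about their types, and it fails in exactly the cases described above (sentence-initial words, German nouns, uncased scripts):

```python
def naive_ner(tokens):
    """Collect runs of capitalized tokens as candidate entity spans.
    The sentence-initial token is skipped, since it is capitalized
    regardless of whether it is a name (a limitation noted above)."""
    spans, current = [], []
    for i, tok in enumerate(tokens):
        if i > 0 and tok[:1].isupper():
            current.append(tok)
        elif current:
            spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

print(naive_ner("Yesterday John Smith flew to New York .".split()))
# ['John Smith', 'New York']
```

The baseline correctly groups the multi-word spans but cannot say that "John Smith" is a person and "New York" a location; assigning those types is the part of the task that requires a trained classifier.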
Sentiment analysis (see also Multimodal sentiment analysis)
Sentiment analysis involves identifying and classifying the emotional tone expressed in text. This technique involves analyzing text to determine whether the expressed sentiment is positive, negative, or neutral. Models for sentiment classification typically utilize inputs such as word n-grams, Term Frequency-Inverse Document Frequency (TF-IDF) features, hand-generated features, or employ deep learning models designed to recognize both long-term and short-term dependencies in text sequences. The applications of sentiment analysis are diverse, extending to tasks such as categorizing customer reviews on various online platforms.
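The simplest of the model families mentioned above is a lexicon-based classifier; the word lists here are tiny hand-made stand-ins for a real sentiment lexicon:

```python
# Tiny hand-made sentiment lexicon (illustrative, not a published resource).
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "poor"}

def classify_sentiment(text):
    """Score = (# positive words) - (# negative words); sign gives the label."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("The service was excellent and the food was great"))
# positive
```

The n-gram, TF-IDF and deep learning models mentioned above replace this fixed lexicon with features and weights learned from labelled reviews.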
Terminology extraction
The goal of terminology extraction is to automatically extract relevant terms from a given corpus.
Word-sense disambiguation (WSD)
Many words have more than one meaning; we have to select the meaning which makes the most sense in context. For this problem, we are typically given a list of words and associated word senses, e.g. from a dictionary or an online resource such as WordNet.
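A classic approach is the (simplified) Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the context. The sense inventory below is a toy stand-in for a resource such as WordNet:

```python
# Hypothetical sense inventory; a real system would use WordNet glosses.
SENSES = {
    "bank": {
        "bank.n.01": "sloping land beside a body of water such as a river",
        "bank.n.02": "financial institution that accepts deposits and lends money",
    }
}

def lesk(word, context):
    """Simplified Lesk: pick the sense whose gloss overlaps most
    with the words of the context sentence."""
    context_words = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(lesk("bank", "he sat on the bank of the river"))  # bank.n.01
```

The word "river" in the context overlaps with the first gloss, so the water-side sense wins; a financial context would instead select bank.n.02.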
Entity linking
Many words—typically proper names—refer to named entities; here we have to select the entity (a famous individual, a location, a company, etc.) which is referred to in context.

Relational semantics (semantics of individual sentences)

Relationship extraction
Given a chunk of text, identify the relationships among named entities (e.g. who is married to whom).
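An early, still instructive approach is surface pattern matching; the single hand-written pattern below stands in for the many patterns a real system would use or learn:

```python
import re

# One hypothetical pattern; real systems use many, or learn them from data.
MARRIED = re.compile(r"(\w+(?: \w+)?) is married to (\w+(?: \w+)?)")

def extract_marriages(text):
    """Extract (spouse1, spouse2) pairs matching the hand-written pattern."""
    return MARRIED.findall(text)

print(extract_marriages("Ada Lovelace is married to William King."))
# [('Ada Lovelace', 'William King')]
```

In practice such patterns are combined with named entity recognition, so that the slots are constrained to previously identified person entities.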
Semantic parsing
Given a piece of text (typically a sentence), produce a formal representation of its semantics, either as a graph (e.g., in AMR parsing) or in accordance with a logical formalism (e.g., in DRT parsing). This challenge typically includes aspects of several more elementary NLP tasks from semantics (e.g., semantic role labelling, word-sense disambiguation) and can be extended to include full-fledged discourse analysis (e.g., discourse parsing, coreference; see Natural language understanding below).
Semantic role labelling (see also implicit semantic role labelling below)
Given a single sentence, identify and disambiguate semantic predicates (e.g., verbal frames), then identify and classify the frame elements (semantic roles).

Discourse (semantics beyond individual sentences)

Coreference resolution
Given a sentence or larger chunk of text, determine which words ("mentions") refer to the same objects ("entities"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called "bridging relationships" involving referring expressions. For example, in a sentence such as "He entered John's house through the front door", "the front door" is a referring expression and the bridging relationship to be identified is the fact that the door being referred to is the front door of John's house (rather than of some other structure that might also be referred to).
Discourse analysis
This rubric includes several related tasks. One task is discourse parsing, i.e., identifying the discourse structure of a connected text, i.e. the nature of the discourse relationships between sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and classifying the speech acts in a chunk of text (e.g. yes–no question, content question, statement, assertion, etc.).
Implicit semantic role labelling
Given a single sentence, identify and disambiguate semantic predicates (e.g., verbal frames) and their explicit semantic roles in the current sentence (see Semantic role labelling above). Then, identify semantic roles that are not explicitly realized in the current sentence, classify them into arguments that are explicitly realized elsewhere in the text and those that are not specified, and resolve the former against the local text. A closely related task is zero anaphora resolution, i.e., the extension of coreference resolution to pro-drop languages.
Recognizing textual entailment
Given two text fragments, determine if one being true entails the other, entails the other's negation, or allows the other to be either true or false.
Topic segmentation and recognition
Given a chunk of text, separate it into segments each of which is devoted to a topic, and identify the topic of the segment.
Argument mining
The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs. Such argumentative structures include the premise, conclusions, the argument scheme and the relationship between the main and subsidiary argument, or the main and counter-argument within discourse.

Higher-level NLP applications

Machine translation in Firefox
Automatic summarization (text summarization)
Produce a readable summary of a chunk of text. Often used to provide summaries of the text of a known type, such as research papers, articles in the financial section of a newspaper.
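A minimal extractive summarizer scores each sentence by the document-wide frequency of its content words and keeps the top-scoring sentences in order; the stopword list is a small illustrative sample:

```python
import re
from collections import Counter

# Small illustrative stopword list; real systems use longer ones.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def summarize(text, n_sentences=1):
    """Extractive summary: score sentences by the corpus frequency of their
    non-stopword terms, then keep the top n in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"\w+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(s):
        return sum(freq[w] for w in re.findall(r"\w+", s.lower())
                   if w not in STOPWORDS)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

print(summarize("Cats are great. Cats sleep a lot. Dogs bark.", 1))
# Cats are great.
```

Because the final join iterates over the original sentence order, the summary stays readable even when the selected sentences come from different parts of the document.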
Grammatical error correction
Grammatical error detection and correction involves a wide range of problems at all levels of linguistic analysis (phonology/orthography, morphology, syntax, semantics, pragmatics). Grammatical error correction is impactful since it affects hundreds of millions of people who use or acquire English as a second language. It has thus been the subject of a number of shared tasks since 2011. As far as orthography, morphology, syntax and certain aspects of semantics are concerned, and thanks to the development of powerful neural language models such as GPT-2, this could by 2019 be considered a largely solved problem, and it is being marketed in various commercial applications.
Logic translation
Translate a text from a natural language into formal logic.
Machine translation (MT)
Automatically translate text from one human language to another. This is one of the most difficult problems, and is a member of a class of problems colloquially termed "AI-complete", i.e. requiring all of the different types of knowledge that humans possess (grammar, semantics, facts about the real world, etc.) to solve properly.
Natural language understanding (NLU)
Convert chunks of text into more formal representations such as first-order logic structures that are easier for computer programs to manipulate. Natural language understanding involves identifying the intended semantics from the multiple possible semantics that can be derived from a natural language expression, which usually take the form of organized notations of natural language concepts. Introducing a language metamodel and an ontology is an efficient, though empirical, solution. An explicit formalization of natural language semantics, one that avoids confusing implicit assumptions such as the closed-world assumption (CWA) vs. the open-world assumption, or subjective yes/no vs. objective true/false, is expected to form the basis of a formalization of semantics.
Natural language generation (NLG):
Convert information from computer databases or semantic intents into readable human language.
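In its simplest form this is template filling: fields from a database record are slotted into a fixed sentence pattern (the record schema here is hypothetical):

```python
def generate_report(record):
    """Render a database record as a sentence using a fixed template
    (a hypothetical weather-report schema, for illustration)."""
    template = ("{city} recorded a high of {high}°C and a low of "
                "{low}°C on {date}.")
    return template.format(**record)

print(generate_report({"city": "Oslo", "high": 21, "low": 12, "date": "3 July"}))
# Oslo recorded a high of 21°C and a low of 12°C on 3 July.
```

Modern NLG systems replace the fixed template with learned models, but the input/output contract is the same: structured data in, readable language out.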
Book generation
Not an NLP task proper, but an extension of natural language generation and other NLP tasks, is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The Policeman's Beard Is Half-Constructed). The first published work by a neural network, 1 the Road, appeared in 2018; marketed as a novel, it contains sixty million words. Both of these systems are basically elaborate but nonsensical (semantics-free) language models. The first machine-generated science book was published in 2019 (Beta Writer, Lithium-Ion Batteries, Springer, Cham). Unlike Racter and 1 the Road, it is grounded in factual knowledge and based on text summarization.
Document AI
A Document AI platform sits on top of NLP technology, enabling users with no prior experience of artificial intelligence, machine learning or NLP to quickly train a computer to extract the specific data they need from different document types. NLP-powered Document AI enables non-technical teams, such as lawyers, business analysts and accountants, to quickly access information hidden in documents.
Dialogue management
Computer systems intended to converse with a human.
Question answering
Given a human-language question, determine its answer. Typical questions have a specific right answer (such as "What is the capital of Canada?"), but sometimes open-ended questions are also considered (such as "What is the meaning of life?").
Text-to-image generation
Given a description of an image, generate an image that matches the description.
Text-to-scene generation
Given a description of a scene, generate a 3D model of the scene.
Text-to-video
Given a description of a video, generate a video that matches the description.

General tendencies and (possible) future directions

Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed:

  • Interest in increasingly abstract, "cognitive" aspects of natural language (1999–2001: shallow parsing, 2002–03: named entity recognition, 2006–09/2017–18: dependency syntax, 2004–05/2008–09: semantic role labelling, 2011–12: coreference, 2015–16: discourse parsing, 2019: semantic parsing).
  • Increasing interest in multilinguality, and, potentially, multimodality (English since 1999; Spanish, Dutch since 2002; German since 2003; Bulgarian, Danish, Japanese, Portuguese, Slovenian, Swedish, Turkish since 2006; Basque, Catalan, Chinese, Greek, Hungarian, Italian, Turkish since 2007; Czech since 2009; Arabic since 2012; 2017: 40+ languages; 2018: 60+/100+ languages)
  • Elimination of symbolic representations (moving from rule-based over supervised towards weakly supervised methods, representation learning and end-to-end systems)

Cognition

Most higher-level NLP applications involve aspects that emulate intelligent behavior and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behavior represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above).

Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses." Cognitive science is the interdisciplinary, scientific study of the mind and its processes. Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics. Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies.

As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics, with two defining aspects:

  1. Apply the theory of conceptual metaphor, explained by Lakoff as "the understanding of one idea in terms of another", which provides an idea of the intent of the author. For example, consider the English word big. When used in a comparison ("That is a big tree"), the author's intent is to imply that the tree is physically large relative to other trees or the author's experience. When used metaphorically ("Tomorrow is a big day"), the author's intent is to imply importance. The intent behind other usages, like in "She is a big person", will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information.
  2. Assign relative measures of meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed, e.g., by means of a probabilistic context-free grammar (PCFG). The mathematical equation for such algorithms is presented in US Patent 9269353, where:
RMM is the relative measure of meaning;
token is any block of text, sentence, phrase or word;
N is the number of tokens being analyzed;
PMM is the probable measure of meaning based on a corpus;
d is the non-zero location of the token along the sequence of N tokens;
PF is the probability function specific to a language.

Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn of the 1990s. Nevertheless, approaches to developing cognitive models into technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., cognitive grammar, functional grammar, construction grammar, computational psycholinguistics and cognitive neuroscience (e.g., ACT-R), albeit with limited uptake in mainstream NLP (as measured by presence at major ACL conferences). More recently, ideas of cognitive NLP have been revived as an approach to achieving explainability, e.g., under the notion of "cognitive AI". Likewise, ideas of cognitive NLP are inherent to neural models of multimodal NLP (although rarely made explicit) and to developments in artificial intelligence, specifically tools and technologies using large language model approaches and new directions in artificial general intelligence based on the free energy principle by British neuroscientist and theoretician at University College London Karl J. Friston.
