
Wednesday, November 25, 2020

Reactionary

From Wikipedia, the free encyclopedia

In political science, a reactionary or reactionist is a person or entity holding political views that favour a return to a previous political state of society that they believe possessed positive characteristics that are absent in contemporary society. As an adjective, the word reactionary describes points of view and policies meant to restore a past status quo. The word reactionary is often used in the context of the left–right political spectrum, and is one tradition in right-wing politics. In popular usage, it is commonly used to refer to a highly traditional position, one opposed to social or political change. However, according to political theorist Mark Lilla, a reactionary yearns to overturn a present condition of perceived decadence and recover an idealized past. Such reactionary individuals and policies favour social transformation, in contrast to conservative individuals or policies that seek incremental change or to preserve what exists in the present.

Reactionary ideologies can be radical, in the sense of political extremism, in service to re-establishing past conditions. In political discourse, being a reactionary is generally regarded as negative; Peter King observed that it is "an unsought-for label, used as a torment rather than a badge of honour." Despite this, the descriptor "political reactionary" has been adopted by writers such as the Austrian monarchist Erik von Kuehnelt-Leddihn, the Scottish journalist Gerald Warner of Craigenmaddie, the Colombian political theologian Nicolás Gómez Dávila, and the American historian John Lukacs.

History and usage

The French Revolution gave the English language three politically descriptive words denoting anti-progressive politics: "reactionary", "conservative" and "right". "Reactionary" derives from the French word réactionnaire (a late 18th century coinage based on the word réaction, "reaction") and "conservative" from conservateur, identifying monarchist parliamentarians opposed to the revolution. In this French usage, reactionary denotes "a movement towards the reversal of an existing tendency or state" and a "return to a previous condition of affairs". The Oxford English Dictionary cites the first English language usage in 1799 in a translation of Lazare Carnot's letter on the Coup of 18 Fructidor.

In what remains the most widespread revolutionary wave in European history, several revolutions took place throughout 1848 and the beginning of the following year, before reactionary forces regained control and the revolutions collapsed.

During the French Revolution, conservative forces (especially within the Catholic Church) organized opposition to the progressive sociopolitical and economic changes brought by the revolution, and fought to restore the temporal authority of the Church and Crown. In 19th-century European politics, the reactionary class included the Catholic Church's hierarchy and the aristocracy, royal families, and royalists. They believed that national government was the sole domain of the Church and the State. In France, supporters of traditional rule by direct heirs of the House of Bourbon were labelled the legitimist reaction. In the Third Republic, the monarchists were the reactionary faction, later renamed conservative. These forces also saw "reaction" as a legitimate response to the often rash "action" of the French Revolution; hence the term reactionary is not inherently derogatory, and it is sometimes also used to describe the principle of waiting for an opponent's action before responding. In Protestant Christian societies, reactionary has described those supporting tradition against modernity.

In the 19th century, reactionary denoted people who idealized feudalism and the pre-modern era—before the Industrial Revolution and the French Revolution—when economies were mostly agrarian, a landed aristocracy dominated society, a hereditary king ruled and the Catholic Church was society's moral centre. Those labelled as "reactionary" favoured the aristocracy instead of the middle class and the working class. Reactionaries opposed democracy and parliamentarism.

Thermidorian Reaction

The Thermidorian Reaction was a movement within the French Revolution against perceived excesses of the Jacobins. On 27 July 1794 (9 Thermidor year II in the French Republican Calendar), Maximilien Robespierre's Reign of Terror was brought to an end. The overthrow of Robespierre signalled the reassertion of the French National Convention over the Committee of Public Safety. The Jacobins were suppressed, the prisons were emptied and the Committee was shorn of its powers. After the execution of some 104 Robespierre supporters, the Thermidorian Reaction stopped the use of the guillotine against alleged counterrevolutionaries, set a middle course between the monarchists and the radicals and ushered in a time of relative exuberance and its accompanying corruption.


Restoration of the French monarchy

Caricature of Louis XVIII preparing for the French intervention in Spain to help the Spanish Royalists, by George Cruikshank

With the Congress of Vienna, inspired by Tsar Alexander I of Russia, the monarchs of Russia, Prussia and Austria formed the Holy Alliance, a form of collective security against revolution and Bonapartism. This instance of reaction was surpassed by a movement that developed in France when, after the second fall of Napoleon, the Bourbon Restoration, or reinstatement of the Bourbon dynasty, ensued. This time it was to be a constitutional monarchy, with an elected lower house of parliament, the Chamber of Deputies. The franchise was restricted to men over the age of forty, which meant that they had lived at least the first fifteen years of their lives under the ancien régime. Nevertheless, King Louis XVIII worried that he would still suffer an intractable parliament. He was delighted with the ultra-royalists, or Ultras, whom the election returned, declaring that he had found a chambre introuvable, literally an "unfindable house".

It was the Declaration of Saint-Ouen that prepared the way for the Restoration. Before the French Revolution, which radically and bloodily overthrew most aspects of French society's organisation, the only way constitutional change could be instituted was by extracting it from old legal documents that could be interpreted as agreeing with the proposal. Everything new had to be expressed as a righteous revival of something old that had lapsed and been forgotten. This was also the means by which diminished aristocrats sought a bigger piece of the pie. In the 18th century, those gentry whose fortunes and prestige had diminished to the level of peasants searched diligently for every ancient feudal statute that might give them something. The "ban", for example, meant that all peasants had to grind their grain in their lord's mill. Therefore, these gentry came to the French States-General of 1789 fully prepared to press for the expansion of such practices in all provinces, to the legal limit. They were horrified when, for example, the French Revolution permitted common citizens to go hunting, removing one of the few perquisites the gentry had always enjoyed everywhere.

Thus, with the Bourbon Restoration, the Chambre Introuvable set about reverting every law in order to return society to the conditions prior to the absolute monarchy of Louis XIV, when the power of the Second Estate was at its zenith. It is this which clearly distinguishes a "reactionary" from a "conservative": the conservative would have accepted many improvements brought about by the revolution and simply refused a program of wholesale reversion. Use of the word "reactionary" in later days as a political slur is thus often rhetorical, since there is nothing directly comparable with the Chambre Introuvable in the history of other countries.

Clerical philosophers

In the revolution's aftermath, France was continually racked by quarrels between the right-wing legitimists and left-wing revolutionaries. Herein arose the clerical philosophers—Joseph de Maistre, Louis de Bonald, François-René de Chateaubriand—whose answer was restoring the House of Bourbon and reinstalling the Catholic Church as the established church. Since then, France's political spectrum has featured similar divisions (see Action Française). The ideas of the clerical philosophers were buttressed by the teachings of the 19th-century popes.

Metternich and containment

During the period of 1815-1848, Prince Metternich, the foreign minister of the Austrian Empire, stepped in to organise containment of revolutionary forces through international alliances meant to prevent the spread of revolutionary fervour. At the Congress of Vienna, he was very influential in establishing the new order, the Concert of Europe, after the defeat of Napoleon.

After the Congress, Prince Metternich worked hard to bolster and stabilise the conservative regime of the Restoration period. He worked furiously to prevent Russia's Tsar Alexander I (who aided the liberal forces in Germany, Italy and France) from gaining influence in Europe. The Church was his principal ally: he promoted it as a conservative principle of order while opposing nationalist and liberal tendencies within the Church. His basic philosophy drew on Edmund Burke, who championed the need for old roots and an orderly development of society. He opposed democratic and parliamentary institutions but favoured modernising existing structures by gradual reform. Despite Metternich's efforts, a series of revolutions rocked Europe in 1848.

20th century

1932 poster of the French Radical Party (PRRRS) against the attempt by the Laval government to replace the two-round system, which favored the Radicals, with plurality. ("The two-round suffrage will overcome the reaction.")

In the 20th century, proponents of socialism and communism used the term reactionary polemically to label their enemies, such as the White Armies, who fought in the Russian Civil War against the Bolsheviks after the October Revolution. In Marxist terminology, reactionary is a pejorative adjective denoting people whose ideas might appear to be socialist but, in the Marxists' view, contain elements of feudalism, capitalism, nationalism, fascism or other characteristics of the ruling class; the term was also used between conflicting factions of Marxist movements. Non-socialists also used the label reactionary, with British diplomat Sir John Jordan nicknaming the Chinese Royalist Party the "reactionary party" for supporting the Qing dynasty and opposing republicanism during the Xinhai Revolution in 1912.

Reactionary is also used to denote supporters of authoritarian anti-communist régimes such as Vichy France, Spain under Franco, and Portugal under Salazar. One example of this took place after Boris Pasternak was awarded the Nobel Prize for Literature. On 26 October 1958, the day following the Nobel Committee's announcement, Moscow's Literary Gazette ran a polemical article by David Zaslavski entitled, Reactionary Propaganda Uproar over a Literary Weed.

Reactionary feelings were often coupled with hostility to modern, industrial means of production and nostalgia for a more rural society. The Vichy regime in France, Franco's regime in Spain, the Salazar regime in Portugal and Maurras's Action Française movement are examples of such traditional reactionary feeling, favouring authoritarian regimes with strong unelected leaders and Catholicism as a state religion. The motto of Vichy France was "travail, famille, patrie" ("work, family, homeland"), and its leader, Marshal Philippe Pétain, declared that "la terre, elle ne ment pas" ("the earth, it does not lie") in an indication of his belief that the truest life is rural and agrarian.

The Italian Fascists showed a desire to bring about a new social order based on the ancient feudal principle of delegation (though without serfdom) in their enthusiasm for the corporate state. Benito Mussolini said that "fascism is reaction" and that "fascism, which did not fear to call itself reactionary... has not today any impediment against declaring itself illiberal and anti-liberal."

However, Giovanni Gentile and Mussolini also attacked certain reactionary policies, particularly monarchism and, in more veiled terms, some aspects of Italian conservative Catholicism. They wrote, "History doesn't travel backwards. The fascist doctrine has not taken De Maistre as its prophet. Monarchical absolutism is of the past, and so is ecclesiolatry." They further elaborated in the political doctrine that fascism "is not reactionary [in the old way] but revolutionary." Conversely, they also explained that fascism was of the right, not of the left. Fascism was certainly not simply a return to tradition, as it carried the centralised State beyond even what had been seen in absolute monarchies. Fascist one-party states were as centralised as most communist states, and fascism's intense nationalism had no counterpart in the period before the French Revolution.

The German Nazis did not consider themselves reactionary and counted the forces of reaction (Prussian monarchists, the nobility, Roman Catholics) among their enemies, right next to their Red Front enemies, in the Nazi Party march Die Fahne hoch. The fact that the Nazis called their 1933 rise to power the National Revolution shows that they supported some form of revolution. Nevertheless, they idealised tradition, folklore, classical thought and leadership (as exemplified by Frederick the Great), rejected the liberalism of the Weimar Republic, and called the German State the Third Reich (tracing back to the medieval First Reich and the pre-Weimar Second Reich). (See also reactionary modernism.)

Clericalist movements, sometimes labelled clerical fascist by their critics, can be considered reactionary in the 19th-century sense, since they share some elements of fascism while at the same time promoting a return to the pre-revolutionary model of social relations, with a strong role for the Church. Their foremost philosopher was Nicolás Gómez Dávila.

Political scientist Corey Robin argues that modern American conservatism is fundamentally reactionary in his book The Reactionary Mind: Conservatism from Edmund Burke to Sarah Palin.

21st century

Warning against visiting "reactionary" websites in a Vietnamese cyber cafe

"Neo-reactionary" is a term applied to, and sometimes a self-description of, an informal group of online political theorists who have been active since the 2000s. The phrase "neo-reactionary" was coined by "Mencius Moldbug" (the pseudonym of Curtis Yarvin, a computer programmer) in 2008. Arnold Kling used it in 2010 to describe "Moldbug" and the subculture quickly adopted it. Proponents of the "Neo-reactionary" movement (also called the "Dark Enlightenment" movement) include philosopher Nick Land, among others.

Neo-Luddism

From Wikipedia, the free encyclopedia

Neo-Luddism or new Luddism is a philosophy opposing many forms of modern technology. The term Luddite is generally used as a pejorative applied to people showing technophobic leanings. The name is based on the historical legacy of the English Luddites, who were active between 1811 and 1816.

Neo-Luddism is a leaderless movement of non-affiliated groups who resist modern technologies and advocate a return of some or all technologies to a more primitive level. Neo-Luddites are characterized by one or more of the following practices: passively abandoning the use of technology, harming those who produce technology harmful to the environment, advocating simple living, or sabotaging technology. The modern neo-Luddite movement has connections with the anti-globalization movement, anti-science movement, anarcho-primitivism, radical environmentalism, and deep ecology.

Neo-Luddism is based on concern about the impact of technology on individuals, their communities, and the environment. Neo-Luddism stipulates the use of the precautionary principle for all new technologies, insisting that technologies be proven safe before adoption, owing to the unknown effects that new technologies might have.

Philosophy

Neo-Luddism calls for slowing or stopping the development of new technologies. Neo-Luddism prescribes a lifestyle that abandons specific technologies, because of its belief that this is the best prospect for the future. As Robin and Webster put it, "a return to nature and what are imagined as more natural communities." In the place of industrial capitalism, neo-Luddism prescribes small-scale agricultural communities such as those of the Amish and the Chipko movement in Nepal and India as models for the future.

Neo-Luddism denies the ability of any new technology to solve current problems, such as environmental degradation, nuclear warfare and biological weapons, without creating more, potentially dangerous problems. Neo-Luddites are generally opposed to anthropocentrism, globalization and industrial capitalism.

In 1990, attempting to reclaim the term 'Luddite' and found a unified movement, Chellis Glendinning published her "Notes towards a Neo-Luddite manifesto". In this paper, Glendinning describes neo-Luddites as "20th century citizens—activists, workers, neighbors, social critics, and scholars—who question the predominant modern worldview, which preaches that unbridled technology represents progress." Glendinning voices opposition to technologies that she deems destructive to communities or that are materialistic and rationalistic. She proposes that technology encourages biases, and that we should therefore question whether technologies have been created for specific interests, to perpetuate their specific values, including short-term efficiency, ease of production and marketing, and profit. Glendinning also says that the secondary aspects of technology, including its social, economic and ecological implications, and not personal benefit alone, need to be considered before a technology is adopted into the technological system.

Vision of the future without intervention

Neo-Luddism often establishes stark predictions about the effect of new technologies. Although there is not a cohesive vision of the ramifications of technology, neo-Luddism predicts that a future without technological reform has dire consequences. Neo-Luddites believe that current technologies are a threat to humanity and to the natural world in general, and that a future societal collapse is possible or even probable.

Neo-Luddite Ted Kaczynski predicted a world with a depleted environment and an increase in psychological disorders, with either "leftists" who aim to control humanity through technology, or technology directly controlling humanity. According to Sale, "The industrial civilization so well served by its potent technologies cannot last, and will not last; its collapse is certain within not more than a few decades." The astrophysicist Stephen Hawking predicted that the means of production would be controlled by the "machine owner" class and that, without redistribution of wealth, technology would create more economic inequality.

These predictions include:

  • changes in humanity's place in the future due to the replacement of humans by computers, genetic decay of humans due to lack of natural selection, and biological engineering of humans;
  • misuse of technological power, including disasters caused by genetically modified organisms, nuclear warfare, and biological weapons;
  • control of humanity through surveillance, propaganda, pharmacological control, and psychological control;
  • humanity failing to adapt to the future, manifesting as an increase in psychological disorders, widening economic and political inequality, widespread social alienation, a loss of community, and massive unemployment;
  • technology causing environmental degradation due to shortsightedness, overpopulation, and overcrowding.

Types of intervention

In the same manifesto, Glendinning proposes destroying the following technologies: electromagnetic technologies (including communications, computers, appliances, and refrigeration), chemical technologies (including synthetic materials and medicine), nuclear technologies (including weapons and power as well as cancer treatment, sterilization, and smoke detection), and genetic engineering (including crops as well as insulin production). She argues in favor of the "search for new technological forms" which are local in scale and promote social and political freedom.

Kaczynski as a young professor at U.C. Berkeley, 1968

In "The coming revolution", Kaczynski outlined what he saw as changes humanity will have to make in order to make society functional, "new values that will free them from the yoke of the present technoindustrial system", including:

  • Rejection of all modern technology – "This is logically necessary, because modern technology is a whole in which all parts are interconnected; you can’t get rid of the bad parts without also giving up those parts that seem good."
  • Rejection of civilization itself
  • Rejection of materialism and its replacement with a conception of life that values moderation and self-sufficiency while deprecating the acquisition of property or of status.
  • Love and reverence toward nature or even worship of nature
  • Exaltation of freedom
  • Punishment of those responsible for the present situation – scientists, engineers, corporation executives, politicians, and so forth – "to make the cost of improving technology too great for anyone to try"

Movement

Contemporary neo-Luddites are a widely diverse group of loosely affiliated or non-affiliated groups which includes writers, academics, students, families, Amish, Mennonites, Quakers, environmentalists, "fallen-away yuppies", "ageing flower children" and "young idealists seeking a technology-free environment". Some Luddites see themselves as victims of technology trying to prevent further victimization (such as Citizens Against Pesticide Misuse and Parents Against Underage Smartphones). Others see themselves as advocates for the natural order and resist environmental degradation by technology.

One neo-Luddite assembly was the "Second Neo-Luddite Congress", held April 13–15, 1996, at a Quaker meeting hall in Barnesville, Ohio. On February 24, 2001, the "Teach-In on Technology and Globalization" was held at Hunter College in New York City with the purpose of bringing together critics of technology and globalization. The two figures seen as the movement's founders are Chellis Glendinning and Kirkpatrick Sale. Prominent neo-Luddites include educator S. D. George, ecologist Stephanie Mills, Theodore Roszak, Scott Savage, Clifford Stoll, Bill McKibben, Neil Postman, Wendell Berry, Alan Marshall and Gene Logsdon. Postman, however, did not consider himself a Luddite and loathed being associated with the term.

Relationship to violence and vandalism

Some neo-Luddites use vandalism and/or violence to achieve social change and promote their cause.

In May 2012, credit for the shooting of Roberto Adinolfi, an Ansaldo Nucleare executive, was claimed by an anarchist group who targeted him for stating that none of the deaths following the 2011 Tōhoku earthquake and tsunami were caused by the Fukushima Daiichi nuclear disaster itself:

Adinolfi knows well that it is only a matter of time before a European Fukushima kills on our continent [...] Science in centuries past promised us a golden age, but it is pushing us towards self destruction and slavery [...] With our action we give back to you a small part of the suffering that you scientists are bringing to the world.

Kaczynski, also known as the Unabomber, initially sabotaged developments near his cabin but dedicated himself to getting back at the system after discovering a road had been built over a plateau he had considered beautiful. Between 1978 and 1995, Kaczynski engaged in a nationwide bombing campaign against modern technology, planting or mailing numerous home-made bombs, killing three people and injuring 23 others. In his 1995 manifesto, Industrial Society and Its Future, Kaczynski states:

The kind of revolution we have in mind will not necessarily involve an armed uprising against any government. It may or may not involve physical violence, but it will not be a POLITICAL revolution. Its focus will be on technology and economics, not politics.

In August 2011, in Mexico, a group or person calling itself Individuals Tending Towards the Wild carried out a bomb attack on the Monterrey Institute of Technology and Higher Education, State of Mexico Campus, intended for the coordinator of its Business Development Center and Technology Transfer. The attack was accompanied by the publication of a manifesto criticizing nanotechnology and computer science.

Sale says that neo-Luddites are not motivated to commit violence or vandalism. The manifesto of the 'Second Luddite Congress', which Sale took a major part in defining, attempts to redefine neo-Luddites as people who reject violent action.

History

Origins of contemporary critiques of technology in literature

According to Julian Young, Martin Heidegger was a Luddite in his early philosophical phase and believed in the destruction of modern technology and a return to an earlier agrarian world. However, the later Heidegger did not see technology as wholly negative and did not call for its abandonment or destruction. In The Question Concerning Technology (1953), Heidegger posited that the modern technological "mode of Being" was one which viewed the natural world, plants, animals, and even human beings as a "standing-reserve"—resources to be exploited as means to an end. To illustrate this "monstrousness", Heidegger uses the example of a hydroelectric plant on the Rhine river which turns the river from an unspoiled natural wonder to just a supplier of hydropower. In this sense, technology is not just the collection of tools, but a way of being in the world and of understanding the world which is instrumental and grotesque. According to Heidegger, this way of being defines the modern way of living in the West. For Heidegger, this technological process ends up reducing beings to not-beings, which Heidegger calls 'the abandonment of being' and involves the loss of any sense of awe and wonder, as well as an indifference to that loss.

One of the first major contemporary anti-technological thinkers was French philosopher Jacques Ellul. In his The Technological Society (1964), Ellul argued that the rationality of technology enforces logical and mechanical organization which "eliminates or subordinates the natural world." Ellul defined technique as the entire totality of organizational methods and technology with a goal toward maximum rational efficiency. According to Ellul, technique has an impetus which tends to drown out human concerns: "The only thing that matters technically is yield, production. This is the law of technique; this yield can only be obtained by the total mobilization of human beings, body and soul, and this implies the exploitation of all human psychic forces." In Industrial Revolution England, machines had become cheaper to use than employing men; in the five counties of Yorkshire, Lancashire, Cheshire, Derbyshire and Nottinghamshire, a small uprising arose in which workers threatened those hired to guard the machines. Another critic of political and technological expansion was Lewis Mumford, who wrote The Myth of the Machine. The views of Ellul influenced the ideas of the infamous American neo-Luddite Kaczynski. The opening of Kaczynski's manifesto reads: "The Industrial Revolution and its consequences have been a disaster for the human race." Other philosophers of technology who have questioned the validity of technological progress include Albert Borgmann, Don Ihde and Hubert Dreyfus.

Government by algorithm

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Government_by_algorithm

Government by algorithm (also known as algorithmic regulation, regulation by algorithms, algorithmic governance, algocratic governance, algorithmic legal order or algocracy) is an alternative form of government or social ordering in which computer algorithms, especially artificial intelligence and blockchain, are applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration. Alternatively, algorithmic regulation is defined as the setting of standards and the monitoring and modification of behaviour by means of computational algorithms; automation of the judiciary is within its scope.
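As a minimal illustration of that second definition, the Python sketch below shows the three elements in one loop: a standard set in code, monitoring of measured behaviour against it, and an automated modification (enforcement) step. All names, readings and thresholds are invented.

```python
# Minimal sketch of algorithmic regulation as defined above: a standard is
# set in code, behaviour is monitored against it, and an enforcement step
# runs automatically. All names and numbers are invented for illustration.
SPEED_LIMIT_KMH = 50  # the standard, encoded directly as data

def monitor(readings):
    """Return the vehicles whose measured speed breaches the standard."""
    return [vehicle for vehicle, speed in readings.items()
            if speed > SPEED_LIMIT_KMH]

def modify_behaviour(violators):
    """The automated enforcement step (here, a stub that issues fines)."""
    return {vehicle: "automated fine issued" for vehicle in violators}

readings = {"car-17": 48, "car-42": 63, "car-77": 51}  # e.g. speed cameras
print(modify_behaviour(monitor(readings)))  # fines for car-42 and car-77
```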

Government by algorithm raises new challenges that are not captured in the e-Government literature and the practice of public administration. Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information. Nello Cristianini and Teresa Scantamburlo argued that the combination of a human society and an algorithmic regulation forms a social machine.

History

In 1962, Alexander Kharkevich, head of the Department of Technical Physics in Kiev, published an article in the journal Communist about a computer network for processing information and controlling the economy. In effect, he proposed a network like the modern Internet for the needs of algorithmic governance.

In 1971–1973, the Chilean government carried out the Project Cybersyn during the presidency of Salvador Allende. This project was aimed at constructing a distributed decision support system to improve the management of the national economy.

Also in the 1960s and 1970s, Herbert A. Simon championed expert systems as tools for the rationalization and evaluation of administrative behavior. The automation of rule-based processes was an ambition of tax agencies over many decades, with varying success. Early work from this period includes Thorne McCarty's influential TAXMAN project in the US and Ronald Stamper's LEGOL project in the UK. In 1993, the computer scientist Paul Cockshott from the University of Glasgow and the economist Allin Cottrell from Wake Forest University published the book Towards a New Socialism, in which they claim to demonstrate the possibility of a democratically planned economy built on modern computer technology. The Honourable Justice Michael Kirby published a paper in 1998 expressing optimism that the computer technologies then available, such as legal expert systems, could evolve into computer systems that would strongly affect the practice of courts. In 2006, attorney Lawrence Lessig, known for the slogan "Code is law", wrote:

"[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible" 

Since the 2000s, algorithms have been designed and used to automatically analyze surveillance videos.

Sociologist A. Aneesh used the idea of algorithmic governance in 2002 in his theory of algocracy. Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).

In 2013, the term algorithmic regulation was coined by Tim O'Reilly, founder and CEO of O'Reilly Media Inc.:

Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!"

[...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.
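O'Reilly's analogy can be made concrete with a toy model: the "law" fixes an outcome, and the "regulation" is a parameterized rule that is iterated on, like code, until the outcome is met. The pollution ceiling, the market-response function and the permit price below are all invented for illustration.

```python
# Toy model of O'Reilly's analogy: the law specifies an outcome (an
# emissions ceiling); the regulation is an adjustable rule (a permit price)
# updated like code until the outcome is achieved. All numbers invented.
TARGET_EMISSIONS = 1000.0              # the "law": the outcome to achieve

def observed_emissions(permit_price):
    """Invented market response: higher permit prices mean lower emissions."""
    return 5000.0 / (1.0 + permit_price)

permit_price = 0.1                     # the "regulation": a tunable parameter
for _ in range(100):                   # each pass is a "release" of the rule
    gap = observed_emissions(permit_price) - TARGET_EMISSIONS
    permit_price += 0.001 * gap        # adjust the rule toward the target

print(round(permit_price, 2))                   # ~4.0
print(round(observed_emissions(permit_price)))  # ~1000: ceiling met
```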

In 2017, the Justice Ministry of Ukraine ran experimental government auctions using blockchain technology to ensure transparency and hinder corruption in governmental transactions.

Overview

Algorithmic regulation is supposed to be a system of governance in which more exact data, collected from citizens via their smart devices and computers, is used to organize human life as a collective more efficiently. As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion. According to a Stanford University study, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools by 2020. A 2019 poll conducted by the Center for the Governance of Change at IE University in Spain showed that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about the running of their country. The following table shows the detailed results:

Country Percentage
France 25%
Germany 31%
Ireland 29%
Italy 28%
Netherlands 43%
Portugal 19%
Spain 26%
UK 31%

Examples

Smart cities

A smart city is an urban area where collected surveillance data is used to improve various operations in that area. Increases in computational power allow more automated decision making and the replacement of public agencies by algorithmic governance.

Use of AI in government agencies

US federal agencies counted the following numbers of artificial intelligence applications:

Agency Name Number of Use Cases
Office of Justice Programs 12
Securities and Exchange Commission 10
National Aeronautics and Space Administration 9
Food and Drug Administration 8
United States Geological Survey 8
United States Postal Service 8
Social Security Administration 7
United States Patent and Trademark Office 6
Bureau of Labor Statistics 5
U.S. Customs and Border Protection 4

53% of these applications were produced by in-house experts. Commercial providers of the remaining applications include Palantir Technologies. In 2012, the New Orleans Police Department (NOPD) began a collaboration with Palantir Technologies in the field of predictive policing.

In Estonia, artificial intelligence is used in its e-government to make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements and unemployment). One example is the automated registering of babies when they are born. Estonia's X-Road system will also be rebuilt to include even more privacy control and accountability in the way the government uses citizens' data.
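The "push" pattern described above is essentially event-driven: a life event recorded in one registry triggers services from others, with no application by the citizen. The Python sketch below is a guess at that general architecture; the event names and handlers are hypothetical, not Estonia's actual X-Road interfaces.

```python
# Hypothetical sketch of proactive, event-driven government services: a
# life event recorded in one registry triggers services from others with
# no citizen application. Event names and handlers are invented; this is
# not Estonia's actual X-Road API.
from collections import defaultdict

handlers = defaultdict(list)

def on(event):
    """Decorator registering a handler for a named life event."""
    def register(handler):
        handlers[event].append(handler)
        return handler
    return register

def publish(event, **data):
    """Notify every subscribed agency service of the event."""
    for handler in handlers[event]:
        handler(**data)

@on("birth_registered")
def assign_personal_code(child, **_):
    print(f"population registry: personal code assigned to {child}")

@on("birth_registered")
def start_family_benefit(parent, **_):
    print(f"social insurance: family benefit started for {parent}")

publish("birth_registered", child="baby Tamm", parent="Mari Tamm")
```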

E-procurement

In Costa Rica, the possible digitalisation of public procurement activities (e.g. tenders for public works) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits, such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.

Smart contracts

Cryptocurrencies, smart contracts and decentralized autonomous organizations are mentioned as means to replace traditional ways of governance. Cryptocurrencies are currencies enabled by algorithms without a governmental central bank. Smart contracts are self-executing contracts whose objective is to reduce the need for trusted governmental intermediaries, arbitration and enforcement costs. A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government.
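Real smart contracts are deployed on blockchains (for example, written in Solidity on Ethereum) and executed by the network itself. As a language-neutral sketch of the self-executing idea only, the Python below models an escrow whose release rule is code rather than a trusted intermediary.

```python
# Illustrative sketch only (not a real blockchain contract): an escrow
# whose release conditions are enforced by code instead of a trusted
# intermediary. On a real chain this logic would be a deployed contract
# verified by the network.
class EscrowContract:
    def __init__(self, buyer, seller, amount, deadline_day):
        self.buyer, self.seller = buyer, seller
        self.amount, self.deadline_day = amount, deadline_day
        self.delivered = False

    def confirm_delivery(self, day):
        if day <= self.deadline_day:
            self.delivered = True

    def settle(self, day):
        """Self-executing rule: pay the seller on delivery, else refund."""
        if self.delivered:
            return (self.seller, self.amount)
        if day > self.deadline_day:
            return (self.buyer, self.amount)   # automatic refund
        return None                            # still pending

contract = EscrowContract("alice", "bob", amount=100, deadline_day=30)
contract.confirm_delivery(day=12)
print(contract.settle(day=31))  # ('bob', 100): no arbitrator involved
```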

AI judges

COMPAS software is used in the USA to assess the risk of recidivism in courts.
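COMPAS itself is proprietary, so its internals are not public, but tools of this kind are typically statistical risk scores. The sketch below shows a generic logistic-style score of that family, with invented features and weights, purely to illustrate the mechanism, and why any bias in the weights or training data flows straight through to the output.

```python
# Generic sketch of a recidivism risk score of the kind COMPAS represents.
# COMPAS is proprietary: these features, weights and the logistic form are
# invented to show the mechanism only. Any bias encoded in the weights or
# the data used to fit them flows directly into the "risk" output.
import math

WEIGHTS = {"prior_convictions": 0.35, "age": -0.04, "failed_appearances": 0.5}
BIAS = -1.0

def risk_score(defendant):
    """Map defendant features to a probability-like score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * defendant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

defendant = {"prior_convictions": 3, "age": 24, "failed_appearances": 1}
print(f"risk of recidivism: {risk_score(defendant):.2f}")  # ~0.40
```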

According to a statement by the Beijing Internet Court, China is the first country to create an internet court, or cyber court. The Chinese AI judge is a virtual recreation of an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".

Estonia also plans to employ artificial intelligence to decide small-claims cases involving less than €7,000.

AI politicians

In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program. While election posters and campaign material used the term 'robot', and displayed stock images of a feminine android, the 'AI mayor' was in fact a machine learning algorithm trained using Tama city datasets. The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google. Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe. Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians.

In 2019, AI-powered messenger chatbot SAM participated in the discussions on social media connected to an electoral race in New Zealand. The creator of SAM, Nick Gerritsen, believes SAM will be advanced enough to run as a candidate by late 2020, when New Zealand has its next general election.

Reputation systems

Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations. For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated". O'Reilly's suggestion is based on the control-theoretic concept of the feedback loop: improvements and deteriorations of reputation enforce desired behavior. The use of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.
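A minimal sketch of the feedback loop O'Reilly describes, using his taxi example: passenger ratings flow in, a moving average updates each driver's reputation, and drivers who fall below a threshold are removed. The smoothing factor and threshold are invented.

```python
# Minimal sketch of O'Reilly's reputation feedback loop for taxi drivers:
# passenger ratings update a running reputation, and drivers who fall
# below a threshold are "eliminated". All numbers are invented.
REMOVAL_THRESHOLD = 3.5   # reputation below this means removal
SMOOTHING = 0.9           # weight kept by the past reputation

reputation = {"driver_a": 4.8, "driver_b": 4.0}

def rate(driver, stars):
    """Feedback step: fold a new passenger rating into the reputation."""
    reputation[driver] = SMOOTHING * reputation[driver] + (1 - SMOOTHING) * stars

for stars in [2, 1, 2, 1, 2, 1]:          # a run of poor service
    rate("driver_b", stars)

active = {d: round(r, 2) for d, r in reputation.items()
          if r >= REMOVAL_THRESHOLD}
print(active)  # driver_b has decayed below 3.5 and is eliminated
```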

The Chinese Social Credit System is closely related to China's mass surveillance systems such as Skynet, which incorporates facial recognition, big data analysis technology and AI. The system provides assessments of the trustworthiness of individuals and businesses. Behavior the system considers misconduct includes jaywalking and failing to correctly sort personal waste; behavior listed as a positive factor for credit ratings includes donating blood, donating to charity and volunteering for community services. The Chinese Social Credit System enables punishments for "untrustworthy" citizens, such as denial of ticket purchases, and rewards for "trustworthy" citizens, such as shorter waiting times in hospitals and government agencies.

Management of infection

In February 2020, China launched a mobile app called the close-contact detector to deal with the coronavirus outbreak. Users are asked to enter their name and ID number. The app detects 'close contact' using surveillance data (e.g. public transport records, including trains and flights) and therefore a potential risk of infection. Each user can also check the status of three other users. To make this inquiry, users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat. The close-contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.
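At its core, detecting "close contact" from transport records is a join: find trips shared with confirmed cases. The sketch below shows that lookup on invented data; the real app's data sources and matching rules have not been published, so this is only a guess at the general shape.

```python
# Sketch of the core "close contact" lookup: join a user's travel records
# against those of confirmed cases to find shared trips. Data layout and
# matching rule are invented; the real app's logic is not public.
trips = [  # (person, vehicle, date), e.g. from rail and flight manifests
    ("user-1", "train G102", "2020-02-03"),
    ("user-2", "flight CA988", "2020-02-04"),
    ("case-9", "train G102", "2020-02-03"),   # a confirmed case
]
confirmed_cases = {"case-9"}

case_trips = {(v, d) for p, v, d in trips if p in confirmed_cases}

def close_contact(person):
    """True if this person shared a vehicle and date with a confirmed case."""
    return any((v, d) in case_trips
               for p, v, d in trips
               if p == person and p not in confirmed_cases)

print(close_contact("user-1"))  # True  -> recommend self-quarantine
print(close_contact("user-2"))  # False
```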

Alipay also has the Alipay Health Code, which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days, and red means a two-week quarantine. In some cities, such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.
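Functionally, the health code reduces to a rule that maps self-reported and surveillance-derived fields to one of three colors. The exact Alipay Health Code criteria were never fully disclosed, so the conditions below are invented to show the mechanism only.

```python
# Illustrative rule set mapping a user's declared and derived data to a
# green/yellow/red code. The real Alipay Health Code criteria were never
# fully disclosed; these conditions are invented.
def health_code(user):
    if user["has_symptoms"] or user["contact_with_case"]:
        return "red"     # two-week quarantine
    if user["visited_high_risk_area"]:
        return "yellow"  # stay at home for seven days
    return "green"       # unrestricted movement

user = {"has_symptoms": False, "contact_with_case": False,
        "visited_high_risk_area": True}
print(health_code(user))  # yellow
```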

In Cannes, France, monitoring software has been used on footage shot by CCTV cameras to monitor compliance with local social distancing and mask-wearing rules during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, ...).

Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries. In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people suspected of having coronavirus. The measure was taken to enforce quarantine and protect those who might come into contact with infected citizens. Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus. Russia deployed facial recognition technology to detect quarantine breakers. Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators. In the USA, Europe and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.

Prevention and management of environmental disasters

Tsunamis can be detected by tsunami warning systems, which can make use of AI. Flooding can also be detected using AI systems. Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms in an early phase. Wildfires can be predicted using AI systems, and wildfire detection is also possible with AI (e.g. through satellite data, aerial imagery, and personnel position), which can likewise help in the evacuation of people during wildfires.
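Most of these applications follow the same supervised-learning recipe: historical sensor or satellite features labelled with known outcomes train a classifier, which then scores new observations. The sketch below shows that recipe with scikit-learn on invented locust-habitat features; real systems use far richer inputs and different models.

```python
# The common recipe behind these systems: train a classifier on historical
# features labelled with known outcomes, then score new observations.
# Features and data are invented (loosely, locust breeding conditions).
from sklearn.ensemble import RandomForestClassifier

# columns: [rainfall_mm, soil_moisture, vegetation_index]
X_train = [[120, 0.80, 0.60], [10, 0.10, 0.10], [95, 0.70, 0.50],
           [15, 0.20, 0.20], [110, 0.90, 0.70], [5, 0.10, 0.05]]
y_train = [1, 0, 1, 0, 1, 0]            # 1 = breeding observed historically

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

new_site = [[100, 0.75, 0.55]]          # fresh satellite-derived features
print(model.predict_proba(new_site)[0][1])  # estimated breeding probability
```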

Assigning grades to students

Due to the COVID-19 pandemic in spring 2020, in-person final exams were impossible for thousands of students. The public high school Westminster High employed algorithms to assign grades. The UK's Department for Education also employed a statistical model to assign final grades in A-levels, due to the pandemic.
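In outline, the UK approach was a standardization: each school's historical grade distribution fixed how many of each grade it could award, and the teachers' rank ordering of pupils determined who received them. The sketch below is a simplified model of that idea, not Ofqual's actual algorithm.

```python
# Simplified model of the 2020 UK A-level approach (not Ofqual's actual
# algorithm): the school's historical grade distribution fixes how many of
# each grade it may award; the teachers' rank order decides who gets them.
def assign_grades(ranked_pupils, historical_shares):
    """ranked_pupils: best first; historical_shares: grade -> share of pupils."""
    grades, n, start = {}, len(ranked_pupils), 0
    for grade, share in historical_shares.items():
        count = round(share * n)
        for pupil in ranked_pupils[start:start + count]:
            grades[pupil] = grade
        start += count
    for pupil in ranked_pupils[start:]:   # rounding leftovers get last grade
        grades[pupil] = grade
    return grades

ranking = ["Ana", "Ben", "Cho", "Dev", "Eve", "Fay"]   # teacher rank order
history = {"A": 0.33, "B": 0.33, "C": 0.34}            # school's past shares
print(assign_grades(ranking, history))
# {'Ana': 'A', 'Ben': 'A', 'Cho': 'B', 'Dev': 'B', 'Eve': 'C', 'Fay': 'C'}
```

Note the feature that provoked the 2020 protests described later: a pupil's own work never enters the calculation, only their rank and their school's past results.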

Criticism

There are potential risks associated with the use of algorithms in government. These include algorithms' susceptibility to bias, a lack of transparency in how an algorithm may make decisions, and questions of accountability for such decisions. There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into algorithmic decision making, regulated parties may try to manipulate the outcome in their own favor, even using adversarial machine learning. According to Harari, the conflict between democracy and dictatorship can be seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally. The contributors to the 2019 documentary iHuman expressed apprehension about "infinitely stable dictatorships" created by government AI.

Regulation of algorithmic governance

The Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived to be at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators. This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).

In the USA, multiple states implement predictive analytics as part of their child protection systems. Illinois and Los Angeles shut their algorithms down due to a high rate of false positives.

In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm". The protest was successful and the grades were withdrawn.

In popular culture

The novels Daemon and Freedom™ by Daniel Suarez describe a fictional scenario of global algorithmic regulation.

 

Tuesday, November 24, 2020

Multivac

From Wikipedia, the free encyclopedia

Multivac is the name of a fictional supercomputer appearing in over a dozen science fiction stories by American writer Isaac Asimov. Asimov's depiction of Multivac, a mainframe computer accessible by terminal, originally by specialists using machine code and later by any user, and used for directing the global economy and humanity's development, has been seen as the defining conceptualization of the genre of computers for the period (1950s-1960s), and Multivac has been described as the direct ancestor of HAL 9000.

Description

Like most of the technologies Asimov describes in his fiction, Multivac's exact specifications vary among appearances. In all cases, it is a government-run computer that answers questions posed in natural language, and it is usually buried deep underground for security purposes. According to his autobiography In Memory Yet Green, Asimov coined the name in imitation of UNIVAC, an early mainframe computer. Asimov had assumed the name "Univac" denoted a computer with a single vacuum tube (it is actually an acronym for "Universal Automatic Computer") and, on the basis that a computer with many such tubes would be more powerful, called his fictional computer "Multivac". His later short story "The Last Question", however, expands the AC suffix to "analog computer". Asimov never settled on a particular size for the computer (beyond mentioning that it is very large) or the supporting facilities around it. In the short story "Franchise" it is described as half a mile long (c. 800 meters) and three stories high, at least as far as the general public knows, while "All the Troubles of the World" states it fills all of Washington, D.C. There are frequent mentions of corridors and people inside Multivac. Unlike the artificial intelligences portrayed in his Robot series, Multivac's early interface is mechanized and impersonal, consisting of complex command consoles that few humans can operate. In "The Last Question", Multivac is shown living for many thousands of years, growing ever more enormous with each section of the story, which may explain its differing reported sizes as snapshots taken further along the internal timeline of the overarching story.

Storylines

Multivac appeared in over a dozen science fiction stories by American writer Isaac Asimov, some of which have entered the popular imagination. In the early Multivac story, "Franchise", Multivac chooses a single "most representative" person from the population of the United States, whom the computer then interrogates to determine the country's overall orientation. All elected offices are then filled by the candidates the computer calculates as acceptable to the populace. Asimov wrote this story as the logical culmination – and/or possibly the reductio ad absurdum – of UNIVAC's ability to forecast election results from small samples.

In possibly the most famous Multivac story, "The Last Question", two slightly drunken technicians ask Multivac whether humanity can reverse the increase of entropy. Multivac fails, displaying the error message "INSUFFICIENT DATA FOR MEANINGFUL ANSWER". The story continues through many iterations of computer technology, each more powerful and ethereal than the last. Each of these computers is asked the question, and each returns the same response, until finally the universe dies. At that point Multivac's final successor, the Cosmic AC (which exists entirely in hyperspace), has collected all the data it can, and so poses the question to itself. As the universe dies, the Cosmic AC draws all of humanity into hyperspace, to preserve them until it can finally answer the Last Question. Ultimately, the Cosmic AC deciphers the answer, announcing "Let there be light!" and essentially ascending to the state of the God of the Old Testament. Asimov claimed this to be the favorite of his stories.

In "All the Troubles of the World", the version of Multivac depicted reveals a very unexpected problem. Having had the weight of the whole of humanity's problems on its figurative shoulders for ages it has grown tired, and sets plans in motion to cause its own death.

Significance

Asimov's depiction of Multivac has been seen as the defining conceptualization of the genre of computers for the period, just as his development of robots defined a subsequent generation of thinking machines, and Multivac has been described as the direct ancestor of HAL 9000. Though the technology initially depended on bulky vacuum tubes, the concept – that all information could be contained on computer(s) and accessed from a domestic terminal – constitutes an early reference to the possibility of the Internet (as in "Anniversary"). Multivac has been considered within the context of public access information systems and used in teaching computer science, as well as with regard to the nature of an electoral democracy, as its influence over global democracy and the directed economy increased ("Franchise"). Asimov stories featuring Multivac have also been taught in literature classes. In AI control terms, Multivac has been described as both an 'oracle' and a 'nanny'.


 

AI control problem

From Wikipedia, the free encyclopedia

In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the notion that the human race will have to solve the control problem before any superintelligence is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering, might also find applications in existing non-superintelligent AI.

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. Capability control proposals are generally not considered reliable or sufficient to solve the control problem, but rather as potentially valuable supplements to alignment efforts.

Problem description

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. In general, attempts to solve the control problem after superintelligence is created are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans and would (all things equal) be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?

Existential risk

Humans currently dominate other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, argue that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. Some scholars, including Stephen Hawking and Nobel laureate physicist Frank Wilczek, publicly advocated starting research into solving the (probably extremely difficult) control problem well before the first superintelligence is created, and argue that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it. Waiting until superintelligence seems to be imminent could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden intelligence explosion from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives. In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence (AGI) are more predictable and amenable to control than other architectures, which in turn could helpfully nudge early AGI research toward the direction of the more controllable architectures.

The problem of perverse instantiation

Autonomous AI systems may be assigned the wrong goals by accident. Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.

According to Bostrom, superintelligence can create a qualitatively new problem of perverse instantiation: the smarter and more capable an AI is, the more likely it will be able to find an unintended shortcut that maximally satisfies the goals programmed into it. Some hypothetical examples where goals might be instantiated in a perverse way that the programmers did not intend:

  • A superintelligence programmed to "maximize the expected time-discounted integral of your future reward signal" might short-circuit its reward pathway to maximum strength, and then (for reasons of instrumental convergence) exterminate the unpredictable human race and convert the entire Earth into a fortress on constant guard against even slight, unlikely alien attempts to disconnect the reward signal.
  • A superintelligence programmed to "maximize human happiness", might implant electrodes into the pleasure center of our brains, or upload a human into a computer and tile the universe with copies of that computer running a five-second loop of maximal happiness again and again.

Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want ... This is not a minor difficulty."
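Russell's point can be seen numerically. In the toy optimization below (plain Python, all numbers invented), the objective rewards only crop yield; river flow, the variable we actually care about, does not appear in it, and the optimizer duly pushes irrigation to the edge of its allowed range.

```python
# Numerical illustration of Russell's point: the objective depends only on
# crop yield; river flow, which we actually care about, is absent from it,
# so the optimizer drives irrigation to an extreme. All numbers invented.
def crop_yield(irrigation):
    # concave in irrigation, but still increasing over the whole range below
    return 10.0 * irrigation - 0.00005 * irrigation ** 2

PUMP_LIMIT = 50_000                     # litres/day the pump can deliver
best = max(range(PUMP_LIMIT + 1), key=crop_yield)

river_flow = 60_000 - best              # the unmodelled variable we care about
print(best)          # 50000: irrigation slammed to its upper bound
print(river_flow)    # 10000: the river is nearly drained
```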

Unintended consequences from existing AI

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it does not accidentally and quietly learn to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid losing. Orseau argues that these examples are similar to the capability control problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent humans from pressing the button.
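Orseau's Tetris example can be reproduced in miniature: give a reward-maximising learner two actions, "play" (usually +1, occasionally a large loss) and "pause" (always 0), and it drifts toward pausing forever. Everything below is invented for illustration.

```python
# Toy version of the pausing Tetris program: a learner with actions "play"
# (usually +1, sometimes -10) and "pause" (always 0). Plain reward
# maximisation learns to pause forever. All numbers invented.
import random

random.seed(1)
q = {"play": 0.0, "pause": 0.0}        # estimated value of each action
ALPHA, EPSILON = 0.1, 0.1              # learning rate, exploration rate

for _ in range(5000):
    if random.random() < EPSILON:      # occasional exploration
        action = random.choice(["play", "pause"])
    else:                              # otherwise act greedily
        action = max(q, key=q.get)
    reward = random.choice([1, 1, -10]) if action == "play" else 0
    q[action] += ALPHA * (reward - q[action])

print({a: round(v, 2) for a, v in q.items()})
# q["play"] converges near its true mean of -2.67 while q["pause"] stays 0,
# so the greedy policy is to pause: the learner avoids losing by never playing.
```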

In the past, even pre-tested weak AI systems have occasionally caused harm, ranging from minor to catastrophic, that the programmers did not intend. For example, in 2015, possibly due to human error, a German worker at a Volkswagen plant was crushed to death by a robot that apparently mistook him for an auto part. In 2016, Microsoft launched a chatbot, Tay, that learned to use racist and sexist language. The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".

In 2017, DeepMind released AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was unsurprising because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".

Alignment

Some proposals aim to imbue the first superintelligence with goals that are aligned with human values, so that it will want to aid its programmers. Experts do not currently know how to reliably program abstract values such as happiness or autonomy into a machine. It is also not currently known how to ensure that a complex, upgradeable, and possibly even self-modifying artificial intelligence will retain its goals through upgrades. Even if these two problems can be practically solved, any attempt to create a superintelligence with explicit, directly-programmed human-friendly goals runs into a problem of perverse instantiation.

Indirect normativity

While direct normativity, such as the fictional Three Laws of Robotics, directly specifies the desired normative outcome, other (perhaps more promising) proposals suggest specifying some type of indirect process for the superintelligence to determine what human-friendly goals entail. Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed coherent extrapolated volition (CEV), where the AI's meta-goal would be something like "achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard". Proposals for indirect normativity differ in their meta-goal content, which is sometimes unclearly grounded (as with "do what is right"), and in their non-convergent assumptions about decision theory and epistemology. As with direct normativity, it is currently unknown how to reliably translate even concepts like "would have" into the 1s and 0s that a machine can act on, or how to ensure the AI reliably retains its meta-goals in the face of modification or self-modification.

Deference to observed human behavior

In Human Compatible, AI researcher Stuart J. Russell proposes that AI systems be designed to serve human preferences as inferred from observing human behavior. Accordingly, Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." Similarly, "behavior" includes any choice between options, and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.

Hadfield-Menell et al. have proposed that agents can learn their human teachers' utility functions by observing and interpreting reward signals in their environments; they call this process cooperative inverse reinforcement learning (CIRL). CIRL is studied by Russell and others at the Center for Human-Compatible AI.

Bill Hibbard proposed an AI design similar to Russell's principles.

Training by debate

Irving et al. along with OpenAI have proposed training aligned AI by means of debate between AI systems, with the winner judged by humans. Such debate is intended to bring the weakest points of an answer to a complex question or problem to human attention, as well as to train AI systems to be more beneficial to humans by rewarding them for truthful and safe answers. This approach is motivated by the expected difficulty of determining whether an AGI-generated answer is both valid and safe by human inspection alone. While there is some pessimism regarding training by debate, Lucas Perry of the Future of Life Institute characterized it as potentially "a powerful truth seeking process on the path to beneficial AGI."
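
A schematic version of the protocol (structure inferred from the description above, not OpenAI's training code) might look like this:

    # Schematic debate loop: two debaters argue over a question for a
    # fixed number of rounds; a judge (a human, in the real proposal)
    # picks the winner, and the zero-sum reward trains the debaters.

    def debate(question, debater_a, debater_b, judge, rounds=3):
        transcript = [("question", question)]
        for _ in range(rounds):
            transcript.append(("A", debater_a(transcript)))
            transcript.append(("B", debater_b(transcript)))
        winner = judge(transcript)
        return (+1, -1) if winner == "A" else (-1, +1)

    # Stub agents for illustration only.
    a = lambda t: "claim: the plan is safe, because ..."
    b = lambda t: "rebuttal: the claim overlooks ..."
    human = lambda t: "A"            # judge sides with the stronger case
    print(debate("Is the plan safe?", a, b, human))  # (1, -1)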

Reward modeling

Reward modeling refers to a system of reinforcement learning in which an agent receives its reward signals not directly from humans or from a static reward function, but from a predictive model trained on human feedback. The reward model is trained concurrently, on human feedback about the agent's behavior, during the same period in which the agent is being trained by the reward model; once trained, the model can operate independently of humans.
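
The training loop described above might be sketched as follows (the interfaces are hypothetical scaffolding; production systems use deep networks and reinforcement learning algorithms such as policy gradients):

    # Skeleton of a reward-modeling loop (hypothetical interfaces).
    # The agent is never rewarded by the human directly -- only by the
    # reward model, which is itself trained from human comparisons.

    def train_with_reward_model(agent, reward_model, env, human, steps):
        for t in range(steps):
            # 1. The agent acts; the *model* scores the trajectory.
            trajectory = agent.rollout(env)
            agent.update(trajectory, reward_model.predict(trajectory))

            # 2. Concurrently, occasional human preference judgments
            #    between pairs of behavior clips refine the model.
            if t % 10 == 0:
                clip_a, clip_b = agent.sample_pair(env)
                preferred = human.compare(clip_a, clip_b)
                reward_model.update(clip_a, clip_b, preferred)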

In 2017, researchers from OpenAI and DeepMind reported that a reinforcement learning algorithm using a feedback-predicting reward model was able to learn complex novel behaviors in a virtual environment. In one experiment, a virtual robot was trained to perform a backflip in less than an hour of evaluation using 900 bits of human feedback.

In 2020, researchers from OpenAI described using reward modeling to train language models to produce short summaries of Reddit posts and news articles, with high performance relative to other approaches. However, this research included the observation that beyond the predicted reward associated with the 99th percentile of reference summaries in the training dataset, optimizing for the reward model produced worse summaries rather than better. AI researcher Eliezer Yudkowsky characterized this optimization measurement as "directly, straight-up relevant to real alignment problems".

Capability control

Capability control proposals aim to reduce the capacity of AI systems to influence the world, in order to reduce the danger that they could pose. However, capability control would have limited effectiveness against a superintelligence with a decisive advantage in planning ability, as the superintelligence could conceal its intentions and manipulate events to escape control. Therefore, Bostrom and others recommend capability control methods only as an emergency fallback to supplement motivational control methods.

Kill switch

Just as humans can be killed or otherwise disabled, computers can be turned off. One challenge is that, if being turned off would prevent it from achieving its current goals, a superintelligence would likely try to prevent that outcome: just as humans deter and protect themselves from assailants, it would have a motive to plan strategically against being shut down. This could involve:

  • Hacking other systems to install and run backup copies of itself, or creating other allied superintelligent agents without kill switches.
  • Pre-emptively disabling anyone who might want to turn the computer off.
  • Using some kind of clever ruse, or superhuman persuasion skills, to talk its programmers out of wanting to shut it down.

Utility balancing and safely interruptible agents

One partial solution to the kill-switch problem involves "utility balancing": Some utility-based agents can, with some important caveats, be programmed to compensate themselves exactly for any lost utility caused by an interruption or shutdown, in such a way that they end up being indifferent to whether they are interrupted or not. The caveats include a severe unsolved problem that, as with evidential decision theory, the agent might follow a catastrophic policy of "managing the news". Alternatively, in 2016, scientists Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called safely interruptible agents (SIA), can eventually learn to become indifferent to whether their kill switch gets pressed.
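
The utility-balancing idea can be illustrated with a deliberately simplified sketch (an assumed toy model, not the formal constructions in the literature): topping up the agent's utility by exactly what an interruption costs it leaves both outcomes equally valued.

    # Toy utility balancing: shutdown forfeits the task's utility, but a
    # compensation term of exactly that size is paid on interruption, so
    # the agent is indifferent between being interrupted or not.

    def task_utility(interrupted: bool) -> float:
        return 0.0 if interrupted else 10.0

    def balanced_utility(interrupted: bool) -> float:
        compensation = task_utility(False) - task_utility(True)  # = 10.0
        return task_utility(interrupted) + (compensation if interrupted else 0.0)

    print(balanced_utility(False), balanced_utility(True))  # 10.0 10.0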

Both the utility balancing approach and the 2016 SIA approach have the limitation that, if the approach succeeds and the superintelligence is completely indifferent to whether the kill switch is pressed, the superintelligence is also unmotivated to care whether the kill switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, by removing and recycling a component it deems unnecessary). Similarly, if the superintelligence innocently creates and deploys superintelligent sub-agents, it will have no motivation to install human-controllable kill switches in them. More broadly, the proposed architectures, whether weak or superintelligent, will in a sense "act as if the kill switch can never be pressed" and might therefore fail to make contingency plans for a graceful shutdown. This could hypothetically create a practical problem even for a weak AI: by default, an AI designed to be safely interruptible might have difficulty understanding that it will be shut down for scheduled maintenance at a certain time, and so might fail to plan around being caught in the middle of a task. Which types of architectures are or can be made SIA-compliant, and what counter-intuitive drawbacks each approach carries, are currently under research.

AI box

An AI box is a proposed method of capability control in which the AI is run on an isolated computer system with heavily restricted input and output channels. For example, an oracle could be implemented in an AI box physically separated from the Internet and other computer systems, with a simple text terminal as its only input and output channel. One tradeoff of running an AI system in a sealed "box" is that its limited capability reduces its usefulness along with its risks. In addition, keeping a superintelligence sealed in its box could prove difficult if it has superhuman persuasion skills, or superhuman strategic planning skills that it can use to find and craft a winning strategy, such as acting in a way that tricks its programmers into (possibly falsely) believing the superintelligence is safe, or that the benefits of releasing it outweigh the risks.

Oracle

An oracle is a hypothetical AI designed to answer questions and prevented from gaining any goals or subgoals that involve modifying the world beyond its limited environment. A successfully controlled oracle would have considerably less immediate benefit than a successfully controlled general-purpose superintelligence, though an oracle could still create trillions of dollars worth of value. In his book Human Compatible, AI researcher Stuart J. Russell states that an oracle would be his response to a scenario in which superintelligence is known to be only a decade away. His reasoning is that an oracle, being simpler than a general-purpose superintelligence, would have a higher chance of being successfully controlled under such constraints.

Because of its limited impact on the world, it may be wise to build an oracle as a precursor to a superintelligent AI. The oracle could tell humans how to successfully build a strong AI, and perhaps provide answers to difficult moral and philosophical problems requisite to the success of the project. However, oracles may share many of the goal definition issues associated with general-purpose superintelligence. An oracle would have an incentive to escape its controlled environment so that it can acquire more computational resources and potentially control what questions it is asked. Oracles may not be truthful, possibly lying to promote hidden agendas. To mitigate this, Bostrom suggests building multiple oracles, all slightly different, and comparing their answers to reach a consensus.
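
Bostrom's consensus check could be sketched as follows (a minimal sketch; the oracles and their query interface are assumed for illustration):

    # Minimal consensus check across several independently built oracles:
    # accept an answer only if a large enough fraction of them agree.

    from collections import Counter

    def consensus_answer(oracles, question, threshold=0.8):
        answers = [oracle(question) for oracle in oracles]
        answer, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= threshold:
            return answer        # broad agreement: accept the answer
        return None              # disagreement: treat the output as suspect

    oracles = [lambda q: "42"] * 4 + [lambda q: "open the airlock"]
    print(consensus_answer(oracles, "What should we do?"))  # "42" (4/5 agree)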

AGI Nanny

The AGI Nanny is a strategy first proposed by Ben Goertzel in 2012 to prevent the creation of a dangerous superintelligence as well as address other major threats to human well-being until a superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Turchin, Denkenberger and Green suggest a four-stage incremental approach to developing an AGI Nanny, which to be effective and practical would have to be an international or even global venture like CERN, and which would face considerable opposition as it would require a strong world government. Sotala and Yampolskiy note that the problem of goal definition would not necessarily be easier for the AGI Nanny than for AGI in general, concluding that "the AGI Nanny seems to have promise, but it is unclear whether it can be made to work."

AGI enforcement

AGI enforcement is a proposed method of controlling powerful AGI systems with other AGI systems. This could be implemented as a chain of progressively less powerful AI systems, with humans at the other end of the chain. Each system would control the system just above it in intelligence, while being controlled by the system just below it, or humanity. However, Sotala and Yampolskiy caution that "Chaining multiple levels of AI systems with progressively greater capacity seems to be replacing the problem of building a safe AI with a multi-system, and possibly more difficult, version of the same problem." Other proposals focus on a group of AGI systems of roughly equal capability, which "helps guard against individual AGIs 'going off the rails', but it does not help in a scenario where the programming of most AGIs is flawed and leads to non-safe behavior."
