Friday, April 12, 2024

Self-replicating machine

From Wikipedia, the free encyclopedia
A simple form of machine self-replication

A self-replicating machine is a type of autonomous robot capable of reproducing itself using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and, in more recent times, by K. Eric Drexler in his book on nanotechnology, Engines of Creation (which coined the term clanking replicator for such machines), and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines, which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann's self-reproducing automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded Watson and Crick's discovery of the structure of the DNA molecule and of how it is separately translated and replicated in the cell.

A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. The concept, first proposed by von Neumann no later than the 1940s, has attracted a range of different approaches involving various types of technology. Certain idiosyncratic terms occasionally appear in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific because it also refers to a completely unrelated computer architecture that von Neumann proposed, so its use is discouraged where accuracy is important. Von Neumann himself used the term universal constructor to describe such self-replicating machines.

Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here.

History

The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".

In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch would be rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene, who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than the physical implementation of such a system, however. During the 1950s, several increasingly simple mechanical systems capable of self-reproduction were suggested, notably by Lionel Penrose.

Von Neumann's kinematic model

A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that directed it to retrieve parts from this "sea" using a manipulator, assemble them into a duplicate of itself, and then copy the contents of its memory tape into the duplicate's empty tape. The machine was envisioned as consisting of as few as eight different types of components: four logic elements that send and receive stimuli, and four mechanical elements used to provide a structural skeleton and mobility. Although the model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical rigor. He went on instead to develop an even more abstract self-replicator based on cellular automata. His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.
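
The logic of this two-step scheme can be sketched in a few lines of Python (a hypothetical illustration; the function and part names are invented here, and von Neumann's actual model was far more elaborate). The key point is that the description tape is used twice: first interpreted to build the offspring, then copied uninterpreted into it.

    # Hypothetical sketch of von Neumann's kinematic replication scheme.
    def construct(tape):
        """Constructor: interpret the description tape to assemble a machine."""
        return {"parts": list(tape), "tape": None}

    def replicate(tape):
        """Build an offspring from the tape, then copy the tape into it."""
        offspring = construct(tape)      # step 1: tape used as instructions
        offspring["tape"] = list(tape)   # step 2: tape copied as uninterpreted data
        return offspring

    parent_tape = ["logic"] * 4 + ["girder", "joint", "motor", "manipulator"]
    child = replicate(parent_tape)
    assert child["tape"] == parent_tape  # the description is inherited by the offspring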

Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically, akin to biological organisms under natural selection. He asked what threshold of complexity must be crossed for machines to be able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. As noted above, his design separates the inherited description from the machinery that interprets it, an insight that preceded the discovery of the structure of the DNA molecule and of how it is separately translated and replicated in the cell.

Moore's artificial living plants

In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw their energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines.

Dyson's replicating systems

The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken."

Advanced Automation for Space Missions

An artist's conception of a "self-growing" robotic lunar factory

In 1980, inspired by a 1979 "New Directions Workshop" held at Woods Hole, NASA conducted a joint summer study with the ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories that could develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing its productive capacity, and the design could be modified to build self-replicating probes to explore the galaxy.

The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by their parent.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.

A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.

A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."

A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining.

Much of the design study was concerned with a simple, flexible chemical system for processing the ores and with the differences between the ratios of elements needed by the replicator and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, which is needed to process regolith for aluminium but is very rare in lunar regolith.

Lackner-Wendt Auxon replicators

In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal did not include a complete analysis of the system's material requirements, but it described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O and H) using a high-temperature carbothermic process. The proposal was popularized in Discover magazine, which featured solar-powered desalination equipment used to irrigate the desert in which the system was based. They named their machines "Auxons", from the Greek word auxein, meaning "to grow".

Recent work

NIAC studies on self-replicating systems

In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003, awarding four Phase I grants.

Bootstrapping self-replicating factories in space

In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to starting self-replicating factories in space. They developed this concept on the basis of the In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient and then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested that the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief Technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that nothing needs to be launched from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that a fully self-replicating industry could be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.

New York University artificial DNA tile motifs

In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.

Self-replication of magnetic polymers

In 2001, Jarle Breivik at the University of Oslo created a system of magnetic building blocks which, in response to temperature fluctuations, spontaneously form self-replicating polymers.

Self-replication of neural circuits

In 1968, Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding to it probabilities, Bertrand du Castel published in 2015 a self-replicating stochastic grammar and presented a mapping of that grammar to neural networks, thereby presenting a model for a self-replicating neural circuit.
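
The self-referential flavor of these results can be conveyed by a quine, a program whose output is its own source code, the programming-language analogue of a grammar that derives its own description. The short Python example below is a standard textbook illustration and is not drawn from Harris, Wirth, or du Castel.

    # Taken alone, the two lines below print themselves exactly when run.
    s = 's = %r\nprint(s %% s)'
    print(s % s)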

Harvard Wyss Institute

On November 29, 2021, a team at the Harvard Wyss Institute reported building the first living robots that can reproduce.

Self-replicating spacecraft

The idea of an automated spacecraft capable of constructing copies of itself was first proposed in the scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen and the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes, each with a single seed factory on board.

Prospects for implementation

As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery come close to those of human labour and the local manufacture of spare parts and other components becomes more economical than transporting them from elsewhere. As Samuel Butler pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal of all legislative consideration of the regulation of such development, future development efforts may be limited to systems which lack either control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).

An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction, and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.

Other sources

  • A number of patents have been granted for self-replicating machine concepts: U.S. patent 5,659,477, "Self reproducing fundamental fabricating machines (F-Units)", inventor Charles M. Collins (Burke, Va.), August 1997; U.S. patent 5,764,518, "Self reproducing fundamental fabricating machine system", inventor Charles M. Collins (Burke, Va.), June 1998; and PCT patent WO 96/20453, "Method and system for self-replicating manufacturing stations", inventors Ralph C. Merkle (Sunnyvale, Calif.), Eric G. Parker (Wylie, Tex.) and George D. Skidmore (Plano, Tex.), January 2003.
  • Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
  • In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
  • In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space.

Commons-based peer production

From Wikipedia, the free encyclopedia

Commons-based peer production (CBPP) is a term coined by Harvard Law School professor Yochai Benkler. It describes a model of socio-economic production in which large numbers of people work cooperatively, usually over the Internet. Commons-based projects generally have less rigid hierarchical structures than those under more traditional business models.

One of the major characteristics of commons-based peer production is its non-profit scope. Often, but not always, commons-based projects are designed without a need for financial compensation for contributors. For example, freely sharing STL design files for objects on the internet enables anyone with a 3-D printer to digitally replicate the object, saving the prosumer significant money.

Synonymous terms for this process include consumer co-production and collaborative media production.

Overview

The history of commons-based peer production communities (by the P2Pvalue project)

Yochai Benkler first introduced the term in "Coase's Penguin, or Linux and the Nature of the Firm", circulated as a pre-print in 2001 and published in the Yale Law Journal in 2002. The paper's title refers to the Linux mascot and to Ronald Coase, who originated the transaction costs theory of the firm that provides the methodological template for its analysis of peer production. The paper defines the concept as "decentralized information gathering and exchange" and credits Eben Moglen as the scholar who first identified the phenomenon without naming it.

Yochai Benkler contrasts commons-based peer production with firm production, in which tasks are delegated based on a central decision-making process, and market-based production, in which allocating different prices to different tasks serves as an incentive to anyone interested in performing a task.

In his book The Wealth of Networks (2006), Yochai Benkler significantly expands on his definition of commons-based peer production. According to Benkler, what distinguishes commons-based production is that it doesn't rely upon or propagate proprietary knowledge: "The inputs and outputs of the process are shared, freely or conditionally, in an institutional form that leaves them equally available for all to use as they choose at their individual discretion." To ensure that the knowledge generated is available for free use, commons-based projects are often shared under an open license.

Not all commons-based production necessarily qualifies as commons-based peer production. According to Benkler, peer production is defined not only by the openness of its outputs, but also by a decentralized, participant-driven way of working.

Peer production enterprises have two primary advantages over traditional hierarchical approaches to production:

  1. Information gain: Peer production allows individuals to self-assign tasks that suit their own skills, expertise, and interests. Contributors can generate dynamic content that reflects the individual skills and the "variability of human creativity."
  2. Allocation gain: The great variability of human and information resources leads to substantial increasing returns to scale in the number of people and resources involved, and in the projects that may be accomplished without the need for a contract or other arrangement permitting the use of a resource for a project.

In Wikinomics, Don Tapscott and Anthony D. Williams suggest an incentive mechanism behind commons-based peer production. "People participate in peer production communities," they write, "for a wide range of intrinsic and self-interested reasons... basically, people who participate in peer production communities love it. They feel passionate about their particular area of expertise and revel in creating something new or better."

Aaron Krowne offers another definition:

Commons-based peer production refers to any coordinated, (chiefly) internet-based effort whereby volunteers contribute project components, and there exists some process to combine them to produce a unified intellectual work. CBPP covers many different types of intellectual output, from software to libraries of quantitative data to human-readable documents (manuals, books, encyclopedias, reviews, blogs, periodicals, and more).

Principles

First, the potential goals of peer production must be modular. In other words, objectives must be divisible into components, or modules, each of which can be independently produced. That allows participants to work asynchronously, without having to wait for each other's contributions or coordinate with each other in person.

Second, the granularity of the modules is essential. Granularity refers to the degree to which objects are broken down into smaller pieces (module size). Different levels of granularity allow people with different levels of motivation to work together by contributing small- or large-grained modules, consistent with their level of interest in the project.

Third, a successful peer-production enterprise must have low-cost integration—the mechanism by which the modules are integrated into a whole end product. Thus, integration must include both quality controls over the modules and a mechanism for integrating the contributions into the finished product at relatively low cost.

Participation

Participation in commons-based peer production is often voluntary and not necessarily associated with making a profit. Thus, the motivation behind this phenomenon goes far beyond traditional capitalist theories, which picture individuals as self-interested and rational agents, a portrayal also called homo economicus.

However, it can be explained through alternative theories such as behavioral economics. The psychologist Dan Ariely in his work Predictably Irrational explains that social norms shape people's decisions as much as market norms do. Therefore, individuals tend to be willing to create value without being paid, because of these social constructs. He draws an example from a Thanksgiving dinner: offering to pay would likely offend the family member who prepared the dinner, as they were motivated by the pleasure of treating family members.

Similarly, commons-based projects, as claimed by Yochai Benkler, are the result of individuals acting "out of social and psychological motivations to do something interesting". He goes on to describe a wide range of reasons, from pleasure and socially and psychologically rewarding experiences to the economic calculation of possible monetary rewards (not necessarily from the project itself).

On the other hand, the need for collaboration and interaction lies at the very core of human nature and is an essential feature for survival. Enhanced by digital technologies that allow easier and faster collaboration than was possible before, this need has given rise to a new social, cultural and economic trend named collaborative society. This theory outlines further reasons for individuals to participate in peer production, such as collaborating with strangers, building or integrating into a community, or contributing to a general good.

Examples

Additional examples of commons-based peer production communities (by the P2Pvalue project)
One day living with commons-based peer production communities (by the P2Pvalue project)

Examples of projects using commons-based peer production include free and open-source software such as Linux, and collaborative reference works such as Wikipedia.

Outgrowths

Several outgrowths have been:

  • Customization/Specialization: With free and open-source software, small groups have the capability to customize a large project according to specific needs. With the rise of low-cost 3-D printing and other digital manufacturing techniques, this is now also becoming true of open source hardware.
  • Longevity: Once code is released under a copyleft free software license it is almost impossible to make it unavailable to the public.
  • Cross-fertilization: Experts in a field can work on more than one project with no legal hassles.
  • Technology Revisions: A core technology gives rise to new implementations of existing projects.
  • Technology Clustering: Groups of products tend to cluster around a core set of technology and integrate with one another.

Related concepts

Interrelated concepts to commons-based peer production are the processes of peer governance and peer property. Peer governance is a new, bottom-up mode of participative decision-making that is being experimented with in peer projects such as Wikipedia and FLOSS; peer governance is thus the way that peer production, the process in which common value is produced, is managed. Peer property indicates the innovative nature of legal forms such as the General Public License and the Creative Commons licenses. Whereas traditional forms of property are exclusionary ("if it is mine, it is not yours"), peer property forms are inclusionary: what is produced is from all of us, i.e. also for you, provided you respect the basic rules laid out in the license, such as the openness of the source code.

The ease of entering and leaving an organization is a feature of adhocracies.

The principle of commons-based peer production is similar to collective invention, a model of open innovation in economics coined by Robert Allen.

Also related: Open-source economics and Commercial use of copyleft works.

Criticism

Some believe that the commons-based peer production (CBPP) vision, while powerful and groundbreaking, needs to be strengthened at its root because of some allegedly wrong assumptions concerning free and open-source software (FOSS).

The CBPP literature regularly and explicitly quotes FOSS products as examples of artifacts "emerging" by virtue of mere cooperation, with no need for supervising leadership (without "market signals or managerial commands", in Benkler's words).

It can be argued, however, that in the development of any less than trivial piece of software, irrespective of whether it be FOSS or proprietary, a subset of the (many) participants always play—explicitly and deliberately—the role of leading system and subsystem designers, determining architecture and functionality, while most of the people work “underneath” them in a logical, functional sense.

At the micro level, Bauwens and Pantazis are of the view that CBPP models should be considered a prototype, since CBPP cannot reproduce itself fully outside of the limits that capitalism has imposed on it, a result of the interdependence of CBPP with capitalist competition. The innovative activities of CBPP occur within capitalist competitive contexts, and capitalist firms can gain a competitive advantage over firms that rely only on their own proprietary research, because the former are able to access and utilize the knowledge commons. Meanwhile, participants in CBPP, especially in the digital commons, struggle to earn a direct livelihood from their contributions. CBPP is thus at risk of being subordinated.

Alternative to capitalism

Commons-based peer production (CBPP) represents an alternative form of production to traditional capitalism. Nevertheless, to this day CBPP remains a prototype of a new way of producing and cannot yet be called a complete form of production in itself. CBPP is embedded in the capitalist system, and even though its processes and forms of production differ, it is still mutually dependent on capital. If CBPP triumphs in its implementation, the market and the state will not disappear, but their relationship with the means of production will be modified. A socio-economic shift pursued by CBPP would not be straightforward or lead to a utopia, but it could help solve some current issues. As with any economic transition, new problems would emerge and the transition would be complicated; even so, moving towards a CBPP production model would be a step forward for society. Because CBPP cannot yet separate itself completely from capitalism, commoners should find innovative ways to become more autonomous from it. In a society led by commons, the market would continue to exist as in capitalism, but it would shift from being mainly extractive to being predominantly generative.

Both scenarios, the extractive as well as the generative, can include elements which are based on peer-to-peer (P2P) dynamics, or social peer-to-peer processes. Therefore, one should not only discuss peer production as an opposing alternative to current forms of market organization, but also needs to discuss how both manifest in the organizations of today’s economy. Four scenarios can be described along the lines of profit maximization and commons on one side, and centralized and decentralized control over digital production infrastructure, such as for example networking technologies: netarchical capitalism, distributed capitalism, global commons, and localized commons. Each of them uses P2P elements to a different extent and thus leads to different outcomes:

  • Netarchical capitalism: In this version of capitalism, P2P elements are mainly found in digital platforms, through which individuals can interact with each other. These platforms are controlled by the platform owners, which capture the value of the P2P exchanges.
  • Distributed capitalism: As compared to the first type, platforms are not centrally controlled in this form of capitalism, and individual autonomy and large-scale participation play an important role. However, it is still a form of capitalism, meaning it is mainly extractive and profit maximization is the main motive.
  • Global commons: This scenario is generative as it aims to add social and environmental value. It uses the digital commons to organize and deploy initiatives globally.
  • Local commons: Similar to the global commons, the local commons are also a generative scenario. However, they use the global digital commons to organize activities locally, for example by combining globally shared designs with local supply chains for manufacturing.

Job guarantee

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Job_guarantee
Eleanor Roosevelt visiting one of the Works Progress Administration projects, a job creation program in the United States

A job guarantee is an economic policy proposal that aims to create full employment and price stability by having the state promise to hire unemployed workers as an employer of last resort (ELR). It aims to provide a sustainable solution to inflation and unemployment.

The economic policy stance currently dominant around the world uses unemployment as a policy tool to control inflation. When inflation rises, the government pursues contractionary fiscal or monetary policy, with the aim of creating a buffer stock of unemployed people, reducing wage demands, and ultimately inflation. When inflationary expectations subside, expansionary policy aims to produce the opposite effect.

By contrast, in a job guarantee program, a buffer stock of employed people (employed in the job guarantee program) is typically intended to provide the same protection against inflation without the social costs of unemployment, hence potentially fulfilling the dual mandate of full employment and price stability.

Overview

A job guarantee is based on a buffer stock principle whereby the public sector offers a fixed wage job to anyone willing and able to work thereby establishing and maintaining a buffer stock of employed workers. This buffer stock expands when private sector activity declines, and declines when private sector activity expands, much like today's unemployed buffer stocks.

A job guarantee thus fulfills an absorption function to minimize the real costs associated with the flux of the private sector. When private sector employment declines, public sector employment will automatically react and increase its payrolls. So in a recession, the increase in public employment will increase net government spending, and stimulate aggregate demand and the economy. Conversely, in a boom, the decline of public sector employment and spending caused by workers leaving their job guarantee jobs for higher paid private sector employment will lessen stimulation, so the job guarantee functions as an automatic stabilizer controlling inflation. The nation always remains fully employed, with a changing mix between private and public sector employment. Since the job guarantee wage is open to everyone, it will functionally become the national minimum wage.
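
The mechanics can be shown with a toy model (all numbers below are invented for illustration and do not describe any actual program): total employment stays fixed while the public/private mix moves with the cycle.

    # Toy buffer-stock model: the job guarantee pool absorbs whoever the
    # private sector sheds, so total employment is constant over the cycle.
    labour_force = 1000

    def job_guarantee_pool(private_jobs):
        """Everyone willing to work but not privately employed is hired publicly."""
        return labour_force - private_jobs

    for private_jobs in (900, 820, 870, 950):  # a stylized recession and recovery
        jg = job_guarantee_pool(private_jobs)
        print(f"private={private_jobs}  job guarantee={jg}  total={private_jobs + jg}")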

Under a job guarantee, people of working age who are not in full-time education and have less than 35 hours per week of paid employment would be entitled to the balance of 35 hours paid employment, undertaking work of public benefit at the minimum wage, though specifics may change depending on the model. The aim is to replace unemployment and underemployment with paid employment (up to the hours desired by workers), so that those who are at any point in time surplus to the requirements of the private sector (and mainstream public sector) can earn a wage rather than be underemployed or suffer poverty and social exclusion.

A range of income support arrangements, including a generic work-tested benefit payment, could also be available to unemployed people, depending on their circumstances, as an initial subsistence income while arrangements are made to employ them.

Job guarantee theory is often associated with certain post-Keynesian economists, particularly at the Centre of Full Employment and Equity (University of Newcastle, Australia), at the Levy Economics Institute (Bard College), and at the University of Missouri–Kansas City, including the affiliated Center for Full Employment and Price Stability. The theory was put forward by Hyman Minsky in 1965. Notable job guarantee theories were conceived independently by Bill Mitchell (1998) and Warren Mosler (1997–98). This work was then developed further by L. Randall Wray (1998). A comprehensive treatment appears in Mitchell and Muysken (2008).

Inflation control

A fixed job guarantee wage provides an in-built inflation control mechanism. Mitchell (1998) called the ratio of job guarantee employment to total employment the buffer employment ratio (BER). The BER conditions the overall rate of wage demands. When the BER is high, real wage demands will be correspondingly lower. If inflation exceeds the government's announced target, tighter fiscal and monetary policy would be triggered to increase the BER, which entails workers transferring from the inflating sector to the fixed price job guarantee sector. Ultimately this attenuates the inflation spiral. So instead of a buffer stock of unemployed being used to discipline the distributional struggle, a job guarantee policy achieves this via compositional shifts in employment.
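
In symbols (the notation is assumed here for illustration; Mitchell defines the ratio in words), writing E_JG for job guarantee employment and E_P for all other employment:

    \mathrm{BER} = \frac{E_{JG}}{E_{JG} + E_{P}}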

In place of the currently widely used non-accelerating inflation rate of unemployment (NAIRU), the BER that results in stable inflation is called the non-accelerating-inflation buffer employment ratio (NAIBER). It is a full-employment steady-state job guarantee level, which depends on a range of factors, including the path of the economy. There is an issue about the validity of an unchanging nominal anchor in an inflationary environment. A job guarantee wage would be adjusted in line with productivity growth to avoid changing real relativities. Its viability as a nominal anchor relies on the fiscal authorities reining in any private wage-price pressures.

No relative wage effects

Mitchell and Muysken believe that a job guarantee introduces no relative wage effects and the rising demand does not necessarily invoke inflationary pressures because it is, by definition, satisfying the net savings desire of the private sector. Additionally, in today's demand constrained economies, firms are likely to increase capacity utilisation to meet the higher sales volumes. Given that the demand impulse is less than required in the NAIRU economy, if there were any demand-pull inflation it would be lower under a job guarantee. There are no new problems faced by employers who wish to hire labour to meet the higher sales levels. Any initial rise in demand will stimulate private sector employment growth while reducing job guarantee employment and spending. However, these demand pressures are unlikely to lead to accelerating inflation while the job guarantee pool contains workers employable by the private sector.

Wage bargaining

While a job guarantee policy frees wage bargaining from the general threat of unemployment, several factors offset this:

  • In professional occupational markets, any unemployment will generate downwards pressure on wages. However, eventually the stock of unemployed professionals will be exhausted, whereupon upward wage-price pressures can be expected to develop. With a strong and responsive tertiary education sector, skill bottlenecks can be avoided more readily than with an unemployed buffer stock;
  • Private firms would still be required to train new workers in job-specific skills in the same way they would in a non-Job Guarantee economy. However, job guarantee workers are far more likely to have retained higher levels of skill than those who are forced to succumb to lengthy spells of unemployment. This changes the bargaining environment rather significantly because firms now have reduced hiring costs. Previously, the same firms would have lowered their hiring standards and provided on-the-job training and vestibule training in tight labour markets. A job guarantee policy thus reduces the "hysteretic inertia" embodied in the long-term unemployed and allows for a smoother private sector expansion;
  • With high long-term unemployment, the excess supply of labour poses a very weak threat to wage bargaining, compared to a job guarantee environment.

List of job guarantee programs

A billboard informing the public of the presence of Expanded Public Works Programme (EPWP) workers employed at the Groot Winterhoek Wilderness Area. The EPWP is an attempt by the South African government to alleviate the country's unemployment crisis.

Programs for adults

  • 1848 – The first modern direct job creation scheme was implemented by the Parisian government in France through the National Workshops which took place from February to June 1848.
  • 1928–1991 – The Soviet Union guaranteed a job for nearly everyone from about 1928 (as part of the Great Break) through to its end in 1991. A job guarantee was included in its 1936 constitution, and was given further prominence in the 1977 revision. Later communist states followed this lead.
  • 1935–1943 – In the United States from 1935 to 1943, the Works Progress Administration aimed to ensure all families in the country had one paid job, though there was never a job guarantee. Full employment was achieved by 1942 due to World War II, which led to the ending of the organisation the following year.
  • 1945 – From 1945, the Australian government was committed to full employment through the position established by the White Paper Full Employment in Australia; however, this never included a formal job guarantee. The Reserve Bank Act 1959 charges the Reserve Bank of Australia with ensuring full employment, amongst other duties. The Australian government's definition of "full employment" changed with the adoption of the NAIRU concept in the late 1970s, with the government now aiming to keep a sufficient proportion of people unemployed to stop low-unemployment-related inflation.
  • 1946 – The original drafters of the US Employment Act of 1946 intended for it to mandate full employment, however Congress ultimately gave it a broader pro-employment nature.
  • 1948 – The UN's Universal Declaration of Human Rights' Article 23 includes "Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment." It is ratified by most non-socialist countries.
  • 1949–1997 – In the People's Republic of China from 1949 to 1997, the "iron rice bowl" practice guaranteed employment for its citizens.
  • 1978 – The US Humphrey-Hawkins Full Employment Act of 1978 authorized the government to create a "reservoir of public employment" in case private enterprise does not provide sufficient jobs. These jobs are required to be in the lower ranges of skill and pay so as to not draw the workforce away from the private sector. However, the act did not establish such a reservoir (it only authorized it), and no such program has been implemented, even though the unemployment rate has generally been above the rate (3%) targeted by the act.
  • 1998–2010 – The United Kingdom's New Deal was similar to Australia's Work for the Dole scheme, though more focused on young people. It was in place from 1998 to 2010.
  • 2001 – The Argentine government introduced the Jefes de Hogar (Heads of Households) program in 2001 to combat the social malaise that followed the financial crisis in that year.
  • 2005 – Similarly, the government of India in 2005 introduced the National Rural Employment Guarantee Act (NREGA) to bridge the vast rural–urban income disparities that have emerged as India's information technology sector has boomed. The program successfully empowered women and raised rural wages, but also attracted the ire of landowners who have to pay farm labourers more due to a higher prevailing wage. NREGA projects tend to be highly labour-intensive and low-skill, like dam and road construction and soil conservation, with modest but positive long-term benefits and mediocre management.
  • 2012 – The South African government introduced the Expanded Public Works Program (EPWP) in 2012 to overcome the extremely high unemployment and accompanying poverty in that country. EPWP projects employ workers with government, contractors, or other non-governmental organisations under the Ministerial Conditions of Employment for the EPWP or learnership employment conditions.
  • 2020 – The Public Employment Service (AMS) in Austria, in cooperation with University of Oxford economists, started a job guarantee pilot in the municipality of Gramatneusiedl (Marienthal). The project's site became famous a century earlier through a landmark study in empirical social research, when Marie Jahoda, Paul Lazarsfeld and Hans Zeisel studied the consequences of mass unemployment on a community in the wake of the Great Depression. The current job guarantee pilot returned to the site to study the opposite: what happens when unemployed people are guaranteed a job? The program offers jobs to every unemployed job seeker who has been without paid work for more than a year. When a job seeker is placed with a private company, the Public Employment Service pays 100% of the wage for the first three months and 66% during the subsequent nine months. However, most of the long-term jobless were placed in non-profit training companies tasked with repairing second-hand furniture, renovating housing, public gardening, and similar jobs. The pilot eliminated long-term unemployment, an important result given the programme's entirely voluntary nature. Participants gained greater financial security and improved their psycho-social stability and social inclusion. The study drew international attention and informed policy reports by the EU, OECD, UN, and ILO.
  • 2030 – In 2021, a report released by California governor Gavin Newsom's Future of Work Commission called for a job guarantee program in California by 2030.

Programs for youth

  • The European Youth Guarantee is a commitment by European Union member states to "guarantee that all young people under the age of 25 receive, within four months of becoming unemployed or leaving formal education, a good quality work offer to match their skills and experience; or the chance to continue their studies or undertake an apprenticeship or professional traineeship." The committed countries agreed to start implementing this in 2014. Since 2014, each year more than 3.5 million young people registered in the program accepted an offer of employment, continued education, a traineeship or an apprenticeship. Correspondingly, youth unemployment in the EU has decreased from a peak of 24% in 2013 to 14% in 2019.
  • Sweden first implemented a similar guarantee in 1984, with fellow Nordic countries Norway (1993), Denmark (1996) and Finland (1996) following. Later, some additional European countries also offered this as well, prior to the EU wide adoption.
  • Germany and many Nordic countries have long had civil and military conscription programs for young people, which requires or gives them the option to do low-paid work for a government body for up to 12 months. This was also the case in the Netherlands until 1997. It was also the case in France, and that country is reintroducing a similar program from 2021.
  • Bhutan runs a Guaranteed Employment Program for youth.

Advocacy

The Labour Party under Ed Miliband went into the 2015 UK general election with a promise to implement a limited job guarantee (specifically, part-time jobs with guaranteed training included for long-term unemployed youth) if elected; however, they lost the election.

Bernie Sanders supports a federal jobs guarantee for the United States and Alexandria Ocasio-Cortez included a jobs-guarantee program as one of her campaign pledges when she ran for, and won, her seat in the U.S. House of Representatives in 2018.

Post-scarcity

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Post-scarcity
Post-scarcity is a theoretical economic situation in which most goods can be produced in great abundance with minimal human labor needed, so that they become available to all very cheaply or even freely.

Post-scarcity does not mean that scarcity has been eliminated for all goods and services but that all people can easily have their basic survival needs met along with some significant proportion of their desires for goods and services. Writers on the topic often emphasize that some commodities will remain scarce in a post-scarcity society.

Models

Speculative technology

Futurists who speak of "post-scarcity" suggest economies based on advances in automated manufacturing technologies, often including the idea of self-replicating machines and the adoption of the division of labour, which in theory could produce nearly all goods in abundance, given adequate raw materials and energy.

More speculative forms of nanotechnology such as molecular assemblers or nanofactories, which do not currently exist, raise the possibility of devices that can automatically manufacture any specified goods given the correct instructions and the necessary raw materials and energy, and many nanotechnology enthusiasts have suggested it will usher in a post-scarcity world.

In the nearer term, the increasing automation of physical labor using robots is often discussed as a means of creating a post-scarcity economy.

Increasingly versatile forms of rapid prototyping machines, and a hypothetical self-replicating version of such a machine known as a RepRap, have also been predicted to help create the abundance of goods needed for a post-scarcity economy. Advocates of self-replicating machines, such as Adrian Bowyer, the creator of the RepRap project, argue that once a self-replicating machine is designed, anyone who owns one can make more copies to sell and is also free to ask a lower price than other sellers. Market competition will then naturally drive the cost of such machines down to the bare minimum needed to make a profit, in this case just above the cost of the physical materials and energy that must be fed into the machine as input, and the same should hold for any other goods the machine can build.
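
The undercutting argument can be made concrete with a toy calculation (all numbers are invented for illustration; this is not a model of actual RepRap economics).

    # Toy illustration: competitors repeatedly undercut each other by 10%,
    # so the price of a replicator falls toward input cost plus a thin margin.
    materials_and_energy = 100.0   # assumed cost of inputs per machine
    price = 1000.0                 # assumed initial asking price
    while price * 0.9 > materials_and_energy * 1.05:
        price *= 0.9               # a new seller undercuts the going price
    print(round(price, 2))         # settles just above the cost of inputs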

Even with fully automated production, limitations on the number of goods produced would arise from the availability of raw materials and energy, as well as from the ecological damage associated with manufacturing technologies. Advocates of technological abundance often argue for more extensive use of renewable energy and greater recycling in order to prevent future drops in the availability of energy and raw materials, and to reduce ecological damage. Solar energy in particular is often emphasized, as the cost of solar panels continues to drop (and could drop far more with automated production by self-replicating machines), and advocates point out that the total solar power striking the Earth's surface annually exceeds our civilization's current annual power usage by a factor of thousands.
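
That "factor of thousands" is easy to check on the back of an envelope; the figures below are rounded public values assumed for illustration, and atmospheric losses are ignored.

    import math

    # Rough check of the solar abundance claim.
    solar_constant = 1.36e3       # W/m^2, mean solar irradiance near Earth
    earth_radius = 6.371e6        # m
    intercepted = solar_constant * math.pi * earth_radius ** 2  # W on Earth's disc
    world_power_use = 2e13        # W, roughly 20 TW of primary consumption
    print(f"ratio: {intercepted / world_power_use:,.0f}")  # on the order of 9,000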

Advocates also sometimes argue that the energy and raw materials available could be greatly expanded by looking to resources beyond the Earth. For example, asteroid mining is sometimes discussed as a way of greatly reducing scarcity for many useful metals such as nickel. While early asteroid mining might involve crewed missions, advocates hope that eventually humanity could have the mining done by automated self-replicating machines. If this were done, then the only capital expenditure would be a single self-replicating unit (whether robotic or nanotechnological), after which the units could replicate at no further cost, limited only by the available raw materials needed to build more.
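
The force of this argument is simply compound doubling, as the following bit of illustrative arithmetic shows.

    # Illustrative arithmetic only: one seed unit that builds a copy of
    # itself each replication cycle grows exponentially.
    units = 1
    for cycle in range(20):
        units *= 2  # every existing unit produces one new unit per cycle
    print(units)    # 1048576 units after 20 cycles from a single capital outlay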

Social

A World Future Society report looked at how capitalism has historically taken advantage of scarcity. Increased resource scarcity leads to rising and fluctuating prices, which in turn drive advances in technology that use resources more efficiently, so that costs are considerably reduced, almost to zero. The report thus claims that, following an increase in scarcity from now on, the world will enter a post-scarcity age between 2050 and 2075.

Murray Bookchin's 1971 essay collection Post-Scarcity Anarchism outlines an economy based on social ecology, libertarian municipalism, and an abundance of fundamental resources, arguing that post-industrial societies have the potential to be developed into post-scarcity societies. Such development would enable "the fulfillment of the social and cultural potentialities latent in a technology of abundance".

Bookchin claims that the expanded production made possible by the technological advances of the twentieth century was pursued for market profit and at the expense of the needs of humans and of ecological sustainability. The accumulation of capital can no longer be considered a prerequisite for liberation, and the notion that obstructions such as the state, social hierarchy, and vanguard political parties are necessary in the struggle for freedom of the working classes can be dispelled as a myth.

Marxism

Karl Marx, in a section of his Grundrisse that came to be known as the "Fragment on Machines", argued that the transition to a post-capitalist society combined with advances in automation would allow for significant reductions in labor needed to produce necessary goods, eventually reaching a point where all people would have significant amounts of leisure time to pursue science, the arts, and creative activities; a state some commentators later labeled as "post-scarcity". Marx argued that capitalism—the dynamic of economic growth based on capital accumulation—depends on exploiting the surplus labor of workers, but a post-capitalist society would allow for:

The free development of individualities, and hence not the reduction of necessary labour time so as to posit surplus labour, but rather the general reduction of the necessary labour of society to a minimum, which then corresponds to the artistic, scientific etc. development of the individuals in the time set free, and with the means created, for all of them.

Marx's concept of a post-capitalist communist society involves the free distribution of goods made possible by the abundance provided by automation. The fully developed communist economic system is postulated to develop from a preceding socialist system. Marx held the view that socialism—a system based on social ownership of the means of production—would enable progress toward the development of fully developed communism by further advancing productive technology. Under socialism, with its increasing levels of automation, an increasing proportion of goods would be distributed freely.

Marx did not believe in the elimination of most physical labor through technological advancements alone in a capitalist society, because he believed capitalism contained within it certain tendencies which countered increasing automation and prevented it from developing beyond a limited point, so that manual industrial labor could not be eliminated until the overthrow of capitalism. Some commentators on Marx have argued that at the time he wrote the Grundrisse, he thought that the collapse of capitalism due to advancing automation was inevitable despite these counter-tendencies, but that by the time of his major work Capital: Critique of Political Economy he had abandoned this view, and came to believe that capitalism could continually renew itself unless overthrown.

Fiction

Literature

  • The novella The Midas Plague by Frederik Pohl describes a world of cheap energy, in which robots are overproducing the commodities enjoyed by humankind. The lower-class "poor" must spend their lives in frantic consumption, trying to keep up with the robots' extravagant production, while the upper-class "rich" can live lives of simplicity.
  • The Mars trilogy by Kim Stanley Robinson charts the terraforming of Mars as a human colony and the establishment of a post-scarcity society.
  • The Culture novels by Iain M. Banks are centered on a post-scarcity economy where technology is advanced to such a degree that all production is automated, and there is no use for money or property (aside from personal possessions with sentimental value). People in the Culture are free to pursue their own interests in an open and socially-permissive society.
    • The society depicted in the Culture novels has been described by some commentators as "communist-bloc" or "anarcho-communist". Banks' close friend and fellow science fiction writer Ken MacLeod has said that The Culture can be seen as a realization of Marx's communism, but adds that "however friendly he was to the radical left, Iain had little interest in relating the long-range possibility of utopia to radical politics in the here and now. As he saw it, what mattered was to keep the utopian possibility open by continuing technological progress, especially space development, and in the meantime to support whatever policies and politics in the real world were rational and humane."
  • The Rapture of the Nerds by Cory Doctorow and Charles Stross takes place in a post-scarcity society and involves "disruptive" technology. The title is a derogatory term for the technological singularity coined by SF author Ken MacLeod.
  • Con Blomberg's 1959 short story Sales Talk depicts a post-scarcity society in which society incentivizes consumption to reduce the burden of overproduction. To further reduce production, virtual reality is used to fulfill people's need to create.
  • Cory Doctorow's novel Walkaway presents a modern take on the idea of post-scarcity. With the advent of 3D printing – and especially the ability to use these to fabricate even better fabricators – and with machines that can search for and reprocess waste or discarded materials, the protagonists no longer have need of regular society for the basic essentials of life, such as food, clothing and shelter.

Television and film

Operator (computer programming)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Operator_(computer_programmin...