Friday, April 12, 2024

Common ownership

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Common_ownership
 
Common ownership refers to holding the assets of an organization, enterprise or community indivisibly, as common property, rather than in the names of the individual members or groups of members.

Forms of common ownership exist in every economic system. Common ownership of the means of production is a central goal of socialist political movements as it is seen as a necessary democratic mechanism for the creation and continued function of a communist society. Advocates make a distinction between collective ownership and common property as the former refers to property owned jointly by agreement of a set of colleagues, such as producer cooperatives, whereas the latter refers to assets that are completely open for access, such as a public park freely available to everyone.

Christian societies

The first church in Jerusalem shared all their money and possessions (Acts of the Apostles 2 and 4).

Inspired by the Early Christians, many Christians have since tried to follow their example of community of goods and common ownership. Common ownership is practiced by some Christian groups such as the Hutterites (for about 500 years), the Bruderhof (for some 100 years) and others. In those cases, property is generally owned by a charity set up for the purpose of maintaining the members of the religious groups.

Christian communists typically regard biblical texts in Acts 2 and Acts 4 as evidence that the first Christians lived in a communist society. Additionally, the phrase "To each according to his needs" has a biblical basis in Acts 4:35, which says "to the emissaries to distribute to each according to his need".

In capitalist economies

Common ownership is practiced by large numbers of voluntary associations and non-profit organizations, as well as implicitly by all public bodies. Although cooperatives generally align with collectivist, socialist economics, retailers' cooperatives in particular exhibit elements of common ownership even though their individual retailer members may be privately owned.

Some individuals and organizations intentionally produce or support free content, including open source software, public domain works, and fair use media.

Mutual aid is a form of common ownership that is practiced on small scales within capitalist economies, particularly among marginalized communities, and during emergencies such as the COVID-19 pandemic.

In socialist economies

Many socialist movements, including Marxist, anarchist, reformist, and communalist movements, advocate the common ownership of the means of production by all of society as an eventual goal to be achieved through the development of the productive forces. Many socialists, however, classify socialism as public ownership or cooperative ownership of the means of production, reserving common ownership for what Karl Marx and Friedrich Engels termed "upper-stage communism" or what Vladimir Lenin, Emma Goldman, and Peter Kropotkin each simply termed "communism". In Marxist and anarchist analyses, a society based on a superabundance of goods and common ownership of the means of production would be devoid of classes based on ownership of productive property.

Common ownership in a hypothetical communist society is often distinguished from primitive communism, in that communist common ownership is the outcome of social and technological developments leading to the elimination of material scarcity in society.

From 1918 until 1995, the common ownership of the means of production, distribution and exchange was cited in Clause IV of the British Labour Party's constitution as a goal of the party and was quoted on the back of its membership cards. The clause read:

To secure for the workers by hand or by brain the full fruits of their industry and the most equitable distribution thereof that may be possible upon the basis of the common ownership of the means of production, distribution and exchange, and the best obtainable system of popular administration and control of each industry or service.

Antitrust economics

In antitrust economics, common ownership describes a situation in which large investors own shares in several firms that compete within the same industry. As a result of this overlapping ownership, these firms may have reduced incentives to compete against each other because they internalize the profit-reducing effect that their competitive actions have on each other.
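
To make the internalization mechanism concrete, the sketch below implements one standard formalization, the proportional-control profit weights of O'Brien and Salop, under which firm j's manager maximizes its own profit plus a weighted sum of rivals' profits. The ownership numbers are invented for illustration and are not from the article:

```python
import numpy as np

# beta[i, j] = fraction of firm j's shares held by investor i.
# Under proportional control, firm j's manager maximizes
#   pi_j + sum_{k != j} kappa[j, k] * pi_k,
# where kappa[j, k] = (beta[:, j] . beta[:, k]) / (beta[:, j] . beta[:, j]).

def profit_weights(beta: np.ndarray) -> np.ndarray:
    """Weight each firm places on every firm's profit (diagonal = 1)."""
    n = beta.shape[1]
    kappa = np.empty((n, n))
    for j in range(n):
        own = beta[:, j] @ beta[:, j]
        for k in range(n):
            kappa[j, k] = (beta[:, j] @ beta[:, k]) / own
    return kappa

# Two diversified funds each hold 10% of both firms; each firm also has
# an undiversified 80% blockholder (hypothetical figures):
beta = np.array([
    [0.10, 0.10],   # diversified fund A
    [0.10, 0.10],   # diversified fund B
    [0.80, 0.00],   # blockholder of firm 0
    [0.00, 0.80],   # blockholder of firm 1
])
print(profit_weights(beta))
# Off-diagonal weights of about 0.03: each firm now internalizes a small
# share of the harm its competitive actions inflict on its rival.
```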

The theory was first developed by Julio Rotemberg in 1984. Several empirical contributions document the growing importance of common ownership and provide evidence to support the theory. Because of concern about these anticompetitive effects, common ownership has "stimulated a major rethinking of antitrust enforcement". The United States Department of Justice, the Federal Trade Commission, the European Commission, and the OECD have all acknowledged concerns about the effects of common ownership on lessening product market competition.

Contract theory

Neoclassical economic theory analyzes common ownership using contract theory. According to the incomplete contracting approach pioneered by Oliver Hart and his co-authors, ownership matters because the owner of an asset has residual control rights: the owner can decide what to do with the asset in every contingency not covered by a contract. In particular, an owner has stronger incentives to make relationship-specific investments than a non-owner, so ownership can ameliorate the so-called hold-up problem. Ownership is thus a scarce resource that should not be wasted.

A central result of the property rights approach is that joint ownership is suboptimal. Starting from joint ownership (where each party has veto power over the use of the asset) and moving to single ownership improves the investment incentives of the new owner while leaving the investment incentives of the other parties unchanged. However, in the basic incomplete contracting framework the suboptimality of joint ownership holds only if the investments are in human capital; joint ownership can be optimal if the investments are in physical capital. More recently, several authors have shown that joint ownership can be optimal even when investments are in human capital: in particular, when the parties are asymmetrically informed, when there is a long-term relationship between the parties, or when the parties have know-how that they may disclose.
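
A small numerical sketch can make the investment-incentive logic concrete. It assumes 50/50 Nash bargaining over the gains from trade and uses invented functional forms (none of this is from Hart's papers; it only illustrates the mechanism):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical primitives: party A invests x before trading with party B.
S = lambda x: 2.0 * np.sqrt(x)    # total surplus if the parties trade
r = lambda x: 1.2 * np.sqrt(x)    # A's stand-alone payoff if A owns the asset
c = lambda x: x                   # A's private cost of investing

def optimal_investment(a_owns: bool) -> float:
    # Under joint ownership each party can veto use of the asset, so A's
    # outside option is 0; under sole A-ownership it is r(x).
    outside = r if a_owns else (lambda x: 0.0)
    # 50/50 Nash bargaining: A gets its outside option plus half the gains.
    payoff = lambda x: outside(x) + 0.5 * (S(x) - outside(x)) - c(x)
    return minimize_scalar(lambda x: -payoff(x), bounds=(1e-9, 10.0),
                           method="bounded").x

print(optimal_investment(a_owns=True))    # ~0.64
print(optimal_investment(a_owns=False))   # ~0.25: hold-up dampens investment
# (The first-best level, maximizing S(x) - c(x), would be x = 1.0.)
```

Moving from joint to sole ownership raises A's marginal return from bargaining, so A invests more; that is the sense in which joint ownership is suboptimal in the basic model.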

Job security

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Job_security

Job security is the probability that an individual will keep their job; a job with a high level of security is one that a person is unlikely to lose. Many factors threaten job security: globalization, outsourcing, downsizing, recession, and new technology, to name a few.

Basic economic theory holds that during periods of economic expansion businesses experience increased demand, which in turn necessitates investment in more capital or labor. When businesses are experiencing growth, job confidence and security typically increase. The opposite often holds true during a recession: businesses experience reduced demand and look to downsize their workforces in the short term.

Governments and individuals are both motivated to achieve higher levels of job security. Governments attempt to do this by passing laws (such as the U.S. Civil Rights Act of 1964) which make it illegal to fire employees for certain reasons. Individuals can influence their degree of job security by increasing their skills through education and experience, or by moving to a more favorable location. The official unemployment rate and employee confidence indexes are good indicators of job security in particular fields. These statistics are closely watched by economists, government officials, and banks.

Unions also strongly influence job security. Jobs that traditionally have a strong union presence, such as many government jobs and jobs in education, healthcare and law enforcement, are considered very secure, while many non-unionized private-sector jobs are generally believed to offer lower job security, although this varies by industry and country.

In the United States

While all economies are impacted by market forces (which change the supply and demand of labor) the United States is particularly susceptible to these forces due to a long history of fiscal conservatism and minimal government intervention.

Minimal government intervention has helped the United States create an at-will employment system that applies across many industries. Consequently, with limited exceptions, an employee's job security closely follows an employer's demand for their skills. For example, in the aftermath of the dot-com boom of 1997–2000, employees in the technology industry experienced a massive drop in job security and confidence; in 2009, many manufacturing workers experienced a similar drop. Closely following market forces also means that employment in the United States rebounds when industries adjust to new economic realities, and employee confidence and job security in both manufacturing and technology have since rebounded substantially.

In the United States job insecurity is higher for men than women, with workers aged 30–64 experiencing more insecurity when compared with other age groups. Divorced or separated workers, and workers with less than a high school diploma also report higher job insecurity. Overall, workers in the construction industry have the highest rate of job insecurity at 55%.

The impact of unemployment and job insecurity on both mental and physical health is now the subject of a growing body of research. This will offer insights into why, for example, an increasing number of men in the United States are not returning to work. In 1960, only 5% of men ages 30–35 were unemployed whereas roughly 13% were unemployed in 2006. The New York Times attributes a large portion of this to blue collar and professional men refusing to work in jobs that they are overqualified for or do not provide adequate benefits in contrast to their previous jobs. It could also be attributed to a mismatch between the skills employees currently have, and the skills employers in traditionally male dominated industries (such as manufacturing) are looking for.

According to data from 2014 employee confidence reports, 50% of all current workers 18 and over feel confident in their ability to find a new job if necessary, and 60% are confident in the future of their employer. Job insecurity, defined as being worried about becoming unemployed, is a concern to 25% of U.S. workers.

During lockdowns in the COVID-19 pandemic, workplaces moved from the office to the home. Employees worried about the potential career consequences of losing productivity and effectiveness while working from home owing to a lack of work-life balance. According to studies, workers feared that their jobs might be at risk if they performed poorly while working from home during the pandemic.

Outsourcing

Overseas outsourcing (sometimes called offshoring) may decrease job security for people in certain occupations such as telemarketers, computer programmers, medical transcriptionists, and bookkeeping clerks. Generally, to outsource work to a different country the job must be quick to learn and the completed work must be transferable with minimal loss of quality.

In India

In India, job security is high because Indian labour laws make firing permanent employees difficult. Most Indians work in the same company until retirement, apart from workers in some sectors such as technology. Because of the country's large population, competition for jobs is high, but so is the size of the job market.

Self-replicating machine

From Wikipedia, the free encyclopedia
A simple form of machine self-replication

A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and, in more recent times, by K. Eric Drexler in his book on nanotechnology, Engines of Creation (which coined the term clanking replicator for such machines), and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines, which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann's self-reproducing automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and of how it is separately translated and replicated in the cell.

A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. The concept, first proposed by von Neumann no later than the 1940s, has attracted a range of different approaches involving various types of technology. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific: it also refers to a completely unrelated computer architecture that von Neumann proposed, so its use is discouraged where accuracy is important. Von Neumann himself used the term universal constructor to describe such self-replicating machines.

Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here.

History

The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".

In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made — notably by Lionel Penrose.

Von Neumann's kinematic model

A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that directed it to retrieve parts from this "sea" using a manipulator, assemble them into a duplicate of itself, and then copy the contents of its memory tape into the empty duplicate's. The machine was envisioned as consisting of as few as eight different types of components: four logic elements that send and receive stimuli, and four mechanical elements used to provide a structural skeleton and mobility. While the model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical rigor. He went on instead to develop an even more abstract self-replicator based on cellular automata. His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.

Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically, akin to biological organisms under natural selection. He asked what threshold of complexity must be crossed for machines to be able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. Notably, his design implies that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and of how it is separately translated and replicated in the cell.
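
The separation von Neumann identified, a description that is interpreted once to build the offspring and then copied verbatim into it, is the same trick a software quine uses. A toy Python illustration, offered only as an analogy, not as von Neumann's construction:

```python
# The two lines below print themselves exactly when run. The "tape" is used
# in both of von Neumann's roles: interpreted (via % formatting) to build
# the output's structure, and copied verbatim (embedded via %r) so the
# "offspring" inherits the same tape unchanged.
tape = 'tape = %r\nprint(tape %% tape)'
print(tape % tape)
```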

Moore's artificial living plants

In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw their energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines.

Dyson's replicating systems

The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken."

Advanced Automation for Space Missions

An artist's conception of a "self-growing" robotic lunar factory

In 1980, inspired by a 1979 "New Directions Workshop" held at Woods Hole, NASA conducted a joint summer study with ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories to develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing productive capacity, and the design could be modified to build self-replicating probes to explore the galaxy.

The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by the parent factory.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.

A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.

A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."

A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining.

Much of the design study was concerned with a simple, flexible chemical system for processing the ores, and with the difference between the ratios of elements needed by the replicator and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, which is needed to process regolith for aluminium but is very rare in lunar regolith.
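
The limiting-element logic is Liebig's law of the minimum applied to regolith processing: the element you must over-mine for sets the amount of regolith each replica costs. A sketch with invented numbers (the study's actual figures differ):

```python
# kg of each element needed per replica, and abundance in regolith
# (ppm by mass). All values are hypothetical placeholders.
need_kg      = {"Si": 1000, "Al": 400, "Cl": 5}
regolith_ppm = {"Si": 210_000, "Al": 70_000, "Cl": 10}

# Regolith that must be processed to recover enough of each element:
regolith_needed = {e: need_kg[e] / (regolith_ppm[e] * 1e-6) for e in need_kg}
limiting = max(regolith_needed, key=regolith_needed.get)
print(limiting, f"{regolith_needed[limiting]:,.0f} kg regolith per replica")
# -> Cl 500,000 kg: chlorine dominates despite being needed only in
#    kilogram quantities, because it is so rare in the feedstock.
```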

Lackner-Wendt Auxon replicators

In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal didn't include a complete analysis of the system's material requirements, but it described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O and H) using a high-temperature carbothermic process. The proposal, popularized in Discover magazine, featured solar-powered desalination equipment used to irrigate the desert in which the system was based. They named their machines "Auxons", from the Greek word auxein, "to grow".

Recent work

NIAC studies on self-replicating systems

In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003, and four phase I grants were awarded.

Bootstrapping self-replicating factories in space

In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to start self-replicating factories in space. They developed this concept on the basis of the In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient and then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested that the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA chief technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that nothing needs to be launched from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that fully self-replicating industry could be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.

New York University artificial DNA tile motifs

In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.

Self-replication of magnetic polymers

In 2001, Jarle Breivik at the University of Oslo created a system of magnetic building blocks which, in response to temperature fluctuations, spontaneously form self-replicating polymers.

Self-replication of neural circuits

In 1968, Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding to it probabilities, Bertrand du Castel published in 2015 a self-replicating stochastic grammar and presented a mapping of that grammar to neural networks, thereby presenting a model for a self-replicating neural circuit.

Harvard Wyss Institute

On November 29, 2021, a team at the Harvard Wyss Institute built the first living robots that can reproduce.

Self-replicating spacecraft

The idea of an automated spacecraft capable of constructing copies of itself was first proposed in scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen and the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.
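
The appeal of the seed-factory strategy is exponential arithmetic: manufacturing capacity doubles every replication cycle, so one launched seed soon outweighs anything that could be launched directly. A back-of-the-envelope sketch (the per-cycle doubling and the capacity target are assumptions, not Freitas's figures):

```python
seed_mass_tons = 443      # mass of the seed factory in Freitas's 1980 design
target = 1_000            # assumed number of factories wanted on-site

factories, cycles = 1, 0
while factories < target:
    factories *= 2        # assume each factory builds one copy per cycle
    cycles += 1

print(cycles, factories)                        # 10 cycles -> 1024 factories
print(factories * seed_mass_tons, "tons of industry from a single seed")
```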

Prospects for implementation

As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery come close to those of human labour and until manufacturing spare parts and other components locally becomes more economical than transporting them from elsewhere. As Samuel Butler pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal in any legislative consideration of such development, future development efforts may be limited to systems which lack either control, matter, or energy closure. Fully capable machine replicators would be most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).

An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction, and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.

Other sources

  • A number of patents have been granted for self-replicating machine concepts. U.S. patent 5,659,477 "Self reproducing fundamental fabricating machines (F-Units)" Inventor: Collins; Charles M. (Burke, Va.) (August 1997), U.S. patent 5,764,518 " Self reproducing fundamental fabricating machine system" Inventor: Collins; Charles M. (Burke, Va.)(June 1998); and Collins' PCT patent WO 96/20453: "Method and system for self-replicating manufacturing stations" Inventors: Merkle; Ralph C. (Sunnyvale, Calif.), Parker; Eric G. (Wylie, Tex.), Skidmore; George D. (Plano, Tex.) (January 2003).
  • Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
  • In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
  • In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space.

Commons-based peer production

From Wikipedia, the free encyclopedia

Commons-based peer production (CBPP) is a term coined by Harvard Law School professor Yochai Benkler. It describes a model of socio-economic production in which large numbers of people work cooperatively, usually over the Internet. Commons-based projects generally have less rigid hierarchical structures than those under more traditional business models.

One of the major characteristics of commons-based peer production is its non-profit scope. Often—but not always—commons-based projects are designed without a need for financial compensation for contributors. For example, freely sharing STL design files for objects on the internet enables anyone with a 3-D printer to replicate the object digitally, saving the prosumer significant money.

Synonymous terms for this process include consumer co-production and collaborative media production.

Overview

The history of commons-based peer production communities (by the P2Pvalue project)

Yochai Benkler used this term as early as 2001. Benkler first introduced the term in his 2002 paper in the Yale Law Journal (published as a pre-print in 2001) "Coase's Penguin, or Linux and the Nature of the Firm", whose title refers to the Linux mascot and to Ronald Coase, who originated the transaction costs theory of the firm that provides the methodological template for the paper's analysis of peer production. The paper defines the concept as "decentralized information gathering and exchange" and credits Eben Moglen as the scholar who first identified it without naming it.

Yochai Benkler contrasts commons-based peer production with firm production, in which tasks are delegated based on a central decision-making process, and market-based production, in which allocating different prices to different tasks serves as an incentive to anyone interested in performing a task.

In his book The Wealth of Networks (2006), Yochai Benkler significantly expands on his definition of commons-based peer production. According to Benkler, what distinguishes commons-based production is that it doesn't rely upon or propagate proprietary knowledge: "The inputs and outputs of the process are shared, freely or conditionally, in an institutional form that leaves them equally available for all to use as they choose at their individual discretion." To ensure that the knowledge generated is available for free use, commons-based projects are often shared under an open license.

Not all commons-based production necessarily qualifies as commons-based peer production. According to Benkler, peer production is defined not only by the openness of its outputs, but also by a decentralized, participant-driven method of working.

Peer production enterprises have two primary advantages over traditional hierarchical approaches to production:

  1. Information gain: Peer production allows individuals to self-assign tasks that suit their own skills, expertise, and interests. Contributors can generate dynamic content that reflects the individual skills and the "variability of human creativity."
  2. Allocation gain: the great variability of human and information resources leads to substantial increasing returns to scale in the number of people, resources, and projects that can be accomplished, without the need for a contract or other arrangement permitting the proper use of a resource for a project.

In Wikinomics, Don Tapscott and Anthony D. Williams suggest an incentive mechanism behind commons-based peer production. "People participate in peer production communities," they write, "for a wide range of intrinsic and self-interested reasons... basically, people who participate in peer production communities love it. They feel passionate about their particular area of expertise and revel in creating something new or better."

Aaron Krowne offers another definition:

Commons-based peer production refers to any coordinated, (chiefly) internet-based effort whereby volunteers contribute project components, and there exists some process to combine them to produce a unified intellectual work. CBPP covers many different types of intellectual output, from software to libraries of quantitative data to human-readable documents (manuals, books, encyclopedias, reviews, blogs, periodicals, and more).

Principles

First, the potential goals of peer production must be modular. In other words, objectives must be divisible into components, or modules, each of which can be independently produced. That allows participants to work asynchronously, without having to wait for each other's contributions or coordinate with each other in person.

Second, the granularity of the modules is essential. Granularity refers to the degree to which objects are broken down into smaller pieces (module size). Offering modules at different levels of granularity allows people with different levels of motivation to work together, each contributing modules as small or as large as suits their interest in the project.

Third, a successful peer-production enterprise must have low-cost integration—the mechanism by which the modules are integrated into a whole end product. Thus, integration must include both quality controls over the modules and a mechanism for integrating the contributions into the finished product at relatively low cost.
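
Read as an engineering checklist, the three principles say: break the goal into independent modules, offer them at varied sizes, and keep review-and-merge cheap. A toy model (all names and numbers are illustrative, not Benkler's):

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    hours: float           # granularity: how small a contribution can be
    passes_review: bool    # outcome of the quality-control check

def integrate(modules, review_cost_per_module=0.1):
    """Low-cost integration: cheap per-module review plus a merge step."""
    accepted = [m for m in modules if m.passes_review]
    return accepted, review_cost_per_module * len(modules)

# Varied granularity lets contributors with very different motivation
# levels participate; modularity lets them work asynchronously.
contributions = [
    Module("fix a typo", 0.1, True),        # fine-grained, casual contributor
    Module("write a chapter", 40.0, True),  # coarse-grained, dedicated one
    Module("spam edit", 0.1, False),        # rejected by quality control
]
accepted, cost = integrate(contributions)
print(f"{len(accepted)} modules accepted at integration cost {cost:.1f}")
```

The enterprise works only while that integration cost stays small relative to the value of the accepted modules.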

Participation

Participation in commons-based peer production is often voluntary and not necessarily associated with making a profit. The motivation behind the phenomenon thus goes far beyond traditional capitalist theories, which picture individuals as self-interested and rational agents, a portrayal also called homo economicus.

It can, however, be explained through alternative theories such as behavioral economics. The psychologist Dan Ariely, in his work Predictably Irrational, explains that social norms shape people's decisions as much as market norms do. Individuals therefore tend to be willing to create value without being paid, because of these social constructs. He draws an example from a Thanksgiving dinner: offering to pay would likely offend the family member who prepared the dinner, since they were motivated by the pleasure of treating family members.

Similarly, commons-based projects, as Yochai Benkler argues, are the result of individuals acting "out of social and psychological motivations to do something interesting". He goes on to describe a wide range of reasons, from pleasure and socially or psychologically rewarding experiences to the economic calculation of possible monetary rewards (not necessarily from the project itself).

Moreover, the need for collaboration and interaction lies at the very core of human nature and is an essential feature for survival. Digital technologies, by allowing easier and faster collaboration than was previously possible, have given rise to a new social, cultural and economic trend named the collaborative society. This theory outlines further reasons for individuals to participate in peer production, such as collaborating with strangers, building or integrating into a community, or contributing to a general good.

Examples

Additional examples of commons-based peer production communities (by the P2Pvalue project)
One day living with commons-based peer production communities (by the P2Pvalue project)

Examples of projects using commons-based peer production include Wikipedia and free and open-source software projects such as Linux.

Outgrowths

Several outgrowths have been:

  • Customization/Specialization: With free and open-source software, small groups have the capability to customize a large project according to specific needs. With the rise of low-cost 3-D printing and other digital manufacturing techniques, this is now also becoming true of open source hardware.
  • Longevity: Once code is released under a copyleft free software license it is almost impossible to make it unavailable to the public.
  • Cross-fertilization: Experts in a field can work on more than one project with no legal hassles.
  • Technology Revisions: A core technology gives rise to new implementations of existing projects.
  • Technology Clustering: Groups of products tend to cluster around a core set of technology and integrate with one another.

Related concepts

Interrelated concepts to commons-based peer production are the processes of peer governance and peer property. Peer governance is a new, bottom-up mode of participative decision-making being experimented with in peer projects such as Wikipedia and FLOSS; peer governance is thus the way that peer production, the process in which common value is produced, is managed. Peer property indicates the innovative nature of legal forms such as the General Public License and the Creative Commons licenses. Whereas traditional forms of property are exclusionary ("if it is mine, it is not yours"), peer property forms are inclusionary: the work belongs to all of us, i.e. also to you, provided you respect the basic rules laid out in the license, such as keeping the source code open.

The ease of entering and leaving an organization is a feature of adhocracies.

The principle of commons-based peer production is similar to collective invention, a model of open innovation in economics described by Robert Allen.

Also related: Open-source economics and Commercial use of copyleft works.

Criticism

Some believe that the commons-based peer production (CBPP) vision, while powerful and groundbreaking, needs to be strengthened at its root because of some allegedly wrong assumptions concerning free and open-source software (FOSS).

The CBPP literature regularly and explicitly quotes FOSS products as examples of artifacts "emerging" by virtue of mere cooperation, with no need for supervising leadership (without "market signals or managerial commands", in Benkler's words).

It can be argued, however, that in the development of any less than trivial piece of software, irrespective of whether it be FOSS or proprietary, a subset of the (many) participants always play—explicitly and deliberately—the role of leading system and subsystem designers, determining architecture and functionality, while most of the people work “underneath” them in a logical, functional sense.

At the micro level, Bauwens and Pantazis take the view that CBPP models should be considered a prototype, since CBPP cannot reproduce itself fully outside the limits that capitalism imposes on it, given the interdependence of CBPP with capitalist competition. The innovative activities of CBPP occur within capitalist competitive contexts, and capitalist firms that can access and utilize the knowledge commons are able to gain a competitive advantage over firms that rely only on their own research, especially in the digital commons, where participants in CBPP struggle to earn a direct livelihood for themselves. CBPP is then at risk of being subordinated.

Alternative to capitalism

Commons-based peer production (CBPP) represents an alternative to traditional capitalist production. Nevertheless, to this day CBPP remains a prototype of a new way of producing; it cannot yet be called a complete mode of production in itself. CBPP is embedded in the capitalist system, and even though its processes and forms of production differ, it is still mutually dependent on capital. If CBPP triumphs in its implementation, the market and the state will not disappear, but their relationship with the means of production will be modified. A socio-economic shift pursued through CBPP would be neither straightforward nor utopian, and, as with any economic transition, new problems would emerge and the transition would be complicated; still, it could help solve some current problems, and moving towards a CBPP production model would be a step forward for society. Because CBPP cannot yet separate itself completely from capitalism, commoners should find innovative ways to become more autonomous from it. In a commons-led society the market would continue to exist, as in capitalism, but it would shift from being mainly extractive to being predominantly generative.

Both scenarios, the extractive as well as the generative, can include elements which are based on peer-to-peer (P2P) dynamics, or social peer-to-peer processes. Therefore, one should not only discuss peer production as an opposing alternative to current forms of market organization, but also ask how both manifest in the organizations of today's economy. Four scenarios can be described along the lines of profit maximization versus commons orientation on one side, and centralized versus decentralized control over digital production infrastructure (such as networking technologies) on the other: netarchical capitalism, distributed capitalism, global commons, and localized commons. Each of them uses P2P elements to a different extent and thus leads to different outcomes:

  • Netarchical capitalism: In this version of capitalism, P2P elements are mainly found in digital platforms, through which individuals can interact with each other. These platforms are controlled by the platform owners, which capture the value of the P2P exchanges.
  • Distributed capitalism: Compared with the first type, platforms in this form of capitalism are not centrally controlled, and individual autonomy and large-scale participation play an important role. However, it is still a form of capitalism, meaning it is mainly extractive and profit maximization remains the main motive.
  • Global commons: This scenario is generative as it aims to add social and environmental value. It uses the digital commons to organize and deploy initiatives globally.
  • Local commons: Like the global commons, the local commons are a generative scenario. However, they use global digital commons to organize activities locally, for example by combining globally shared designs with local supply chains for manufacturing.

Job guarantee

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Job_guarantee
Eleanor Roosevelt on site at one of the Works Progress Administration projects, a public employment program in the United States

A job guarantee is an economic policy proposal that aims to create full employment and price stability by having the state promise to hire unemployed workers as an employer of last resort (ELR). It aims to provide a sustainable solution to inflation and unemployment.

The economic policy stance currently dominant around the world uses unemployment as a policy tool to control inflation. When inflation rises, the government pursues contractionary fiscal or monetary policy with the aim of creating a buffer stock of unemployed people, which reduces wage demands and, ultimately, inflation. When inflationary expectations subside, expansionary policy aims to produce the opposite effect.

By contrast, in a job guarantee program, a buffer stock of employed people (employed in the job guarantee program) is typically intended to provide the same protection against inflation without the social costs of unemployment, hence potentially fulfilling the dual mandate of full employment and price stability.

Overview

A job guarantee is based on a buffer stock principle whereby the public sector offers a fixed wage job to anyone willing and able to work thereby establishing and maintaining a buffer stock of employed workers. This buffer stock expands when private sector activity declines, and declines when private sector activity expands, much like today's unemployed buffer stocks.

A job guarantee thus fulfills an absorption function to minimize the real costs associated with the flux of the private sector. When private sector employment declines, public sector employment will automatically react and increase its payrolls. So in a recession, the increase in public employment will increase net government spending, and stimulate aggregate demand and the economy. Conversely, in a boom, the decline of public sector employment and spending caused by workers leaving their job guarantee jobs for higher paid private sector employment will lessen stimulation, so the job guarantee functions as an automatic stabilizer controlling inflation. The nation always remains fully employed, with a changing mix between private and public sector employment. Since the job guarantee wage is open to everyone, it will functionally become the national minimum wage.
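
A minimal simulation of the buffer-stock mechanism, with invented numbers, shows the countercyclical pattern: the job guarantee pool, and hence net government spending on it, expands exactly when private employment contracts:

```python
labor_force = 100.0   # workers willing and able to work (illustrative)
jg_wage = 1.0         # fixed job guarantee wage per worker per period

# Private-sector employment over a boom, recession, and recovery:
for private in [97.0, 90.0, 85.0, 92.0, 97.0]:
    jg_pool = labor_force - private     # buffer stock of JG workers
    spending = jg_wage * jg_pool        # automatic countercyclical outlay
    ber = jg_pool / labor_force         # Mitchell's buffer employment ratio
    print(f"private={private:5.1f}  jg={jg_pool:4.1f}  "
          f"spending={spending:4.1f}  BER={ber:.2f}")
```

Total employment stays at 100 in every period; only the public/private mix and the associated public outlay change.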

Under a job guarantee, people of working age who are not in full-time education and have less than 35 hours per week of paid employment would be entitled to the balance of 35 hours paid employment, undertaking work of public benefit at the minimum wage, though specifics may change depending on the model. The aim is to replace unemployment and underemployment with paid employment (up to the hours desired by workers), so that those who are at any point in time surplus to the requirements of the private sector (and mainstream public sector) can earn a wage rather than be underemployed or suffer poverty and social exclusion.

A range of income support arrangements, including a generic work-tested benefit payment, could also be available to unemployed people, depending on their circumstances, as an initial subsistence income while arrangements are made to employ them.

Job guarantee theory is often associated with certain post-Keynesian economists, particularly at the Centre of Full Employment and Equity (University of Newcastle, Australia), at the Levy Economics Institute (Bard College), and at University of Missouri – Kansas City including the affiliated Center for Full Employment and Price Stability. The theory was put forward by Hyman Minsky in 1965. Notable job guarantee theories were conceived independently by Bill Mitchell (1998), and Warren Mosler (1997–98). This work was then developed further by L. Randall Wray (1998). A comprehensive treatment of it appears in Mitchell and Muysken (2008).

Inflation control

A fixed job guarantee wage provides an in-built inflation control mechanism. Mitchell (1998) called the ratio of job guarantee employment to total employment the buffer employment ratio (BER). The BER conditions the overall rate of wage demands. When the BER is high, real wage demands will be correspondingly lower. If inflation exceeds the government's announced target, tighter fiscal and monetary policy would be triggered to increase the BER, which entails workers transferring from the inflating sector to the fixed price job guarantee sector. Ultimately this attenuates the inflation spiral. So instead of a buffer stock of unemployed being used to discipline the distributional struggle, a job guarantee policy achieves this via compositional shifts in employment.

In place of the widely used non-accelerating inflation rate of unemployment (NAIRU), the BER that results in stable inflation is called the non-accelerating-inflation buffer employment ratio (NAIBER). It is a full-employment steady-state job guarantee level, which depends on a range of factors including the path of the economy. There is an issue about the validity of an unchanging nominal anchor in an inflationary environment. A job guarantee wage would be adjusted in line with productivity growth to avoid changing real relativities; its viability as a nominal anchor relies on the fiscal authorities reining in any private wage-price pressures.
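
In symbols (notation ours, following the definitions above), with $E_{JG}$ denoting job guarantee employment and $E$ total employment:

\[
\mathrm{BER} = \frac{E_{JG}}{E},
\qquad
\mathrm{NAIBER} = \text{the value of BER at which inflation is stable.}
\]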

No relative wage effects

Mitchell and Muysken believe that a job guarantee introduces no relative wage effects and the rising demand does not necessarily invoke inflationary pressures because it is, by definition, satisfying the net savings desire of the private sector. Additionally, in today's demand constrained economies, firms are likely to increase capacity utilisation to meet the higher sales volumes. Given that the demand impulse is less than required in the NAIRU economy, if there were any demand-pull inflation it would be lower under a job guarantee. There are no new problems faced by employers who wish to hire labour to meet the higher sales levels. Any initial rise in demand will stimulate private sector employment growth while reducing job guarantee employment and spending. However, these demand pressures are unlikely to lead to accelerating inflation while the job guarantee pool contains workers employable by the private sector.

Wage bargaining

While a job guarantee policy frees wage bargaining from the general threat of unemployment, several factors offset this:

  • In professional occupational markets, any unemployment will generate downwards pressure on wages. However, eventually the stock of unemployed professionals will be exhausted, whereupon upward wage-price pressures can be expected to develop. With a strong and responsive tertiary education sector, skill bottlenecks can be avoided more readily than with an unemployed buffer stock;
  • Private firms would still be required to train new workers in job-specific skills in the same way they would in a non-Job Guarantee economy. However, job guarantee workers are far more likely to have retained higher levels of skill than those who are forced to succumb to lengthy spells of unemployment. This changes the bargaining environment rather significantly because firms now have reduced hiring costs. Previously, the same firms would have lowered their hiring standards and provided on-the-job training and vestibule training in tight labour markets. A job guarantee policy thus reduces the "hysteretic inertia" embodied in the long-term unemployed and allows for a smoother private sector expansion;
  • With high long-term unemployment, the excess supply of labour poses a very weak threat to wage bargaining, compared to a job guarantee environment.

List of job guarantee programs

A billboard informing the public of the presence of Expanded Public Works Programme (EPWP) workers employed at the Groot Winterhoek Wilderness Area. The EPWP is an attempt by government to alleviate South Africa's unemployment crisis.

Programs for adults

  • 1848 – The first modern direct job creation scheme was implemented by the Parisian government in France through the National Workshops which took place from February to June 1848.
  • 1928–1991 – The Soviet Union guaranteed a job for nearly everyone from about 1928 (as part of the Great Break) through to its end in 1991. A job guarantee was included in its 1936 constitution, and was given further prominence in the 1977 revision. Later communist states followed this lead.
  • 1935–1943 – In the United States from 1935 to 1943, the Works Progress Administration aimed to ensure all families in the country had one paid job, though there was never a job guarantee. Full employment was achieved by 1942 due to World War II, which led to the ending of the organisation the following year.
  • 1945 – From 1945, the Australian government was committed to full employment through the position established by the White Paper Full Employment in Australia, although this never included a formal job guarantee. The Reserve Bank Act 1959 charges the Reserve Bank of Australia with ensuring full employment, amongst other duties. The Australian government's definition of "full employment" changed with the adoption of the NAIRU concept in the late 1970s; the government now aims to keep a sufficient proportion of people unemployed to stop low-unemployment-related inflation.
  • 1946 – The original drafters of the US Employment Act of 1946 intended for it to mandate full employment, however Congress ultimately gave it a broader pro-employment nature.
  • 1948 – The UN's Universal Declaration of Human Rights' Article 23 includes "Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment." It is ratified by most non-socialist countries.
  • 1949–1997 – In the People's Republic of China from 1949 to 1997, the "iron rice bowl" practice guaranteed employment for its citizens.
  • 1978 – The US Humphrey-Hawkins Full Employment Act of 1978 authorized the government to create a "reservoir of public employment" in case private enterprise does not provide sufficient jobs. These jobs are required to be in the lower ranges of skill and pay so as to not draw the workforce away from the private sector. However, the act did not establish such a reservoir (it only authorized it), and no such program has been implemented, even though the unemployment rate has generally been above the rate (3%) targeted by the act.
  • 1998–2010 – The United Kingdom's New Deal was similar to Australia's Work for the Dole scheme, though more focused on young people. It was in place from 1998 to 2010.
  • 2001 – The Argentine government introduced the Jefes de Hogar (Heads of Households) program in 2001 to combat the social malaise that followed the financial crisis in that year.
  • 2005 – Similarly, the government of India in 2005 introduced a five-year plan called the National Rural Employment Guarantee Act (NREGA) to bridge the vast rural–urban income disparities that have emerged as India's information technology sector has boomed. The program successfully empowered women and raised rural wages, but also attracted the ire of landowners who have to pay farm labourers more due to a higher prevailing wage. NREGA projects tend to be highly labour-intensive and low skill, like dam and road construction, and soil conservation, with modest but positive long-term benefits and mediocre management.
  • 2012 – The South African government introduced the Expanded Public Works Program (EPWP) in 2012 to overcome the extremely high unemployment and accompanying poverty in that country. EPWP projects employ workers with government, contractors, or other non-governmental organisations under the Ministerial Conditions of Employment for the EPWP or learnership employment conditions.
  • 2020 – The Public Employment Service (AMS) in Austria, in cooperation with University of Oxford economists, started a job guarantee pilot in the municipality of Gramatneusiedl (Marienthal). The project's site became famous a century earlier through a landmark study in empirical social research, when Marie Jahoda, Paul Lazarsfeld and Hans Zeisel studied the consequences of mass unemployment on a community in the wake of the Great Depression. The current job guarantee pilot returned to the site to study the opposite: what happens when unemployed people are guaranteed a job? The program offers jobs to every unemployed job seeker who has been without paid work for more than a year. When a job seeker is placed with a private company, the Public Employment Service pays 100% of the wage for the first three months and 66% during the subsequent nine months. However, most of the long-term jobless were placed in non-profit training companies tasked with repairing second-hand furniture, renovating housing, public gardening, and similar jobs. The pilot eliminated long-term unemployment – an important result, given the programme's entirely voluntary nature. Participants gained greater financial security and improved their psycho-social stability and social inclusion. The study drew international attention and informed policy reports by the EU, OECD, UN, and ILO.
  • 2030 – In 2021, a report released by California governor Gavin Newsom's Future of Work Commission called for a job guarantee program in California by 2030.

Programs for youth

  • The European Youth Guarantee is a commitment by European Union member states to "guarantee that all young people under the age of 25 receive, within four months of becoming unemployed or leaving formal education, a good quality work offer to match their skills and experience; or the chance to continue their studies or undertake an apprenticeship or professional traineeship." The committed countries agreed to start implementing this in 2014. Since 2014, each year more than 3.5 million young people registered in the program accepted an offer of employment, continued education, a traineeship or an apprenticeship. Correspondingly, youth unemployment in the EU has decreased from a peak of 24% in 2013 to 14% in 2019.
  • Sweden first implemented a similar guarantee in 1984, with fellow Nordic countries Norway (1993), Denmark (1996) and Finland (1996) following. Some additional European countries later offered such guarantees as well, prior to the EU-wide adoption.
  • Germany and many Nordic countries have long had civil and military conscription programs for young people, which require them, or give them the option, to do low-paid work for a government body for up to 12 months. This was also the case in the Netherlands until 1997 and in France, which is reintroducing a similar program from 2021.
  • Bhutan runs a Guaranteed Employment Program for youth.

Advocacy

The Labour Party under Ed Miliband went into the 2015 UK general election with a promise to implement a limited job guarantee (specifically, part-time jobs with guaranteed training included for long-term unemployed youth) if elected; however, they lost the election.

Bernie Sanders supports a federal jobs guarantee for the United States and Alexandria Ocasio-Cortez included a jobs-guarantee program as one of her campaign pledges when she ran for, and won, her seat in the U.S. House of Representatives in 2018.
