Sunday, September 29, 2024

Programmer

From Wikipedia, the free encyclopedia

Betty Jennings and Fran Bilas, part of the first ENIAC programming team
 
Occupation
  • Names: Computer programmer
  • Occupation type: Profession
  • Activity sectors: Information technology, software industry
Description
  • Competencies: Writing and debugging computer code
  • Education required: Varies from apprenticeship to bachelor's degree, or self-taught

A programmer, computer programmer or coder is an author of computer source code – someone with skill in computer programming.

The professional titles software developer and software engineer are used for jobs that require a programmer.

Generally, a programmer writes code in a computer language and with an intent to build software that achieves some goal.

Identification

Sometimes a programmer or job position is identified by the language used or the target platform, for example: assembly programmer or web developer.

Job title

The job titles that include programming tasks have differing connotations across the computer industry and to different individuals. The following are notable descriptions.

A software developer primarily implements software based on specifications and fixes bugs. Other duties may include reviewing code changes and testing. To achieve the required skills for the job, they might obtain a computer science or associate degree, attend a programming boot camp or be self-taught.

A software engineer usually is responsible for the same tasks as a developer plus broader responsibilities of software engineering including architecting and designing new features and applications, targeting new platforms, managing the software development lifecycle (design, implementation, testing, and deployment), leading a team of programmers, communicating with customers, managers and other engineers, considering system stability and quality, and exploring software development methodologies.

Sometimes, a software engineer is required to have a degree in software engineering, computer engineering, or computer science. Some countries legally require an engineering degree for a person to use the title engineer.

History

Ada Lovelace is considered by many to be the first computer programmer.

British countess and mathematician Ada Lovelace is often considered to be the first computer programmer. She authored an algorithm, published in 1843, for calculating Bernoulli numbers on Charles Babbage's Analytical Engine. Because the machine was not completed in her lifetime, she never saw her algorithm run.

In 1941, German civil engineer Konrad Zuse became the first person to execute a program on a working, program-controlled computer. From 1943 to 1945, according to computer scientist Wolfgang K. Giloi and AI professor Raúl Rojas, among others, Zuse created the first high-level programming language, Plankalkül.

Members of the 1945 ENIAC programming team of Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman have since been credited as the first professional computer programmers.

The software industry

The first company founded specifically to provide software products and services was the Computer Usage Company in 1955. Before that time, computers were programmed either by customers or the few commercial computer manufacturers of the time, such as Sperry Rand and IBM.

The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, governments, and businesses created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed between users of a particular machine for no charge, while others were sold on a commercial basis. Other firms, such as Computer Sciences Corporation (founded in 1959), also started to grow. Computer manufacturers soon started bundling operating systems, system software and programming environments with their machines; the IBM 1620 came with the 1620 Symbolic Programming System and FORTRAN.

The industry expanded greatly with the rise of the personal computer (PC) in the mid-1970s, which brought computing to the average office worker. In the following years, the PC also helped create a constantly growing market for games, applications and utility software. This resulted in increased demand for software developers for that period of time.

Nature of the work

Computer programmers write, test, debug, and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming — advanced computing technologies and sophisticated new languages and programming tools — have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.

Programmers work in many settings, including corporate information technology (IT) departments, big software companies, small service firms and government entities of all sizes. Many professional programmers also work for consulting companies at client sites as contractors. Licensing is not typically required to work as a programmer, although professional certifications are commonly held by programmers. Programming is considered a profession.

Programmers' work varies widely depending on the type of business for which they are writing programs. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Simple programs can be written in a few hours. More complex ones may require more than a year of work, while others are never considered 'complete' but rather are continuously improved as long as they stay in use. In most cases, several programmers work together as a team under a senior programmer's supervision.

Types of software

Programming editors, also known as source code editors, are text editors that are specifically designed for programmers or developers to write the source code of an application or a program. Most of these editors include features useful for programmers, which may include color syntax highlighting, auto-indentation, auto-completion, bracket matching, syntax checking, and plug-in support. These features aid users during coding, debugging, and testing.
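
As an illustration of one such feature, here is a minimal Python sketch of the bracket-matching check an editor might run; it is a simplification, since real editors work incrementally and skip brackets inside strings and comments:

    # Minimal bracket-matching check over a whole snippet of source text.
    OPENERS = "([{"
    CLOSER_TO_OPENER = {")": "(", "]": "[", "}": "{"}

    def brackets_balanced(source):
        """Return True if (), [] and {} are properly nested in `source`."""
        stack = []
        for ch in source:
            if ch in OPENERS:
                stack.append(ch)
            elif ch in CLOSER_TO_OPENER:
                if not stack or stack.pop() != CLOSER_TO_OPENER[ch]:
                    return False
        return not stack

    print(brackets_balanced("if (x[0] == 1) { y = [2, 3] }"))  # True
    print(brackets_balanced("f(a[1)]"))                        # False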

Globalization

Market changes in the UK

According to BBC News, 17% of computer science students could not find work in their field six months after graduation in 2009, the highest rate among the university subjects surveyed; by contrast, 0% of medical students were unemployed in the same survey.

Market changes in the US

After the crash of the dot-com bubble (1999–2001) and the Great Recession (2008), many U.S. programmers were left without work or with lower wages. In addition, enrollment in computer-related degrees and other STEM degrees (STEM attrition) in the US has been dropping for years, especially among women, which, according to Beaubouef and Mason, could be attributed to a lack of general interest in science and mathematics and to an apparent fear that programming will be subject to the same pressures as manufacturing and agriculture careers. For programmers, the U.S. Bureau of Labor Statistics (BLS) Occupational Outlook originally predicted growth of 12 percent from 2010 to 2020, followed by a decline of 7 percent from 2016 to 2026, a decline of 9 percent from 2019 to 2029, a decline of 10 percent from 2021 to 2031, and a decline of 11 percent from 2022 to 2032. Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. However, for software developers the BLS projects a 22% increase in employment from 2019 to 2029, from 1,469,200 to 1,785,200 jobs, with a median base salary of $110,000 per year. This prediction is lower than the earlier 2010 to 2020 predicted increase of 30% for software developers. Though the distinction is somewhat ambiguous, software developers engage in a wider array of aspects of application development and are generally higher skilled than programmers, making outsourcing less of a risk. Another reason for the decline among programmers is that their skills are being merged into other professions, such as developer roles, as employers increase the requirements for a position over time. There is also the additional concern that recent advances in artificial intelligence might reduce the demand for future generations of software professionals.

Software development

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Software_development

Software development is the process used to create software. Programming and maintaining the source code is the central step of this process, but it also includes conceiving the project, evaluating its feasibility, analyzing the business requirements, designing the software, and testing it through to release. Software engineering, in addition to development, also includes project management, employee management, and other overhead functions. Software development may be sequential, in which each step is complete before the next begins, but iterative development methods, in which multiple steps can be executed at once and earlier steps can be revisited, have also been devised to improve flexibility, efficiency, and scheduling.

Software development involves professionals from various fields, not just software programmers but also individuals specialized in testing, documentation writing, graphic design, user support, marketing, and fundraising. A number of tools and models are commonly used in software development, such as integrated development environment (IDE), version control, computer-aided software engineering, and software documentation.

Methodologies

Flowchart of the evolutionary prototyping model, an iterative development model

Each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations.

  • The simplest methodology is the "code and fix", typically used by a single programmer working on a small project. After briefly considering the purpose of the program, the programmer codes it and runs it to see if it works. When they are done, the product is released. This methodology is useful for prototypes but cannot be used for more elaborate programs.
  • In the top-down waterfall model, feasibility, analysis, design, development, quality assurance, and implementation occur sequentially in that order. This model requires one step to be complete before the next begins, causing delays, and makes it impossible to revise previous steps if necessary.
  • With iterative processes these steps are interleaved with each other for improved flexibility, efficiency, and more realistic scheduling. Instead of completing the project all at once, one might go through most of the steps with one component at a time. Iterative development also lets developers prioritize the most important features, enabling lower priority ones to be dropped later on if necessary. Agile is one popular method, originally intended for small or medium sized projects, that focuses on giving developers more control over the features that they work on to reduce the risk of time or cost overruns. Derivatives of agile include extreme programming and Scrum. Open-source software development typically uses agile methodology with concurrent design, coding, and testing, due to reliance on a distributed network of volunteer contributors.
  • Beyond agile, some companies integrate information technology (IT) operations with software development, a practice called DevOps (or DevSecOps when computer security is included). DevOps includes continuous development, testing, integration of new code in the version control system, deployment of the new code, and sometimes delivery of the code to clients. The purpose of this integration is to deliver IT services more quickly and efficiently.

Another focus in many programming methodologies is the idea of trying to catch issues such as security vulnerabilities and bugs as early as possible (shift-left testing) to reduce the cost of tracking and fixing them.

In 2009, it was estimated that 32 percent of software projects were delivered on time and on budget with their full functionality. An additional 44 percent were delivered but were missing at least one of these criteria. The remaining 24 percent were cancelled prior to release.

Steps

Software development life cycle refers to the systematic process of developing applications.

Feasibility

The sources of ideas for software products are plentiful. These ideas can come from market research, including the demographics of potential new customers, existing customers, sales prospects who rejected the product, other internal software development staff, or a creative third party. Ideas for software products are usually first evaluated by marketing personnel for economic feasibility, fit with existing channels of distribution, possible effects on existing product lines, required features, and fit with the company's marketing objectives. In the marketing evaluation phase, the cost and time assumptions are evaluated. The feasibility analysis estimates the project's return on investment, its development cost, and its timeframe. Based on this analysis, the company can make a business decision to invest in further development. After deciding to develop the software, the company is focused on delivering the product at or below the estimated cost and time, and with a high standard of quality (i.e., lack of bugs) and the desired functionality. Nevertheless, most software projects run late, and sometimes compromises are made in features or quality to meet a deadline.

Analysis

Software analysis begins with a requirements analysis to capture the business needs of the software. Challenges in identifying needs are that current or potential users may have different and incompatible needs, may not understand their own needs, and may change their needs during the process of software development. Ultimately, the result of analysis is a detailed specification for the product that developers can work from. Software analysts often decompose the project into smaller objects, components that can be reused for increased cost-effectiveness, efficiency, and reliability. Decomposing the project may enable a multi-threaded implementation that runs significantly faster on multiprocessor computers.
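
As a rough sketch of that last point in Python, the job below is decomposed into independent components that run in parallel worker processes; the analyze function and its inputs are invented for illustration:

    # Decomposition sketch: independent components processed in parallel.
    from multiprocessing import Pool

    def analyze(size):
        # Stand-in for real per-component work (parsing, validation, ...).
        return sum(i * i for i in range(size))

    if __name__ == "__main__":
        components = [100_000, 200_000, 300_000, 400_000]
        with Pool() as pool:              # one worker per CPU core by default
            results = pool.map(analyze, components)
        print(results)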

During the analysis and design phases of software development, structured analysis is often used to break down the customer's requirements into pieces that can be implemented by software programmers. The underlying logic of the program may be represented in data-flow diagrams, data dictionaries, pseudocode, state transition diagrams, and/or entity relationship diagrams. If the project incorporates a piece of legacy software that has not been modeled, this software may be modeled to help ensure it is correctly incorporated with the newer software.

Design

Design involves choices about the implementation of the software, such as which programming languages and database software to use, or how the hardware and network communications will be organized. Design may be iterative, with users consulted about their needs in a process of trial and error. Design often involves people who are experts in aspects such as database design, screen architecture, and the performance of servers and other hardware. Designers often attempt to find patterns in the software's functionality to spin off distinct modules that can be reused with object-oriented programming. An example of this is the model–view–controller (MVC), an interface between a graphical user interface and the backend.
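
The following minimal Python sketch illustrates that model–view–controller separation; the classes and data are invented for the example:

    # Minimal model-view-controller sketch; names are illustrative only.

    class Model:
        """Holds application data and knows nothing about display."""
        def __init__(self):
            self.items = []
        def add(self, item):
            self.items.append(item)

    class View:
        """Renders data and holds no business logic or state."""
        def render(self, items):
            for number, item in enumerate(items, 1):
                print(f"{number}. {item}")

    class Controller:
        """Mediates between user actions, the model, and the view."""
        def __init__(self, model, view):
            self.model, self.view = model, view
        def add_item(self, item):
            self.model.add(item)                 # update the state
            self.view.render(self.model.items)   # refresh the display

    controller = Controller(Model(), View())
    controller.add_item("first invoice")
    controller.add_item("second invoice")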

Programming

The central feature of software development is creating and understanding the software that implements the desired functionality. There are various strategies for writing the code. Cohesive software comprises components that are largely independent from each other. Coupling is the interrelation of different software components, which is viewed as undesirable because it increases the difficulty of maintenance. Often, software programmers do not follow industry best practices, resulting in code that is inefficient, difficult to understand, or lacking documentation of its functionality. These standards are especially likely to break down under deadline pressure. As a result, testing, debugging, and revising the code become much more difficult. Code refactoring, for example adding more comments to the code, is one way to improve the understandability of code.
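
A small Python sketch makes the coupling point concrete; all names are invented. The first report reaches into the data store's private layout (tight coupling), while the second depends only on a small public method (loose coupling), so the storage can change without breaking it:

    # Tightly coupled: depends on the private row layout of the store,
    # so any change to _rows silently breaks this report.
    class TightReport:
        def total(self, store):
            return sum(row[2] for row in store._rows)

    # Loosely coupled: depends only on the public amounts() interface.
    class LooseReport:
        def total(self, store):
            return sum(store.amounts())

    class LedgerStore:
        def __init__(self):
            self._rows = [("2024-01-01", "rent", 900), ("2024-01-02", "food", 120)]
        def amounts(self):
            return [amount for _, _, amount in self._rows]

    print(LooseReport().total(LedgerStore()))  # 1020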

Testing

Testing is the process of ensuring that the code executes correctly and without errors. Debugging is performed by each software developer on their own code to confirm that the code does what it is intended to. In particular, it is crucial that the software executes on all inputs, even if the result is incorrect. Code reviews by other developers are often used to scrutinize new code added to the project and, according to some estimates, dramatically reduce the number of bugs persisting after testing is complete. Once the code has been submitted, quality assurance (a separate department of non-programmers at most large companies) tests the accuracy of the entire software product. Acceptance tests derived from the original software requirements are a popular tool for this. Quality testing also often includes stress and load checking (whether the software is robust to heavy levels of input or usage), integration testing (to ensure that the software is adequately integrated with other software), and compatibility testing (measuring the software's performance across different operating systems or browsers). When tests are written before the code, this is called test-driven development.
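
As a sketch of test-driven development with Python's standard unittest module, the tests below describe the behaviour of an invented parse_price helper before it is written; the implementation is included here so the example runs:

    # Test-first sketch: the tests specify the behaviour of parse_price.
    import unittest

    def parse_price(text):
        """Convert a price string such as '$1,200.50' to a float."""
        return float(text.replace("$", "").replace(",", ""))

    class ParsePriceTests(unittest.TestCase):
        def test_plain_number(self):
            self.assertEqual(parse_price("42"), 42.0)

        def test_currency_symbol_and_separators(self):
            self.assertEqual(parse_price("$1,200.50"), 1200.50)

        def test_bad_input_raises(self):
            with self.assertRaises(ValueError):
                parse_price("not a price")

    if __name__ == "__main__":
        unittest.main()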

Production

Production is the phase in which software is deployed to the end user. During production, the developer may create technical support resources for users or a process for fixing bugs and errors that were not caught earlier. There might also be a return to earlier development phases if user needs changed or were misunderstood.

Developers

Software development is performed by software developers, usually working on a team. Efficient communication among team members is essential to success. This is more easily achieved if the team is small, used to working together, and located near each other. Communication also helps identify problems at an earlier stage of development and avoid duplicated effort. Many development projects avoid the risk of losing essential knowledge held by only one employee by ensuring that multiple workers are familiar with each component. Although workers on proprietary software are paid, most contributors to open-source software are volunteers. Alternatively, they may be paid by companies whose business model does not involve selling the software but something else, such as services and modifications to open-source software.

Models and tools

Computer-aided software engineering

Computer-aided software engineering (CASE) is the use of tools for the partial automation of software development. CASE enables designers to sketch out the logic of a program, whether one still to be written or an existing one, to help integrate it with new code or to reverse engineer it (for example, to change the programming language).

Documentation

Documentation comes in two forms that are usually kept separate—that intended for software developers, and that made available to the end user to help them use the software. Most developer documentation is in the form of code comments for each file, class, and method that cover the application programming interface (API)—how the piece of software can be accessed by another—and often implementation details. This documentation is helpful for new developers to understand the project when they begin working on it. In agile development, the documentation is often written at the same time as the code. User documentation is more frequently written by technical writers.
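
A short Python sketch of developer documentation written as a docstring alongside the code; the rotate function and the API it claims to belong to are invented for the example:

    # Developer documentation as docstrings; the function and its "API" are invented.

    def rotate(items, k):
        """Return a new list with `items` rotated left by `k` positions.

        Part of a hypothetical collection-utilities API.

        Args:
            items: the sequence to rotate; it is not modified in place.
            k: number of positions; may be negative or exceed len(items).

        Example:
            >>> rotate([1, 2, 3, 4], 1)
            [2, 3, 4, 1]
        """
        if not items:
            return []
        k %= len(items)
        return list(items[k:]) + list(items[:k])

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # the docstring example doubles as a test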

Effort estimation

Accurate estimation is crucial at the feasibility stage and in delivering the product on time and within budget. The process of generating estimates is often delegated by the project manager. Because the effort estimation is directly related to the size of the complete application, it is strongly influenced by the addition of features in the requirements: the more requirements, the higher the development cost. Aspects not related to functionality, such as the experience of the software developers and code reusability, are also essential to consider in estimation. As of 2019, most of the tools for estimating the amount of time and resources for software development were designed for conventional applications and are not applicable to web applications or mobile applications.

Integrated development environment

Anjuta, a C and C++ IDE for the GNOME environment

An integrated development environment (IDE) supports software development with enhanced features compared to a simple text editor. IDEs often include automated compiling, syntax highlighting of errors, debugging assistance, integration with version control, and semi-automation of tests.

Version control

Version control is a popular way of managing changes made to the software. Whenever a new version is checked in, the software saves a backup of all modified files. If multiple programmers are working on the software simultaneously, it manages the merging of their code changes. The software highlights cases where there is a conflict between two sets of changes and allows programmers to fix the conflict.
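
The toy Python sketch below shows the core idea of merging two programmers' changes against a common base version and flagging conflicts; real version control systems use far more sophisticated algorithms:

    # Toy three-way merge over lines; a conflict is flagged when both sides
    # changed the same line in different ways.

    def merge_line(base, ours, theirs):
        if ours == theirs:         # same change on both sides (or no change)
            return ours, False
        if ours == base:           # only the other side changed it
            return theirs, False
        if theirs == base:         # only our side changed it
            return ours, False
        return f"<<< {ours} ||| {theirs} >>>", True   # both changed it: conflict

    base   = ["alpha", "beta",  "gamma"]
    ours   = ["alpha", "BETA",  "gamma"]
    theirs = ["alpha", "beta!", "delta"]

    for b, o, t in zip(base, ours, theirs):
        merged, conflict = merge_line(b, o, t)
        print(("CONFLICT: " if conflict else "merged:   ") + merged)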

View model

The TEAF Matrix of Views and Perspectives

A view model is a framework that provides the viewpoints on the system and its environment, to be used in the software development process. It is a graphical representation of the underlying semantics of a view.

The purpose of viewpoints and views is to enable human engineers to comprehend very complex systems and to organize the elements of the problem around domains of expertise. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization.

Fitness functions

Fitness functions are automated, objective tests that ensure new developments do not deviate from established constraints, checks, and compliance controls.
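
As an illustration, an architectural fitness function can be an ordinary automated test. The Python sketch below enforces an invented layering rule (code under src/ui must not import from a db package) and is written to run under a test runner such as pytest; the paths and the rule itself are assumptions for the example:

    # Fitness function sketch: fails the build if the (invented) rule
    # "ui code must not import the db layer directly" is ever violated.
    import pathlib
    import re

    FORBIDDEN_IMPORT = re.compile(r"^\s*(?:from|import)\s+db\b", re.MULTILINE)

    def test_ui_does_not_import_db():
        offenders = [
            str(path)
            for path in pathlib.Path("src/ui").rglob("*.py")
            if FORBIDDEN_IMPORT.search(path.read_text(encoding="utf-8"))
        ]
        assert not offenders, f"ui layer imports db directly: {offenders}"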

Intellectual property

Intellectual property can be an issue when developers integrate open-source code or libraries into a proprietary product, because most open-source licenses used for software require that modifications be released under the same license. As an alternative, developers may choose a proprietary alternative or write their own software module.

Application portfolio management

From Wikipedia, the free encyclopedia

IT Application Portfolio Management (APM) is a practice that has emerged in mid to large-size information technology (IT) organizations since the mid-1990s. Application Portfolio Management attempts to use the lessons of financial portfolio management to justify and measure the financial benefits of each application in comparison to the costs of the application's maintenance and operations.

Evolution of the practice

Likely the earliest mention of the Applications Portfolio was in Cyrus Gibson and Richard Nolan's HBR article "Managing the Four Stages of EDP Growth" in 1974.

Gibson and Nolan posited that businesses' understanding and successful use of IT "grows" in predictable stages and a given business' progress through the stages can be measured by observing the Applications Portfolio, User Awareness, IT Management Practices, and IT Resources within the context of an analysis of overall IT spending.

Nolan, Norton & Co. pioneered the use of these concepts in practice with studies at DuPont, Deere, Union Carbide, IBM and Merrill Lynch among others. In these "Stage Assessments" they measured the degree to which each application supported or "covered" each business function or process, spending on the application, functional qualities, and technical qualities. These measures provided a comprehensive view of the application of IT to the business, the strengths and weaknesses, and a road map to improvement.

APM was widely adopted in the late 1980s and through the 1990s as organizations began to address the threat of application failure when the date changed to the year 2000 (a threat that became known as Year 2000 or Y2K). During this time, tens of thousands of IT organizations around the world developed a comprehensive list of their applications, with information about each application.

In many organizations, the value of developing this list was challenged by business leaders concerned about the cost of addressing the Y2K risk. In some organizations, the notion of managing the portfolio was presented to the business people in charge of the Information Technology budget as a benefit of performing the work, above and beyond managing the risk of application failure.

There are two main categories of application portfolio management solutions, generally referred to as 'Top Down' and 'Bottom Up' approaches. The first need in any organization is to understand what applications exist and their main characteristics (such as flexibility, maintainability, owner, etc.), typically referred to as the 'Inventory'. Another approach to APM is to gain a detailed understanding of the applications in the portfolio by parsing the application source code and its related components into a repository database (i.e. 'Bottom Up'). Application mining tools, now marketed as APM tools, support this approach.

Hundreds of tools are available to support the 'Top Down' approach. This is not surprising, because the majority of the task is to collect the right information; the actual maintenance and storage of the information can be implemented relatively easily. For that reason, many organizations bypass using commercial tools and use Microsoft Excel to store inventory data. However, if the inventory becomes complex, Excel can become cumbersome to maintain. Automatically updating the data is not well supported by an Excel-based solution. Finally, such an Inventory solution is completely separate from the 'Bottom Up' understanding needs.

Business case for APM

According to Forrester Research, "For IT operating budgets, enterprises spend two-thirds or more on ongoing operations and maintenance."

It is common to find organizations that have multiple systems that perform the same function. Many reasons may exist for this duplication, including the former prominence of departmental computing, the application silos of the 1970s and 1980s, the proliferation of corporate mergers and acquisitions, and abortive attempts to adopt new tools. Regardless of the duplication, each application is separately maintained and periodically upgraded, and the redundancy increases complexity and cost.

With a large majority of expenses going to manage the existing IT applications, the transparency of the current inventory of applications and resource consumption is a primary goal of Application Portfolio Management. This enables firms to: 1) identify and eliminate partially and wholly redundant applications, 2) quantify the condition of applications in terms of stability, quality, and maintainability, 3) quantify the business value/impact of applications and the relative importance of each application to the business, 4) allocate resources according to the applications' condition and importance in the context of business priorities.
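
A minimal Python sketch of such a portfolio view follows; the inventory, scores, and the simple quadrant rule are all invented for illustration (real assessments use far richer criteria):

    # Invented application inventory, scored on condition and business value.
    inventory = [
        # name,            condition (0-10), value (0-10), annual cost ($k)
        ("Invoicing-Old",  3,                8,            450),
        ("Invoicing-New",  8,                8,            300),
        ("HR-Portal",      6,                4,            120),
        ("Legacy-Reports", 2,                2,            200),
    ]

    for name, condition, value, cost in inventory:
        if value >= 6 and condition <= 4:
            action = "invest / modernize"   # important to the business but fragile
        elif value >= 6:
            action = "maintain"             # important and in good shape
        elif condition >= 6:
            action = "tolerate"             # healthy but low business value
        else:
            action = "retirement candidate" # low value and poor condition
        print(f"{name:<15} ${cost}k/yr -> {action}")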

Transparency also aids strategic planning efforts and diffuses business / IT conflict, because when business leaders understand how applications support their key business functions, and the impact of outages and poor quality, conversations turn away from blaming IT for excessive costs and toward how to best spend precious resources to support corporate priorities.

Portfolio

Taking ideas from investment portfolio management, APM practitioners gather information about each application in use in a business or organization, including the cost to build and maintain the application, the business value produced, the quality of the application, and the expected lifespan. Using this information, the portfolio manager is able to provide detailed reports on the performance of the IT infrastructure in relation to the cost to own and the business value delivered.

Definition of an application

In application portfolio management, the definition of an application is a critical component. Many service providers help organizations create their own definition, due to the often contentious results that come from these definitions.

  • Application software — An executable software component or tightly coupled set of executable software components (one or more), deployed together, that deliver some or all of a series of steps needed to create, update, manage, calculate or display information for a specific business purpose. In order to be counted, each component must not be a member of another application.
  • Software component — An executable set of computer instructions contained in a single deployment container in such a way that it cannot be broken apart further. Examples include a Dynamic Link Library, an ASP web page, and a command line "EXE" application. A zip file may contain more than one software component because it is easy to break them down further (by unpacking the ZIP archive).

Software application and software component are technical terms used to describe a specific instance of the class of application software for the purposes of IT portfolio management. See application software for a definition for non-practitioners of IT Management or Enterprise Architecture.

Software application portfolio management requires a fairly detailed and specific definition of an application in order to create a catalog of applications installed in an organization.

The requirements of a definition for an application

The definition of an application has the following needs in the context of application portfolio management:

  • It must be simple for business team members to explain, understand, and apply.
  • It must make sense to development, operations, and project management in the IT groups.
  • It must be useful as an input to a complex function whose output is the overall cost of the portfolio. In other words, there are many factors that lead to the overall cost of an IT portfolio. The sheer number of applications is one of those factors. Therefore, the definition of an application must be useful in that calculation.
  • It must be useful for the members of the Enterprise Architecture team who are attempting to judge a project with respect to their objectives for portfolio optimization and simplification.
  • It must clearly define the boundaries of an application so that a person working on a measurable 'portfolio simplification' activity cannot simply redefine the boundaries of two existing applications in such a way as to call them a single application.

Many organizations will readdress the definition of an application within the context of their IT portfolio management and governance practices. For that reason, this definition should be considered as a working start.

Examples

The definition of an application can be difficult to convey clearly. In an IT organization, there might be subtle differences in the definition among teams and even within one IT team. It helps to illustrate the definition by providing examples. The section below offers some examples of things that are applications, things that are not applications, and things that comprise two or more applications.

Inclusions

By this definition, the following are applications:

  • A web service endpoint that presents three web services: InvoiceCreate, InvoiceSearch, and InvoiceDetailGet
  • A service-oriented business application (SOBA) that presents a user interface for creating invoices, and that turns around and calls the InvoiceCreate service. (Note that the service itself is a separate application.)
  • A mobile application that is published to an enterprise application store and thus deployed to employee-owned or operated portable devices enabling authenticated access to data and services.
  • A legacy system composed of a rich client, a server-based middle tier, and a database, all of which are tightly coupled. (e.g. changes in one are very likely to trigger changes in another).
  • A website publishing system that pulls data from a database and publishes it to an HTML format as a sub-site on a public URL.
  • A database that presents data to a Microsoft Excel workbook that queries the information for layout and calculations. This is interesting in that the database itself is an application unless the database is already included in another application (like a legacy system).
  • An Excel spreadsheet that contains a coherent set of reusable macros that deliver business value. The spreadsheet itself constitutes a deployment container for the application (like a TAR or CAB file).
  • A set of ASP or PHP web pages that work in conjunction with one another to deliver the experience and logic of a web application. It is entirely possible that a sub-site would qualify as a separate application under this definition if the coupling is loose.
  • A web service end point established for machine-to-machine communication (not for human interaction), but which can be rationally understood to represent one or more useful steps in a business process.

Exclusions

The following are not applications:

  • An HTML website.
  • A database that contains data but is not part of any series of steps to deliver business value using that data.
  • A web service that is structurally incapable of being part of a set of steps that provides value. For example, a web service that requires incoming data that breaks shared schema.
  • A standalone batch script that compares the contents of two databases by making calls to each and then sends e-mail to a monitoring alias if data anomalies are noticed. In this case, the batch script is very likely to be tightly coupled with at least one of the two databases, and therefore should be included in the application boundary that contains the database that it is most tightly coupled with.

Composites

The following are many applications:

  • A composite SOA application composed of a set of reusable services and a user interface that leverages those services. There are at least two applications here (the user interface and one or more service components). Each service is not counted as an application.
  • A legacy client-server app that writes to a database to store data, and an Excel spreadsheet that uses macros to read data from the database to present a report. There are two apps in this example. The database clearly belongs to the legacy app because it was developed with it, delivered with it, and is tightly coupled to it. This is true even if the legacy system uses the same stored procedures as the Excel spreadsheet.

Methods and measures for evaluating applications

There are many popular financial measures, and even more metrics of different (non-financial or complex) types that are used for evaluating applications or information systems.

Return on investment (ROI)

Return on Investment is one of the most popular performance measurement and evaluation metrics used in business analysis. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed decisions on software acquisitions and other projects. However, ROI is a metric designed for a certain purpose: to evaluate profitability or financial efficiency. It cannot reliably substitute for many other financial metrics in providing an overall economic picture of the information solution. Attempts to use ROI as the sole or principal metric for decision making regarding information systems cannot be productive; it may be appropriate in only a very limited number of cases and projects. ROI is a financial measure and does not provide information about the efficiency or effectiveness of the information systems.

Economic value added (EVA)

A measure of a company's financial performance based on the residual wealth calculated by deducting cost of capital from its operating profit (adjusted for taxes on a cash basis). (Also referred to as "economic profit".)

EVA = Net Operating Profit After Taxes (NOPAT) − (Capital × Cost of Capital)
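
In Python, the formula is a one-line computation; the figures below are invented for illustration:

    # EVA from the formula above; the figures are invented for illustration.
    nopat = 2_000_000         # net operating profit after taxes
    capital = 12_000_000      # invested capital
    cost_of_capital = 0.10    # assumed 10% cost of capital

    eva = nopat - capital * cost_of_capital
    print(f"EVA = ${eva:,.0f}")  # EVA = $800,000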

Total cost of ownership (TCO)

Total Cost of Ownership is a way to calculate what the application will cost over a defined period of time. In a TCO model, costs for hardware, software, and labor are captured and organized into the various application life cycle stages. An in-depth TCO model helps management understand the true cost of the application as it attempts to measure build, run/support, and indirect costs. Many large consulting firms have defined strategies for building a complete TCO model.

Total economic impact (TEI)

TEI was developed by Forrester Research Inc. Forrester claims TEI systematically looks at the potential effects of technology investments across four dimensions: cost — impact on IT; benefits — impact on business; flexibility — future options created by the investment; risk — uncertainty.

Business value of IT (ITBV)

The ITBV program was developed by Intel Corporation in 2002. The program uses a set of financial measurements of business value, called Business Value Dials (indicators). It is a multidimensional program, including a business component, and is relatively easy to implement.

Applied information economics (AIE)

AIE is a decision analysis method developed by Hubbard Decision Research. AIE claims to be "the first truly scientific and theoretically sound method" that builds on several methods from decision theory and risk analysis including the use of Monte Carlo methods. AIE is not used often because of its complexity.

Return on investment

From Wikipedia, the free encyclopedia

Return on investment (ROI) or return on costs (ROC) is the ratio between net income (over a period) and investment (costs resulting from an investment of some resources at a point in time). A high ROI means the investment's gains compare favourably to its cost. As a performance measure, ROI is used to evaluate the efficiency of an investment or to compare the efficiencies of several different investments. In economic terms, it is one way of relating profits to capital invested.

Purpose

In business, the purpose of the return on investment (ROI) metric is to measure, per period, rates of return on money invested in an economic entity in order to decide whether or not to undertake an investment. It is also used as an indicator to compare different investments within a portfolio. The investment with the largest ROI is usually prioritized, even though the spread of ROI over the time period of an investment should also be taken into account. Recently, the concept has also been applied to scientific funding agencies’ (e.g., National Science Foundation) investments in research of open source hardware and subsequent returns for direct digital replication.

ROI and related metrics provide a snapshot of profitability, adjusted for the size of the investment assets tied up in the enterprise. ROI is often compared to expected (or required) rates of return on money invested. ROI is not time-adjusted (unlike e.g. net present value): most textbooks describe it with a "Year 0" investment and two to three years' income.

Marketing decisions have an obvious potential connection to the numerator of ROI (profits), but these same decisions often influence assets’ usage and capital requirements (for example, receivables and inventories). Marketers should understand the position of their company and the returns expected. For a marketing ROI percentage to be credible, the effects of the marketing program must be isolated from other influences when reported to executives. In a survey of nearly 200 senior marketing managers, 77 percent responded that they found the "return on investment" metric very useful.

Return on investment may be extended to terms other than financial gain. For example, social return on investment (SROI) is a principles-based method for measuring extra-financial value (i.e., environmental and social value not currently reflected in conventional financial accounts) relative to resources invested. It can be used by any entity to evaluate the impact on stakeholders, identify ways to improve performance and enhance the performance of investments.

Limitations with ROI usage

As a decision tool, it is simple to understand. The simplicity of the formula allows users to freely choose variables, e.g., the length of the calculation time, whether overhead cost is included, or which factors are used to calculate income or cost components. Using ROI alone as an indicator for prioritizing investment projects can be misleading, since the ROI figure is usually not accompanied by an explanation of its make-up. ROI should be accompanied by the underlying data that forms the inputs; this is often presented in the format of a business case. For long-term investments, the need for a net present value adjustment is great, and without it the ROI is incorrect. Similar to discounted cash flow, a discounted ROI should be used instead. One limitation associated with the traditional ROI calculation is that it does not fully "capture the short-term or long-term importance, value, or risks associated with natural and social capital", because it does not account for the environmental, social, and governance performance of an organization. Without a metric for measuring the short- and long-term environmental, social, and governance performance of a firm, decision makers are planning for the future without considering the extent of the impacts associated with their decisions. One or more separate measures, aligned with relevant compliance functions, are frequently provided for this purpose.
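
Regarding the discounted ROI mentioned above, a minimal Python sketch discounts each year's cash flow to present value before forming the ratio; the cash flows and discount rate are invented:

    # Discounted ROI sketch: discount each year's return to present value first,
    # in the spirit of net present value. All figures are invented.

    def discounted_roi(investment, cash_flows, rate):
        """ROI computed on present values of the yearly cash flows."""
        pv = sum(cf / (1 + rate) ** year
                 for year, cf in enumerate(cash_flows, start=1))
        return (pv - investment) / investment

    flows = [400.0, 400.0, 400.0]   # returns over three years on a 1,000 investment
    print(f"plain ROI:      {(sum(flows) - 1000) / 1000:.1%}")   # ignores timing
    print(f"discounted ROI: {discounted_roi(1000, flows, 0.08):.1%}")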

Calculation

Return on investment can be calculated in different ways depending on the goal and application. The most comprehensive formula is:

Return on investment (%) = (current value of investment if not exited yet, or sold price of investment if exited, + income from investment − initial investment and other expenses) / (initial investment and other expenses) × 100%

Example with a share of stock: You bought 1 share of stock for US$100 and paid a buying commission of US$5. Then over a year you received US$4 of dividends and sold the share 1 year after you bought it for US$200 paying a US$5 selling commission.

Your ROI is the following:

ROI = (200 + 4 − 100 − 5 − 5) / (100 + 5 + 5) × 100% = 85.45%

As the duration of this investment is 1 year, this ROI is annual.
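
The comprehensive formula can also be written as a small Python function and checked against the share example above:

    # The comprehensive ROI formula as a function, checked against the share example.

    def roi_percent(exit_value, income, initial_investment, other_expenses):
        cost = initial_investment + other_expenses
        return (exit_value + income - cost) / cost * 100

    # 1 share bought for $100 plus a $5 commission, $4 of dividends,
    # sold for $200 less a $5 selling commission.
    print(f"{roi_percent(200, 4, 100, 5 + 5):.2f}%")  # 85.45%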

For a single-period review, divide the return (net profit) by the resources that were committed (investment):

return on investment = Net income / Investment
where:
Net income = gross profit − expenses
Investment = stock + market outstanding + claims

or

return on investment = (gain from investment − cost of investment) / cost of investment

or

return on investment = (revenue − cost of goods sold) / cost of goods sold

or

return on investment = (net program benefits / program costs) × 100

Property

Complications in calculating ROI can arise when real property is refinanced, or a second mortgage is taken out. Interest on a second, or refinanced, loan may increase, and loan fees may be charged, both of which can reduce the ROI, when the new numbers are used in the ROI equation. There may also be an increase in maintenance costs and property taxes, and an increase in utility rates if the owner of a residential rental or commercial property pays these expenses.

Complex calculations may also be required for property bought with an adjustable rate mortgage (ARM) with a variable escalating rate charged annually through the duration of the loan.

Marketing investment

Marketing not only influences net profits but can also affect investment levels. New plants and equipment, inventories, and accounts receivable are three of the main categories of investments that can be affected by marketing decisions.

Return on assets (RoA), return on net assets (RoNA), return on capital (RoC), and return on invested capital (RoIC), in particular, are similar measures with variations in how 'investment' is defined.

ROI is a popular metric for heads of marketing because it informs marketing budget allocation: it helps identify which marketing-mix activities should continue to be funded and which should be cut.

Return on integration (ROInt)

To address the lack of integration of the short and long term importance, value and risks associated with natural and social capital into the traditional ROI calculation, companies are valuing their environmental, social and governance (ESG) performance through an integrated management approach to reporting that expands ROI to Return on Integration. This allows companies to value their investments not just for their financial return but also the long term environmental and social return of their investments. By highlighting environmental, social and governance performance in reporting, decision makers have the opportunity to identify new areas for value creation that are not revealed through traditional financial reporting. The social cost of carbon is one value that can be incorporated into Return on Integration calculations to encompass the damage to society from greenhouse gas emissions that result from an investment. This is an integrated approach to reporting that supports Integrated Bottom Line (IBL) decision making, which takes triple bottom line (TBL) a step further and combines financial, environmental and social performance reporting into one balance sheet. This approach provides decision makers with the insight to identify opportunities for value creation that promote growth and change within an organization.

Contraband

From Wikipedia, the free encyclopedia

Contraband (from Medieval French contrebande, "smuggling") is any item that, by its nature, is illegal to possess or sell. It comprises goods that by their nature are considered too dangerous or offensive in the eyes of the legislator (termed contraband in se) and are forbidden.

Derivative contraband consists of goods that may normally be owned but are liable to be seized because they were used in committing an unlawful act or were obtained illegally, e.g. smuggled goods or stolen goods; knowingly participating in the trade of stolen goods is an offense in itself, called fencing.

Law of armed conflict

Contraband weapons seized by an Afghan and coalition security force during an offensive security operation in Nangarhar

In international law, contraband means goods that are ultimately destined for territory under the control of the enemy and may be susceptible to use in armed conflict. Traditionally, contraband is classified into two categories, absolute contraband and conditional contraband. The former category includes arms, munitions, and various materials, such as chemicals and certain types of machinery, that may be used directly to wage war or be converted into instruments of war.

Conditional contraband, formerly known as occasional contraband, consists of such materials as provisions and livestock feed. Cargo of that kind, presumably innocent in character, is subject to seizure if, in the opinion of the belligerent nation that seizes it, the supplies are destined for the armed forces of the enemy rather than for civilian use and consumption. In former agreements among nations, certain other commodities, including soap, paper, clocks, agricultural machinery, and jewelry, have been classified as non-contraband, but the distinctions have proved meaningless in practice.

Under the conditions of modern warfare, in which armed conflict has largely become a struggle involving the total populations of the contending powers, virtually all commodities are classified by belligerents as absolute contraband.

American Civil War

During the American Civil War, Confederate-owned slaves who sought refuge in Union military camps or who lived in territories that fell under Union control were declared "contraband of war". The policy was first articulated by General Benjamin F. Butler in 1861, in what came to be known as the "Fort Monroe Doctrine," established in Hampton, Virginia. By war's end, the Union had set up 100 contraband camps in the South, and the Roanoke Island Freedmen's Colony (1863–1867) was developed to be a self-sustaining colony. Many adult freedmen worked for wages for the Army at such camps, teachers were recruited from the North for their schools by the American Missionary Association, and thousands of freedmen enlisted from such camps in the United States Colored Troops to fight with the Union against the Confederacy.

Treaties

Numerous treaties defining contraband have been concluded among nations. In time of war, the nations involved have invariably violated the agreements, formulating their own definitions as the fortunes of war indicated. The Declaration of London, drafted at the London Naval Conference of 1908–1909 and made partly effective by most of the European maritime nations at the outbreak of World War I, established comprehensive classifications of absolute and conditional contraband. As the war developed, the lists of articles in each category were constantly revised by the various belligerents despite protests by neutral powers engaged in the carrying trade. By 1916, the list of conditional contraband included practically all waterborne cargo. Thereafter, for the duration of World War I, nearly all cargo in transit to an enemy nation was treated as contraband of war by the intercepting belligerent, regardless of the nature of the cargo. A similar policy was inaugurated by the belligerent powers early in World War II.

Neutral nations

Under international law, the citizens of neutral nations are entitled to trade, at their own risk, with any or all powers engaged in war. No duty to restrain contraband trade is imposed on the neutral governments, but no neutral government has the right to interfere on behalf of citizens whose property is seized by one belligerent if it is in transit to another. The penalty traditionally imposed by belligerents on neutral carriers engaged in commercial traffic with the enemy consists of confiscation of cargo. By the Declaration of London, it was extended to include condemnation of the carrying vessel if more than half the cargo was contraband. The right of warring nations to sink neutral ships transporting contraband is not recognized in international law, but the practice was initiated by Germany in World War I and was often resorted to by the Axis powers in World War II.

Capital accumulation

From Wikipedia, the free encyclopedia

Definition

The definition of capital accumulation is subject to controversy and ambiguities, because it could refer to:
  • A net addition to existing wealth
  • A redistribution of wealth.

Most often, capital accumulation involves both a net addition and a redistribution of wealth, which may raise the question of who really benefits from it most. If more wealth is produced than there was before, a society becomes richer; the total stock of wealth increases. But if some accumulate capital only at the expense of others, wealth is merely shifted from A to B. It is also possible that some accumulate capital much faster than others. When one person is enriched at the expense of another in circumstances that the law sees as unjust, it is called unjust enrichment. In principle, it is possible that a few people or organisations accumulate capital and grow richer even though the total stock of wealth of society decreases.

In economics and accounting, capital accumulation is often equated with investment of profit income or savings, especially in real capital goods. The concentration and centralisation of capital are two of the results of such accumulation (see below).

Capital accumulation refers ordinarily to:

  • Real investment in tangible means of production, such as acquisitions, research and development, etc., that can increase the capital flow.
  • Investment in financial assets represented on paper, yielding profit, interest, rent, royalties, fees or capital gains.
  • Investment in non-productive physical assets such as residential real estate or works of art that appreciate in value.

Accumulation of both non-financial and financial capital is usually needed for economic growth, since additional production usually requires additional funds to enlarge the scale of production. Smarter and more productive organization of production can also increase production without increased capital. Capital can be created without increased investment through inventions or improved organization that increase productivity, discoveries of new assets (oil, gold, minerals, etc.), the sale of property, and so on.

In modern macroeconomics and econometrics the term capital formation is often used in preference to "accumulation", though the United Nations Conference on Trade and Development (UNCTAD) refers nowadays to "accumulation". The term is occasionally used in national accounts.

Measurement of accumulation

Accumulation can be measured as the monetary value of investments, the amount of income that is reinvested, or as the change in the value of assets owned (the increase in the value of the capital stock). Using company balance sheets, tax data and direct surveys as a basis, government statisticians estimate total investments and assets for the purpose of national accounts, national balance of payments and flow of funds statistics. Usually, the reserve banks and the Treasury provide interpretations and analysis of this data. Standard indicators include capital formation, gross fixed capital formation, fixed capital, household asset wealth, and foreign direct investment.

Organisations such as the International Monetary Fund, UNCTAD, the World Bank Group, the OECD, and the Bank for International Settlements use national investment data to estimate world trends. The Bureau of Economic Analysis, Eurostat, and the Japan Statistical Office provide data on the US, Europe, and Japan respectively.

Other useful sources of investment information are business magazines such as Fortune, Forbes, The Economist, Business Week, etc., and various corporate "watchdog" organisations and non-governmental organization publications. A reputable scientific journal is the Review of Income and Wealth. In the case of the US, the "Analytical Perspectives" document (an annex to the yearly budget) provides useful wealth and capital estimates applying to the whole country.

Demand-led growth models

In macroeconomics, following the Harrod–Domar model, the savings ratio (s) and the capital coefficient (k) are regarded as critical factors for accumulation and growth, assuming that all saving is used to finance fixed investment. The rate of growth of the real stock of fixed capital (K) is

ΔK/K = s(Y/K) = s/k

where Y is the real national income. If the capital-output ratio or capital coefficient (k = K/Y) is constant, the rate of growth of K is equal to the rate of growth of Y. This is determined by s (the ratio of net fixed investment or saving to Y) and k.

A country might, for example, save and invest 12% of its national income; if the capital coefficient is 4:1 (i.e., $4 billion must be invested to increase the national income by $1 billion), the rate of growth of the national income would then be 3% annually. However, as Keynesian economics points out, savings do not automatically mean investment (liquid funds may be hoarded, for example). Investment may also not be investment in fixed capital (see above).
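
The arithmetic of this example in a few lines of Python:

    # Harrod-Domar arithmetic from the example above: growth = s / k.
    s = 0.12  # share of national income saved and invested
    k = 4.0   # capital coefficient: $4 of capital per $1 of annual income
    print(f"growth of national income: {s / k:.1%}")  # 3.0%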

Assuming that the turnover of total production capital invested remains constant, the proportion of total investment which just maintains the stock of total capital, rather than enlarging it, will typically increase as the total stock increases. The growth rate of incomes and net new investments must then also increase, in order to accelerate the growth of the capital stock. Simply put, the bigger capital grows, the more capital it takes to keep it growing and the more markets must expand.

The Harrodian model has a problem of unstable static equilibrium: if the growth rate is not equal to the Harrodian warranted rate, production will tend toward extreme points (infinite or zero production). The neo-Kaleckian models do not suffer from Harrodian instability but fail to deliver a convergence dynamic of effective capacity utilization to planned capacity utilization. In turn, the Sraffian Supermultiplier model yields a stable static equilibrium and a convergence to planned capacity utilization. The Sraffian Supermultiplier model diverges from the Harrodian model in that it takes investment as induced rather than autonomous. The autonomous components in this model are the autonomous non-capacity-creating expenditures, such as exports, credit-led consumption and public spending. The growth rate of these expenditures determines the long-run rate of capital accumulation and product growth.
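
The Harrodian knife edge can be illustrated with a toy simulation (our own construction with arbitrary parameters, not a model from the literature): any initial deviation of the growth rate from the warranted rate is amplified rather than corrected.

    s, k = 0.12, 4.0
    g_w = s / k                      # warranted growth rate: 0.03

    for g0 in (g_w, g_w + 0.001, g_w - 0.001):
        g = g0
        for _ in range(20):
            # Stylized accelerator: investment reacts to the gap between
            # actual and warranted growth, pushing g further away from g_w.
            g += 0.5 * (g - g_w)
        print(f"start {g0:+.4f} -> after 20 periods {g:+.3f}")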

Marxist concept

Marx borrowed the idea of capital accumulation or the concentration of capital from early socialist writers such as Charles Fourier, Louis Blanc, Victor Considerant, and Constantin Pecqueur. In Karl Marx's critique of political economy, capital accumulation is the operation whereby profits are reinvested into the economy, increasing the total quantity of capital. Capital was understood by Marx to be expanding value, that is, in other terms, as a sum of capital, usually expressed in money, that is transformed through human labor into a larger value and extracted as profits. Here, capital is defined essentially as economic or commercial asset value that is used by capitalists to obtain additional value (surplus-value). This requires property relations which enable objects of value to be appropriated and owned, and trading rights to be established.

Over-accumulation and crisis

The Marxist analysis of capital accumulation and the development of capitalism identifies systemic issues with the process that arise with expansion of the productive forces. A crisis of overaccumulation of capital occurs when the rate of profit is greater than the rate of new profitable investment outlets in the economy, arising from increasing productivity from a rising organic composition of capital (higher capital input to labor input ratio). This depresses the wage bill, leading to stagnant wages and high rates of unemployment for the working class while excess profits search for new profitable investment opportunities. Marx believed that this cyclical process would be the fundamental cause for the dissolution of capitalism and its replacement by socialism, which would operate according to a different economic dynamic.

In Marxist thought, socialism would succeed capitalism as the dominant mode of production when the accumulation of capital can no longer sustain itself due to falling rates of profit in real production relative to increasing productivity. A socialist economy would not base production on the accumulation of capital, instead basing production on the criteria of satisfying human needs and directly producing use-values. This concept is encapsulated in the principle of production for use.

Concentration and centralization

According to Marx, capital has a tendency toward concentration and centralization in the hands of the richest capitalists. Marx explains:

"It is concentration of capitals already formed, destruction of their individual independence, expropriation of capitalist by capitalist, transformation of many small into few large capitals.... Capital grows in one place to a huge mass in a single hand, because it has in another place been lost by many.... The battle of competition is fought by cheapening of commodities. The cheapness of commodities demands, ceteris paribus, on the productiveness of labour, and this again on the scale of production. Therefore, the larger capitals beat the smaller. It will further be remembered that, with the development of the capitalist mode of production, there is an increase in the minimum amount of individual capital necessary to carry on a business under its normal conditions. The smaller capitals, therefore, crowd into spheres of production which Modern Industry has only sporadically or incompletely got hold of. Here competition rages.... It always ends in the ruin of many small capitalists, whose capitals partly pass into the hands of their conquerors, partly vanish."

Rate of accumulation

In Marxian economics, the rate of accumulation is defined as (1) the value of the real net increase in the stock of capital in an accounting period, or (2) the proportion of realized surplus-value or profit-income which is reinvested rather than consumed. This rate can be expressed by means of various ratios between the original capital outlay, the realized turnover, surplus-value or profit, and reinvestment (see, e.g., the writings of the economist Michał Kalecki).
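
A minimal sketch of the two definitions, with all figures invented for illustration:

    capital_stock_begin = 10_000.0
    capital_stock_end = 10_600.0      # value of the capital stock at period end
    realized_profit = 1_000.0         # realized surplus-value in the period
    reinvested_profit = 600.0         # part of profit ploughed back into production

    net_increase = capital_stock_end - capital_stock_begin    # definition (1): 600.0
    reinvestment_share = reinvested_profit / realized_profit  # definition (2): 0.6
    print(net_increase, reinvestment_share)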

Other things being equal, the greater the amount of profit-income that is disbursed as personal earnings and used for consumption purposes, the lower the savings rate and the lower the rate of accumulation is likely to be. However, earnings spent on consumption can also stimulate market demand and higher investment. This is the cause of endless controversies in economic theory about "how much to spend, and how much to save".

In a boom period of capitalism, the growth of investments is cumulative, i.e. one investment leads to another, leading to a constantly expanding market, an expanding labor force, and an increase in the standard of living for the majority of the people.

In a stagnating, decadent capitalism, the accumulation process is increasingly oriented towards investment in military and security forces, real estate, financial speculation, and luxury consumption. In that case, income from value-adding production will decline in favour of interest, rent and tax income, with, as a corollary, an increase in the level of permanent unemployment.

As a rule, the larger the total sum of capital invested, the higher the return on investment will be. The more capital one owns, the more capital one can also borrow and reinvest at a higher rate of profit or interest. The inverse is also true, and this is one factor in the widening gap between the rich and the poor.

Ernest Mandel emphasized that the rhythm of capital accumulation and growth depended critically on (1) the division of a society's social product between necessary product and surplus product, and (2) the division of the surplus product between investment and consumption. In turn, this allocation pattern reflected the outcome of competition among capitalists, competition between capitalists and workers, and competition between workers. The pattern of capital accumulation can therefore never be explained simply by commercial factors; it also involves social factors and power relationships.

Circuit of capital accumulation from production

Strictly speaking, capital has accumulated only when realized profit income has been reinvested in capital assets. But the process of capital accumulation in production has, as suggested in the first volume of Marx's Das Kapital, at least seven distinct but linked moments (a toy numerical sketch follows the list):

  • The initial investment of capital (which could be borrowed capital) in means of production and labor power.
  • The command over surplus labour and its appropriation.
  • The valorisation (increase in value) of capital through production of new outputs.
  • The appropriation of the new output produced by employees, containing the added value.
  • The realisation of surplus-value through output sales.
  • The appropriation of realised surplus-value as (profit) income after deduction of costs.
  • The reinvestment of profit income in production.
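
A hypothetical single-turnover accounting of these moments (all figures invented; a sketch, not Marx's own arithmetic):

    constant_capital = 600.0     # means of production
    variable_capital = 400.0     # labor power (wages)
    outlay = constant_capital + variable_capital   # moment 1: initial investment

    output_value = 1250.0        # moments 2-4: production of new, valorised output
    revenue = output_value       # moment 5: realisation (assume all output is sold)
    profit = revenue - outlay    # moment 6: realised surplus-value = 250.0

    reinvested = 0.8 * profit    # moment 7: reinvestment of profit income
    next_outlay = outlay + reinvested   # capital has accumulated: 1200.0
    print(profit, next_outlay)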

None of these moments refers simply to an economic or commercial process. Rather, they assume the existence of legal, social, cultural and economic power conditions, without which the creation, distribution and circulation of new wealth could not occur. This becomes especially clear when an attempt is made to create a market where none exists, or where people refuse to trade.

In fact Marx argues that the original or primitive accumulation of capital often occurs through violence, plunder, slavery, robbery, extortion and theft. He argues that the capitalist mode of production requires that people be forced to work in value-adding production for someone else, and for this purpose, they must be cut off from sources of income other than selling their labor power.

Simple and expanded reproduction

In volume 2 of Das Kapital, Marx continues the story and shows that, with the aid of bank credit, capital in search of growth can more or less smoothly mutate from one form to another, alternately taking the form of money capital (liquid deposits, securities, etc.), commodity capital (tradeable products, real estate etc.), or production capital (means of production and labor power).

His discussion of the simple and expanded reproduction of the conditions of production offers a more sophisticated model of the parameters of the accumulation process as a whole. At simple reproduction, a sufficient amount is produced to sustain society at the given living standard; the stock of capital stays constant. At expanded reproduction, more product-value is produced than is necessary to sustain society at a given living standard (a surplus product); the additional product-value is available for investments which enlarge the scale and variety of production.

Bourgeois economists claim there is no economic law according to which capital is necessarily re-invested in the expansion of production; rather, reinvestment depends on anticipated profitability, market expectations and perceptions of investment risk. Such statements, however, only explain the subjective experiences of investors and ignore the objective realities which influence such opinions. As Marx states in Vol. 2, simple reproduction only exists if the variable and surplus capital realized by Dept. 1—producers of means of production—exactly equals the constant capital of Dept. 2, producers of articles of consumption (p. 524). Such equilibrium rests on various assumptions, such as a constant labor supply (no population growth). Accumulation does not imply a necessary change in the total magnitude of value produced, but can simply refer to a change in the composition of an industry (p. 514).
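
Marx's own numerical schema for simple reproduction (Vol. 2, ch. 20) makes this condition concrete; a minimal sketch checking it:

    # Dept. 1 produces means of production, Dept. 2 articles of consumption;
    # c = constant capital, v = variable capital, s = surplus-value.
    dept1 = {"c": 4000, "v": 1000, "s": 1000}
    dept2 = {"c": 2000, "v": 500, "s": 500}

    # Equilibrium condition for simple reproduction: I(v + s) = II(c).
    assert dept1["v"] + dept1["s"] == dept2["c"]   # 2000 == 2000

    total_product = sum(dept1.values()) + sum(dept2.values())
    print(total_product)   # 9000: society reproduces itself at constant scale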

Ernest Mandel introduced the additional concept of contracted economic reproduction, i.e. reduced accumulation, where businesses operating at a loss outnumber growing businesses, or economic reproduction on a decreasing scale, for example due to wars, natural disasters or devalorisation.

Balanced economic growth requires that different factors in the accumulation process expand in appropriate proportions. But markets themselves cannot spontaneously create that balance; in fact, what drives business activity is precisely the imbalance between supply and demand: inequality is the motor of growth. This partly explains why the worldwide pattern of economic growth is very uneven and unequal, even though markets have existed almost everywhere for a very long time. Some people argue that it also explains government regulation of market trade and protectionism.

Origins

According to Marx, capital accumulation has a double origin, namely in trade and in expropriation, whether of a legal or an illegal kind. The reason is that a stock of capital can be increased through a process of exchange or "trading up", but also by directly taking an asset or resource from someone else without compensation. David Harvey calls this accumulation by dispossession. Marx does not discuss gifts and grants as a source of capital accumulation, nor does he analyze taxation in detail (he could not, as he died before completing his major work, Das Kapital).

The continuation and progress of capital accumulation depends on the removal of obstacles to the expansion of trade, and this has historically often been a violent process. As markets expand, more and more new opportunities develop for accumulating capital, because more and more types of goods and services can be traded in. But capital accumulation may also confront resistance, when people refuse to sell, or refuse to buy (for example a strike by investors or workers, or consumer resistance).

Capital accumulation as social relation

"Accumulation of capital" sometimes also refers in Marxist writings to the reproduction of capitalist social relations (institutions) on a larger scale over time, i.e., the expansion of the size of the proletariat and of the wealth owned by the bourgeoisie.

This interpretation emphasizes that capital ownership, predicated on command over labor, is a social relation: the growth of capital implies the growth of the working class (a "law of accumulation"). In the first volume of Das Kapital Marx had illustrated this idea with reference to Edward Gibbon Wakefield's theory of colonisation:

...Wakefield discovered that in the Colonies, property in money, means of subsistence, machines, and other means of production, does not as yet stamp a man as a capitalist if there be wanting the correlative — the wage-worker, the other man who is compelled to sell himself of his own free-will. He discovered that capital is not a thing, but a social relation between persons, established by the instrumentality of things. Mr. Peel, he moans, took with him from England to Swan River, West Australia, means of subsistence and of production to the amount of £50,000. Mr. Peel had the foresight to bring with him, besides, 3,000 persons of the working-class, men, women, and children. Once arrived at his destination, “Mr. Peel was left without a servant to make his bed or fetch him water from the river.” Unhappy Mr. Peel, who provided for everything except the export of English modes of production to Swan River!

In the third volume of Das Kapital, Marx refers to the "fetishism of capital" reaching its highest point with interest-bearing capital, because now capital seems to grow of its own accord without anybody doing anything. In this case,

The relations of capital assume their most externalised and most fetish-like form in interest-bearing capital. We have here M – M′, money creating more money, self-expanding value, without the process that effectuates these two extremes. In merchant's capital, M – C – M′, there is at least the general form of the capitalistic movement, although it confines itself solely to the sphere of circulation, so that profit appears merely as profit derived from alienation; but it is at least seen to be the product of a social relation, not the product of a mere thing. (...) This is obliterated in M – M′, the form of interest-bearing capital. (...) The thing (money, commodity, value) is now capital even as a mere thing, and capital appears as a mere thing. The result of the entire process of reproduction appears as a property inherent in the thing itself. It depends on the owner of the money, i.e., of the commodity in its continually exchangeable form, whether he wants to spend it as money or loan it out as capital. In interest-bearing capital, therefore, this automatic fetish, self-expanding value, money generating money, are brought out in their pure state and in this form it no longer bears the birth-marks of its origin. The social relation is consummated in the relation of a thing, of money, to itself.—Instead of the actual transformation of money into capital, we see here only form without content.

Markets with social influence

Product recommendations and information about past purchases have been shown to influence consumers' choices significantly, whether for music, movies, books, technology, or other types of products. Social influence often induces a rich-get-richer phenomenon (the Matthew effect), in which popular products tend to become even more popular.
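
A toy Pólya-urn simulation (our own construction) shows how proportional choice alone produces the rich-get-richer pattern:

    import random

    random.seed(42)
    popularity = [1] * 10            # ten products start out equally popular
    for _ in range(10_000):
        # Each buyer picks a product with probability proportional
        # to its current popularity (preferential attachment).
        pick = random.choices(range(10), weights=popularity)[0]
        popularity[pick] += 1

    print(sorted(popularity, reverse=True))   # a handful of products dominate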
