
Tuesday, July 14, 2020

Basic income

From Wikipedia, the free encyclopedia
 
On 4 October 2013, Swiss activists from Generation Grundeinkommen organized a performance in Bern in which roughly 8 million coins, one coin representing one person out of Switzerland's population, were dumped on a public square. This was done in celebration of the successful collection of more than 125,000 signatures, forcing the government to hold a referendum in 2016 on whether or not to incorporate the concept of basic income in the federal constitution. The measure did not pass, with 76.9% voting against changing the federal constitution to support basic income.
 
Basic income, also called universal basic income (UBI), citizen's income, citizen's basic income, basic income guarantee, basic living stipend, guaranteed annual income, or universal demogrant, is a theoretical governmental public program for a periodic payment delivered to all citizens of a given population without a means test or work requirement.

Basic income can be implemented nationally, regionally, or locally. An unconditional income that is sufficient to meet a person's basic needs (i.e., at or above the poverty line) is sometimes called a full basic income; if it is less than that amount, it may be called a partial basic income. The expression 'negative income tax' (NIT) is used in roughly the same sense as basic income, sometimes with different connotations in respect of the mechanism, timing or conditionality of payments. Some welfare systems are sometimes regarded as steps on the way to a basic income, but because they have conditions attached, they are not basic incomes. One such system is a guaranteed minimum income system, which raises household incomes to a specified minimum. For example, Bolsa Família in Brazil is restricted to low-income families, and the children of recipients are obligated to attend school.

Several political discussions are related to the basic income debate, including those regarding automation, artificial intelligence (AI), and the future of work. A key issue in these debates is whether automation and AI will significantly reduce the number of available jobs and whether a basic income could help alleviate such problems.

History

The idea of a state-run basic income dates back to the early 16th century when Sir Thomas More's Utopia depicted a society in which every person receives a guaranteed income. In the late 18th century, English radical Thomas Spence and American revolutionary Thomas Paine both declared their support for a welfare system that guaranteed all citizens an assured basic income. Nineteenth-century debate on basic income was limited, but during the early part of the 20th century, a basic income called a "state bonus" was widely discussed. In 1946 the United Kingdom implemented unconditional family allowances for the second and subsequent children of every family. In the 1960s and 1970s, the United States and Canada conducted several experiments with negative income taxation, a related welfare system. From the 1980s and onward, the debate in Europe took off more broadly, and since then, it has expanded to many countries around the world. A few countries have implemented large-scale welfare systems that have some similarities to basic income, such as Bolsa Família in Brazil. From 2008 onward, several experiments with basic income and related systems have taken place. 

Governments can contribute to individual and household income maintenance strategies in three ways:
  1. The government can establish a minimum income guarantee and not allow income to fall below levels set for various household types, maintaining these levels by paying means-tested benefits.
  2. Social insurance can pay benefits in the case of sickness, unemployment, or old age, on the basis of contributions paid.
  3. Universal, unconditional payments can be made, such as the United Kingdom's Child Benefit for children.
A means-tested benefit that raises a household's income to a guaranteed minimum level is unlike a basic income in that income delivered under a system of guaranteed minimum income is reduced exactly as other sources of income increase, whereas income received from a basic income is constant regardless of other sources of income. Johannes Ludovicus Vives (1492–1540), for example, proposed that the municipal government should be responsible for securing a subsistence minimum to all its residents "not on the grounds of justice but for the sake of a more effective exercise of morally required charity." However, Vives also argued that to qualify for poor relief, the recipient must "deserve the help he or she gets by proving his or her willingness to work."
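
The distinction can be made concrete with a small numerical sketch. The Python snippet below uses invented amounts (a 1,000-per-month guarantee and an equal flat basic income) purely for illustration: under a guaranteed minimum income the benefit is withdrawn one-for-one as earnings rise, while a basic income stays constant whatever else is earned.

    GUARANTEE = 1000   # hypothetical monthly minimum income guarantee
    UBI_AMOUNT = 1000  # hypothetical flat monthly basic income

    def gmi_total(earnings):
        # GMI: the benefit tops earnings up to the guarantee, so it is
        # withdrawn one-for-one as other income rises
        benefit = max(0, GUARANTEE - earnings)
        return earnings + benefit

    def ubi_total(earnings):
        # UBI: the same flat payment regardless of other income
        return earnings + UBI_AMOUNT

    for earnings in (0, 400, 800, 1200):
        print(earnings, gmi_total(earnings), ubi_total(earnings))

Running the sketch shows total income flat at the guarantee under the GMI until earnings exceed it, whereas under the UBI every extra unit earned raises total income by a full unit.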

The first to develop the idea of social insurance was Marquis de Condorcet (1743–1794). After playing a prominent role in the French Revolution, he was imprisoned and sentenced to death. While in prison, he wrote the Esquisse d’un tableau historique des progrès de l’esprit humain ("Sketch for a Historical Picture of the Progress of the Human Mind"; published posthumously by his widow in 1795), the last chapter of which describes his vision of social insurance and how it could reduce inequality, insecurity, and poverty. Condorcet mentioned, very briefly, the idea of a benefit to all children old enough to start working by themselves and to start up a family of their own. He is not known to have said or written anything else on this proposal, but his close friend and fellow member of the French National Convention Thomas Paine (1737–1809) developed the idea much further, several years after Condorcet's death.

The first social movement for basic income developed around 1920 in the United Kingdom. Its proponents included:
  • Bertrand Russell (1872–1970) argued for a new social model combining the advantages of socialism and anarchism, with basic income as a vital component of that new society.
  • Dennis and Mabel Milner, a Quaker married couple of the Labour Party, published a short pamphlet entitled "Scheme for a State Bonus" (1918) that argued for the "introduction of an income paid unconditionally on a weekly basis to all citizens of the United Kingdom." They considered it a moral right for everyone to have the means to subsistence, and thus it should not be conditional on work or willingness to work.
  • C. H. Douglas was an engineer who became concerned that most British citizens could not afford to buy the goods that were produced, despite the rising productivity in British industry. His solution to this paradox was a new social system he called social credit, a combination of monetary reform and basic income.
In 1944 and 1945, the Beveridge Committee, led by the British economist William Beveridge, developed a proposal for a comprehensive new welfare system of social insurance, means-tested benefits, and unconditional allowances for children. Committee member Lady Rhys-Williams argued that the incomes for adults should be more like a basic income. She was also the first to develop the negative income tax model. Her son Brandon Rhys Williams proposed a basic income to a parliamentary committee in 1982, and soon after that in 1984, the Basic Income Research Group, now the Citizen's Basic Income Trust, began to conduct and disseminate research on basic income.


In the 1960s and 1970s, some welfare debates in the United States and Canada included discussions of basic income. Six pilot projects were also conducted with the negative income tax. President Richard Nixon proposed a massive overhaul of the federal welfare system, replacing many of the federal welfare programs with a negative income tax – a proposal favored by economist Milton Friedman. Nixon said, "The purpose of the negative income tax was to provide both a safety net for the poor and a financial incentive for welfare recipients to work." Congress eventually approved a guaranteed minimum income for the elderly and the disabled, not for all citizens.

In the late 1970s and the 1980s, basic income was more or less forgotten in the United States, but it started to gain some traction in Europe. The Basic Income European Network, later renamed the Basic Income Earth Network, was founded in 1986 and began to arrange international conferences every two years. From the 1980s, some people outside party politics and universities took an interest. In West Germany, groups of unemployed people advocated for the reform.

In 2002, a green paper was commissioned on the topic by the Government of Ireland.

Since 2010, basic income has again become an active topic in many countries. Basic income is currently discussed from a variety of perspectives, including in the context of ongoing automation and robotization, often with the argument that these trends mean less paid work in the future, which would create a need for a new welfare model. Several countries are planning for local or regional experiments with basic income or related welfare systems. For example, experiments in Canada, Finland, India, and Namibia have received international media attention. The policy was discussed by the Indian Ministry of Finance in an economic survey in 2017.

So far, no country has introduced an unconditional basic income as law. The first and only national referendum about basic income was held in Switzerland in 2016. The result was a rejection of the basic income proposal in a vote of 76.9% to 23.1%.

Perspectives in the basic income debate

Automation

The debates about basic income and automation are closely linked. U.S. presidential candidate and nonprofit founder Andrew Yang has stated that automation caused the loss of 4 million manufacturing jobs and advocated for a UBI of $1,000/month rather than worker retraining programs.

Some technologists believe that automation, among other things, is creating technological unemployment. Some in the "tech elite" (Marc Andreessen, Sam Altman, Peter Diamandis, and others) support the idea of a UBI.

Entrepreneur and 2020 Democratic candidate Andrew Yang has advocated for a basic income to counter job displacement through automation. His basic income policy, the Freedom Dividend, proposes to give every American adult $1,000 a month.

Bad behavior

Criticism of a basic income includes the argument that some recipients would spend a basic income on alcohol and other drugs. However, studies of the impact of direct cash transfer programs provide evidence to the contrary. A 2014 World Bank review of 30 scientific studies concludes: "Concerns about the use of cash transfers for alcohol and tobacco consumption are unfounded."

Basic income as a part of a post-capitalistic economic system


Harry Shutt proposed basic income and other measures to make most or all businesses collective rather than private. These measures would create a post-capitalist economic system.

Erik Olin Wright characterizes basic income as a project for reforming capitalism by empowering labor in relation to capital, granting workers greater bargaining power with employers in labor markets, which can gradually de-commodify labor by separating work from income. This would allow for an expansion in the scope of the social economy by granting citizens greater means to pursue non-work activities (such as art or other hobbies) that do not yield strong financial returns.

James Meade advocated for a social dividend scheme funded by publicly owned productive assets. Bertrand Russell argued for a basic income alongside public ownership as a means of shortening the average working day and achieving full employment.

Economists and sociologists have advocated for a form of basic income as a way to distribute economic profits of publicly owned enterprises to benefit the entire population, also referred to as a social dividend, where the basic income payment represents the return to each citizen on the capital owned by society. These systems would be directly financed from returns on publicly owned assets and are featured as major components of many models of market socialism.

Guy Standing has proposed financing a social dividend from a democratically-accountable sovereign wealth fund built up primarily from the proceeds of a tax on rentier income derived from ownership or control of assets—physical, financial, and intellectual.

During the COVID-19 pandemic of 2020, U.K. Chancellor of the Exchequer Rishi Sunak rejected calls for the implementation of a basic income, stating that the government was "not in favour of a universal basic income," while Business Secretary Alok Sharma said that UBI has been "tested in other countries and hasn't been taken forward".

Economic critique

In 2016, the IGM Economic Experts panel at the University of Chicago Booth School of Business was asked whether they agreed with the following statement: "Granting every American citizen over 21-years old a universal basic income of $13,000 a year — financed by eliminating all transfer programs (including Social Security, Medicare, Medicaid, housing subsidies, household welfare payments, and farm and corporate subsidies) — would be a better policy than the status quo." 58 percent of participants disagreed or strongly disagreed, 19 percent were uncertain, and 2 percent agreed. Those who disagreed cited the cost as well as a lack of optimization in the proposed structure. Daron Acemoglu, professor of economics at the Massachusetts Institute of Technology, expressed these doubts in the survey: "Current US status quo is horrible. A more efficient and generous social safety net is needed. But UBI is expensive and not generous enough." Eric Maskin has stated that "a minimum income makes sense, but not at the cost of eliminating Social Security and Medicare". Simeon Djankov, professor at the London School of Economics, argues that the costs of a generous system are prohibitive.

Another critique comes from the far-left. Douglas Rushkoff, a professor of Media Theory and Digital Economics at the City University of New York, suggests that universal basic income is another way that "obviates the need for people to consider true alternatives to living lives as passive consumers". He sees it as a sophisticated way for corporations to get richer at the expense of public money.

Some conservatives have contended that universal basic income could act as a form of compensation for fiat currency inflation.

Economic growth

Some proponents of UBI have argued that basic income can increase economic growth because it would sustain people while they invest in education to get higher-skilled and well-paid jobs. However, there is also a discussion of basic income within the degrowth movement, which argues against economic growth.

Employment

One argument against basic income is that if people have free and unconditional money, they would "get lazy" and not work as much. Critics argue that less work means less tax revenue and hence less money for the state and cities to fund public projects. The degree of any disincentive to employment because of basic income would likely depend on how generous the basic income was.

Some studies have looked at employment levels during the experiments with basic income and negative income tax and similar systems. In the negative income tax experiments in the United States in the 1970s, for example, there was a five percent decline in the hours worked. The work reduction was largest for second earners in two-earner households and weakest for the main earner. The reduction in hours was higher when the benefit was higher. Participants in these experiments knew that the experiment was limited in time.

In the Mincome experiment in rural Dauphin, Manitoba, also in the 1970s, there were also slight reductions in hours worked during the experiment. However, the only two groups who worked significantly less were new mothers and teenagers working to support their families. New mothers spent this time with their infant children, and working teenagers put significant additional time into their schooling. Under Mincome, "[t]he reduction of work effort was modest: about one per cent for men, three per cent for wives, and five per cent for unmarried women".

A recent study of the Alaska Permanent Fund Dividend (the largest-scale universal basic income program in the United States, running from 1976 to the present) seems to show this belief is untrue. The researchers, Damon Jones from the University of Chicago Harris School of Public Policy and Ioana Marinescu from the University of Pennsylvania School of Public Policy and Practice, show that although there is a small decrease in work by recipients for reasons like those in the Manitoba experiment, there has been a 17 percent increase in part-time jobs. The authors theorize that employment remained steady because the extra income, by letting people buy more, also increased demand for service jobs. This finding is consistent with the economic data of the time. No effect was seen on jobs in manufacturing, which produce exports. Essentially, the authors argue, the macroeconomic effects of higher spending supported overall employment. For example, someone who uses the dividend to help with car payments can cut back on hours worked as a cashier at a local grocery store. Because more people are spending more, the store must replace the worker who started working less. Meanwhile, the distribution of the dividend does not affect the international demand for oil and the jobs connected to it. Jones and Marinescu found instead that the larger scale of the program is what allows it to work without pushing people out of the workforce.

Another study that contradicted such a decline in work incentive was a pilot project implemented in 2008 and 2009 in the Namibian village of Omitara. The study found that economic activity actually increased, particularly through the launch of small businesses, and reinforcement of the local market by increasing individuals' buying power. However, the residents of Omitara were described as suffering "dehumanising levels of poverty" before the introduction of the pilot, and as such the project's relevance to potential implementations in developed economies is unknown.

James Meade states that a return to full employment can only be achieved if, among other things, workers offer their services at a low enough price that the required wage for unskilled labor would be too low to generate a socially desirable distribution of income. He therefore concludes that a "citizen's income" is necessary to achieve full employment without suffering stagnant or negative growth in wages.

If there is a disincentive to employment because of basic income, the magnitude of such a disincentive may depend on how generous the basic income was. Some campaigners in Switzerland have suggested a level that would be only just liveable, arguing that people would want to supplement it.

Freedom


Philippe van Parijs has argued that basic income at the highest sustainable level is needed to support real freedom, or the freedom to do whatever one "might want to do". By this, van Parijs means that all people should be free to use the resources of the Earth and the "external assets" people make out of them to do whatever they want. Money is like an access ticket to use those resources, and so to make people equally free to do what they want with world assets, the government should give each individual as many such access tickets as possible—that is, the highest sustainable basic income.

Karl Widerquist and others have proposed a theory of freedom in which basic income is needed to protect the power to refuse work; in other words, if the resources necessary to an individual's survival are controlled by another group, that individual has no reasonable choice other than to do whatever the resource-controlling group demands. Before the establishment of governments and landlords, individuals had direct access to the resources they needed to survive. Today, resources necessary for the production of food, shelter and clothing have been privatized in such a way that some have gotten a share and others have not.

Therefore, the argument is that the owners of those resources owe compensation back to non-owners, sufficient at least for them to purchase the resources or goods necessary to sustain their basic needs. This redistribution must be unconditional because people can consider themselves free only if they are not forced to spend all their time doing the bidding of others simply to provide basic necessities to themselves and their families. Under this argument, personal, political and religious freedom are worth little without the power to say no. Basic income therefore may provide economic freedom which, combined with political freedom, freedom of belief and personal freedom, establish each individual's status as a free person.

Gender equality

The Scottish economist Ailsa McKay has argued that basic income is a way to promote gender equality. She noted in 2001 that "social policy reform should take account of all gender inequalities and not just those relating to the traditional labor market" and that "the citizens' basic income model can be a tool for promoting gender-neutral social citizenship rights".

Women perform the majority of unpaid care work around the world. In fact, if unpaid care work performed by women were compensated at even just minimum wage around the world, this would boost measured global economic output by 12 trillion USD, which is 11% of global economic output and is equivalent to the annual economic output of China, according to a study by the McKinsey Global Institute. Thus basic income would be a way to compensate women for the essential care services they already perform and to raise the standard of living for women who devote a substantial portion of their time to unpaid care work.

Some feminists support basic income as a means of guaranteeing minimum financial independence for women. However, others oppose basic income as something that might discourage women from participation in the workforce, reinforcing traditional gender roles of women belonging at home and men at work.

Poverty reduction

Advocates of basic income often argue that it has the potential to reduce or even eradicate poverty.

According to a randomized controlled study in the Rarieda District of Kenya run by the Abdul Latif Jameel Poverty Action Lab at the Massachusetts Institute of Technology (MIT) on the Give Directly program, the impact of an unconditional cash transfer was that for every $1,000 disbursed, there was a $270 increase in earnings, a $430 increase in assets, and a $330 increase in nutrition spending, with no effect on alcohol or tobacco spending.

Milton Friedman, a renowned economist, supported UBI by reasoning that it would help to reduce poverty. He said: "The virtue of [a negative income tax] is precisely that it treats everyone the same way. [...] [T]here’s none of this unfortunate discrimination among people."

Martin Luther King Jr. believed that a basic income was a necessity that would help to reduce poverty, regardless of race, religion or social class. In King's last book before his assassination, Where Do We Go from Here: Chaos or Community?, he said: "I am now convinced that the simplest approach will prove to be the most effective — the solution to poverty is to abolish it directly by a now widely discussed measure: the guaranteed income."

Reduction of medical costs

The Canadian Medical Association passed a motion in 2015 in clear support of basic income and for basic income trials in Canada.

British journalist Paul Mason has stated that universal basic income would probably reduce the high medical costs associated with diseases of poverty. According to Mason, stress diseases like high blood pressure, type II diabetes and the like would probably become less common.

Transparency and administrative efficiency

According to Guy Standing's theories, basic income may be a much simpler and more transparent welfare system than welfare states currently use. Standing suggests that instead of separate welfare programs (including unemployment insurance, child support, pensions, disability, housing support), social support systems could be combined into one income, or could be one basic payment that welfare programs could add to. This may require less paperwork and bureaucracy to check eligibility. The Basic Income Earth Network claims that basic income costs less than current means-tested social welfare benefits, and has proposed an implementation that it claims is financially viable.

A real-world example of how basic income is being implemented to save money can be seen in a program that is being conducted in the Netherlands. The city councillor for the city of Nijmegen, Lisa Westerveld, said in an interview: "In Nijmegen, we get £88m to give to people on welfare, but it costs £15m a year for the civil servants running the bureaucracy of the current system". Her view is shared by Dutch historian and author Rutger Bregman, who believes the Netherlands' welfare system is flawed, and by economist Loek Groot, who believes the country's welfare system wastes too much money. Outcomes of the Dutch program will be analysed by Groot, a professor at the University of Utrecht who hopes to learn if a guaranteed income might be a more effective approach. However, other proponents argue for adding basic income to existing welfare grants, rather than replacing them.

Support for basic income has been expressed by several people associated with conservative political views. While adherents of such views generally favor minimization or abolition of the public provision of welfare services, some have cited basic income as a viable strategy to reduce the amount of bureaucratic administration that is prevalent in many contemporary welfare systems.

Wage slavery and alienation

Frances Fox Piven argues that an income guarantee would benefit all workers by liberating them from the anxiety that results from the "tyranny of wage slavery" and provide opportunities for people to pursue different occupations and develop untapped potentials for creativity. André Gorz saw basic income as a necessary adaptation to the increasing automation of work, yet basic income also enables workers to overcome alienation in work and life and to increase their amount of leisure time.

These arguments imply that a universal basic income would give people enough freedom to pursue work that is satisfactory and interesting even if that work does not provide enough resources to sustain their everyday living. One example is that of Nelle Harper Lee, who lived as a single woman in New York City in the 1950s, writing in her free time and supporting herself by working part-time as an airline clerk. She had written several long stories, but achieved no success of note. One Christmas in the late fifties, a generous friend gave her a year's wages as a gift with the note: "You have one year off from your job to write whatever you please. Merry Christmas". A year later, Lee had produced a draft of To Kill a Mockingbird, a novel that subsequently won the Pulitzer Prize. Most proponents of UBI argue that the net creative output from even a small percentage of basic income recipients would be a significant contributor to human productivity, one that might be lost if these people are not given the opportunity to pursue work that is interesting to them.

Welfare trap

The welfare trap, or poverty trap, is a speculated problem with means-tested welfare. Recipients of means-tested welfare may be implicitly encouraged to remain on welfare due to economic penalties for transitioning off welfare. These penalties include loss of welfare payments and possibly higher tax rates. Opponents claim that this creates a harsh marginal tax for those rising out of poverty. A 2013 Cato Institute study claimed that workers could accumulate more wealth from the welfare system than they could from a minimum wage job in at least nine European countries. In three of them (Austria, Croatia and Denmark), the marginal tax rate was nearly 100%.
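
A rough way to see the trap is to compute the effective marginal tax rate a recipient faces when a means-tested benefit is withdrawn as earnings rise. The Python sketch below uses made-up figures (a 1,000 guarantee withdrawn one-for-one plus a 20% income tax) and does not model any particular country's system.

    def net_income(gross, guarantee=1000.0, withdrawal_rate=1.0, tax_rate=0.2):
        # earnings are taxed, and the benefit is reduced by withdrawal_rate
        # for every unit earned (never below zero)
        benefit = max(0.0, guarantee - withdrawal_rate * gross)
        return gross * (1 - tax_rate) + benefit

    # effective marginal tax rate on an extra 100 earned at a gross income of 500
    extra_kept = net_income(600) - net_income(500)
    emtr = 1 - extra_kept / 100
    print(f"effective marginal tax rate: {emtr:.0%}")  # 120%: working more loses money

With full benefit withdrawal, earning an extra 100 actually lowers net income, an effective marginal rate above 100%, which is the kind of near-total clawback the Cato study describes.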

Problems associated with the welfare trap may be aggravated by workplace automation: this is discussed in the article on wage subsidy.

Proponents of universal basic income claim that it could eliminate welfare traps by removing conditions to receive such an income, but large-scale experiments have not yet produced clear results.

Pilot programs and experiments

Omitara, one of the two poor villages in Namibia where a local basic income was tested in 2008–2009

Since the 1960s, but in particular since 2010, there have been a number of basic income pilot programs. Some examples include:
  • Experiments with negative income tax in the United States and Canada in the 1960s and 1970s.
  • The province of Manitoba, Canada experimented with Mincome, a basic guaranteed income, in the 1970s. In the town of Dauphin, Manitoba, labor only decreased by 13%, much less than expected.
  • The basic income grant in Namibia, launched in 2008 and ended in 2009.
  • An independent pilot implemented in São Paulo, Brazil launched in 2009.
  • Basic income trials in several villages in India, whose government has proposed a guaranteed basic income for all citizens.
  • The GiveDirectly experiment in Nairobi, Kenya, the longest-running basic income pilot as of 2017.
  • An experiment in the city of Utrecht in the Netherlands, launched in early 2017, that is testing different rates of aid.
  • A three-year basic income pilot that the Ontario provincial government, Canada, launched in the cities of Hamilton, Thunder Bay and Lindsay in July 2017. Although called basic income, it was only made available to those with a low income and funding would be removed if they obtained employment, making it more related to the current welfare system than true basic income. The pilot project was canceled on 31 July 2018 by the newly elected Progressive Conservative government under Ontario Premier Doug Ford.
  • A two-year pilot the Finnish government began in January 2017, involving 2,000 subjects. In April 2018, the Finnish government rejected a request from Kela (Finland's social security agency) for funds to extend and expand the program.
  • A project called Eight in a village in Fort Portal, Uganda, that a nonprofit organization launched in January 2017, which provides income for 56 adults and 88 children through mobile money.
  • Social Income started paying out basic incomes in the form of mobile money in 2020 to people in need in Sierra Leone. The international initiative is financed by contributions from people in the Global North, who donate 1% of their monthly paychecks.
  • In a study in several Indian villages, basic income in the region raised the education rate of young people by 25%.

Examples of payments with similarities

Alaska Permanent Fund

The Permanent Fund of Alaska in the United States provides a kind of yearly basic income based on the oil and gas revenues of the state to nearly all state residents. However, the payment is not high enough to cover basic expenses (it has never exceeded $2,100) and is not a fixed, guaranteed amount. For these reasons, it is not considered a basic income.

Quasi-UBI programs

  • Pension: A payment which in some countries is guaranteed to all citizens above a certain age. The difference from true basic income is that it is restricted to people over a certain age.
  • Child benefit: A program similar to pensions but restricted to parents of children, usually allocated based on the number of children.
  • Conditional cash transfer: A regular payment given to families, but only to the poor. It is usually dependent on basic conditions such as sending their children to school or having them vaccinated. Programs include Bolsa Família in Brazil, Reddito di Cittadinanza in Italy and Programa Prospera in Mexico.
  • Guaranteed minimum income differs from a basic income in that it is restricted to those in search of work and may carry other restrictions, such as savings being below a certain level. Example programs are unemployment benefits in the U.K. and the revenu de solidarité active in France.

Examples

  • Bolsa Família is a large social welfare program in Brazil that provides money to many low-income families in the country. The system is related to basic income, but has more conditions, like asking the recipients to keep their children in school until graduation. Brazilian Senator Eduardo Suplicy championed a law that ultimately passed in 2004 that declared Bolsa Família the first step towards a national basic income. However, the program has not yet been expanded to a full basic income.
  • The Rythu Bandhu scheme is a welfare scheme started in the state of Telangana, India, in May 2018, aimed at helping farmers. Each farm owner receives 4,000 INR per acre twice a year, for the rabi and kharif harvests. A budget allocation of 120 billion INR (1.6 billion USD as of June 2020) was made in the 2018–2019 state budget. The scheme offers financial help of 8,000 INR (105 USD as of June 2020) per acre per year to each farmer (for two crops), places no cap on the amount disbursed or on the number of acres of land owned, and does not discriminate between rich and poor landowners. Preliminary results in 2018 were promising for getting farmers the funding they need to invest in farming, such as procuring fertilizers, seeds, pesticides, and other materials. The first phase of the survey concluded that 85% of farmers received checks for amounts ranging from 1,000 INR (13 USD as of June 2020) to 20,000 INR (262 USD as of June 2020) for farmland comprising less than an acre to about five acres, and about 10% of farmers received checks for amounts between 20,000 INR and 50,000 INR (654 USD as of June 2020). Only 1% of farmers got amounts above 50,000 INR. The spending pattern revealed that 28.5% of farmers opted to buy seed, about 18% spent the money on fertilizer, 15.4% on new agricultural assets including farm equipment, and 8.6% on pesticides. Only 4.4% of beneficiaries said they used it for household consumption, and an insignificant percentage used it for repayment of loans. The scheme received a high satisfaction rate of 92% from farmers, since other forms of capital investment such as welfare or loans had many strings attached to them and would not reach farmers before the cropping season started. Other states and countries are following the development of the program to see if they can implement it for their farmers. It is a new type of program that is considered an embryonic UBI or quasi-UBI, intended to replace traditional systems of agricultural support.
  • Citizen Capitalism is a supplemental income program proposed by legal scholar Lynn Stout and her co-authors Tamara Belinfanti and Sergio Gramitto of the book Citizen Capitalism: How A Universal Fund Can Provide Influence and Income to All, published in 2019. In the book, the authors propose building a not-for-profit universal fund composed of shares donated by corporations and philanthropists in which every American would receive one share. These shares could not be sold, donated, or borrowed against. However, each "citizen shareholder" would receive an even portion of the net dividends paid out by shares in the fund, therefore contributing to the amelioration of income inequality. Each shareholder would also receive additional influence in the form of a vote (corresponding to their shares in the fund), potentially providing for a significantly expanded degree of citizen engagement in the role of public corporations in American society.

Basic income in cryptocurrencies and as part of social media apps

Nimses is a concept that offers universal basic income to every member of its system. The idea of Nimses consists of a time-based currency called Nim (1 nim = 1 minute of life). Every person in Nimses receives nims that can be spent on different goods and services. This concept was initially adopted in Eastern Europe.

Electroneum is a cryptocurrency project which uses a mobile application to pay users. The first KYC/AML compliant cryptocurrency, Electroneum enables users to mine using their mobile phone through a simulated mining system. The system pays up to $3.00 per month to its users, with the goal of providing the world's unbanked population with financial freedom. The cryptocurrency can currently be used to purchase mobile top-ups from the South African telecommunications company The Unlimited as well as to transact with any business that has integrated the Electroneum API, or directly between individuals.

In response to COVID-19

Democratic politicians Andrew Yang, Alexandria Ocasio-Cortez and Tulsi Gabbard were early advocates for universal basic income in response to the COVID-19 pandemic. On 17 March, the Trump administration indicated that some payment would be given to non-millionaires as part of a stimulus package. This amounts to $1,200 per adult and $500 per child in the CARES Act, which passed unanimously in the Senate and House and was signed into law by President Trump in late March.

Public opinion

Support for a universal basic income varies widely across Europe, as shown by a recent edition of the European Social Survey. A high share of the population tends to support the scheme in southern and eastern European Union countries, while enthusiasm tends to be lower in western European countries such as France and Germany, and even lower in Scandinavian countries such as Norway and Sweden. Individuals who face greater economic insecurity because of low income and unemployment tend to be more supportive of a basic income. Overall, support tends to be on average higher in countries where existing unemployment benefits are not generous or the receipt of benefits is conditioned on certain job search behavior.  An April 2020 public poll by YouGov found that the majority of the public in the United Kingdom supported a universal basic income in response to the 2020 COVID-19 pandemic, with only 24% unsupportive.

A poll conducted by the University of Chicago in March 2020 indicated that 51% of Americans aged 18–36 support a monthly basic income of $1,000. Support for universal basic income spans the political spectrum, with conservatives, progressives, and libertarians all having camps both for and against basic income.

Petitions, polls and referendums

  • 2008: an official petition for basic income was launched in Germany by Susanne Wiest. The petition was accepted, and Susanne Wiest was invited for a hearing at the German parliament's Commission of Petitions. After the hearing, the petition was closed as "unrealizable."
  • 2013–2014: a European Citizens' Initiative collected 280,000 signatures demanding that the European Commission study the concept of an unconditional basic income.
  • 2015: a citizen's initiative in Spain received 185,000 signatures, short of the required number to mandate that the Spanish parliament discuss the proposal.
  • 2016: the world's first universal basic income referendum in Switzerland on 5 June 2016 was rejected with a 76.9% majority. Also in 2016, a poll showed that 58% of the EU's population is aware of basic income, and 65% would vote in favor of the idea.
  • 2017: Politico/Morning Consult asked 1,994 Americans about their opinions on several political issues including national basic income; 43% either "strongly supported" or "somewhat supported" the idea.
  • 2019: in a September poll conducted by The Hill and HarrisX, 49% of U.S. registered voters support basic income, up 6% from a similar survey conducted six months earlier.
  • 2019: In November, an Austrian initiative received approximately 70,000 signatures but failed to reach the 100,000 signatures needed for a parliamentary discussion. The initiative was started by Peter Hofer. His proposal suggested a basic income of 1200 EUR for every Austrian citizen.
  • 2020: A public poll by YouGov in 2020 has found that the majority of people in the United Kingdom support a universal basic income, with only 24% unsupportive. In March 2020, over 170 MPs and Lords from all political parties signed a letter calling on the government to introduce a basic income during the coronavirus pandemic.

Prominent advocates

Prominent contemporary advocates include Economics Nobel Prize winners Peter Diamond and Christopher Pissarides, tech investor and engineer Elon Musk, political philosopher Philippe Van Parijs, former finance minister of Greece Yanis Varoufakis, Facebook founder Mark Zuckerberg, eBay founder Pierre Omidyar, and entrepreneur and nonprofit founder Andrew Yang, who ran for the Democratic nomination for the 2020 United States presidential election on a platform of instituting a $1,000-a-month universal basic income.

On 13 March 2020, Democratic representatives Ro Khanna and Tim Ryan introduced legislation to provide payments to low-income citizens during the COVID-19 crisis via an earned income tax credit. On 16 March, Republican senators Mitt Romney and Tom Cotton stated their support for a $1,000 basic income, the former saying it should be a one-time payment to help with short-term costs. Senator Bernie Sanders has called for $2,000 in monthly basic income to help "every person in the United States, including the undocumented, the homeless, the unbanked, and young adults excluded from the CARES Act." House Speaker Nancy Pelosi has also suggested that a basic income could be "worthy of attention."

On 12 April 2020, Pope Francis called for the introduction of basic income in response to the coronavirus pandemic.

Observational error

From Wikipedia, the free encyclopedia

Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not a "mistake". Variability is an inherent part of the results of measurements and of the measurement process.

Measurement errors can be divided into two components: random error and systematic error.

Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measurements of a constant attribute or quantity are taken. Systematic errors are errors that are not determined by chance but are introduced by an inaccuracy (involving either the observation or measurement process) inherent to the system. Systematic error may also refer to an error with a non-zero mean, the effect of which is not reduced when observations are averaged.

Science and experiments

When either randomness or uncertainty modeled by probability theory is attributed to such errors, they are "errors" in the sense in which that term is used in statistics; see errors and residuals in statistics.

Every time we repeat a measurement with a sensitive instrument, we obtain slightly different results. The common statistical model used is that the error has two additive parts:
  1. Systematic error which always occurs, with the same value, when we use the instrument in the same way and in the same case
  2. Random error which may vary from one observation to another.
Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures. Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error. 

Random error (or random variation) is due to factors which cannot or will not be controlled. One possible reason to forgo controlling for these random errors is that it may be too expensive to control them each time the experiment is conducted or the measurements are made. Other reasons may be that whatever we are trying to measure is changing in time, or is fundamentally probabilistic (as is the case in quantum mechanics). Random error often occurs when instruments are pushed to the extremes of their operating limits. For example, it is common for digital balances to exhibit random error in their least significant digit. Three measurements of a single object might read something like 0.9111 g, 0.9110 g, and 0.9112 g.
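
The additive model described above can be illustrated with a short simulation. The numbers below (true value, bias and spread) are invented to roughly match the balance example; the only point is that averaging repeated readings shrinks the random component while leaving the systematic one untouched.

    import random

    TRUE_VALUE = 0.9111   # e.g. a mass in grams
    SYSTEMATIC = 0.0005   # constant bias from an imperfectly calibrated balance
    RANDOM_SD = 0.0001    # spread of the random fluctuations

    def measure():
        return TRUE_VALUE + SYSTEMATIC + random.gauss(0.0, RANDOM_SD)

    single = measure()
    mean_of_100 = sum(measure() for _ in range(100)) / 100

    print(f"single reading: {single:.4f}")
    print(f"mean of 100   : {mean_of_100:.4f}")
    # the mean converges on TRUE_VALUE + SYSTEMATIC, not on TRUE_VALUE:
    # averaging removes most of the random error but none of the bias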

Random errors versus systematic errors

Measurement errors can be divided into two components: random error and systematic error.

Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter's interpretation of the instrumental reading. Random errors show up as different results for ostensibly the same repeated measurement. They can be estimated by comparing multiple measurements, and reduced by averaging multiple measurements. 

Systematic error is predictable and typically constant or proportional to the true value. If the cause of the systematic error can be identified, then it usually can be eliminated. Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods of observation, or interference of the environment with the measurement process, and always affect the results of an experiment in a predictable direction. Incorrect zeroing of an instrument leading to a zero error is an example of systematic error in instrumentation. 

The Performance Test Standard PTC 19.1-2005 “Test Uncertainty”, published by the American Society of Mechanical Engineers (ASME), discusses systematic and random errors in considerable detail. In fact, it conceptualizes its basic uncertainty categories in these terms. Random error can be caused by unpredictable fluctuations in the readings of a measurement apparatus, or in the experimenter's interpretation of the instrumental reading; these fluctuations may be in part due to interference of the environment with the measurement process. The concept of random error is closely related to the concept of precision. The higher the precision of a measurement instrument, the smaller the variability (standard deviation) of the fluctuations in its readings.

Sources of systematic error

Imperfect calibration

Sources of systematic error include imperfect calibration of measurement instruments (zero error), changes in the environment which interfere with the measurement process, and imperfect methods of observation; the resulting error can be either a zero error or a percentage error. Consider an experimenter taking a reading of the time period of a pendulum swinging past a fiducial marker: if their stop-watch or timer starts with 1 second on the clock, then all of their results will be off by 1 second (zero error). If the experimenter repeats this experiment twenty times (starting at 1 second each time), then there will be a percentage error in the calculated average of their results; the final result will be slightly larger than the true period.

Distance measured by radar will be systematically overestimated if the slight slowing down of the waves in air is not accounted for. Incorrect zeroing of an instrument leading to a zero error is an example of systematic error in instrumentation.

Systematic errors may also be present in the result of an estimate based upon a mathematical model or physical law. For instance, the estimated oscillation frequency of a pendulum will be systematically in error if slight movement of the support is not accounted for.

Quantity

Systematic errors can be either constant, or related (e.g. proportional or a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of a ruler can be affected by environmental temperature). When it is constant, it is simply due to incorrect zeroing of the instrument. When it is not constant, it can change its sign. For instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, and the actual temperature is 200°, 0°, or −100°, the measured temperature will be 204° (systematic error = +4°), 0° (null systematic error) or −102° (systematic error = −2°), respectively. Thus the temperature will be overestimated when it is above zero and underestimated when it is below zero.
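
The thermometer example can be restated as a one-line model of a proportional systematic error; the 2% figure and the three test temperatures below are taken directly from the text.

    def measured_temperature(true_temp, proportional_error=0.02):
        # the reading is the true value plus 2% of the true value
        return true_temp * (1 + proportional_error)

    for t in (200.0, 0.0, -100.0):
        m = measured_temperature(t)
        print(f"true {t:+6.0f}  measured {m:+7.1f}  systematic error {m - t:+5.1f}")
    # prints errors of +4, 0 and -2, matching the figures above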

Drift

Systematic errors which change during an experiment (drift) are easier to detect. Measurements indicate trends with time rather than varying randomly about a mean. Drift is evident if a measurement of a constant quantity is repeated several times and the measurements drift one way during the experiment. If each measurement is higher than the previous one, as may occur if an instrument becomes warmer during the experiment, then the measured quantity is variable, and it is possible to detect a drift by checking the zero reading during the experiment as well as at the start of the experiment (indeed, the zero reading is a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If this cannot be eliminated, potentially by resetting the instrument immediately before the experiment, then it needs to be allowed for by subtracting its (possibly time-varying) value from the readings, and by taking it into account while assessing the accuracy of the measurement.

If no pattern in a series of repeated measurements is evident, the presence of fixed systematic errors can only be found if the measurements are checked, either by measuring a known quantity or by comparing the readings with readings made using a different apparatus known to be more accurate. For example, if the timing of a pendulum is measured several times using an accurate stopwatch, the readings will be randomly distributed about the mean. A systematic error is present if the stopwatch is checked against the 'speaking clock' of the telephone system and found to be running slow or fast. Clearly, the pendulum timings need to be corrected according to how fast or slow the stopwatch was found to be running.

Measuring instruments such as ammeters and voltmeters need to be checked periodically against known standards.

Systematic errors can also be detected by measuring already known quantities. For example, a spectrometer fitted with a diffraction grating may be checked by using it to measure the wavelength of the D-lines of the sodium electromagnetic spectrum, which are at 589.0 nm and 589.6 nm. The measurements may be used to determine the number of lines per millimetre of the diffraction grating, which can then be used to measure the wavelength of any other spectral line.
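
A minimal sketch of that calibration step uses the standard grating equation d sin(theta) = m lambda. The sodium D-line wavelength is the known standard; the measured angles below are invented purely for illustration.

    import math

    WAVELENGTH_D2 = 589.0e-9           # known sodium D2 wavelength in metres
    order = 1                          # first-order diffraction
    theta = math.radians(20.7)         # hypothetical measured diffraction angle

    d = order * WAVELENGTH_D2 / math.sin(theta)  # grating spacing in metres
    lines_per_mm = 1e-3 / d
    print(f"grating: {lines_per_mm:.0f} lines per mm")

    # once d is known, the same equation gives the wavelength of any other line
    theta_unknown = math.radians(22.0)  # hypothetical angle for an unidentified line
    print(f"unknown line: {d * math.sin(theta_unknown) / order * 1e9:.1f} nm")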

Constant systematic errors are very difficult to deal with as their effects are only observable if they can be removed. Such errors cannot be removed by repeating measurements or averaging large numbers of results. A common method to remove systematic error is through calibration of the measurement instrument.

Sources of random error

The random or stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be normally distributed when the stochastic error is the sum of many independent random errors because of the central limit theorem. Stochastic errors added to a regression equation account for the variation in Y that cannot be explained by the included Xs.

Surveys

The term "Observational error" is also sometimes used to refer to response errors and some other types of non-sampling error. In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996).

These errors can be random or systematic. Random errors are caused by unintended mistakes by respondents, interviewers and/or coders. Systematic error can occur if there is a systematic reaction of the respondents to the method used to formulate the survey question. Thus, the exact formulation of a survey question is crucial, since it affects the level of measurement error. Different tools are available to help researchers decide on the exact formulation of their questions, for instance by estimating the quality of a question using MTMM experiments or by predicting it using the Survey Quality Predictor software (SQP). This information about quality can also be used to correct for measurement error.

Effect on regression analysis

If the dependent variable in a regression is measured with error, regression analysis and associated hypothesis testing are unaffected, except that the R2 will be lower than it would be with perfect measurement.

However, if one or more independent variables is measured with error, then the estimated regression coefficients are biased (for classical measurement error in a single regressor, they are attenuated toward zero) and standard hypothesis tests are invalid.
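
A quick simulation makes the contrast visible. The parameters below are arbitrary; with classical measurement error added to a single regressor, the ordinary least squares slope is attenuated toward zero, whereas error in the dependent variable mainly inflates the residual variance.

    import random

    random.seed(0)
    n, true_slope = 10_000, 2.0
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [true_slope * xi + random.gauss(0, 1) for xi in x]  # error in the dependent variable
    x_noisy = [xi + random.gauss(0, 1) for xi in x]         # error in the independent variable

    def ols_slope(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        var = sum((a - mx) ** 2 for a in xs)
        return cov / var

    print("slope with clean x:", round(ols_slope(x, y), 2))        # close to 2.0
    print("slope with noisy x:", round(ols_slope(x_noisy, y), 2))  # roughly 1.0: attenuated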

Clinical decision support system

From Wikipedia, the free encyclopedia
 
A clinical decision support system (CDSS) is a health information technology system that is designed to provide physicians and other health professionals with clinical decision support (CDS), that is, assistance with clinical decision-making tasks. A working definition has been proposed by Robert Hayward of the Centre for Health Evidence: "Clinical decision support systems link health observations with health knowledge to influence health choices by clinicians for improved health care". CDSSs constitute a major topic in artificial intelligence in medicine.

Characteristics

A clinical decision support system has been defined as "active knowledge systems, which use two or more items of patient data to generate case-specific advice." This implies that a CDSS is simply a decision support system that is focused on using knowledge management in such a way as to achieve clinical advice for patient care based on multiple items of patient data.

Purpose

The main purpose of modern CDSS is to assist clinicians at the point of care. This means that clinicians interact with a CDSS to help analyse patient data and reach a diagnosis based on it.

In the early days, CDSSs were conceived of as being used to literally make decisions for the clinician. The clinician would input the information and wait for the CDSS to output the "right" choice and the clinician would simply act on that output. However, the modern methodology of using CDSSs to assist means that the clinician interacts with the CDSS, utilizing both their own knowledge and the CDSS, to make a better analysis of the patient's data than either human or CDSS could make on their own. Typically, a CDSS makes suggestions for the clinician to look through, and the clinician is expected to pick out useful information from the presented results and discount erroneous CDSS suggestions.

The two main types of CDSS are knowledge-based and non-knowledge-based:

An example of how a clinical decision support system might be used by a clinician is a diagnosis decision support system (DDSS). A DDSS requests some of the patient's data and, in response, proposes a set of appropriate diagnoses. The physician then takes the output of the DDSS and determines which diagnoses might be relevant and which are not, and if necessary orders further tests to narrow down the diagnosis.

Another example of a CDSS would be a case-based reasoning (CBR) system. A CBR system might use previous case data to help determine the appropriate number of beams and the optimal beam angles for use in radiotherapy for brain cancer patients; medical physicists and oncologists would then review the recommended treatment plan to determine its viability.

Another important classification of a CDSS is based on the timing of its use. Physicians use these systems at point of care to help them as they are dealing with a patient, with the timing of use being either pre-diagnosis, during diagnosis, or post diagnosis. Pre-diagnosis CDSS systems are used to help the physician prepare the diagnoses. CDSS used during diagnosis help review and filter the physician's preliminary diagnostic choices to improve their final results. Post-diagnosis CDSS systems are used to mine data to derive connections between patients and their past medical history and clinical research to predict future events. As of 2012 it has been claimed that decision support will begin to replace clinicians in common tasks in the future.

Another approach, used by the National Health Service in England, is to use a DDSS (either, in the past, operated by the patient, or, today, by a phone operative who is not medically-trained) to triage medical conditions out of hours by suggesting a suitable next step to the patient (e.g. call an ambulance, or see a general practitioner on the next working day). The suggestion, which may be disregarded by either the patient or the phone operative if common sense or caution suggests otherwise, is based on the known information and an implicit conclusion about what the worst-case diagnosis is likely to be; it is not always revealed to the patient, because it might well be incorrect and is not based on a medically-trained person's opinion - it is only used for initial triage purposes.

Knowledge-based CDSS

Most CDSSs consist of three parts: a knowledge base, an inference engine, and a mechanism to communicate. The knowledge base contains the rules and associations of compiled data, which most often take the form of IF-THEN rules. If this were a system for determining drug interactions, for example, a rule might be: IF drug X is taken AND drug Y is taken, THEN alert the user. Using another interface, an advanced user could edit the knowledge base to keep it up to date with new drugs. The inference engine combines the rules from the knowledge base with the patient's data. The communication mechanism shows the results to the user and allows data to be entered into the system.
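As a minimal sketch of this three-part structure, the following Python example shows IF-THEN drug-interaction rules evaluated by a tiny inference engine against a patient's medication list; the drug pairs and rule format are hypothetical stand-ins for a real, curated and regularly updated knowledge base.

  # Knowledge base: IF drug X is taken AND drug Y is taken, THEN alert the user.
  # The rules below are illustrative examples only, not clinical guidance.
  INTERACTION_RULES = [
      {"if_drugs": {"warfarin", "aspirin"}, "then_alert": "Increased bleeding risk"},
      {"if_drugs": {"drug_x", "drug_y"}, "then_alert": "Example interaction alert"},
  ]

  def inference_engine(patient_medications, rules=INTERACTION_RULES):
      """Combine the rules in the knowledge base with the patient's data."""
      meds = {m.lower() for m in patient_medications}
      alerts = []
      for rule in rules:
          if rule["if_drugs"] <= meds:   # all drugs named in the rule are being taken
              alerts.append(rule["then_alert"])
      return alerts

  # Communication mechanism (here, simply printed text) shows results to the user.
  if __name__ == "__main__":
      for alert in inference_engine(["Warfarin", "Aspirin", "Metformin"]):
          print("ALERT:", alert)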

An expression language such as GELLO or CQL (Clinical Quality Language) is needed to express knowledge artifacts in a computable manner. For example: if a patient has diabetes mellitus and the last hemoglobin A1c test result was less than 7%, recommend re-testing if it has been more than six months since the test; if the last result was greater than or equal to 7%, recommend re-testing if it has been more than three months.
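As a rough illustration only (not actual GELLO or CQL syntax), the retesting rule above can be sketched in ordinary Python; in practice the same logic would be written as a computable knowledge artifact in an expression language such as CQL.

  from datetime import date, timedelta

  def hba1c_retest_recommended(has_diabetes, last_a1c_percent, last_test_date, today=None):
      """Sketch of the HbA1c retesting rule described above (illustrative only)."""
      if not has_diabetes:
          return False
      today = today or date.today()
      months_since_test = (today - last_test_date).days / 30.44
      if last_a1c_percent < 7.0:
          return months_since_test > 6   # well controlled: re-test after 6 months
      return months_since_test > 3       # 7% or above: re-test after 3 months

  # Example: last result 7.4%, measured about 4 months ago -> recommend re-testing
  print(hba1c_retest_recommended(True, 7.4, date.today() - timedelta(days=120)))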

The current focus of the HL7 CDS WG is to build on the Clinical Quality Language (CQL). CMS has announced that it plans to use CQL for the specification of eCQMs (https://ecqi.healthit.gov/cql).

Non-knowledge-based CDSS

CDSSs that do not use a knowledge base use a form of artificial intelligence called machine learning, which allows computers to learn from past experiences and/or find patterns in clinical data. This eliminates the need for writing rules and for expert input. However, since systems based on machine learning cannot explain the reasons for their conclusions, most clinicians do not use them directly for diagnoses, for reliability and accountability reasons. Nevertheless, they can be useful as post-diagnostic systems, suggesting patterns for clinicians to look into in more depth.

As of 2012, three types of non-knowledge-based systems are support-vector machines, artificial neural networks and genetic algorithms (see the sketch after this list).
  1. Artificial neural networks use nodes and weighted connections between them to analyse the patterns found in patient data and derive associations between symptoms and a diagnosis.
  2. Genetic algorithms are based on simplified evolutionary processes using directed selection to achieve optimal CDSS results. The selection algorithms evaluate components of random sets of solutions to a problem; the solutions that come out on top are then recombined, mutated and run through the process again, over and over, until a suitable solution is discovered. They are functionally similar to neural networks in that they are also "black boxes" that attempt to derive knowledge from patient data.
Non-knowledge-based systems often focus on a narrow list of symptoms, such as the symptoms of a single disease, as opposed to the knowledge-based approach, which covers the diagnosis of many different diseases.
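As a minimal sketch of the non-knowledge-based approach, the following Python example (using scikit-learn, with entirely synthetic symptom data and a hypothetical single-disease label) trains a small neural network to associate symptom patterns with a diagnosis instead of relying on hand-written rules.

  import numpy as np
  from sklearn.neural_network import MLPClassifier

  rng = np.random.default_rng(0)

  # Synthetic data: each row is a patient, each column a binary symptom flag
  # (e.g. fever, cough, rash); the label says whether the disease is present.
  X = rng.integers(0, 2, size=(200, 5))
  y = (X[:, 0] & X[:, 2]).astype(int)   # toy association hidden in the data, not given as a rule

  model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
  model.fit(X, y)

  new_patient = np.array([[1, 0, 1, 0, 1]])
  print("Predicted probability of disease:", model.predict_proba(new_patient)[0, 1])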

Regulations

United States

With the enactment of the American Recovery and Reinvestment Act of 2009 (ARRA), there has been a push for widespread adoption of health information technology through the Health Information Technology for Economic and Clinical Health (HITECH) Act. Through these initiatives, more hospitals and clinics are integrating electronic medical records (EMRs) and computerized physician order entry (CPOE) within their health information processing and storage. Consequently, the Institute of Medicine (IOM) promoted the use of health information technology, including clinical decision support systems, to advance the quality of patient care. The IOM had published a report in 1999, To Err Is Human, which focused on the patient safety crisis in the United States and pointed to the strikingly high number of deaths from preventable medical errors. This statistic attracted great attention to the quality of patient care.

With the enactment of the HITECH Act, included in the ARRA to encourage the adoption of health IT, more detailed case law for CDSSs and EMRs is still being defined by the Office of the National Coordinator for Health Information Technology (ONC) and approved by the Department of Health and Human Services (HHS). A definition of "meaningful use" is yet to be published.

Despite the absence of such laws, CDSS vendors would almost certainly be viewed as having a legal duty of care both to the patients who may be adversely affected by CDSS usage and to the clinicians who use the technology for patient care. However, the legal regulations governing such duties of care are not yet explicitly defined.

With recent legislation shifting payment incentives towards performance, CDSSs are becoming more attractive.

Effectiveness

The evidence of the effectiveness of CDSSs is mixed; certain disease entities benefit more from CDSSs than others. A 2018 systematic review identified six medical conditions in which CDSSs improved patient outcomes in hospital settings: blood glucose management, blood transfusion management, physiologic deterioration prevention, pressure ulcer prevention, acute kidney injury prevention, and venous thromboembolism prophylaxis. A 2014 systematic review did not find a benefit in terms of risk of death when the CDSS was combined with the electronic health record, though there may be some benefits in terms of other outcomes. A 2005 systematic review concluded that CDSSs improved practitioner performance in 64% of the studies and patient outcomes in 13% of the studies. CDSS features associated with improved practitioner performance included automatic electronic prompts rather than requiring user activation of the system.

Another 2005 systematic review found that "decision support systems significantly improved clinical practice in 68% of trials." The CDSS features associated with success included integration into the clinical workflow rather than as a separate log-in or screen, electronic rather than paper-based templates, provision of decision support at the time and location of care rather than beforehand, and provision of recommendations for care.

However, later systematic reviews were less optimistic about the effects of CDS, with one from 2011 stating "There is a large gap between the postulated and empirically demonstrated benefits of [CDSS and other] eHealth technologies ... their cost-effectiveness has yet to be demonstrated".

A 5-year evaluation of the effectiveness of a CDSS in implementing rational treatment of bacterial infections was published in 2014; according to the authors, it was the first long term study of a CDSS.

Challenges to adoption

Clinical challenges

Much effort has been put forth by many medical institutions and software companies to produce viable CDSSs covering all aspects of clinical tasks. However, given the complexity of clinical workflows and the high demands on staff time, the institution deploying the support system must take care to ensure that it becomes a fluid and integral part of the clinical workflow. Some CDSSs have met with varying degrees of success, while others have suffered from common problems preventing or reducing successful adoption and acceptance.

Two sectors of the healthcare domain in which CDSSs have had a large impact are the pharmacy and billing sectors. There are commonly used pharmacy and prescription ordering systems that now perform batch-based checking of orders for negative drug interactions and report warnings to the ordering professional. Another sector of success for CDSS is in billing and claims filing. Since many hospitals rely on Medicare reimbursements to stay in operation, systems have been created to help examine both a proposed treatment plan and the current rules of Medicare in order to suggest a plan that attempts to address both the care of the patient and the financial needs of the institution.

Other CDSSs that are aimed at diagnostic tasks have found success, but are often very limited in deployment and scope. The Leeds Abdominal Pain System went operational in 1971 for the University of Leeds hospital, and was reported to have produced a correct diagnosis in 91.8% of cases, compared to the clinicians' success rate of 79.6%.

Despite the wide range of efforts by institutions to produce and use these systems, widespread adoption and acceptance have still not been achieved for most offerings. One large roadblock to acceptance has historically been workflow integration. There was a tendency to focus only on the functional decision-making core of the CDSS, resulting in a deficiency in planning for how the clinician would actually use the product in situ. CDSSs were often stand-alone applications, requiring the clinician to stop working in their current system, switch to the CDSS, input the necessary data (even if it had already been entered into another system), and examine the results produced. These additional steps break the flow from the clinician's perspective and cost precious time.

Technical challenges and barriers to implementation

Clinical decision support systems face steep technical challenges in a number of areas. Biological systems are profoundly complicated, and a clinical decision may utilize an enormous range of potentially relevant data. For example, an electronic evidence-based medicine system may potentially consider a patient's symptoms, medical history, family history and genetics, as well as historical and geographical trends of disease occurrence, and published clinical data on medicinal effectiveness when recommending a patient's course of treatment. 

Clinically, a large deterrent to CDSS acceptance is workflow integration. 

Another source of contention with many medical support systems is that they produce a massive number of alerts. When systems produce a high volume of warnings (especially those that do not require escalation), beyond the annoyance, clinicians may pay less attention to warnings, causing potentially critical alerts to be missed.

Maintenance

One of the core challenges facing CDSS is difficulty in incorporating the extensive quantity of clinical research being published on an ongoing basis. In a given year, tens of thousands of clinical trials are published. Currently, each one of these studies must be manually read, evaluated for scientific legitimacy, and incorporated into the CDSS in an accurate way. In 2004, it was stated that the process of gathering clinical data and medical knowledge and putting them into a form that computers can manipulate to assist in clinical decision-support is "still in its infancy".

Nevertheless, it is more feasible for a business to do this centrally, even if incompletely, than for each individual doctor to try to keep up with all the research being published.

In addition to being laborious, integration of new data can sometimes be difficult to quantify or incorporate into the existing decision support schema, particularly where different clinical papers appear to conflict. Properly resolving these discrepancies is often itself the subject of clinical papers, which can take months to complete.

Evaluation

In order for a CDSS to offer value, it must demonstrably improve clinical workflow or outcome. Evaluation of CDSS is the process of quantifying its value to improve a system's quality and measure its effectiveness. Because different CDSSs serve different purposes, there is no generic metric which applies to all such systems; however, attributes such as consistency (with itself, and with experts) often apply across a wide spectrum of systems.

The evaluation benchmark for a CDSS depends on the system's goal: for example, a diagnostic decision support system may be rated based upon the consistency and accuracy of its classification of disease (as compared to physicians or other decision support systems). An evidence-based medicine system might be rated based upon a high incidence of patient improvement, or higher financial reimbursement for care providers.
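As a hedged sketch of one such evaluation, the following Python example compares a diagnostic CDSS's classifications against physician judgements on the same cases using simple agreement statistics; the case labels are invented for illustration.

  from sklearn.metrics import accuracy_score, cohen_kappa_score

  # Hypothetical diagnoses for ten cases: what the CDSS proposed vs. the physician's call.
  cdss_labels      = ["flu", "flu", "pneumonia", "flu", "covid", "pneumonia", "flu", "covid", "flu", "pneumonia"]
  physician_labels = ["flu", "covid", "pneumonia", "flu", "covid", "flu", "flu", "covid", "flu", "pneumonia"]

  print("Raw agreement:", accuracy_score(physician_labels, cdss_labels))
  print("Cohen's kappa (chance-corrected agreement):", cohen_kappa_score(physician_labels, cdss_labels))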

Combining with electronic health records

Implementing EHRs is an inevitable challenge: the area is relatively uncharted, and many issues and complications arise during the implementation phase, as the numerous studies undertaken show. Although challenges in implementing electronic health records (EHRs) have received some attention, less is known about the process of transitioning from legacy EHRs to newer systems.

EHRs are a way to capture and utilise real-time data to provide high-quality patient care, ensuring efficiency and effective use of time and resources. Incorporating EHR and CDSS together into the process of medicine has the potential to change the way medicine has been taught and practiced. It has been said that "the highest level of EHR is a CDSS".

Since "clinical decision support systems (CDSS) are computer systems designed to impact clinician decision making about individual patients at the point in time that these decisions are made", it is clear that it would be beneficial to have a fully integrated CDSS and EHR. 

Even though the benefits can be seen, fully implementing a CDSS that is integrated with an EHR has historically required significant planning by the healthcare facility or organisation in order for the CDSS to be successful and effective. Success and effectiveness can be measured by improvements in the patient care delivered and reductions in adverse events. In addition, there would be savings of time and resources, and benefits in terms of autonomy and financial gains for the healthcare facility or organisation.

Benefits of CDSS combined with EHR

A successful CDSS/EHR integration will allow the provision of best practice, high quality care to the patient, which is the ultimate goal of healthcare.

Errors have always occurred in healthcare, so minimising them as much as possible is important in order to provide quality patient care. Three areas that can be addressed with the implementation of CDSSs and electronic health records (EHRs) are:
  1. Medication prescription errors
  2. Adverse drug events
  3. Other medical errors
CDSSs will be most beneficial in the future when healthcare facilities are "100% electronic" in terms of real-time patient information, thus reducing the number of modifications that have to occur to keep all the systems up to date with each other.

The measurable benefits of clinical decision support systems on physician performance and patient outcomes remain the subject of ongoing research.

Barriers

Implementing electronic health records (EHRs) in healthcare settings incurs challenges, none more important than maintaining efficiency and safety during rollout. For the implementation process to be effective, an understanding of the EHR users' perspectives is key to the success of EHR implementation projects. In addition, adoption needs to be actively fostered through a bottom-up, clinical-needs-first approach. The same can be said of CDSSs.

As of 2007, the main areas of concern with moving into a fully integrated EHR/CDSS system have been:
  1. Privacy
  2. Confidentiality
  3. User-friendliness
  4. Document accuracy and completeness
  5. Integration
  6. Uniformity
  7. Acceptance
  8. Alert desensitisation
as well as the key aspects of data entry that need to be addressed when implementing a CDSS in order to prevent potential adverse events. These aspects include whether:
  • correct data is being used
  • all the data has been entered into the system
  • current best practice is being followed
  • the data is evidence-based
A service oriented architecture has been proposed as a technical means to address some of these barriers.

Status in Australia

As of July 2015, the planned transition to EHRs in Australia is facing difficulties. The majority of healthcare facilities are still running completely paper-based systems, and some are in a transition phase of scanned EHRs, or are moving towards such a transition phase.

Victoria has attempted to implement EHR across the state with its HealthSMART program, but due to unexpectedly high costs it has cancelled the project.

South Australia (SA), however, has been slightly more successful than Victoria in implementing an EHR. This may be because all public healthcare organisations in SA are centrally run.

(However, on the other hand, the UK's National Health Service is also centrally administered, and its National Programme for IT in the 2000s, which included EHRs in its remit, was an expensive disaster.) 

SA is in the process of implementing the Enterprise Patient Administration System (EPAS). This system is the foundation for an EHR across all public hospitals and health care sites in SA, and it was expected that by the end of 2014 all facilities in SA would be connected to it. This would allow for successful integration of CDSSs in SA and increase the benefits of the EHR. By July 2015 it was reported that only 3 out of 75 health care facilities had implemented EPAS.

With the largest health system in the country and a federated rather than centrally administered model, New South Wales is making consistent progress towards statewide implementation of EHRs. The current iteration of the state's technology, eMR2, includes CDSS features such as a sepsis pathway for identifying at-risk patients based upon data input to the electronic record. As of June 2016, 93 of 194 sites in-scope for the initial roll-out had implemented eMR2.

Status in Finland

Duodecim EBMEDS Clinical Decision Support service is used by more than 60% of Finnish public health care doctors. 

DeepMind

From Wikipedia, the free encyclopedia
 
DeepMind Technologies Limited
Type of business: Subsidiary
Founded: 23 September 2010
Headquarters: 6 Pancras Square, London N1C 4AG, UK
CEO: Demis Hassabis
General manager: Lila Ibrahim
Industry: Artificial intelligence
Employees: 1,000+ as of June 2020
Parent: Independent (2010–2014); Google Inc. (2014–2015); Alphabet Inc. (2015–present)
URL: www.deepmind.com
Entrance of the building where Google and DeepMind are located at 6 Pancras Square, London, UK.

DeepMind Technologies is a UK artificial intelligence company founded in September 2010, and acquired by Google in 2014. The company is based in London, with research centres in Canada, France, and the United States. In 2015, it became a wholly owned subsidiary of Alphabet Inc.
 
The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.

The company made headlines in 2016 after its AlphaGo program beat a human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.

History


In one interview, Demis Hassabis said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to those available today. Some of those games included Breakout, Pong and Space Invaders. The AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time learning the game, the AI would eventually become an expert in it. "The cognitive processes which the AI goes through are said to be very like those a human who had never seen the game would use to understand and attempt to master it." The goal of the founders is to create a general-purpose AI that can be useful and effective for almost anything.

Major venture capital firms Horizons Ventures and Founders Fund invested in the company, as did entrepreneurs Scott Banister, Peter Thiel, and Elon Musk. Jaan Tallinn was an early investor and an adviser to the company. On 26 January 2014, Google announced that it had agreed to acquire DeepMind Technologies for $500 million. The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013. The company was afterwards renamed Google DeepMind and kept that name for about two years.

In 2014, DeepMind received the "Company of the Year" award from Cambridge Computer Laboratory.

In September 2015, DeepMind and the Royal Free NHS Trust signed their initial Information Sharing Agreement (ISA) to co-develop a clinical task management app, Streams.

After Google's acquisition, the company established an artificial intelligence ethics board. The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on it. DeepMind, together with Amazon, Google, Facebook, IBM and Microsoft, is a founding member of the Partnership on AI, an organization devoted to the society-AI interface. DeepMind has opened a new unit called DeepMind Ethics and Society, focused on the ethical and societal questions raised by artificial intelligence, with the philosopher Nick Bostrom as an advisor. In October 2017, DeepMind launched this new research team to investigate AI ethics.

In December 2019, co-founder Mustafa Suleyman announced he would be leaving DeepMind to join Google, working in a policy role.

Machine learning

DeepMind Technologies' goal is to "solve intelligence", which they are trying to achieve by combining "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms". They are trying to formalize intelligence in order to not only implement it into machines, but also understand the human brain, as Demis Hassabis explains:
[...] attempting to distil intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds.
In 2016, Google Research released a paper regarding AI safety and avoiding undesirable behaviour during the AI learning process. DeepMind has also released several publications via its website. In 2017, DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours.

To date, the company has published research on computer systems that are able to play games, and on the development of these systems, ranging from strategy games such as Go to arcade games. According to Shane Legg, human-level machine intelligence can be achieved "when a machine can learn to play a really wide range of games from perceptual stream input and output, and transfer understanding across games [...]".

Research describing an AI playing seven different Atari 2600 video games (the Pong game in Video Olympics, Breakout, Space Invaders, Seaquest, Beamrider, Enduro, and Q*bert) reportedly led to the company's acquisition by Google. Hassabis has mentioned the popular e-sport game StarCraft as a possible future challenge, since it requires a high level of strategic thinking and the handling of imperfect information. The first demonstration of DeepMind's progress in StarCraft II occurred on 24 January 2019, on StarCraft's Twitch channel and DeepMind's YouTube channel.

In July 2018, researchers from DeepMind trained one of its systems to play the famous computer game Quake III Arena.

As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science. DeepMind has received substantial media attention, especially during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019.

Deep reinforcement learning

As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning. DeepMind tests the system on video games, notably early arcade games such as Space Invaders and Breakout. Without any change to the code, the AI begins to understand how to play the game, and after some time plays a few games (most notably Breakout) more efficiently than any human ever could.
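A minimal PyTorch sketch of the ingredients described here: a convolutional network mapping raw pixel frames to action values, and a one-step Q-learning target. The layer sizes follow the commonly published DQN architecture, and the random tensors merely stand in for real game frames and an experience-replay buffer; this is an illustration, not DeepMind's actual code.

  import torch
  import torch.nn as nn

  class QNetwork(nn.Module):
      """Convolutional network mapping a stack of 4 grayscale 84x84 frames to Q-values."""
      def __init__(self, n_actions):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
              nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
              nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
              nn.Flatten(),
              nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
              nn.Linear(512, n_actions),
          )

      def forward(self, frames):
          return self.net(frames)

  n_actions, gamma = 4, 0.99
  q_net, target_net = QNetwork(n_actions), QNetwork(n_actions)

  # Stand-ins for a batch sampled from an experience-replay buffer.
  states      = torch.rand(32, 4, 84, 84)
  actions     = torch.randint(0, n_actions, (32,))
  rewards     = torch.rand(32)
  next_states = torch.rand(32, 4, 84, 84)

  # One-step Q-learning target: r + gamma * max_a' Q_target(s', a')
  with torch.no_grad():
      targets = rewards + gamma * target_net(next_states).max(dim=1).values
  predicted = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
  loss = nn.functional.smooth_l1_loss(predicted, targets)
  loss.backward()   # gradients would then be applied by an optimizer
  print("TD loss:", loss.item())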

In 2013, DeepMind demonstrated an AI system that could surpass human abilities in games such as Pong, Breakout and Enduro, while surpassing state-of-the-art performance on Seaquest, Beamrider, and Q*bert. DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Doom, which first appeared in the early 1990s.

In 2020, DeepMind published Agent57, an AI agent that surpasses human-level performance on all 57 games of the Atari 2600 suite.

AlphaGo and successors

In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2-dan (out of a possible 9-dan) professional, five to zero. This was the first time an artificial intelligence (AI) had defeated a professional Go player; previously, computers were only known to have played Go at amateur level. Go is considered much more difficult for computers to win than games such as chess, because its much larger number of possibilities makes it prohibitively difficult for traditional AI methods such as brute-force search.

In March 2016 it beat Lee Sedol—a 9th dan Go player and one of the highest ranked players in the world—with a score of 4-1 in a five-game match.

At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who at the time had held the world No. 1 ranking continuously for two years. It used a supervised learning protocol, studying large numbers of games played by humans against each other.

In 2017, an improved version, AlphaGo Zero, defeated AlphaGo 100 games to 0. AlphaGo Zero's strategies were self-taught. AlphaGo Zero was able to beat its predecessor after just three days with less processing power than AlphaGo; in comparison, the original AlphaGo needed months to learn how to play.

Later that year, AlphaZero, a modified version of AlphaGo Zero but for handling any two-player game of perfect information, gained superhuman abilities at chess and shogi. Like AlphaGo Zero, AlphaZero learned solely through self-play.

Technology

AlphaGo's technology was developed based on the deep reinforcement learning approach, which distinguishes it from the other AI technologies then on the market. AlphaGo's "brain" was introduced to various moves based on historical tournament data; the number of moves was increased gradually until it eventually processed over 30 million of them. The aim was to have the system mimic the human player and eventually become better. It played against itself and learned not only from its own defeats but from its wins as well; thus, it learned to improve itself over time and increased its winning rate as a result.

AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network was trained via supervised learning and subsequently refined by policy-gradient reinforcement learning. The value network learned to predict the winners of games played by the policy network against itself. After training, these networks were combined with a lookahead Monte Carlo tree search (MCTS): the policy network identified candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.
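As an illustrative sketch (not DeepMind's actual code), the way the policy network's move priors steer the tree search can be captured by the PUCT-style selection rule commonly described for AlphaGo: each candidate move is scored by its average value plus an exploration bonus proportional to its prior and inversely related to its visit count. The statistics below are hypothetical.

  import math

  def puct_score(move_stats, prior, parent_visits, c_puct=1.5):
      """Mean action value Q plus an exploration term U driven by the policy prior."""
      q = move_stats["value_sum"] / move_stats["visits"] if move_stats["visits"] else 0.0
      u = c_puct * prior * math.sqrt(parent_visits) / (1 + move_stats["visits"])
      return q + u

  # Hypothetical statistics for three candidate moves at one tree node.
  moves = {
      "A": {"visits": 10, "value_sum": 6.0, "prior": 0.5},
      "B": {"visits": 2,  "value_sum": 1.5, "prior": 0.3},
      "C": {"visits": 0,  "value_sum": 0.0, "prior": 0.2},
  }
  parent_visits = sum(m["visits"] for m in moves.values())

  best = max(moves, key=lambda k: puct_score(moves[k], moves[k]["prior"], parent_visits))
  print("Move selected for expansion:", best)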

AlphaGo Zero was trained using reinforcement learning in which the system played millions of games against itself; its only guide was to increase its win rate. It did so without learning from games played by humans. Its only input features are the black and white stones on the board. It uses a single neural network, rather than separate policy and value networks, and its simplified tree search relies upon this neural network to evaluate positions and sample moves, without Monte Carlo rollouts. A new reinforcement learning algorithm incorporates lookahead search inside the training loop. AlphaGo Zero employed around 15 people and millions in computing resources, yet ultimately it needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs) instead of AlphaGo's 48.
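A rough sketch, under stated assumptions, of the self-play training signal described here: MCTS visit counts at each position become the policy target, and the eventual game outcome becomes the value target for every position in that game. The helper name and the toy data below are hypothetical.

  import numpy as np

  def training_examples_from_self_play(game_states, mcts_visit_counts, outcomes):
      """Turn one self-played game into (state, policy_target, value_target) tuples.

      game_states:       list of board encodings (black/white stone planes)
      mcts_visit_counts: per-position visit counts over legal moves
      outcomes:          +1 if the player to move at that position won, else -1
      """
      examples = []
      for state, visits, z in zip(game_states, mcts_visit_counts, outcomes):
          policy_target = np.asarray(visits, dtype=float)
          policy_target /= policy_target.sum()        # normalised visit counts
          examples.append((state, policy_target, z))  # one network fits both targets
      return examples

  # Toy 3-position "game" with 4 legal moves per position.
  states   = [np.zeros((2, 19, 19)) for _ in range(3)]
  visits   = [[10, 5, 3, 2], [1, 20, 4, 0], [7, 7, 7, 7]]
  outcomes = [+1, -1, +1]
  for s, p, z in training_examples_from_self_play(states, visits, outcomes):
      print("policy target:", np.round(p, 2), "value target:", z)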

AlphaFold

In 2016 DeepMind turned its artificial intelligence to protein folding, one of the toughest problems in science. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. “This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem,” Hassabis said to The Guardian.

WaveNet and WaveRNN

Also in 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant. In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet.

In 2018, DeepMind introduced a more efficient model called WaveRNN co-developed with Google AI. In 2019, Google started to roll it out to Google Duo users.

AlphaStar

In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match.

In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed. By October 2019, AlphaStar reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions.

Miscellaneous contributions to Google

Google has stated that DeepMind algorithms have greatly increased the efficiency of cooling its data centers. In addition, DeepMind (alongside other Alphabet AI researchers) assists Google Play's personalized app recommendations. DeepMind has also collaborated with the Android team at Google for the creation of two new features which were made available to people with devices running Android Pie, the ninth installment of Google's mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more computing power.

DeepMind Health

In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was announced to develop AI applications for healthcare. DeepMind would be applied to the analysis of anonymised eye scans, searching for early signs of diseases leading to blindness.

In August 2016, a research programme with University College London Hospital was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas.

There are also projects with the Royal Free London NHS Foundation Trust and Imperial College Healthcare NHS Trust to develop new clinical mobile apps linked to electronic patient records. Staff at the Royal Free Hospital were reported as saying in December 2017 that access to patient data through the app had saved a "huge amount of time" and made a "phenomenal" difference to the management of patients with acute kidney injury. Test result data is sent to staff's mobile phones, alerting them to changes in the patient's condition. It also enables staff to see if someone else has responded, and to show patients their results in visual form.

In November 2017, DeepMind announced a research partnership with the Cancer Research UK Centre at Imperial College London with the goal of improving breast cancer detection by applying machine learning to mammography. Additionally, in February 2018, DeepMind announced it was working with the U.S. Department of Veterans Affairs in an attempt to use machine learning to predict the onset of acute kidney injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need.

DeepMind developed an app called Streams, which sends alerts to doctors about patients at risk of acute kidney injury. On 13 November 2018, DeepMind announced that its health division and the Streams app would be absorbed into Google Health. Privacy advocates said the announcement betrayed patient trust and appeared to contradict previous statements by DeepMind that patient data would not be connected to Google accounts or services. A spokesman for DeepMind said that patient data would still be kept separate from Google services and projects.

NHS data-sharing controversy

In April 2016, New Scientist obtained a copy of a data-sharing agreement between DeepMind and the Royal Free London NHS Foundation Trust. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals. This included personal details such as whether patients had been diagnosed with HIV, had suffered from depression or had ever undergone an abortion, shared in order to conduct research seeking better outcomes in various health conditions.

A complaint was filed to the Information Commissioner's Office (ICO), arguing that the data should be pseudonymised and encrypted. In May 2016, New Scientist published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare products Regulatory Agency.

In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her "considered opinion" the data-sharing agreement between DeepMind and the Royal Free took place on an "inappropriate legal basis". The Information Commissioner's Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind.

DeepMind Ethics and Society

In October 2017, DeepMind announced a new research unit, DeepMind Ethics & Society. Its goal is to fund external research on the following themes: privacy, transparency, and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world's challenges. Through this work, the team hopes to further understand the ethical implications of AI and help society see how AI can be beneficial.

This new subdivision of DeepMind is a completely separate unit from the Partnership on Artificial Intelligence to Benefit People and Society, the partnership of leading AI companies, academia, civil society organizations and nonprofits of which DeepMind is also a part. The DeepMind Ethics and Society board is also distinct from the mooted AI Ethics Board that Google originally agreed to form when acquiring DeepMind.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...