Tuesday, May 26, 2015

Friedrich Hayek


From Wikipedia, the free encyclopedia

Born: 8 May 1899, Vienna, Cisleithania, Austria-Hungary
Died: 23 March 1992 (aged 92), Freiburg im Breisgau, Baden-Württemberg, Germany
Nationality: Austrian, British
School or tradition: Austrian School

Friedrich Hayek CH (German: [ˈfʁiːdʁɪç ˈaʊ̯ɡʊst ˈhaɪ̯ɛk]; 8 May 1899 – 23 March 1992), born in Austria-Hungary as Friedrich August von Hayek and frequently referred to as F. A. Hayek, was an Austrian and British economist and philosopher best known for his defence of classical liberalism. Hayek shared the 1974 Nobel Memorial Prize in Economic Sciences with Gunnar Myrdal for his "pioneering work in the theory of money and economic fluctuations and ... penetrating analysis of the interdependence of economic, social and institutional phenomena".[1]

Hayek was a major social theorist and political philosopher of the twentieth century,[2][3] and his account of how changing prices communicate information which enables individuals to co-ordinate their plans is widely regarded as an important achievement in economics.[4]

Hayek served in World War I and said that his experience in the war and his desire to help avoid the mistakes that had led to the war led him to his career. Hayek lived in Austria, Great Britain, the United States and Germany, and became a British subject in 1938. He spent most of his academic life at the London School of Economics (LSE), the University of Chicago, and the University of Freiburg.

In 1984, he was appointed a member of the Order of the Companions of Honour by Queen Elizabeth II on the advice of Prime Minister Margaret Thatcher for his "services to the study of economics".[5] He was the first recipient of the Hanns Martin Schleyer Prize in 1984.[6] He also received the US Presidential Medal of Freedom in 1991 from President George H. W. Bush.[7] In 2011, his article "The Use of Knowledge in Society" was selected as one of the top 20 articles published in The American Economic Review during its first 100 years.[8]

Early life

Friedrich August von Hayek was born in Vienna to August von Hayek and Felicitas née von Juraschek. Friedrich's father, from whom he received his middle name, was also born in Vienna, in 1871. He was a medical doctor employed by the municipal ministry of health, with a passion for botany, on which he wrote a number of monographs.
August von Hayek was also a part-time botany lecturer at the University of Vienna. Friedrich's mother was born in 1875 to a wealthy, conservative, land-owning family. As her mother had died several years before Friedrich's birth, Felicitas received a significant inheritance, which provided as much as half of her and August's income during the early years of their marriage. Hayek was the eldest of three brothers; Heinrich (1900–69) and Erich (1904–86) were one-and-a-half and five years younger than he was.[9]

His father's career as a university professor influenced Friedrich's goals later in life.[10] Both of his grandfathers, who lived long enough for Friedrich to know them, were scholars. Franz von Juraschek was a leading economist in Austria-Hungary and a close friend of Eugen Böhm von Bawerk, one of the founders of the Austrian School of Economics. Von Juraschek was a statistician and was later employed by the Austrian government. Friedrich's paternal grandfather, Gustav Edler von Hayek, taught natural sciences at the Imperial Realobergymnasium (secondary school) in Vienna. He wrote systematic works in biology, some of which are relatively well known.[11]

On his mother's side, Hayek was second cousin to the philosopher Ludwig Wittgenstein. His mother often played with Wittgenstein's sisters, and had known Ludwig well. As a result of their family relationship, Hayek became one of the first to read Wittgenstein's Tractatus Logico-Philosophicus when the book was published in its original German edition in 1921. Although Hayek met Wittgenstein on only a few occasions, Hayek said that Wittgenstein's philosophy and methods of analysis had a profound influence on his own life and thought.[12] In his later years, Hayek recalled a discussion of philosophy with Wittgenstein, when both were officers during World War I.[13] After Wittgenstein's death, Hayek had intended to write a biography of Wittgenstein and worked on collecting family materials; and he later assisted biographers of Wittgenstein.[14]

At his father's suggestion, Hayek, as a teenager, read the genetic and evolutionary works of Hugo de Vries and the philosophical works of Ludwig Feuerbach.[15] In school Hayek was much taken by one instructor's lectures on Aristotle's ethics.

In 1917, Hayek joined an artillery regiment in the Austro-Hungarian Army and fought on the Italian front. Much of Hayek's combat experience was spent as a spotter in an aeroplane. Hayek suffered damage to his hearing in his left ear during the war,[16] and was decorated for bravery. During this time Hayek also survived the 1918 flu pandemic.[17]

Hayek then decided to pursue an academic career, determined to help avoid the mistakes that had led to the war. Hayek said of his experience, "The decisive influence was really World War I. It's bound to draw your attention to the problems of political organization." He vowed to work for a better world.[18]

Education and career

At the University of Vienna, Hayek earned doctorates in law and political science in 1921 and 1923 respectively, and he also studied philosophy, psychology, and economics. For a short time, when the University of Vienna closed, Hayek studied in Constantin von Monakow's Institute of Brain Anatomy, where Hayek spent much of his time staining brain cells. Hayek's time in Monakow's lab, and his deep interest in the work of Ernst Mach, inspired Hayek's first intellectual project, eventually published as The Sensory Order (1952). It located connective learning at the physical and neurological levels, rejecting the "sense data" associationism of the empiricists and logical positivists. Hayek presented his work to the private seminar he had created with Herbert Furth called the Geistkreis.[19]

During Hayek's years at the University of Vienna, Carl Menger's work on the explanatory strategy of social science and Friedrich von Wieser's commanding presence in the classroom left a lasting influence on him.[20] Upon the completion of his examinations, Hayek was hired by Ludwig von Mises on the recommendation of Wieser as a specialist for the Austrian government working on the legal and economic details of the Treaty of Saint Germain.

Between 1923 and 1924 Hayek worked as a research assistant to Prof. Jeremiah Jenks of New York University, compiling macroeconomic data on the American economy and the operations of the US Federal Reserve.[21]

Hayek was initially sympathetic to Wieser's democratic socialism, but his economic thinking shifted away from socialism and toward the classical liberalism of Carl Menger after he read von Mises' book Socialism. It was sometime after reading Socialism that Hayek began attending von Mises' private seminars, joining several of his university friends, including Fritz Machlup, Alfred Schutz, Felix Kaufmann, and Gottfried Haberler, who were also participating in Hayek's own, more general, private seminar. It was during this time that he also encountered and befriended the noted political philosopher Eric Voegelin, with whom he retained a long-standing relationship.[22]

With the help of Mises, in the late 1920s Hayek founded and served as director of the Austrian Institute for Business Cycle Research, before joining the faculty of the London School of Economics (LSE) in 1931 at the behest of Lionel Robbins. Upon his arrival in London, Hayek was quickly recognised as one of the leading economic theorists in the world, and his development of the economics of processes in time and the co-ordination function of prices inspired the ground-breaking work of John Hicks, Abba Lerner, and many others in the development of modern microeconomics.[23]

In 1932, Hayek argued that private investment in the public markets was a better road to wealth and economic co-ordination in Britain than government spending programmes, in a letter he co-signed with Lionel Robbins and others during an exchange of letters with John Maynard Keynes in The Times.[24][25] The public-policy backdrop for this single public engagement between Hayek and Keynes over British monetary and fiscal policy was the nearly decade-long deflationary depression in Britain that followed Churchill's 1925 decision to return the country to the gold standard at the old pre-war, pre-inflationary parity. Otherwise Hayek and Keynes agreed on many theoretical matters, and their economic disagreements were fundamentally theoretical, concerning almost exclusively the relation between the economics of extending the length of production and the economics of labour inputs.

Economists who studied with Hayek at the LSE in the 1930s and the 1940s include Arthur Lewis, Ronald Coase, John Kenneth Galbraith, Abba Lerner, Nicholas Kaldor, George Shackle, Thomas Balogh, Vera Smith, L. K. Jha, Arthur Seldon, Paul Rosenstein-Rodan, and Oskar Lange.[26][27][28] Hayek also taught or tutored many other LSE students, including David Rockefeller.[29]

Unwilling to return to Austria after the Anschluss brought it under the control of Nazi Germany in 1938, Hayek remained in Britain and became a British subject in 1938. He held this status for the remainder of his life, but he did not live in Great Britain after 1950. He lived in the United States from 1950 to 1962 and then mostly in Germany but also briefly in Austria.[30]

The Road to Serfdom

Hayek was concerned about the general view in Britain's academia that fascism was a capitalist reaction to socialism, and The Road to Serfdom arose from those concerns. It was written between 1940 and 1943. The title was inspired by the French classical liberal thinker Alexis de Tocqueville's writings on the "road to servitude".[31] It was first published in Britain by Routledge in March 1944 and was quite popular; owing in part to wartime paper rationing, copies were so scarce that Hayek called it "that unobtainable book".[32] When it was published in the United States by the University of Chicago in September of that year, it achieved greater popularity than in Britain.[33] At the arrangement of editor Max Eastman, the American magazine Reader's Digest also published an abridged version in April 1945, enabling The Road to Serfdom to reach a far wider audience than academics. The book remains widely popular among those advocating individualism and classical liberalism.

Chicago

In 1950, Hayek left the London School of Economics for the University of Chicago, where he became a professor in the Committee on Social Thought. Senior university officials at Chicago wanted the Economics Department to hire him, but the department declined to do so. Milton Friedman, Hayek's friend, was highly critical of Hayek's writings in economics, Prices and Production and his book on capital theory.[34] Hayek at the time was working in political theory, not economics, and was hostile to the positivist approach the department embraced. Nevertheless, Hayek had frequent contacts with some members of the Department of Economics, and his political views resembled those of many in the Chicago School. In terms of ideology, said Friedman, the majority of the Chicago economics department "was on Hayek's side."[35] Hayek actively promoted Aaron Director, who was active in the Chicago School, in helping to fund and establish what became the "Law and Society" program in the University of Chicago Law School.[36]
Hayek, Frank Knight, Friedman and George Stigler worked together in forming the Mont Pèlerin Society, an international forum for libertarian economists. Hayek and Friedman cooperated in support of the Intercollegiate Society of Individualists, later renamed the Intercollegiate Studies Institute, an American student organisation devoted to libertarian ideas.[37][38]

Hayek's first class at Chicago was a faculty seminar on the philosophy of science attended by many of the University's most notable scientists of the time, including Enrico Fermi, Sewall Wright and Leó Szilárd. During his time at Chicago, Hayek worked on the philosophy of science, economics, political philosophy, and the history of ideas. Hayek's economics notes from this period have yet to be published. Hayek received a Guggenheim Fellowship in 1954.[39]

After editing a book on John Stuart Mill's letters he planned to publish two books on the liberal order, The Constitution of Liberty and "The Creative Powers of a Free Civilization" (eventually the title for the second chapter of The Constitution of Liberty).[40] He completed The Constitution of Liberty in May 1959, with publication in February 1960. Hayek was concerned "with that condition of men in which coercion of some by others is reduced as much as is possible in society".[41] Hayek was disappointed that the book did not receive the same enthusiastic general reception as The Road to Serfdom had sixteen years before.[42]

Freiburg, Los Angeles, and Salzburg

From 1962 until his retirement in 1968, he was a professor at the University of Freiburg, West Germany, where he began work on his next book, Law, Legislation and Liberty. Hayek regarded his years at Freiburg as "very fruitful".[43] Following his retirement, Hayek spent a year as a visiting professor of philosophy at the University of California, Los Angeles, where he continued work on Law, Legislation and Liberty, teaching a graduate seminar by the same name and another on the philosophy of social science. Primary drafts of the book were completed by 1970, but Hayek chose to rework his drafts and finally brought the book to publication in three volumes in 1973, 1976 and 1979.

He was a professor at the University of Salzburg from 1969 to 1977; he then returned to Freiburg, where he spent the rest of his days. When Hayek left Salzburg in 1977, he wrote, "I made a mistake in moving to Salzburg." The economics department was small, and the library facilities were inadequate.[44]

Nobel laureate

On 9 October 1974, it was announced that Hayek would be awarded the Nobel Memorial Prize in Economics, along with the Swedish socialist economist Gunnar Myrdal. The reasons for the joint award are described in the Nobel committee's press release.[45] Hayek was surprised to receive the prize and believed that he had been paired with Myrdal to balance the award with someone from the opposite side of the political spectrum.[46]

During the Nobel ceremony in December 1974, Hayek met the Russian dissident Aleksandr Solzhenitsyn. Hayek later sent him a Russian translation of The Road to Serfdom.[46] Although he spoke with apprehension at his award speech about the danger which the authority of the prize would lend to an economist,[47] the prize brought much greater public awareness of Hayek and has been described by his biographer as "the great rejuvenating event in his life".[48]

Later years

United Kingdom politics

In February 1975, Margaret Thatcher was elected leader of the British Conservative Party. The Institute of Economic Affairs arranged a meeting between Hayek and Thatcher in London soon after.[49] During Thatcher's only visit to the Conservative Research Department in the summer of 1975, a speaker had prepared a paper on why the "middle way" was the pragmatic path the Conservative Party should take, avoiding the extremes of left and right. Before he had finished, Thatcher "reached into her briefcase and took out a book. It was Hayek's The Constitution of Liberty. Interrupting our pragmatist, she held the book up for all of us to see. 'This', she said sternly, 'is what we believe', and banged Hayek down on the table".[50]

In 1977, Hayek was critical of the Lib-Lab pact, in which the British Liberal Party agreed to keep the British Labour government in office. Writing to The Times, Hayek said, "May one who has devoted a large part of his life to the study of the history and the principles of liberalism point out that a party that keeps a socialist government in power has lost all title to the name 'Liberal'. Certainly no liberal can in future vote 'Liberal'".[51] Hayek was criticised by Liberal politicians Gladwyn Jebb and Andrew Phillips, who both claimed that the purpose of the pact was to discourage socialist legislation.

Lord Gladwyn pointed out that the German Free Democrats were in coalition with the German Social Democrats.[52] Hayek was defended by Professor Antony Flew who stated that the German Social Democrats, unlike the British Labour Party, had, since the late 1950s, abandoned public ownership of the means of production, distribution and exchange and had instead embraced the social market economy.[53]

In 1978, Hayek came into conflict with the Liberal Party leader, David Steel, who claimed that liberty was possible only with "social justice and an equitable distribution of wealth and power, which in turn require a degree of active government intervention" and that the Conservative Party were more concerned with the connection between liberty and private enterprise than between liberty and democracy. Hayek claimed that a limited democracy might be better than other forms of limited government at protecting liberty but that an unlimited democracy was worse than other forms of unlimited government because "its government loses the power even to do what it thinks right if any group on which its majority depends thinks otherwise".

Hayek stated that if the Conservative leader had said "that free choice is to be exercised more in the market place than in the ballot box, she has merely uttered the truism that the first is indispensable for individual freedom while the second is not: free choice can at least exist under a dictatorship that can limit itself but not under the government of an unlimited democracy which cannot".[54]

Influence on central European politics

US President Ronald Reagan listed Hayek as among the two or three people who most influenced his philosophy and welcomed Hayek to the White House as a special guest.[55] In the 1970s and 1980s, the writings of Hayek were also a major influence on many of the leaders of the "velvet" revolution in Central Europe during the collapse of the old Soviet Empire. Here are some supporting examples:
There is no figure who had more of an influence, no person had more of an influence on the intellectuals behind the Iron Curtain than Friedrich Hayek. His books were translated and published by the underground and black market editions, read widely, and undoubtedly influenced the climate of opinion that ultimately brought about the collapse of the Soviet Union.[56]
—Milton Friedman (Hoover Institution)
The most interesting among the courageous dissenters of the 1980s were the classical liberals, disciples of F. A. Hayek, from whom they had learned about the crucial importance of economic freedom and about the often-ignored conceptual difference between liberalism and democracy.[57]
—Andrzej Walicki (History, Notre Dame)
Estonian Prime Minister Mart Laar came to my office the other day to recount his country's remarkable transformation. He described a nation of people who are harder-working, more virtuous – yes, more virtuous, because the market punishes immorality – and more hopeful about the future than they've ever been in their history. I asked Mr. Laar where his government got the idea for these reforms. Do you know what he replied? He said, "We read Milton Friedman and F. A. Hayek."[58]
—US Representative Dick Armey
I was 25 years old and pursuing my doctorate in economics when I was allowed to spend six months of post-graduate studies in Naples, Italy. I read the Western economic textbooks and also the more general work of people like Hayek. By the time I returned to Czechoslovakia, I had an understanding of the principles of the market. In 1968, I was glad at the political liberalism of the Dubcek Prague Spring, but was very critical of the Third Way they pursued in economics.[59]
—Václav Klaus (former President of the Czech Republic)

Recognition

In 1980, Hayek, a non-practicing Roman Catholic,[60] was one of twelve Nobel laureates to meet with Pope John Paul II, "to dialogue, discuss views in their fields, communicate regarding the relationship between Catholicism and science, and 'bring to the Pontiff's attention the problems which the Nobel Prize Winners, in their respective fields of study, consider to be the most urgent for contemporary man'".[61]

In 1984, he was appointed as a member of the Order of the Companions of Honour (CH) by Queen Elizabeth II of the United Kingdom on the advice of the British Prime Minister Margaret Thatcher for his "services to the study of economics".[5] Hayek had hoped to receive a baronetcy, and after he was awarded the CH he sent a letter to his friends requesting that they call him by the English version of Friedrich (Frederick) from then on. After his twenty-minute audience with the Queen, he was "absolutely besotted" with her, according to his daughter-in-law, Esca Hayek. Hayek said a year later that he was "amazed by her. That ease and skill, as if she'd known me all my life." The audience with the Queen was followed by a dinner with family and friends at the Institute of Economic Affairs. When, later that evening, Hayek was dropped off at the Reform Club, he commented: "I've just had the happiest day of my life."[62]

In 1991, US President George H. W. Bush awarded Hayek the Presidential Medal of Freedom, one of the two highest civilian awards in the United States, for a "lifetime of looking beyond the horizon". Hayek died on 23 March 1992 in Freiburg, Germany, and was buried on 4 April in the Neustift am Wald cemetery in the northern outskirts of Vienna according to the Catholic rite.[63]

The New York University Journal of Law and Liberty holds an annual lecture in his honour.[64]

Work

The business cycle

Hayek's principal investigations in economics concerned capital, money, and the business cycle. Mises had earlier applied the concept of marginal utility to the value of money in his Theory of Money and Credit (1912), in which he also proposed an explanation for "industrial fluctuations" based on the ideas of the old British Currency School and of the Swedish economist Knut Wicksell. Hayek used this body of work as a starting point for his own interpretation of the business cycle, elaborating what later became known as the "Austrian Theory of the Business Cycle". Hayek spelled out the Austrian approach in more detail in his 1929 book, an English translation of which appeared in 1933 as Monetary Theory and the Trade Cycle. There he argued for a monetary approach to the origins of the cycle. In his Prices and Production (1931), Hayek argued that the business cycle resulted from the central bank's inflationary credit expansion and its transmission over time, leading to capital misallocation caused by artificially low interest rates. Hayek claimed that "the past instability of the market economy is the consequence of the exclusion of the most important regulator of the market mechanism, money, from itself being regulated by the market process".
Hayek's analysis was based on Böhm-Bawerk's concept of the "average period of production"[65] and on the effects that monetary policy could have upon it. In accordance with the reasoning later outlined in his essay The Use of Knowledge in Society (1945), Hayek argued that a monopolistic governmental agency like a central bank can neither possess the relevant information which should govern supply of money, nor have the ability to use it correctly.[66]
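In Wicksellian terms, the mechanism can be compressed into a rate comparison (the notation here is illustrative, not Hayek's own): write $i$ for the loan rate set by the banking system and $r^{*}$ for the "natural" rate at which voluntary saving equals investment. Then

$$ i < r^{*} \;\Longrightarrow\; \text{credit expands and production is lengthened beyond what actual saving supports}, $$

and the boom lasts only until the mismatch between planned production and actual saving is revealed, at which point the malinvested capital must be liquidated.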

In 1929, Lionel Robbins assumed the helm of the London School of Economics (LSE). Eager to promote alternatives to what he regarded as the narrow approach of the school of economic thought that then dominated the English-speaking academic world (centred at the University of Cambridge and deriving largely from the work of Alfred Marshall), Robbins invited Hayek to join the faculty at LSE, which he did in 1931. According to Nicholas Kaldor, Hayek's theory of the time-structure of capital and of the business cycle initially "fascinated the academic world" and appeared to offer a less "facile and superficial" understanding of macroeconomics than the Cambridge school's.[67]

Also in 1931, Hayek critiqued Keynes's Treatise on Money (1930) in his "Reflections on the pure theory of Mr. J. M. Keynes"[68] and published his lectures at the LSE in book form as Prices and Production.[69] Unemployment and idle resources are, for Keynes, caused by a lack of effective demand; for Hayek, they stem from a previous, unsustainable episode of easy money and artificially low interest rates.

The economic calculation problem

Building on the earlier work of Ludwig von Mises and others, Hayek also argued that while in centrally planned economies an individual or a select group of individuals must determine the distribution of resources, these planners will never have enough information to carry out this allocation reliably. This argument, first proposed by Max Weber, says that the efficient exchange and use of resources can be maintained only through the price mechanism in free markets (see economic calculation problem).
In 1935, Hayek published Collectivist Economic Planning, a collection of essays from an earlier debate that had been initiated by Ludwig von Mises. Hayek included Mises's essay, in which Mises argued that rational planning was impossible under socialism.

Some socialists, such as H. D. Dickinson and Oskar Lange, responded by invoking general equilibrium theory, which they argued disproved Mises's thesis. They noted that the difference between a planned and a free market system lay in who was responsible for solving the equations. They argued that if some of the prices chosen by socialist managers were wrong, gluts or shortages would appear, signalling them to adjust the prices up or down, just as in a free market. Through such trial and error, a socialist economy could mimic the efficiency of a free market system while avoiding its many problems.
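Lange's trial-and-error procedure is, in modern terms, a price tâtonnement: adjust each price in proportion to its excess demand. A minimal sketch for a single market, assuming invented linear demand and supply curves (nothing below comes from Lange's or Hayek's texts):

    # Toy tatonnement: the planner observes only the imbalance between
    # demand and supply at the posted price and nudges the price toward
    # balance. The demand and supply functions are hypothetical.

    def demand(p):
        return 100 - 2 * p          # downward-sloping demand

    def supply(p):
        return 10 + 4 * p           # upward-sloping supply

    price, step = 1.0, 0.05         # starting price, adjustment speed
    for _ in range(200):
        excess = demand(price) - supply(price)   # > 0 shortage, < 0 glut
        if abs(excess) < 1e-6:
            break
        price += step * excess      # raise price on shortage, cut on glut

    print(f"approximate market-clearing price: {price:.2f}")   # tends to 15

Hayek's rejoinder, developed in the papers discussed next, is that a real planner never observes so clean an excess-demand signal: the relevant knowledge is dispersed, local, and often tacit.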

Hayek challenged this vision in a series of contributions. In "Economics and Knowledge" (1937), he pointed out that the standard equilibrium theory assumed that all agents have full and correct information. In the real world, however, different individuals have different bits of knowledge, and furthermore, some of what they believe is wrong.

In "The Use of Knowledge in Society" (1945), Hayek argued that the price mechanism serves to share and synchronise local and personal knowledge, allowing society's members to achieve diverse, complicated ends through a principle of spontaneous self-organization. He contrasted the use of the price mechanism with central planning, arguing that the former allows for more rapid adaptation to changes in particular circumstances of time and place.[70] Thus, he set the stage for Oliver Williamson's later contrast between markets and hierarchies as alternative co-ordination mechanisms for economic transactions.[71] He used the term catallaxy to describe a "self-organizing system of voluntary co-operation". Hayek's research into this argument was specifically cited by the Nobel Committee in its press release awarding Hayek the Nobel prize.[45]

Against collectivism

Hayek was one of the leading academic critics of collectivism in the 20th century. Hayek argued that all forms of collectivism (even those theoretically based on voluntary co-operation) could only be maintained by a central authority of some kind. In Hayek's view, the central role of the state should be to maintain the rule of law, with as little arbitrary intervention as possible. In his popular book, The Road to Serfdom (1944) and in subsequent academic works, Hayek argued that socialism required central economic planning and that such planning in turn leads towards totalitarianism.

From The Road to Serfdom:
Although our modern socialists' promise of greater freedom is genuine and sincere, in recent years observer after observer has been impressed by the unforeseen consequences of socialism, the extraordinary similarity in many respects of the conditions under "communism" and "fascism".[72]
Hayek posited that a central planning authority would have to be endowed with powers that would impact and ultimately control social life, because the knowledge required for centrally planning an economy is inherently dispersed among individuals and would first have to be brought under the planners' control.

Though Hayek did argue that the state should provide law centrally, others have pointed out that this contradicts his arguments about the role of judges in "discovering" the law, suggesting that Hayek would have supported decentralized provision of legal services.[73]

Hayek also wrote that the state can play a role in the economy, and specifically, in creating a "safety net". He wrote, "There is no reason why, in a society which has reached the general level of wealth ours has, the first kind of security should not be guaranteed to all without endangering general freedom; that is: some minimum of food, shelter and clothing, sufficient to preserve health. Nor is there any reason why the state should not help to organize a comprehensive system of social insurance in providing for those common hazards of life against which few can make adequate provision."[74]

Investment and choice

Perhaps more fully than any other economist, Hayek investigated the choice theory of investment. He examined the inter-relations between non-permanent production goods and "latent" or potentially economic permanent resources – building on the choice theoretical insight that, "processes that take more time will evidently not be adopted unless they yield a greater return than those that take less time".[75]
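The quoted insight can be restated as a present-value condition (the notation is illustrative, not Hayek's): a process maturing at time $t$ with payoff $R_t$ is preferred to a shorter one maturing at $s < t$ with payoff $R_s$ only if

$$ \frac{R_t}{(1+r)^{t}} \;\ge\; \frac{R_s}{(1+r)^{s}}, \qquad\text{equivalently}\qquad R_t \;\ge\; R_s\,(1+r)^{\,t-s}, $$

so the premium required of the longer process grows with both the interest rate $r$ and the extra waiting time $t-s$. This is also why, in Hayek's cycle theory, an artificially low $r$ tilts investment toward processes that take more time.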

Hayek's work on the microeconomics of the choice theoretics of investment, non-permanent goods, potential permanent resources, and economically-adapted permanent resources marks a central dividing point between his work in areas of macroeconomics and that of almost all other economists. Hayek's work on the macroeconomic subjects of central planning, trade cycle theory, the division of knowledge, and especially entrepreneurial adaptation differs greatly from the opinions of macroeconomic "Marshallian" economists in the tradition of John Maynard Keynes and the microeconomic "Walrasian" economists in the tradition of Abba Lerner.

Philosophy of science

During World War II, Hayek began his 'Abuse of Reason' project. His goal was to show how a number of then-popular doctrines and beliefs with which he disagreed had a common origin in some fundamental misconceptions about social science.[76] In his philosophy of science, which has much in common with that of his good friend Karl Popper, Hayek was highly critical of what he termed scientism: a false understanding of the methods of science that has been mistakenly forced upon the social sciences, but that is contrary to the practices of genuine science. Usually, scientism involves combining the philosophers' ancient demand for demonstrative justification with the associationists' false view that all scientific explanations are simple two-variable linear relationships. Hayek pointed out that much of science involves the explanation of complex multivariable and nonlinear phenomena, and that the social science of economics and undesigned order compares favourably with such complex sciences as Darwinian biology. These ideas were developed in The Counter-Revolution of Science (1952) and in some of Hayek's later essays in the philosophy of science, such as "Degrees of Explanation" and "The Theory of Complex Phenomena".

Psychology

In The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (1952), Hayek independently developed a "Hebbian learning" model of learning and memory – an idea which he first conceived in 1920, prior to his study of economics. Hayek's expansion of the "Hebbian synapse" construction into a global brain theory has received continued attention[77][78][79][80] in neuroscience, cognitive science, computer science, behavioural science, and evolutionary psychology, by scientists such as Gerald Edelman, and Joaquin Fuster.

Hayek posited two orders, the sensory order that we experience, and the natural order that natural science has revealed. Hayek thought that the sensory order is in fact a product of the brain. He characterized the brain as a highly complex but self-ordering, hierarchical classification system, a huge network of connections.
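Hayek's conjecture corresponds to what is now the textbook Hebbian rule: strengthen a connection whenever the units at both ends are active together. A minimal sketch with an invented pattern and constants (illustrative only, not a model taken from The Sensory Order):

    import numpy as np

    # A toy Hebbian memory: units that fire together become connected,
    # and the learned connections later complete a partial cue.
    n = 8
    pattern = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=float)
    w = np.zeros((n, n))                        # connection weights
    eta = 0.1                                   # learning rate

    for _ in range(50):                         # repeated co-activation
        w += eta * np.outer(pattern, pattern)   # Hebb: dw_ij = eta * x_i * x_j
    np.fill_diagonal(w, 0.0)                    # no self-connections

    cue = pattern.copy()
    cue[: n // 2] = 0.0                         # show the network half the pattern
    recalled = (w @ cue > 0).astype(float)      # threshold the summed input
    print("stored  :", pattern)
    print("recalled:", recalled)                # reproduces the stored pattern

The point of such models, and of Hayek's classification-system picture of the brain, is that memory resides in the pattern of connections rather than in any central register.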

Spontaneous order

Hayek viewed the free price system not as a conscious invention (that which is intentionally designed by man), but as spontaneous order, or what he referred to as "that which is the result of human action but not of human design". Thus, Hayek put the price mechanism on the same level as, for example, language. In The Fatal Conceit (1988), he attributed the birth of civilisation to private property, and explained that price signals are the only means of enabling each economic decision maker to communicate tacit or dispersed knowledge to the others, and so to solve the economic calculation problem.

Social and political philosophy

In the latter half of his career Hayek made a number of contributions to social and political philosophy, which he based on his views on the limits of human knowledge,[81] and the idea of spontaneous order in social institutions. He argues in favour of a society organised around a market order, in which the apparatus of state is employed almost (though not entirely) exclusively to enforce the legal order (consisting of abstract rules, and not particular commands) necessary for a market of free individuals to function. These ideas were informed by a moral philosophy derived from epistemological concerns regarding the inherent limits of human knowledge. Hayek argued that his ideal individualistic, free-market polity would be self-regulating to such a degree that it would be 'a society which does not depend for its functioning on our finding good men for running it'.[82]

Hayek disapproved of the notion of 'social justice'. He compared the market to a game in which 'there is no point in calling the outcome just or unjust'[83] and argued that 'social justice is an empty phrase with no determinable content';[84] likewise "the results of the individual's efforts are necessarily unpredictable, and the question as to whether the resulting distribution of incomes is just has no meaning".[85] He generally regarded government redistribution of income or capital as an unacceptable intrusion upon individual freedom: "the principle of distributive justice, once introduced, would not be fulfilled until the whole of society was organized in accordance with it. This would produce a kind of society which in all essential respects would be the opposite of a free society."[84]

With regard to a safety net, Hayek advocated "some provision for those threatened by the extremes of indigence or starvation, be it only in the interest of those who require protection against acts of desperation on the part of the needy".[86] As referenced in the section on "The economic calculation problem", Hayek wrote that "there is no reason why... the state should not help to organize a comprehensive system of social insurance". Summarizing Hayek's position on this topic, Wapshott[87] writes, "[Hayek] advocated mandatory universal health care and unemployment insurance, enforced, if not directly provided, by the state." Bernard Harcourt says that "Hayek was adamant about this."[88] In Law, Legislation and Liberty (1973), Hayek wrote:
There is no reason why in a free society government should not assure to all, protection against severe deprivation in the form of an assured minimum income, or a floor below which nobody need descend. To enter into such an insurance against extreme misfortune may well be in the interest of all; or it may be felt to be a clear moral duty of all to assist, within the organised community, those who cannot help themselves. So long as such a uniform minimum income is provided outside the market to all those who, for any reason, are unable to earn in the market an adequate maintenance, this need not lead to a restriction of freedom, or conflict with the Rule of Law.[89]
And in The Road to Serfdom:
Nor is there any reason why the state should not assist the individuals in providing for those common hazards of life against which, because of their uncertainty, few individuals can make adequate provision. Where, as in the case of sickness and accident, neither the desire to avoid such calamities nor the efforts to overcome their consequences are as a rule weakened by the provision of assistance – where, in short, we deal with genuinely insurable risks – the case for the state's helping to organize a comprehensive system of social insurance is very strong.... Wherever communal action can mitigate disasters against which the individual can neither attempt to guard himself nor make the provision for the consequences, such communal action should undoubtedly be taken.[72]

Critiques

Business cycle critiques

Keynes asked his friend Piero Sraffa to respond publicly to Hayek's challenge. Instead of formulating an alternative theory, Sraffa elaborated on the logical inconsistencies of Hayek's argument, especially concerning the effect of inflation-induced "forced savings" on the capital sector and the definition of a "natural" interest rate in a growing economy.[90] Others who responded negatively to Hayek's work on the business cycle included John Hicks, Frank Knight, and Gunnar Myrdal.[91] Kaldor later wrote that Hayek's Prices and Production had produced "a remarkable crop of critics" and that the total number of pages in British and American journals dedicated to the resulting debate "could rarely have been equalled in the economic controversies of the past."[67]


Hayek continued his research on monetary and capital theory, revising his theories of the relations between credit cycles and capital structure in Profits, Interest and Investment (1939) and The Pure Theory of Capital (1941), but his reputation as an economic theorist had by then fallen so much that those works were largely ignored, except for scathing critiques by Nicholas Kaldor.[67][92] Lionel Robbins himself, who had embraced the Austrian theory of the business cycle in The Great Depression (1934), later regretted having written that book and accepted many of the Keynesian counter-arguments.[93]

Hayek never produced the book-length treatment of "the dynamics of capital" that he had promised in the Pure Theory of Capital. After 1941, he continued to publish works on the economics of information, political philosophy, the theory of law, and psychology, but seldom on macroeconomics. At the University of Chicago, Hayek was not part of the economics department and did not influence the rebirth of neoclassical theory which took place there (see Chicago school of economics). When, in 1974, he shared the Nobel Memorial Prize in Economics with Gunnar Myrdal, the latter complained about being paired with an "ideologue". Milton Friedman declared himself "an enormous admirer of Hayek, but not for his economics. I think Prices and Production is a very flawed book. I think his [Pure Theory of Capital] is unreadable. On the other hand, The Road to Serfdom is one of the great books of our time."[93]

Critiques of his concept of collectivist rationalism

Arthur M. Diamond argues that Hayek's problems arise when he goes beyond claims that can be evaluated within economic science. Diamond argued: "The human mind, Hayek says, is not just limited in its ability to synthesize a vast array of concrete facts, it is also limited in its ability to give a deductively sound ground to ethics. Here is where the tension develops, for he also wants to give a reasoned moral defense of the free market. He is an intellectual skeptic who wants to give political philosophy a secure intellectual foundation. It is thus not too surprising that what results is confused and contradictory."[94]

Chandran Kukathas argues that Hayek's defence of liberalism is unsuccessful because it rests on presuppositions which are incompatible. The unresolved dilemma of his political philosophy is how to mount a systematic defence of liberalism if one emphasizes the limited capacity of reason.[95]

Norman P. Barry similarly notes that the “critical rationalism” in Hayek’s writings appears incompatible with “a certain kind of fatalism, that we must wait for evolution to pronounce its verdict.”[96]

New Right critiques

Alain de Benoist of the Nouvelle Droite (New Right) produced a highly critical essay on Hayek's work in an issue of Telos, citing the flawed assumptions behind Hayek's idea of "spontaneous order" and the authoritarian, totalising implications of his free-market ideology.[97]

Hayek's views on dictatorship

In 1962, Hayek sent António de Oliveira Salazar a copy of The Constitution of Liberty (1960), expressing the hope that his book—this "preliminary sketch of new constitutional principles"—"may assist" Salazar "in his endeavour to design a constitution which is proof against the abuses of democracy."

Hayek visited Chile in the 1970s and 1980s during the Government Junta of general Augusto Pinochet and accepted being named Honorary Chairman of the Centro de Estudios Públicos, the think tank formed by the economists who transformed Chile into a free market economy.

Asked about liberal, non-democratic rule by a Chilean interviewer, Hayek is translated from German to Spanish to English as having said, "As long term institutions, I am totally against dictatorships. But a dictatorship may be a necessary system for a transitional period. [...] Personally I prefer a liberal dictatorship to democratic government devoid of liberalism. My personal impression – and this is valid for South America – is that in Chile, for example, we will witness a transition from a dictatorial government to a liberal government." In a letter to the London Times, he defended the Pinochet regime and said that he had "not been able to find a single person even in much maligned Chile who did not agree that personal freedom was much greater under Pinochet than it had been under Allende".[98][99] Hayek admitted that "it is not very likely that this will succeed, even if, at a particular point in time, it may be the only hope there is". He explained, however: "It is not certain hope, because it will always depend on the goodwill of an individual, and there are very few individuals one can trust. But if it is the sole opportunity which exists at a particular moment it may be the best solution despite this. And only if and when the dictatorial government is visibly directing its steps towards limited democracy".

For Hayek, the supposedly stark difference between authoritarianism and totalitarianism had much importance, and he placed heavy weight on this distinction in his defence of transitional dictatorship. For example, when Hayek visited Venezuela in May 1981, he was asked to comment on the prevalence of "totalitarian" regimes in Latin America. In reply, Hayek warned against confusing "totalitarianism with authoritarianism" and said that he was unaware of "any totalitarian governments in Latin America. The only one was Chile under Allende". For Hayek, the word "totalitarian" signifies something very specific: the desire to "organize the whole of society" to attain a "definite social goal", which stands in stark contrast to "liberalism and individualism".[100]

Influence and recognition

Hayek's influence on the development of economics is widely acknowledged. Hayek is the second-most frequently cited economist (after Kenneth Arrow) in the Nobel lectures of the prize winners in economics, notable because his own lecture was critical of the field of orthodox economics and neoclassical modelling. A number of Nobel Laureates in economics, such as Vernon Smith and Herbert A. Simon, recognised Hayek as the greatest modern economist. Another Nobel winner, Paul Samuelson, believed that Hayek was worthy of his award but nevertheless claimed that "there were good historical reasons for fading memories of Hayek within the mainstream last half of the twentieth century economist fraternity. In 1931, Hayek's Prices and Production had enjoyed an ultra-short Byronic success. In retrospect hindsight tells us that its mumbo-jumbo about the period of production grossly misdiagnosed the macroeconomics of the 1927–1931 (and the 1931–2007) historical scene".[101] Despite this comment, Samuelson spent the last 50 years of his life obsessed with the problems of capital theory identified by Hayek and Böhm-Bawerk, and Samuelson flatly judged Hayek to have been right and his own teacher, Joseph Schumpeter, to have been wrong on the central economic question of the 20th century, the feasibility of socialist economic planning in a production goods dominated economy.[102]

Hayek is widely recognised for having introduced the time dimension to the equilibrium construction and for his key role in helping inspire the fields of growth theory, information economics, and the theory of spontaneous order. The "informal" economics presented in Milton Friedman's massively influential popular work Free to Choose (1980) is explicitly Hayekian in its account of the price system as a system for transmitting and co-ordinating knowledge. This can be explained by the fact that Friedman taught Hayek's famous paper "The Use of Knowledge in Society" (1945) in his graduate seminars.

In 1944 he was elected as a Fellow of the British Academy,[103] after he was nominated for membership by Keynes.[104]

Harvard economist and former Harvard University President Lawrence Summers explains Hayek's place in modern economics: "What's the single most important thing to learn from an economics course today? What I tried to leave my students with is the view that the invisible hand is more powerful than the [un]hidden hand. Things will happen in well-organized efforts without direction, controls, plans. That's the consensus among economists. That's the Hayek legacy."[105]

By 1947, Hayek was an organiser of the Mont Pelerin Society, a group of classical liberals who sought to oppose what they saw as socialism in various areas. He was also instrumental in the founding of the Institute of Economic Affairs, the free-market think tank that inspired Thatcherism. He was in addition a member of the Philadelphia Society.[106]

Hayek had a long-standing and close friendship with philosopher of science Karl Popper, also from Vienna. In a letter to Hayek in 1944, Popper stated, "I think I have learnt more from you than from any other living thinker, except perhaps Alfred Tarski." (See Hacohen, 2000). Popper dedicated his Conjectures and Refutations to Hayek. For his part, Hayek dedicated a collection of papers, Studies in Philosophy, Politics, and Economics, to Popper and, in 1982, said that "ever since his Logik der Forschung first came out in 1934, I have been a complete adherent to his general theory of methodology".[107] Popper also participated in the inaugural meeting of the Mont Pelerin Society. Their friendship and mutual admiration, however, do not change the fact that there are important differences between their ideas.[108]

Hayek also played a central role in Milton Friedman's intellectual development. Friedman wrote:
"My interest in public policy and political philosophy was rather casual before I joined the faculty of the University of Chicago. Informal discussions with colleagues and friends stimulated a greater interest, which was reinforced by Friedrich Hayek's powerful book The Road to Serfdom, by my attendance at the first meeting of the Mont Pelerin Society in 1947, and by discussions with Hayek after he joined the university faculty in 1950. In addition, Hayek attracted an exceptionally able group of students who were dedicated to a libertarian ideology. They started a student publication, The New Individualist Review, which was the outstanding libertarian journal of opinion for some years. I served as an adviser to the journal and published a number of articles in it...."[109]
Hayek's greatest intellectual debt was to Carl Menger, who pioneered an approach to social explanation similar to that developed in Britain by Bernard Mandeville and the Scottish moral philosophers in the Scottish Enlightenment. He had a wide-reaching influence on contemporary economics, politics, philosophy, sociology, psychology and anthropology. For example, Hayek's discussion in The Road to Serfdom (1944) about truth, falsehood and the use of language influenced some later opponents of postmodernism.[110]

Hayek and conservatism

Hayek received new attention in the 1980s and 1990s with the rise of conservative governments in the United States, United Kingdom, and Canada. After winning the 1979 United Kingdom general election, Margaret Thatcher appointed Keith Joseph, the director of the Hayekian Centre for Policy Studies, as her Secretary of State for Industry in an effort to redirect parliament's economic strategies. Likewise, David Stockman, Ronald Reagan's most influential financial official in 1981, was an acknowledged follower of Hayek.[111]

Hayek wrote an essay, "Why I Am Not a Conservative"[112] (included as an appendix to The Constitution of Liberty), in which he disparaged conservatism for its inability to adapt to changing human realities or to offer a positive political program, remarking, "Conservatism is only as good as what it conserves". Although he noted that modern-day conservatism shares many opinions on economics with classical liberals, particularly a belief in the free market, he believed this is because conservatism wants to "stand still", whereas liberalism embraces the free market because it "wants to go somewhere". Hayek identified himself as a classical liberal, but noted that in the United States it had become almost impossible to use "liberal" in its original definition, and the term "libertarian" had been used instead.

However, for his part, Hayek found this term "singularly unattractive" and offered the term "Old Whig" (a phrase borrowed from Edmund Burke) instead. In his later life, he said, "I am becoming a Burkean Whig." However, Whiggery as a political doctrine had little affinity for classical political economy, the tabernacle of the Manchester School and William Gladstone.[113] His essay has served as an inspiration to other liberal-minded economists wishing to distinguish themselves from conservative thinkers, for example James M. Buchanan's essay "Why I, Too, Am Not a Conservative: The Normative Vision of Classical Liberalism".

A common term in much of the world for what Hayek espoused is "neoliberalism". A British scholar, Samuel Brittan, concluded in 2010, "Hayek's book [The Constitution of Liberty] is still probably the most comprehensive statement of the underlying ideas of the moderate free market philosophy espoused by neoliberals."[114] Brittan adds that although Raymond Plant (2009) comes out in the end against Hayek's doctrines, Plant gives The Constitution of Liberty a "more thorough and fair-minded analysis than it has received even from its professed adherents".[114]

In Why F A Hayek is a Conservative,[115] British policy analyst Madsen Pirie believes Hayek mistakes the nature of the conservative outlook. Conservatives, he says, are not averse to change – but like Hayek, they are highly averse to change being imposed on the social order by people in authority who think they know how to run things better. They wish to allow the market to function smoothly and give it the freedom to change and develop. It is an outlook, says Pirie, that Hayek and conservatives both share.

Personal life

In August 1926, Hayek married Helen Berta Maria von Fritsch (1901–1960), a secretary at the civil service office where Hayek worked. They had two children together.[116] Friedrich and Helen divorced in July 1950, and just a few weeks later he married Helene Bitterlich (1900–1996),[117] having moved to Arkansas to take advantage of its permissive divorce laws.[118]

Hayek was an agnostic.[119]

Legacy and honours


Friedrich Hayek's grave in Neustifter Friedhof, Vienna

Even after his death, Hayek's intellectual presence is noticeable, especially in the universities where he had taught: the London School of Economics, the University of Chicago, and the University of Freiburg. A number of tributes have resulted, many posthumous.

Selected bibliography

  • Law, Legislation and Liberty, in three volumes:
    Volume I. Rules and Order, 1973.[123]
    Volume II. The Mirage of Social Justice, 1976.[124]
    Volume III. The Political Order of a Free People, 1979.[125]
  • The Fatal Conceit: The Errors of Socialism, 1988. Note: The authorship of The Fatal Conceit is under scholarly dispute.[126] The book in its published form may actually have been written entirely by its editor W. W. Bartley, III, not by Hayek.[127]

John Forbes Nash, Jr.



From Wikipedia, the free encyclopedia

John Forbes Nash, Jr. at a symposium on game theory at the University of Cologne, Germany, 2 November 2006
Born: June 13, 1928, Bluefield, West Virginia, U.S.
Died: May 23, 2015 (aged 86), Monroe Township, New Jersey, U.S.
Residence: United States
Nationality: American
Doctoral advisor: Albert W. Tucker
Spouse: Alicia Lopez-Harrison de Lardé (m. 1957–1963, divorced; m. 2001–2015, their deaths)
Children: 2

John Forbes Nash, Jr. (June 13, 1928 – May 23, 2015) was an American mathematician whose works in game theory, differential geometry, and partial differential equations have provided insight into the factors that govern chance and events inside complex systems in daily life.

His theories are used in economics, computing, evolutionary biology, artificial intelligence, accounting, computer science (the minimax algorithm, which is closely related to the Nash equilibrium), games of skill, politics and military theory. Serving as a Senior Research Mathematician at Princeton University during the latter part of his life, he shared the 1994 Nobel Memorial Prize in Economic Sciences with game theorists Reinhard Selten and John Harsanyi. In 2015, he was awarded the Abel Prize for his work on nonlinear partial differential equations.

In 1959, Nash began showing clear signs of mental illness, and spent several years at psychiatric hospitals being treated for paranoid schizophrenia. After 1970, his condition slowly improved, allowing him to return to academic work by the mid-1980s.[1] His struggles with his illness and his recovery became the basis for Sylvia Nasar's biography, A Beautiful Mind, as well as a film of the same name starring Russell Crowe.[2][3][4]

On May 23, 2015, Nash and his wife, Alicia de Lardé Nash, were killed in a motor vehicle accident in New Jersey while riding in a taxi.

Youth

Nash was born on June 13, 1928, in Bluefield, West Virginia, United States. His father, John Forbes Nash, was an electrical engineer for the Appalachian Electric Power Company. His mother, Margaret Virginia (née Martin) Nash, had been a schoolteacher before she married. He was baptized in the Episcopal Church directly opposite the Martin house on Tazewell Street.[5] He had a younger sister, Martha (born November 16, 1930).

Education

Nash attended kindergarten and public school, and he learned from books provided by his parents and grandparents. Nash's grandmother played piano at home, and Nash's memories of listening to her when he visited were pleasant.[6] Nash's parents pursued opportunities to supplement their son's education, and arranged for him to take advanced mathematics courses at a local community college during his final year of high school. Nash attended the Carnegie Institute of Technology (CIT; now Carnegie Mellon University) with a full scholarship, the George Westinghouse Scholarship, and initially majored in chemical engineering. He switched to chemistry, and eventually to mathematics. After graduating in 1948 with a B.S. degree and an M.S. degree, both in mathematics, he accepted a scholarship to Princeton University, where he pursued graduate studies in mathematics.[6]

Nash's adviser and former CIT professor Richard Duffin wrote a letter of recommendation for graduate school consisting of a single sentence: "This man is a genius."[7] Nash was accepted by Harvard University, but the chairman of the mathematics department of Princeton, Solomon Lefschetz, offered him the John S. Kennedy fellowship, which was enough to convince Nash that Princeton valued him more.[8] Nash also considered Princeton more favorably because of its location closer to his family in Bluefield.[6] He went to Princeton, where he worked on his equilibrium theory, later known as the Nash equilibrium.

Major contributions

Game theory

Nash earned a Ph.D. degree in 1950 with a 28-page dissertation on non-cooperative games.[9][10] The thesis, which was written under the supervision of doctoral advisor Albert W. Tucker, contained the definition and properties of the Nash equilibrium. A crucial concept in non-cooperative games, it won Nash the Nobel Memorial Prize in Economic Sciences in 1994.

Nash's major publications relating to this concept are the following papers:

  • "Equilibrium Points in N-person Games", Proceedings of the National Academy of Sciences 36 (1950)
  • "The Bargaining Problem", Econometrica 18 (1950)
  • "Non-cooperative Games", Annals of Mathematics 54 (1951)
  • "Two-person Cooperative Games", Econometrica 21 (1953)
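The defining property of a Nash equilibrium (no player can gain by deviating unilaterally) is mechanical enough to check by brute force. The following is a minimal illustrative sketch, not drawn from Nash's papers; the payoff matrices encode the classic Prisoner's Dilemma:

```python
# Minimal sketch: enumerate pure-strategy Nash equilibria of a
# two-player game. A cell is an equilibrium when neither player can
# improve their own payoff by unilaterally switching strategies.

def pure_nash_equilibria(payoff_row, payoff_col):
    rows, cols = len(payoff_row), len(payoff_row[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            # Row player has no better row against column c...
            row_best = all(payoff_row[r][c] >= payoff_row[r2][c]
                           for r2 in range(rows))
            # ...and column player has no better column against row r.
            col_best = all(payoff_col[r][c] >= payoff_col[r][c2]
                           for c2 in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Prisoner's Dilemma, strategies (0 = cooperate, 1 = defect).
ROW = [[-1, -3], [0, -2]]   # row player's payoffs
COL = [[-1, 0], [-3, -2]]   # column player's payoffs
print(pure_nash_equilibria(ROW, COL))  # [(1, 1)]: mutual defection
```

Mutual defection is the game's only equilibrium even though mutual cooperation pays both players more, which is precisely why the concept proved so useful in economics and politics.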

Other mathematics

Nash did groundbreaking work in the area of real algebraic geometry. His work in mathematics includes the Nash embedding theorem, which shows that every abstract Riemannian manifold can be isometrically realized as a submanifold of Euclidean space. He also made significant contributions to the theory of nonlinear parabolic partial differential equations and to singularity theory.
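Stated symbolically (a standard textbook formulation, not quoted from this article), the embedding theorem asks for a map $f : M^n \to \mathbb{R}^N$, with $N$ sufficiently large, whose induced metric reproduces the given metric $g$:

$$g_{ij}(x) = \sum_{k=1}^{N} \frac{\partial f_k}{\partial x_i}\,\frac{\partial f_k}{\partial x_j},$$

so that every curve in $M$ has exactly the same length as its image in Euclidean space.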

In her book A Beautiful Mind, author Sylvia Nasar explains that Nash was working on proving Hilbert's nineteenth problem, a theorem involving elliptic partial differential equations, when, in 1956, he suffered a severe disappointment: he learned that an Italian mathematician, Ennio De Giorgi, had published a proof just months before Nash achieved his. Each took a different route to his solution. The two mathematicians met at the Courant Institute of Mathematical Sciences of New York University during the summer of 1956. It has been speculated that if only one of them had solved the problem, he would have been awarded the Fields Medal for the proof.[6]

In 2011, the National Security Agency declassified letters written by Nash in the 1950s, in which he had proposed a new encryption–decryption machine.[11] The letters show that Nash had anticipated many concepts of modern cryptography, which are based on computational hardness.[12]
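Nash's letters themselves remain the primary source; as a loose illustration of the computational-hardness idea they anticipated, consider the asymmetry at the heart of much modern cryptography: multiplying two primes is cheap, while recovering them is believed to be expensive. The primes below are arbitrary small examples:

```python
# Toy illustration of computational hardness (not Nash's actual scheme):
# the "easy" direction is one multiplication; the "hard" direction,
# naive trial division, takes time that grows exponentially with the
# bit-length of n. Cryptographic keys use vastly larger primes.
import time

def trial_division(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d   # found the hidden factors
        d += 1
    return n, 1                # n was prime

p, q = 1000003, 1000033        # small primes, for illustration only
n = p * q                      # easy: instantaneous

start = time.time()
print(trial_division(n))       # hard: ~a million divisions
print(f"factored in {time.time() - start:.3f}s")
```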

Personal life

In 1951, Nash was hired by the Massachusetts Institute of Technology (MIT) as a C. L. E. Moore instructor in the mathematics faculty. About a year later, Nash began a relationship in Massachusetts with Eleanor Stier, a nurse he met while she cared for him as a patient. They had a son, John David Stier, but Nash left Stier when she told him of her pregnancy.[13] The film based on Nash's life, A Beautiful Mind, was criticized during the run-up to the 2002 Oscars for omitting this aspect of his life. He was said to have abandoned her because of her social status, which he thought beneath his own.[14]

In 1954, while in his 20s, Nash was arrested for indecent exposure in an entrapment of homosexuals in Santa Monica, California. Although the charges were dropped, he was stripped of his top-secret security clearance and fired from RAND Corporation, where he had spent a few summers as a consultant.[15]

Not long after breaking up with Stier, Nash met Alicia Lopez-Harrison de Lardé (born January 1, 1933), a naturalized U.S. citizen from El Salvador. De Lardé graduated from MIT, having majored in physics.[6] They married in February 1957; although Nash was an atheist, the ceremony was performed in a Roman Catholic church.[16][17]

In 1958, Nash earned a tenured position at MIT, and his first signs of mental illness were evident in early 1959. At this time, his wife was pregnant with their first child. He resigned his position as a member of the MIT mathematics faculty in the spring of 1959[6] and his wife had him admitted to McLean Hospital for treatment of schizophrenia that same year. Their son, John Charles Martin Nash, was born soon afterward. The child was not named for a year because his wife felt Nash should have a say in the name given to the boy. Due to the stress of dealing with his illness, Nash and de Lardé divorced in 1963. After his final hospital discharge in 1970, Nash lived in de Lardé's house as a boarder. This stability seemed to help him, and he learned how to consciously discard his paranoid delusions.[18] He stopped taking psychiatric medication and was allowed by Princeton to audit classes. He continued to work on mathematics and eventually he was allowed to teach again. In the 1990s, de Lardé and Nash resumed their relationship, remarrying in 2001.

Death

While riding in a taxicab on May 23, 2015, Nash and his wife, Alicia de Lardé Nash, were killed in a motor vehicle collision on the New Jersey Turnpike near Monroe Township. They were on their way home from Newark Airport after a visit to Norway, where Nash had received the Abel Prize. The driver of the cab lost control of the vehicle and struck a guard rail; both Nash and his wife were ejected from the car upon impact.[19][20][21][22][23] At the time of his death, Nash was 86 years old and a longtime resident of West Windsor Township, New Jersey.[24][25]

Following his death, obituaries appeared in scientific and conventional media throughout the world. In addition to their obituary for Nash,[26] The New York Times also published an article containing many notable quotes of Nash, assembled from diverse media and publications, providing his reflections on his life and achievements,[27] as well as an article on the cornerstone of his game theory on making choices in life.[28]

Mental illness


Nash in November 2006 at a game theory conference in Cologne, Germany

Nash's mental illness first began to manifest in the form of paranoia; his wife later described his behavior as erratic. Nash seemed to believe that all men who wore red ties were part of a communist conspiracy against him; Nash mailed letters to embassies in Washington, D.C., declaring that they were establishing a government.[1][29] Nash's psychological issues crossed into his professional life when he gave an American Mathematical Society lecture at Columbia University in 1959. Although ostensibly pertaining to a proof of the Riemann hypothesis, the lecture was incomprehensible. Colleagues in the audience immediately realized that something was wrong.[30]

He was admitted to McLean Hospital in April–May 1959, where he was diagnosed with paranoid schizophrenia. According to the clinical diagnosis, a person suffering from this disorder is dominated by relatively stable, often paranoid, fixed beliefs that are either false, over-imaginative or unrealistic, usually accompanied by experiences of seemingly real perception of something not actually present (particularly auditory and perceptual disturbances), a lack of motivation for life, and mild clinical depression.[31]

In 1961, Nash was admitted to the New Jersey State Hospital at Trenton.[32] Over the next nine years, he spent periods in psychiatric hospitals, where, aside from receiving antipsychotic medications, he was administered insulin shock therapy.[31][33][34]

Although he sometimes took prescribed medication, Nash later wrote that he only ever did so under pressure. After 1970, he was never committed to a hospital again, and he refused any further medication. According to Nash, the film A Beautiful Mind inaccurately implied that he was taking the new atypical antipsychotics during this period. He attributed the depiction to the screenwriter (whose mother, he noted, was a psychiatrist), who was worried about the film encouraging people with the disorder to stop taking their medication.[35] Journalist Robert Whitaker wrote an article suggesting that recovery from problems like Nash's can be hindered by such drugs.[36]

Nash said that psychotropic drugs are overrated and that their adverse effects are not given enough consideration once someone is deemed mentally ill.[37][38][39] According to Sylvia Nasar, author of the book A Beautiful Mind, on which the movie was based, Nash recovered gradually with the passage of time. Encouraged by his then-former wife, de Lardé, Nash worked in a communitarian setting where his eccentricities were accepted. De Lardé said of Nash, "it's just a question of living a quiet life".[1]

Nash dated the start of what he termed "mental disturbances" to the early months of 1959, when his wife was pregnant. He described a process of change "from scientific rationality of thinking into the delusional thinking characteristic of persons who are psychiatrically diagnosed as 'schizophrenic' or 'paranoid schizophrenic'"[6] including seeing himself as a messenger or having a special function in some way, and with supporters and opponents and hidden schemers, and a feeling of being persecuted, and looking for signs representing divine revelation.[40] Nash suggested his delusional thinking was related to his unhappiness, his desire to feel important and be recognized, and his characteristic way of thinking, saying, "I wouldn't have had good scientific ideas if I had thought more normally." He also said, "If I felt completely pressureless I don't think I would have gone in this pattern".[41] He did not draw a categorical distinction between schizophrenia and bipolar disorder.[42] Nash reported that he did not hear voices until around 1964, and later engaged in a process of consciously rejecting them.[43] He reported that he was always taken to hospitals against his will. He only temporarily renounced his "dream-like delusional hypotheses" after being in a hospital long enough to decide to superficially conform – to behave normally or to experience "enforced rationality". Only gradually on his own did he "intellectually reject" some of the "delusionally influenced" and "politically oriented" thinking as a waste of effort. However, by 1995, although he was "thinking rationally again in the style that is characteristic of scientists," he said he also felt more limited.[6][44]
Nash wrote in 1994:
I spent times of the order of five to eight months in hospitals in New Jersey, always on an involuntary basis and always attempting a legal argument for release. And it did happen that when I had been long enough hospitalized that I would finally renounce my delusional hypotheses and revert to thinking of myself as a human of more conventional circumstances and return to mathematical research. In these interludes of, as it were, enforced rationality, I did succeed in doing some respectable mathematical research. Thus there came about the research for "Le problème de Cauchy pour les équations différentielles d'un fluide général"; the idea that Prof. Hironaka called "the Nash blowing-up transformation"; and those of "Arc Structure of Singularities" and "Analyticity of Solutions of Implicit Function Problems with Analytic Data".
But after my return to the dream-like delusional hypotheses in the later 60s I became a person of delusionally influenced thinking but of relatively moderate behavior and thus tended to avoid hospitalization and the direct attention of psychiatrists.

Thus further time passed. Then gradually I began to intellectually reject some of the delusionally influenced lines of thinking which had been characteristic of my orientation. This began, most recognizably, with the rejection of politically oriented thinking as essentially a hopeless waste of intellectual effort. So at the present time I seem to be thinking rationally again in the style that is characteristic of scientists.[6]

Recognition and later career

In 1978, Nash was awarded the John von Neumann Theory Prize for his discovery of non-cooperative equilibria, now called Nash equilibria. He won the Leroy P. Steele Prize in 1999.

In 1994, he received the Nobel Memorial Prize in Economic Sciences (along with John Harsanyi and Reinhard Selten) as a result of his game theory work as a Princeton graduate student. In the late 1980s, Nash had begun to use email to gradually link with working mathematicians who realized that he was the John Nash and that his new work had value. They formed part of the nucleus of a group that contacted the Bank of Sweden's Nobel award committee and were able to vouch for Nash's mental health and his ability to receive the award in recognition of his early work.[citation needed]

As of 2011, Nash's recent work involved ventures in advanced game theory, including the notion of partial agency, which showed that, as in his early career, he preferred to select his own path and problems. Between 1945 and 1996, he published 23 scientific studies.

Nash suggested hypotheses on mental illness. He compared not thinking in an acceptable manner, or being "insane" and not fitting into a usual social function, to being "on strike" from an economic point of view. He advanced views in evolutionary psychology about the value of human diversity and the potential benefits of apparently nonstandard behaviors or roles.[45]

Nash developed work on the role of money in society. Arguing that people can be so controlled and motivated by money that they may not be able to reason rationally about it, he criticized interest groups that promote quasi-doctrines based on Keynesian economics that permit manipulative short-term inflation and debt tactics which ultimately undermine currencies. He suggested a global "industrial consumption price index" system that would support the development of more "ideal money" that people could trust, rather than more unstable "bad money". He noted that some of his thinking parallels that of economist and political philosopher Friedrich Hayek regarding money and a nontypical viewpoint of the function of the authorities.[46][47]

Nash received an honorary degree, Doctor of Science and Technology, from Carnegie Mellon University in 1999, an honorary degree in economics from the University of Naples Federico II on March 19, 2003,[48] and an honorary doctorate in economics from the University of Antwerp in April 2007, where he was the keynote speaker at a conference on game theory. He was also a guest speaker at a number of international events, such as the Warwick Economics Summit in 2005, held at the University of Warwick. In 2012 he was elected as a fellow of the American Mathematical Society.[49] On May 19, 2015, a few days before his death, Nash, along with Louis Nirenberg, was awarded the 2015 Abel Prize by King Harald V of Norway at a ceremony in Oslo.[50]

Representation in culture

At Princeton, Nash became a campus legend known as "The Phantom of Fine Hall"[51] (Princeton's mathematics center), a shadowy figure who would scribble arcane equations on blackboards in the middle of the night. He is referred to in Rebecca Goldstein's 1983 novel set at Princeton, The Mind-Body Problem.[1]

Sylvia Nasar's biography of Nash, A Beautiful Mind, was published in 1998. A film by the same name was released in 2001, directed by Ron Howard with Russell Crowe playing Nash.

The coming merge of human and machine intelligence

by Jeff Stibel 
Original link: http://medicalxpress.com/news/2015-05-merge-human-machine-intelligence.html



For most of the past two million years, the human brain has been growing steadily. But something has recently changed. In a surprising reversal, human brains have actually been shrinking for the last 20,000 years or so. We have lost nearly a baseball-sized amount of matter from a brain that isn't any larger than a football.

The descent is rapid and pronounced. The anthropologist John Hawks describes it as a "major downsizing in an evolutionary eyeblink." If this pace is maintained, scientists predict that our brains will be no larger than those of our forebears, Homo erectus, within another 2,000 years.

The reason that our brains are shrinking is simple: our biology is focused on survival, not intelligence. Larger brains were necessary to allow us to learn to use language, tools and all of the innovations that allowed our species to thrive. But now that we have become civilized—domesticated, if you will—certain aspects of intelligence are less necessary.

This is actually true of all animals: domesticated animals, including dogs, cats, hamsters and birds, have 10 to 15 percent smaller brains than their counterparts in the wild. Because brains are so expensive to maintain, large sizes are selected out when nature sees no direct survival benefit. It is an inevitable fact of life.

Fortunately, another influence has evolved over the past 20,000 years that is making us smarter even as our brains are shrinking: technology. Technology has allowed us to leapfrog evolution, enabling our brains and bodies to do things that were otherwise impossible biologically. We weren't born with wings, but we've created airplanes, helicopters, hot air balloons and hang gliders. We don't have sufficient natural strength or speed to bring down big game, but we've created spears, rifles and livestock farms.

Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature's intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world's information embedded in our minds via the Internet.

Psychics and physics

In the late 1800s, a German astronomer named Hans Berger fell off a horse and was nearly trampled by cavalry. He narrowly escaped injury, but was forever changed by the incident, owing to the reaction of his sister. Though she was miles away at the time, Berger's sister was instantly overcome with a feeling that Hans was in trouble. Berger took this as evidence of the mind's psychic ability and dedicated the rest of his life to finding certain proof.

Berger abandoned his study of astronomy and enrolled in medical school to gain an understanding of the brain that would allow him to prove a "correlation between objective activity in the brain and subjective psychic phenomena." He later joined the University of Jena in Germany as professor of neurology to pursue his quest.

At the time, psychic interest was relatively high. There were numerous academics devoted to the field, studying at prestigious institutions such as Stanford and Duke, Oxford and Cambridge. Still, it was largely considered bunk science, with most credible academics focused on dispelling, rather than proving, claims of psychic ability. But one of those psychic beliefs happened to be true.

That belief is the now well-understood notion that our brains communicate electrically. This was a radical idea at the time; after all, the electromagnetic field had only been discovered in 1865. But Berger found proof. He invented a device called the electroencephalogram (you probably know it as an EEG) that recorded brain waves. Using his new EEG, Berger was the first to demonstrate that our neurons actually talk to one another, and that they do so with electrical pulses. He published his results in 1929.

The new normal

As often happens with revolutionary ideas, Berger's EEG results were either ignored or lambasted as trickery. This was, after all, preternatural activity. But over the next decade, enough independent scholars verified the results that they became widely accepted. Berger saw his findings as evidence of the mind's potential for "psychic" activity, and he continued searching for more evidence until the day he hanged himself in frustration. The rest of the scientific community went back to what it had always been doing, "good science," and largely forgot about the electric neuron.

That was the case until the biophysicist Eberhard Fetz came along in 1969 and elaborated on Berger's discovery. Fetz reasoned that if brains were controlled by electricity, then perhaps we could use our brains to control machines. In a small primate lab at the University of Washington in Seattle, he connected the brain of a rhesus monkey to an electrical meter and then watched in amazement as the monkey learned how to control the level of the meter with nothing but its thoughts.

While incredible, this insight didn't have much application in 1969. But with the rapid development of silicon chips, computers and data networks, the technology now exists to connect people's brains to the Internet, and it's giving rise to a new breed of intelligence.

Scientists in labs across the globe are busy perfecting computer chips that can be implanted in the human brain. In many ways, the results, if successful, fit squarely in the realm of "psychics." There may be no such thing as paranormal activity, but make no mistake that all of the following are possible and on the horizon: telepathy, no problem; telekinesis, absolutely; clairvoyance, without question; ESP, oh yeah. While not psychic, Hans Berger may have been right all along.

The Six Million Dollar Man, for real

Jan Scheuermann lifted a chocolate bar to her mouth and took a bite. A grin spread across her face as she declared, "One small nibble for a woman, one giant bite for BCI."

BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004.

These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is.

BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann's case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.

A smart, vibrant woman in her early 50s, Scheuermann has been unable to use her arms and legs since she was diagnosed with a rare genetic disease at the age of 40. "I have not moved things for about 10 years . . . . This is the ride of my life," she said. "This is the roller coaster. This is skydiving." Other patients use brain-controlled implants to communicate, control wheelchairs, write emails and connect to the Internet.

The technology is surprisingly simple to understand. BrainGate is merely tapping into the brain's electrical signals in the same way that Berger's EEG and Fetz's electrical meter did. The BrainGate chip, once attached to the motor cortex, reads the brain's electrical signals and sends them to a computer, which interprets them and sends along instructions to other electrical devices like a robotic arm or a wheelchair.
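In pseudocode terms, the loop the article describes is simple, even though the engineering is not. The sketch below is hypothetical: the channel count matches typical electrode arrays, but the function names and the plain linear decoder are invented simplifications, not BrainGate's actual software:

```python
# Hypothetical sketch of a BCI decode loop: electrode signals in,
# device command out. Real decoders are trained per patient and are
# far more sophisticated than this linear map.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 96                        # typical electrode-array size

# Decoder weights; in practice these are learned during calibration,
# e.g. while the patient imagines moving a cursor to known targets.
W = rng.normal(size=(2, N_CHANNELS))

def read_electrodes():
    """Stand-in for hardware: one firing-rate sample per channel."""
    return rng.normal(size=N_CHANNELS)

def decode(rates):
    """Map firing rates to a 2-D velocity for an arm or cursor."""
    return W @ rates

for _ in range(3):                     # the real loop runs continuously
    vx, vy = decode(read_electrodes())
    print(f"move effector by dx={vx:+.2f}, dy={vy:+.2f}")
```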

In that respect, it's not much different from using your television remote to change the channel. Potentially the technology will enable bionics, restore communication abilities and give disabled people previously unimaginable access to the world.

Mind meld

But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.

Computers have been creeping closer to our brains since their invention. What started as large mainframes became desktops, then laptops, then tablets and smartphones that we hold only inches from our faces, and now Google Glass, which (albeit undergoing a redesign) delivers the Internet in a pair of eyeglasses.

Back in 2004, Google's founders told Playboy magazine that one day we'd have direct access to the Internet through brain implants, with "the entirety of the world's information as just one of our thoughts."

A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe.

Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.

The pursuit of artificial intelligence has been plagued by problems. For one, we keep changing the definition of intelligence. In the 1960s, we said a computer that could beat a backgammon champion would surely be intelligent. But in the 1970s, when Gammonoid beat Luigi Villa—the world champion backgammon player—by a score of 7-1, we decided that backgammon was too easy, requiring only straightforward calculations.

We changed the rules to focus on games of sophisticated rules and strategies, like chess. Yet when IBM's Deep Blue computer beat the reigning chess champion, Garry Kasparov, in 1997, we changed the rules again. No longer were sophisticated calculations or logical decision-making acts of intelligence.

Perhaps when computers could answer human knowledge questions, then they'd be intelligent. Of course, we had to revise that theory in 2011 when IBM's Watson computer soundly beat the best humans at Jeopardy. But all of these computers were horribly bad sports: they couldn't say hello, shake hands or make small talk of any kind. Each time a machine defies our definition of intelligence, we move to a new definition.

What makes us human?

We've done the same thing in nature. We once argued that what set us apart from other animals was our ability to use tools. Then we saw primates and crows using tools. So we changed our minds and said that what makes us intelligent is our ability to use language. Then biologists taught the first chimpanzee how to use sign language, and we decided that intelligence couldn't be about language after all.

Next came self-consciousness and awareness, until experiments unequivocally proved that dolphins are self-aware. With animal intelligence as well as machine intelligence, we keep changing the goalposts.

There are those who believe we can transcend the moving goalposts. These bold adventurers have most recently focused on brain science, attempting to reverse engineer the brain. As the theory goes, once we understand all of the brain's parts, we can recreate them to build an intelligent system.

But there are two problems with this approach. First, the inner workings of the brain are largely a mystery. Neuroscience is making tremendous progress, but it is still early.

The second issue with reverse engineering the brain is more fundamental. Just as the Wright brothers didn't learn to fly by dissecting birds, we will not learn to create intelligence by recreating a brain. It is pretty clear that an intelligent machine will look nothing like a three-pound wrinkly lump of clay, nor will it have cells or blood or fat.

Daniel Dennett, University Professor and Austin B. Fletcher Professor of Philosophy at Tufts—whom I consider a mentor and a guide on the quest to solving the mysteries of the mind—was an advocate of reverse engineering at one point. But he recently changed course, saying "I'm trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart."

Dennett's mistake was to reduce the brain to the neuron in an attempt to rebuild it. That is reducing the brain one step too far, pushing us from the edge of the forest to deep into the trees. This is the danger in any kind of reverse engineering. Biologists reduced ant colonies down to individuals, but we have now learned that the ant network, the colony, is the critical level. Reducing flight to the feathers of a bird would not have worked, but reducing it to wingspan did the trick. Feathers are one step too far, just as are ants and neurons.

Scientists have oversimplified the function of a neuron, treating it as a predictable switching device that fires on and off. That would be incredibly convenient if it were true. But neurons are only logical when they work—and a neuron misfires up to 90 percent of the time. Artificial intelligence almost universally ignores this fact.

The new intelligence

Focusing on a single neuron's on/off switch misses what is happening with the network of neurons, which performs amazing feats. The faultiness of the individual neuron allows for the plasticity and adaptive nature of the network as a whole. Intelligence cannot be replicated by creating a bunch of switches, faulty or not. Instead, we must focus on the network.
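A toy simulation makes the point concrete (my construction, with invented numbers, not a biophysical model): even when each unit relays a signal correctly only 60 percent of the time, a vote across a few hundred units is almost never wrong:

```python
# Toy model: unreliable units, reliable network. Each "neuron" relays
# a binary signal correctly with probability 0.6; a majority vote over
# 501 of them is correct essentially always.
import random

random.seed(1)

def noisy_neuron(signal, p_correct=0.6):
    return signal if random.random() < p_correct else 1 - signal

def network_vote(signal, n_neurons=501):
    votes = sum(noisy_neuron(signal) for _ in range(n_neurons))
    return 1 if votes > n_neurons / 2 else 0

trials = 1000
single = sum(noisy_neuron(1) == 1 for _ in range(trials)) / trials
network = sum(network_vote(1) == 1 for _ in range(trials)) / trials
print(f"single unit correct:    {single:.1%}")   # roughly 60%
print(f"network of 501 correct: {network:.1%}")  # essentially 100%
```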

Neurons may be good analogs for transistors and maybe even computer chips, but they're not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn't allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network.

It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes.

Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.

Significant other

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Sig...