
Sunday, October 3, 2021

Laissez-faire

From Wikipedia, the free encyclopedia

Laissez-faire (/ˌlɛsˈfɛər/ LESS-ay-FAIR; from French: laissez faire [lɛse fɛʁ], lit. 'let do') is an economic system in which transactions between private groups of people are free from or almost free from any form of economic interventionism such as regulation and subsidies. As a system of thought, laissez-faire rests on the axioms that the individual is the basic unit in society and has a natural right to freedom; that the physical order of nature is a harmonious and self-regulating system; and that corporations are creatures of the state and therefore the citizens must watch them closely due to their propensity to disrupt the Smithian spontaneous order.

These axioms constitute the basic elements of laissez-faire thought. Another basic principle holds that markets should be competitive, a rule that the early advocates of laissez-faire always emphasized. With the aims of maximizing freedom and of allowing markets to self-regulate, early advocates of laissez-faire proposed an impôt unique, a tax on land rent (similar to Georgism) to replace all taxes that they saw as damaging welfare by penalizing production.

Proponents of laissez-faire argue for a complete separation of government from the economic sector. The phrase laissez-faire is part of a larger French phrase and literally translates to "let [it/them] do", but in this context it usually means to "let it be" and, as an expression, "laid back". Although never practiced with full consistency, laissez-faire capitalism emerged in the mid-18th century and was further popularized by Adam Smith's book The Wealth of Nations. It was most prominent in Britain and the United States during this period, but both economies became steadily more controlled throughout the 19th and 20th centuries. While associated with capitalism in common usage, there are also non-capitalist forms of laissez-faire, including some forms of market socialism.

Etymology and usage

The term laissez-faire likely originated in a meeting that took place around 1681 between powerful French Controller-General of Finances Jean-Baptiste Colbert and a group of French businessmen headed by M. Le Gendre. When the eager mercantilist minister asked how the French state could be of service to the merchants and help promote their commerce, Le Gendre replied simply: "Laissez-nous faire" ("Leave it to us" or "Let us do [it]", the French verb not requiring an object).

The anecdote on the Colbert–Le Gendre meeting appeared in a 1751 article in the Journal économique, written by French minister and champion of free trade René de Voyer, Marquis d'Argenson—also the first known appearance of the term in print. Argenson himself had used the phrase earlier (1736) in his own diaries in a famous outburst:

Laissez faire, telle devrait être la devise de toute puissance publique, depuis que le monde est civilisé [...]. Détestable principe que celui de ne vouloir grandir que par l'abaissement de nos voisins ! Il n'y a que la méchanceté et la malignité du cœur de satisfaites dans ce principe, et l’intérêt y est opposé. Laissez faire, morbleu ! Laissez faire !!

Let go, which should be the motto of all public power, since the world was civilized [...]. [It is] a detestable principle of those that want to enlarge [themselves] but by the abasement of our neighbours. There is but the wicked and the malignant heart[s] [who are] satisfied by this principle and [its] interest is opposed. Let go, for God's sake! Let go!!

— René Louis de Voyer de Paulmy d'Argenson

Vincent de Gournay, a French Physiocrat and intendant of commerce in the 1750s, popularized the term laissez-faire as he allegedly adopted it from François Quesnay's writings on China. Quesnay coined the phrases laissez-faire and laissez-passer, laissez-faire being a translation of the Chinese term wu wei (無為). Gournay ardently supported the removal of restrictions on trade and the deregulation of industry in France. Delighted with the Colbert–Le Gendre anecdote, he forged it into a larger maxim all his own: "Laissez faire et laissez passer" ("Let do and let pass"). His motto has also been identified as the longer "Laissez faire et laissez passer, le monde va de lui même !" ("Let do and let pass, the world goes on by itself!"). Although Gournay left no written tracts on his economic policy ideas, he had immense personal influence on his contemporaries, notably his fellow Physiocrats, who credit both the laissez-faire slogan and the doctrine to Gournay.

Before d'Argenson or Gournay, P. S. de Boisguilbert had enunciated the phrase "On laisse faire la nature" ("Let nature run its course"). D'Argenson himself during his life was better known for the similar, but less-celebrated motto "Pas trop gouverner" ("Govern not too much").

The Physiocrats proclaimed laissez-faire in 18th-century France, placing it at the very core of their economic principles, and famous economists, beginning with Adam Smith, developed the idea. It is with the Physiocrats and the classical political economy that the term laissez-faire is ordinarily associated. The book Laissez Faire and the General-Welfare State states: "The physiocrats, reacting against the excessive mercantilist regulations of the France of their day, expressed a belief in a 'natural order' or liberty under which individuals in following their selfish interests contributed to the general good. Since, in their view, this natural order functioned successfully without the aid of government, they advised the state to restrict itself to upholding the rights of private property and individual liberty, to removing all artificial barriers to trade, and to abolishing all useless laws".

The French phrase laissez-faire gained currency in English-speaking countries with the spread of Physiocratic literature in the late 18th century. George Whatley's 1774 Principles of Trade (co-authored with Benjamin Franklin) re-told the Colbert-LeGendre anecdote—this may mark the first appearance of the phrase in an English-language publication.

Herbert Spencer was opposed to a slightly different application of laissez faire—to "that miserable laissez-faire" that leads to men's ruin, saying: "Along with that miserable laissez-faire which calmly looks on while men ruin themselves in trying to enforce by law their equitable claims, there goes activity in supplying them, at other men's cost, with gratis novel-reading!" In Spencer's case, the right of private ownership was being assailed and it was that miserable spirit of laissez-faire in halls of legislation that exhausted men in the effort of protecting their right.

As a product of the Enlightenment, laissez-faire was "conceived as the way to unleash human potential through the restoration of a natural system, a system unhindered by the restrictions of government". In a similar vein, Adam Smith viewed the economy as a natural system and the market as an organic part of that system. Smith saw laissez-faire as a moral program and the market its instrument to ensure men the rights of natural law. By extension, free markets become a reflection of the natural system of liberty. For Smith, laissez-faire was "a program for the abolition of laws constraining the market, a program for the restoration of order and for the activation of potential growth".

However, Smith and notable classical economists such as Thomas Malthus and David Ricardo did not use the phrase. Jeremy Bentham used the term, but it was probably James Mill's reference to the laissez-faire maxim (together with the "Pas trop gouverner" motto) in an 1824 entry for the Encyclopædia Britannica that really brought the term into wider English usage. With the advent of the Anti-Corn Law League (founded 1838), the term received much of its English meaning.

Smith first used the metaphor of an invisible hand in his book The Theory of Moral Sentiments (1759) to describe the unintentional effects of economic self-organization from economic self-interest. Although not the metaphor itself, the idea lying behind the invisible hand belongs to Bernard de Mandeville and his Fable of the Bees (1705). In political economy, that idea and the doctrine of laissez-faire have long been closely related. Some have characterized the invisible-hand metaphor as one for laissez-faire, although Smith never actually used the term himself. In Third Millennium Capitalism (2000), Wyatt M. Rogers Jr. notes a trend whereby recently "conservative politicians and economists have chosen the term 'free-market capitalism' in lieu of laissez-faire".

American individualist anarchists such as Benjamin Tucker saw themselves as economic laissez-faire socialists and political individualists while arguing that their "anarchistic socialism" or "individual anarchism" was "consistent Manchesterism".

History

Europe

In Europe, the laissez-faire movement was first widely promoted by the Physiocrats, a movement that included Vincent de Gournay (1712–1759), a successful merchant turned political figure. Gournay is postulated to have adapted the Taoist concept wu wei, from the writings on China by François Quesnay (1694–1774). Gournay held that government should allow the laws of nature to govern economic activity, with the state only intervening to protect life, liberty and property. François Quesnay and Anne Robert Jacques Turgot, Baron de l'Aulne took up Gournay's ideas. Quesnay had the ear of the King of France, Louis XV, and in 1754 persuaded him to give laissez-faire a try. On September 17, the King abolished all tolls and restraints on the sale and transport of grain. For more than a decade, the experiment appeared successful, but 1768 saw a poor harvest, and the cost of bread rose so high that there was widespread starvation while merchants exported grain in order to obtain the best profit. In 1770, the Comptroller-General of Finances Joseph Marie Terray revoked the edict allowing free trade in grain.

The doctrine of laissez-faire became an integral part of 19th-century European liberalism. Just as liberals supported freedom of thought in the intellectual sphere, so were they equally prepared to champion the principles of free trade and free competition in the sphere of economics, seeing the state as merely a passive policeman, protecting private property and administering justice, but not interfering with the affairs of its citizens. Businessmen, British industrialists in particular, were quick to associate these principles with their own economic interests. Many of the ideas of the physiocrats spread throughout Europe and were adopted to a greater or lesser extent in Sweden, Tuscany, Spain and in the newly created United States. Adam Smith, author of The Wealth of Nations (1776), met Quesnay and acknowledged his influence.

In Britain, the newspaper The Economist (founded in 1843) became an influential voice for laissez-faire capitalism. Laissez-faire advocates opposed food aid for famines occurring within the British Empire. In 1847, referring to the famine then underway in Ireland, founder of The Economist James Wilson wrote: "It is no man's business to provide for another". More specifically, in An Essay on the Principle of Population, Malthus argued that there was nothing that could be done to avoid famines because he felt he had mathematically proven that population growth tends to exceed growth in food production. However, The Economist campaigned against the Corn Laws that protected landlords in the United Kingdom of Great Britain and Ireland against competition from less expensive foreign imports of cereal products. The Great Famine in Ireland in 1845 led to the repeal of the Corn Laws in 1846. The tariffs on grain which kept the price of bread artificially high were repealed. However, repeal of the Corn Laws came too late to stop the Irish famine, partly because it was done in stages over three years.

A group that became known as the Manchester Liberals, to which Richard Cobden (1804–1865) and John Bright (1811–1889) belonged, were staunch defenders of free trade. After the death of Cobden, the Cobden Club (founded in 1866) continued their work. In 1860, Britain and France concluded a trade treaty, after which other European countries signed several similar treaties. The breakdown of laissez-faire as practised by the British Empire was partly led by British companies eager for state support of their positions abroad, in particular British oil companies.

United States

Frank Bourgin's study of the Constitutional Convention and subsequent decades argues that direct government involvement in the economy was intended by the Founding Fathers. The reason for this was the economic and financial chaos the nation suffered under the Articles of Confederation. The goal was to ensure that dearly-won political independence was not lost by being economically and financially dependent on the powers and princes of Europe. The creation of a strong central government able to promote science, invention, industry and commerce was seen as an essential means of promoting the general welfare and making the economy of the United States strong enough for them to determine their own destiny. Others view Bourgin's study, written in the 1940s and not published until 1989, as an over-interpretation of the evidence, intended originally to defend the New Deal and later to counter Ronald Reagan's economic policies.

Historian Kathleen G. Donohue argues that in the 19th century liberalism in the United States had distinctive characteristics and that "at the center of classical liberal theory [in Europe] was the idea of laissez-faire. To the vast majority of American classical liberals, however, laissez-faire did not mean "no government intervention" at all. On the contrary, they were more than willing to see government provide tariffs, railroad subsidies, and internal improvements, all of which benefited producers". Notable examples of government intervention in the period prior to the American Civil War include the establishment of the Patent Office in 1802; the establishment of the Office of Standard Weights and Measures in 1830; the creation of the Coast and Geodetic Survey in 1807 and other measures to improve river and harbor navigation; the various Army expeditions to the west, beginning with Lewis and Clark's Corps of Discovery in 1804 and continuing into the 1870s, almost always under the direction of an officer from the Army Corps of Topographical Engineers and which provided crucial information for the overland pioneers that followed; the assignment of Army Engineer officers to assist or direct the surveying and construction of the early railroads and canals; and the establishment of the First Bank of the United States and Second Bank of the United States as well as various protectionist measures (e.g. the tariff of 1828). Several of these proposals met with serious opposition and required a great deal of horse-trading to be enacted into law. For instance, the First National Bank would not have reached the desk of President George Washington in the absence of an agreement that was reached between Alexander Hamilton and several Southern members of Congress to locate the capital in the District of Columbia. In contrast to Hamilton and the Federalists was Thomas Jefferson and James Madison's opposing political party, the Democratic-Republicans.

Most of the early opponents of laissez-faire capitalism in the United States subscribed to the American School. This school of thought was inspired by the ideas of Hamilton, who proposed the creation of a government-sponsored bank and increased tariffs to favor Northern industrial interests. Following Hamilton's death, the more abiding protectionist influence in the antebellum period came from Henry Clay and his American System. In the early 19th century, "it is quite clear that the laissez-faire label is an inappropriate one" to apply to the relationship between the United States government and industry. In the mid-19th century, the United States followed the Whig tradition of economic nationalism, which included increased state control, regulation and macroeconomic development of infrastructure. Public works, such as the provision and regulation of transportation such as railroads, took effect. The Pacific Railway Acts provided for the development of the First Transcontinental Railroad. In order to help pay for its war effort in the Civil War, the United States government imposed its first personal income tax on 5 August 1861 as part of the Revenue Act of 1861 (3% of all incomes over US$800; rescinded in 1872).

Following the Civil War, the movement towards a mixed economy accelerated. Protectionism increased with the McKinley Tariff of 1890 and the Dingley Tariff of 1897. Government regulation of the economy expanded with the enactment of the Interstate Commerce Act of 1887 and the Sherman Anti-trust Act. The Progressive Era saw the enactment of more controls on the economy as evidenced by the Woodrow Wilson administration's New Freedom program. Following World War I and the Great Depression, the United States turned to a mixed economy which combined free enterprise with a progressive income tax and in which from time to time the government stepped in to support and protect American industry from competition from overseas. For example, in the 1980s the government sought to protect the automobile industry by "voluntary" export restrictions from Japan.

In 1986, Pietro S. Nivola wrote: "By and large, the comparative strength of the dollar against major foreign currencies has reflected high U.S. interest rates driven by huge federal budget deficits. Hence, the source of much of the current deterioration of trade is not the general state of the economy, but rather the government's mix of fiscal and monetary policies – that is, the problematic juxtaposition of bold tax reductions, relatively tight monetary targets, generous military outlays, and only modest cuts in major entitlement programs. Put simply, the roots of the trade problem and of the resurgent protectionism it has fomented are fundamentally political as well as economic".

A more recent advocate of total laissez-faire has been Objectivist Ayn Rand, who described it as "the abolition of any and all forms of government intervention in production and trade, the separation of State and Economics, in the same way and for the same reasons as the separation of Church and State". This viewpoint is summed up in what is known as the iron law of regulation, which states that all government economic regulation eventually leads to a net loss in social welfare. Rand's political philosophy emphasized individual rights (including property rights) and she considered laissez-faire capitalism the only moral social system because in her view it was the only system based on the protection of those rights. She opposed statism, which she understood to include theocracy, absolute monarchy, Nazism, fascism, communism, socialism and dictatorship. Rand believed that natural rights should be enforced by a constitutionally limited government. Although her political views are often classified as conservative or libertarian, she preferred the term "radical for capitalism". She worked with conservatives on political projects, but disagreed with them over issues such as religion and ethics. She denounced libertarianism, which she associated with anarchism. She rejected anarchism as a naïve theory based in subjectivism that could only lead to collectivism in practice.

Models

Capitalism

A closely related conception is that of raw or pure capitalism, or unrestrained capitalism, which refers to capitalism free of social regulations, with low, minimal or no government, operating almost entirely on the profit motive. Other than laissez-faire economics and anarcho-capitalism, it is not associated with a school of thought. It typically carries a bad connotation, hinting at a perceived need for restraint because social needs and securities cannot be adequately met by companies motivated solely by profit.

Robert Kuttner states that "for over a century, popular struggles in democracies have used the nation-state to temper raw capitalism. The power of voters has offset the power of capital. But as national barriers have come down in the name of freer commerce, so has the capacity of governments to manage capitalism in a broad public interest. So the real issue is not 'trade' but democratic governance".

The main issues of raw capitalism are said to lie in its disregard for quality, durability, sustainability, respect for the environment and human beings as well as a lack of morality. From this more critical angle, companies might naturally aim to maximise profits at the expense of workers' and broader social interests.

Advocates of laissez-faire capitalism argue that it relies on a constitutionally limited government that unconditionally bans the initiation of force and coercion, including fraud. Therefore, free market economists such as Milton Friedman and Thomas Sowell argue that, under such a system, relationships between companies and workers are purely voluntary and mistreated workers will seek better treatment elsewhere. Thus, most companies will compete for workers on the basis of pay, benefits, and work-life balance just as they compete with one another in the marketplace on the basis of the relative cost and quality of their goods.

So-called "raw" or "hyper-capitalism" is a prime motif of cyberpunk in dystopian works such as Syndicate.

Socialism

Although laissez-faire has been commonly associated with capitalism, there is a similar laissez-faire economic theory and system associated with socialism called left-wing laissez-faire, or free-market anarchism, also known as free-market anti-capitalism and free-market socialism to distinguish it from laissez-faire capitalism. An early example of this is mutualism as developed by Pierre-Joseph Proudhon in the 19th century, from which emerged individualist anarchism. Benjamin Tucker is one eminent American individualist anarchist who adopted a laissez-faire system he termed anarchistic socialism in contraposition to state socialism. This tradition has been recently associated with contemporary scholars such as Kevin Carson, Roderick T. Long, Charles W. Johnson, Brad Spangler, Sheldon Richman, Chris Matthew Sciabarra and Gary Chartier, who stress the value of radically free markets, termed freed markets to distinguish them from the common conception which these left-libertarians believe to be riddled with capitalist and statist privileges. Referred to as left-wing market anarchists or market-oriented left-libertarians, proponents of this approach strongly affirm the classical liberal ideas of self-ownership and free markets while maintaining that taken to their logical conclusions these ideas support anti-capitalist, anti-corporatist, anti-hierarchical and pro-labor positions in economics; anti-imperialism in foreign policy; and thoroughly radical views regarding such cultural issues as gender, sexuality and race. Critics of laissez-faire as commonly understood argue that a truly laissez-faire system would be anti-capitalist and socialist.

Kevin Carson describes his politics as on "the outer fringes of both free market libertarianism and socialism" and has also been highly critical of intellectual property. Carson has identified the work of Benjamin Tucker, Thomas Hodgskin, Ralph Borsodi, Paul Goodman, Lewis Mumford, Elinor Ostrom, Peter Kropotkin and Ivan Illich as sources of inspiration for his approach to politics and economics. In addition to individualist anarchist Benjamin Tucker's big four monopolies (land, money, tariffs and patents), he argues that the state has also transferred wealth to the wealthy by subsidizing organizational centralization in the form of transportation and communication subsidies. Carson believes that Tucker overlooked this issue due to Tucker's focus on individual market transactions whereas he also focuses on organizational issues. As such, the primary focus of his most recent work has been decentralized manufacturing and the informal and household economies. The theoretical sections of Carson's Studies in Mutualist Political Economy are also presented as an attempt to integrate marginalist critiques into the labor theory of value.

In response to claims that he uses the term capitalism incorrectly, Carson says he is deliberately choosing to resurrect what he claims to be an old definition of the term in order to "make a point". He claims that "the term 'capitalism,' as it was originally used, did not refer to a free market, but to a type of statist class system in which capitalists controlled the state and the state intervened in the market on their behalf". Carson holds that "capitalism, arising as a new class society directly from the old class society of the Middle Ages, was founded on an act of robbery as massive as the earlier feudal conquest of the land. It has been sustained to the present by continual state intervention to protect its system of privilege without which its survival is unimaginable". Carson argues that in a truly laissez-faire system the ability to extract a profit from labor and capital would be negligible. Carson coined the pejorative term vulgar libertarianism, a phrase that describes the use of a free market rhetoric in defense of corporate capitalism and economic inequality. According to Carson, the term is derived from the phrase vulgar political economy which Karl Marx described as an economic order that "deliberately becomes increasingly apologetic and makes strenuous attempts to talk out of existence the ideas which contain the contradictions [existing in economic life]".

Gary Chartier offers an understanding of property rights as contingent yet tightly constrained social strategies, reflective of the importance of multiple, overlapping rationales for separate ownership and of natural law principles of practical reasonableness, defending robust yet non-absolute protections for these rights in a manner similar to that employed by David Hume. This account is distinguished both from Lockean and neo-Lockean views which deduce property rights from the idea of self-ownership and from consequentialist accounts that might license widespread ad hoc interference with the possessions of groups and individuals. Chartier uses this account to ground a clear statement of the natural law basis for the view that solidaristic wealth redistribution by individual persons is often morally required, but as a response by individuals and grass-roots networks to particular circumstances rather than as a state-driven attempt to achieve a particular distributive pattern. He advances detailed arguments for workplace democracy rooted in such natural law principles as subsidiarity, defending it as morally desirable and as a likely outcome of the elimination of injustice rather than as something to be mandated by the state.

Chartier has discussed natural law approaches to land reform and to the occupation of factories by workers. He objects on natural law grounds to intellectual property protections, drawing on his theory of property rights more generally and develops a general natural law account of boycotts. He has argued that proponents of genuinely freed markets should explicitly reject capitalism and identify with the global anti-capitalist movement while emphasizing that the abuses the anti-capitalist movement highlights result from state-tolerated violence and state-secured privilege rather than from voluntary cooperation and exchange. According to Chartier, "it makes sense for [freed-market advocates] to name what they oppose 'capitalism.' Doing so calls attention to the freedom movement's radical roots, emphasizes the value of understanding society as an alternative to the state, underscores the fact that proponents of freedom object to non-aggressive as well as aggressive restraints on liberty, ensures that advocates of freedom aren't confused with people who use market rhetoric to prop up an unjust status quo, and expresses solidarity between defenders of freed markets and workers — as well as ordinary people around the world who use "capitalism" as a short-hand label for the world-system that constrains their freedom and stunts their lives".

Criticism

Over the years, a number of economists have offered critiques of laissez-faire economics. Adam Smith acknowledged some moral ambiguities in the system of capitalism. Smith had misgivings concerning some aspects of each of the major character-types produced by modern capitalist society, namely the landlords, the workers and the capitalists. Smith claimed that "[t]he landlords' role in the economic process is passive. Their ability to reap a revenue solely from ownership of land tends to make them indolent and inept, and so they tend to be unable to even look after their own economic interests" and that "[t]he increase in population should increase the demand for food, which should increase rents, which should be economically beneficial to the landlords". According to Smith, the landlords should be in favour of policies which contribute to the growth in the wealth of nations, but they often are not, because of their own indolence-induced ignorance and intellectual flabbiness.

Many philosophers have written on the systems societies have created to manage their civilizations. Thomas Hobbes utilized the concept of a "state of nature," a time before any government or laws, as a starting point to consider the question. In this time, life would be a "war of all against all." Further, "In such condition, there is no place for industry; because the fruit thereof is uncertain . . . continual fear and danger of violent death, and the life of man solitary, poor, nasty, brutish, and short." Smith was quite clear that he believed that without morality and laws, society would fail. From that perspective, it would be strange for Smith to support a pure laissez-faire style of capitalism, and what he does support in Wealth of Nations is heavily dependent on the moral philosophy of his previous work, Theory of Moral Sentiments.

Regardless of political preference, all societies require shared moral values as a prerequisite on which to build laws to protect individuals from each other. Adam Smith wrote Wealth of Nations during the Enlightenment, a period when the prevailing attitude was that all things can be known. In effect, European thinkers, inspired by the likes of Isaac Newton and others, set about to "find the laws" of all things: that there existed a "natural law" underlying all aspects of life. They believed that these laws could be discovered and that everything in the universe could be rationally demystified and cataloged, including human interactions.

Critics and market abolitionists such as David McNally argue in the Marxist tradition that the logic of the market inherently produces inequitable outcomes and leads to unequal exchanges, arguing that Smith's moral intent and moral philosophy espousing equal exchange was undermined by the practice of the free market he championed. According to McNally, the development of the market economy involved coercion, exploitation and violence that Smith's moral philosophy could not countenance.

The British economist John Maynard Keynes condemned laissez-faire economic policy on several occasions. In The End of Laissez-faire (1926), one of the most famous of his critiques, Keynes argues that the doctrines of laissez-faire are dependent to some extent on improper deductive reasoning and says the question of whether a market solution or state intervention is better must be determined on a case-by-case basis.

The Austrian School economist Friedrich Hayek stated that a freely competitive, laissez-faire banking industry tends to be endogenously destabilizing and pro-cyclical, arguing that the need for central banking control was inescapable.

Karl Polanyi's The Great Transformation criticizes self-regulating markets as aberrational, unnatural phenomena which tend towards social disruption.

Critics have further argued that laissez-faire, not only in its original European form but especially in its American form, with its legalization and granting of doubtful loans, contributed to the recent global economic crisis and to the imbalance in the distribution of capital. On this view, American laissez-faire has become a kind of tool for circumventing economically useful laws.

Laissez-faire in its original sense means independence in private property and the means of production: protection of the rights of individuals against governmental and official aggression, and protection of the capital of independent individuals against other individuals, against foreign governments and, most importantly, against the government of their own system. This type of laissez-faire is, in fact, safe and helpful. But a look at the facts about the United States reveals that many companies that are somehow affiliated with the United States government yet are not mentioned in the budget document (in effect governmental, but treated as private, which invites misappropriation) are covered by the laissez-faire capitalist system; without the consent of the citizens, they have confiscated public property and, as private companies, circumvented the bans imposed on the government. Such companies have interfered in all economic fields, both in the United States and in the world, have in many ways violated the economic freedom of citizens, and have left laissez-faire debased and unrecognizable.

In fact, the inefficiency of laissez-faire is not due to the doctrine itself; rather, its implementation requires careful review and control. Although the doctrine was proposed to balance and ease economic progress, it has not eliminated poverty. With the emergence of monopolies in the world economy, created by corrupt governments under the cover of private and individual companies, the balance of supply and demand has been lost, and it is the labor force that has suffered these losses as its quality of life has declined.

Association of Professional Futurists

From Wikipedia, the free encyclopedia
 
Association of Professional Futurists
Abbreviation: APF
Formation: 2002
Type: Association
Legal status: Nonprofit 501(c)(6)
Headquarters: Washington, DC, USA
Region: Worldwide
Membership: 500 members, 40 countries
Chair: Prof. Shermon Cruz
Board: 12 directors, 4 continents

The Association of Professional Futurists (APF) was founded in 2002 to validate the competencies of emerging futurists. As analysts, speakers, managers or consultants, APF's credentialed members cultivate strategic foresight for their organizations and clients. APF represents the professional side of the futures movement, while groups such as the World Futures Studies Federation, the World Future Society or The Millennium Project represent its academic, popular, and activist expressions, respectively.

History

APF emerged as a network of practicing futurists who were utilizing futurology methods. As the field approached the year 2000, it began to renew old calls, and issue new ones, to raise its internal standards with regard to ethics, competencies, and quality of work. While few felt that futurists (an occupational interest group at best) might become a full-fledged recognized profession via certification, the nine members of APF's founding board, including Peter Bishop, Jennifer Jarratt, Andy Hines, and Herb Rubenstein, felt that foresight professionals should lead the global discussion about professional futures practice, encourage the use of futures and foresight in strategic decision making, and offer services, resources and training for foresight professionals to advance their skills and knowledge.

The Association of Professional Futurists has 500 individual members from 40 countries, including authors and speakers, such as Clem Bezold, Sohail Inayatullah, Thomas Frey, Alexandra Levit, Richard Slaughter, and Amy Webb. Beyond individuals, it has renowned organizational members, such as Arup Foresight, the Foresight Alliance, the Institute for the Future-Palo Alto, Institute for Futures Research-Stellenbosch, Kantar Foresight, Kairos Futures, Kedge, Leading Futurists LLC, OCAD University, SAMI Consulting, Shaping Tomorrow, and Tamkang University.

Instead of certifying members through coursework, APF chose to credential its members based on a peer-review assessment of their competencies. APF Professional Membership is conferred, following a portfolio review, on those who can, at the minimum, document performance in two of seven professional standards: consulting, organizational function, postgraduate degree, certificate program, speaking, teaching or writing. Full Members may use the appellation APF after their name. Besides its Full Member program, APF also offers Provisional, Associate, and Student Memberships.

Programs & Publications

APF's annual gatherings have been a key activity since its founding. The first gathering was an "Applied Futures Summit" in Seattle in April 2002, at which founders agreed to establish the Association. The second gathering, in Austin, TX, focused on "The Future of Futures," employing a scenario planning approach to explore the next decade of the field. Each subsequent gathering has focused on a particular topic, such as Design Thinking in Pasadena, CA; the Future of Virtual Reality in Las Vegas, NV; Global Health in Seattle, WA; Blockchain Futures in Brisbane, Australia; and Resurgent City in Pittsburgh, PA.

In addition to its annual gatherings, APF hosts shorter "Pro Dev" workshops preceding larger conferences, such as its September 2019 workshop in Mexico City on the "Praxis of Professional Futurists." APF members also conduct various events online, ranging from Twitter chats to webinars to day-long learning festivals, addressing topics such as the future of museums, the future of machine intelligence, diverse futures, and design thinking. In 2020, APF began to host monthly member-only "Foresight Friday" webinars to showcase outstanding work by its professional members.

APF's flagship publication for members is its quarterly newsletter, The Compass, published since 2003. It features recaps of APF events, articles on future trends, methodology salons, book reviews, plus member news and promotions. Non-members may view themed or conference editions.

Professionalism

Helping to raise the professionalism of futurists has been a perennial pursuit of the APF. In 2016, after three appointed studies over nine years, APF released a Foresight Competency Model, a product of 23 members from four continents that mapped the personal, academic, workplace, and technical competencies that futurists draw upon to support their work as consulting, organizational or academic futurists.

The Foresight Competency Model addresses the basic question of what one ought to be capable of doing as a professional futurist. At the center of the model is a circle of six foresight competencies: Framing, Scanning, Futuring, Designing, Visioning, and Adapting.

Six Foresight Competencies

Framing: Defining the focal issue and current conditions
Scanning: Exploring signals of change and cross-impacts
Futuring: Identifying a baseline and alternative futures
Visioning: Developing and committing to a preferred future
Designing: Developing prototypes and artifacts to achieve goals
Adapting: Generating strategies for alternative futures

The Foresight Competency Model also defined sector competencies for different types of foresight professionals, such as consulting or organizational futurists, at the entry, associate, and senior career level. The origins of the Foresight Competency Model arose from previous taxonomies of futures research methods that offered guidelines for carrying out successful strategic foresight, developed over four decades.

Futurist Recognition

APF's members annually select and recognize significant futures works. The first awards were announced in 2008. The ten 'most significant futures works' in 2008 included Peter Schwartz's The Art of the Long View, Wendell Bell's Foundations of Futures Studies: Human Science for a New Era, Bertrand de Jouvenel's L'Art de la Conjecture (The Art of Conjecture), and Ray Kurzweil's The Age of Spiritual Machines.

APF also has an annual student recognition program in which universities offering undergraduate, Masters and/or PhD programs in foresight and futures studies can submit up to three student works that the instructor(s) consider to be of exceptional quality in terms of originality, content, and contribution to the field.

As is the intention of many associations, APF has sought to improve the image and performance of the field. APF's credentialed members have written for and are cited in various journals and magazines such as Wired, Fast Company, Futures, Technological Forecasting and Social Change, Foresight, World Futures Review, The Futurist Journal, Futures & Foresight Science, and the Journal for Futures Studies.

APF is led by an international board of 12 futurists from five continents along with key volunteers. It is incorporated in the State of Delaware and is formed as a 501(c)(6) business league, with its headquarters in Washington, DC. It is considered exempt by the IRS as it is not organized for profit. APF's Twitter feed is @profuturists.

Nanomaterials

From Wikipedia, the free encyclopedia

Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 100 nm (the usual definition of nanoscale).

Nanomaterials research takes a materials science-based approach to nanotechnology, leveraging advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, thermo-physical or mechanical properties.

Nanomaterials are slowly becoming commercialized and beginning to emerge as commodities.

Definition

In ISO/TS 80004, nanomaterial is defined as the "material with any external dimension in the nanoscale or having internal structure or surface structure in the nanoscale", with nanoscale defined as the "length range approximately from 1 nm to 100 nm". This includes both nano-objects, which are discrete pieces of material, and nanostructured materials, which have internal or surface structure on the nanoscale; a nanomaterial may be a member of both these categories.

On 18 October 2011, the European Commission adopted the following definition of a nanomaterial: "A natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate and for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm – 100 nm. In specific cases and where warranted by concerns for the environment, health, safety or competitiveness the number size distribution threshold of 50% may be replaced by a threshold between 1% to 50%."
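
To make the threshold arithmetic concrete, here is a minimal Python sketch (an illustration, not a regulatory tool) of how the 50% number-count criterion could be applied to a measured number size distribution; the function name and the sample data are hypothetical.

```python
# Rough sketch: applying the EU definition's 50% number-count threshold
# to a measured number size distribution. Sample data are made up.

def is_nanomaterial_eu(diameters_nm, threshold=0.50):
    """Return True if at least `threshold` of the particles (by number)
    have an external dimension in the 1-100 nm range.

    diameters_nm: iterable of smallest external dimensions, in nm.
    threshold: 0.50 by default; the definition allows 1%-50% in
    specific cases (environment, health, safety, competitiveness).
    """
    particles = list(diameters_nm)
    in_range = sum(1 for d in particles if 1.0 <= d <= 100.0)
    return in_range / len(particles) >= threshold

# Hypothetical distribution: 60% of particles at 40 nm, 40% at 250 nm.
sample = [40.0] * 60 + [250.0] * 40
print(is_nanomaterial_eu(sample))  # True: 60% fall within 1-100 nm
```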

Sources

Engineered

Engineered nanomaterials have been deliberately engineered and manufactured by humans to have certain required properties.

Legacy nanomaterials are those that were in commercial production prior to the development of nanotechnology as incremental advancements over other colloidal or particulate materials. They include carbon black and titanium dioxide nanoparticles.

Incidental

Nanomaterials may be unintentionally produced as a byproduct of mechanical or industrial processes, through combustion and vaporization. Sources of incidental nanoparticles include vehicle engine exhausts, smelting, welding fumes, and combustion processes from domestic solid fuel heating and cooking. For instance, the class of nanomaterials called fullerenes is generated by burning gas, biomass, and candles. Incidental nanomaterials can also be byproducts of wear and corrosion. Incidental atmospheric nanoparticles are often referred to as ultrafine particles, which are unintentionally produced during an intentional operation and can contribute to air pollution.

Natural

Biological systems often feature natural, functional nanomaterials. The structure of foraminifera (mainly chalk) and viruses (protein, capsid), the wax crystals covering a lotus or nasturtium leaf, spider and spider-mite silk, the blue hue of tarantulas, the "spatulae" on the bottom of gecko feet, some butterfly wing scales, natural colloids (milk, blood), horny materials (skin, claws, beaks, feathers, horns, hair), paper, cotton, nacre, corals, and even our own bone matrix are all natural organic nanomaterials.

Natural inorganic nanomaterials occur through crystal growth in the diverse chemical conditions of the Earth's crust. For example, clays display complex nanostructures due to anisotropy of their underlying crystal structure, and volcanic activity can give rise to opals, which are an instance of naturally occurring photonic crystals due to their nanoscale structure. Fires represent particularly complex reactions and can produce pigments, cement, fumed silica, etc.

Natural sources of nanoparticles include combustion products of forest fires, volcanic ash, ocean spray, and the radioactive decay of radon gas. Natural nanomaterials can also be formed through weathering processes of metal- or anion-containing rocks, as well as at acid mine drainage sites.

Types

Nano-objects are often categorized by how many of their dimensions fall in the nanoscale. A nanoparticle is defined as a nano-object with all three external dimensions in the nanoscale, whose longest and shortest axes do not differ significantly. A nanofiber has two external dimensions in the nanoscale, with nanotubes being hollow nanofibers and nanorods being solid nanofibers. A nanoplate/nanosheet has one external dimension in the nanoscale, and if the two larger dimensions are significantly different it is called a nanoribbon. For nanofibers and nanoplates, the other dimensions may or may not be in the nanoscale, but must be significantly larger. In all cases, a significant difference is noted to typically be at least a factor of 3.
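
This dimensional taxonomy can be read as a small decision procedure. The Python sketch below is one illustrative encoding of those rules, using the factor-of-3 rule of thumb for a "significant" difference; the function and thresholds are assumptions for demonstration, and the hollow-versus-solid distinction (nanotube vs. nanorod) is left out of scope.

```python
# Minimal sketch of the dimensional taxonomy above: count how many external
# dimensions fall in the nanoscale (1-100 nm) and apply the factor-of-3
# rule of thumb for a "significant" difference. Illustrative, not ISO text.

NANO_MIN, NANO_MAX, SIGNIFICANT = 1.0, 100.0, 3.0

def classify_nano_object(dims_nm):
    """Classify a nano-object from its three external dimensions (nm)."""
    a, b, c = sorted(dims_nm)  # a = shortest, c = longest
    nano = [NANO_MIN <= d <= NANO_MAX for d in (a, b, c)]
    if all(nano):
        # Nanoparticle only if longest and shortest axes are comparable.
        return "nanoparticle" if c / a < SIGNIFICANT else "nanofiber"
    if nano[0] and nano[1]:
        return "nanofiber"    # nanotube if hollow, nanorod if solid
    if nano[0]:
        # Nanoribbon if the two larger dimensions differ significantly.
        return "nanoribbon" if c / b >= SIGNIFICANT else "nanoplate"
    return "not a nano-object"

print(classify_nano_object((20, 25, 30)))    # nanoparticle
print(classify_nano_object((5, 8, 4000)))    # nanofiber
print(classify_nano_object((2, 500, 600)))   # nanoplate
print(classify_nano_object((2, 300, 5000)))  # nanoribbon
```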

Nanostructured materials are often categorized by what phases of matter they contain. A nanocomposite is a solid containing at least one physically or chemically distinct region, or collection of regions, having at least one dimension in the nanoscale. A nanofoam has a liquid or solid matrix, filled with a gaseous phase, where one of the two phases has dimensions on the nanoscale. A nanoporous material is a solid material containing nanopores, voids in the form of open or closed pores of sub-micron lengthscales. A nanocrystalline material has a significant fraction of crystal grains in the nanoscale.

Nanoporous materials

The term nanoporous materials covers subsets of microporous and mesoporous materials. Microporous materials are porous materials with a mean pore size smaller than 2 nm, while mesoporous materials are those with pore sizes in the range of 2–50 nm. Microporous materials exhibit pore sizes with length scales comparable to small molecules. For this reason, such materials may serve valuable applications including separation membranes. Mesoporous materials are interesting for applications that require high specific surface areas, while enabling penetration for molecules that may be too large to enter the pores of a microporous material. In some sources, nanoporous materials and nanofoam are considered nanostructures but not nanomaterials because only the voids, and not the materials themselves, are nanoscale. Although the ISO definition only considers round nano-objects to be nanoparticles, other sources use the term nanoparticle for all shapes.

Nanoparticles

Nanoparticles have all three dimensions on the nanoscale. Nanoparticles can also be embedded in a bulk solid to form a nanocomposite.

Fullerenes

The fullerenes are a class of allotropes of carbon which conceptually are graphene sheets rolled into tubes or spheres. These include the carbon nanotubes (or silicon nanotubes) which are of interest both because of their mechanical strength and also because of their electrical properties.

[Image: Rotating view of C60, one kind of fullerene]

The first fullerene molecule to be discovered, and the family's namesake, buckminsterfullerene (C60), was prepared in 1985 by Richard Smalley, Robert Curl, James Heath, Sean O'Brien, and Harold Kroto at Rice University. The name was a homage to Buckminster Fuller, whose geodesic domes it resembles. Fullerenes have since been found to occur in nature. More recently, fullerenes have been detected in outer space.

For the past decade, the chemical and physical properties of fullerenes have been a hot topic in the field of research and development, and are likely to continue to be for a long time. In April 2003, fullerenes were under study for potential medicinal use: binding specific antibiotics to the structure of resistant bacteria and even targeting certain types of cancer cells such as melanoma. The October 2005 issue of Chemistry and Biology contains an article describing the use of fullerenes as light-activated antimicrobial agents. In the field of nanotechnology, heat resistance and superconductivity are among the properties attracting intense research.

A common method used to produce fullerenes is to send a large current between two nearby graphite electrodes in an inert atmosphere. The resulting carbon plasma arc between the electrodes cools into sooty residue from which many fullerenes can be isolated.

Many calculations have been done on fullerenes using ab initio quantum methods. By DFT and TDDFT methods one can obtain IR, Raman and UV spectra, and the results of such calculations can be compared with experimental results.
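
As a concrete sketch of that workflow, the snippet below uses the open-source PySCF package (one possible tool; the text does not name a specific code) to run a ground-state DFT calculation followed by TDDFT excitations. A water molecule stands in for C60 to keep the example cheap to run; a real fullerene study would load a C60 geometry and use a larger basis set, but the steps are the same.

```python
# Sketch of a DFT/TDDFT workflow with PySCF. Water stands in for C60
# so the example runs in seconds; the procedure is identical for larger
# molecules, only the geometry, basis, and cost change.
from pyscf import gto, dft, tddft

mol = gto.M(
    atom="""O  0.0000  0.0000  0.0000
            H  0.7570  0.5860  0.0000
            H -0.7570  0.5860  0.0000""",
    basis="6-31g",
)

# Ground-state Kohn-Sham DFT with the B3LYP functional.
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

# TDDFT (Tamm-Dancoff approximation) for the lowest excitations,
# from which a UV-Vis spectrum can be assembled.
td = tddft.TDA(mf)
td.nstates = 5
td.kernel()
print(td.e)  # excitation energies in Hartree
```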

Metal-based nanoparticles

Inorganic nanomaterials (e.g. quantum dots, nanowires and nanorods), because of their interesting optical and electrical properties, could be used in optoelectronics. Furthermore, the optical and electronic properties of nanomaterials, which depend on their size and shape, can be tuned via synthetic techniques. Those materials could be used in organic-material-based optoelectronic devices such as organic solar cells, OLEDs, etc. The operating principles of such devices are governed by photoinduced processes like electron transfer and energy transfer. The performance of the devices depends on the efficiency of the photoinduced process responsible for their functioning. Therefore, a better understanding of those photoinduced processes in organic/inorganic nanomaterial composite systems is necessary in order to use them in optoelectronic devices.

Nanoparticles or nanocrystals made of metals, semiconductors, or oxides are of particular interest for their mechanical, electrical, magnetic, optical, chemical and other properties. Nanoparticles have been used as quantum dots and as chemical catalysts such as nanomaterial-based catalysts. Recently, a range of nanoparticles have been extensively investigated for biomedical applications including tissue engineering, drug delivery, and biosensors.

Nanoparticles are of great scientific interest as they are effectively a bridge between bulk materials and atomic or molecular structures. A bulk material should have constant physical properties regardless of its size, but at the nano-scale this is often not the case. Size-dependent properties are observed such as quantum confinement in semiconductor particles, surface plasmon resonance in some metal particles and superparamagnetism in magnetic materials.

Nanoparticles exhibit a number of special properties relative to bulk material. For example, the bending of bulk copper (wire, ribbon, etc.) occurs with movement of copper atoms/clusters at about the 50 nm scale. Copper nanoparticles smaller than 50 nm are considered super hard materials that do not exhibit the same malleability and ductility as bulk copper. The change in properties is not always desirable. Ferroelectric materials smaller than 10 nm can switch their polarization direction using room temperature thermal energy, thus making them useless for memory storage. Suspensions of nanoparticles are possible because the interaction of the particle surface with the solvent is strong enough to overcome differences in density, which usually result in a material either sinking or floating in a liquid. Nanoparticles often have unexpected visual properties because they are small enough to confine their electrons and produce quantum effects. For example, gold nanoparticles appear deep red to black in solution.

The often very high surface area to volume ratio of nanoparticles provides a tremendous driving force for diffusion, especially at elevated temperatures. Sintering is possible at lower temperatures and over shorter durations than for larger particles. This theoretically does not affect the density of the final product, though flow difficulties and the tendency of nanoparticles to agglomerate do complicate matters. The surface effects of nanoparticles also reduce the incipient melting temperature.
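
A back-of-envelope Python sketch makes the surface argument concrete: for a sphere the surface-area-to-volume ratio is 6/d, and the fraction of atoms sitting in the outermost atomic layer rises steeply below about 10 nm. The ~0.3 nm shell thickness is an assumed, illustrative value, not a material constant.

```python
# Why surface effects dominate at the nanoscale: SA/V for a sphere is 6/d,
# and the estimated surface-atom fraction grows steeply as d shrinks.

def surface_to_volume(d_nm):
    """Surface-area-to-volume ratio (1/nm) for a sphere of diameter d."""
    return 6.0 / d_nm

def surface_atom_fraction(d_nm, shell_nm=0.3):
    """Rough fraction of atoms within one (assumed 0.3 nm) surface layer."""
    r = d_nm / 2.0
    return 1.0 - ((r - shell_nm) / r) ** 3

for d in (1000.0, 100.0, 10.0, 2.0):
    print(f"d = {d:6.0f} nm  SA/V = {surface_to_volume(d):5.2f}/nm  "
          f"surface atoms ~ {surface_atom_fraction(d):.0%}")
# At 1000 nm essentially no atoms are at the surface; at 2 nm roughly
# two thirds are, which is why melting and sintering behavior change.
```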

One-dimensional nanostructures

The smallest possible crystalline wires with cross-section as small as a single atom can be engineered in cylindrical confinement. Carbon nanotubes, a natural semi-1D nanostructure, can be used as a template for synthesis. Confinement provides mechanical stabilization and prevents linear atomic chains from disintegration; other structures of 1D nanowires are predicted to be mechanically stable even upon isolation from the templates.

Two-dimensional nanostructures

2D materials are crystalline materials consisting of a two-dimensional single layer of atoms. The most important representative, graphene, was discovered in 2004. Thin films with nanoscale thicknesses are considered nanostructures, but are sometimes not considered nanomaterials because they do not exist separately from the substrate.

Bulk nanostructured materials

Some bulk materials contain features on the nanoscale, including nanocomposites, nanocrystalline materials, nanostructured films, and nanotextured surfaces.

The box-shaped graphene (BSG) nanostructure is an example of a 3D nanomaterial. It appeared after mechanical cleavage of pyrolytic graphite. This nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately 1 nm; the typical width of channel facets is about 25 nm.

Applications

Nanomaterials are used in a variety of manufacturing processes, products and healthcare applications, including paints, filters, insulation and lubricant additives. In healthcare, nanozymes are nanomaterials with enzyme-like characteristics; they are an emerging type of artificial enzyme and have been used for a wide range of applications such as biosensing, bioimaging, tumor diagnosis, antibiofouling and more. High-quality filters may be produced using nanostructures; these filters are capable of removing particulates as small as a virus, as seen in a water filter created by Seldon Technologies. Nanomaterials membrane bioreactors (NMs-MBR), the next generation of the conventional MBR, have recently been proposed for the advanced treatment of wastewater. In the air purification field, nanotechnology was used to combat the spread of MERS in Saudi Arabian hospitals in 2012. Nanomaterials are being used in modern and human-safe insulation technologies; in the past they were found in asbestos-based insulation. As a lubricant additive, nanomaterials have the ability to reduce friction in moving parts. Worn and corroded parts can also be repaired with self-assembling anisotropic nanoparticles called TriboTEX.

Nanomaterials have also been applied in a range of industries and consumer products. Mineral nanoparticles such as titanium oxide have been used to improve UV protection in sunscreen. In the sports industry, lighter bats have been produced with carbon nanotubes to improve performance. Another application is in the military, where mobile pigment nanoparticles have been used to create more effective camouflage.

Nanomaterials can also be used in three-way catalyst (TWC) applications. TWC converters have the advantage of controlling the emission of nitrogen oxides (NOx), which are precursors to acid rain and smog. In core-shell structures, nanomaterials form the shell as the catalyst support to protect noble metals such as palladium and rhodium. The primary function is that the supports can be used for carrying the active components of catalysts, making them highly dispersed, reducing the use of noble metals, enhancing catalyst activity, and improving mechanical strength.

Synthesis

The goal of any synthetic method for nanomaterials is to yield a material that exhibits properties arising from its characteristic length scale being in the nanometer range (1–100 nm). Accordingly, the synthetic method should exhibit control of size in this range so that one property or another can be attained. Often the methods are divided into two main types, "bottom up" and "top down".

Bottom up methods

Bottom up methods involve the assembly of atoms or molecules into nanostructured arrays. In these methods the raw material sources can be in the form of gases, liquids or solids. The latter require some sort of disassembly prior to their incorporation into a nanostructure. Bottom up methods generally fall into two categories: chaotic and controlled.

Chaotic processes involve elevating the constituent atoms or molecules to a chaotic state and then suddenly changing the conditions so as to make that state unstable. Through the clever manipulation of any number of parameters, products form largely as a result of the ensuing kinetics. The collapse from the chaotic state can be difficult or impossible to control, so ensemble statistics often govern the resulting size distribution and average size. Accordingly, nanoparticle formation is controlled through manipulation of the end state of the products. Examples of chaotic processes are laser ablation, exploding wire, arc, flame pyrolysis, combustion, and precipitation synthesis techniques.
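
The dominance of ensemble statistics can be made concrete with a toy model: if each nucleus undergoes many small, independent multiplicative growth events, the logarithm of its diameter is a sum of random terms, and the ensemble tends toward the log-normal size distribution commonly reported for such processes. The Python sketch below is a minimal illustration with invented growth parameters, not a model of any particular technique.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: each nucleus experiences many independent multiplicative
    # growth events, so log(diameter) is a sum of random terms and the
    # ensemble converges toward a log-normal size distribution.
    n_particles = 100_000
    n_events = 50                 # growth events per particle (invented)
    seed_nm = 1.0                 # hypothetical seed diameter
    log_growth = rng.normal(loc=0.04, scale=0.03, size=(n_particles, n_events))
    diameters = seed_nm * np.exp(log_growth.sum(axis=1))

    print(f"mean diameter: {diameters.mean():.1f} nm")
    print(f"spread (std):  {diameters.std():.1f} nm")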

Controlled processes involve the controlled delivery of the constituent atoms or molecules to the site(s) of nanoparticle formation such that the nanoparticle can grow to a prescribed size in a controlled manner. Generally the state of the constituent atoms or molecules is never far from that needed for nanoparticle formation. Accordingly, nanoparticle formation is controlled through the control of the state of the reactants. Examples of controlled processes are self-limiting growth solution, self-limited chemical vapor deposition, shaped-pulse femtosecond laser techniques, and molecular beam epitaxy.
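
The self-limiting idea can be sketched in the spirit of self-limited chemical vapor deposition: each cycle can saturate only the available surface sites, so growth per cycle approaches a fixed cap and the final thickness is set by the cycle count rather than by the dose. All numbers below are invented for illustration.

    import numpy as np

    # Each precursor dose saturates surface sites (Langmuir-type kinetics),
    # so growth per cycle is capped and thickness scales with cycle count.
    def thickness_nm(n_cycles, dose, k=2.0, max_growth_per_cycle_nm=0.11):
        coverage = 1.0 - np.exp(-k * dose)   # fractional site saturation
        return n_cycles * max_growth_per_cycle_nm * coverage

    for dose in (0.5, 1.0, 5.0, 50.0):
        print(f"dose {dose:5.1f} (arb.) -> {thickness_nm(100, dose):.1f} nm "
              f"after 100 cycles")

Past a modest dose, extra precursor no longer changes the result; size control comes from counting cycles, which is what makes such processes "controlled" in the sense above.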

Top down methods

Top down methods apply some 'force' (e.g. mechanical force, laser) to break bulk materials into nanoparticles. A popular method for mechanically breaking bulk materials into nanomaterials is ball milling. Nanoparticles can also be made by laser ablation, which applies short-pulse lasers (e.g. femtosecond lasers) to ablate a solid target.

Characterization

Novel effects can occur in materials when structures are formed with sizes comparable to any one of many possible length scales, such as the de Broglie wavelength of electrons or the optical wavelengths of high-energy photons. In these cases quantum-mechanical effects can dominate material properties. One example is quantum confinement, where the electronic properties of solids are altered by great reductions in particle size. The optical properties of nanoparticles, e.g. fluorescence, also become a function of the particle diameter. This effect does not come into play in going from macroscopic to micrometer dimensions, but becomes pronounced when the nanometer scale is reached.
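
This size dependence can be made concrete with the Brus equation, a standard effective-mass estimate of the band gap of a spherical semiconductor nanocrystal. The sketch below plugs in approximate literature values for CdSe; the parameters are indicative only.

    import numpy as np

    hbar = 1.054571817e-34    # J*s
    m0   = 9.1093837e-31      # electron rest mass, kg
    e    = 1.602176634e-19    # C
    eps0 = 8.8541878128e-12   # F/m

    E_bulk_eV = 1.74                  # bulk CdSe band gap (approx.)
    m_e, m_h = 0.13 * m0, 0.45 * m0   # effective masses (approx.)
    eps_r = 10.6                      # relative permittivity (approx.)

    def band_gap_eV(radius_nm):
        # Brus equation: bulk gap + 1/R^2 confinement - 1/R Coulomb term
        R = radius_nm * 1e-9
        confinement = hbar**2 * np.pi**2 / (2 * R**2) * (1/m_e + 1/m_h)
        coulomb = 1.8 * e**2 / (4 * np.pi * eps0 * eps_r * R)
        return E_bulk_eV + (confinement - coulomb) / e

    for r in (1.5, 2.0, 3.0, 5.0):
        E = band_gap_eV(r)
        print(f"R = {r:.1f} nm: gap ~ {E:.2f} eV, emission ~ {1240/E:.0f} nm")

The confinement term grows as 1/R², so the gap widens and the fluorescence blue-shifts as the particle shrinks, which is why quantum-dot emission color can be tuned by size alone.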

In addition to optical and electronic properties, the novel mechanical properties of many nanomaterials are the subject of nanomechanics research. When added to a bulk material, nanoparticles can strongly influence its mechanical properties, such as stiffness or elasticity. For example, traditional polymers can be reinforced by nanoparticles (such as carbon nanotubes), resulting in novel materials which can be used as lightweight replacements for metals. Such composite materials may enable a weight reduction accompanied by an increase in stability and improved functionality.
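
An idealized bound illustrates the leverage a stiff nanofiller can offer. The Voigt rule of mixtures simply weights the moduli of matrix and filler by volume fraction; real nanocomposites fall well below this bound because load transfer, dispersion, and orientation all intervene, so the numbers below (approximate values for epoxy and carbon nanotubes) are indicative only.

    # Voigt (rule-of-mixtures) upper bound on composite stiffness.
    def voigt_modulus_GPa(E_matrix, E_filler, volume_fraction):
        return (1 - volume_fraction) * E_matrix + volume_fraction * E_filler

    E_epoxy = 3.0     # GPa, typical epoxy matrix (approx.)
    E_cnt = 1000.0    # GPa, axial modulus of a carbon nanotube (approx.)
    for vf in (0.01, 0.05):
        E_c = voigt_modulus_GPa(E_epoxy, E_cnt, vf)
        print(f"{vf:.0%} CNT by volume: upper-bound modulus {E_c:.0f} GPa")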

Finally, nanostructured materials with small particle size, such as zeolites and asbestos, are used as catalysts in a wide range of critical industrial chemical reactions. The further development of such catalysts can form the basis of more efficient, environmentally friendly chemical processes.

The first observations and size measurements of nanoparticles were made during the first decade of the 20th century. Zsigmondy made detailed studies of gold sols and other nanomaterials with sizes down to 10 nm and less, publishing a book on the subject in 1914. He used an ultramicroscope that employs a dark-field method for seeing particles with sizes much smaller than the wavelength of light.

There are traditional techniques, developed during the 20th century in interface and colloid science, for characterizing nanomaterials. These are widely used for first-generation passive nanomaterials.

These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solution. Some of the methods are based on light scattering. Others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions.
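
As one example, dynamic light scattering infers a hydrodynamic diameter from the measured diffusion coefficient of the suspended particles via the Stokes-Einstein relation. A minimal sketch, with an invented measurement in water at 25 °C:

    from math import pi

    k_B = 1.380649e-23   # J/K
    T = 298.15           # K
    eta = 8.9e-4         # Pa*s, viscosity of water at 25 C (approx.)

    def hydrodynamic_diameter_nm(D):
        # Stokes-Einstein: d = k_B * T / (3 * pi * eta * D)
        return k_B * T / (3 * pi * eta * D) * 1e9

    D_measured = 4.9e-12   # m^2/s, hypothetical measured value
    print(f"hydrodynamic diameter: {hydrodynamic_diameter_nm(D_measured):.0f} nm")

Note that the result is a hydrodynamic size: an aggregated sample yields the diameter of the aggregate, not of the primary particle, which is exactly why this characterization matters.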

There is also a group of traditional techniques for characterizing the surface charge or zeta potential of nanoparticles in solution. This information is required for proper system stabilization, preventing aggregation or flocculation. These methods include microelectrophoresis, electrophoretic light scattering and electroacoustics. The last, for instance the colloid vibration current method, is suitable for characterizing concentrated systems.
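
In the common Smoluchowski limit (particle much larger than the Debye screening length), the measured electrophoretic mobility converts directly to a zeta potential. A minimal sketch with an invented mobility value:

    eps0 = 8.8541878128e-12   # F/m
    eps_r = 78.5              # relative permittivity of water at 25 C (approx.)
    eta = 8.9e-4              # Pa*s, viscosity of water at 25 C (approx.)

    def zeta_mV(mobility):
        # Smoluchowski approximation: zeta = mobility * eta / (eps_r * eps0)
        return mobility * eta / (eps_r * eps0) * 1e3

    mu = -2.5e-8   # m^2/(V*s), hypothetical electrophoretic mobility
    print(f"zeta potential: {zeta_mV(mu):.0f} mV")

Dispersions whose zeta potential magnitude exceeds roughly 30 mV are conventionally taken to be electrostatically stabilized against aggregation.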

Uniformity

The chemical processing and synthesis of high performance technological components for the private, industrial and military sectors requires the use of high purity ceramics, polymers, glass-ceramics and material composites. In condensed bodies formed from fine powders, the irregular sizes and shapes of nanoparticles in a typical powder often lead to non-uniform packing morphologies that result in packing density variations in the powder compact.

Uncontrolled agglomeration of powders due to attractive van der Waals forces can also give rise to microstructural inhomogeneities. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved.

In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding inhomogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from inhomogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws. 

It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. A number of dispersants such as ammonium citrate (aqueous) and imidazoline or oleyl alcohol (nonaqueous) are promising solutions as possible additives for enhanced dispersion and deagglomeration. Monodisperse nanoparticles and colloids provide this potential.

Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. Such defective polycrystalline colloidal structures would appear to be the basic elements of sub-micrometer colloidal materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in high performance materials and components. 

Nanomaterials in articles, patents, and products

A quantitative analysis of the literature showed that nanoparticles, nanotubes, nanocrystalline materials, nanocomposites, and graphene had been mentioned in 400,000, 181,000, 144,000, 140,000, and 119,000 ISI-indexed articles, respectively, by September 2018. As far as patents are concerned, nanoparticles, nanotubes, nanocomposites, graphene, and nanowires had played a role in 45,600, 32,100, 12,700, 12,500, and 11,800 patents, respectively. Monitoring approximately 7,000 commercial nano-based products available on global markets revealed that the properties of around 2,330 products had been enabled or enhanced by nanoparticles. Liposomes, nanofibers, nanocolloids, and aerogels were also among the most common nanomaterials in consumer products.

The European Union Observatory for Nanomaterials (EUON) has produced a database (NanoData) that provides information on specific patents, products, and research publications on nanomaterials.

Health and safety

World Health Organization guidelines

The World Health Organization (WHO) published a guideline on protecting workers from the potential risks of manufactured nanomaterials at the end of 2017. WHO used a precautionary approach as one of its guiding principles. This means that exposure has to be reduced when there are reasonable indications to do so, despite uncertainty about the adverse health effects. This is highlighted by recent scientific studies that demonstrate the capability of nanoparticles to cross cell barriers and interact with cellular structures. In addition, the hierarchy of controls was an important guiding principle. This means that when there is a choice between control measures, those measures that are closer to the root of the problem should always be preferred over measures that put a greater burden on workers, such as the use of personal protective equipment (PPE). WHO commissioned systematic reviews for all important issues to assess the current state of the science and to inform the recommendations, following the process set out in the WHO Handbook for guideline development. The recommendations were rated as "strong" or "conditional" depending on the quality of the scientific evidence, values and preferences, and costs related to the recommendation.

The WHO guidelines contain the following recommendations for safe handling of manufactured nanomaterials (MNMs):

A. Assess health hazards of MNMs

  1. WHO recommends assigning hazard classes to all MNMs according to the Globally Harmonized System (GHS) of Classification and Labelling of Chemicals for use in safety data sheets. For a limited number of MNMs this information is made available in the guidelines (strong recommendation, moderate-quality evidence).
  2. WHO recommends updating safety data sheets with MNM-specific hazard information or indicating which toxicological end-points did not have adequate testing available (strong recommendation, moderate-quality evidence).
  3. For the respirable fibres and granular biopersistent particles' groups, the GDG suggests using the available classification of MNMs for provisional classification of nanomaterials of the same group (conditional recommendation, low-quality evidence).

B. Assess exposure to MNMs

  1. WHO suggests assessing workers' exposure in workplaces with methods similar to those used for the proposed specific occupational exposure limit (OEL) value of the MNM (conditional recommendation, low-quality evidence).
  2. Because there are no specific regulatory OEL values for MNMs in workplaces, WHO suggests assessing whether workplace exposure exceeds a proposed OEL value for the MNM. A list of proposed OEL values is provided in an annex of the guidelines. The chosen OEL should be at least as protective as a legally mandated OEL for the bulk form of the material (conditional recommendation, low-quality evidence). A sketch of this comparison logic appears after this list.
  3. If specific OELs for MNMs are not available in workplaces, WHO suggests a step-wise approach for inhalation exposure: first, assessing the potential for exposure; second, conducting a basic exposure assessment; and third, conducting a comprehensive exposure assessment such as those proposed by the Organisation for Economic Co-operation and Development (OECD) or the Comité Européen de Normalisation (the European Committee for Standardization, CEN) (conditional recommendation, moderate-quality evidence).
  4. For dermal exposure assessment, WHO found that there was insufficient evidence to recommend one method of dermal exposure assessment over another.
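
As a minimal sketch of the comparison logic in recommendations 2 and 3 above: check a measured concentration against a proposed nano-specific OEL, and fall back to the (at least as protective) bulk-material OEL when no nano-specific value exists. The measured values are invented; the 1 ug/m^3 limit mirrors the NIOSH recommended exposure limit for carbon nanotubes mentioned under "Other guidance" below.

    # All values are hypothetical and share the same units (ug/m^3, 8-h TWA).
    def assess_exposure(measured, nano_oel=None, bulk_oel=None):
        oel = nano_oel if nano_oel is not None else bulk_oel
        if oel is None:
            return "no OEL available: proceed to comprehensive assessment"
        verdict = "EXCEEDS" if measured > oel else "below"
        return f"measured {measured} vs OEL {oel}: {verdict}"

    print(assess_exposure(measured=0.7, nano_oel=1.0))   # nano-specific OEL
    print(assess_exposure(measured=0.7, bulk_oel=0.5))   # bulk fallback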

C. Control exposure to MNMs

  1. Based on a precautionary approach, WHO recommends focusing control of exposure on preventing inhalation exposure with the aim of reducing it as much as possible (strong recommendation, moderate-quality evidence).
  2. WHO recommends reduction of exposures to a range of MNMs that have been consistently measured in workplaces especially during cleaning and maintenance, collecting material from reaction vessels and feeding MNMs into the production process. In the absence of toxicological information, WHO recommends implementing the highest level of controls to prevent workers from any exposure. When more information is available, WHO recommends taking a more tailored approach (strong recommendation, moderate-quality evidence).
  3. WHO recommends taking control measures based on the principle of hierarchy of controls, meaning that the first control measure should be to eliminate the source of exposure before implementing control measures that are more dependent on worker involvement, with PPE being used only as a last resort. According to this principle, engineering controls should be used when there is a high level of inhalation exposure or when there is no, or very little, toxicological information available. In the absence of appropriate engineering controls PPE should be used, especially respiratory protection, as part of a respiratory protection programme that includes fit-testing (strong recommendation, moderate-quality evidence).
  4. WHO suggests preventing dermal exposure by occupational hygiene measures such as surface cleaning and the use of appropriate gloves (conditional recommendation, low-quality evidence).
  5. When assessment and measurement by a workplace safety expert is not available, WHO suggests using control banding for nanomaterials to select exposure control measures in the workplace; a generic banding lookup is sketched after this list. Owing to a lack of studies, WHO cannot recommend one method of control banding over another (conditional recommendation, very low-quality evidence).
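
A generic illustration of the control-banding idea (the matrix below is invented, not the WHO's or any published scheme): a hazard band and an exposure band are combined, and a minimum control level is read off.

    CONTROLS = ["general ventilation", "local exhaust ventilation",
                "containment", "seek specialist advice"]

    def control_band(hazard_band, exposure_band):
        # Bands run from 0 (low) to 3 (high); take the more demanding one.
        level = min(max(hazard_band, exposure_band), len(CONTROLS) - 1)
        return CONTROLS[level]

    print(control_band(hazard_band=2, exposure_band=1))   # containment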

For health surveillance WHO could not make a recommendation for targeted MNM-specific health surveillance programmes over existing health surveillance programmes that are already in use owing to the lack of evidence. WHO considers training of workers and worker involvement in health and safety issues to be best practice but could not recommend one form of training of workers over another, or one form of worker involvement over another, owing to the lack of studies available. It is expected that there will be considerable progress in validated measurement methods and risk assessment and WHO expects to update these guidelines in five years' time, in 2022.

Other guidance

Because nanotechnology is a recent development, the health and safety effects of exposure to nanomaterials, and what levels of exposure may be acceptable, are subjects of ongoing research. Of the possible hazards, inhalation exposure appears to present the most concern. Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation, granulomas, and pulmonary fibrosis, of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. Acute inhalation exposure of healthy animals to biodegradable inorganic nanomaterials has not demonstrated significant toxic effects. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicates a need for protective action for workers exposed to these nanomaterials, although no reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013. Additional concerns include skin contact, ingestion exposure, and dust explosion hazards.

Elimination and substitution are the most desirable approaches to hazard control. While the nanomaterials themselves often cannot be eliminated or substituted with conventional materials, it may be possible to choose properties of the nanoparticle such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state to improve their toxicological properties while retaining the desired functionality. Handling procedures can also be improved; for example, using a nanomaterial slurry or suspension in a liquid solvent instead of a dry powder will reduce dust exposure. Engineering controls are physical changes to the workplace that isolate workers from hazards, mainly ventilation systems such as fume hoods, gloveboxes, biosafety cabinets, and vented balance enclosures. Administrative controls are changes to workers' behavior to mitigate a hazard, including training on best practices for safe handling, storage, and disposal of nanomaterials, proper awareness of hazards through labeling and warning signage, and encouraging a general safety culture. Personal protective equipment must be worn on the worker's body and is the least desirable option for controlling hazards. Personal protective equipment normally used for typical chemicals is also appropriate for nanomaterials, including long pants, long-sleeved shirts, and closed-toed shoes, and the use of safety gloves, goggles, and impervious laboratory coats. In some circumstances respirators may be used.

Exposure assessment is a set of methods used to monitor contaminant release and exposures to workers. These methods include personal sampling, where samplers are located in the personal breathing zone of the worker, often attached to a shirt collar to be as close to the nose and mouth as possible; and area/background sampling, where they are placed at static locations. The assessment should use both particle counters, which monitor the real-time quantity of nanomaterials and other background particles; and filter-based samples, which can be used to identify the nanomaterial, usually using electron microscopy and elemental analysis. As of 2016, quantitative occupational exposure limits have not been determined for most nanomaterials. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits for carbon nanotubes, carbon nanofibers, and ultrafine titanium dioxide. Agencies and organizations from other countries, including the British Standards Institute and the Institute for Occupational Safety and Health in Germany, have established OELs for some nanomaterials, and some companies have supplied OELs for their products.
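
Personal-sampling results are usually reduced to an 8-hour time-weighted average (TWA) before comparison with an exposure limit such as those above. A minimal sketch with invented task-level data:

    # (concentration in ug/m^3, task duration in hours); data invented
    samples = [
        (0.4, 3.0),   # routine production
        (2.1, 0.5),   # reactor cleaning: short, high-exposure task
        (0.3, 4.5),   # routine production
    ]

    twa = sum(c * t for c, t in samples) / 8.0   # normalize to 8-h shift
    print(f"8-hour TWA: {twa:.2f} ug/m^3")

Here the short cleaning task contributes disproportionately to the average, consistent with the WHO's emphasis on cleaning and maintenance as high-exposure activities.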
