
Monday, November 7, 2022

Encyclopedism

From Wikipedia, the free encyclopedia
 
Natural History, written by Pliny the Elder in the first century, was the first book to be called an encyclopedia. It was highly regarded in the Middle Ages. This profusely illustrated manuscript was produced in the 13th century.

Encyclopedism is an outlook that aims to include a wide range of knowledge in a single work. The term covers both encyclopedias themselves and related genres in which comprehensiveness is a notable feature. The word encyclopedia is a Latinization of the Greek enkýklios paideía, which means all-around education. The encyclopedia is "one of the few generalizing influences in a world of overspecialization. It serves to recall that knowledge has unity," according to Louis Shores, editor of Collier's Encyclopedia. It should not be "a miscellany, but a concentration, a clarification, and a synthesis", according to British writer H. G. Wells.

Besides comprehensiveness, encyclopedic writing is distinguished by its lack of a specific audience or practical application. The author explains facts concisely for the benefit of a reader who will then use the information in a way that the writer does not try to anticipate. Early examples of encyclopedic writing include discussions of agriculture and craft by Roman writers such as Pliny the Elder and Varro – discussions presumably not intended as practical advice to farmers or craftsmen.

The vast majority of classical learning was lost during the Dark Ages. This enhanced the status of encyclopedic works which survived, including those of Aristotle and Pliny. With the development of printing in the 15th century, the range of knowledge available to readers expanded greatly. Encyclopedic writing became both a practical necessity and a clearly distinguished genre. Renaissance encyclopedists were keenly aware of how much classical learning had been lost. They hoped to recover and record knowledge and were anxious to prevent further loss.

In their modern form, encyclopedias consist of alphabetized articles written by teams of specialists. This format was developed in the 18th century by expanding the technical dictionary to include non-technical topics. The Encyclopédie (1751–1772), edited by Diderot and D'Alembert, was a model for many later works. Like Renaissance encyclopedists, Diderot worried about the possible destruction of civilization and selected knowledge he hoped would survive.

Etymology

In 1517, Bavarian Johannes Aventinus wrote the first book that used the word "encyclopedia" in the title.

The word "encyclopedia" is a Latinization of the Greek enkýklios paideía. The Greek phrase refers to the education that a well-round student should receive. Latin writer Quintilian uses it to refer to the subjects a student of oratory should be familiar with before beginning an apprenticeship. It translates literally as "in (en) the circle (kýklios) of knowledge (paideía)." The earliest citation for "encyclopedia" given in Oxford English Dictionary refers to the Greek curriculum and is dated 1531.

The use of the term to refer to a genre of literature was prompted by a line that Pliny used in the preface of Natural History: "My object is to treat of all those things which the Greeks include in the Encyclopædia [tē̂s enkyklíou paideías], which, however, are either not generally known or are rendered dubious from our ingenious conceits." Pliny writes the relevant phrase using Greek letters. Latin printers of incunabula lacked the typeface to render it. Some printers substituted encyclopædia or another Latin phrase. Others just left a blank space. This led to the misunderstanding that Pliny had called his work an encyclopedia.

In the Renaissance, writers who wanted their work compared to that of Pliny used the word. In 1517, Bavarian Johannes Aventinus wrote Encyclopedia orbisqve doctrinarum, a Latin reference work. Ringelberg's Cyclopedia was published in 1541 and Paul Scalich's Encyclopedia in 1559. Both of these reference works were written in Latin. The French Encyclopédistes popularized the word in the 18th century.

The Oxford English Dictionary's first citation of "encyclopedism" is dated 1833. The context is a book on Diderot.

History

In the 4th century BC, Aristotle wrote on a broad range of topics and explained how knowledge can be classified.

Aristotle

The Greek writer and teacher Aristotle (384–322 BC) had much to say on a broad range of subjects, including biology, anatomy, psychology, physics, meteorology, zoology, poetics, rhetoric, logic, epistemology, metaphysics, ethics, and political thought. He was among the first writers to describe how to classify material by subject, the first step in writing an encyclopedia. Aristotle wrote to help his students follow his teaching, so his corpus did not much resemble an encyclopedia during his lifetime. Long after his death, commentators filled in the gaps, reordered his works, and put his writing in a systematic form. Catalogs of his work were produced by Andronicus in the first century and by Ptolemy in the second century. As Aristotle's corpus was one of the few encyclopedic works to survive the Middle Ages, it became a widely used reference work in late medieval and Renaissance times.

Alexandria

Dorotheus (mid first century AD) and Pamphilus (late first century AD) both wrote enormous lexicons. Neither work has survived, but their lengths suggest that they were considerably more than just dictionaries. Pamphilus's work was 95 books long and was a sequel to a lexicon of four books by Zopyrion. This passage from the Souda suggests that it was made up of alphabetized entries:

Pamphilus, of Alexandria, a grammarian of the school of Aristarchus. He wrote A Meadow, which is a summary of miscellaneous contents. On rare words; i.e. vocabulary in 95 books (it contains entries from epsilon to omega, because Zopyrion had done letters alpha to delta.) On unexplained matters in Nicander and the so called Optica; Art of Criticism and a large number of other grammatical works.

Hesychius (fifth century) credits Diogenianus as a source, who in turn used Pamphilus. This is the only form in which any of Pamphilus's work may have survived.

Rome

A Roman who wanted to learn about a certain subject would send a slave to a private library with orders to copy relevant passages from whatever books were available. As they were less likely to withdraw or buy a book, readers were little concerned with the scope of a given work. So the emergence of encyclopedic writing cannot be explained by practical need. Instead, it may have been inspired by Cato's ideal of the vir bonus, the informed citizen able to participate in the life of the Republic.

Three Roman works are commonly identified as encyclopedic: The collected works of Varro (116–27 BC), Pliny the Elder's (c. 77–79 AD) Natural History, and On the Arts by Cornelius Celsus (c. 25 BC – c. 50 AD). These three were grouped together as a genre, not by the Romans themselves, but by later writers in search of antique precedent.

In Cicero's time, the study of literature was still controversial. In Pro Archia, Cicero explains that he studied literature to improve his rhetorical skills and because it provides a source of elevating moral examples. Varro's emphasis on the city's history and culture suggests patriotic motives. Pliny emphasized utilitarian motives and public service. He criticized Livy for writing history simply for his own pleasure.

Varro

Varro's Antiquities consisted of 41 books on Roman history. His Disciplines was nine books on liberal arts. Varro also wrote 25 books on Latin and 15 on law. Only fragments of Varro's work survive. According to Cicero, Varro's comprehensive work allowed the Romans to feel at home in their own city.

Cornelius Celsus

Cornelius Celsus wrote prolifically on various topics in first century Rome. He knew "all things," according to a tribute by Quintilian. Only his work on medicine survives.

Celsus wrote prolifically on many subjects. "Cornelius Celsus, a man of modest intellect, could write not only about all these arts but also left behind accounts of military science, agriculture, and medicine: indeed, he deserves, on the basis of this design alone, to be thought to have known all things," according to Quintilian. Only the medical section of his massive On the Arts has survived. This is eight books long. Celsus followed the structure of the medical writers that had gone before him. He summarized their views in a workmanlike manner. He seldom presented insights of his own. He struggled to manage the overwhelming quantity of relevant source material. His medical books were rediscovered in 1426–1427 at libraries in the Vatican and in Florence and published in 1478. He is our main source concerning Roman medical practices.

Pliny the Elder

If Varro made the Romans feel at home in their own city, Pliny tried to do the same for the natural world and for the Empire. Pliny's approach was very different from that of Celsus. He was a man ahead of his time. Not content to build on what went before, he reorganized the world of knowledge to fit his encyclopedic vision. In a Latin preface, the writer customarily listed the models he hoped to surpass. Pliny found no model in previous writing. Instead, he emphasized that his work was novicium (new), a word suitable for describing a major discovery. Although Pliny was widely read, no later Roman writer followed his structure or claimed him as a model. Niccolò Leoniceno published an essay in 1492 listing Pliny's many scientific errors.

In the introduction of Natural History, Pliny writes:

... in Thirty-six Books I have comprised 20,000 Things that are worthy of Consideration, and these I have collected out of about 2000 Volumes that I have diligently read (and of which there are few that Men otherwise learned have ventured to meddle with, for the deep Matter therein contained), and those written by one hundred several excellent Authors; besides a Multitude of other Matters, which either were unknown to our former Writers, or Experience has lately ascertained.

With an entire book dedicated to listing sources, Natural History is 37 books long. (It's 10 volumes in the modern translation.) Eschewing established disciplines and categories, Pliny begins with a general description of the world. Book 2 covers astronomy, meteorology, and the elements. Books 3–6 cover geography. Humanity is covered in Book 7, animals in Books 8–11, trees in 12–17, agriculture in 18–19, medicine in 20–32, metals in 33–34, and craft and art in 35–37.

Following Aristotle, Pliny counts four elements: fire, earth, air and water. There are seven planets: Saturn, Jupiter, Mars ("of a fiery and burning nature"), the Sun, Venus, Mercury, and the Moon ("the last of the stars"). The earth is a "perfect globe," suspended in the middle of space, that rotates with incredible swiftness once every 24 hours. As a good Stoic, Pliny dismisses astrology: "it is ridiculous to suppose, that the great head of all things, whatever it be, pays any regard to human affairs." He considers the possibility of other worlds ("there will be so many suns and so many moons, and that each of them will have immense trains of other heavenly bodies") only to dismiss such speculation as "madness." The idea of space travel is "perfect madness."

Pliny had opinions on a wide variety of subjects and often interjected them. He tells us which uses of plants, animals, and stone are proper, and which ones are improper. Was the Roman Empire benefiting or corrupting the classical world? Pliny returns to this theme repeatedly. He analogizes Rome's civilizing mission to the way poisonous plants of all nations were tamed into medicines. Pliny also wants us to know that he is a heroic explorer, a genius responsible for a highly original and most remarkable work. The extensive reading and note taking of his slave secretaries is rarely mentioned.

At the very end of the work, Pliny writes, "Hail Nature, parent of all things, and in recognition of the fact that I alone have praised you in all your manifestations, look favorably upon me." Here Pliny points to comprehensiveness as his project's outstanding asset. Nature awarded Pliny a heroic death that gave him "a kind of eternal life," according to his nephew. The great encyclopedist was commander of the Naples fleet and died trying to assist the local inhabitants during the eruption of Vesuvius in AD 79.

The Middle Ages

Vincent of Beauvais (c. 1190 – 1264?) was one of the best-known encyclopedists of the medieval period. This illustration is from a 15th-century French translation of his work.

While classical and modern encyclopedic writers sought to distribute knowledge, those of the Middle Ages were more interested in establishing orthodoxy. They produced works to be used as educational texts in schools and universities. Students could consider the knowledge within them as safely orthodox and thus be kept from heresy. Limiting knowledge was an important part of their function.

As a Stoic, Pliny began with astronomy and ended with the fine arts. Cassiodorus attempted to write a Christian equivalent to Pliny's work. His Institutiones (560) begins with discussions of scripture and the church. Other subjects are treated briefly toward the end of the work. With the onset of the Dark Ages, access to Greek learning and literacy in Greek declined. The works of Boethius (c. 480–524) filled the gap by compiling Greek handbooks and summarizing their content in Latin. These works served as general purpose references in the early Middle Ages.

The Etymologies (c. 600–625) by Isidore of Seville consisted of extracts from earlier writers. Three of Isidore's twenty books represent material from Pliny. The Etymologies was the most widely read and fundamental text of medieval encyclopedic writing.

These early medieval writers organized their material in the form of a trivium (grammar, logic, rhetoric) followed by a quadrivium (geometry, arithmetic, astronomy, music). This division of seven liberal arts was a feature of monastic education as well as the medieval universities, which developed beginning in the 12th century.

From the fourth to the ninth centuries, Byzantium experienced a series of religious debates. As part of these debates, excerpts were compiled and organized thematically to support the theological views of the compiler. Once orthodoxy was established, the energy of the compilation tradition transferred to other subjects. The tenth century, or Macedonian dynasty, saw a flowering of encyclopedic writing. The Suda is believed to have been compiled at this time. This is the earliest work that a modern reader would recognize as an encyclopedia. It contains 30,000 alphabetized entries. The Suda is not mentioned until the 12th century, and it might have been put together in stages.

The most massive encyclopedia of the Middle Ages was Speculum Maius (The Great Mirror) by Vincent of Beauvais. It was 80 books long and was completed in 1244. With a total of 4.5 million words, the work is presumably the product of an anonymous team. (By comparison, the current edition of Britannica has 44 million words.) It was divided into three sections. "Naturale" covered God and the natural world; "Doctrinale" covered language, ethics, crafts, and medicine; and "Historiale" covered world history. Vincent had great respect for classical writers such as Aristotle, Cicero, and Hippocrates. The encyclopedia shows a tendency toward "exhaustiveness," or systematic plagiarism, typical of the medieval period. Vincent was used as a source by Chaucer. The full version of Speculum proved to be too long to circulate in the era of manuscripts and manual copying. However, an abridged version by Bartholomaeus Anglicus did enjoy a wide readership.

The Arab counterpart to these works was Kitab al-Fehrest by Ibn al-Nadim.

Renaissance

With the advent of printing and a dramatic reduction in paper costs, the volume of encyclopedic writing exploded in the Renaissance. This was an age of "info-lust" and enormous compilations. Many compilers cited the fear of a traumatic loss of knowledge to justify their efforts. They were keenly aware of how much classical learning had been lost in the Dark Ages. Pliny was their model. His axiom that "there is no book so bad that some good cannot be got from it" was a favorite. Conrad Gesner listed over 10,000 books in Bibliotheca universalis (1545). By including both Christian and barbarian works, Gesner rejected the medieval quest for orthodoxy. Ironically, the Jesuit Antonio Possevino used Bibliotheca universalis as a basis to create a list of forbidden books.

England

The invention of printing helped spread new ideas, but also revived old misconceptions. Printers of incunabula were eager to publish books, both ancient and modern. The best-known encyclopedia of Elizabethan England was Batman upon Bartholomew, published in 1582. This book is based on a work compiled by Bartholomaeus Anglicus in the 13th century. It was translated by John Trevisa in 1398, revised by Thomas Berthelet in 1535, and revised again by Stephen Batman. In Shakespeare's day, it represented a worldview already four centuries old, only modestly updated. Yet several ideas inspired by Batman can be found in Shakespeare. The idea that the rays of the moon cause madness can be found in Measure for Measure and Othello, hence the word "lunacy." The discussion of the geometric properties of the soul in King Lear is likely to reflect the influence of Batman as well. An encyclopedia that Shakespeare consulted more obviously than Batman is French Academy by Pierre de la Primaudaye. Primaudaye was much taken with analogies, some of which have found their way into Shakespeare: the unweeded garden, death as an unknown country, and the world as a stage. (Various other sources have also been suggested for the last analogy.) Both Batman and Primaudaye were Protestant.

Francis Bacon wrote a plan for an encyclopedia in Instauratio magna (1620). He drew up a checklist of the major areas of knowledge a complete encyclopedia needed to contain. Bacon's plan influenced Diderot and thus indirectly later encyclopedias, which generally follow Diderot's scheme.

The Enlightenment

Encyclopédie (1751–1777), edited by Diderot and D'Alembert, was greatly admired and a model for many subsequent works.

While ancient and medieval encyclopedism emphasized the classics, liberal arts, informed citizenship, or law, the modern encyclopedia springs from a separate tradition. The advance of technology meant that there was much unfamiliar terminology to explain. John Harris's Lexicon Technicum (1704) proclaims itself, "An Universal English Dictionary of Arts and Sciences: Explaining not only the Terms of Art, but the Arts Themselves." This was the first alphabetical encyclopedia written in English. Harris's work inspired Ephraim Chambers's Cyclopedia (1728). Chambers's two-volume work is considered the first modern encyclopedia.

Encyclopédie (1751–1777) was a massively expanded version of Chambers's idea. This 32-volume work, edited by Diderot and D'Alembert, was the pride of Enlightenment France. It consisted of 21 volumes of text and 11 volumes of illustrations. There were 74,000 articles written by more than 130 contributors. It presented a secular worldview, drawing the ire of several Church officials. It sought to empower its readers with knowledge and played a role in fomenting the dissent that led to the French Revolution. Diderot explained the project this way:

This is a work that cannot be completed except by a society of men of letters and skilled workmen, each working separately on his own part, but all bound together solely by their zeal for the best interests of the human race and a feeling of mutual good will.

This realization that no one person, not even a genius like Pliny assisted by slave secretaries, could produce a work of the comprehensiveness required, is the mark of the modern era of encyclopedism.

Diderot's project was a great success and inspired several similar projects, including Britain's Encyclopædia Britannica (first edition, 1768) as well as Germany's Brockhaus Enzyklopädie (beginning 1808). Enlightenment encyclopedias also inspired authors and editors to undertake or critique "encyclopedic" knowledge projects in other genres and formats: the 65-volume Universal History (Sale et al.) (1747–1768), for example, far exceeded its predecessors in terms of scope, and The General Magazine of Arts and Sciences (1755–1765), published by Benjamin Martin (lexicographer), sought to bring encyclopedism to the monthly periodical. A loyal subscriber, he wrote, would "be allowed to make a great Proficiency, if he can make himself Master of the useful Arts and Sciences in the Compass of Ten Years." In Laurence Sterne's The Life and Opinions of Tristram Shandy, Gentleman (1759–1767), the title character satirically refers to his fictional autobiography as a "cyclopædia of arts and sciences." Such "experiments in encyclopedism" demonstrate the widespread literary and cultural influence of the form in the 18th century.

The 19th and 20th centuries

Once solely for society's elites, encyclopedias in the 19th and 20th centuries were increasingly written for, marketed to, and purchased by middle- and working-class households. Different styles of encyclopedism emerged, targeting particular age groups and presenting the works as educational tools, even made available through payment plans advertised on TV.

One of the earliest individuals to advocate for a technologically enhanced encyclopedia indexing all the world's information was H. G. Wells. Inspired by the possibilities of microfilm, he put forward his idea of a global encyclopedia in the 1930s through a series of international talks and his essay World Brain.

It would be another several decades before the earliest electronic encyclopedias were published in the 1980s and 1990s. The production of electronic encyclopedias began as conversions of printed work, but soon added multimedia elements, requiring new methods of content gathering and presentation. Early applications of hypertext similarly had a great benefit to readers but did not require significant changes in writing. The launching of Wikipedia in the 2000s and its subsequent rise in popularity and influence, however, radically altered popular conception of the ways in which an encyclopedia is produced (collaboratively, openly) and consumed (ubiquitously).

China

The nearest Chinese equivalent to an encyclopedia is the leishu. These consist of extensive quotations arranged by category. The earliest known Chinese encyclopedia is Huang Lan (Emperor's mirror), produced around 220 under the Wei dynasty. No copy has survived. The best-known leishu are those of Li Fang (925–996), who wrote three such works during the Song dynasty. These three were later combined with a fourth work, Cefu Yuangui, to create Four Great Books of Song.

Criticism of patents

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Criticism_of_patents

Legal scholars, economists, activists, policymakers, industries, and trade organizations have held differing views on patents and engaged in contentious debates on the subject. Critical perspectives emerged in the nineteenth century, based especially on the principles of free trade. Contemporary criticisms have echoed those arguments, claiming that patents block innovation, waste resources that could otherwise be used productively, block access to an increasingly important "commons" of enabling technologies (a phenomenon called the tragedy of the anticommons), and apply a "one size fits all" model to industries with differing needs, a model that is especially unproductive for industries other than chemicals and pharmaceuticals, and for the software industry in particular. Enforcement by patent trolls of poor quality patents has led to criticism of the patent office as well as the system itself. Patents on pharmaceuticals have also been a particular focus of criticism, as the high prices they enable put life-saving drugs out of reach of many people. Alternatives to patents have been proposed, such as Joseph Stiglitz's suggestion of providing "prize money" (from a "prize fund" sponsored by the government) as a substitute for the lost profits associated with abstaining from the monopoly given by a patent.

These debates are part of a larger discourse on intellectual property protection which also reflects differing perspectives on copyright.

History

Criticism of patents reached an early peak in Victorian Britain between 1850 and 1880, in a campaign against patenting that expanded to target copyright too and, in the judgment of historian Adrian Johns, "remains to this day the strongest [campaign] ever undertaken against intellectual property", coming close to abolishing patents. Its most prominent activists – Isambard Kingdom Brunel, William Robert Grove, William Armstrong and Robert A. MacFie – were inventors and entrepreneurs, and it was also supported by radical laissez-faire economists (The Economist published anti-patent views), law scholars, scientists (who were concerned that patents were obstructing research) and manufacturers. Johns summarizes some of their main arguments as follows:

[Patents] projected an artificial idol of the single inventor, radically denigrated the role of the intellectual commons, and blocked a path to this commons for other citizens – citizens who were all, on this account, potential inventors too. [...] Patentees were the equivalent of squatters on public land – or better, of uncouth market traders who planted their barrows in the middle of the highway and barred the way of the people.

Similar debates took place during that time in other European countries such as France, Prussia, Switzerland and the Netherlands (but not in the United States).

Based on the criticism of patents as state-granted monopolies perceived to be inconsistent with free trade, the Netherlands abolished patents in 1869 (having established them in 1817) – but later reversed the action and reintroduced them in 1912. In Switzerland, criticism of patents delayed the introduction of patent laws until 1907.

Contemporary arguments

Contemporary arguments have focused on ways that patents can slow innovation by: blocking researchers' and companies' access to basic, enabling technology, particularly following the explosion of patent filings in the 1990s, through the creation of "patent thickets"; wasting productive time and resources fending off enforcement of low-quality patents that should not have existed, particularly by "patent trolls"; and wasting money on patent litigation. Patents on pharmaceuticals have been a particular focus of criticism, as the high prices they enable put life-saving drugs out of reach of many people.

Blocking innovation

"[Patents] serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors."

Elon Musk

The most general argument against patents is that "intellectual property" in all its forms represents an effort to claim something that should not be owned, and harms society by slowing innovation and wasting resources.

Law professors Michael Heller and Rebecca Sue Eisenberg have described an ongoing tragedy of the anticommons with regard to the proliferation of patents in the field of biotechnology, wherein intellectual property rights have become so fragmented that, effectively, no one can take advantage of them as to do so would require an agreement between the owners of all of the fragments.

Some public campaigns for improving access to medicines and genetically modified food have expressed a concern for "preventing the over-reach" of intellectual property protection including patent protection, and "to retain a public balance in property rights". Some economists and scientists and law professors have raised concerns that patents retard technical progress and innovation. Others claim that patents have had no effect on research, based on surveys of scientists.

In a 2008 publication, Yi Quan of the Kellogg School of Management concluded that the imposition of pharmaceutical patents under the TRIPS Agreement did not increase innovation in the pharmaceutical industry. The publication also said there appeared to be an optimal level of patent protection that increased domestic innovation.

Poor patent quality and patent trolls

Patents have also been criticized for being granted on already-known inventions, with some complaining in the United States that the USPTO fails "to do a serious job of examining patents, thus allowing bad patents to slip through the system." On the other hand, some argue that, because of the low number of patents that end up in litigation, increasing the quality of patents at the patent prosecution stage would increase overall legal costs associated with patents, and that current USPTO policy is a reasonable compromise between a full trial at the examination stage on one hand, and pure registration without examination on the other.

Enforcement of patents – especially patents perceived as being overly broad – by patent trolls, has brought criticism of the patent system, though some commentators suggest that patent trolls are not bad for the patent system at all but instead realign market participant incentives, make patents more liquid, and clear the patent market.

Some patents granted in Russia have been denounced as pseudoscientific (for example, health-related patents using lunar phase or religious icons).

Litigation costs

According to James Bessen, the costs of patent litigation exceed their investment value in all industries except chemistry and pharmaceuticals. For example, in the software industry, litigation costs are twice the investment value. Bessen and Meurer also note that software and business model litigation accounts for a disproportionate share (almost 40 percent) of patent litigation cost, and the poor performance of the patent system negatively affects these industries.

Different industries but one law

Richard Posner noted that the most controversial feature of US patent law is that it covers all industries in the same way, even though not all industries benefit from the time-limited monopoly a patent provides in order to spur innovation. He said that while the pharmaceutical industry is the "poster child" for the need for a twenty-year monopoly, since the costs of bringing a drug to market are high, the time of development is often long, and the risks are high, in other industries such as software the cost and risk of innovation are much lower and the cycle of innovation is quicker, and obtaining and enforcing patents and defending against patent litigation is generally a waste of resources in those industries.

Pharmaceutical patents

Some have raised ethical objections specifically with respect to pharmaceutical patents and the high prices for medication that they enable their proprietors to charge, prices which poor people in the developed world, and in the developing world, cannot afford. Critics also question the rationale that exclusive patent rights and the resulting high prices are required for pharmaceutical companies to recoup the large investments needed for research and development. One study concluded that marketing expenditures for new drugs often doubled the amount that was allocated for research and development.

In 2003, the World Trade Organization (WTO) reached an agreement which provides developing countries with options for obtaining needed medications under compulsory licensing or importation of cheaper versions of the drugs, even before patent expiration.

In 2007 the government of Brazil declared Merck's efavirenz anti-retroviral drug a "public interest" medicine, and challenged Merck to negotiate lower prices with the government or have Brazil strip the patent by issuing a compulsory license.

It is reported that Ghana, Tanzania, the Democratic Republic of the Congo and Ethiopia have similar plans to produce generic antiviral drugs. Western pharmaceutical companies initially responded with legal challenges, but some have now promised to introduce alternative pricing structures for developing countries and NGOs.

In July 2008 Nobel Prize-winning scientist Sir John Sulston called for an international biomedical treaty to clear up issues over patents.

In response to these criticisms, one review concluded that less than 5 percent of medicines on the World Health Organization's list of essential drugs are under patent. Also, the pharmaceutical industry has contributed US$2 billion for healthcare in developing countries, providing HIV/AIDS drugs at lower cost or even free of charge in certain countries, and has used differential pricing and parallel imports to provide medication to the poor. Other groups are investigating how social inclusion and equitable distribution of research and development findings can be obtained within the existing intellectual property framework, although these efforts have received less exposure.

Quoting a World Health Organization report, Trevor Jones (director of research and development at the Wellcome Foundation, as of 2006) argued in 2006 that patent monopolies do not create monopoly pricing. He argued that the companies given monopolies "set prices largely on the willingness/ability to pay, also taking into account the country, disease and regulation" instead of receiving competition from legalized generics.

Proposed alternatives to the patent system

Alternatives have been discussed to address the issue of financial incentivization to replace patents. Mostly, they are related to some form of direct or indirect government funding. One example is Joseph Stiglitz's idea of providing "prize money" (from a "prize fund" sponsored by the government) as a substitute for the lost profits associated with abstaining from the monopoly given by a patent. Another approach is to remove the issue of financing development from the private sphere altogether, and to cover the costs with direct government funding.

Algorithmic efficiency

From Wikipedia, the free encyclopedia

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.

For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.

For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort sorts the list in time proportional to the number of elements squared (O(n²); see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice.
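To make this trade-off concrete, here is a minimal Python sketch (an illustrative addition, not part of the original article) contrasting a simple bubble sort with Python's built-in sorted(), which uses Timsort; the list size and timing approach are arbitrary choices for demonstration.

    # Illustrative sketch: a quadratic-time, constant-extra-space sort versus
    # Python's built-in sorted(), which uses Timsort (linearithmic time,
    # linear extra space). Not a rigorous benchmark.
    import random
    import time

    def bubble_sort(items):
        """O(n^2) time, O(1) extra space: sorts the given list in place."""
        n = len(items)
        for i in range(n):
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    data = [random.random() for _ in range(5_000)]

    t0 = time.perf_counter()
    bubble_sort(data.copy())
    t1 = time.perf_counter()
    sorted(data)                      # Timsort: O(n log n) time, O(n) extra space
    t2 = time.perf_counter()

    print(f"bubble sort: {t1 - t0:.3f}s, timsort: {t2 - t1:.3f}s")

On typical inputs the quadratic-time bubble sort falls far behind, while sorted() pays for its speed with extra working memory proportional to the list length.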

Background

The importance of efficiency with respect to time was emphasised by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine:

"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"

Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was then to use the fastest algorithm that could fit in the available memory.

Modern computers are significantly faster than the early computers, and have a much larger amount of memory available (Gigabytes instead of Kilobytes). Nevertheless, Donald Knuth emphasised that efficiency is still an important consideration:

"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"

Overview

An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago.

Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer.

There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order.

In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues.

Theoretical analysis

In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where O(g(n)) roughly means the time requirement for an algorithm is proportional to g(n), omitting lower-order terms that contribute less than g(n) to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of length encountered in most data-intensive programs.

Some examples of Big O notation applied to algorithms' asymptotic time complexity include:

Notation   Name   Examples
O(1)   constant   Finding the median from a sorted list of measurements; using a constant-size lookup table; using a suitable hash function for looking up an item.
O(log n)   logarithmic   Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap.
O(n)   linear   Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry.
O(n log n)   linearithmic, loglinear, or quasilinear   Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort.
O(n²)   quadratic   Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort (worst case), selection sort or insertion sort.
O(cⁿ), c > 1   exponential   Finding the optimal (non-approximate) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search.
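As a rough illustration of the linear and logarithmic rows above, the following Python sketch (an addition, not part of the original article) contrasts a linear scan with a binary search built on the standard bisect module; the data set is arbitrary.

    # O(n) linear search versus O(log n) binary search on a sorted list.
    # bisect_left is part of the Python standard library.
    from bisect import bisect_left

    def linear_search(items, target):          # O(n): scan every element
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    def binary_search(sorted_items, target):   # O(log n): halve the range each step
        i = bisect_left(sorted_items, target)
        if i < len(sorted_items) and sorted_items[i] == target:
            return i
        return -1

    data = list(range(0, 1_000_000, 2))        # 500,000 even numbers, already sorted
    print(linear_search(data, 999_998))        # scans ~500,000 elements
    print(binary_search(data, 999_998))        # needs ~20 comparisons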

Benchmarking: measuring performance

For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed.

Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.

Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example.

Implementation concerns

Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded, or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler. See the articles on just-in-time compilation and interpreted languages.

There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls.

Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, or they could be easily reconfigured. As parallel and distributed computing grow in importance in the late 2010s, more investments are being made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI.

Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have a great amount of knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations.

Measures of resource usage

Measures are normally expressed as a function of the size of the input n.

The two most common measures are:

  • Time: how long does the algorithm take to complete?
  • Space: how much working memory (typically RAM) is needed by the algorithm? This has two aspects: the amount of memory needed by the code (auxiliary space usage), and the amount of memory needed for the data on which the code operates (intrinsic space usage).

For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest are:

  • Direct power consumption: power needed directly to operate the computer.
  • Indirect power consumption: power needed for cooling, lighting, etc.

As of 2018, power consumption is growing in importance as a metric for computational tasks of all types and at all scales, ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing.

Less common measures of computational efficiency may also be relevant in some cases:

  • Transmission size: bandwidth could be a limiting factor. Data compression can be used to reduce the amount of data to be transmitted, as the sketch after this list illustrates. Displaying a picture or image (e.g. the Google logo) can result in transmitting tens of thousands of bytes (48K in this case) compared with transmitting six bytes for the text "Google". This is important for I/O bound computing tasks.
  • External space: space needed on a disk or other external memory device; this could be for temporary storage while the algorithm is being carried out, or it could be long-term storage needed to be carried forward for future reference.
  • Response time (latency): this is particularly relevant in a real-time application when the computer system must respond quickly to some external event.
  • Total cost of ownership: particularly if a computer is dedicated to one particular algorithm.
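As a small sketch of the transmission-size point in the list above, the following Python example uses the standard zlib module to shrink data before it would be transmitted; the payload here is invented purely for illustration.

    # Rough sketch of the transmission-size measure: compressing a repetitive
    # payload before sending it. zlib is in the Python standard library.
    import zlib

    payload = ("some repetitive log line\n" * 2_000).encode("utf-8")
    compressed = zlib.compress(payload, level=9)
    print(len(payload), "bytes raw,", len(compressed), "bytes compressed")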

Time

Theory

Analyze the algorithm, typically using time complexity analysis to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Algorithms which include parallel processing may be more difficult to analyze.

Practice

Use a benchmark to time the use of an algorithm. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests.
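In Python, for instance, such a measurement might be sketched as follows: time.process_time() reports CPU time, time.perf_counter() reports elapsed time, and the results are averaged over several runs; the workload function is an arbitrary stand-in.

    # Timing a run directly: process_time() measures CPU time while
    # perf_counter() measures elapsed (wall-clock) time. Results are
    # averaged over several runs, as suggested above.
    import time

    def work():
        return sum(i * i for i in range(200_000))   # arbitrary workload

    runs = 5
    cpu = wall = 0.0
    for _ in range(runs):
        c0, w0 = time.process_time(), time.perf_counter()
        work()
        cpu += time.process_time() - c0
        wall += time.perf_counter() - w0
    print(f"avg CPU time: {cpu / runs:.4f}s, avg elapsed time: {wall / runs:.4f}s")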

Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment.

This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions.

Space

This section is concerned with the use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As for time analysis above, analyze the algorithm, typically using space complexity analysis to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation.
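As an empirical counterpart to that analysis, the following Python sketch (not from the original article) uses the standard tracemalloc module to observe the peak memory allocated by an O(n)-space workload; the workload itself is arbitrary.

    # Empirical counterpart to space-complexity analysis: tracemalloc reports
    # the current and peak memory allocated by Python while a snippet runs.
    import tracemalloc

    n = 100_000
    tracemalloc.start()
    squares = [i * i for i in range(n)]        # needs space linear in n
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")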

There are up to four aspects of memory usage to consider:

  • The amount of memory needed to hold the code for the algorithm.
  • The amount of memory needed for the input data.
  • The amount of memory needed for any output data.
    • Some algorithms, such as sorting, often rearrange the input data and don't need any additional space for output data. This property is referred to as "in-place" operation.
  • The amount of memory needed as working space during the calculation.

Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it is typical for personal computers to have between 4 and 32 GB of RAM, an increase of tens of millions of times as much memory.

Caching and memory hierarchy

Current computers can have relatively large amounts of memory (possibly Gigabytes), so having to squeeze an algorithm into a confined amount of memory is much less of a problem than it used to be. But the presence of four different categories of memory can be significant:

  • Processor registers, the fastest of computer memory technologies with the least amount of storage space. Most direct computation on modern computers occurs with source and destination operands in registers before being updated to the cache, main memory and virtual memory if needed. On a processor core, there are typically on the order of hundreds of bytes or fewer of register availability, although a register file may contain more physical registers than architectural registers defined in the instruction set architecture.
  • Cache memory is the second fastest and second smallest memory available in the memory hierarchy. Caches are present in CPUs, GPUs, hard disk drives and external peripherals, and are typically implemented in static RAM. Memory caches are multi-leveled; lower levels are larger, slower and typically shared between processor cores in multi-core processors. In order to process operands in cache memory, a processing unit must fetch the data from the cache, perform the operation in registers and write the data back to the cache. This operates at speeds comparable to (about 2–10 times slower than) the CPU or GPU's arithmetic logic unit or floating-point unit if in the L1 cache. It is about 10 times slower if there is an L1 cache miss and the data must be retrieved from and written to the L2 cache, and a further 10 times slower if there is an L2 cache miss and it must be retrieved from an L3 cache, if present.
  • Main physical memory is most often implemented in dynamic RAM (DRAM). The main memory is much larger (typically gigabytes compared to ≈8 megabytes) than an L3 CPU cache, with read and write latencies typically 10-100 times slower. As of 2018, RAM is increasingly implemented on-chip of processors, as CPU or GPU memory.
  • Virtual memory is most often implemented in terms of secondary storage such as a hard disk, and is an extension to the memory hierarchy that has much larger storage space but much larger latency, typically around 1000 times slower than a cache miss for a value in RAM. While originally motivated to create the impression of higher amounts of memory being available than were truly available, virtual memory is more important in contemporary usage for its time-space tradeoff and enabling the usage of virtual machines. Cache misses from main memory are called page faults, and incur huge performance penalties on programs.

An algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to virtual memory. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another.

In the early days of electronic computing, if an algorithm and its data wouldn't fit in main memory then the algorithm couldn't be used. Nowadays the use of virtual memory appears to provide much memory, but at the cost of performance. If an algorithm and its data will fit in cache memory, then very high speed can be obtained; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
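The effect of locality can be observed directly. The following sketch (an illustrative addition that assumes the third-party numpy package is installed) reads the same array elements twice, once in sequential order and once in a random order; the sequential pass exhibits good spatial locality and is typically much faster, though the exact ratio depends on the machine's cache hierarchy.

    # Demonstrating locality of reference: gathering 10 million array elements
    # sequentially versus in a random order. Requires numpy (third-party).
    import time
    import numpy as np

    a = np.random.rand(10_000_000)
    idx_seq = np.arange(a.size)                 # sequential access pattern
    idx_rand = np.random.permutation(a.size)    # same elements, random order

    t0 = time.perf_counter()
    a[idx_seq].sum()
    t1 = time.perf_counter()
    a[idx_rand].sum()
    t2 = time.perf_counter()
    print(f"sequential gather: {t1 - t0:.3f}s, random gather: {t2 - t1:.3f}s")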

Criticism of the current state of programming

Computer scientist David May is credited with the observation that "Software efficiency halves every 18 months, compensating Moore's Law."

May goes on to state:

In ubiquitous systems, halving the instructions executed can double the battery life and big data sets bring big opportunities for better software and algorithms: Reducing the number of operations from N × N to N × log(N) has a dramatic effect when N is large ... for N = 30 billion, this change is as good as 50 years of technology improvements.

  • Software author Adam N. Rosenburg in his blog "The failure of the Digital computer", has described the current state of programming as nearing the "Software event horizon", (alluding to the fictitious "shoe event horizon" described by Douglas Adams in his Hitchhiker's Guide to the Galaxy book). He estimates there has been a 70 dB factor loss of productivity or "99.99999 percent, of its ability to deliver the goods", since the 1980s—"When Arthur C. Clarke compared the reality of computing in 2001 to the computer HAL 9000 in his book 2001: A Space Odyssey, he pointed out how wonderfully small and powerful computers were but how disappointing computer programming had become".

Competitions for the best algorithms

The following competitions invite entries for the best algorithms based on some arbitrary criteria decided by the judges:

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...