Saturday, January 7, 2023

Empire and Communications

From Wikipedia, the free encyclopedia

The latest edition of Harold Innis's Empire and Communications.

Empire and Communications is a book published in 1950 by University of Toronto professor Harold Innis. It is based on six lectures Innis delivered at Oxford University in 1948. The series, known as the Beit Lectures, was dedicated to exploring British imperial history. Innis, however, decided to undertake a sweeping historical survey of how communications media influence the rise and fall of empires. He traced the effects of media such as stone, clay, papyrus, parchment and paper from ancient to modern times.

Innis argued that the "bias" of each medium toward space or toward time helps to determine the nature of the civilization in which that medium dominates. "Media that emphasize time are those that are durable in character such as parchment, clay and stone," he writes in his introduction. These media tend to favour decentralization. "Media that emphasize space are apt to be less durable and light in character, such as papyrus and paper." These media generally favour large, centralized administrations. Innis believed that to persist in time and to occupy space, empires needed to strike a balance between time-biased and space-biased media. Such a balance is likely to be threatened, however, when monopolies of knowledge exist favouring some media over others.

Empire and Communications examines the impact of media such as stone, clay, papyrus and the alphabet on the empires of Egypt and Babylonia. It also looks at the oral tradition in ancient Greece; the written tradition and the Roman Empire; the influence of parchment and paper in medieval Europe and the effects of paper and the printing press in modern times.

Chapter 1. Introduction

Harold Innis's highly condensed prose style, which frequently ranges over many centuries and several key ideas in one or two sentences, can make his writing in Empire and Communications difficult to understand. Biographer Paul Heyer recommends that readers use Innis's introduction as a helpful guide.

Harold Innis noted that papyrus documents enabled Rome to administer its huge empire.

Empire, bias and balance

In his introduction, Innis promises to examine the significance of communications in a small number of empires. "The effective government of large areas," he writes, "depends to a very important extent on the efficiency of communication." He argues, for example, that light and easily transported papyrus enabled Rome to govern a large, centralized empire. For Innis, papyrus is associated with the political and administrative control of space; it is therefore a space-biased medium. Parchment, dominant after the breakup of the Roman Empire, was a durable medium used for hand copying manuscripts in medieval monasteries. For Innis, parchment favours decentralization and is associated with the religious control of time; it is therefore a time-biased medium. Innis argues that in order to last, large-scale political organizations such as empires must balance biases toward time and space. "They have tended to flourish under conditions in which civilization reflects the influence of more than one medium and in which the bias of one medium towards decentralization is offset by the bias of another medium towards centralization."

Writing, printing, and speech

Innis divides the history of the empires and civilizations he will examine into two periods, one for writing and the other for printing. "In the writing period we can note the importance of various media such as the clay tablet of Mesopotamia, the papyrus roll in the Egyptian and in the Graeco-Roman world, parchment codex in the late Graeco-Roman world and the early Middle Ages, and paper after its introduction in the Western world from China." Innis notes that he will concentrate on paper as a medium in the printing period along with the introduction of paper-making machinery at the beginning of the 19th century and the use of wood pulp in the manufacture of paper after 1850.

He is quick to add, however, that it would be presumptuous to conclude that writing alone determined the course of civilizations. Historians naturally focus on writing because it endures. "We are apt to overlook the significance of the spoken word," he writes, "and to forget that it has left little tangible remains." For Innis, that tendency poses a problem. "It is scarcely possible for generations disciplined in the written and the printed tradition to appreciate the oral tradition." Therefore, the media biases of one civilization make understanding other peoples difficult, if not impossible.

"A change in the type of medium implies a change in the type of appraisal and hence makes it difficult for one civilization to understand another." As an example, Innis refers to our tendency to impose a modern conception of time on past civilizations. "With the dominance of arithmetic and the decimal system, dependent apparently on the number of fingers or toes, modern students have accepted the linear measure of time," he writes. "The dangers of applying this procrustean device in the appraisal of civilizations in which it did not exist illustrate one of numerous problems."

Innis also contrasts the strikingly different effects of writing and speaking. He argues that "writing as compared with speaking involves an impression at the second remove and reading an impression at the third remove. The voice of a second-rate person is more impressive than the published opinion of superior ability."

Chapter 2. Egypt: From stone to papyrus

Harold Innis traces the evolution of ancient Egyptian dynasties and kingdoms in terms of their use of stone or papyrus as dominant media of communication. His outline of Egyptian civilization is a complex and highly detailed analysis of how these media, along with several other technologies, affected the distribution of power in society.

Influence of the Nile

A funerary stele from ancient Egypt's Middle Kingdom. Innis believed that hieroglyphics engraved in stone originally perpetuated the divine power of Egyptian kings.

Innis begins, as other historians do, with the crucial importance of the Nile as a formative influence on Egyptian civilization. The river provided the water and fertile land needed for agricultural production in a desert region. Innis writes that the Nile, therefore, "acted as a principle of order and centralization, necessitated collective work, created solidarity, imposed organizations on the people, and cemented them in a society." This observation is reminiscent of Innis's earlier work on the economic influence of waterways and other geographical features in his book The Fur Trade in Canada, first published in 1930. However, in Empire and Communications, Innis extends his economic analysis to explore the influence of the Nile on religion, associating the river with the sun god Ra, creator of the universe. In a series of intellectual leaps, Innis asserts that Ra's power was vested in an absolute monarch whose political authority was reinforced by specialized astronomical knowledge. Such knowledge was used to produce the calendar that could predict the Nile's yearly floods.

Stone, hieroglyphics and absolute monarchs

As the absolute monarchy extended its influence over Egypt, a pictorial hieroglyphic writing system was invented to express the idea of royal immortality. According to Innis, the idea of the divine right of autocratic monarchs was developed from 2895 BC to 2540 BC. "The pyramids," Innis writes, "carried with them the art of pictorial representation as an essential element of funerary ritual." The written word on the tomb, he asserts, perpetuated the divine power of kings.

Innis suggests that the decline of the absolute monarchy after 2540 BC may have been related to the need for a more accurate calendar based on the solar year. Priests may have developed such a calendar, increasing their power and authority. After 2000 BC, peasants, craftsmen, and scribes obtained religious and political rights. "The profound disturbances in Egyptian civilization," Innis writes, "involved in the shift from absolute monarchy to more democratic organization coincided with a shift in emphasis on stone as a medium of communication or as a basis of prestige, as shown in the pyramids, to an emphasis on papyrus."

Papyrus and the power of scribes

Innis traces the influence of the newer medium of papyrus on political power in ancient Egypt. The growing use of papyrus led to the replacement of cumbersome hieroglyphic scripts by cursive or hieratic writing. Rapid writing styles made administration more efficient, and highly trained scribes became part of a privileged civil service. Innis writes, however, that the replacement of one dominant medium by another led to upheaval.

The shift from dependence on stone to dependence on papyrus and the changes in political and religious institutions imposed an enormous strain on Egyptian civilization. Egypt quickly succumbed to invasion from peoples equipped with new instruments of attack. Invaders with the sword and the bow and long-range weapons broke through Egyptian defence, dependent on the battle-axe and the dagger. With the use of bronze and possibly iron weapons, horses and chariots, Syrian Semitic peoples under the Hyksos or Shepherd kings captured and held Egypt from 1660 to 1580 BC.

Hyksos rule lasted about a century until the Egyptians drove them out. Innis writes that the invaders had adopted hieroglyphic writing and Egyptian customs, "but complexity enabled the Egyptians to resist." The Egyptians may have won their victory using horses and light chariots acquired from the Libyans.

Empire and the one true god

Innis writes that the military organization that expelled the Hyksos enabled the Egyptians to establish and expand an empire that included Syria and Palestine, and that eventually reached the Euphrates. Egyptian administrators used papyrus and a postal service to run the empire, but adopted cuneiform as a more efficient script. The pharaoh Akhnaton tried to introduce Aten, the solar disk, as the one true god, a system of worship that would provide a common ideal for the whole empire. But the priests and the people resisted "a single cult in which duty to the empire was the chief consideration." Priestly power, Innis writes, resulted from religious control over the complex and difficult art of writing. The monarch's attempts to maintain an empire extended in space were defeated by a priestly monopoly over knowledge systems concerned with time, systems that began with the need for accurate predictions about when the Nile would overflow its banks. Innis argues that priestly theocracy gradually cost Egypt its empire. "Monopoly over writing supported an emphasis on religion and the time concept, which defeated efforts to solve the problem of space."

Chapter 3. Babylonia: The origins of writing

In this chapter, Innis outlines the history of the world's first civilizations in Mesopotamia. He starts with the fertile plains between the Tigris and Euphrates rivers, but as the history unfolds, his discussion extends to large parts of the modern Middle East. Biographer Paul Heyer's warning that Innis's work can be challenging applies here as well. Innis's condensed, elliptical prose demands careful reading as he traces the origins of writing from the clay tablet and cuneiform script to the efficient Phoenician alphabet written on parchment and papyrus. Along the way, Innis comments on many aspects of the ancient Middle Eastern empires, including power struggles between priests and kings, the evolution of military technologies and the development of the Hebrew Bible.

History begins at Sumer

Innis begins by observing that unlike in Egypt where calculating the timing of the Nile's flooding was a source of power, the Tigris and Euphrates rivers in southern Mesopotamia were used for irrigation. Therefore the ability to measure time precisely was somewhat less critical. Nevertheless, as in Egypt, the small city-states of Sumer depended on the rivers and so, the cycles of agricultural production were organized around them. The rivers also provided communications materials. In Egypt, the Nile's papyrus became a medium for writing while in Mesopotamia, the rivers yielded the alluvial sediments the Sumerians used to fashion the clay tablets on which they inscribed their wedge-shaped, cuneiform script. Their earliest writing recorded agricultural accounts and economic transactions.

Innis points out that the tablets were not well suited to pictographic writing because making straight lines "tended to pull up the clay." Therefore, Sumerian scribes used a cylindrical reed stylus to stamp or press wedges and lines on the moist tablet. Scribes gradually developed cuneiform signs to represent syllables and the sounds of the spoken language. Innis writes that as a heavy material, clay was not very portable and so was not generally suited for communication over large areas. Cuneiform inscription required years of training overseen by priests. Innis contends, therefore, that as a writing medium, clay tended to favour decentralization and religious control.

From city-states to empires

Innis suggests that religious control in Sumer became a victim of its own successes. "The accumulation of wealth and power in the hands of priests and the temple organizations," he writes, "was probably followed by ruthless warfare between city-states." The time-bound priests, unskilled in technological change and the military arts, lost power to spatially oriented kings intent on territorial expansion. Around 2350 BC, the Sumerians were conquered by their northern, Semitic neighbours, the Akkadians. Under Sargon the Great, the empire expanded to include extensive territories reaching northwest as far as Turkey and west to the Mediterranean. Thus began the rise and fall of a series of empires over approximately two thousand years. Innis mentions many of them, but focuses more attention on innovations that facilitated their growth. These include the advancement of civil law under Hammurabi, the development of mathematics including fixed standards of weights and measures, as well as the breeding of horses that combined speed with strength and that, along with three-man chariots, helped deliver spectacular military victories to the Assyrians.

Alphabet, empire and trade

The Phoenician alphabet. The Phoenicians were sailors and traders who travelled widely taking their versatile alphabet with them.

In discussing the advent and spread of the alphabet, Innis refers to what he sees as the subversive relationship between those at the centre of civilizations and those on their fringes or margins. He argues that monopolies of knowledge develop at the centre only to be challenged and eventually overthrown by new ideas or techniques that take shape on the margins. Thus, the Phoenician alphabet, a radically simplified writing system, undermined the elaborate hieroglyphic and cuneiform scripts overseen by priestly elites in Egypt and Babylonia. "The Phoenicians had no monopoly of knowledge," Innis writes, "[which] might hamper the development of writing." As a trading people, the Phoenicians needed "a swift and concise method of recording transactions." The alphabet with its limited number of visual symbols to represent the primary elements of human speech was well suited to trade. "Commerce and the alphabet were inextricably interwoven, particularly when letters of the alphabet were used as numerals." The alphabet, combined with the use of parchment and papyrus, Innis argues, had a decentralizing effect favouring cities and smaller nations over centralized empires. He suggests that improved communication, made possible by the alphabet, enabled the Assyrians and the Persians to administer large empires in which trading cities helped offset concentrations of power in political and religious organizations.

Alphabet, the Hebrews and religion

Innis sketches the influence of the alphabet on the Hebrews in the marginal territory of Palestine. The Hebrews combined oral and written traditions in their scriptures. Innis points out that they had previously acquired key ideas from the Egyptians. "The influence of Egypt on the Hebrews," he writes, "was suggested in the emphasis on the sacred character of writing and on the power of the word which when uttered brought about creation itself. The word is the word of wisdom. Word, wisdom, and God were almost identical theological concepts." The Hebrews distrusted images. For them, words were the true source of wisdom. "The written letter replaced the graven image as an object of worship." In a typically complex passage, Innis writes:

"Denunciation of images and concentration on the abstract in writing opened the way for advance from blood relationship to universal ethical standards and strengthened the position of the prophets in their opposition to absolute monarchical power. The abhorrence of idolatry of graven images implied a sacred power in writing, observance of the law, and worship of the one true God."

The alphabet enabled the Hebrews to record their rich oral tradition in poetry and prose. "Hebrew has been described as the only Semitic language before Arabic to produce an important literature characterized by simplicity, vigour and lyric force. With other Semitic languages it was admirably adapted to the vivid, vigorous description of concrete objects and events." Innis traces the influence of various strands in scriptural writing suggesting that the combination of these sources strengthened the movement toward monotheism.

In a summary passage, Innis explores the wide-ranging influence of the alphabet in ancient times. He argues that it enabled the Assyrians and Persians to expand their empires, allowed for the growth of trade under the Arameans and Phoenicians and invigorated religion in Palestine. As such, the alphabet provided a balance. "An alphabet became the basis of political organization through efficient control of territorial space and of religious organization through efficient control over time in the establishment of monotheism."

Chapter 4. Greece and the oral tradition

"Greek civilization," Innis writes, "was a reflection of the power of the spoken word." In this chapter, he explores how the vitality of the spoken word helped the ancient Greeks create a civilization that profoundly influenced all of Europe. Greek civilization differed in significant ways from the empires of Egypt and Babylonia. Innis biographer John Watson notes that those preceding empires "had revolved around an uneasy alliance of absolute monarchs and scholarly theocrats." The monarchs ruled by force while an elite priestly class controlled religious dogma through their monopolies of knowledge over complex writing systems. "The monarch was typically a war leader whose grasp of the concept of space allowed him to expand his territory," Watson writes, "incorporating even the most highly articulated theocracies. The priests specialized in elaborating conceptions of time and continuity." Innis argues that the Greeks struck a different balance, one based on "the freshness and elasticity of an oral tradition" that left its stamp on Western poetry, drama, sculpture, architecture, philosophy, science and mathematics.

Socrates, Plato and the spoken word

Detail of the painting The Death of Socrates by Jacques-Louis David.

Innis begins by examining Greek civilization at its height in the 5th century BC. He points out that the philosopher Socrates (c. 470 BC–399 BC) "was the last great product and exponent of the oral tradition." Socrates taught using a question and answer technique that produced discussion and debate. His student, Plato (428/427 BC – 348/347 BC), elaborated on these Socratic conversations by writing dialogues in which Socrates was the central character. This dramatic device engaged readers in the debate while allowing Plato to search for truth using a dialectical method, one based on discussion. "The dialogues were developed," Innis writes, "as a most effective instrument for preserving [the] power of the spoken word on the written page." He adds that Plato's pupil, Aristotle (384 BC – 322 BC), regarded the Platonic dialogues as "half-way between poetry and prose." Innis argues that Plato's use of the flexible oral tradition in his writing enabled him to escape the confines of a rigid philosophical system. "Continuous philosophical discussion aimed at truth. The life and movement of dialectic opposed the establishment of a finished system of dogma." This balance between speech and prose also contributed to the immortality of Plato's work.

Innis writes that the power of the oral tradition reached its height in the tragedies of Aeschylus, Sophocles and Euripides when "drama became the expression of Athenian democracy." He argues that tragedy attracted the interest and participation of everyone. "To know oneself was to know man's powerlessness and to know the indestructible and conquering majesty of suffering humanity."

For Innis, the fall of Athens to Sparta in 404 BC and the trial and execution of Socrates for corrupting Athenian youth were symptoms of the collapse of the older oral culture. That culture had sustained a long poetic tradition, but Plato attacked poetry as a teaching device and expelled poets from his ideal republic. According to Innis, Plato and Aristotle developed prose in defence of a new culture in which gods and poets were subordinated to philosophical and scientific inquiry. Innis argues that eventually, the spread of writing widened the gap between the city-states hastening the collapse of Greek civilization.

The Greek alphabet

Innis notes that the early Mycenaean Greeks of the Bronze Age developed their own styles of communication because they escaped the cultural influence of the Minoans they had conquered on the island of Crete. "The complexity of the script of Minoan civilization and its relative restriction to Crete left the Greeks free to develop their own traditions." Innis adds that the growth of a strong oral tradition reflected in Greek epic poetry also fostered resistance to the dominance of other cultures. This led the Greeks to take over and modify the Phoenician alphabet, possibly around the beginning of the 7th century BC. The Greeks adapted this 22-letter Semitic alphabet, which consisted only of consonants, to their rich oral tradition by using some of its letters to represent vowel sounds. Innis writes that the vowels in each written word "permitted the expression of fine distinctions and light shades of meaning." The classics professor Eric Havelock, whose work influenced Innis, makes a similar point when he argues that this alphabet enabled the Greeks to record their oral literary tradition with a "wealth of detail and depth of psychological feeling" absent in other Near Eastern civilizations with more limited writing systems. Innis himself quotes scholar Richard Jebb's claim that the Greek language "'responds with happy elasticity to every demand of the Greek intellect...the earliest work of art created by the spontaneous working of the Greek mind.'"

Poetry, politics and the oral tradition

"The power of the oral tradition," Innis writes, "implied the creation of a structure suited to its needs." That structure consisted of the metres and stock phrases of epic poetry which included the Homeric poems, the Iliad and Odyssey. The epics were sung by professional minstrels who pleased audiences by reshaping the poems to meet the needs of new generations. Innis points out that music was central to the oral tradition and the lyre accompanied the performance of the epic poems. He argues that the Homeric poems reflected two significant developments. The first was the rise of an aristocratic civilization which valued justice and right action over the traditional ties of kinship. The second was the humanization of the Greek gods whose limited powers encouraged belief in rational explanations for the order of things. "Decline of belief in the supernatural led to the explanation of nature in terms of natural causes," Innis writes. "With the independent search for truth, science was separated from myth."

Head of the poet Sappho.

Gradually, the flexible oral tradition gave rise to other kinds of poetry. Innis notes that these new kinds of literature "reflected the efficiency of the oral tradition in expressing the needs of social change." Hesiod wrote about agricultural themes, becoming the first spokesman for common people. Innis writes that his poems were produced "by an individual who made no attempt to conceal his personality." In the 7th century BC, Archilochos took poetry a step further when he contributed to breaking down the heroic code of epic poetry. Innis suggests he responded to a rising public opinion while historian J.B. Bury describes him as venting his feelings freely and denouncing his enemies. Innis argues that these changes in poetic style and form coincided with the replacement of Greek kingdoms by republics in the 8th and 7th centuries BC. Finally, he mentions the development of shorter, lyric poetry that could be intensely personal as shown in the work of Sappho. This profusion of short personal lyrics likely coincided with the spread of writing and the increasing use of papyrus from Egypt.

Greek science and philosophy

Innis credits the oral tradition with fostering the rise of Greek science and philosophy. He argues that when combined with the simplicity of the alphabet, the oral tradition prevented the development of a highly specialized class of scribes and a priestly monopoly over education. Moreover, unlike the Hebrews, the Greeks did not develop written religious texts. "The Greeks had no Bible with a sacred literature attempting to give reasons and coherence to the scheme of things, making dogmatic assertions and strangling science in infancy." Innis contends that the flexibility of the oral tradition encouraged the introduction of a new medium, mathematics. Thales of Miletus may have discovered trigonometry. He also studied geometry and astronomy, using mathematics as "a means of discarding allegory and myth and advancing universal generalizations." Thus, mathematics gave rise to philosophical speculation. The map maker, Anaximander also sought universal truths becoming "the first to write down his thoughts in prose and to publish them, thus definitely addressing the public and giving up the privacy of his thought." According to Innis, this use of prose "reflected a revolutionary break, an appeal to rational authority and the influence of the logic of writing."

Chapter 5. Rome and the written tradition

In this chapter, Harold Innis focuses on the gradual displacement of oral communication by written media during the long history of the Roman Empire. The spread of writing hastened the downfall of the Roman Republic, he argues, facilitating the emergence of a Roman Empire stretching from Britain to Mesopotamia. To administer such a vast empire, the Romans were forced to establish centralized bureaucracies. These bureaucracies depended on supplies of cheap papyrus from the Nile Delta for the long-distance transmission of written rules, orders and procedures. The bureaucratic Roman state backed by the influence of writing, in turn, fostered absolutism, the form of government in which power is vested in a single ruler. Innis adds that Roman bureaucracy destroyed the balance between oral and written law giving rise to fixed, written decrees. The torture of Roman citizens and the imposition of capital punishment for relatively minor crimes became common as living law "was replaced by the dead letter." Finally, Innis discusses the rise of Christianity, a religion which spread through the use of scripture inscribed on parchment. He writes that the Byzantine Empire in the east eventually flourished because of a balance in media biases. Papyrus enabled the governing of a large spatial empire, while parchment contributed to the development of a religious hierarchy concerned with time.

Rome and Greece

The initials SPQR stood for Senātus Populusque Rōmānus ("The Senate and the People of Rome"). They were emblazoned on the banners of Roman legions.

"The achievements of a rich oral tradition in Greek civilization," Innis writes, "became the basis of Western culture." He asserts that Greek culture had the power "to awaken the special forces of each people by whom it was adopted" and the Romans were no exception. According to Innis, it appears Greek colonies in Sicily and Italy along with Greek traders introduced the Greek alphabet to Rome in the 7th century BC. The alphabet was developed into a Graeco-Etruscan script when Rome was governed by an Etruscan king. The Etruscans also introduced Greek gods in the 6th century BC apparently to reinforce their own rule. Rome became isolated from Greece in the 5th and 4th centuries BC and overthrew the monarchy. A patrician aristocracy took control, but after prolonged class warfare, gradually shared power with the plebeians. Innis suggests that Roman law flourished at this time because of its oral tradition. A priestly class, "equipped with trained memories," made and administered the laws, their power strengthened because there was no body of written law. Although plebeian pressure eventually resulted in the adoption of the Twelve Tables—a written constitution—interpretation remained in the hands of priests in the College of Pontiffs. One of Roman law's greatest achievements, Innis writes, lay in the development of civil laws governing families, property and contracts. Paternal rights were limited, women became independent and individual initiative was given the greatest possible scope.

Innis seems to suggest that political stability coupled with strong oral traditions in law and religion contributed to the unity of the Roman Republic. He warns, however, that the growing influence of written laws, treaties and decrees in contrast to the oral tradition of civil law "boded ill for the history of the republic and the empire."

Innis quickly sketches the Roman conquest of Italy and its three wars with the North African city of Carthage. The Punic Wars ended with the destruction of Carthage in 146 BC. At the same time, Rome pursued military expansion in the eastern Mediterranean eventually conquering Macedonia and Greece as well as extending Roman rule to Pergamum in modern-day Turkey.

Rome and the problems of Greek empire

Innis interrupts his account of Roman military expansion to discuss earlier problems that had arisen from the Greek conquests undertaken by Philip of Macedon and his son, Alexander the Great. Philip and Alexander had established a Macedonian Empire which controlled the Persian Empire as well as territory as far east as India. Innis suggests Rome would inherit the problems that faced Philip and Alexander including strong separatist tendencies. After Alexander's death, four separate Hellenistic dynasties arose. The Seleucids controlled the former Persian Empire; the Ptolemies ruled in Egypt; the Attalids in Pergamum and the Antigonids in Macedonia.

Seleucid dynasty

The Seleucid rulers attempted to dominate Persian, Babylonian and Hebrew religions but failed to establish the concept of the Greek city-state. Their kingdom eventually collapsed. Innis concludes that monarchies that lacked the binding powers of nationality and religion and depended on force were inherently insecure, unable to resolve dynastic problems.

Ptolemaic dynasty

Innis discusses various aspects of Ptolemaic rule over Egypt including the founding of the ancient library and university at Alexandria made possible by access to abundant supplies of papyrus. "By 285 BC the library established by Ptolemy I had 20,000 manuscripts," Innis writes, "and by the middle of the first century 700,000, while a smaller library established by Ptolemy II...possibly for duplicates had 42,800." He points out that the power of the written tradition in library and university gave rise to specialists, not poets and scholars: drudges who corrected proofs and those who indulged in the mania of book collecting. "Literature was divorced from life, thought from action, poetry from philosophy." Innis quotes the epic poet Apollonius's claim that "a great book was a great evil." Cheap papyrus also facilitated the rise of an extensive administrative system eventually rife with nepotism and other forms of bureaucratic corruption. "An Egyptian theocratic state," Innis notes, "compelled its conquerors to establish similar institutions designed to reduce its power."

Attalid dynasty

Innis contrasts the scholarly pursuits of the Attalid dynasty at Pergamum with what he sees as the dilettantism of Alexandria. He writes that Eumenes II who ruled from 197 to 159 BC established a library, but was forced to rely on parchment because Egypt had prohibited the export of papyrus to Pergamum. Innis suggests that the Attalids probably preserved the masterpieces of ancient Greek prose. He notes that Pergamum had shielded a number of cities from attacks by the Gauls. "Its art reflected the influence of the meeting of civilization and barbarism, a conflict of good and evil, in the attempt at unfamiliar ways of expression."

Antigonid dynasty

Innis writes that the Antigonids "gradually transformed the small city-states of Greece into municipalities." They captured Athens in 261 BC and Sparta in 222 BC. The Greek cities of this period developed common interests. "With supplies of papyrus and parchment and the employment of educated slaves," Innis writes, "books were produced on an unprecedented scale. Hellenistic capitals provided a large reading public." Most of the books, however, were "third-hand compendia of snippets and textbooks, short cuts to knowledge, quantities of tragedies, and an active comedy of manners in Athens. Literary men wrote books about other books and became bibliophiles." Innis reports that by the 2nd century "everything had been swamped by the growth of rhetoric." He argues that once classical Greek philosophy "became crystallized in writing," it was superseded by an emphasis on philosophical teaching. He mentions Stoicism, the Cynics and Epicurean teachings, all of which emphasized the priority of reason over popular religion. "The Olympian religion and the city-state were replaced by philosophy and science for the educated and by Eastern religions for the common man." As communication between these two groups became increasingly difficult, cultural division stimulated the rise of a class structure. Innis concludes that the increasing emphasis on writing also created divisions among Athens, Alexandria and Pergamum, weakening science and philosophy and opening "the way to religions from the East and force from Rome in the West."

Greek influence and Roman prose

Innis returns to his account of Roman history by noting that Rome's military successes in the eastern Mediterranean brought it under the direct influence of Greek culture. He quotes the Roman poet Horace: "Captive Greece took captive her proud conqueror." Innis gives various examples of Greek influence in Rome. They include the introduction of Greek tragedies and comedies at Roman festivals to satisfy the demands of soldiers who had served in Greek settlements as well as the translation of the Odyssey into Latin.

Innis mentions that there was strong opposition to this spread of Greek culture. He reports, for example, that Cato the Elder deplored what he saw as the corrupting effects of Greek literature. Cato responded by laying the foundations for a dignified and versatile Latin prose. In the meantime, the Roman Senate empowered officials to expel those who taught rhetoric and philosophy, and in 154 BC two disciples of Epicurus were banished from Rome. Nevertheless, Innis points out that Greek influence continued as "Greek teachers and grammarians enhanced the popularity of Hellenistic ideals in literature."

Meantime, Innis asserts, Roman prose "gained fresh power in attempts to meet problems of the Republic." He is apparently referring to the vast enrichment of the Roman aristocracy and upper middle class as wealth poured in from newly conquered provinces. "The plunder from the provinces provided the funds for that orgy of corrupt and selfish wealth which was to consume the Republic in revolution," writes Will Durant in his series of volumes called The Story of Civilization. Innis mentions that the large-scale farms owned by aristocrats brought protests presumably from small farmers forced off the land and into the cities as part of a growing urban proletariat. The Gracchi brothers were among the first, Innis writes, "to use the weapon of Greek rhetoric" in their failed attempts to secure democratic reforms. Gaius Gracchus made Latin prose more vivid and powerful. Innis adds that political speeches such as his "were given wider publicity through an enlarged circle of readers." As political oratory shaped Latin prose style, written speech almost equaled the power of oral speech.

Writing, empire and religion

Rome's dominance of Egypt, Innis writes, gave it access to papyrus which supported a chain of interrelated developments that would eventually lead to the decline and fall of Rome. Papyrus facilitated the spread of writing which in turn, permitted the growth of bureaucratic administration needed to govern territories that would eventually stretch from Britain to Mesopotamia. "The spread of writing contributed to the downfall of the Republic and the emergence of the empire," Innis writes.

Roman Colosseum, symbol of permanence.

Centralized administrative bureaucracy helped create the conditions for the emergence of absolute rulers such as the Caesars which, in turn, led to emperor worship. According to Innis, the increased power of writing touched every aspect of Roman culture including law which became rigidly codified and increasingly reliant on such harsh measures as torture and capital punishment even for relatively trivial crimes. "The written tradition dependent on papyrus and the roll supported an emphasis on centralized bureaucratic administration," Innis writes. "Rome became dependent on the army, territorial expansion, and law at the expense of trade and an international economy."

Innis notes that Rome attempted to increase its imperial prestige by founding libraries. And, with the discovery of cement about 180 BC, the Romans constructed magnificent buildings featuring arch, vault and dome. "Vaulted architecture became an expression of equilibrium, stability, and permanence, monuments which persisted through centuries of neglect."

Innis argues that the gradual rise of Christianity from its origins as a Jewish sect among lower social strata on the margins of empire was propelled by the development of the parchment codex, a much more convenient medium than cumbersome papyrus rolls. "The oral tradition of Christianity was crystallized in books which became sacred," Innis writes. He adds that after breaking away from Judaism, Christianity was forced to reach out to other religions, its position strengthened further by scholars who attempted to synthesize Jewish religion and Greek philosophy in the organization of the Church.

Constantine ended official persecution of Christianity and moved the imperial capital to Constantinople eventually creating a religious split between the declining Western Roman Empire and believers in the East. "As the power of empire was weakened in the West that of the Church of Rome increased and difficulties with heresies in the East became more acute." Innis contends the Eastern or Byzantine Empire survived after the fall of Rome because it struck a balance between time and space-biased media. "The Byzantine empire developed on the basis of a compromise between organization reflecting the bias of different media: that of papyrus in the development of an imperial bureaucracy in relation to a vast area and that of parchment in the development of an ecclesiastical hierarchy in relation to time."

Chapter 6. Middle Ages: Parchment and paper

In Chapter 6, Innis tries to show how the medium of parchment supported the power of churches, clergy and monasteries in medieval Europe after the breakdown of the Roman empire. Rome's centralized administration had depended on papyrus, a fragile medium produced in the Nile Delta. Innis notes that parchment, on the other hand, is a durable medium that can be produced wherever farm animals are raised. He argues, therefore, that parchment is suited to the decentralized administration of a wide network of local religious institutions. However, the arrival of paper from China via the Arab world challenged the power of religion and its preoccupation with time. "A monopoly of knowledge based on parchment," Innis writes, "invited competition from a new medium such as paper which emphasized the significance of space as reflected in the growth of nationalist monarchies." He notes that paper also facilitated the growth of commerce and trade in the 13th century.

Monasteries and books

Innis writes that monasticism originated in Egypt and spread rapidly partly in protest against Caesaropapism or the worldly domination of the early Christian church by emperors. He credits St. Benedict with adapting monasticism to the needs of the Western church. The Rule of St. Benedict required monks to engage in spiritual reading. Copying books and storing them in monastery libraries soon became sacred duties. Innis notes that copying texts on parchment required strength and effort:

Working six hours a day the scribe produced from two to four pages and required from ten months to a year and a quarter to copy a Bible. The size of the scriptures absorbed the energies of monasteries. Libraries were slowly built up and uniform rules in the care of books were generally adopted in the 13th century. Demands for space led to the standing of books upright on the shelves in the 14th and 15th centuries and to the rush of library construction in the 15th century.

Innis points out that Western monasteries preserved and transmitted the classics of the ancient world.

Islam, images, and Christianity

Innis writes that Islam (which he sometimes refers to as Mohammedanism) gathered strength by emphasizing the sacredness of the written word. He notes that the Caliph Iezid II ordered the destruction of pictures in Christian churches within the Umayyad Empire. The banning of icons within churches was also sanctioned by Byzantine Emperor Leo III in 730, while Emperor Constantine V issued a decree in 753–754 condemning image worship. Innis writes that this proscription of images was designed to strengthen the empire partly by curbing the power of monks, who relied on images to sanction their authority. Monasteries, he notes, had amassed large properties through their exemption from taxation and competed with the state for labour. Byzantine emperors reacted by secularizing large monastic properties, restricting the number of monks, and persecuting them, driving large numbers to Italy.

The Western church, on the other hand, saw images as useful, especially for reaching the illiterate. Innis adds that by 731, iconoclasts were excluded from the Church, and Charles Martel's defeat of the Arabs in 732 ended Muslim expansion in western Europe. The Synod of Gentilly (767), the Lateran Council (769), and the Second Council of Nicaea (787) sanctioned the use of images, although Charlemagne prohibited image veneration or worship.

Chapter 7. Mass media, from print to radio

In his final chapter, Harold Innis traces the rise of mass media beginning with the printing press in 15th century Europe and ending with mass circulation newspapers, magazines, books, movies and radio in the 19th and 20th centuries. He argues that such media gradually undermined the authority of religion and enabled the rise of science, facilitating Reformation, Renaissance and Revolution, political, industrial and commercial. For Innis, space-biased and mechanized mass media helped create modern empires, European and American, bent on territorial expansion and obsessed with present-mindedness. "Mass production and standardization are the enemies of the West," he warned. "The limitations of mechanization of the printed and the spoken word must be emphasized and determined efforts to recapture the vitality of the oral tradition must be made."

Bibles and the print revolution

Innis notes that the expense of producing hand-copied, manuscript Bibles on parchment invited lower-cost competition, especially in countries where the copyists' guild did not hold a strong monopoly. "In 1470 it was estimated in Paris that a printed Bible cost about one-fifth that of a manuscript Bible," Innis writes. He adds that the sheer size of the scriptures hastened the introduction of printing and that the flexibility of setting the limited number of alphabetic letters in type permitted small-scale, privately-owned printing enterprises.

"By the end of the fifteenth century presses had been established in the larger centres of Europe," Innis writes. This led to a growing book trade as commercially minded printers reproduced various kinds of books including religious ones for the Church, medical and legal texts and translations from Latin and Greek. The Greek New Testament that Erasmus produced in 1516 became the basis for Martin Luther's German translation (1522) and William Tyndale's English version (1526). The rise in the numbers of Bibles and other books printed in native or vernacular languages contributed to the growth in the size or printing establishments and further undermined the influence of hand-copied, religious manuscripts. The printed word gained authority over the written one. Innis quotes historian W.E.H. Lecky: "The age of cathedrals had passed. The age of the printing press had begun."

Innis notes that Luther "took full advantage of an established book trade and large numbers of the New and later the Old Testament were widely distributed at low prices." Luther's attacks on the Catholic Church including his protests against the sale of indulgences, Canon law and the authority of the priesthood were widely distributed as pamphlets along with Luther's emphasis on St. Paul's doctrine of salvation through faith alone.

Monopolies of knowledge had developed and declined partly in relation to the medium of communication on which they were built, and tended to alternate as they emphasized religion, decentralization and time; or force, centralization, and space. Sumerian culture based on the medium of clay was fused with Semitic culture based on the medium of stone to produce the Babylonian empires. Egyptian civilization, based on a fusion of dependence on stone and dependence on papyrus, produced an unstable empire which eventually succumbed to religion. The Assyrian and Persian empires attempted to combine Egyptian and Babylonian civilization, and the latter succeeded with its appeal to toleration. Hebrew civilization emphasized the sacred character of writing in opposition to political organizations that emphasized the graven image. Greek civilization based on the oral tradition produced the powerful leaven that destroyed political empires. Rome assumed control over the medium on which Egyptian civilization had been based, and built up an extensive bureaucracy, but the latter survived in a fusion in the Byzantine Empire with Christianity based on the parchment codex. In the United States the dominance of the newspaper led to large-scale developments of monopolies of communication in terms of space and implied a neglect of problems of time...The bias of paper towards an emphasis on space and its monopolies of knowledge has been checked by the development of a new medium, the radio. The results have been evident in an increasing concern with problems of time, reflected in the growth of planning and the socialized state. The instability involved in dependence on the newspaper in the United States and the Western world has facilitated an appeal to force as a possible stabilizing factor. 
The ability to develop a system of government in which the bias of communication can be checked and an appraisal of the significance of space and time can be reached remains a problem of empire and of the Western world.

Recent critical opinion

Innis's research findings, however dubiously achieved, put him far ahead of his time. Consider a paragraph written in 1948: "Formerly it required time to influence public opinion in favour of war. We have now reached the position in which opinion is systematically aroused and kept near boiling point.... [The] attitude [of the U.S.] reminds one of the stories of the fanatic fear of mice shown by elephants." Innis's was a dark vision because he saw the "mechanized" media as replacing ordinary face-to-face conversation. Such conversations since Socrates had helped equip free individuals to build free societies by examining many points of view. Instead we were to be increasingly dominated by a single point of view in print and electronic media: the view of the imperial centre. Would Innis have been cheered by the rise of the Internet and its millions of online conversations? Probably not. As Watson observes, the advent of the web is eradicating margins. The blogosphere simply multiplies the number of outlets for the same few messages. If we are to hope for new insights and criticism of the imperial centre, Watson says, we will have to turn to marginal groups: immigrants, women, gays, First Nations, francophones and Hispanics. They are as trapped in the imperial centre as the rest of us, but they still maintain a healthy alienation from the centre's self-referential follies. — Crawford Kilian

Reporting bias

From Wikipedia, the free encyclopedia

In epidemiology, reporting bias is defined as "selective revealing or suppression of information" by subjects (for example about past medical history, smoking, sexual experiences). In artificial intelligence research, the term reporting bias is used to refer to people's tendency to under-report all the information available.

In empirical research, authors may under-report unexpected or undesirable experimental results, attributing them to sampling or measurement error, while being more trusting of expected or desirable results, though these may be subject to the same sources of error. In this context, reporting bias can eventually lead to a status quo where multiple investigators discover and discard the same results, and later experimenters justify their own reporting bias by observing that previous experimenters reported different results. Thus, each incident of reporting bias can make future incidents more likely.

Reporting biases in research

Research can only contribute to knowledge if it is communicated from investigators to the community. The generally accepted primary means of communication is “full” publication of the study methods and results in an article published in a scientific journal. Sometimes, investigators choose to present their findings at a scientific meeting as well, through either an oral or a poster presentation. These presentations enter the scientific record as brief “abstracts”, which may or may not be recorded in publicly accessible documents, typically found in libraries or on the World Wide Web.

Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available.

Reporting bias occurs when the dissemination of research findings is influenced by the nature and direction of the results, for instance in systematic reviews. “Positive results” is a commonly used term for a study finding that one intervention is better than another.

Various attempts have been made to overcome the effects of the reporting biases, including statistical adjustments to the results of published studies. None of these approaches has proved satisfactory, however, and there is increasing acceptance that reporting biases must be tackled by establishing registers of controlled trials and by promoting good publication practice. Until these problems have been addressed, estimates of the effects of treatments based on published evidence may be biased.

Case study

Litigation brought by consumers and health insurers against Pfizer in 2004 over fraudulent sales practices in the marketing of the drug gabapentin revealed a comprehensive publication strategy that employed elements of reporting bias. Spin was used to emphasize findings that favored gabapentin and to explain away findings unfavorable to the drug. In this case, favorable secondary outcomes became the focus in place of the original primary outcome, which was unfavorable. Other changes in outcome reporting included the introduction of a new primary outcome, failure to distinguish between primary and secondary outcomes, and failure to report one or more protocol-defined primary outcomes.

The decision to publish certain findings in certain journals is another strategy. Trials with statistically significant findings were generally published in academic journals with higher circulation more often than trials with nonsignificant findings. The timing of publication was also managed: the company tried to optimize the interval between the release of two studies, and trials with nonsignificant findings were published in a staggered fashion so that two consecutive trials would not appear without salient findings. Ghost authorship was also an issue: professional medical writers who drafted the published reports were not properly acknowledged.

Pfizer was still settling the fallout from this case in 2014, ten years after the initial litigation.

Types of reporting bias

Publication bias

The publication or nonpublication of research findings, depending on the nature and direction of the results. Although medical writers have acknowledged the problem of reporting biases for over a century, it was not until the second half of the 20th century that researchers began to investigate the sources and size of the problem of reporting biases.

Over the past two decades, evidence has accumulated that failure to publish research studies, including clinical trials testing intervention effectiveness, is pervasive. Almost all failure to publish is due to failure of the investigator to submit; only a small proportion of studies are not published because of rejection by journals.

The most direct evidence of publication bias in the medical field comes from follow-up studies of research projects identified at the time of funding or ethics approval. These studies have shown that “positive findings” is the principal factor associated with subsequent publication: researchers say that the reason they don't write up and submit reports of their research for publication is usually because they are “not interested” in the results (editorial rejection by journals is a rare cause of failure to publish).

Even those investigators who have initially published their results as conference abstracts are less likely to publish their findings in full unless the results are “significant”. This is a problem because data presented in abstracts are frequently preliminary or interim results and thus may not be reliable representations of what was found once all data were collected and analyzed. In addition, abstracts are often not accessible to the public through journals, MEDLINE, or easily accessed databases. Many are published in conference programs, conference proceedings, or on CD-ROM, and are made available only to meeting registrants.

The main factor associated with failure to publish is negative or null findings. Controlled trials that are eventually reported in full are published more rapidly if their results are positive. Publication bias leads to overestimates of treatment effect in meta-analyses, which in turn can lead doctors and decision makers to believe a treatment is more useful than it is.
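The inflation that publication bias produces can be illustrated with a small simulation (a hypothetical sketch: the sample sizes, trial count, and significance cutoff are illustrative assumptions, not figures from the text). If only trials that come out statistically significant in favour of the experimental arm are "published", the average published effect overstates a true effect of zero:

```python
import random
import statistics

random.seed(42)

def simulate_trial(true_effect=0.0, n=30):
    """One two-arm trial: return the estimated effect and whether the result
    is 'positive' (significant in favour of the experimental arm)."""
    treat = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(control)
    se = ((statistics.variance(treat) + statistics.variance(control)) / n) ** 0.5
    return diff, diff / se > 1.96  # one-sided z-test, roughly alpha = 0.025

trials = [simulate_trial() for _ in range(2000)]
all_effects = [d for d, _ in trials]
published = [d for d, positive in trials if positive]  # only "positive" trials survive

unbiased = statistics.mean(all_effects)  # close to the true effect of 0
biased = statistics.mean(published)      # well above 0: a spurious benefit
print(round(unbiased, 3), round(biased, 3))
```

Because only the upper tail of each trial's sampling distribution survives the significance filter, a pooled analysis of the published trials alone reports a benefit that does not exist, which is the overestimation of treatment effects described above.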

It is now well established that the source of study funding is associated with the publication of more favorable efficacy results, an association that is not otherwise explained by the usual risk-of-bias assessments.

Time lag bias

The rapid or delayed publication of research findings, depending on the nature and direction of the results. In a systematic review of the literature, Hopewell and her colleagues found that overall, trials with “positive results” (statistically significant in favor of the experimental arm) were published about a year sooner than trials with “null or negative results” (not statistically significant or statistically significant in favor of the control arm).

Multiple (duplicate) publication bias

The multiple or singular publication of research findings, depending on the nature and direction of the results. Investigators may also publish the same findings multiple times using a variety of patterns of “duplicate” publication. Many duplicates appear in journal supplements, a literature that can be difficult to access. Positive results appear to be published more often in duplicate, which can lead to overestimates of a treatment effect.

Location bias

The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of results. There is also evidence that, compared to negative or null results, statistically significant results are on average published in journals with greater impact factors, and that publication in the mainstream (non-grey) literature is associated with an overall greater treatment effect compared to the grey literature.

Citation bias

The citation or non-citation of research findings, depending on the nature and direction of the results. Authors tend to cite positive results over negative or null results, and this has been established over a broad cross section of topics. Differential citation may lead to a perception in the community that an intervention is effective when it is not, and it may lead to over-representation of positive findings in systematic reviews if those left uncited are difficult to locate.

Selective pooling of results in a meta-analysis is a form of citation bias that is particularly insidious in its potential to influence knowledge. To minimize bias, pooling of results from similar but separate studies requires an exhaustive search for all relevant studies. That is, a meta-analysis (or pooling of data from multiple studies) must always have emerged from a systematic review (not a selective review of the literature), even though a systematic review does not always have an associated meta-analysis.

Language bias

The publication of research findings in a particular language, depending on the nature and direction of the results. There is a longstanding question about whether investigators choose to publish their negative findings in non-English-language journals and reserve their positive findings for English-language journals. Some research has shown that language restrictions in systematic reviews can change the results of the review; in other cases, authors have not found that such a bias exists.

Knowledge reporting bias

The frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. People write about only some parts of the world around them; much of the information is left unsaid.

Outcome reporting bias

The selective reporting of some outcomes but not others, depending on the nature and direction of the results. A study may be published in full, but pre-specified outcomes omitted or misrepresented. Efficacy outcomes that are statistically significant have a higher chance of being fully published compared to those that are not statistically significant.

Selective reporting of suspected or confirmed adverse treatment effects is an area of particular concern because of the potential for patient harm. In a study of adverse drug events submitted to Scandinavian drug licensing authorities, reports for published studies were less likely than those for unpublished studies to record adverse events (for example, 56% vs 77%, respectively, for Finnish trials involving psychotropic drugs). Recent attention in the lay and scientific media on failure to accurately report adverse events for drugs (e.g., selective serotonin reuptake inhibitors, rosiglitazone, rofecoxib) has resulted in additional publications, too numerous to review, indicating substantial selective outcome reporting (mainly suppression) of known or suspected adverse events.

Meta-analysis

From Wikipedia, the free encyclopedia
 
Graphical summary of a meta-analysis of over 1,000 cases of diffuse intrinsic pontine glioma and other pediatric gliomas, in which information about the mutations involved as well as generic outcomes were distilled from the underlying primary literature.
 
A meta-analysis is a statistical analysis that combines the results of multiple scientific studies. Meta-analyses can be performed when there are multiple scientific studies addressing the same question, with each individual study reporting measurements that are expected to have some degree of error. The aim then is to use approaches from statistics to derive a pooled estimate closest to the unknown common truth based on how this error is perceived. Meta-analytic results are considered the most trustworthy source of evidence by the evidence-based medicine literature.
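The simplest version of such pooling is the fixed-effect, inverse-variance method: each study's estimate is weighted by the reciprocal of its error variance, so more precise studies count for more (a minimal sketch; the three effect sizes and standard errors below are made-up illustrative values):

```python
def inverse_variance_pool(effects, std_errors):
    """Fixed-effect meta-analysis: return the pooled effect and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical mean differences and standard errors from three trials:
effects = [0.30, 0.15, 0.45]
std_errors = [0.10, 0.05, 0.20]

pooled, pooled_se = inverse_variance_pool(effects, std_errors)
print(round(pooled, 3), round(pooled_se, 3))  # → 0.193 0.044
```

The pooled estimate is pulled toward the most precise study (the second), and the pooled standard error is smaller than that of any single study, which is the sense in which combining studies sharpens the estimate.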

Not only can a meta-analysis provide an estimate of the unknown effect size, it can also contrast results from different studies and identify patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light with multiple studies.

However, there are some methodological problems with meta-analysis. If individual studies are systematically biased due to questionable research practices (e.g., data dredging, data peeking, dropping studies) or publication bias at the journal level, the meta-analytic estimate of the overall treatment effect may not reflect the actual efficacy of a treatment. Meta-analysis has also been criticized for averaging differences among heterogeneous studies because these differences could potentially inform clinical decisions. For example, if two groups of patients experiencing different treatment effects are studied in two randomised controlled trials (RCTs) reporting conflicting results, the meta-analytic average is representative of neither group, similarly to averaging the weight of apples and oranges, which is accurate for neither apples nor oranges. In performing a meta-analysis, an investigator must make choices which can affect the results, including deciding how to search for studies, selecting studies based on a set of objective criteria, dealing with incomplete data, analyzing the data, and accounting for or choosing not to account for publication bias. This makes meta-analysis malleable in the sense that these methodological choices made in completing a meta-analysis are not determined but may affect the results. For example, Wanous and colleagues examined four pairs of meta-analyses on the four topics of (a) the job performance and satisfaction relationship, (b) realistic job previews, (c) correlates of role conflict and ambiguity, and (d) the job satisfaction and absenteeism relationship, and illustrated how various judgement calls made by the researchers produced different results.
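The "apples and oranges" criticism is exactly what heterogeneity statistics are meant to flag before an average is trusted. A common diagnostic uses Cochran's Q and the derived I² statistic (a sketch; the four effect sizes below are hypothetical, chosen to mimic two groups of conflicting trials):

```python
def heterogeneity(effects, std_errors):
    """Cochran's Q and the I-squared statistic, using fixed-effect weights."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # share of variation beyond chance
    return q, i2

# Two hypothetical groups of trials with conflicting results:
effects = [0.50, 0.55, -0.40, -0.45]
std_errors = [0.10, 0.10, 0.10, 0.10]

q, i2 = heterogeneity(effects, std_errors)
print(round(q, 1), round(i2, 2))  # → 90.5 0.97
```

With I² near 1, almost all of the observed variation reflects genuine differences between the studies rather than sampling error, signaling that a single pooled average would represent neither group well.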

Meta-analyses are often, but not always, important components of a systematic review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a medical treatment, in an effort to obtain a better understanding of how well the treatment works. Here it is convenient to follow the terminology used by the Cochrane Collaboration, and use "meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of 'research synthesis' or 'evidence synthesis', such as combining information from qualitative studies, for the more general context of systematic reviews. A meta-analysis is a secondary source. In addition, meta-analysis may also be applied to a single study in cases where there are many cohorts which have not gone through identical selection criteria, or to which the same investigational methodologies have not been applied in the same manner or under the same exacting conditions. Under these circumstances each cohort is treated as an individual study and meta-analysis is used to draw study-wide conclusions.

History

The historical roots of meta-analysis can be traced back to 17th-century studies of astronomy, while a paper published in 1904 by the statistician Karl Pearson in the British Medical Journal, which collated data from several studies of typhoid inoculation, is seen as the first time a meta-analytic approach was used to aggregate the outcomes of multiple clinical studies. The first meta-analysis of all conceptually identical experiments concerning a particular research issue, and conducted by independent researchers, has been identified as the 1940 book-length publication Extrasensory Perception After Sixty Years, authored by Duke University psychologists J. G. Pratt, J. B. Rhine, and associates. This encompassed a review of 145 reports on ESP experiments published from 1882 to 1939, and included an estimate of the influence of unpublished papers on the overall effect (the file-drawer problem). The term "meta-analysis" was coined in 1976 by the statistician Gene V. Glass, who stated "my major interest currently is in what we have come to call ...the meta-analysis of research. The term is a bit grand, but it is precise and apt ... Meta-analysis refers to the analysis of analyses". Although this led to him being widely recognized as the modern founder of the method, the methodology behind what he termed "meta-analysis" predates his work by several decades. The statistical theory surrounding meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Harris Cooper, Ingram Olkin, John E. Hunter, Jacob Cohen, Thomas C. Chalmers, Robert Rosenthal, Frank L. Schmidt, and Douglas G. Bonett. In 1992, meta-analysis was first applied to ecological questions by Jessica Gurevitch, who used meta-analysis to study competition in field experiments. The field of meta-analysis has expanded greatly since the 1970s and touches multiple disciplines including psychology, medicine, and ecology. 
Further, the more recent creation of evidence synthesis communities has increased the cross-pollination of ideas, methods, and software tools across disciplines.

Steps in a meta-analysis

A meta-analysis is usually preceded by a systematic review, as this allows identification and critical appraisal of all the relevant evidence (thereby limiting the risk of bias in summary estimates). The general steps are then as follows:

  1. Formulation of the research question, e.g. using the PICO model (Population, Intervention, Comparison, Outcome).
  2. Search of literature
  3. Selection of studies ('incorporation criteria')
    1. Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial
    2. Selection of specific studies on a well-specified subject, e.g. the treatment of breast cancer.
    3. Decide whether unpublished studies are included to avoid publication bias (file drawer problem)
  4. Decide which dependent variables or summary measures are allowed. For instance, when considering a meta-analysis of published (aggregate) data:
    • Differences (discrete data)
    • Means (continuous data)
    • Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences, but it incorporates an index of variation between groups:

          δ = (μt − μc) / σ,

      in which μt is the treatment mean, μc is the control mean, and σ² the pooled variance.
  5. Selection of a meta-analysis model, e.g. fixed effect or random effects meta-analysis.
  6. Examine sources of between-study heterogeneity, e.g. using subgroup analysis or meta-regression.
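As an illustration of the standardized summary measure chosen in step 4, the following is a minimal sketch of computing Hedges' g from two independent samples. The function name is hypothetical and the small-sample correction factor shown is an approximation.

```python
import math

def hedges_g(treatment, control):
    """Hedges' g for two independent samples: Cohen's d computed with a
    pooled standard deviation, then shrunk by an approximate
    small-sample correction factor J (hypothetical helper)."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = sum(treatment) / n1, sum(control) / n2
    # Sample variances (ddof = 1), pooled across the two groups
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # approximate bias correction
    return j * d
```

Because g divides the raw mean difference by a pooled standard deviation, effects measured on different scales become comparable across studies.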

Formal guidance for the conduct and reporting of meta-analyses is provided by the Cochrane Handbook.

For reporting guidelines, see the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.

Methods and assumptions

Approaches

In general, two types of evidence can be distinguished when performing a meta-analysis: individual participant data (IPD), and aggregate data (AD). The aggregate data can be direct or indirect.

AD is more commonly available (e.g. from the literature) and typically represents summary estimates such as odds ratios or relative risks. This can be directly synthesized across conceptually similar studies using several approaches (see below). On the other hand, indirect aggregate data measures the effect of two treatments that were each compared against a similar control group in a meta-analysis. For example, if treatment A and treatment B were directly compared vs placebo in separate meta-analyses, we can use these two pooled results to get an estimate of the effects of A vs B in an indirect comparison as effect A vs Placebo minus effect B vs Placebo.
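The arithmetic of such an indirect comparison is simple. The sketch below assumes two pooled effects on the same scale (e.g. log odds ratios), each with its standard error; the function name and the numbers in the usage line are purely illustrative.

```python
import math

def indirect_comparison(effect_a, se_a, effect_b, se_b):
    """Indirect estimate of A vs B from A vs placebo and B vs placebo.
    The variances add because the two comparisons come from
    independent sets of trials."""
    effect_ab = effect_a - effect_b
    se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
    return effect_ab, se_ab

# Illustrative values: A vs placebo = -0.5 (SE 0.1), B vs placebo = -0.3 (SE 0.2)
effect, se = indirect_comparison(-0.5, 0.1, -0.3, 0.2)
```

Note that the indirect estimate is always less precise than either direct comparison, since the two standard errors combine.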

IPD evidence represents raw data as collected by the study centers. This distinction has raised the need for different meta-analytic methods when evidence synthesis is desired, and has led to the development of one-stage and two-stage methods. In one-stage methods the IPD from all studies are modeled simultaneously whilst accounting for the clustering of participants within studies. Two-stage methods first compute summary statistics for AD from each study and then calculate overall statistics as a weighted average of the study statistics. By reducing IPD to AD, two-stage methods can also be applied when IPD is available; this makes them an appealing choice when performing a meta-analysis. Although it is conventionally believed that one-stage and two-stage methods yield similar results, recent studies have shown that they may occasionally lead to different conclusions.

Statistical models for aggregate data

Direct evidence: Models incorporating study effects only

Fixed effects model

The fixed effect model provides a weighted average of a series of study estimates. The inverse of the estimates' variance is commonly used as study weight, so that larger studies tend to contribute more than smaller studies to the weighted average. Consequently, when studies within a meta-analysis are dominated by a very large study, the findings from smaller studies are practically ignored. Most importantly, the fixed effects model assumes that all included studies investigate the same population, use the same variable and outcome definitions, etc. This assumption is typically unrealistic, as research is often prone to several sources of heterogeneity; e.g. treatment effects may differ according to locale, dosage levels, study conditions, and so on.
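The fixed effect computation described above reduces to a few lines. This is a generic sketch with a hypothetical function name, assuming each study supplies an effect estimate and its variance:

```python
def fixed_effect_pool(effects, variances):
    """Fixed effect (inverse-variance) pooled estimate.
    Each study is weighted by 1/variance, so large, precise studies
    dominate the weighted average."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance
```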

Random effects model

A common model used to synthesize heterogeneous research is the random effects model of meta-analysis. This is simply the weighted average of the effect sizes of a group of studies. The weight that is applied in this process of weighted averaging with a random effects meta-analysis is achieved in two steps:

  1. Step 1: Inverse variance weighting
  2. Step 2: Un-weighting of this inverse variance weighting by applying a random effects variance component (REVC) that is simply derived from the extent of variability of the effect sizes of the underlying studies.

This means that the greater this variability in effect sizes (otherwise known as heterogeneity), the greater the un-weighting, and this can reach a point at which the random effects meta-analysis result becomes simply the un-weighted average effect size across the studies. At the other extreme, when all effect sizes are similar (or variability does not exceed sampling error), no REVC is applied and the random effects meta-analysis defaults to simply a fixed effect meta-analysis (only inverse variance weighting).
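The two-step weighting just described can be sketched using the DerSimonian-Laird moment estimator, one common (though not the only) way to derive the REVC from the observed variability:

```python
def random_effects_pool(effects, variances):
    """Random effects pooling: inverse-variance weights (step 1) are
    un-weighted by adding a DerSimonian-Laird estimate of the
    between-study variance tau^2 (step 2, the REVC)."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed effect
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # REVC, truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2
```

When the effect sizes are homogeneous, tau^2 is truncated to zero and the result coincides with the fixed effect estimate, as described above.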

The extent of this reversal is solely dependent on two factors:

  1. Heterogeneity of precision
  2. Heterogeneity of effect size

Since neither of these factors automatically indicates a faulty larger study or more reliable smaller studies, the re-distribution of weights under this model will not bear a relationship to what these studies actually might offer. Indeed, it has been demonstrated that redistribution of weights is simply in one direction from larger to smaller studies as heterogeneity increases until eventually all studies have equal weight and no more redistribution is possible. Another issue with the random effects model is that the most commonly used confidence intervals generally do not retain their coverage probability above the specified nominal level and thus substantially underestimate the statistical error and are potentially overconfident in their conclusions. Several fixes have been suggested, but the debate continues. A further concern is that the average treatment effect can sometimes be even less conservative compared to the fixed effect model and therefore misleading in practice. One interpretational fix that has been suggested is to create a prediction interval around the random effects estimate to portray the range of possible effects in practice. However, an assumption behind the calculation of such a prediction interval is that trials are considered more or less homogeneous entities and that the included patient populations and comparator treatments can be considered exchangeable, which is usually unattainable in practice.
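The prediction interval mentioned above is straightforward to compute once the pooled estimate, its standard error, and the between-study variance are in hand. The sketch below uses a normal quantile for simplicity; the commonly cited formulation instead uses a t quantile with k − 2 degrees of freedom.

```python
import math

def prediction_interval(pooled, pooled_se, tau2, quantile=1.96):
    """Approximate 95% prediction interval for the effect in a new
    study: the pooled estimate plus/minus a quantile times the square
    root of (tau^2 + pooled SE^2)."""
    half_width = quantile * math.sqrt(tau2 + pooled_se ** 2)
    return pooled - half_width, pooled + half_width
```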

There are many methods used to estimate the between-study variance, with the restricted maximum likelihood (REML) estimator being the least prone to bias and one of the most commonly used. Several advanced iterative techniques for computing the between-study variance exist, including both the maximum likelihood and restricted maximum likelihood methods, and random effects models using these methods can be run on multiple software platforms, including Excel, Stata, SPSS, and R.

Most meta-analyses include between 2 and 4 studies, and such a sample is more often than not inadequate to accurately estimate heterogeneity. Thus it appears that in small meta-analyses, an incorrect zero between-study variance estimate is obtained, leading to a false homogeneity assumption. Overall, it appears that heterogeneity is being consistently underestimated in meta-analyses, and sensitivity analyses in which high heterogeneity levels are assumed could be informative. These random effects models and software packages mentioned above relate to study-aggregate meta-analyses, and researchers wishing to conduct individual patient data (IPD) meta-analyses need to consider mixed-effects modelling approaches.

IVhet model

Doi & Barendregt, working in collaboration with Khan, Thalib and Williams (from the University of Queensland, University of Southern Queensland and Kuwait University), have created an inverse variance quasi-likelihood based alternative (IVhet) to the random effects (RE) model for which details are available online. This was incorporated into MetaXL version 2.0, a free Microsoft Excel add-in for meta-analysis produced by Epigear International Pty Ltd, and made available on 5 April 2014. The authors state that a clear advantage of this model is that it resolves the two main problems of the random effects model. The first advantage of the IVhet model is that coverage remains at the nominal (usually 95%) level for the confidence interval, unlike the random effects model, which drops in coverage with increasing heterogeneity. The second advantage is that the IVhet model maintains the inverse variance weights of individual studies, unlike the RE model, which gives small studies more weight (and therefore larger studies less) with increasing heterogeneity. When heterogeneity becomes large, the individual study weights under the RE model become equal and thus the RE model returns an arithmetic mean rather than a weighted average. This side-effect of the RE model does not occur with the IVhet model, which thus differs from the RE model estimate in two perspectives: pooled estimates will favor larger trials (as opposed to penalizing larger trials in the RE model) and will have a confidence interval that remains within the nominal coverage under uncertainty (heterogeneity). Doi & Barendregt suggest that while the RE model provides an alternative method of pooling the study data, their simulation results demonstrate that using a more specified probability model with untenable assumptions, as with the RE model, does not necessarily provide better results. 
The latter study also reports that the IVhet model resolves the problems related to underestimation of the statistical error, poor coverage of the confidence interval and increased MSE seen with the random effects model and the authors conclude that researchers should henceforth abandon use of the random effects model in meta-analysis. While their data is compelling, the ramifications (in terms of the magnitude of spuriously positive results within the Cochrane database) are huge and thus accepting this conclusion requires careful independent confirmation. The availability of a free software (MetaXL) that runs the IVhet model (and all other models for comparison) facilitates this for the research community.

Direct evidence: Models incorporating additional information

Quality effects model

Doi and Thalib originally introduced the quality effects model. They introduced a new approach to adjustment for inter-study variability by incorporating the contribution of variance due to a relevant component (quality), in addition to the contribution of variance due to random error that is used in any fixed effects meta-analysis model, to generate weights for each study. The strength of the quality effects meta-analysis is that it allows available methodological evidence to be used over subjective random effects, and thereby helps to close the damaging gap which has opened up between methodology and statistics in clinical research. To do this, a synthetic bias variance is computed based on quality information to adjust inverse variance weights, and the quality-adjusted weight of the ith study is introduced. These adjusted weights are then used in meta-analysis. In other words, if study i is of good quality and other studies are of poor quality, a proportion of their quality-adjusted weights is mathematically redistributed to study i, giving it more weight towards the overall effect size. As studies become increasingly similar in terms of quality, re-distribution becomes progressively less and ceases when all studies are of equal quality (in the case of equal quality, the quality effects model defaults to the IVhet model – see previous section). A recent evaluation of the quality effects model (with some updates) demonstrates that despite the subjectivity of quality assessment, the performance (MSE and true variance under simulation) is superior to that achievable with the random effects model. This model thus replaces the untenable interpretations that abound in the literature, and software is available to explore this method further.

Indirect evidence: Network meta-analysis methods

A network meta-analysis looks at indirect comparisons: for example, A has been analyzed in relation to C, and C has been analyzed in relation to B. The relation between A and B is then known only indirectly, and a network meta-analysis examines such indirect evidence of differences between methods and interventions using statistical methods.

Indirect comparison meta-analysis methods (also called network meta-analyses, in particular when multiple treatments are assessed simultaneously) generally use two main methodologies. First is the Bucher method, which is a single or repeated comparison of a closed loop of three treatments such that one of them is common to the two studies and forms the node where the loop begins and ends. Therefore, multiple two-by-two comparisons (3-treatment loops) are needed to compare multiple treatments. This methodology requires that trials with more than two arms have only two arms selected, as independent pair-wise comparisons are required. The alternative methodology uses complex statistical modelling to include the multiple arm trials and comparisons simultaneously between all competing treatments. These have been executed using Bayesian methods, mixed linear models and meta-regression approaches.

Bayesian framework

Specifying a Bayesian network meta-analysis model involves writing a directed acyclic graph (DAG) model for general-purpose Markov chain Monte Carlo (MCMC) software such as WinBUGS. In addition, prior distributions have to be specified for a number of the parameters, and the data have to be supplied in a specific format. Together, the DAG, priors, and data form a Bayesian hierarchical model. To complicate matters further, because of the nature of MCMC estimation, overdispersed starting values have to be chosen for a number of independent chains so that convergence can be assessed. Recently, multiple R software packages were developed to simplify the model fitting (e.g., metaBMA and RoBMA), and the approach has even been implemented in statistical software with a graphical user interface (GUI), such as JASP. Although the complexity of the Bayesian approach limits usage of this methodology, recent tutorial papers are trying to increase accessibility of the methods. Methodology for automation of this method has been suggested but requires that arm-level outcome data are available, and this is usually unavailable. Great claims are sometimes made for the inherent ability of the Bayesian framework to handle network meta-analysis and its greater flexibility. However, this choice of implementation of framework for inference, Bayesian or frequentist, may be less important than other choices regarding the modeling of effects (see discussion on models above).

Frequentist multivariate framework

On the other hand, the frequentist multivariate methods involve approximations and assumptions that are not stated explicitly or verified when the methods are applied (see discussion on meta-analysis models above). For example, the mvmeta package for Stata enables network meta-analysis in a frequentist framework. However, if there is no common comparator in the network, then this has to be handled by augmenting the dataset with fictional arms with high variance, which is not very objective and requires a decision as to what constitutes a sufficiently high variance. The other issue is use of the random effects model in both this frequentist framework and the Bayesian framework. Senn advises analysts to be cautious about interpreting the 'random effects' analysis since only one random effect is allowed for but one could envisage many. Senn goes on to say that it is rather naïve, even in the case where only two treatments are being compared, to assume that random-effects analysis accounts for all uncertainty about the way effects can vary from trial to trial. Newer models of meta-analysis such as those discussed above would certainly help alleviate this situation and have been implemented in the next framework.

Generalized pairwise modelling framework

An approach that has been tried since the late 1990s is the implementation of the multiple three-treatment closed-loop analysis. This has not been popular because the process rapidly becomes overwhelming as network complexity increases. Development in this area was then abandoned in favor of the Bayesian and multivariate frequentist methods which emerged as alternatives. Very recently, automation of the three-treatment closed loop method has been developed for complex networks by some researchers as a way to make this methodology available to the mainstream research community. This proposal does restrict each trial to two interventions, but also introduces a workaround for multiple arm trials: a different fixed control node can be selected in different runs. It also utilizes robust meta-analysis methods so that many of the problems highlighted above are avoided. Further research around this framework is required to determine if this is indeed superior to the Bayesian or multivariate frequentist frameworks. Researchers willing to try this out have access to this framework through free software.

Tailored meta-analysis

Another form of additional information comes from the intended setting. If the target setting for applying the meta-analysis results is known then it may be possible to use data from the setting to tailor the results thus producing a 'tailored meta-analysis'. This has been used in test accuracy meta-analyses, where empirical knowledge of the test positive rate and the prevalence have been used to derive a region in Receiver Operating Characteristic (ROC) space known as an 'applicable region'. Studies are then selected for the target setting based on comparison with this region and aggregated to produce a summary estimate which is tailored to the target setting.

Aggregating IPD and AD

Meta-analysis can also be applied to combine IPD and AD. This is convenient when the researchers who conduct the analysis have their own raw data while collecting aggregate or summary data from the literature. The generalized integration model (GIM) is a generalization of meta-analysis: it allows the model fitted to the individual participant data (IPD) to differ from the one used to compute the aggregate data (AD). GIM can be viewed as a model calibration method for integrating information with more flexibility.

Validation of meta-analysis results

The meta-analysis estimate represents a weighted average across studies, and when there is heterogeneity this may result in the summary estimate not being representative of individual studies. Qualitative appraisal of the primary studies using established tools can uncover potential biases, but does not quantify the aggregate effect of these biases on the summary estimate. Although the meta-analysis result could be compared with an independent prospective primary study, such external validation is often impractical. This has led to the development of methods that exploit a form of leave-one-out cross validation, sometimes referred to as internal-external cross validation (IOCV). Here each of the k included studies in turn is omitted and compared with the summary estimate derived from aggregating the remaining k − 1 studies. A general validation statistic, Vn, based on IOCV, has been developed to measure the statistical validity of meta-analysis results. For test accuracy and prediction, particularly when there are multivariate effects, other approaches which seek to estimate the prediction error have also been proposed.
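The IOCV idea can be sketched as a simple loop: each study is held out in turn and the remaining studies are pooled (here with a fixed effect model, for simplicity), yielding k summary estimates against which the held-out studies can be compared. Function names are illustrative.

```python
def iocv_estimates(effects, variances):
    """Leave-one-out summaries for internal-external cross validation:
    for each study i, pool the other k - 1 studies with
    inverse-variance weights."""
    summaries = []
    for i in range(len(effects)):
        w = [1.0 / v for j, v in enumerate(variances) if j != i]
        e = [x for j, x in enumerate(effects) if j != i]
        summaries.append(sum(wi * ei for wi, ei in zip(w, e)) / sum(w))
    return summaries
```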

Challenges

A meta-analysis of several small studies does not always predict the results of a single large study. Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis cannot correct for poor design or bias in the original studies. This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'. Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size. However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.

Publication bias: the file drawer problem

A funnel plot expected without the file drawer problem. The largest studies converge at the tip while smaller studies show more or less symmetrical scatter at the base
 
A funnel plot expected with the file drawer problem. The largest studies still cluster around the tip, but the bias against publishing negative studies has caused the smaller studies as a whole to have an unjustifiably favorable result to the hypothesis

Another potential pitfall is the reliance on the available body of published studies, which may create exaggerated outcomes due to publication bias, as studies which show negative results or insignificant results are less likely to be published. For example, pharmaceutical companies have been known to hide negative studies and researchers may have overlooked unpublished studies such as dissertation studies or conference abstracts that did not reach publication. This is not easily solved, as one cannot know how many studies have gone unreported.

This file drawer problem (characterized by negative or non-significant results being tucked away in a cabinet), can result in a biased distribution of effect sizes thus creating a serious base rate fallacy, in which the significance of the published studies is overestimated, as other studies were either not submitted for publication or were rejected. This should be seriously considered when interpreting the outcomes of a meta-analysis.

The distribution of effect sizes can be visualized with a funnel plot which (in its most common version) is a scatter plot of standard error versus the effect size. It makes use of the fact that the smaller studies (thus larger standard errors) have more scatter of the magnitude of effect (being less precise) while the larger studies have less scatter and form the tip of the funnel. If many negative studies were not published, the remaining positive studies give rise to a funnel plot in which the base is skewed to one side (asymmetry of the funnel plot). In contrast, when there is no publication bias, the effect of the smaller studies has no reason to be skewed to one side and so a symmetric funnel plot results. This also means that if no publication bias is present, there would be no relationship between standard error and effect size. A negative or positive relation between standard error and effect size would imply that smaller studies that found effects in one direction only were more likely to be published and/or to be submitted for publication.

Apart from the visual funnel plot, statistical methods for detecting publication bias have also been proposed. These are controversial because they typically have low power for detection of bias, but may also produce false positives under some circumstances. For instance, small study effects (biased smaller studies), wherein methodological differences between smaller and larger studies exist, may cause asymmetry in effect sizes that resembles publication bias. However, small study effects may be just as problematic for the interpretation of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources of bias.

A Tandem Method for analyzing publication bias has been suggested for cutting down false positive error problems. This Tandem method consists of three stages. Firstly, one calculates Orwin's fail-safe N, to check how many studies should be added in order to reduce the test statistic to a trivial size. If this number of studies is larger than the number of studies used in the meta-analysis, it is a sign that there is no publication bias, as in that case, one needs a lot of studies to reduce the effect size. Secondly, one can do an Egger's regression test, which tests whether the funnel plot is symmetrical. As mentioned before: a symmetrical funnel plot is a sign that there is no publication bias, as the effect size and sample size are not dependent. Thirdly, one can do the trim-and-fill method, which imputes data if the funnel plot is asymmetrical.
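The first two stages of the Tandem method lend themselves to short computations. The sketch below implements Orwin's fail-safe N and the intercept of Egger's regression by ordinary least squares; a full analysis would also test the intercept against zero, and the trim-and-fill stage is omitted here. Function names are illustrative.

```python
def orwin_failsafe_n(mean_effect, k, trivial_effect):
    """Orwin's fail-safe N: how many null-result studies would be
    needed to dilute the mean effect of k studies down to a chosen
    trivial threshold."""
    return k * (mean_effect - trivial_effect) / trivial_effect

def egger_intercept(effects, standard_errors):
    """Intercept of Egger's regression: standardized effect
    (effect / SE) regressed on precision (1 / SE).  An intercept far
    from zero suggests funnel plot asymmetry."""
    y = [e / s for e, s in zip(effects, standard_errors)]
    x = [1.0 / s for s in standard_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx
```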

The problem of publication bias is not trivial as it is suggested that 25% of meta-analyses in the psychological sciences may have suffered from publication bias. However, low power of existing tests and problems with the visual appearance of the funnel plot remain an issue, and estimates of publication bias may remain lower than what truly exists.

Most discussions of publication bias focus on journal practices favoring publication of statistically significant findings. However, questionable research practices, such as reworking statistical models until significance is achieved, may also favor statistically significant findings in support of researchers' hypotheses.

Problems related to studies not reporting non-statistically significant effects

Studies often do not report the effects when they do not reach statistical significance. For example, they may simply say that the groups did not show statistically significant differences, without reporting any other information (e.g. a statistic or p-value). Exclusion of these studies would lead to a situation similar to publication bias, but their inclusion (assuming null effects) would also bias the meta-analysis. MetaNSUE, a method created by Joaquim Radua, has been shown to allow researchers to include these studies without bias.

Problems related to the statistical approach

Other weaknesses are that it has not been determined whether the statistically most accurate method for combining results is the fixed, IVhet, random or quality effect model, though criticism of the random effects model is mounting because of the perception that the new random effects (used in meta-analysis) are essentially formal devices to facilitate smoothing or shrinkage, and prediction may be impossible or ill-advised. The main problem with the random effects approach is that it uses the classic statistical thought of generating a "compromise estimator" that makes the weights close to the naturally weighted estimator if heterogeneity across studies is large but close to the inverse variance weighted estimator if the between-study heterogeneity is small. However, what has been ignored is the distinction between the model we choose to analyze a given dataset and the mechanism by which the data came into being. A random effect can be present in either of these roles, but the two roles are quite distinct. There is no reason to think the analysis model and data-generation mechanism (model) are similar in form, but many sub-fields of statistics have developed the habit of assuming, for theory and simulations, that the data-generation mechanism (model) is identical to the analysis model we choose (or would like others to choose). As a hypothesized mechanism for producing the data, the random effect model for meta-analysis is silly, and it is more appropriate to think of this model as a superficial description and something we choose as an analytical tool. But this choice for meta-analysis may not work, because the study effects are a fixed feature of the respective meta-analysis and the probability distribution is only a descriptive tool.

Problems arising from agenda-driven bias

The most severe fault in meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social, or political agenda such as the passage or defeat of legislation. People with these types of agendas may be more likely to abuse meta-analysis due to personal bias. For example, researchers favorable to the author's agenda are likely to have their studies cherry-picked while those not favorable will be ignored or labeled as "not credible". In addition, the favored authors may themselves be biased or paid to produce results that support their overall political, social, or economic goals in ways such as selecting small favorable data sets and not incorporating larger unfavorable data sets. The influence of such biases on the results of a meta-analysis is possible because the methodology of meta-analysis is highly malleable.

A 2011 study done to disclose possible conflicts of interests in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and three from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed a total of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219 (69%) receiving funding from industry (i.e. one or more authors having financial ties to the pharmaceutical industry). Of the 509 RCTs, 132 reported author conflict of interest disclosures, with 91 studies (69%) disclosing one or more authors having financial ties to industry. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised."

For example, in 1998, a US federal judge found that the United States Environmental Protection Agency had abused the meta-analysis process to produce a study claiming cancer risks to non-smokers from environmental tobacco smoke (ETS) with the intent to influence policy makers to pass smoke-free–workplace laws. The judge found that:

EPA's study selection is disturbing. First, there is evidence in the record supporting the accusation that EPA "cherry picked" its data. Without criteria for pooling studies into a meta-analysis, the court cannot determine whether the exclusion of studies likely to disprove EPA's a priori hypothesis was coincidence or intentional. Second, EPA's excluding nearly half of the available studies directly conflicts with EPA's purported purpose for analyzing the epidemiological studies and conflicts with EPA's Risk Assessment Guidelines. See ETS Risk Assessment at 4-29 ("These data should also be examined in the interest of weighing all the available evidence, as recommended by EPA's carcinogen risk assessment guidelines (U.S. EPA, 1986a) (emphasis added)). Third, EPA's selective use of data conflicts with the Radon Research Act. The Act states EPA's program shall "gather data and information on all aspects of indoor air quality" (Radon Research Act § 403(a)(1)) (emphasis added).

As a result of the abuse, the court vacated Chapters 1–6 of and the Appendices to EPA's "Respiratory Health Effects of Passive Smoking: Lung Cancer and other Disorders".

Comparability and validity of included studies

Meta-analysis may often not be a substitute for an adequately powered primary study.

Heterogeneity of methods used may lead to faulty conclusions. For instance, differences in the forms of an intervention or in the cohorts that are thought to be minor or are unknown to the scientists could lead to substantially different results, including results that distort the meta-analysis's findings or are not adequately considered in its data. Conversely, results from meta-analyses may make certain hypotheses or interventions seem nonviable and preempt further research or approvals, even though certain modifications – such as intermittent administration, personalized criteria and combination measures – lead to substantially different results, including cases where such modifications have been successfully identified and applied in small-scale studies that were considered in the meta-analysis. Standardization, reproduction of experiments, open data and open protocols may often not mitigate such problems, for instance because relevant factors and criteria could be unknown or go unrecorded.

There is a debate about the appropriate balance between testing with as few animals or humans as possible and the need to obtain robust, reliable findings. It has been argued that unreliable research is inefficient and wasteful and that studies are not just wasteful when they stop too late but also when they stop too early. In large clinical trials, planned, sequential analyses are sometimes used if there is considerable expense or potential harm associated with testing participants. In applied behavioural science, "megastudies" have been proposed to investigate the efficacy of many different interventions designed in an interdisciplinary manner by separate teams. One such study used a fitness chain to recruit a large number of participants. It has been suggested that behavioural interventions are often hard to compare [in meta-analyses and reviews], as "different scientists test different intervention ideas in different samples using different outcomes over different time intervals", causing a lack of comparability of such individual investigations which limits "their potential to inform policy".

Weak inclusion standards lead to misleading conclusions

Meta-analyses in education are often not restrictive enough with regard to the methodological quality of the studies they include. For example, studies that include small samples or researcher-made measures lead to inflated effect size estimates. This problem also troubles meta-analyses of clinical trials, where the use of different quality assessment tools (QATs) leads to including different studies and obtaining conflicting estimates of average treatment effects.

Applications in modern science

Modern statistical meta-analysis does more than just combine the effect sizes of a set of studies using a weighted average. It can test if the outcomes of studies show more variation than the variation that is expected because of the sampling of different numbers of research participants. Additionally, study characteristics such as measurement instrument used, population sampled, or aspects of the studies' design can be coded and used to reduce variance of the estimator (see statistical models above). Thus some methodological weaknesses in studies can be corrected statistically. Other uses of meta-analytic methods include the development and validation of clinical prediction models, where meta-analysis may be used to combine individual participant data from different research centers and to assess the model's generalisability, or even to aggregate existing prediction models.
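The test mentioned above – whether study outcomes vary more than sampling alone would predict – is commonly done with Cochran's Q statistic and the derived I² index. The sketch below is a minimal illustration in Python; the effect sizes and within-study variances are hypothetical values, not data from any real study.

```python
# Cochran's Q test for between-study heterogeneity (minimal sketch).
# Q sums the inverse-variance-weighted squared deviations of each study's
# effect from the pooled effect; under homogeneity, Q ~ chi-squared(k - 1).

def cochrans_q(effects, variances):
    """Return (Q, I_squared) for per-study effect sizes and variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I^2: share of total variation attributable to heterogeneity, in percent
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical example: five studies with broadly similar effects
effects = [0.32, 0.28, 0.35, 0.30, 0.41]
variances = [0.01, 0.02, 0.015, 0.01, 0.03]
q, i2 = cochrans_q(effects, variances)
```

When Q is well below its degrees of freedom, as in this example, I² is truncated to zero and the studies are treated as statistically homogeneous.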

Meta-analysis can be done with single-subject designs as well as group research designs. This is important because much research has been done with single-subject research designs. Considerable dispute exists over the most appropriate meta-analytic technique for single-subject research.

Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed "meta-analytic thinking". The results of a meta-analysis are often shown in a forest plot.

Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel–Haenszel method and the Peto method.
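The inverse variance method described above reduces to a short computation: each study's weight is the reciprocal of its variance, so the pooled estimate is a weighted mean. A minimal sketch, using hypothetical effect sizes and variances:

```python
# Inverse-variance (fixed-effect) pooling: weight each study by 1/variance,
# so larger, more precise studies contribute more to the pooled effect.

def inverse_variance_pool(effects, variances):
    """Return (pooled_effect, pooled_variance) for per-study estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * y for w, y in zip(weights, effects)) / total
    return pooled, 1.0 / total  # variance of the pooled estimate

effects = [0.5, 0.3, 0.4]       # hypothetical per-study effect estimates
variances = [0.04, 0.01, 0.02]  # smaller variance -> larger weight
pooled, pooled_var = inverse_variance_pool(effects, variances)
```

Here the middle study (variance 0.01) receives weight 100 against 25 and 50 for the others, pulling the pooled effect toward 0.3; the pooled variance, 1/175, is smaller than any single study's variance, reflecting the gain in precision from combining studies.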

Seed-based d mapping (formerly signed differential mapping, SDM) is a statistical technique for meta-analyzing studies on differences in brain activity or structure that use neuroimaging techniques such as fMRI, VBM or PET.

Different high-throughput techniques such as microarrays have been used to understand gene expression. MicroRNA expression profiles have been used to identify differentially expressed microRNAs in particular cell or tissue types or disease conditions, or to check the effect of a treatment. Meta-analyses of such expression profiles have been performed to derive novel conclusions and to validate known findings.
