
Tuesday, November 8, 2022

Ancient Greek medicine

From Wikipedia, the free encyclopedia
 
Physician treating a patient (Attic red-figure aryballos, 480–470 BC)

Ancient Greek medicine was a compilation of theories and practices that were constantly expanding through new ideologies and trials. Many components were considered in ancient Greek medicine, intertwining the spiritual with the physical. Specifically, the ancient Greeks believed health was affected by the humors, geographic location, social class, diet, trauma, beliefs, and mindset. Early on, the ancient Greeks believed that illnesses were "divine punishments" and that healing was a "gift from the Gods". As trials continued wherein theories were tested against symptoms and results, the purely spiritual beliefs regarding "punishments" and "gifts" were replaced with a foundation based in the physical, i.e., cause and effect.

Humorism (or the four humors) refers to blood, phlegm, yellow bile and black bile. Each of the four humors was linked to an organ, temper, season and element. It was also theorized that sex played a role in medicine because some diseases and treatments were different for females than for males. Moreover, geographic location and social class affected the living conditions of the people and might subject them to different environmental issues such as mosquitoes, rats, and availability of clean drinking water. Diet was thought to be an issue as well and might be affected by a lack of access to adequate nourishment. Trauma, such as that suffered by gladiators, from dog bites or other injuries, played a role in theories relating to understanding anatomy and infections. Additionally, there was significant focus on the beliefs and mindset of the patient in the diagnosis and treatment theories. It was recognized that the mind played a role in healing, or that it might also be the sole basis for the illness.

Ancient Greek medicine began to revolve around the theory of humors. The humoral theory states that good health comes from a perfect balance of the four humors: blood, phlegm, yellow bile, and black bile. Consequently, poor health resulted from an improper balance of the four humors. Hippocrates, known as the "Father of Modern Medicine", established a medical school at Cos and is the most important figure in ancient Greek medicine. Hippocrates and his students documented numerous illnesses in the Hippocratic Corpus and developed the Hippocratic Oath for physicians, which is still in use today. He and his students also created medical terminology that is part of our vocabulary today, including words such as acute, chronic, epidemic, exacerbation, and relapse. The contributions to ancient Greek medicine of Hippocrates, Socrates and others had a lasting influence on Islamic medicine and medieval European medicine until many of their findings eventually became obsolete in the 14th century.

The earliest known Greek medical school opened in Cnidus in 700 BC. Alcmaeon, author of the first anatomical compilation, worked at this school, and it was here that the practice of observing patients was established. Despite their known respect for ancient Egyptian medicine, attempts to discern any particular influence on Greek practice at this early time have not been dramatically successful because of the lack of sources and the challenge of understanding ancient medical terminology. It is clear, however, that the Greeks imported Egyptian substances into their pharmacopoeia, and the influence became more pronounced after the establishment of a school of Greek medicine in Alexandria.

Asclepieia

View of the Asklepieion of Kos, the best preserved instance of an Asclepieion.

Asclepius was espoused as the first physician, and myth placed him as the son of Apollo. Temples dedicated to the healer-god Asclepius, known as Asclepieia (Greek: Ἀσκληπιεῖα; sing. Ἀσκληπιεῖον Asclepieion), functioned as centers of medical advice, prognosis, and healing. At these shrines, patients would enter a dream-like state of induced sleep known as "enkoimesis" (Greek: ἐγκοίμησις), not unlike anesthesia, in which they either received guidance from the deity in a dream or were cured by surgery. Asclepieia provided carefully controlled spaces conducive to healing and fulfilled several of the requirements of institutions created for healing. The Temple of Asclepius in Pergamum had a spring that flowed down into an underground room in the temple; people would come to drink and bathe in its waters because they were believed to have medicinal properties. Mud baths and hot teas such as chamomile were used to calm patients, and peppermint tea was used to soothe their headaches, a home remedy still in use today. Patients were also encouraged to sleep in the facilities; their dreams were interpreted by the doctors and their symptoms were then reviewed. Dogs would occasionally be brought in to lick open wounds to assist in their healing. In the Asclepieion of Epidaurus, three large marble boards dated to 350 BC preserve the names, case histories, complaints, and cures of about 70 patients who came to the temple with a problem and shed it there. Some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place, but with the patient in a state of enkoimesis induced with the help of soporific substances such as opium.

The Rod of Asclepius remains a universal symbol for medicine to this day. However, it is frequently confused with the caduceus, the staff wielded by the god Hermes. The Rod of Asclepius features a single snake and no wings, whereas the caduceus is represented by two snakes and a pair of wings depicting the swiftness of Hermes.

Ancient Greek physicians

Ancient Greek physicians did not regard disease as being of supernatural origin, i.e., brought about from the dissatisfaction of the gods or from demonic possession. 'The Greeks developed a system of medicine based on an empirico-rational approach, such that they relied ever more on naturalistic observation, enhanced by practical trial and error experience, abandoning magical and religious justifications of human bodily dysfunction.' However, in some instances, the fault of the ailment was still placed on the patient and the role of the physician was to conciliate with the gods or exorcise the demon with prayers, spells, and sacrifices.

The Hippocratic Corpus and Humorism

Surgical tools, 5th century BC. Reconstructions based on descriptions within the Hippocratic corpus. Thessaloniki Technology Museum

The Hippocratic Corpus opposes ancient beliefs, offering biologically based approaches to disease instead of magical intervention. The Hippocratic Corpus is a collection of about seventy early medical works from ancient Greece that are associated with Hippocrates and his students. Although once thought to have been written by Hippocrates himself, many scholars today believe that these texts were written by a series of authors over several decades. The Corpus contains the treatise On the Sacred Disease, which argues that if all diseases were derived from supernatural sources, biological medicines would not work. The establishment of the humoral theory of medicine focused on the balance between blood, yellow and black bile, and phlegm in the human body. Being too hot, cold, dry or wet disturbed the balance between the humors, resulting in disease and illness. Illness was no longer blamed on punishment by gods or demons, but was attributed to bad air (miasma theory). Physicians who practiced humoral medicine focused on reestablishing balance between the humors. The shift from supernatural disease to biological disease did not completely abolish Greek religion, but offered a new method of how physicians interacted with patients.

Ancient Greek physicians who followed humorism emphasized the importance of environment. Physicians believed patients would be subjected to various diseases based on the environment in which they resided. The local water supply and the direction the wind blew influenced the health of the local populace. Patients played an important role in their treatment. As stated in the treatise "Aphorisms", "[i]t is not enough for the physician to do what is necessary, but the patient and the attendant must do their part as well". Patient compliance was rooted in their respect for the physician. According to the treatise "Prognostic", a physician was able to increase their reputation and respect through "prognosis", knowing the outcome of the disease. Physicians had an active role in the lives of patients, taking into consideration their residence. Distinguishing between fatal and recoverable diseases was important for earning patient trust and respect, positively influencing patient compliance.

Asclepius (center) arrives in Kos and is greeted by Hippocrates (left) and a citizen (right), mosaic from the Asclepieion of Kos, 2nd-3rd century AD

With the growth of patient compliance in Greek medicine, consent became an important factor in the doctor-patient relationship. Presented with all the information concerning the patient's health, the patient makes the decision to accept treatment. Physician and patient responsibility is mentioned in the treatise "Epidemics", where it states, "there are three factors in the practice of medicine: the disease, the patient and the physician. The physician is the servant of science, and the patient must do what he can to fight the disease with the assistance of the physician".

Aristotle's influence on Greek perception

Ancient Greek philosopher Aristotle was the most influential scholar of the living world from antiquity. Aristotle's biological writings demonstrate great concern for empiricism, biological causation, and the diversity of life. Aristotle did not experiment, however, holding that items display their real natures in their own environments, rather than controlled artificial ones. While in modern-day physics and chemistry this assumption has been found unhelpful, in zoology and ethology it remains the dominant practice, and Aristotle's work "retains real interest". He made countless observations of nature, especially the habits and attributes of plants and animals in the world around him, which he devoted considerable attention to categorizing. In all, Aristotle classified 540 animal species, and dissected at least 50.

Aristotle believed that formal causes guided all natural processes. Such a teleological view gave Aristotle cause to justify his observed data as an expression of formal design; for example suggesting that Nature, giving no animal both horns and tusks, was staving off vanity, and generally giving creatures faculties only to such a degree as they are necessary. In a similar fashion, Aristotle believed that creatures were arranged in a graded scale of perfection rising from plants on up to man—the scala naturae or Great Chain of Being.

He held that the level of a creature's perfection was reflected in its form, but not foreordained by that form. Yet another aspect of his biology divided souls into three groups: a vegetative soul, responsible for reproduction and growth; a sensitive soul, responsible for mobility and sensation; and a rational soul, capable of thought and reflection. He attributed only the first to plants, the first two to animals, and all three to humans. Aristotle, in contrast to earlier philosophers, and like the Egyptians, placed the rational soul in the heart, rather than the brain. Notable is Aristotle's division of sensation and thought, which generally went against previous philosophers, with the exception of Alcmaeon.

Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany—the History of Plants—which survived as the most important contribution of antiquity to botany, even into the Middle Ages. Many of Theophrastus' names survive into modern times, such as carpos for fruit, and pericarpium for seed vessel. Rather than focus on formal causes, as Aristotle did, Theophrastus suggested a mechanistic scheme, drawing analogies between natural and artificial processes, and relying on Aristotle's concept of the efficient cause. Theophrastus also recognized the role of sex in the reproduction of some higher plants, though this last discovery was lost in later ages. The biological/teleological ideas of Aristotle and Theophrastus, as well as their emphasis on a series of axioms rather than on empirical observation, cannot be easily separated from their consequent impact on Western medicine.

Herophilus, Erasistratus and ancient Greek anatomy

Frontispiece to a 1644 version of the expanded and illustrated edition of Theophrastus's Historia Plantarum (c. 1200), which was originally written around 200 BC

Nomenclature, methods and applications for the study of anatomy all date back to the Greeks. After Theophrastus (d. 286 BC), the extent of original work produced diminished. Though interest in Aristotle's ideas survived, they were generally taken unquestioningly. It is not until the age of Alexandria under the Ptolemies that advances in biology can again be found. The first medical teacher at Alexandria was Herophilus of Chalcedon (the father of anatomy), who differed from Aristotle in placing intelligence in the brain, and connected the nervous system to motion and sensation. Herophilus also distinguished between veins and arteries, noting that the latter had a pulse while the former did not. He did this using an experiment involving cutting certain veins and arteries in a pig's neck until the squealing stopped. In the same vein, he developed a diagnostic technique which relied upon distinguishing different types of pulse. He and his contemporary, Erasistratus of Chios, researched the role of veins and nerves, mapping their courses across the body.

Erasistratus connected the increased complexity of the surface of the human brain compared to other animals to its superior intelligence. He sometimes employed experiments to further his research, at one time repeatedly weighing a caged bird and noting its weight loss between feeding times. Following his teacher's researches into pneumatics, he claimed that the human system of blood vessels was controlled by vacuums, drawing blood across the body. In Erasistratus' physiology, air enters the body, is then drawn by the lungs into the heart, where it is transformed into vital spirit, and is then pumped by the arteries throughout the body. Some of this vital spirit reaches the brain, where it is transformed into animal spirit, which is then distributed by the nerves. Herophilus and Erasistratus performed their experiments upon criminals given to them by their Ptolemaic kings. They dissected these criminals alive, and "while they were still breathing they observed parts which nature had formerly concealed, and examined their position, colour, shape, size, arrangement, hardness, softness, smoothness, connection."

Though a few ancient atomists such as Lucretius challenged the teleological viewpoint of Aristotelian ideas about life, teleology (and after the rise of Christianity, natural theology) would remain central to biological thought essentially until the 18th and 19th centuries. In the words of Ernst Mayr, "Nothing of any real consequence in biology after Lucretius and Galen until the Renaissance." Aristotle's ideas of natural history and medicine survived, but they were generally taken unquestioningly.

Galen

Aelius Galenus was a prominent Greek physician, surgeon and philosopher in the Roman Empire. Arguably the most accomplished of all medical researchers of antiquity, Galen influenced the development of various scientific disciplines, including anatomy, physiology, pathology, pharmacology, and neurology, as well as philosophy and logic.

The son of Aelius Nicon, a wealthy architect with scholarly interests, Galen received a comprehensive education that prepared him for a successful career as a physician and philosopher. Born in Pergamon (present-day Bergama, Turkey), Galen traveled extensively, exposing himself to a wide variety of medical theories and discoveries before settling in Rome, where he served prominent members of Roman society and eventually was given the position of personal physician to several emperors.

Galen's understanding of anatomy and medicine was principally influenced by the then-current theory of humorism, as advanced by ancient Greek physicians such as Hippocrates. His theories dominated and influenced Western medical science for more than 1,300 years. His anatomical reports, based mainly on dissection of monkeys, especially the Barbary macaque, and pigs, remained uncontested until 1543, when printed descriptions and illustrations of human dissections were published in the seminal work De humani corporis fabrica by Andreas Vesalius, in which Galen's physiological theory was accommodated to these new observations. Galen's theory of the physiology of the circulatory system endured until 1628, when William Harvey published his treatise entitled De motu cordis, in which he established that blood circulates, with the heart acting as a pump. Medical students continued to study Galen's writings until well into the 19th century. Galen conducted many nerve ligation experiments that supported the theory, which is still accepted today, that the brain controls all the motions of the muscles by means of the cranial and peripheral nervous systems.

Galen saw himself as both a physician and a philosopher, as he wrote in his treatise entitled That the Best Physician is also a Philosopher. Galen was very interested in the debate between the rationalist and empiricist medical sects, and his use of direct observation, dissection and vivisection represents a complex middle ground between the extremes of those two viewpoints.

Dioscorides

The first-century AD Greek physician, pharmacologist, botanist, and Roman army surgeon Pedanius Dioscorides authored an encyclopedia of medicinal substances commonly known as De Materia Medica. This work did not delve into medical theory or explanation of pathogenesis, but described the uses and actions of some 600 substances, based on empirical observation. Unlike other works of Classical antiquity, Dioscorides' manuscript was never out of circulation; it formed the basis for the Western pharmacopeia through the 19th century, a true testament to the efficacy of the medicines described; moreover, the influence of the work on European herbal medicine eclipsed that of the Hippocratic Corpus.

Herodicus

Herodicus (Greek: Ἡρóδιĸος) was a Greek physician of the 5th century BC, who is considered to be the father of sports medicine. The first use of therapeutic exercise for the treatment of disease and maintenance of health is credited to him, and he is believed to have been one of the tutors of Hippocrates. He also recommended good diet and massage using beneficial herbs and oils, and his theories are considered the foundation of sports medicine. He was specific in the manner that a massage should be given. He recommended that rubbing be initially slow and gentle, then subsequently faster, with the application of more pressure, which was to be followed by more gentle friction.

Historical legacy

Through long contact with Greek culture, and their eventual conquest of Greece, the Romans adopted a favorable view of Hippocratic medicine.

This acceptance led to the spread of Greek medical theories throughout the Roman Empire, and thus a large portion of the West. The most influential Roman scholar to continue and expand on the Hippocratic tradition was Galen (d. c. 207). Study of Hippocratic and Galenic texts, however, all but disappeared in the Latin West in the Early Middle Ages, following the collapse of the Western Empire, although the Hippocratic-Galenic tradition of Greek medicine continued to be studied and practiced in the Eastern Roman Empire (Byzantium). After AD 750, Arab, Persian and Andalusi scholars translated Galen's and Dioscorides' works in particular. Thereafter the Hippocratic-Galenic medical tradition was assimilated and eventually expanded, with the most influential Muslim doctor-scholar being Avicenna. Beginning in the late eleventh century, the Hippocratic-Galenic tradition returned to the Latin West with a series of translations of the Classical texts, mainly from Arabic translations but occasionally from the original Greek. In the Renaissance, more translations of Galen and Hippocrates directly from the Greek were made from newly available Byzantine manuscripts.

Galen's influence was so great that even after Western Europeans started making dissections in the thirteenth century, scholars often assimilated findings into the Galenic model that otherwise might have thrown Galen's accuracy into doubt. Over time, however, Classical medical theory came to be superseded by increasing emphasis on scientific experimental methods in the 16th and 17th centuries. Nevertheless, the Hippocratic-Galenic practice of bloodletting was practiced into the 19th century, despite its empirical ineffectiveness and riskiness.

Educational software

From Wikipedia, the free encyclopedia

Educational software is a term used for any computer software which is made for an educational purpose. It ranges from language learning software to classroom management software to reference software. The purpose of all this software is to make some part of education more effective and efficient.

History

1940s–1970s

The use of computer hardware and software in education and training dates to the early 1940s, when American researchers developed flight simulators which used analog computers to generate simulated onboard instrument data. One such system was the Type 19 synthetic radar trainer, built in 1943. From these early attempts in the WWII era through the mid-1970s, educational software was directly tied to the hardware on which it ran. Pioneering educational computer systems in this era included the PLATO system (1960), developed at the University of Illinois, and TICCIT (1969). In 1963, IBM established a partnership with Stanford University's Institute for Mathematical Studies in the Social Sciences (IMSSS), directed by Patrick Suppes, to develop the first comprehensive CAI elementary school curriculum, which was implemented on a large scale in schools in both California and Mississippi. In 1967 Computer Curriculum Corporation (CCC, now Pearson Education Technologies) was formed to market to schools the materials developed through the IBM partnership. Early terminals that ran educational systems cost over $10,000, putting them out of reach of most institutions. Some programming languages from this period, particularly BASIC (1963) and LOGO (1967), can also be considered educational, as they were specifically targeted to students and novice computer users. The PLATO IV system, released in 1972, supported many features which later became standard in educational software running on home computers. Its features included bitmap graphics, primitive sound generation, and support for non-keyboard input devices, including the touchscreen.

1970s–1980s

The arrival of the personal computer, with the Altair 8800 in 1975, changed the field of software in general, with specific implications for educational software. Whereas users prior to 1975 were dependent upon university- or government-owned mainframe computers with timesharing, users after this shift could create and use software for computers in homes and schools, computers available for less than $2000. By the early 1980s, the availability of personal computers including the Apple II (1977), Commodore PET (1977), Commodore VIC-20 (1980), and Commodore 64 (1982) allowed for the creation of companies and nonprofits which specialized in educational software. Brøderbund and The Learning Company are key companies from this period, and MECC, the Minnesota Educational Computing Consortium, a key non-profit software developer. These and other companies designed a range of titles for personal computers, with the bulk of the software initially developed for the Apple II.

Categories of educational software

Courseware

"Courseware" is a term that combines the words 'course' with 'software'. It was originally used to describe additional educational material intended as kits for teachers or trainers or as tutorials for students, usually packaged for use with a computer. The term's meaning and usage has expanded and can refer to the entire course and any additional material when used in reference an online or 'computer formatted' classroom. Many companies are using the term to describe the entire "package" consisting of one 'class' or 'course' bundled together with the various lessons, tests, and other material needed. The courseware itself can be in different formats: some are only available online, such as Web pages, while others can be downloaded as PDF files or other types of document. Many forms of educational technology are now covered by the term courseware. Most leading educational companies solicit or include courseware with their training packages.

Classroom aids

Some educational software is designed for use in school classrooms. Typically such software may be projected onto a large whiteboard at the front of the class and/or run simultaneously on a network of desktop computers in a classroom. The most notable example is the SMART Board, which uses SMART Notebook software to interact with the board and allows the use of pens to draw on it digitally. This type of software is often called classroom management software. While teachers often choose to use educational software from other categories in their IT suites (e.g. reference works, children's software), a whole category of educational software has grown up specifically intended to assist classroom teaching. Branding has been less strong in this category than in those oriented towards home users. Software titles are often very specialized and produced by various manufacturers, including many established educational book publishers.

Assessment software

Moodle is a very popular assessment website used by teachers to send assignments and grade students' work.

With the impact of environmental damage and the need for institutions to become "paperless", more educational institutions are seeking alternative ways of assessment and testing, which has traditionally used up vast amounts of paper. Assessment software refers to software with a primary purpose of assessing and testing students in a virtual environment. Assessment software allows students to complete tests and examinations using a computer, usually networked. The software then scores each test transcript and outputs results for each student. Assessment software is available in various delivery methods, the most popular being self-hosted software, online software and hand-held voting systems. Proprietary software and open-source software systems are available. While technically falling into the courseware category (see above), Skill evaluation lab is an example of computer-based assessment software with PPA-2 (Plan, Prove, Assess) methodology to create and conduct computer-based online examinations. Moodle is an example of open-source software with an assessment component that is gaining popularity. Other popular international assessment systems include Google Classroom, Blackboard Learn, and EvaluNet XT.

Reference software

Many publishers of print dictionaries and encyclopedias have been involved in the production of educational reference software since the mid-1990s. They were joined in the reference software market by both startup companies and established software publishers, most notably Microsoft.

The first commercial reference software products were reformulations of existing content into CD-ROM editions, often supplemented with new multimedia content, including compressed video and sound. More recent products made use of internet technologies, first to supplement CD-ROM products and then, more recently, to replace them entirely.

Wikipedia and its offshoots (such as Wiktionary) marked a new departure in educational reference software. Previously, encyclopedias and dictionaries had compiled their contents on the basis of invited and closed teams of specialists. The wiki concept has allowed for the development of collaborative reference works through open cooperation incorporating experts and non-experts.

Custom platforms

Some manufacturers regarded normal personal computers as an inappropriate platform for learning software for younger children and produced custom child-friendly hardware instead. The hardware and software are generally combined into a single product, such as a child laptop-lookalike. Laptop keyboards for younger children follow an alphabetical order, while those for older children use the QWERTY layout. The best-known examples are LeapFrog products. These include imaginatively designed hand-held consoles with a variety of pluggable educational game cartridges and book-like electronic devices into which a variety of electronic books can be loaded. These products are more portable than laptop computers, but have a much more limited range of purposes, concentrating on literacy.

While mainstream operating systems are designed for general usage, and are more or less customized for education only by the application sets added to them, a variety of software manufacturers, especially Linux distributions, have sought to provide integrated platforms specifically for education.

Corporate training and tertiary education

Earlier educational software for the important corporate and tertiary education markets was designed to run on a single desktop computer (or an equivalent user device). In the years immediately following 2000, planners decided to switch to server-based applications with a high degree of standardization. This means that educational software runs primarily on servers which may be hundreds or thousands of miles from the actual user. The user only receives tiny pieces of a learning module or test, fed over the internet one by one. The server software decides on what learning material to distribute, collects results and displays progress to teaching staff. Another way of expressing this change is to say that educational software morphed into an online educational service. US Governmental endorsements and approval systems ensured the rapid switch to the new way of managing and distributing learning material. McDonald's also experimented with this via the Nintendo DS software eCrew Development Program.


Specific educational purposes

Educational software for learning Standard Chinese using Pinyin.

There are highly specific niche markets for educational software, including:

  • teacher tools and classroom management software (remote control and monitoring software, file transfer software, document camera and presenter, free tools,...)

  • Driving test software
  • Interactive geometry software
  • Language learning software
  • Mind Mapping Software which provides a focal point for discussion, helps make classes more interactive, and assists students with studying, essays and projects.
  • Designing and printing of card models for use in education - e.g. Designer Castles for BBC Micro and Acorn Archimedes platforms
  • Notetaking (Comparison of notetaking software)
  • Software for enabling simulated dissection of human and animal bodies (used in medical and veterinary college courses)
  • Spelling tutor software
  • Typing tutors
  • Reading Instruction
  • Medical and healthcare educational software

Video games and gamification

Video games can be used to teach a user technology literacy or more about a subject. Some operating systems and mobile phones have these features. A notable example is Microsoft Solitaire, which was developed to familiarize users with graphical user interfaces, especially the mouse and the drag-and-drop technique. Mavis Beacon Teaches Typing is a widely known program with built-in mini-games to keep the user entertained while improving their typing skills.

Gamification is the use of game design elements in nongame contexts and has been shown to be effective in motivating behavior change, for example by treating game elements as "motivational affordances" and formalizing the relationship between these elements and motivation. Classcraft is a software tool used by teachers that has game elements alongside an educational goal. Tovertafel is a games console designed for remedial education and counteracting the effects of dementia.

Effects and use of educational software

Tutor-based software

Tutor-based education software is defined as software that mimics the one-on-one teacher-student dynamic of tutoring, with software in place of a teacher. Research was conducted to see whether this type of software would be effective in improving students' understanding of material. It concluded that the software had a positive impact, reducing the amount of time students needed to study and improving their relative gain in understanding.

Helping those with disabilities

A study was conducted to see the effects of education software on children with mild disabilities. The results showed that the software had a positive impact, assisting in teaching these children social skills through team-based learning, discussion, videos and games.

Education software evaluation

There is a large market of educational software in use today. Because there is no current standard, a team set out to develop a system by which educational software could be evaluated, called the Construction of the Comprehensive Evaluation of Electronic Learning Tools and Educational Software (CEELTES). The software to be evaluated is graded on a point scale in four categories: technical, technological and user attributes; criteria evaluating the information, content and operation of the software; criteria evaluating the information in terms of educational use, learning and recognition; and criteria evaluating the psychological and pedagogical use of the software.

Use in higher education

In university-level computer science courses, learning logic is an essential part of the curriculum. There is a proposal to use two logic education tools, FOLST and LogicChess, to help university students understand first-order logic and better grasp the course material and the essentials of logical design.

Journalistic objectivity

From Wikipedia, the free encyclopedia
  
Journalistic objectivity is a considerable notion within the discussion of journalistic professionalism. Journalistic objectivity may refer to fairness, disinterestedness, factuality, and nonpartisanship, but most often encompasses all of these qualities. First evolving as a practice in the 18th century, a number of critiques and alternatives to the notion have emerged since, fuelling ongoing and dynamic discourse surrounding the ideal of objectivity in journalism.

Most newspapers and TV stations depend upon news agencies for their material, and each of the four major global agencies (Agence France-Presse (formerly the Havas agency), Associated Press, Reuters, and Agencia EFE) began with and continues to operate on a basic philosophy of providing a single objective news feed to all subscribers. That is, they do not provide separate feeds for conservative or liberal newspapers. Journalist Jonathan Fenby has explained the notion:

To achieve such wide acceptability, the agencies avoid overt partiality. The demonstrably correct information is their stock-in-trade. Traditionally, they report at a reduced level of responsibility, attributing their information to a spokesman, the press, or other sources. They avoid making judgments and steer clear of doubt and ambiguity. Though their founders did not use the word, objectivity is the philosophical basis for their enterprises – or failing that, widely acceptable neutrality.

Objectivity in journalism aims to help the audience make up their own mind about a story, providing the facts alone and then letting audiences interpret those on their own. To maintain objectivity in journalism, journalists should present the facts whether or not they like or agree with those facts. Objective reporting is meant to portray issues and events in a neutral and unbiased manner, regardless of the writer's opinion or personal beliefs.

Sociologist Michael Schudson suggests that "the belief in objectivity is a faith in 'facts,' a distrust in 'values,' and a commitment to their segregation". Objectivity also outlines an institutional role for journalists as a fourth estate, a body that exists apart from government and large interest groups.

Journalistic objectivity requires that a journalist not be on either side of an argument. The journalist must report only the facts and not a personal attitude toward the facts. While objectivity is a complex and dynamic notion that may refer to a multitude of techniques and practices, it generally refers to the idea of "three distinct, yet interrelated, concepts": truthfulness, neutrality, and detachment.

Truthfulness is a commitment to reporting only accurate and truthful information, without skewing any facts or details to improve the story or better align an issue with any certain agenda. Neutrality suggests that stories be reported in an unbiased, even-handed, and impartial manner. Under this notion, journalists are to side with none of the parties involved, and simply provide the relevant facts and information of all. The third idea, detachment, refers to the emotional approach of the journalist. Essentially, reporters should not only approach issues in an unbiased manner but also with a dispassionate and emotionless attitude. Through this strategy, stories can be presented in a rational and calm manner, letting the audience make up their minds without any influences from the media.

History

The modern notion of objectivity in journalism is largely due to the work of Walter Lippmann. Lippmann was the first to widely call for journalists to use the scientific method for gathering information. Lippmann called for journalistic objectivity after the excesses of yellow journalism. He noted that the yellows at the time had served their purpose, but that the people needed to receive the actual news, and not a "romanticized version of it".

The term objectivity was not applied to journalistic work until the 20th century, but it had fully emerged as a guiding principle by the 1890s. Michael Schudson, among a number of other communication scholars and historians, agrees that the idea of objectivity has prevailed in dominant discourse among journalists in the United States since the appearance of modern newspapers in the Jacksonian Era of the 1830s. These papers transformed the press amidst the democratization of politics, the expansion of a market economy, and the growing authority of an entrepreneurial, urban middle class. Before then, American newspapers were expected to present a partisan viewpoint, not a neutral one.

The need for objectivity first occurred to Associated Press editors who realized that partisanship would narrow their potential market. Their goal was to reach all newspapers and leave it to the individual papers to decide on what slanting and commentary were needed. Lawrence Gobright, the AP chief in Washington, explained the philosophy of objectivity to Congress in 1856:

My business is to communicate facts. My instructions do not allow me to make any comments upon the facts which I communicate. My dispatches are sent to papers of all manner of politics, and the editors say they are able to make their own comments upon the facts which are sent to them. I, therefore confine myself to what I consider legitimate news. I do not act as a politician belonging to any school, but try to be truthful and impartial. My dispatches are a merely dry matter of fact and detail.

In the first decade of the twentieth century, it was uncommon to see a sharp divide between facts and values. However, Stuart Allan (1997) suggests that, during World War I, propaganda campaigns, as well as the rise of "press agents and publicity experts", fostered growing cynicism among the public towards state institutions and "official channels of information". The elevation of objectivity thus constituted an effort to re-legitimize the news press, as well as the state in general.

Some historians, like Gerald Baldasty, have observed that objectivity went hand in hand with the need to make profits in the newspaper business by attracting advertisers. In this economic analysis, publishers did not want to offend any potential advertising clients and therefore encouraged news editors and reporters to strive to present all sides of an issue. Advertisers would remind the press that partisanship hurts circulation, and, consequently, advertising revenues—thus, objectivity was sought.

Others have proposed a political explanation for the rise of objectivity; scholars like Richard Kaplan have argued that political parties needed to lose their hold over the loyalties of voters and the institutions of government before the press could feel free to offer a nonpartisan, "impartial" account of news events. This change occurred following the critical 1896 election and the subsequent reform of the Progressive Era.

Later, during the period following World War II, the newly formalized rules and practices of objectivity led to a brief national consensus and temporary suspension of negative public opinion; however, doubts and uncertainties in "the institutions of democracy and capitalism" resurfaced in the period of civil unrest during the 1960s and 1970s, ultimately leading to the emergence of the critique of objectivity.

In conclusion, there are three key factors in the origin of objectivity. The transition from a political model of journalism to a commercial model required the production of content that could be marketed across the political and ideological spectrum. The telegraph imposed pressure on journalists to prioritize the most important facts at the beginning of the story and to adopt a simplified, homogenized and generic style that could appeal to geographically diverse audiences. In the early 20th century, journalism began to define itself as a professional occupation that required special training, unique skills and self-regulation according to ethical principles. Professionalization normalized the regime of objectivity as the foundation of good journalism, providing benefits to journalists and editors/publishers.

For most of the 19th century, publications and news were typically written by a single person, and writers could express their own perspectives and opinions. From the 1880s, however, Americans became increasingly interested in scientific theories and facts, which narrowed the ways writers could express their feelings. The use of technology led to more productivity and control, and new technology in the news process helped establish a discourse of speed that has grown stronger and more encompassing over time. The transformation of the newspaper produced a medium requiring a fairly sophisticated team of many different kinds of laborers. Journalists are now expected to possess technical skills in computer-based and new media technologies to some extent, placing new demands on them.

Criticisms

Some scholars and journalists criticize the understanding of objectivity as neutrality or nonpartisanship, arguing that it does a disservice to the public because it fails to attempt to find truth. They also argue that such objectivity is nearly impossible to apply in practice—newspapers inevitably take a point of view in deciding what stories to cover, which to feature on the front page, and what sources they quote. The media critics Edward S. Herman and Noam Chomsky have advanced a propaganda model hypothesis proposing that such a notion of objectivity results in heavily favoring government viewpoints and large corporations. Mainstream commentators accept that news value drives selection of stories, but there is some debate as to whether catering to an audience's level of interest in a story makes the selection process non-objective.

Another example of an objection to objectivity, according to communication scholar David Mindich, was the coverage that the major papers (most notably the New York Times) gave to the lynching of thousands of African Americans during the 1890s. News stories of the period described the hanging, immolation and mutilation of people by mobs with detachment and, through the regimen of objectivity, news writers often attempted to construct a "false balance" of these accounts by recounting the alleged transgressions of the victims that provoked the lynch mobs to fury. Mindich suggests that these practices of objectivity, which were allowed to "[go] basically unquestioned", may have had the effect of normalizing the practice of lynching.

In a more recent example, scholars Andrew Calcutt and Phillip Hammond (2011) note that since the 1990s, war reporting especially has increasingly come to criticize and reject the practice of objectivity. In 1998, a BBC reporter, Martin Bell, noted that he favoured a "journalism of attachment" over the previously sought-after dispassionate approach. Similarly, a CNN war correspondent from the US, Christiane Amanpour, stated that in some circumstances "neutrality can mean you are an accomplice to all sorts of evil". Each of these opinions stems from scholars' and journalists' critique of objectivity as too "heartless" or "forensic" to report the emotionally charged, human issues found in war and conflict reporting.

As discussed above, with the growth of mass media, especially from the 19th century, news advertising became the most important source of media revenue. Whole audiences needed to be engaged across communities and regions to maximize advertising revenue. This led to "[j]ournalistic [o]bjectivity as an industry standard […] a set of conventions allowing the news to be presented as all things to all people". In modern journalism, especially with the emergence of 24-hour news cycles, speed is of the essence in responding to breaking stories. It is therefore not possible for reporters to decide "from first principles" how they will report each and every story that presents itself—thus, some scholars argue that mere convention (versus a true devotion to truth-seeking) has come to govern much of journalism.

Reporters are biased toward conflict because it is more interesting than stories without conflict; we are biased toward sticking with the pack because it is safe; we are biased toward event-driven coverage because it is easier; we are biased toward existing narratives because they are safe and easy. Mostly, though, we are biased in favor of getting the story, regardless of whose ox is being gored.

— Brent Cunningham, 2003

Brent Cunningham, the managing editor of Columbia Journalism Review, argues in a 2003 article that objectivity excuses lazy reporting. He suggests that objectivity makes us passive recipients of news, rather than aggressive analyzers and critics of it. According to Cunningham, the nut of the tortured relationship with objectivity lies within a number of conflicting diktats that the press was expected to operate under: be neutral yet investigative; be disengaged yet have an impact; and be fair-minded yet have an edge. Cunningham, however, argues that reporters by and large are not ideological warriors; rather, they are imperfect people performing a difficult job that is crucial to society and, "[d]espite all our important and necessary attempts to minimize [individual's] humanity, it can't be any other way," Cunningham concludes.

The debate about objectivity has also occurred within the photojournalism field. In 2011, Italian photographer Ruben Salvadori challenged the expectation of objective truth that the general public associates with photojournalism through his project "Photojournalism Behind the Scenes". By including the traditionally invisible photographer in the frame, Salvadori sought to ignite a discussion about the ethics of the profession and indicate a need for audiences to be active viewers who understand and recognize the potential subjectivity of the photographic medium.

Another notion circulating around the critique of objectivity is proposed by scholar Judith Lichtenberg. She points to the logical inconsistency that arises when scholars or journalists criticize journalism for failing to be objective, while simultaneously proposing that there is no such thing as objectivity. Underpinning critiques of objectivity that arose in the 1970s and 1980s, this dual theory—which Lichtenberg refers to as a "compound assault on objectivity"—invalidates itself, as each element of the argument repudiates the other. Lichtenberg agrees with other scholars that view objectivity as mere conventional practice: she states that "much of what goes under the name of objectivity reflects shallow understanding of it". Thus, she suggests that these practices, rather than the overall notion of objectivity (whose primary aim, according to Lichtenberg, is only to seek and pursue truth), should really be the target of critique.

Journalism scholars and media critics have used the term view from nowhere to criticize journalists' attempt to adopt a neutral and objective point of view in reporting, as if reporting "from nobody's point of view". Jay Rosen has argued that journalists may thereby disinform their audience by creating the impression that they have an authoritative impartiality between conflicting positions on an issue. Jeremy Iggers quoted Richard S. Salant, former president of CBS News, who stated: "Our reporters do not cover stories from their point of view. They are presenting them from nobody's point of view." Iggers called Salant's assertion "plainly incoherent, as is the notion of observations untouched by interpretation". Rosen has used the term to criticize journalists who hide behind the appearance of journalistic objectivity so as to gain an unearned position of authority or trust with their audience; he advocates for transparency as a better way of legitimately earning trust. Scholars such as Rosen and Jake Lynch borrowed the term from philosopher Thomas Nagel's 1986 book The View from Nowhere, which stated, "A view or form of thought is more objective than another if it relies less on the specifics of the individual's makeup and position in the world." Many other news media commentators have also criticized the view from nowhere in journalism. Writer Elias Isquith argues in a 2014 article for Salon that "the view from nowhere not only leads to sloppy thinking but actually leaves the reader less informed than she would be had she simply read an unapologetically ideological source or even, in some cases, nothing at all". In 2019, journalist Lewis Raven Wallace published a book advocating the opposite of the view from nowhere: the view from somewhere.

Alternatives

Some argue that a more appropriate standard should be fairness and accuracy (as enshrined in the names of groups like Fairness and Accuracy in Reporting). Under this standard, taking sides on an issue would be permitted as long as the side taken was accurate and the other side was given a fair chance to respond. Many professionals believe that true objectivity in journalism is not possible and reporters must seek balance in their stories (giving all sides their respective points of view), which fosters fairness.

A good reporter who is well-steeped in his subject matter and who isn't out to prove his cleverness, but rather is sweating out a detailed understanding of a topic worth exploring, will probably develop intelligent opinions that will inform and perhaps be expressed in his journalism.

— Timothy Noah, 1999

Brent Cunningham suggests that reporters should understand their inevitable biases, so they can explore what the accepted narratives may be, and then work against these as much as possible. He points out that "[w]e need deep reporting and real understanding, but we also need reporters to acknowledge all that they don't know, and not try to mask that shortcoming behind a gloss of attitude, or drown it in a roar of oversimplified assertions".

Cunningham suggests the following to solve the apparent controversies of objectivity:

  • Journalists should acknowledge, humbly and publicly, that what they do is far more subjective and far less detached than the aura of 'objectivity' implies. He proposes that this will not end the charges of bias, but rather allow journalists to defend what they do from a more realistic and less hypocritical position.
  • Journalists should be free and encouraged to develop expertise and to use it to sort through competing claims, identifying and explaining the underlying assumptions of those claims, and making judgments about what readers and viewers need to know and understand about what is happening.

Another scholar, Faina (2012), suggests that modern journalists may function as "sensemakers" within the shifting contemporary journalistic environment.

Notable departures from objective news work also include the muckraking of Ida Tarbell and Lincoln Steffens, the New Journalism of Tom Wolfe, the underground press of the 1960s, and public journalism.

For news related to conflict, peace journalism may provide an alternative by introducing insights from social science into the journalism field, specifically through disciplines such as conflict analysis, conflict resolution, peace research and social psychology. The application of this empirical research to the reporting of conflict may thus replace the unacknowledged conventions (see above) which govern the non-scientific practices of 'objectivity' in journalism.

Crowdfunding

Recently, many scholars and journalists have become increasingly attuned to the shifts occurring within the newspaper industry, and the general upheaval of the journalistic environment, as it adjusts to the new digital era of the 21st century. In the face of this, the practice of crowdfunding is increasingly being utilized by journalists to fund independent and/or alternative projects, establishing it as another relevant alternative practice to consider in the discussion of journalistic objectivity. Crowdfunding allows journalists to pursue stories of interest to them or that otherwise may not be covered adequately for a number of reasons. Crowdfunding supports journalists by funding necessary components like reporting equipment, computers, travel expenses if necessary, and overhead costs like office space or paying other staff on their team. A key component of crowdfunding and a significant motivator for journalists to use it is the lack of corporate backing. This means that the journalist has the autonomy to make editorial decisions at their sole discretion, but there is equally no financial support.

According to a study conducted by Hunter (2014), journalists engaged in a crowdfunding campaign all held a similar opinion that their funders did not have control over the content and that it was the journalist who maintained ultimate jurisdiction. However, this pronouncement was complicated by the sense of accountability or responsibility incited in journalists towards their funders. Hunter (2014) notes that this may have the effect of creating a power imbalance between funders and the journalist, as journalists want to maintain editorial control, but it is in fact the funders that decide whether the project will be a success or not.

To combat this, Hunter (2014) proposes the following strategies that journalists may employ to maintain a more objective approach if desired:

  • Constructing an imaginary 'firewall' between themselves and their audiences
  • Limiting investment from any single source
  • Clearly defining the relationship they desire with funders at the outset of the project

The type of relationship and potential pressures the journalist may feel depends on the type of investor with whom they are working, as there are passive and active investors. Passive investors will not be involved beyond making a donation on the crowdfunding platform, leaving everything up to the discretion of the journalist. In contrast, active investors have a more active role in the production of the journalistic piece, which can take various forms that may include the investor providing feedback or ideas as well as receiving early copies of the work prior to its public release.

Some journalists in the study firmly held the opinion that impartial accounts and a detached, that is, "objective", reporting style should continue to govern, even within a crowdfunding context. Others, however, argued that point-of-view journalism and accurate reporting are not mutually exclusive, and that journalists may still adhere to quality factual reporting without the traditional practices or understanding of objectivity.

The study on crowdfunding done by Hunter (2014) showed that audiences are keen to fund projects with a specific point of view or pieces of advocacy journalism. Journalists often use crowdfunding to pursue point-of-view stories that large corporations do not cover adequately. On crowdfunding platforms, the journalist explains the goal of the work and the resources needed for it, and funders decide on that basis whether to contribute. The desire for, or acceptance of, opinionated journalism is especially clear with passive investors, because they donate based on the journalist's pitch and then let the journalist produce what they want. They essentially want to support the journalist as an individual and allow them the freedom to pursue the project.

Exponential decay

From Wikipedia, the free encyclopedia
 
A quantity undergoing exponential decay. Larger decay constants make the quantity vanish much more rapidly. This plot shows decay for decay constant (λ) of 25, 5, 1, 1/5, and 1/25 for x from 0 to 5.

A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant:

    dN/dt = −λN

The solution to this equation (see derivation below) is:

    N(t) = N0 e^(−λt)

where N(t) is the quantity at time t, N0 = N(0) is the initial quantity, that is, the quantity at time t = 0, and the constant λ is called the decay constant, disintegration constant, rate constant, or transformation constant.
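As an illustration of this formula, here is a minimal Python sketch; the initial quantity and decay constant below are arbitrary example values, not taken from the text:

    import math

    def exponential_decay(n0, decay_constant, t):
        """Quantity remaining at time t, N(t) = N0 * exp(-lambda * t)."""
        return n0 * math.exp(-decay_constant * t)

    # Example: N0 = 1000 units and an assumed decay constant of 0.5 per unit time.
    for t in range(6):
        print(t, round(exponential_decay(1000, 0.5, t), 1))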

Measuring rates of decay

Mean lifetime

If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate, λ, in the following way:

    τ = 1/λ

The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ:

    N(t) = N0 e^(−t/τ)

and τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value.

For example, if the initial population of the assembly, N(0), is 1000, then the population at time t = τ, namely N(τ), is 368.

A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life".
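A quick numerical check of this 1/e property, using an arbitrary example decay constant:

    import math

    n0 = 1000        # initial population of the assembly
    lam = 0.25       # assumed decay constant (example value)
    tau = 1 / lam    # mean lifetime, the "scaling time"

    # At t = tau the population should be N0 / e, i.e. about 368 of the original 1000.
    print(round(n0 * math.exp(-lam * tau)))   # 368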

Half-life

A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as:

    t1/2 = ln(2)/λ = τ ln(2)

When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equation becomes:

    N(t) = N0 2^(−t/t1/2)

Thus, the amount of material left is 2^(−1) = 1/2 raised to the (whole or fractional) number of half-lives that have passed; after 3 half-lives, for example, there will be 1/2^3 = 1/8 of the original material left.

Therefore, the mean lifetime is equal to the half-life divided by the natural log of 2, or:

    τ = t1/2 / ln(2) ≈ 1.4427 · t1/2

For example, polonium-210 has a half-life of 138 days, and a mean lifetime of 200 days.
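The polonium-210 figures above can be reproduced in a few lines of Python:

    import math

    half_life_days = 138                 # polonium-210 half-life from the text
    lam = math.log(2) / half_life_days   # decay constant, per day
    tau = 1 / lam                        # mean lifetime, in days

    print(round(lam, 5))    # about 0.00502 per day
    print(round(tau))       # about 199 days, i.e. roughly the 200 days quoted above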

Solution of the differential equation

The equation that describes exponential decay is

    dN/dt = −λN

or, by rearranging (applying the technique called separation of variables),

    dN/N = −λ dt

Integrating, we have

    ln N = −λt + C

where C is the constant of integration, and hence

    N(t) = e^C e^(−λt) = N0 e^(−λt)

where the final substitution, N0 = e^C, is obtained by evaluating the equation at t = 0, as N0 is defined as being the quantity at t = 0.

This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue. In this case, λ is the eigenvalue of the negative of the differential operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s⁻¹.
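As an informal check of this solution, a simple forward-Euler integration of dN/dt = −λN can be compared with the closed-form expression; the parameter values and step size below are arbitrary:

    import math

    lam, n0 = 0.3, 100.0        # assumed decay constant and initial quantity
    dt, steps = 0.001, 10_000   # small time step for the forward-Euler integration

    n = n0
    for _ in range(steps):
        n += -lam * n * dt      # dN/dt = -lambda * N

    t_end = dt * steps
    print(round(n, 4), round(n0 * math.exp(-lam * t_end), 4))   # the two values agree closely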

Derivation of the mean lifetime

Given an assembly of elements, the number of which decreases ultimately to zero, the mean lifetime, , (also called simply the lifetime) is the expected value of the amount of time before an object is removed from the assembly. Specifically, if the individual lifetime of an element of the assembly is the time elapsed between some reference time and the removal of that element from the assembly, the mean lifetime is the arithmetic mean of the individual lifetimes.

Starting from the population formula

    N = N0 e^(−λt),

first let c be the normalizing factor to convert to a probability density function:

    1 = ∫₀^∞ c · N0 e^(−λt) dt = c · N0/λ

or, on rearranging,

    c = λ/N0.

Exponential decay is a scalar multiple of the exponential distribution (i.e. the individual lifetime of each object is exponentially distributed), which has a well-known expected value. We can compute it here using integration by parts:

    τ = ⟨t⟩ = ∫₀^∞ t · λ e^(−λt) dt = 1/λ.
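As a rough numerical check of this expected value, the integral ∫₀^∞ t λ e^(−λt) dt can be approximated by a Riemann sum; the decay constant below is an arbitrary example:

    import math

    lam = 0.5              # assumed decay constant
    dt = 0.001
    t_max = 40 / lam       # integrate far enough out that the tail is negligible

    mean_lifetime = sum(t * lam * math.exp(-lam * t) * dt
                        for t in (i * dt for i in range(int(t_max / dt))))
    print(round(mean_lifetime, 3), 1 / lam)   # both are approximately 2.0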

Decay by two or more processes

A quantity may decay via two or more different processes simultaneously. In general, these processes (often called "decay modes", "decay channels", "decay routes" etc.) have different probabilities of occurring, and thus occur at different rates with different half-lives, in parallel. The total decay rate of the quantity N is given by the sum of the decay routes; thus, in the case of two processes:

    −dN/dt = N λ1 + N λ2 = (λ1 + λ2) N

The solution to this equation is given in the previous section, where the sum λ1 + λ2 is treated as a new total decay constant λc:

    N(t) = N0 e^(−(λ1 + λ2)t) = N0 e^(−λc t)

The partial mean life associated with an individual process is by definition the multiplicative inverse of the corresponding partial decay constant: τ = 1/λ. A combined τc can be given in terms of the individual λs:

    1/τc = λc = λ1 + λ2 = 1/τ1 + 1/τ2

Since half-lives differ from mean life by a constant factor, the same equation holds in terms of the two corresponding half-lives:

    1/T1/2 = 1/t1 + 1/t2

where T1/2 is the combined or total half-life for the process, and t1 and t2 are the so-named partial half-lives of the corresponding processes. The terms "partial half-life" and "partial mean life" denote quantities derived from a decay constant as if the given decay mode were the only decay mode for the quantity. The term "partial half-life" is misleading, because it cannot be measured as a time interval for which a certain quantity is halved.

In terms of the separate decay constants, the total half-life can be shown to be

    T1/2 = ln(2)/λc = ln(2)/(λ1 + λ2) = t1 t2 / (t1 + t2)

For a decay by three simultaneous exponential processes the total half-life can be computed as above:

    T1/2 = ln(2)/(λ1 + λ2 + λ3) = t1 t2 t3 / (t1 t2 + t1 t3 + t2 t3)
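A short sketch of the two-process combination, using hypothetical partial half-lives:

    import math

    # Hypothetical partial half-lives for two decay modes (example values only).
    t1, t2 = 10.0, 30.0

    lam1, lam2 = math.log(2) / t1, math.log(2) / t2   # partial decay constants
    total_half_life = math.log(2) / (lam1 + lam2)

    # The product-over-sum form t1*t2/(t1+t2) gives the same result.
    print(round(total_half_life, 3), round(t1 * t2 / (t1 + t2), 3))   # both 7.5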

Decay series / coupled decay

In nuclear science and pharmacokinetics, the agent of interest might be situated in a decay chain, where the accumulation is governed by exponential decay of a source agent, while the agent of interest itself decays by means of an exponential process.

These systems are solved using the Bateman equation.

In the pharmacology setting, some ingested substances might be absorbed into the body by a process reasonably modeled as exponential decay, or might be deliberately formulated to have such a release profile.
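For the simplest case of a two-member chain A → B → (removed), with the daughter initially absent and distinct decay constants, the Bateman solution reduces to a closed form. The sketch below uses arbitrary example values:

    import math

    def two_member_chain(n_a0, lam_a, lam_b, t):
        """Bateman solution for A -> B -> (removed), with N_B(0) = 0 and lam_a != lam_b."""
        n_a = n_a0 * math.exp(-lam_a * t)
        n_b = n_a0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
        return n_a, n_b

    # Example values (arbitrary): the parent decays more slowly than the daughter.
    print(two_member_chain(1000.0, 0.1, 0.5, t=5.0))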

Applications and examples

Exponential decay occurs in a wide variety of situations. Most of these fall into the domain of the natural sciences.

Many decay processes that are often treated as exponential are really only exponential so long as the sample is large and the law of large numbers holds. For small samples, a more general analysis is necessary, accounting for a Poisson process.
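One way to see this is to simulate a small assembly in which each object decays independently with a fixed probability per time step, and to compare the noisy counts with the smooth exponential prediction; the parameters below are arbitrary:

    import math
    import random

    lam, dt, n = 0.2, 0.1, 50           # a small sample of 50 objects
    p_decay = 1 - math.exp(-lam * dt)   # decay probability per object per step

    t, history = 0.0, []
    while n > 0 and t < 30:
        # Each surviving object decays independently during this step.
        n -= sum(1 for _ in range(n) if random.random() < p_decay)
        t += dt
        history.append((round(t, 1), n, round(50 * math.exp(-lam * t), 1)))

    print(history[:5])   # stochastic counts vs. the smooth exponential prediction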

Natural sciences

  • Chemical reactions: The rates of certain types of chemical reactions depend on the concentration of one or another reactant. Reactions whose rate depends only on the concentration of one reactant (known as first-order reactions) consequently follow exponential decay. For instance, many enzyme-catalyzed reactions behave this way.
  • Electrostatics: The electric charge (or, equivalently, the potential) contained in a capacitor (capacitance C) changes exponentially if the capacitor experiences a constant external load (resistance R). The exponential time-constant τ for the process is R C, and the half-life is therefore R C ln 2. This applies to both charging and discharging, i.e. a capacitor charges or discharges according to the same law. The same equations can be applied to the current in an inductor. (Furthermore, the particular case of a capacitor or inductor changing through several parallel resistors makes an interesting example of multiple decay processes, with each resistor representing a separate process. In fact, the expression for the equivalent resistance of two resistors in parallel mirrors the equation for the half-life with two decay processes.) A brief numerical sketch of the discharging case follows this list.
  • Geophysics: Atmospheric pressure decreases approximately exponentially with increasing height above sea level, at a rate of about 12% per 1000m.
  • Heat transfer: If an object at one temperature is exposed to a medium of another temperature, the temperature difference between the object and the medium follows exponential decay (in the limit of slow processes; equivalent to "good" heat conduction inside the object, so that its temperature remains relatively uniform through its volume). See also Newton's law of cooling.
  • Luminescence: After excitation, the emission intensity – which is proportional to the number of excited atoms or molecules – of a luminescent material decays exponentially. Depending on the number of mechanisms involved, the decay can be mono- or multi-exponential.
  • Pharmacology and toxicology: It is found that many administered substances are distributed and metabolized (see clearance) according to exponential decay patterns. The biological half-lives "alpha half-life" and "beta half-life" of a substance measure how quickly a substance is distributed and eliminated.
  • Physical optics: The intensity of electromagnetic radiation such as light or X-rays or gamma rays in an absorbent medium, follows an exponential decrease with distance into the absorbing medium. This is known as the Beer-Lambert law.
  • Radioactivity: In a sample of a radionuclide that undergoes radioactive decay to a different state, the number of atoms in the original state follows exponential decay as long as the remaining number of atoms is large. The decay product is termed a radiogenic nuclide.
  • Thermoelectricity: The decline in resistance of a Negative Temperature Coefficient Thermistor as temperature is increased.
  • Vibrations: Some vibrations may decay exponentially; this characteristic is often found in damped mechanical oscillators, and used in creating ADSR envelopes in synthesizers. An overdamped system will simply return to equilibrium via an exponential decay.
  • Beer froth: Arnd Leike, of the Ludwig Maximilian University of Munich, won an Ig Nobel Prize for demonstrating that beer froth obeys the law of exponential decay.
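As referenced in the electrostatics item above, here is a minimal sketch of a discharging RC circuit; the resistance, capacitance, and initial voltage are arbitrary example values:

    import math

    R, C = 10_000.0, 100e-6      # example values: 10 kΩ resistor, 100 µF capacitor
    tau = R * C                  # time constant in seconds (1.0 s here)
    half_life = tau * math.log(2)

    v0 = 5.0                     # assumed initial capacitor voltage, in volts
    for t in (0.0, half_life, tau, 3 * tau):
        print(round(t, 3), round(v0 * math.exp(-t / tau), 3))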

Social sciences

  • Finance: a retirement fund will decay exponentially when it is subject to discrete payout amounts, usually monthly, and an input subject to a continuous interest rate. A differential equation dA/dt = input − output can be written and solved to find the time to reach any amount A remaining in the fund; a sketch of the fully continuous case follows this list.
  • In simple glottochronology, the (debatable) assumption of a constant decay rate in languages allows one to estimate the age of single languages. (To compute the time of split between two languages requires additional assumptions, independent of exponential decay).
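Treating both the interest and the payouts as continuous, the finance item above can be sketched as follows; the balance, interest rate, and withdrawal figures are invented for illustration:

    import math

    a0 = 500_000.0   # initial fund balance (assumed)
    r = 0.04         # continuous annual interest rate (assumed)
    w = 40_000.0     # continuous annual withdrawal (assumed)

    # dA/dt = r*A - w  has the solution  A(t) = (A0 - w/r) * exp(r*t) + w/r.
    # When w > r*A0 the fund is eventually exhausted; solving A(t) = 0 for t:
    t_empty = math.log((w / r) / (w / r - a0)) / r
    print(round(t_empty, 1), "years until the fund is exhausted")   # about 17.3 years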

Computer science

  • The core routing protocol on the Internet, BGP, has to maintain a routing table in order to remember the paths a packet can be deviated to. When one of these paths repeatedly changes its state from available to not available (and vice versa), the BGP router controlling that path has to repeatedly add and remove the path record from its routing table (flaps the path), thus spending local resources such as CPU and RAM and, even more, broadcasting useless information to peer routers. To prevent this undesired behavior, an algorithm named route flapping damping assigns each route a weight that gets bigger each time the route changes its state and decays exponentially with time. When the weight exceeds a certain limit, the route is suppressed; it is used again only after its weight has decayed back below a reuse threshold.
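A toy sketch of such a decaying penalty is given below; the half-life, per-flap penalty, and suppression threshold are illustrative values only, not the defaults of any real BGP implementation:

    import math

    HALF_LIFE = 15.0       # penalty half-life in seconds (illustrative)
    SUPPRESS_LIMIT = 3.0   # suppress the route once the penalty exceeds this (illustrative)
    FLAP_PENALTY = 1.0     # penalty added per state change (illustrative)

    def decayed(penalty, elapsed):
        """Exponentially decay a route's penalty over the elapsed time."""
        return penalty * 0.5 ** (elapsed / HALF_LIFE)

    penalty, last_update = 0.0, 0.0
    for flap_time in (0, 2, 4, 6, 8):    # a burst of flaps, 2 seconds apart
        penalty = decayed(penalty, flap_time - last_update) + FLAP_PENALTY
        last_update = flap_time
        print(flap_time, round(penalty, 2), penalty > SUPPRESS_LIMIT)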
Graphs comparing doubling times and half-lives of exponential growth (bold lines) and decay (faint lines), and their 70/t and 72/t approximations.
