
Wednesday, August 31, 2022

Anthropogenic hazard

From Wikipedia, the free encyclopedia

Anthropogenic hazards are hazards caused by human action or inaction. They are contrasted with natural hazards. Anthropogenic hazards may adversely affect humans, other organisms, biomes, and ecosystems; in the extreme, they could even cause omnicide. The frequency and severity of hazards are key elements in some risk analysis methodologies. Hazards may also be described in relation to the impact that they have. A hazard exists only if there is a pathway to exposure. As an example, the center of the Earth consists of material at very high temperatures which would be a severe hazard if contact were made with it. However, there is no feasible way of making such contact, so the center of the Earth currently poses no hazard.

Anthropogenic hazards can be grouped into societal hazards (criminality, civil disorder, terrorism, war, industrial hazards, engineering hazards, power outages, fire), hazards caused by transportation, and environmental hazards.

A proposed level crossing at railroad tracks would result in "the worst death trap in Los Angeles," a California traffic engineer warned in 1915, because of automobile drivers' impaired view of the railway. A viaduct was built instead, in 1920.

Societal hazards

Societal hazards can arise when a hazard is inadvertently overlooked, deliberately ignored, or neglected, or when little or no preemptive action is taken to prevent a hazard from occurring. Although not everything is within the scope of human control, much anti-social behaviour and crime committed by individuals or groups can be prevented where there is reasonable apprehension of injury or death. People commonly report dangerous circumstances, suspicious behaviour, or criminal intentions to the police so that the authorities can investigate or intervene.

Criminality

Behavior that puts others at risk of injury or death is universally regarded as criminal and is a breach of the law, for which the appropriate legal authority may impose some form of penalty, such as imprisonment, a fine, or even execution. Understanding what makes individuals act in ways that put others at risk has been the subject of much research in many developed countries. Mitigating the hazard of criminality is very dependent on time and place, with some areas and times of day posing a greater risk than others.

Civil disorder

Civil disorder is a broad term typically used by law enforcement to describe forms of disturbance in which many people are involved and set upon a common aim. Civil disorder has many causes, including large-scale criminal conspiracy, socio-economic factors (unemployment, poverty), hostility between racial and ethnic groups, and outrage over perceived moral and legal transgressions. Examples of well-known civil disorders and riots are the poll tax riots in the United Kingdom in 1990; the 1992 Los Angeles riots, in which 53 people died; the 2008 Greek riots, after a 15-year-old boy was fatally shot by police; and the 2010 Thai political protests in Bangkok, during which 91 people died. Such behavior is hazardous only for those directly involved, as participants or as those controlling the disturbance, and for those indirectly involved, such as passers-by or shopkeepers. For the great majority, staying out of the way of the disturbance avoids the hazard.

Terrorism

The common definition of terrorism is the use or threatened use of violence for the purpose of creating fear in order to achieve a political, religious, or ideological goal. Targets of terrorist acts can be anyone, including private citizens, government officials, military personnel, law enforcement officers, firefighters, or people serving in the interests of governments.

Definitions of terrorism may also vary geographically. In Australia, the Security Legislation Amendment (Terrorism) Act 2002 defines terrorism as "an action to advance a political, religious or ideological cause and with the intention of coercing the government or intimidating the public", while the United States Department of State operationally describes it as "premeditated, politically-motivated violence perpetrated against non-combatant targets by subnational groups or clandestine agents, usually intended to influence an audience".

War

War is a conflict between relatively large groups of people which involves physical force inflicted through the use of weapons. Warfare has destroyed entire cultures, countries, and economies, and has inflicted great suffering on humanity. Other terms for war include armed conflict, hostilities, and police action. Acts of war are normally excluded from insurance contracts and sometimes from disaster planning.

Industrial hazards

Industrial accidents resulting in releases of hazardous materials usually occur in a commercial context, as in mining accidents. They often have an environmental impact, but can also be hazardous for people living nearby. The Bhopal disaster saw the release of methyl isocyanate into the neighbouring environment, seriously affecting large numbers of people; it is probably the world's worst industrial accident to date.

Engineering hazards

Engineering hazards occur when structures used by people fail or the materials used in their construction prove to be hazardous. The history of construction offers many examples of hazards associated with structures, including bridge failures such as the Tay Bridge disaster, caused by under-design; the Silver Bridge collapse, caused by corrosion; and the collapse of the original Tacoma Narrows Bridge, caused by aerodynamic flutter of the deck. Dam failures were not infrequent during the Victorian era: the Dale Dyke dam failure in Sheffield, England in 1864 caused the Great Sheffield Flood, which killed at least 240 people, and in 1889 the failure of the South Fork Dam on the Little Conemaugh River near Johnstown, Pennsylvania, produced the Johnstown Flood, which killed over 2,200. Other failures include balcony collapses, aerial walkway collapses such as the Hyatt Regency walkway collapse in Kansas City in 1981, and building collapses such as that of the World Trade Center in New York City during the September 11 attacks in 2001.

Power outage

A power outage is an interruption of normal sources of electrical power. Short-term power outages (up to a few hours) are common and have minor adverse effects, since most businesses and health facilities are prepared to deal with them. Extended power outages, however, can disrupt personal and business activities as well as medical and rescue services, leading to business losses and medical emergencies. Extended loss of power can lead to civil disorder, as in the New York City blackout of 1977. Only very rarely do power outages escalate to disaster proportions; however, they often accompany other types of disasters, such as hurricanes and floods, which hampers relief efforts.

Electromagnetic pulses and voltage spikes from whatever cause can also damage electricity infrastructure and electrical devices.

Recent notable power outages include the 2005 Java–Bali blackout, which affected 100 million people; the 2012 India blackouts, which affected 600 million; and the 2009 Brazil and Paraguay blackout, which affected 60 million people.

Fire

An active flame front of the Zaca Fire
 

Bush fires, forest fires, and mine fires are generally started by lightning, but also by human negligence or arson. They can burn thousands of square kilometers. If a fire intensifies enough to produce its own winds and "weather", it will form into a firestorm. A notable example of a mine fire is the one near Centralia, Pennsylvania: started in 1962, it ruined the town and continues to burn today. Some of the biggest city-related fires are the Great Chicago Fire and the Peshtigo Fire (both of 1871) and the Great Fire of London in 1666.

Casualties resulting from fires, regardless of their source or initial cause, can be aggravated by inadequate emergency preparedness. Such hazards as a lack of accessible emergency exits, poorly marked escape routes, or improperly maintained fire extinguishers or sprinkler systems may result in many more deaths and injuries than might occur with such protections. 

A building damaged by arson
 

Arson is the act of setting a fire with intent to cause damage. The definition of arson was originally limited to setting fire to buildings, but was later expanded to include other objects, such as bridges, vehicles, and private property. Some human-induced fires are accidental: failing machinery, such as a kitchen stove, is a major cause of accidental fires.

Hazards caused by transportation

Aviation

The ditching of US Airways Flight 1549 was a well-publicised incident in which all on board survived
 

An aviation incident is an occurrence, other than an accident, associated with the operation of an aircraft that affects or could affect the safety of operations, passengers, or pilots. The aircraft involved can range from a helicopter to an airliner or even a Space Shuttle.

Rail

Granville-Paris Express wreck at Gare Montparnasse on 22 October 1895
 

The special hazards of traveling by rail include the possibility of a train crash, which can result in substantial loss of life. Incidents involving freight traffic generally pose a greater hazard to the environment. Less common are geophysical hazards such as tsunamis: in 2004, a tsunami struck a train in Sri Lanka, and 1,700 people died in the resulting Sri Lanka tsunami-rail disaster.

See also the list of train accidents by death toll.

Road

Traffic collisions are a leading cause of death, and road-based pollution creates a substantial health hazard, especially in major conurbations.

Space

Disintegration of the Space Shuttle Challenger

Space travel presents significant hazards, mostly to the direct participants (astronauts or cosmonauts and ground support personnel), but it also carries the potential for disaster to the public at large. Accidents related to space travel have killed 22 astronauts and cosmonauts, and a larger number of people on the ground.

Accidents can occur on the ground during launch, preparation, or in flight, due to equipment malfunction or the naturally hostile environment of space itself. An additional risk is posed by (unmanned) low-orbiting satellites whose orbits eventually decay due to friction with the extremely thin atmosphere. If they are large enough, massive pieces traveling at great speed can fall to the Earth before burning up, with the potential to do damage.

One of the worst human-piloted space accidents involved the Space Shuttle Challenger which disintegrated in 1986, claiming all seven lives on board. The shuttle disintegrated 73 seconds after taking off from the launch pad in Cape Canaveral, Florida.

Another example is the Space Shuttle Columbia, which disintegrated during a landing attempt over Texas in 2003, with a loss of all seven astronauts on board. The debris field extended from New Mexico to Mississippi.

Sea travel

The capsized cruise ship Costa Concordia with a large rock lodged in the crushed hull of the ship

Ships can sink, capsize, or crash in disasters. Perhaps the most infamous sinking was that of the Titanic, which hit an iceberg and sank in one of the worst maritime disasters in history. Other notable incidents include the capsizing of the Costa Concordia, which killed at least 32 people and is the largest passenger ship ever to sink, and the sinking of the MV Doña Paz, which claimed the lives of up to 4,375 people in the worst peacetime maritime disaster in history.

Environmental hazards

Environmental hazards are those hazards whose effects are seen in biomes or ecosystems rather than directly on living organisms. Well-known examples include oil spills, water pollution, slash-and-burn deforestation, air pollution, and ground fissures.

Waste disposal

Waste management places many hazardous materials into the domestic and commercial waste stream, in part because modern technological living relies on toxic or poisonous materials used in the electronics and chemical industries. While in use or in transport, these materials are usually safely contained or encapsulated and packaged to avoid exposure. In the waste stream, however, the product's exterior or encapsulation breaks or degrades, releasing hazardous materials into the environment and exposing people working in the waste disposal industry, those living around waste disposal or landfill sites, and the general environment surrounding such sites.

Hazardous materials

Organohalogens

Organohalogens are a family of synthetic organic molecules which all contain atoms of one of the halogens. Such materials include PCBs, dioxins, DDT, Freon and many others. Although considered harmless when first produced, many of these compounds are now known to have profound physiological effects on many organisms, including humans. Many are also fat-soluble and become concentrated through the food chain.

Toxic metals

Many metals and their salts can exhibit toxicity to humans and many other organisms. Such metals include lead, cadmium, copper, silver, mercury, and many of the transuranic metals.

Radioactive materials

Radioactive materials produce ionizing radiation which may be very harmful to living organisms. Damage from even a short exposure to radioactivity may have long term adverse health consequences.

Exposure may occur from nuclear fallout when nuclear weapons are detonated or nuclear containment systems are compromised. During World War II, the United States Army Air Forces dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki, leading to extensive contamination of food, land, and water. In the Soviet Union, the Mayak industrial complex (otherwise known as Chelyabinsk-40 or Chelyabinsk-65) suffered an explosion in 1957. This Kyshtym disaster, the third most serious nuclear accident ever recorded, was kept secret for several decades; at least 22 villages were exposed to radiation, and at least 10,000 people were displaced. In 1992, the former Soviet Union officially acknowledged the accident. The Soviet republics of Ukraine and Belarus also suffered when a reactor at the Chernobyl nuclear power plant had a meltdown in 1986. To this day, several small towns and the city of Chernobyl remain abandoned and uninhabitable due to fallout.

The Hanford Site is a decommissioned nuclear production complex that produced plutonium for most of the 60,000 weapons in the U.S. nuclear arsenal. There are environmental concerns about radioactivity released from Hanford.

A number of military accidents involving nuclear weapons have also resulted in radioactive contamination, for example the 1966 Palomares B-52 crash and the 1968 Thule Air Base B-52 crash.

Dermatitis (burn) of chin from vapors of mustard gas

CBRNs

CBRN is a catch-all acronym for chemical, biological, radiological, and nuclear. The term is used to describe a non-conventional terror threat that, if used by a nation, would be considered use of a weapon of mass destruction. This term is used primarily in the United Kingdom. Planning for the possibility of a CBRN event may be appropriate for certain high-risk or high-value facilities and governments. Examples include Saddam Hussein's Halabja poison gas attack, the sarin gas attack on the Tokyo subway and the preceding test runs in Matsumoto, Japan, 100 kilometers outside of Tokyo, and Lord Amherst giving smallpox-laden blankets to Native Americans.

Screening (medicine)

From Wikipedia, the free encyclopedia
 
 
A coal miner completes a screening survey for coalworker's pneumoconiosis.

Screening, in medicine, is a strategy used to look for as-yet-unrecognised conditions or risk markers. This testing can be applied to individuals or to a whole population. The people tested may not exhibit any signs or symptoms of a disease, or they might exhibit only one or two symptoms, which by themselves do not indicate a definitive diagnosis.

Screening interventions are designed to identify conditions which could at some future point turn into disease, thus enabling earlier intervention and management in the hope of reducing mortality and suffering from a disease. Although screening may lead to an earlier diagnosis, not all screening tests have been shown to benefit the person being screened; overdiagnosis, misdiagnosis, and creating a false sense of security are some potential adverse effects of screening. Additionally, some screening tests can be inappropriately overused. For these reasons, a test used in a screening program, especially for a disease with low incidence, must have good sensitivity in addition to acceptable specificity.
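The importance of specificity at low incidence can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical numbers (a 0.1% prevalence and a test with 99% sensitivity and 95% specificity) to show how, in a rare disease, false positives can swamp true positives even when the test looks accurate:

```python
# Hypothetical illustration: why specificity matters when screening for a
# low-incidence disease. All numbers are assumed for demonstration only.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives, ppv) for one screening round."""
    diseased = population * prevalence
    healthy = population - diseased
    true_positives = diseased * sensitivity           # sick and correctly flagged
    false_positives = healthy * (1 - specificity)     # healthy but flagged anyway
    # Positive predictive value: chance a positive result is a true case.
    ppv = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, ppv

# Screen 100,000 people for a disease with 0.1% prevalence, using a test
# with 99% sensitivity and 95% specificity.
tp, fp, ppv = screening_outcomes(100_000, 0.001, 0.99, 0.95)
print(f"true positives:  {tp:.0f}")   # 99
print(f"false positives: {fp:.0f}")   # 4995
print(f"PPV: {ppv:.1%}")              # about 1.9%: most positives are false
```

With these assumed figures, roughly 50 people receive a false positive for every true case found, which is why the specificity requirement tightens as incidence falls.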

Several types of screening exist: universal screening involves screening of all individuals in a certain category (for example, all children of a certain age). Case finding involves screening a smaller group of people based on the presence of risk factors (for example, because a family member has been diagnosed with a hereditary disease). Screening interventions are not designed to be diagnostic, and often have significant rates of both false positive and false negative results.

Frequently updated recommendations for screening are provided by the United States Preventive Services Task Force, an independent panel of experts.

Principles

In 1968, the World Health Organization published guidelines on the Principles and practice of screening for disease, which are often referred to as the Wilson and Jungner criteria. The principles are still broadly applicable today:

  1. The condition should be an important health problem.
  2. There should be a treatment for the condition.
  3. Facilities for diagnosis and treatment should be available.
  4. There should be a latent stage of the disease.
  5. There should be a test or examination for the condition.
  6. The test should be acceptable to the population.
  7. The natural history of the disease should be adequately understood.
  8. There should be an agreed policy on whom to treat.
  9. The total cost of finding a case should be economically balanced in relation to medical expenditure as a whole.
  10. Case-finding should be a continuous process, not just a "once and for all" project.

In 2008, with the emergence of new genomic technologies, the WHO synthesised and modified these with the new understanding as follows:

Synthesis of emerging screening criteria proposed over the past 40 years

  • The screening programme should respond to a recognized need.
  • The objectives of screening should be defined at the outset.
  • There should be a defined target population.
  • There should be scientific evidence of screening programme effectiveness.
  • The programme should integrate education, testing, clinical services and programme management.
  • There should be quality assurance, with mechanisms to minimize potential risks of screening.
  • The programme should ensure informed choice, confidentiality and respect for autonomy.
  • The programme should promote equity and access to screening for the entire target population.
  • Programme evaluation should be planned from the outset.
  • The overall benefits of screening should outweigh the harm.

Types

A mobile clinic used to screen coal miners at risk of black lung disease
  • Mass screening: The screening of a whole population or subgroup. It is offered to all, irrespective of the risk status of the individual.
  • High risk or selective screening: High risk screening is conducted only among high-risk people.
  • Multiphasic screening: The application of two or more screening tests to a large population at one time, instead of carrying out separate screening tests for single diseases.
  • When done thoughtfully and based on research, identification of risk factors can be a strategy for medical screening.

Examples

Common programs

In many countries there are population-based screening programmes. In some countries, such as the UK, policy is made nationally and programmes are delivered nationwide to uniform quality standards. Common screening programmes include:

School-based

Most public school systems in the United States screen students periodically for hearing and vision deficiencies and dental problems. Screening for spinal and posture issues such as scoliosis is sometimes carried out, but is controversial as scoliosis (unlike vision or dental issues) is found in only a very small segment of the general population and because students must remove their shirts for screening. Many states no longer mandate scoliosis screenings, or allow them to be waived with parental notification. Bills are currently being introduced in various U.S. states to mandate mental health screenings for students attending public schools, in the hope of preventing self-harm as well as harm to peers. Those proposing these bills hope to diagnose and treat mental illnesses such as depression and anxiety.

Screening for social determinants of health

The social determinants of health are the economic and social conditions that influence individual and group differences in health status, and they can adversely affect health and well-being. To mitigate those adverse effects, certain health policies, such as the United States Affordable Care Act (2010), gave increased traction to preventive programs like those that routinely screen for social determinants of health. Screening is believed to be a valuable tool for identifying patients' basic needs within a social determinants of health framework so that they can be better served.

Policy background in the United States

When established in the United States, the Affordable Care Act was able to bridge the gap between community-based health and healthcare as a medical treatment, leading to programs that screened for social determinants of health. The Affordable Care Act established several services with an eye for social determinants or an openness to more diverse clientele, such as Community Transformation Grants, which were delegated to the community in order to establish "preventive community health activities" and "address health disparities".

Clinical programs

Social determinants of health include social status, gender, ethnicity, economic status, education level, access to services, immigrant status, upbringing, and many other factors. Several clinics across the United States have employed a system in which they screen patients for certain risk factors related to social determinants of health. In such cases, it is done as a preventive measure to mitigate any detrimental effects of prolonged exposure to certain risk factors, or to begin remedying the adverse effects already faced by certain individuals. Screening can be structured in different ways, for example online or in person, and yields different outcomes based on the patient's responses. Some programs, like the FIND Desk at UCSF Benioff Children's Hospital, employ screening for social determinants of health in order to connect their patients with social services and community resources that may provide greater autonomy and mobility.

Medical equipment used

Medical equipment used in screening tests is usually different from that used in diagnostic tests: screening tests indicate the likely presence or absence of a disease or condition in people not presenting symptoms, while diagnostic equipment makes quantitative physiological measurements to confirm and determine the progress of a suspected disease or condition. Medical screening equipment must be capable of processing many cases quickly, but may not need to be as precise as diagnostic equipment.

Limitations

Screening can detect medical conditions at an early stage, before symptoms present, when treatment is more effective than it would be after later detection. In the best cases, lives are saved. Like any medical test, the tests used in screening are not perfect. The test result may incorrectly show positive for those without disease (false positive), or negative for people who have the condition (false negative). Limitations of screening programmes can include:

  • Screening can involve cost and use of medical resources on a majority of people who do not need treatment.
  • Adverse effects of screening procedure (e.g. stress and anxiety, discomfort, radiation exposure, chemical exposure).
  • Stress and anxiety caused by prolonging knowledge of an illness without any improvement in outcome. This problem is referred to as overdiagnosis (see also below).
  • Stress and anxiety caused by a false positive screening result.
  • Unnecessary investigation and treatment of false positive results (namely misdiagnosis with Type I error).
  • A false sense of security caused by false negatives, which may delay final diagnosis (namely misdiagnosis with Type II error).

Screening for dementia in the English NHS is controversial because it could cause undue anxiety in patients and support services would be stretched. A GP reported: "The main issue really seems to be centred around what the consequences of such a diagnosis is and what is actually available to help patients."

Analysis

To many people, screening instinctively seems like an appropriate thing to do, because catching something earlier seems better. However, no screening test is perfect. There will always be the problems with incorrect results and other issues listed above. It is an ethical requirement for balanced and accurate information to be given to participants at the point when screening is offered, in order that they can make a fully informed choice about whether or not to accept.

Before a screening program is implemented, it should be looked at to ensure that putting it in place would do more good than harm. The best studies for assessing whether a screening test will increase a population's health are rigorous randomized controlled trials.

When studying a screening program using case-control or, more usually, cohort studies, various factors can cause the screening test to appear more successful than it really is. A number of different biases, inherent in the study method, will skew results.

Overdiagnosis

Screening may identify abnormalities that would never cause a problem in a person's lifetime. An example of this is prostate cancer screening; it has been said that "more men die with prostate cancer than of it". Autopsy studies have shown that between 14% and 77% of elderly men who died of other causes had prostate cancer.

Aside from the issue of unnecessary treatment (prostate cancer treatment is by no means without risk), overdiagnosis makes a study look good at picking up abnormalities, even though those abnormalities are sometimes harmless.

Overdiagnosis occurs when people with harmless abnormalities are counted as "lives saved" by the screening rather than as "healthy people needlessly harmed by overdiagnosis". This can lead to an endless cycle: the greater the overdiagnosis, the more people come to believe screening is more effective than it is, which encourages more screening and leads to even more overdiagnosis. Raffle, Mackie and Gray call this the popularity paradox of screening: "The greater the harm through overdiagnosis and overtreatment from screening, the more people there are who believe they owe their health, or even their life, to the programme" (p. 56, Box 3.4).

Japan's screening for neuroblastoma, the most common malignant solid tumor in children, is a good example of why a screening program must be evaluated rigorously before it is implemented. In 1981, Japan started a program of screening for neuroblastoma by measuring homovanillic acid and vanilmandelic acid in urine samples of six-month-old infants. In 2003, a special committee was organized to evaluate the neuroblastoma screening program. That same year, the committee concluded that there was sufficient evidence that the screening method used at the time led to overdiagnosis, but not enough evidence that the program reduced neuroblastoma deaths. As such, the committee recommended against screening, and the Ministry of Health, Labour and Welfare decided to stop the screening program.

Another example of overdiagnosis occurred with thyroid cancer: its incidence tripled in the United States between 1975 and 2009, while mortality remained constant. In South Korea, the situation was even worse, with a 15-fold increase in incidence from 1993 to 2011 (the world's greatest increase in thyroid cancer incidence) while mortality remained stable. The increase in incidence was associated with the introduction of ultrasonography screening.

The problem of overdiagnosis in cancer screening is that, at the time of diagnosis, it is not possible to differentiate a harmless lesion from a lethal one unless the patient goes untreated and dies of other causes. As a result, almost all patients tend to be treated, leading to what is called overtreatment. As researchers Welch and Black put it, "Overdiagnosis—along with the subsequent unneeded treatment with its attendant risks—is arguably the most important harm associated with early cancer detection."

Lead time bias

Lead time bias leads to longer perceived survival with screening, even if the course of the disease is not altered

If screening works, it must diagnose the target disease earlier than it would otherwise be diagnosed, when symptoms appear.

Even if, in both cases (with and without screening), patients die at the same time, survival time since diagnosis is longer in screened people simply because the disease was diagnosed earlier. This happens even when life span has not been prolonged. And because the diagnosis was made earlier without life being prolonged, patients may be more anxious, as they must live with knowledge of the diagnosis for longer.

If screening works, it must introduce a lead time, so statistics of survival time since diagnosis tend to increase with screening, even when screening offers no benefit. If we do not think about what survival time actually means in this context, we might attribute success to a screening test that does nothing but advance diagnosis. Because survival statistics suffer from this and other biases, comparing disease mortality (or even all-cause mortality) between screened and unscreened populations gives more meaningful information.
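Lead time bias can be reduced to a few lines of arithmetic. The sketch below uses hypothetical ages (symptomatic diagnosis at 67, death at 70, a 3-year lead time) to show how "survival since diagnosis" doubles with screening even though the patient dies at exactly the same age:

```python
# Toy illustration of lead time bias; all ages and the lead time are assumed.

age_at_death = 70

# Without screening: the disease is diagnosed when symptoms appear.
age_at_symptomatic_diagnosis = 67
survival_without_screening = age_at_death - age_at_symptomatic_diagnosis

# With screening: the same disease is found 3 years earlier (the lead time),
# but the course of the disease, and the age at death, are unchanged.
lead_time = 3
age_at_screen_diagnosis = age_at_symptomatic_diagnosis - lead_time
survival_with_screening = age_at_death - age_at_screen_diagnosis

print(survival_without_screening)  # 3 years
print(survival_with_screening)     # 6 years
# "Survival since diagnosis" doubles, yet death occurs at 70 in both cases:
# the extra 3 years are time lived with a diagnosis, not added life.
```

This is why comparing mortality between screened and unscreened populations, rather than survival since diagnosis, is the more meaningful measure.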

Length time bias

Length time bias leads to better perceived survival with screening, even if the course of the disease is not altered.

Many screening tests involve the detection of cancers. Screening is more likely to detect slower-growing tumors (due to their longer pre-clinical sojourn time), which are less likely to cause harm, while more aggressive cancers tend to produce symptoms in the gaps between scheduled screenings and are therefore less likely to be detected by screening. As a result, the cases screening detects often automatically have a better prognosis than symptomatic cases. The consequence is that more slowly progressing cases are now classified as cancers, which increases the incidence, and, because of their better prognosis, the survival rates of screened people will be better than those of non-screened people even if screening makes no difference.
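The selection effect described above can be demonstrated with a toy simulation. Every parameter here is assumed (an exponentially distributed sojourn time with a mean of one year, screens every two years); the point is only that conditioning detection on "still pre-symptomatic at the next screen" skews the screen-detected mix towards slow growers even when all tumours come from the same distribution:

```python
# Toy simulation of length time bias (all parameters assumed): tumours with a
# longer pre-clinical "sojourn time" are more likely to still be detectable
# when a scheduled screen happens, so screening preferentially finds them.
import random

random.seed(0)
SCREEN_INTERVAL = 2.0  # assumed: years between screening rounds

screen_detected, symptomatic = [], []
for _ in range(100_000):
    # Sojourn time: how long the tumour is detectable before causing symptoms.
    sojourn = random.expovariate(1.0)  # assumed mean of 1 year
    # Tumour onset falls uniformly within a screening interval; it is caught
    # by the next screen only if it is still pre-symptomatic at that point.
    time_until_next_screen = random.uniform(0.0, SCREEN_INTERVAL)
    if sojourn > time_until_next_screen:
        screen_detected.append(sojourn)
    else:
        symptomatic.append(sojourn)

mean = lambda xs: sum(xs) / len(xs)
print(f"mean sojourn, screen-detected: {mean(screen_detected):.2f} years")
print(f"mean sojourn, symptomatic:     {mean(symptomatic):.2f} years")
# Screen-detected tumours have a markedly longer mean sojourn time, even
# though every tumour was drawn from the same distribution.
```

Since slower sojourn correlates with slower, less harmful disease in this model, the screen-detected group would show better survival even if screening changed nothing about treatment or outcome.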

Selection bias

Not everyone will partake in a screening program. There are factors that differ between those willing to get tested and those who are not.

If people with a higher risk of a disease are more likely to be screened, for instance women with a family history of breast cancer are more likely than other women to join a mammography program, then a screening test will look worse than it really is: negative outcomes among the screened population will be higher than for a random sample.

Selection bias may also make a test look better than it really is. If a test is more available to young and healthy people (for instance if people have to travel a long distance to get checked) then fewer people in the screening population will have negative outcomes than for a random sample, and the test will seem to make a positive difference.

Studies have shown that people who attend screening tend to be healthier than those who do not. This has been called the healthy screenee effect, a form of selection bias. The reason seems to be that healthy, affluent, physically fit non-smokers with long-lived parents are more likely to come and get screened than those on low incomes who have existing health and social problems. One example of selection bias occurred in the Edinburgh trial of mammography screening, which used cluster randomisation. The trial found reduced cardiovascular mortality in those who were screened for breast cancer. That happened because of baseline differences in socio-economic status between the groups: 26% of the women in the control group and 53% in the study group belonged to the highest socioeconomic level.
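
A toy model (attendance rates and risk figures invented) reproduces the healthy screenee effect: screening does nothing here, yet the screened group shows lower mortality simply because healthier people attend:

```python
import random

random.seed(1)

# 60% of the population has a 5% ten-year mortality risk, 40% a 20% risk.
population = [0.05] * 60_000 + [0.20] * 40_000
screened, unscreened = [], []
for risk in population:
    p_attend = 0.8 if risk == 0.05 else 0.4  # healthier people attend more
    (screened if random.random() < p_attend else unscreened).append(risk)

def mortality(group):
    return sum(random.random() < risk for risk in group) / len(group)

# Roughly 9% vs 15% mortality, with zero effect from screening itself.
print(round(mortality(screened), 3), round(mortality(unscreened), 3))
```

Comparing attenders with non-attenders therefore flatters the program; randomising who is *invited* to screening, rather than comparing volunteers with refusers, is what removes this bias.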

Study design for research on screening programs

The best way to minimize selection bias is to use a randomized controlled trial, though observational, naturalistic, or retrospective studies can be of some value and are typically easier to conduct. Any study must be sufficiently large (include many patients) and sufficiently long (follow patients for many years) to have the statistical power to assess the true value of a screening program. For rare diseases, hundreds of thousands of patients may be needed to realize the value of screening (find enough treatable disease), and to assess the effect of the screening program on mortality a study may have to follow the cohort for decades. Such studies take a long time and are expensive, but can provide the most useful data with which to evaluate the screening program and practice evidence-based medicine.
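
The scale involved can be sketched with the standard two-proportion sample-size formula; the mortality figures used here (0.30% in the unscreened arm, a 20% relative reduction to 0.24%) are invented for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Well over 100,000 participants per arm to detect this hypothetical effect.
print(n_per_arm(0.0030, 0.0024))
```

The rarer the outcome and the smaller the relative reduction, the larger the denominator of the formula shrinks, which is why mortality endpoints for rare diseases demand cohorts of this size followed for decades.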

All-cause mortality vs disease-specific mortality

The main outcome of cancer screening studies is usually the number of deaths caused by the disease being screened for, called disease-specific mortality. For example, in trials of mammography screening for breast cancer, the main outcome reported is often breast cancer mortality. However, disease-specific mortality may be biased in favor of screening. In the breast cancer example, women overdiagnosed with breast cancer may receive radiotherapy, which increases mortality due to lung cancer and heart disease. The problem is that these deaths are often classified as deaths from other causes and may even outnumber the breast cancer deaths avoided by screening. The unbiased outcome is therefore all-cause mortality. The difficulty is that much larger trials are needed to detect a significant reduction in all-cause mortality. In 2016, researcher Vinay Prasad and colleagues published an article in the BMJ titled "Why cancer screening has never been shown to save lives", because cancer screening trials had not shown a reduction in all-cause mortality.
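
A small arithmetic sketch (all counts invented, not from any trial) shows how the bias works: deaths caused by treating overdiagnosed disease vanish from the disease-specific count but not from the all-cause count:

```python
# Hypothetical deaths per 10,000 women over a trial period (invented numbers).
screened   = {"breast cancer": 4, "treatment-related": 3, "other": 93}
unscreened = {"breast cancer": 6, "treatment-related": 0, "other": 94}

# Disease-specific mortality looks favourable: 4 vs 6.
print(screened["breast cancer"], unscreened["breast cancer"])
# All-cause mortality is identical: 100 vs 100 -- no net lives saved.
print(sum(screened.values()), sum(unscreened.values()))
```

Reporting only the first comparison would make this hypothetical program look like a success even though no one lives longer.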

Mathematics education

From Wikipedia, the free encyclopedia
 

In contemporary education, mathematics education is the practice of teaching and learning mathematics, along with the associated scholarly research.

Researchers in mathematics education are primarily concerned with the tools, methods and approaches that facilitate practice or the study of practice; however, mathematics education research, known on the continent of Europe as the didactics or pedagogy of mathematics, has developed into an extensive field of study, with its own concepts, theories, methods, national and international organisations, conferences and literature. This article describes some of the history, influences and recent controversies.

History

Elementary mathematics was part of the education system in most ancient civilisations, including Ancient Greece, the Roman Empire, Vedic society and ancient Egypt. In most cases, formal education was only available to male children with sufficiently high status, wealth or caste.

Illustration at the beginning of the 14th-century translation of Euclid's Elements.

In Plato's division of the liberal arts into the trivium and the quadrivium, the quadrivium included the mathematical fields of arithmetic and geometry. This structure was continued in the classical education that developed in medieval Europe. The teaching of geometry was almost universally based on Euclid's Elements. Apprentices to trades such as masons, merchants and money-lenders could expect to learn such practical mathematics as was relevant to their profession.

In the Renaissance, the academic status of mathematics declined, because it was strongly associated with trade and commerce, and considered somewhat un-Christian. Although it continued to be taught in European universities, it was seen as subservient to the study of Natural, Metaphysical and Moral Philosophy. The first modern arithmetic curriculum (starting with addition, then subtraction, multiplication, and division) arose at reckoning schools in Italy in the 1300s. Spreading along trade routes, these methods were designed to be used in commerce. They contrasted with Platonic math taught at universities, which was more philosophical and concerned numbers as concepts rather than calculating methods. They also contrasted with mathematical methods learned by artisan apprentices, which were specific to the tasks and tools at hand. For example, the division of a board into thirds can be accomplished with a piece of string, instead of measuring the length and using the arithmetic operation of division.

The first mathematics textbooks to be written in English and French were published by Robert Recorde, beginning with The Grounde of Artes in 1543. However, there are many different writings on mathematics and mathematics methodology that date back to 1800 BCE. These were mostly located in Mesopotamia where the Sumerians were practicing multiplication and division. There are also artifacts demonstrating their methodology for solving equations like the quadratic equation. After the Sumerians, some of the most famous ancient works on mathematics come from Egypt in the form of the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus. The more famous Rhind Papyrus has been dated to approximately 1650 BCE but it is thought to be a copy of an even older scroll. This papyrus was essentially an early textbook for Egyptian students.

The social status of mathematical study was improving by the seventeenth century, with the University of Aberdeen creating a Mathematics Chair in 1613, followed by the Chair in Geometry set up at the University of Oxford in 1619 and the Lucasian Chair of Mathematics established by the University of Cambridge in 1662.

In the 18th and 19th centuries, the Industrial Revolution led to an enormous increase in urban populations. Basic numeracy skills, such as the ability to tell the time, count money and carry out simple arithmetic, became essential in this new urban lifestyle. Within the new public education systems, mathematics became a central part of the curriculum from an early age.

By the twentieth century, mathematics was part of the core curriculum in all developed countries.

During the twentieth century, mathematics education was established as an independent field of research.

In the 20th century, the cultural impact of the "electronic age" (McLuhan) was also taken up by educational theory and the teaching of mathematics. While the previous approach focused on "working with specialized 'problems' in arithmetic", the emerging structural approach to knowledge had "small children meditating about number theory and 'sets'."

Objectives

Boy doing sums, Guinea-Bissau, 1974.

At different times and in different cultures and countries, mathematics education has attempted to achieve a variety of different objectives.

Methods

The method or methods used in any particular context are largely determined by the objectives that the relevant educational system is trying to achieve. Methods of teaching mathematics include the following:

Games can motivate students to improve skills that are usually learned by rote. In "Number Bingo," players roll 3 dice, then perform basic mathematical operations on those numbers to get a new number, which they cover on the board, trying to cover 4 squares in a row. This game was played at a "Discovery Day" organized by Big Brother Mouse in Laos.
  • Computer-based math: an approach based around the use of mathematical software as the primary tool of computation.
  • Computer-based mathematics education: the use of computers to teach mathematics. Mobile applications have also been developed to help students learn mathematics.
  • Conventional approach: the gradual and systematic guiding through the hierarchy of mathematical notions, ideas and techniques. Starts with arithmetic and is followed by Euclidean geometry and elementary algebra taught concurrently. Requires the instructor to be well informed about elementary mathematics since didactic and curriculum decisions are often dictated by the logic of the subject rather than pedagogical considerations. Other methods emerge by emphasizing some aspects of this approach.
  • Discovery math: a constructivist method of teaching (discovery learning) mathematics which centres around problem-based or inquiry-based learning, with the use of open-ended questions and manipulative tools. This type of mathematics education was implemented in various parts of Canada beginning in 2005. Discovery-based mathematics is at the forefront of the Canadian Math Wars debate with many criticizing its effectiveness due to declining math scores, in comparison to traditional teaching models that value direct instruction, rote learning, and memorization.
  • Exercises: the reinforcement of mathematical skills by completing large numbers of exercises of a similar type, such as adding vulgar fractions or solving quadratic equations.
  • Historical method: teaching the development of mathematics within a historical, social and cultural context. Provides more human interest than the conventional approach.
  • Mastery: an approach in which most students are expected to achieve a high level of competence before progressing.
  • New Math: a method of teaching mathematics which focuses on abstract concepts such as set theory, functions and bases other than ten. Adopted in the US as a response to the challenge of early Soviet technical superiority in space, it began to be challenged in the late 1960s. One of the most influential critiques of the New Math was Morris Kline's 1973 book Why Johnny Can't Add. The New Math method was the topic of one of Tom Lehrer's most popular parody songs, with his introductory remarks to the song: "...in the new approach, as you know, the important thing is to understand what you're doing, rather than to get the right answer."
  • Problem solving: the cultivation of mathematical ingenuity, creativity and heuristic thinking by setting students open-ended, unusual, and sometimes unsolved problems. The problems can range from simple word problems to problems from international mathematics competitions such as the International Mathematical Olympiad. Problem-solving is used as a means to build new mathematical knowledge, typically by building on students' prior understandings.
  • Recreational mathematics: Mathematical problems that are fun can motivate students to learn mathematics and can increase enjoyment of mathematics.
  • Standards-based mathematics: a vision for pre-college mathematics education in the US and Canada, focused on deepening student understanding of mathematical ideas and procedures, and formalized by the National Council of Teachers of Mathematics which created the Principles and Standards for School Mathematics.
  • Relational approach: Uses class topics to solve everyday problems and relates the topic to current events. This approach focuses on the many uses of mathematics and helps students understand why they need to know it as well as helping them to apply mathematics to real-world situations outside of the classroom.
  • Rote learning: the teaching of mathematical results, definitions and concepts by repetition and memorisation typically without meaning or supported by mathematical reasoning. A derisory term is drill and kill. In traditional education, rote learning is used to teach multiplication tables, definitions, formulas, and other aspects of mathematics.
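
The "Number Bingo" game described in the caption above can be sketched as a short search over dice combinations; the precise rules here (left-to-right parenthesisation, positive whole-number results only) are assumptions for illustration:

```python
from itertools import permutations, product

def reachable(dice):
    """Numbers a player could cover from one roll of three dice, combining
    them with the four basic operations (assumed rules, one grouping)."""
    ops = (
        lambda a, b: a + b,
        lambda a, b: a - b,
        lambda a, b: a * b,
        lambda a, b: a / b,  # dice are 1-6, so no division by zero
    )
    results = set()
    for a, b, c in permutations(dice):
        for f, g in product(ops, repeat=2):
            y = g(f(a, b), c)
            if y > 0 and y == int(y):  # keep positive whole numbers only
                results.add(int(y))
    return results

print(sorted(reachable((2, 3, 5))))
```

For a roll of 2, 3, 5 this includes, for example, 10 (2 + 3 + 5), 25 ((2 + 3) × 5) and 30 (2 × 3 × 5), showing how a single roll gives a player several squares to aim for.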

Content and age levels

Different levels of mathematics are taught at different ages and in somewhat different sequences in different countries. Sometimes a class may be taught at an earlier age than typical as a special or honors class.

Elementary mathematics in most countries is taught similarly, though there are differences. Most countries tend to cover fewer topics in greater depth than in the United States. During the primary school years, children learn about whole numbers and arithmetic, including addition, subtraction, multiplication, and division. Comparisons and measurement are taught, in both numeric and pictorial form, as well as fractions and proportionality, patterns, and various topics related to geometry.

At high school level, in most of the U.S., algebra, geometry and analysis (pre-calculus and calculus) are taught as separate courses in different years. Mathematics in most other countries (and in a few U.S. states) is integrated, with topics from all branches of mathematics studied every year. Students in many countries choose an option or pre-defined course of study rather than choosing courses à la carte as in the United States. Students in science-oriented curricula typically study differential calculus and trigonometry at age 16–17 and integral calculus, complex numbers, analytic geometry, exponential and logarithmic functions, and infinite series in their final year of secondary school. Probability and statistics may be taught in secondary education classes. In some countries, these topics are available as "advanced" or "additional" mathematics.

At college and university level, science and engineering students will be required to take multivariable calculus, differential equations, and linear algebra; at several US colleges, the minor or AS in mathematics substantively comprises these courses. Mathematics majors continue to study various other areas within pure mathematics, and often in applied mathematics, with the requirement of specified advanced courses in analysis and modern algebra. Applied mathematics may be taken as a major subject in its own right, while specific topics are taught within other courses: for example, civil engineers may be required to study fluid mechanics, and "math for computer science" might include graph theory, permutation, probability, and formal mathematical proofs. Pure and applied math degrees often include modules in probability theory or mathematical statistics, while a course in numerical methods is a common requirement for applied math. (Theoretical) physics is mathematics intensive, often overlapping substantively with the pure or applied math degree. ("Business mathematics" is usually limited to introductory calculus and, sometimes, matrix calculations. Economics programs additionally cover optimization, often differential equations and linear algebra, and sometimes analysis.)

Standards

Throughout most of history, standards for mathematics education were set locally, by individual schools or teachers, depending on the levels of achievement that were relevant to, realistic for, and considered socially appropriate for their pupils.

In modern times, there has been a move towards regional or national standards, usually under the umbrella of a wider standard school curriculum. In England, for example, standards for mathematics education are set as part of the National Curriculum for England, while Scotland maintains its own educational system. Many other countries have centralized ministries which set national standards or curricula, and sometimes even textbooks.

Ma (2000) summarised the research of others who found, based on nationwide data, that students with higher scores on standardised mathematics tests had taken more mathematics courses in high school. This led some states to require three years of mathematics instead of two. But because this requirement was often met by taking another lower-level mathematics course, the additional courses had a “diluted” effect in raising achievement levels.

In North America, the National Council of Teachers of Mathematics (NCTM) published the Principles and Standards for School Mathematics in 2000 for the US and Canada, which boosted the trend towards reform mathematics. In 2006, the NCTM released Curriculum Focal Points, which recommend the most important mathematical topics for each grade level through grade 8. However, these standards were guidelines to implement as American states and Canadian provinces chose. In 2010, the National Governors Association Center for Best Practices and the Council of Chief State School Officers published the Common Core State Standards for US states, which were subsequently adopted by most states. Adoption of the Common Core State Standards in mathematics is at the discretion of each state, and is not mandated by the federal government. "States routinely review their academic standards and may choose to change or add onto the standards to best meet the needs of their students." The NCTM has state affiliates that have different education standards at the state level. For example, Missouri has the Missouri Council of Teachers of Mathematics (MCTM) which has its pillars and standards of education listed on its website. The MCTM also offers membership opportunities to teachers and future teachers so they can stay up to date on the changes in math educational standards.

The Programme for International Student Assessment (PISA), created by the Organisation for Economic Co-operation and Development (OECD), is a global program studying the reading, science and mathematics abilities of 15-year-old students. The first assessment was conducted in the year 2000 with 43 countries participating. PISA has repeated this assessment every three years to provide comparable data, helping to guide global education to better prepare youth for future economies. There have been many ramifications following the results of triennial PISA assessments due to implicit and explicit responses of stakeholders, which have led to education reform and policy change.

Research

"Robust, useful theories of classroom teaching do not yet exist". However, there are useful theories on how children learn mathematics and much research has been conducted in recent decades to explore how these theories can be applied to teaching. The following results are examples of some of the current findings in the field of mathematics education:

Important results
One of the strongest results in recent research is that the most important feature of effective teaching is giving students "opportunity to learn". Teachers can set expectations, time, kinds of tasks, questions, acceptable answers, and type of discussions that will influence students' opportunity to learn. This must involve both skill efficiency and conceptual understanding.
Conceptual understanding
Two of the most important features of teaching in the promotion of conceptual understanding are attending explicitly to concepts and allowing students to struggle with important mathematics. Both of these features have been confirmed through a wide variety of studies. Explicit attention to concepts involves making connections between facts, procedures and ideas. (This is often seen as one of the strong points in mathematics teaching in East Asian countries, where teachers typically devote about half of their time to making connections. At the other extreme is the U.S.A., where essentially no connections are made in school classrooms.) These connections can be made through explanation of the meaning of a procedure, questions comparing strategies and solutions of problems, noticing how one problem is a special case of another, reminding students of the main point, discussing how lessons connect, and so on.
Deliberate, productive struggle with mathematical ideas refers to the fact that when students exert effort with important mathematical ideas, even if this struggle initially involves confusion and errors, the result is greater learning. This is true whether the struggle is due to challenging, well-implemented teaching or due to faulty teaching that students must struggle to make sense of.
Formative assessment
Formative assessment is both the best and cheapest way to boost student achievement, student engagement and teacher professional satisfaction. Results surpass those of reducing class size or increasing teachers' content knowledge. Effective assessment is based on clarifying what students should know, creating appropriate activities to obtain the evidence needed, giving good feedback, encouraging students to take control of their learning and letting students be resources for one another.
Homework
Homework that leads students to practice past lessons or prepare for future lessons is more effective than homework that goes over the day's lesson. Students benefit from feedback. Students with learning disabilities or low motivation may profit from rewards. For younger children, homework helps simple skills, but not broader measures of achievement.
Students with difficulties
Students with genuine difficulties (unrelated to motivation or past instruction) struggle with basic facts, answer impulsively, struggle with mental representations, have poor number sense and have poor short-term memory. Techniques that have been found productive for helping such students include peer-assisted learning, explicit teaching with visual aids, instruction informed by formative assessment and encouraging students to think aloud.
Algebraic reasoning
Elementary school children need to spend a long time learning to express algebraic properties without symbols before learning algebraic notation. When learning symbols, many students believe letters always represent unknowns and struggle with the concept of variable. They prefer arithmetic reasoning to algebraic equations for solving word problems. It takes time to move from arithmetic to algebraic generalizations to describe patterns. Students often have trouble with the minus sign and understand the equals sign to mean "the answer is....".

Methodology

As with other educational research (and the social sciences in general), mathematics education research depends on both quantitative and qualitative studies. Quantitative research includes studies that use inferential statistics to answer specific questions, such as whether a certain teaching method gives significantly better results than the status quo. The best quantitative studies involve randomized trials where students or classes are randomly assigned different methods to test their effects. They depend on large samples to obtain statistically significant results.

Qualitative research, such as case studies, action research, discourse analysis, and clinical interviews, depends on small but focused samples in an attempt to understand student learning and to look at how and why a given method gives the results it does. Such studies cannot conclusively establish that one method is better than another, as randomized trials can, but unless it is understood why treatment X is better than treatment Y, application of results of quantitative studies will often lead to "lethal mutations" of the finding in actual classrooms. Exploratory qualitative research is also useful for suggesting new hypotheses, which can eventually be tested by randomized experiments. Both qualitative and quantitative studies, therefore, are considered essential in education—just as in the other social sciences. Many studies are "mixed", simultaneously combining aspects of both quantitative and qualitative research, as appropriate.

Randomized trials

There has been some controversy over the relative strengths of different types of research. Because randomized trials provide clear, objective evidence on “what works”, policymakers often consider only those studies. Some scholars have pushed for more random experiments in which teaching methods are randomly assigned to classes. In other disciplines concerned with human subjects, like biomedicine, psychology, and policy evaluation, controlled, randomized experiments remain the preferred method of evaluating treatments. Educational statisticians and some mathematics educators have been working to increase the use of randomized experiments to evaluate teaching methods. On the other hand, many scholars in educational schools have argued against increasing the number of randomized experiments, often because of philosophical objections, such as the ethical difficulty of randomly assigning students to various treatments when the effects of such treatments are not yet known to be effective, or the difficulty of assuring rigid control of the independent variable in fluid, real school settings.

In the United States, the National Mathematics Advisory Panel (NMAP) published a report in 2008 based on studies, some of which used randomized assignment of treatments to experimental units, such as classrooms or students. The NMAP report's preference for randomized experiments received criticism from some scholars. In 2010, the What Works Clearinghouse (essentially the research arm for the Department of Education) responded to ongoing controversy by extending its research base to include non-experimental studies, including regression discontinuity designs and single-case studies.

Organizations

Classical radicalism

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cla...