
Thursday, January 31, 2019

Pollution (updated)

From Wikipedia, the free encyclopedia

Thermal oxidizers purify industrial air flows.
 
The litter problem on the coast of Guyana, 2010
 
Pollution is the introduction of contaminants into the natural environment that cause adverse change. Pollution can take the form of chemical substances or energy, such as noise, heat or light. Pollutants, the components of pollution, can be either foreign substances/energies or naturally occurring contaminants. Pollution is often classed as point source or nonpoint source pollution. In 2015, pollution killed nine million people worldwide.

History

Air pollution has always accompanied civilizations. Pollution began in prehistoric times, when humans created the first fires. According to a 1983 article in the journal Science, "soot found on ceilings of prehistoric caves provides ample evidence of the high levels of pollution that was associated with inadequate ventilation of open fires." Metal forging appears to be a key turning point in the creation of significant air pollution levels outside the home. Core samples of glaciers in Greenland indicate increases in pollution associated with Greek, Roman, and Chinese metal production.

Urban pollution

Air pollution in the US, 1973

The burning of coal and wood, and the presence of many horses in concentrated areas, made cities the primary sources of pollution. The Industrial Revolution brought an infusion of untreated chemicals and wastes into local streams that served as the water supply. King Edward I of England banned the burning of sea-coal by proclamation in London in 1272, after its smoke became a problem; the fuel was so common in England that it acquired this early name because it could be carted away from some shores by wheelbarrow.

It was the Industrial Revolution that gave birth to environmental pollution as we know it today. London also recorded one of the earlier extreme cases of water quality problems with the Great Stink on the Thames of 1858, which led to construction of the London sewerage system soon afterward. Pollution issues escalated as population growth far exceeded the capacity of neighborhoods to handle their waste. Reformers began to demand sewer systems and clean water.

In 1870, the sanitary conditions in Berlin were among the worst in Europe. August Bebel recalled conditions before a modern sewer system was built in the late 1870s:
Waste-water from the houses collected in the gutters running alongside the curbs and emitted a truly fearsome smell. There were no public toilets in the streets or squares. Visitors, especially women, often became desperate when nature called. In the public buildings the sanitary facilities were unbelievably primitive....As a metropolis, Berlin did not emerge from a state of barbarism into civilization until after 1870.
The primitive conditions were intolerable for a world national capital, and the Imperial German government brought in its scientists, engineers, and urban planners to not only solve the deficiencies, but to forge Berlin as the world's model city. A British expert in 1906 concluded that Berlin represented "the most complete application of science, order and method of public life," adding "it is a marvel of civic administration, the most modern and most perfectly organized city that there is."

The emergence of great factories and the consumption of immense quantities of coal gave rise to unprecedented air pollution, and the large volume of industrial chemical discharges added to the growing load of untreated human waste. Chicago and Cincinnati were the first two American cities to enact laws ensuring cleaner air, in 1881. Pollution became a major issue in the United States in the early twentieth century, as progressive reformers took issue with air pollution caused by coal burning, water pollution caused by bad sanitation, and street pollution caused by the three million horses that worked in American cities in 1900, generating large quantities of urine and manure. As historian Martin Melosi notes, the generation that first saw automobiles replacing horses saw cars as "miracles of cleanliness." By the 1940s, however, automobile-caused smog was a major issue in Los Angeles.

Other cities followed around the country until early in the 20th century, when the short-lived Office of Air Pollution was created under the Department of the Interior. Extreme smog events experienced by the cities of Los Angeles and Donora, Pennsylvania in the late 1940s served as another public reminder.

Air pollution would continue to be a problem in England, especially later during the Industrial Revolution and extending into the recent past. Awareness of atmospheric pollution spread widely after World War II, with fears triggered by reports of radioactive fallout from atomic warfare and testing. Then a non-nuclear event, the Great Smog of 1952 in London, killed at least 4,000 people. This prompted some of the first major modern environmental legislation: the Clean Air Act of 1956.

Pollution began to draw major public attention in the United States between the mid-1950s and early 1970s, when Congress passed the Noise Control Act, the Clean Air Act, the Clean Water Act, and the National Environmental Policy Act.

Smog Pollution in Taiwan
 
Severe incidents of pollution helped increase consciousness. PCB dumping in the Hudson River resulted in a ban by the EPA on consumption of its fish in 1974. National news stories in the late 1970s – especially the long-term dioxin contamination at Love Canal starting in 1947 and uncontrolled dumping in Valley of the Drums – led to the Superfund legislation of 1980. The pollution of industrial land gave rise to the name brownfield, a term now common in city planning.

The development of nuclear science introduced radioactive contamination, which can remain lethally radioactive for hundreds of thousands of years. Lake Karachay, named by the Worldwatch Institute as the "most polluted spot" on Earth, served as a disposal site for the Soviet Union throughout the 1950s and 1960s. Chelyabinsk, Russia, is considered the "most polluted place on the planet."

Nuclear weapons continued to be tested in the Cold War, especially in the earlier stages of their development. The toll on the worst-affected populations, and the growth since then in understanding of the critical threat radioactivity poses to human health, have also been a prohibitive complication associated with nuclear power. Though extreme care is practiced in that industry, the potential for disaster suggested by incidents such as those at Three Mile Island and Chernobyl poses a lingering specter of public mistrust. Worldwide publicity on those disasters has been intense. Widespread support for test ban treaties has ended almost all nuclear testing in the atmosphere.

International catastrophes such as the wreck of the Amoco Cadiz oil tanker off the coast of Brittany in 1978 and the Bhopal disaster in 1984 have demonstrated the universality of such events and the scale at which efforts to address them must engage. The borderless nature of the atmosphere and oceans inevitably resulted in the implication of pollution on a planetary level with the issue of global warming. Most recently the term persistent organic pollutant (POP) has come to describe a group of chemicals including PBDEs and PFCs, among others. Though their effects remain somewhat less well understood owing to a lack of experimental data, they have been detected in various ecological habitats far removed from industrial activity, such as the Arctic, demonstrating diffusion and bioaccumulation after only a relatively brief period of widespread use.

Plastic Pollution in Ghana, 2018
 
A much more recently discovered problem is the Great Pacific Garbage Patch, a huge concentration of plastics, chemical sludge and other debris which has been collected into a large area of the Pacific Ocean by the North Pacific Gyre. This is a less well known pollution problem than the others described above, but nonetheless has multiple and serious consequences such as increasing wildlife mortality, the spread of invasive species and human ingestion of toxic chemicals. Organizations such as 5 Gyres have researched the pollution and, along with artists like Marina DeBris, are working toward publicizing the issue. 

Pollution introduced by light at night is becoming a global problem, most severe in urban centers but also contaminating large territories far from towns.

Growing evidence of local and global pollution and an increasingly informed public over time have given rise to environmentalism and the environmental movement, which generally seek to limit human impact on the environment.

Forms of pollution

The Lachine Canal in Montreal, Quebec, Canada.
 
Blue drain and yellow fish symbol used by the UK Environment Agency to raise awareness of the ecological impacts of contaminating surface drainage.
 
The major forms of pollution, each with its own particular contaminants, include air pollution, water pollution, soil contamination, noise pollution, light pollution, plastic pollution, radioactive contamination, and thermal pollution.

Pollutants

A pollutant is a waste material that pollutes air, water, or soil. Three factors determine the severity of a pollutant: its chemical nature, its concentration, and its persistence.

Cost of pollution

Pollution has a cost. Manufacturing activities that cause air pollution impose health and clean-up costs on the whole of society; such an activity is an example of a negative externality in production, which occurs "when a firm's production reduces the well-being of others who are not compensated by the firm." For example, a laundry firm located near a polluting steel manufacturing firm incurs increased costs because of the dirt and smoke the steel firm produces. If external costs such as those created by pollution exist, the manufacturer will choose to produce more of the product than it would if it were required to pay all associated environmental costs. Because responsibility or consequence for self-directed action lies partly outside the self, an element of externalization is involved. The converse also holds: if there are external benefits, such as in public safety, less of the good may be produced than would be the case if the producer were paid for the external benefits to others; for instance, the neighbors of an individual who chooses to fire-proof his home benefit from a reduced risk of fire spreading to their own homes. Goods and services that involve negative externalities in production, such as those that generate pollution, therefore tend to be over-produced and underpriced, since the externality is not priced into the market.

Pollution can also create costs for the firms producing the pollution. Sometimes firms choose, or are forced by regulation, to reduce the amount of pollution they produce. The associated costs of doing this are called abatement costs, or marginal abatement costs when measured per additional unit abated. In 2005, pollution abatement capital expenditures and operating costs in the US amounted to nearly $27 billion.
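The distinction between total and marginal abatement cost can be made concrete with a small sketch. The figures below are invented for illustration, not drawn from any study; the point is only that the marginal cost is the difference between successive totals, and that it typically rises as abatement deepens.

```python
# Hypothetical illustration of marginal abatement cost (MAC).
# All cost figures are invented for this sketch.

def marginal_abatement_costs(total_costs):
    """Given the total abatement cost at each cumulative unit of pollution
    removed, return the cost of each additional unit removed."""
    return [later - earlier for earlier, later in zip(total_costs, total_costs[1:])]

# Total cost of abating 0, 1, 2, 3, and 4 units of pollution.
# Totals rise ever more steeply: the cheapest reductions are made first.
totals = [0, 10, 25, 45, 75]

print(marginal_abatement_costs(totals))  # [10, 15, 20, 30]
```

Under this assumed schedule, a regulator weighing whether to require a fourth unit of abatement would compare its marginal cost of 30 against the marginal damage that unit of pollution causes.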

Socially optimal level of pollution

Society derives some indirect utility from pollution, otherwise there would be no incentive to pollute. This utility comes from the consumption of goods and services that create pollution. Therefore, it is important that policymakers attempt to balance these indirect benefits with the costs of pollution in order to achieve an efficient outcome.

A visual comparison of the free market and socially optimal outcomes.
 
It is possible to use environmental economics to determine which level of pollution is deemed the social optimum. For economists, pollution is an "external cost and occurs only when one or more individuals suffer a loss of welfare"; however, there exists a socially optimal level of pollution at which welfare is maximized. This is because consumers derive utility from the good or service manufactured, which will outweigh the social cost of pollution until a certain point. At this point the damage of one extra unit of pollution to society, the marginal cost of pollution, is exactly equal to the marginal benefit of consuming one more unit of the good or service.

In markets with pollution, or other negative externalities in production, the free market equilibrium will not account for the costs of pollution on society. If the social costs of pollution are higher than the private costs incurred by the firm, then the true supply curve will be higher. The point at which the social marginal cost and market demand intersect gives the socially optimal level of pollution. At this point, the quantity will be lower and the price will be higher in comparison to the free market equilibrium. Therefore, the free market outcome could be considered a market failure because it “does not maximize efficiency”.
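A minimal numeric sketch of this model, using hypothetical linear demand and supply curves (all coefficients, and the per-unit external cost, are assumptions chosen for readability), shows the quantity falling and the price rising once the external cost is added to the supply curve:

```python
# Free-market vs. socially optimal equilibrium under a production externality.
# Demand: P = 100 - Q; private supply (marginal cost): P = 20 + Q.
# The curves and the per-unit external cost are assumed for this sketch.

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve demand P = a - b*Q against supply P = c + d*Q for (Q, P)."""
    q = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    p = demand_intercept - demand_slope * q
    return q, p

external_cost = 20.0  # assumed per-unit damage from pollution

# Free market: firms see only their private marginal cost.
q_free, p_free = equilibrium(100, 1, 20, 1)

# Social optimum: the supply curve shifts up by the external cost.
q_opt, p_opt = equilibrium(100, 1, 20 + external_cost, 1)

print(q_free, p_free)  # 40.0 60.0  (over-produced, underpriced)
print(q_opt, p_opt)    # 30.0 70.0  (lower quantity, higher price)
```

The gap between the two equilibria is what instruments such as a carbon tax or cap-and-trade aim to close, by making the firm face the external cost directly.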

This model can be used as a basis to evaluate different methods of internalizing the externality. Some examples include tariffs, a carbon tax and cap and trade systems.

Sources and causes

Air pollution comes from both natural and human-made (anthropogenic) sources. However, globally human-made pollutants from combustion, construction, mining, agriculture and warfare are increasingly significant in the air pollution equation.

Motor vehicle emissions are one of the leading causes of air pollution. China, the United States, Russia, India, Mexico, and Japan are the world leaders in air pollution emissions. Principal stationary pollution sources include chemical plants, coal-fired power plants, oil refineries, petrochemical plants, nuclear waste disposal activity, incinerators, large livestock farms (dairy cows, pigs, poultry, etc.), PVC factories, metals production factories, plastics factories, and other heavy industry. Agricultural air pollution comes from contemporary practices that include clear felling and burning of natural vegetation as well as spraying of pesticides and herbicides.

About 400 million metric tons of hazardous wastes are generated each year. The United States alone produces about 250 million metric tons. Americans constitute less than 5% of the world's population, but produce roughly 25% of the world's CO2 and generate approximately 30% of the world's waste. In 2007, China overtook the United States as the world's biggest producer of CO2, while still ranking far behind on per capita pollution (78th among the world's nations).

An industrial area, with a power plant, south of Yangzhou's downtown, China
 
In February 2007, a report by the Intergovernmental Panel on Climate Change (IPCC), representing the work of 2,500 scientists, economists, and policymakers from more than 120 countries, said that humans have been the primary cause of global warming since 1950. Humans have ways to cut greenhouse gas emissions and avoid the consequences of global warming, the report concluded. But to limit climate change, the transition away from fossil fuels like coal and oil needs to occur within decades, according to the IPCC's final report that year.

Some of the more common soil contaminants are chlorinated hydrocarbons (CFH), heavy metals (such as chromium, cadmium – found in rechargeable batteries – and lead – found in lead paint, aviation fuel and, in some countries, still in gasoline), MTBE, zinc, arsenic and benzene. In 2001 a series of press reports, culminating in a book called Fateful Harvest, unveiled a widespread practice of recycling industrial byproducts into fertilizer, resulting in the contamination of soil with various metals. Ordinary municipal landfills are the source of many chemical substances entering the soil environment (and often groundwater), emanating from the wide variety of refuse accepted, especially substances illegally discarded there, or from pre-1970 landfills that may have been subject to little control in the U.S. or EU. There have also been some unusual releases of polychlorinated dibenzodioxins, commonly called dioxins for simplicity, such as TCDD.

Pollution can also be the consequence of a natural disaster. For example, hurricanes often involve water contamination from sewage, and petrochemical spills from ruptured boats or automobiles. Larger scale and environmental damage is not uncommon when coastal oil rigs or refineries are involved. Some sources of pollution, such as nuclear power plants or oil tankers, can produce widespread and potentially hazardous releases when accidents occur.

In the case of noise pollution the dominant source class is the motor vehicle, producing about ninety percent of all unwanted noise worldwide.

Effects

Human health

Overview of main health effects on humans from some common types of pollution.
 
Adverse air quality can kill many organisms, including humans. Ozone pollution can cause respiratory disease, cardiovascular disease, throat inflammation, chest pain, and congestion. Water pollution causes approximately 14,000 deaths per day, mostly due to contamination of drinking water by untreated sewage in developing countries. An estimated 500 million Indians have no access to a proper toilet. Over ten million people in India fell ill with waterborne illnesses in 2013, and 1,535 people died, most of them children. Nearly 500 million Chinese lack access to safe drinking water. A 2010 analysis estimated that 1.2 million people died prematurely each year in China because of air pollution; the high smog levels China has long faced can damage people's bodies and generate various diseases. The WHO estimated in 2007 that air pollution causes half a million deaths per year in India. Studies have estimated that the number of people killed annually in the United States could be over 50,000.

Oil spills can cause skin irritations and rashes. Noise pollution induces hearing loss, high blood pressure, stress, and sleep disturbance. Mercury has been linked to developmental deficits in children and neurologic symptoms. Older people are disproportionately exposed to diseases induced by air pollution. Those with heart or lung disorders are at additional risk. Children and infants are also at serious risk. Lead and other heavy metals have been shown to cause neurological problems. Chemical and radioactive substances can cause cancer as well as birth defects.

An October 2017 study by the Lancet Commission on Pollution and Health found that global pollution, specifically toxic air, water, soils and workplaces, kills nine million people annually, triple the number of deaths caused by AIDS, tuberculosis and malaria combined, and 15 times the number caused by wars and other forms of human violence. The study concluded that "pollution is one of the great existential challenges of the Anthropocene era. Pollution endangers the stability of the Earth's support systems and threatens the continuing survival of human societies."

Environment

Pollution has been found to be present widely in the environment, with a number of resulting effects on ecosystems and organisms.

Environmental health information

The Toxicology and Environmental Health Information Program (TEHIP) at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP also is responsible for the Toxicology Data Network (TOXNET) an integrated system of toxicology and environmental health databases that are available free of charge on the web. 

TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs.

School outcomes

A 2019 paper linked pollution to adverse school outcomes for children.

Worker productivity

A number of studies show that pollution has an adverse effect on the productivity of both indoor and outdoor workers.

Regulation and monitoring

To protect the environment from the adverse effects of pollution, many nations worldwide have enacted legislation to regulate various types of pollution as well as to mitigate the adverse effects of pollution.

Pollution control

A litter trap catches floating waste in the Yarra River, east-central Victoria, Australia
 
An air pollution control system known as a thermal oxidizer decomposes hazardous gases from industrial air streams at a factory in the United States.
 
Gas nozzle with vapor recovery
 
A Mobile Pollution Check Vehicle in India.
 
Pollution control is a term used in environmental management. It means the control of emissions and effluents into air, water or soil. Without pollution control, the waste products from overconsumption, heating, agriculture, mining, manufacturing, transportation and other human activities, whether they accumulate or disperse, will degrade the environment. In the hierarchy of controls, pollution prevention and waste minimization are more desirable than pollution control. In the field of land development, low impact development is a similar technique for the prevention of urban runoff.

Practices

Pollution control devices

Perspectives

The earliest precursor of pollution generated by life forms would have been a natural function of their existence. The attendant consequences on viability and population levels fell within the sphere of natural selection. These would have included the demise of a population locally or, ultimately, species extinction. Processes that were untenable would have resulted in a new balance brought about by changes and adaptations. At the extremes, for any form of life, consideration of pollution is superseded by that of survival.

For humankind, the factor of technology is a distinguishing and critical consideration, both as an enabler and an additional source of byproducts. Short of survival, human concerns include the range from quality of life to health hazards. Since science holds experimental demonstration to be definitive, modern treatment of toxicity or environmental harm involves defining a level at which an effect is observable. Common examples of fields where practical measurement is crucial include automobile emissions control, industrial exposure (e.g. Occupational Safety and Health Administration (OSHA) PELs), toxicology (e.g. LD50), and medicine (e.g. medication and radiation doses). 

"The solution to pollution is dilution" is a dictum that summarizes a traditional approach to pollution management, whereby sufficiently diluted pollution is not harmful. The approach remains well suited to some modern, locally scoped applications, such as laboratory safety procedure and hazardous material release emergency management. But it assumes that the diluent is in virtually unlimited supply for the application, or that the resulting dilutions are acceptable in all cases.

Such simple treatment for environmental pollution on a wider scale might have had greater merit in earlier centuries when physical survival was often the highest imperative, human population and densities were lower, technologies were simpler and their byproducts more benign. But these are often no longer the case. Furthermore, advances have enabled measurement of concentrations not possible before. The use of statistical methods in evaluating outcomes has given currency to the principle of probable harm in cases where assessment is warranted but resorting to deterministic models is impractical or infeasible. In addition, consideration of the environment beyond direct impact on human beings has gained prominence. 

Yet in the absence of a superseding principle, this older approach predominates in practices throughout the world. It is the basis by which concentrations of effluent are gauged for legal release, exceeding which penalties are assessed or restrictions applied. One such superseding principle is contained in modern hazardous waste laws in developed countries, where diluting hazardous waste to render it non-hazardous is usually itself a regulated treatment process. Migrating from dilution to elimination of pollution can, in many cases, be confronted by challenging economic and technological barriers.

Greenhouse gases and global warming

Historical and projected CO2 emissions by country (as of 2005).
Source: Energy Information Administration.
 
Carbon dioxide, while vital for photosynthesis, is sometimes referred to as pollution, because raised levels of the gas in the atmosphere are affecting the Earth's climate. Disruption of the environment can also highlight the connection between areas of pollution that would normally be classified separately, such as those of water and air. Recent studies have investigated the potential for long-term rising levels of atmospheric carbon dioxide to cause slight but critical increases in the acidity of ocean waters, and the possible effects of this on marine ecosystems.

Most polluting industries

Pure Earth, an international not-for-profit organization dedicated to eliminating life-threatening pollution in the developing world, issues an annual list of some of the world's most polluting industries.

World’s worst polluted places

Pure Earth also issues an annual list of some of the world's worst polluted places.

Medical error

From Wikipedia, the free encyclopedia

A medical error is a preventable adverse effect of care, whether or not it is evident or harmful to the patient. This might include an inaccurate or incomplete diagnosis or treatment of a disease, injury, syndrome, behavior, infection, or other ailment. Globally, it is estimated that 142,000 people died in 2013 from adverse effects of medical treatment; this is an increase from 94,000 in 1990. However, a 2016 study of the number of deaths that were a result of medical error in the U.S. placed the yearly death rate in the U.S. alone at 251,454 deaths, which suggests that the 2013 global estimation may not be accurate.

Definitions

The word error in medicine is used as a label for nearly all of the clinical incidents that harm patients. Medical errors are often described as human errors in healthcare. Whether the label is a medical error or human error, one definition used in medicine says that it occurs when a healthcare provider chooses an inappropriate method of care, improperly executes an appropriate method of care, or reads the wrong CT scan. It has been said that the definition should be the subject of more debate. For instance, studies of hand hygiene compliance of physicians in an ICU show that compliance varied from 19% to 85%. The deaths that result from infections caught as a result of treatment providers improperly executing an appropriate method of care by not complying with known safety standards for hand hygiene are difficult to regard as innocent accidents or mistakes. At the least, they are negligence, if not dereliction, but in medicine they are lumped together under the word error with innocent accidents and treated as such.

There are many types of medical error, from minor to major, and causality is often poorly determined.
There are many taxonomies for classifying medical errors.

Impact

Globally, it is estimated that 142,000 people died in 2013 from adverse effects of medical treatment; in 1990, the number was 94,000.

A 2000 Institute of Medicine report estimated that medical errors result in between 44,000 and 98,000 preventable deaths and 1,000,000 excess injuries each year in U.S. hospitals. In the UK, a 2000 study found that an estimated 850,000 medical errors occur each year, costing over £2 billion.

Some researchers questioned the accuracy of the IOM study, criticizing the statistical handling of measurement errors in the report, significant subjectivity in determining which deaths were "avoidable" or due to medical error, and an erroneous assumption that 100% of patients would have survived if optimal care had been provided. A 2001 study in the Journal of the American Medical Association of seven Department of Veterans Affairs medical centers estimated that for roughly every 10,000 patients admitted to the select hospitals, one patient died who would have lived for three months or more in good cognitive health had "optimal" care been provided.

A 2006 follow-up to the IOM study found that medication errors are among the most common medical mistakes, harming at least 1.5 million people every year. According to the study, 400,000 preventable drug-related injuries occur each year in hospitals, 800,000 in long-term care settings, and roughly 530,000 among Medicare recipients in outpatient clinics. The report stated that these are likely to be conservative estimates. In 2000 alone, the extra medical costs incurred by preventable drug-related injuries approximated $887 million—and the study looked only at injuries sustained by Medicare recipients, a subset of clinic visitors. None of these figures take into account lost wages and productivity or other costs.

According to a 2002 Agency for Healthcare Research and Quality report, about 7,000 people were estimated to die each year from medication errors – about 16 percent more deaths than the number attributable to work-related injuries (6,000 deaths). Medical errors affect one in 10 patients worldwide. One extrapolation suggests that 180,000 people die each year partly as a result of iatrogenic injury. One in five Americans (22%) report that they or a family member have experienced a medical error of some kind.

The World Health Organization registered 14 million new cases and 8.2 million cancer-related deaths in 2012. It estimated that the number of cases could increase by 70% through 2032. As the number of cancer patients receiving treatment increases, hospitals around the world are seeking ways to improve patient safety, to emphasize traceability and raise efficiency in their cancer treatment processes.

A study released in 2016 found medical error is the third leading cause of death in the United States, after heart disease and cancer. Researchers looked at studies that analyzed the medical death rate data from 2000 to 2008 and extrapolated that over 250,000 deaths per year had stemmed from a medical error, which translates to 9.5% of all deaths annually in the US.
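The arithmetic behind that extrapolation can be sanity-checked in one line. The implied total below is an inference from the figures quoted above, not a number reported by the study itself:

```python
# If roughly 251,454 deaths per year are 9.5% of all annual US deaths,
# the implied total is about 2.6 million, which is broadly consistent
# with CDC mortality totals for that period.
medical_error_deaths = 251_454
share_of_all_deaths = 0.095
implied_total_deaths = medical_error_deaths / share_of_all_deaths
print(round(implied_total_deaths))  # about 2.65 million
```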

Difficulties in measuring frequency of errors

About 1% of hospital admissions result in an adverse event due to negligence. However, mistakes are likely much more common, as these studies identify only mistakes that led to measurable adverse events occurring soon after the errors. Independent review of doctors' treatment plans suggests that decision-making could be improved in 14% of admissions; many of the benefits would have delayed manifestations. Even this number may be an underestimate. One study suggests that adults in the United States receive only 55% of recommended care. At the same time, a second study found that 30% of care in the United States may be unnecessary. For example, if a doctor fails to order a mammogram that is past due, this mistake will not show up in the first type of study. In addition, because no adverse event occurred during the short follow-up of the study, the mistake also would not show up in the second type of study because only the principal treatment plans were critiqued. However, the mistake would be recorded in the third type of study. If a doctor recommends an unnecessary treatment or test, it may not show in any of these types of studies.

Causes of death on United States death certificates, statistically compiled by the Centers for Disease Control and Prevention (CDC), are coded using the International Classification of Diseases (ICD), which does not include codes for human and system factors.

Causes

Medical errors are associated with inexperienced physicians and nurses, new procedures, extremes of age, and complex or urgent care. Poor communication (whether in one's own language or, as may be the case for medical tourists, another language), improper documentation, illegible handwriting, spelling errors, inadequate nurse-to-patient ratios, and similarly named medications are also known to contribute to the problem. Patient actions may also contribute significantly to medical errors: falls, for example, may result from patients' own misjudgements. Human error has been implicated in nearly 80 percent of adverse events that occur in complex healthcare systems. The vast majority of medical errors, however, result from faulty systems and poorly designed processes rather than from poor practices or incompetent practitioners.

Healthcare complexity

Complicated technologies, powerful drugs, intensive care, and prolonged hospital stays can all contribute to medical errors.

System and process design

In 2000, the Institute of Medicine released "To Err Is Human," which asserted that the problem in medical errors is not bad people in health care—it is that good people are working in bad systems that need to be made safer.

Poor communication and unclear lines of authority among physicians, nurses, and other care providers are also contributing factors. Disconnected reporting systems within a hospital can result in fragmented processes in which numerous hand-offs of patients lead to a lack of coordination and errors.

Other factors include the impression that action is being taken by other groups within the institution, reliance on automated systems to prevent error, and inadequate systems to share information about errors, which hampers analysis of contributory causes and improvement strategies. Cost-cutting measures by hospitals in response to reimbursement cutbacks can compromise patient safety. In emergencies, patient care may be rendered in areas poorly suited for safe monitoring. The American Institute of Architects has identified concerns for the safe design and construction of health care facilities. Infrastructure failure is also a concern. According to the WHO, 50% of medical equipment in developing countries is only partly usable due to a lack of skilled operators or parts. As a result, diagnostic procedures or treatments cannot be performed, leading to substandard treatment.

The Joint Commission's Annual Report on Quality and Safety 2007 found that inadequate communication between healthcare providers, or between providers and the patient and family members, was the root cause of over half the serious adverse events in accredited hospitals. Other leading causes included inadequate assessment of the patient's condition, and poor leadership or training.

Competency, education, and training

Variations in healthcare provider training and experience, and failure to acknowledge the prevalence and seriousness of medical errors, also increase the risk. The so-called July effect occurs when new residents arrive at teaching hospitals; according to a study of data from 1979–2006, medication errors increase at that time.

Human factors and ergonomics

Cognitive errors commonly encountered in medicine were initially identified by psychologists Amos Tversky and Daniel Kahneman in the early 1970s. Jerome Groopman, author of How Doctors Think, calls these "cognitive pitfalls": biases that cloud our logic. For example, a practitioner may overvalue the first data encountered, skewing their thinking, or be swayed by recent or dramatic cases that come quickly to mind and color judgement. Another pitfall is where stereotypes may prejudice thinking.

Sleep deprivation has also been cited as a contributing factor in medical errors. One study found that being awake for over 24 hours caused medical interns to double or triple the number of preventable medical errors, including those that resulted in injury or death. The risk of car crash after these shifts increased by 168%, and the risk of near miss by 460%. Interns admitted falling asleep during lectures, during rounds, and even during surgeries. Night shifts are associated with worse surgeon performance during laparoscopic surgeries.

Practitioner risk factors include fatigue, depression, and burnout. Factors related to the clinical setting include diverse patients, unfamiliar settings, time pressures, and increased patient-to-nurse staffing ratios. Drug names that look alike or sound alike are also a problem.

Examples

Errors can include misdiagnosis or delayed diagnosis, administration of the wrong drug to the wrong patient or in the wrong way, giving multiple drugs that interact negatively, surgery on an incorrect site, failure to remove all surgical instruments, failure to take the correct blood type into account, or incorrect record-keeping. A further type of error is one that is not watched for by researchers, such as an RN failing to program an IV pump to give a full dose of IV antibiotics or other medication.

Errors in diagnosis

A large study reported several cases where patients were wrongly told that they were HIV-negative when the physicians erroneously ordered and interpreted HTLV (a closely related virus) testing rather than HIV testing. In the same study, more than 90% of HTLV tests were ordered erroneously. It is estimated that between 10 and 15 percent of physician diagnoses are erroneous.

Misdiagnosis of lower extremity cellulitis is estimated to occur in 30% of patients, leading to unnecessary hospitalizations in 85% and unnecessary antibiotic use in 92%. Collectively, these errors lead to between 50,000 and 130,000 unnecessary hospitalizations and between $195 million and $515 million in avoidable health care spending annually in the United States.

Misdiagnosis of psychological disorders

Female sexual desire was sometimes diagnosed as female hysteria.

Sensitivities to foods and food allergies risk being misdiagnosed as the anxiety disorder orthorexia.

Studies have found that bipolar disorder has often been misdiagnosed as major depression. Its early diagnosis necessitates that clinicians pay attention to the features of the patient's depression and also look for present or prior hypomanic or manic symptomatology.

The misdiagnosis of schizophrenia is also a common problem. There may be long delays before patients receive a correct diagnosis of this disorder.

The DSM-5 field trials included "test-retest reliability" which involved different clinicians doing independent evaluations of the same patient—a new approach to the study of diagnostic reliability.

Outpatient vs. inpatient

Misdiagnosis is the leading cause of medical error in outpatient facilities. Since the Institute of Medicine's 1999 report, "To Err Is Human," found that up to 98,000 hospital patients die from preventable medical errors in the U.S. each year, government and private sector efforts have focused on inpatient safety.

After an error has occurred

Mistakes can have a strongly negative emotional impact on the doctors who commit them.

Recognizing that mistakes are not isolated events

Some physicians recognize that adverse outcomes from errors usually do not happen because of an isolated error but actually reflect system problems. This concept is often referred to as the Swiss cheese model: there are layers of protection for clinicians and patients that prevent mistakes from occurring. Therefore, even if a doctor or nurse makes a small error (e.g. an incorrect dose of a drug written on a drug chart), it is picked up before it actually affects patient care (e.g. the pharmacist checks the drug chart and rectifies the error). Such mechanisms include practical alterations (e.g. medications that cannot be given through IV are fitted with tubing that cannot be connected to an IV line even if a clinician mistakenly tries to), systematic safety processes (e.g. all patients must have a Waterlow score assessment and falls assessment completed on admission), and training programs or continuing professional development courses.

There may be several breakdowns in processes to allow one adverse outcome. In addition, errors are more common when other demands compete for a physician's attention. However, placing too much blame on the system may not be constructive.

Placing the practice of medicine in perspective

Essayists imply that the potential to make mistakes is part of what makes being a physician rewarding, and that without this potential the rewards of medical practice would be diminished. Laurence states that "Everybody dies, you and all of your patients. All relationships end. Would you want it any other way? [...] Don't take it personally." Seder states "[...] if I left medicine, I would mourn its loss as I've mourned the passage of my poetry. On a daily basis, it is both a privilege and a joy to have the trust of patients and their families and the camaraderie of peers. There is no challenge to make your blood race like that of a difficult case, no mind game as rigorous as the challenging differential diagnosis, and though the stakes are high, so are the rewards."

Disclosing mistakes

Forgiveness, which is part of many cultural traditions, may be important in coping with medical mistakes.

To oneself

Inability to forgive oneself may create a cycle of distress and increased likelihood of a future error.

However, Wu et al. suggest "...those who coped by accepting responsibility were more likely to make constructive changes in practice, but [also] to experience more emotional distress." It may be helpful to consider the much larger number of patients who are not exposed to mistakes and are helped by medical care.

To patients

Gallagher et al. state that patients want "information about what happened, why the error happened, how the error's consequences will be mitigated, and how recurrences will be prevented." Interviews with patients and families reported in a 2003 book by Rosemary Gibson and Janardan Prasad Singh, put forward that those who have been harmed by medical errors face a "wall of silence" and "want an acknowledgement" of the harm. With honesty, "healing can begin not just for the patients and their families but also the doctors, nurses and others involved." Detailed suggestions on how to disclose are available.

A 2005 study by Wendy Levinson of the University of Toronto showed surgeons discussing medical errors used the word "error" or "mistake" in only 57 percent of disclosure conversations and offered a verbal apology only 47 percent of the time.

Patient disclosure is important in the medical error process. The current standard of practice at many hospitals is to disclose errors to patients when they occur. In the past, it was a common fear that disclosure to the patient would incite a malpractice lawsuit, and many physicians would not explain that an error had taken place, causing a lack of trust toward the healthcare community. As of 2007, 34 states had passed legislation that precludes any information from a physician's apology for a medical error from being used in malpractice court (even a full admission of fault). This encourages physicians to acknowledge and explain mistakes to patients, keeping an open line of communication.

The American Medical Association's Council on Ethical and Judicial Affairs states in its ethics code:
Situations occasionally occur in which a patient suffers significant medical complications that may have resulted from the physician's mistake or judgment. In these situations, the physician is ethically required to inform the patient of all facts necessary to ensure understanding of what has occurred. Concern regarding legal liability which might result following truthful disclosure should not affect the physician's honesty with a patient.
From the American College of Physicians Ethics Manual:
In addition, physicians should disclose to patients information about procedural or judgment errors made in the course of care if such information is material to the patient's well-being. Errors do not necessarily constitute improper, negligent, or unethical behavior, but failure to disclose them may.
However, "there appears to be a gap between physicians' attitudes and practices regarding error disclosure. Willingness to disclose errors was associated with higher training level and a variety of patient-centered attitudes, and it was not lessened by previous exposure to malpractice litigation". Hospital administrators may share these concerns.

Consequently, in the United States, many states have enacted laws excluding expressions of sympathy after accidents as proof of liability; these laws typically exclude from admissibility in court proceedings apologetic expressions of sympathy, but not fault-admitting apologies. Disclosure may actually reduce malpractice payments.

To non-physicians

In a study of physicians who reported having made a mistake, disclosing to non-physician sources of support appeared to reduce stress more than disclosing to physician colleagues. This may be due to the finding that, of the physicians in the same study, only 32% would have unconditionally offered support when presented with a hypothetical scenario of a mistake made by a colleague. It is possible that greater benefit occurs when spouses are physicians.

To other physicians

Discussing mistakes with other physicians is beneficial. However, medical providers may be less forgiving of one another. The reason is not clear, but one essayist has admonished, "Don't Take Too Much Joy in the Mistakes of Other Doctors."

To the physician's institution

Disclosure of errors, especially 'near misses', may be able to reduce subsequent errors in institutions that are capable of reviewing near misses. However, doctors report that institutions may not be supportive of the doctor.

Use of rationalization to cover up medical errors

Based on anecdotal and survey evidence, Banja states that rationalization (making excuses) is very common among the medical profession to cover up medical errors.

By presence of harm to the patient

A survey of more than 10,000 physicians in the United States found that, on the question "Are there times when it's acceptable to cover up or avoid revealing a mistake if that mistake would not cause harm to the patient?", 19% answered yes, 60% answered no and 21% answered it depends. On the question "Are there times when it is acceptable to cover up or avoid revealing a mistake if that mistake would potentially or likely harm the patient?", 2% answered yes, 95% answered no and 3% answered it depends.

Cause-specific preventive measures

Traditionally, errors are attributed to mistakes made by individuals, who may be penalized for these mistakes. The usual approach to correcting errors is to create new rules with additional checking steps in the system, aiming to prevent further errors. As an example, an error of free-flow IV administration of heparin is approached by teaching staff how to use the IV systems and to take special care in setting the IV pump. While overall errors become less likely, the checks add to workload and may in themselves be a cause of additional errors.

A newer model for improvement in medical care takes its origin from the work of W. Edwards Deming in a model of Total Quality Management. In this model, there is an attempt to identify the underlying system defect that allowed the opportunity for the error to occur. As an example, in such a system the error of free-flow IV administration of heparin is dealt with by not using IV heparin at all and substituting subcutaneous administration, obviating the entire problem. However, such an approach presupposes available research showing that subcutaneous heparin is as effective as IV administration. Thus, most systems use a combination of approaches to the problem.

In specific specialties

The field of medicine that has taken the lead in systems approaches to safety is anaesthesiology. Steps such as standardization of IV medications to 1 ml doses, national and international color-coding standards, and development of improved airway support devices have made anaesthesia care a model of systems improvement in care.

Pharmacy professionals have extensively studied the causes of errors in the prescribing, preparation, dispensing and administration of medications. As far back as the 1930s, pharmacists worked with physicians to select, from many options, the safest and most effective drugs available for use in hospitals. The process is known as the Formulary System and the list of drugs is known as the Formulary. In the 1960s, hospitals implemented unit dose packaging and unit dose drug distribution systems to reduce the risk of wrong drug and wrong dose errors in hospitalized patients; centralized sterile admixture services were shown to decrease the risks of contaminated and infected intravenous medications; and pharmacists provided drug information and clinical decision support directly to physicians to improve the safe and effective use of medications. Pharmacists are recognized experts in medication safety and have made many contributions over the last 50 years that have reduced error and improved patient care. More recently, governments have attempted to address issues like patient–pharmacist communication and consumer knowledge through measures like the Australian Government's Quality Use of Medicines policy.

Legal procedure

Standards and regulations for medical malpractice vary by country and jurisdiction within countries. Medical professionals may obtain professional liability insurance to offset the risk and costs of lawsuits based on medical malpractice.

Prevention

Medical care is frequently compared adversely to aviation; while many of the factors that lead to errors in both fields are similar, aviation's error management protocols are regarded as much more effective. Safety measures include informed consent, the availability of a second practitioner's opinion, voluntary reporting of errors, root cause analysis, reminders to improve patient medication adherence, hospital accreditation, and systems to ensure review by experienced or specialist practitioners.

A template has been developed for the design (both structure and operation) of hospital medication safety programs, particularly for acute tertiary settings, which emphasizes safety culture, infrastructure, data (error detection and analysis), communication and training. 

To prevent medication errors in the intrathecal administration of local anaesthesia in particular, there is a proposal to change the presentation and packaging of the appliances and agents used for this purpose. A spinal needle with a syringe prefilled with the local anaesthetic agent could be marketed in a single blister pack, to be peeled open and presented before the anaesthesiologist conducting the procedure.

Reporting requirements

In the United States, adverse medical event reporting systems were mandated in just over half (27) of the states as of 2014, a figure unchanged since 2007. In U.S. hospitals, error reporting is a condition of payment by Medicare. An investigation by the Office of Inspector General, Department of Health and Human Services, released January 6, 2012, found that most errors are not reported, and that even in the case of errors that are reported and investigated, changes are seldom made which would prevent them in the future. The investigation revealed that there was often a lack of knowledge regarding which events were reportable, and recommended that lists of reportable events be developed.

Misconceptions

These are the common misconceptions about adverse events, and the arguments and explanations against those misconceptions are noted in parentheses:
  • "Bad apples" or incompetent health care providers are a common cause. (Although human error is commonly an initiating event, the faulty process of delivering care invariably permits or compounds the harm, and is the focus of improvement.)
  • High risk procedures or medical specialties are responsible for most avoidable adverse events. (Although some mistakes, such as in surgery, are harder to conceal, errors occur in all levels of care. Even though complex procedures entail more risk, adverse outcomes are not usually due to error, but to the severity of the condition being treated.) However, USP has reported that medication errors during the course of a surgical procedure are three times more likely to cause harm to a patient than those occurring in other types of hospital care.
  • If a patient experiences an adverse event during the process of care, an error has occurred. (Most medical care entails some level of risk, and there can be complications or side effects, even unforeseen ones, from the underlying condition or from the treatment itself.)

Psychological evaluation

From Wikipedia, the free encyclopedia

Psychological evaluation is defined as a way of assessing an individual's behavior, personality, cognitive abilities, and several other domains. The purpose behind many modern psychological evaluations is to try to pinpoint what is happening in someone's psychological life that may be inhibiting their ability to behave or feel in more appropriate or constructive ways; it is the mental equivalent of a physical examination. Other psychological evaluations seek to better understand the individual's unique characteristics or personality to predict things like workplace performance or customer relationship management.

History

Modern psychological evaluation has been around for roughly 200 years, with roots that stem as far back as 2200 B.C. It started in China, and many psychologists throughout Europe worked to develop methods of testing into the 1900s. The first tests focused on aptitude; eventually, scientists tried to gauge mental processes in patients with brain damage, and then in children with special needs.

Ancient psychological evaluation

The earliest accounts of evaluation are seen as far back as 2200 B.C., when Chinese emperors were assessed to determine their fitness for office. These rudimentary tests were developed over time until 1370 A.D., when an understanding of classical Confucianism was introduced as a testing mechanism. As a preliminary evaluation for anyone seeking public office, candidates were required to spend one day and one night in a small space composing essays and writing poetry over assigned topics. Only the top 1% to 7% were selected for higher evaluations, which required three separate sessions of three days and three nights performing the same tasks. This process continued for one more round, until a final group, comprising less than 1% of the original candidates, emerged and became eligible for public office. The Chinese failure to validate their selection procedures, along with widespread discontent over such grueling processes, resulted in the eventual abolition of the practice by royal decree.

Modern psychological evaluation

In the 1800s, Hubert von Grashey developed a battery to determine the abilities of brain-damaged patients. This test was not practical, however, as it took over 100 hours to administer. It nevertheless influenced Wilhelm Wundt, who had the first psychological laboratory in Germany. His tests were shorter, but used similar techniques. Wundt also measured mental processes and acknowledged the fact that there are individual differences between people.

Francis Galton established the first tests in London for measuring IQ. He tested thousands of people, examining their physical characteristics as a basis for his results, and many of the records remain today. James Cattell studied with him, and eventually worked on his own with brass instruments for evaluation. His studies led to his paper "Mental Tests and Measurements", one of the most famous writings on psychological evaluation. He also coined the term "mental test" in this paper.

As the 1900s began, Alfred Binet was also studying evaluation. However, he was more interested in distinguishing children with special needs from their peers after he could not prove in his other research that magnets could cure hysteria. He did his research in France, with the help of Theodore Simon. They created a list of questions that were used to determine if children would receive regular instruction, or would participate in special education programs. Their battery was continually revised and developed, until 1911 when the Binet-Simon questionnaire was finalized for different age levels.

After Binet's death, intelligence testing was further studied by Charles Spearman. He theorized that intelligence was made up of several different subcategories, which were all interrelated. He combined all the factors together to form a general intelligence, which he abbreviated as "g". This led to William Stern's idea of an intelligence quotient. He believed that children of different ages should be compared to their peers to determine their mental age in relation to their chronological age. Lewis Terman combined the Binet-Simon questionnaire with the intelligence quotient and the result was the standard test we use today, with an average score of 100.
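Stern's quotient, as later scaled by Terman, is simply mental age divided by chronological age, times 100. A minimal sketch (the function name is illustrative, not taken from any test manual):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's intelligence quotient, scaled by 100 as Terman proposed."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))   # 120.0
# A child whose mental age matches their chronological age scores the average:
print(ratio_iq(8, 8))     # 100.0
```

Modern tests no longer compute this literal ratio; they use deviation scores normed so that the population mean remains 100.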

The large influx of non-English-speaking immigrants into the US brought about a change in psychological testing, which had relied heavily on verbal skills, for subjects who were not literate in English or had speech or hearing difficulties. In 1913, R.H. Sylvester standardized the first non-verbal psychological test, in which participants fit different shaped blocks into their respective slots on a Seguin form board. From this test, Knox developed a series of non-verbal psychological tests that he used while working at the Ellis Island immigrant station in 1914. His tests included a simple wooden puzzle as well as a digit-symbol substitution test, in which each participant saw digits paired with particular symbols, and was then shown the digits alone and had to write in the associated symbol.

When the United States moved into World War I, Robert M. Yerkes convinced the government that they should be testing all of the recruits they were receiving into the Army. The results of the tests could be used to make sure that the "mentally incompetent" and "mentally exceptional" were assigned to appropriate jobs. Yerkes and his colleagues developed the Army Alpha and Army Beta tests to use on all new recruits. These tests set a precedent for the development of psychological testing for the next several decades.

After seeing the success of the Army standardized tests, college administrators quickly picked up on the idea of group testing to decide entrance into their institutions. The College Entrance Examination Board was created to test applicants to colleges across the nation. In 1925, it replaced essay tests, which were very open to interpretation, with objective tests that were also the first to be scored by machine. These early tests evolved into modern-day College Board tests, like the Scholastic Assessment Test, Graduate Record Examination, and the Law School Admissions Test.

Formal and informal evaluation

Formal psychological evaluation consists of standardized batteries of tests and highly structured clinician-run interviews, while informal evaluation takes on a completely different tone. In informal evaluation, assessments are based on unstructured, free-flowing interviews or observations that allow both the patient and the clinician to guide the content. Both of these methods have their pros and cons. A highly unstructured interview and informal observations can provide key findings about the patient efficiently and effectively. A potential issue with an unstructured, informal approach is that the clinician may overlook certain areas of functioning or not notice them at all, or may focus too much on presenting complaints. The highly structured interview, although very precise, can cause the clinician to make the mistake of focusing on a specific answer to a specific question without considering the response in terms of a broader scope or life context, and may fail to recognize how the patient's answers all fit together.

There are many ways that the issues associated with the interview process can be mitigated, and the benefits of more formal, standardized evaluation types such as batteries and tests are many. First, they measure a large number of characteristics simultaneously, including personality, cognitive, and neuropsychological characteristics. Second, these tests provide empirically quantified information, so patient characteristics can be measured more precisely than with any kind of structured or unstructured interview. Third, all of these tests have a standardized way of being administered and scored: each patient is presented a standardized stimulus that serves as a benchmark for determining their characteristics, which removes sources of bias that could otherwise produce results harmful to the patient and create legal and ethical issues. Fourth, tests are normed: patients can be assessed not only by comparison to a "normal" individual, but also by comparison to peers who may face the same psychological issues, allowing the clinician to make a more individualized assessment. Fifth, the standardized tests in common use today are both valid and reliable; we know what specific scores mean, how reliable they are, and how the results will affect the patient.
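Norming can be made concrete with a small sketch. A raw score only becomes interpretable relative to a norm group's mean and standard deviation; the z-score and T-score shown here are common psychometric conventions, not the scoring rules of any particular instrument:

```python
def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """How many standard deviations a raw score lies from the norm group's mean."""
    return (raw - norm_mean) / norm_sd

def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """A common clinical rescaling of the z-score: mean 50, standard deviation 10."""
    return 50 + 10 * z_score(raw, norm_mean, norm_sd)

# A raw score of 30 against a norm group with mean 25 and SD 5
# lies one standard deviation above the norm group's average:
print(z_score(30, 25, 5))  # 1.0
print(t_score(30, 25, 5))  # 60.0
```

The same raw score yields different standardized scores against different norm groups, which is exactly what allows a patient to be compared either to the general population or to peers with the same presenting issues.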

Most clinicians agree that a balanced battery of tests is the most effective way of helping patients, and that they should not fall victim to blind adherence to any one particular method. A balanced battery allows a mix of formal testing processes from which the clinician can begin making an assessment, while more informal, unstructured interviews with the same patient help the clinician make more individualized evaluations and piece together what may be a very complex problem unique to that individual.

Modern uses

Psychological assessment is most often used in the psychiatric, medical, legal, educational, or psychological clinic settings. The types of assessments and the purposes for them differ among these settings. 

In the psychiatric setting, the common needs for assessment are to determine risks, whether a person should be admitted or discharged, where the patient should be held, and what therapy the patient should be receiving. Within this setting, psychologists need to be aware of their legal responsibilities and of what they can legally do in each situation.

Within a medical setting, psychological assessment is used to find possible underlying psychological disorders or emotional factors that may be associated with medical complaints, to assess for neuropsychological deficits, and in the psychological treatment of chronic pain and of chemical dependency. Greater importance has been placed on the patient's neuropsychological status as neuropsychologists become more concerned with the functioning of the brain.

Psychological assessment also has a role in the legal setting. Psychologists might be asked to assess the reliability of a witness, the quality of the testimony a witness gives, the competency of an accused person, or to determine what might have happened during a crime. They also may help support a plea of insanity or to discount a plea. Judges may use the psychologist's report to change the sentence of a convicted person, and parole officers work with psychologists to create a program for the rehabilitation of a parolee. Problematic areas for psychologists include predicting how dangerous a person will be: there is currently no accurate measure for this prediction, yet it is often needed to prevent dangerous people from returning to society.

Psychologists may also be called on to conduct a variety of assessments within an educational setting. They may be asked to assess the strengths and weaknesses of children who are having difficulty in school, assess behavioral difficulties, assess a child's responsiveness to an intervention, or help create an educational plan for a child. Assessing children also allows psychologists to determine whether a child will be willing to use the resources that may be provided.

In a psychological clinic setting, psychological assessment can be used to determine characteristics of the client that are useful for developing a treatment plan. Within this setting, psychologists often work with clients who may have medical or legal problems, or sometimes with students referred by their school psychologist.

Some psychological assessments have been validated for administration via computer or the Internet. However, caution must be applied to these results, as it is possible to fake responses in electronically mediated assessment, and many electronic assessments do not truly measure what they claim to. The Myers-Briggs personality test, for example, is one of the best-known personality assessments, yet many psychological researchers have found it to be both invalid and unreliable, and it should be used with caution.

Within clinical psychology, the "clinical method" is an approach to understanding and treating mental disorders that begins with a particular individual's personal history and is designed around that individual's psychological needs. It is sometimes posed as an alternative approach to the experimental method which focuses on the importance of conducting experiments in learning how to treat mental disorders, and the differential method which sorts patients by class (gender, race, income, age, etc.) and designs treatment plans based around broad social categories.

Taking a personal history along with a clinical examination allows the health practitioner to establish a full clinical diagnosis. A patient's medical history provides insights into diagnostic possibilities as well as the patient's experiences with illness. Patients are asked about the current illness and its history, past medical and family history, other drugs or dietary supplements being taken, lifestyle, and allergies. The inquiry includes obtaining information about relevant diseases or conditions of other people in the family. Self-report methods may be used, including questionnaires, structured interviews, and rating scales.

Personality Assessment

Personality traits are an individual's enduring manner of perceiving, feeling, evaluating, reacting, and interacting with other people specifically, and with their environment more generally. Because reliable and valid personality inventories give a relatively accurate representation of a person's characteristics, they are beneficial in the clinical setting as supplementary material to standard initial assessment procedures such as a clinical interview; review of collateral information, e.g., reports from family members; and review of psychological and medical treatment records.

MMPI

History

Developed by Starke R. Hathaway, PhD, and J. C. McKinley, MD, the Minnesota Multiphasic Personality Inventory (MMPI) is a personality inventory used to investigate not only personality but also psychopathology. The MMPI was developed using an empirical, atheoretical approach, meaning it was not built on any of the frequently changing psychodynamic theories of the time. There are two variations of the MMPI administered to adults, the MMPI-2 and the MMPI-2-RF, and two administered to adolescents, the MMPI-A and the MMPI-A-RF. The inventory's validity was confirmed by Hiller, Rosenthal, Bornstein, and Berry in their 1999 meta-analysis. Since its creation, the MMPI in its various forms has been routinely administered in hospital, clinical, prison, and military settings.

MMPI-2

The MMPI-2 consists of 567 true or false questions aimed at measuring the reporting person's psychological wellbeing. The MMPI-2 is commonly used in clinical settings and occupational health settings. There is a revised version of the MMPI-2 called the MMPI-2-RF (MMPI-2 Restructured Form). The MMPI-2-RF is not intended to be a replacement for the MMPI-2, but is used to assess patients using the most current models of psychopathology and personality.

MMPI-2 and MMPI-2-RF Scales
  • MMPI-2: 567 items, 120 scales. Scale categories: Validity Indicators, Superlative Self-Presentation Subscales, Clinical Scales, Restructured Clinical (RC) Scales, Content Scales, Content Component Scales, Supplementary Scales, Clinical Subscales (Harris-Lingoes and Social Introversion Subscales)
  • MMPI-2-RF: 338 items, 51 scales. Scale categories: Validity, Higher-Order (H-O), Restructured Clinical (RC), Somatic, Cognitive, Internalizing, Externalizing, Interpersonal, Interest, Personality Psychopathology Five (PSY-5)

MMPI-A

The MMPI-A was published in 1992 and consists of 478 true-or-false questions. This version of the MMPI is similar to the MMPI-2 but is used for adolescents (ages 14-18) rather than adults. The restructured form of the MMPI-A, the MMPI-A-RF, was published in 2016 and consists of 241 true-or-false questions that can be understood at a sixth-grade reading level. Both the MMPI-A and MMPI-A-RF are used to assess adolescents for personality and psychological disorders, as well as to evaluate cognitive processes.

MMPI-A and MMPI-A-RF Scales
  • MMPI-A: 478 items, 105 scales. Scale categories: Validity Indicators, Clinical Scales, Clinical Subscales (Harris-Lingoes and Social Introversion Subscales), Content Scales, Content Component Scales, Supplementary Scales
  • MMPI-A-RF: 241 items, 48 scales. Scale categories: Validity, Higher-Order (H-O), Restructured Clinical (RC), Somatic/Cognitive, Internalizing, Externalizing, Interpersonal, Personality Psychopathology Five (PSY-5)

NEO Personality Inventory

The NEO Personality Inventory was developed by Paul Costa Jr. and Robert R. McCrae in 1978. As initially created, it measured only three of the Big Five personality traits: Neuroticism, Openness to Experience, and Extroversion, and was accordingly named the Neuroticism-Extroversion-Openness Inventory (NEO-I). Agreeableness and Conscientiousness were not added until 1985; with all Big Five traits assessed, the instrument was renamed the NEO Personality Inventory. Research on the NEO-PI continued over the following years, and a revised manual with six facets for each Big Five trait was published in 1992 as the NEO PI-R. In the 1990s, issues were found with the NEO PI-R: its developers found it too difficult for younger respondents, and a further revision produced the NEO PI-3.

The NEO Personality Inventory is administered in two forms, self-report and observer report. It consists of 240 personality items plus one validity item and can be administered in roughly 35-45 minutes. Each item is answered on a Likert scale from Strongly Disagree to Strongly Agree. If more than 40 items are missing, or if the number of Strongly Agree/Strongly Disagree responses is greater than 150 or less than 50, the protocol should be viewed with great caution and may be invalid. The NEO report records each trait's T score along with its percentile relative to all data recorded for the assessment. Each trait is then broken down into its six facets, with raw scores, individual T scores, and percentiles. The report goes on to describe in words what each score means and what each facet entails, and lists the exact responses to the questions along with the validity response and the number of missing responses.
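As a concrete illustration, the protocol-validity rule above can be sketched in a few lines. This is a minimal sketch only: the function name, response encoding, and flag messages are assumptions for illustration, not part of any official NEO scoring software; the thresholds (more than 40 missing items, more than 150 or fewer than 50 extreme responses) are the ones described above.

```python
# Hedged sketch of a protocol-validity screen for a 240-item inventory.
# Responses are encoded 1..5 on a Likert scale (1 = Strongly Disagree,
# 5 = Strongly Agree), with None marking an unanswered item.

def protocol_flags(responses):
    """Return a list of validity concerns for one protocol (empty = none)."""
    missing = sum(1 for r in responses if r is None)
    extreme = sum(1 for r in responses if r in (1, 5))
    flags = []
    if missing > 40:
        flags.append("too many missing items")
    if extreme > 150:
        flags.append("possible acquiescent/extreme responding")
    if extreme < 50:
        flags.append("possible midpoint/random responding")
    return flags

# A protocol with 60 unanswered items trips the missing-item rule (and,
# with every answered item at the midpoint, the low-extreme rule too).
print(protocol_flags([3] * 180 + [None] * 60))
```

In practice such checks are only a first screen; a flagged protocol is interpreted cautiously rather than discarded outright.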

When an individual is given their NEO report, it is important to understand specifically what the facets are and what the corresponding scores mean.
  • Neuroticism
    • Anxiety
      • High scores suggest nervousness, tenseness, and fearfulness. Low scores suggest feeling relaxed and calm.
    • Angry Hostility
      • High scores suggest feeling anger and frustration often. Low scores suggest being easy-going.
    • Depression
      • High scores suggest feeling guilty, sad, hopeless, and lonely. Low scores suggest the absence of these feelings, but not necessarily being light-hearted and cheerful.
    • Self-Consciousness
      • High scores suggest shame, embarrassment, and sensitivity. Low scores suggest being less affected by others' opinions, but not necessarily having good social skills or poise.
    • Impulsiveness
      • High scores suggest the inability to control cravings and urges. Low scores suggest easy resistance to such urges.
    • Vulnerability
      • High scores suggest inability to cope with stress, being dependent, and feeling panicked in high stress situations. Low scores suggest capability to handle stressful situations.
  • Extraversion
    • Warmth
      • High scores suggest friendliness and affectionate behavior. Low scores suggest being more formal, reserved, and distant. A low score does not necessarily mean being hostile or lacking compassion.
    • Gregariousness
      • High scores suggest wanting the company of others. Low scores tend to be from those who avoid social stimulation.
    • Assertiveness
      • High scores suggest a forceful and dominant person who does not hesitate. Low scorers are more passive and try not to stand out in a crowd.
    • Activity
      • High scores suggest a more energetic and upbeat personality and a quicker-paced lifestyle. Low scores suggest the person is more leisurely, though not necessarily lazy or slow.
    • Excitement-Seeking
      • High scores suggest a person who seeks and craves excitement, similar to high sensation seeking. Low scorers seek a less exciting lifestyle and may come across as dull.
    • Positive Emotions
      • High scores suggest a tendency to feel happier, laugh more, and be optimistic. Low scorers are not necessarily unhappy, but are less high-spirited and more pessimistic.
  • Openness to Experience
    • Fantasy
      • Those who score high in fantasy have a more creative imagination and daydream frequently. Low scores suggest a person who lives more in the moment.
    • Aesthetics
      • High scores suggest a love and appreciation for art and physical beauty. These people are more emotionally attached to music, artwork, and poetry. Low scorers have a lack of interest in the arts.
    • Feelings
      • High scorers have a deeper ability to experience emotion and see their emotions as more important than those who score low on this facet. Low scorers are less expressive.
    • Actions
      • High scores suggest a more open-mindedness to traveling and experiencing new things. These people prefer novelty over a routine life. Low scorers prefer a scheduled life and dislike change.
    • Ideas
      • Active pursuit of knowledge, high curiosity, and enjoyment of brain teasers and philosophical discussion are common among those who score high on this facet. A high score does not imply high intelligence, nor are lower scorers necessarily less intelligent; however, lower scorers have narrower interests and lower curiosity.
    • Values
      • High scorers are more willing to examine political, social, and religious values. Those who score lower are more accepting of authority and honor more traditional values. High scorers tend to be more liberal, while lower scorers tend to be more conservative.
  • Agreeableness
    • Trust
      • High scorers are more trusting of others and believe others are honest and well-intentioned. Low scorers are more skeptical and cynical and assume others are dishonest and/or dangerous.
    • Straightforwardness
      • Those who score high on this facet are more sincere and frank. Low scorers are more willing to shade the truth or manipulate others, though this does not mean they should be labeled dishonest or manipulative.
    • Altruism
      • High scores suggest a person concerned with the well-being of others and show it through generosity, willingness to help others, and volunteering for those less fortunate. Low scores suggest a more self-centered person who is less willing to go out of their way to help others.
    • Compliance
      • High scorers are more inclined to avoid conflict and tend to forgive easily. Low scores suggest a more aggressive personality and a love for competition.
    • Modesty
      • High scorers are more humble, but not necessarily lacking in self-esteem or confidence. Low scorers believe they are superior to others and may come across as conceited.
    • Tender-Mindedness
      • This facet scales one's concern for others and their ability to empathize. High scorers are more moved by others' emotions, while low scorers are more hardheaded and typically consider themselves realists.
  • Conscientiousness
    • Competence
      • High scores suggest one is capable, sensible, prudent, and effective, and feels well-prepared to deal with whatever happens in life. Low scores suggest potentially lower self-esteem and being often unprepared.
    • Order
      • High scorers are more neat and tidy, while low scorers lack organization and are unmethodical.
    • Dutifulness
      • Those who score highly in this facet are more strict about their ethical principles and are more dependable. Low scorers are less reliable and are more casual about their morals.
    • Achievement Striving
      • Those who score highly in this facet have higher aspirations and work harder to achieve their goals. However, they may be too invested in their work and become a workaholic. Low scorers are much less ambitious and perhaps even lazy. They are often content with their lack of goal-seeking.
    • Self-Discipline
      • High scorers complete whatever task is assigned to them and are self-motivated. Low scorers often procrastinate and are easily discouraged.
    • Deliberation
      • High scorers tend to think more than low scorers before acting. High scorers are more cautious and deliberate while low scorers are more hasty and act without considering the consequences.
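The T scores that appear in the NEO report are standardized scores with a mean of 50 and a standard deviation of 10, computed against normative statistics. A minimal sketch of the conversion, assuming the normative mean and standard deviation are known (the norm values below are illustrative placeholders; real scoring uses published norm tables):

```python
# Convert a raw scale score to a T score (mean 50, SD 10) given normative
# statistics for the reference group. Norm values here are illustrative.
def t_score(raw, norm_mean, norm_sd):
    return 50 + 10 * (raw - norm_mean) / norm_sd

# A raw score one standard deviation above the normative mean maps to T = 60.
print(t_score(raw=28, norm_mean=20, norm_sd=8))  # prints 60.0
```

Because T scores are linear transformations of the raw score, they preserve rank order within the norm group while putting every scale on a common metric.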

HEXACO-PI

The HEXACO-PI, developed by Lee and Ashton in the early 2000s, is a personality inventory that measures six dimensions of personality found in lexical studies across various cultures. There are two versions, the HEXACO-PI and the HEXACO-PI-R, each administered as either a self report or an observer report. The HEXACO-PI-R comes in three lengths: 200 items, 100 items, and 60 items. Items on each form are grouped into scales measuring narrower personality traits, which are then grouped into broad scales for the six dimensions: Honesty-Humility (H), Emotionality (E), Extroversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). The HEXACO-PI-R includes various traits associated with neuroticism and can be used to help identify trait tendencies. A table giving examples of adjectives that typically load highly on the six HEXACO factors can be found in Ashton's book Individual Differences and Personality.

Adjectives relating to the six factors within the HEXACO structure (each factor is listed with its narrow personality traits in parentheses, followed by related adjectives):
  • Honesty-Humility (sincerity, fairness, greed-avoidance, modesty): sincere, honest, faithful/loyal, modest/unassuming, fair-minded versus sly, deceitful, greedy, pretentious, hypocritical, boastful, pompous
  • Emotionality (fearfulness, anxiety, dependence, sentimentality): emotional, oversensitive, sentimental, fearful, anxious, vulnerable versus brave, tough, independent, self-assured, stable
  • Extraversion (social self-esteem, social boldness, sociability, liveliness): outgoing, lively, extroverted, sociable, talkative, cheerful, active versus shy, passive, withdrawn, introverted, quiet, reserved
  • Agreeableness (forgivingness, gentleness, flexibility, patience): patient, tolerant, peaceful, mild, agreeable, lenient, gentle versus ill-tempered, quarrelsome, stubborn, choleric
  • Conscientiousness (organization, diligence, perfectionism, prudence): organized, disciplined, diligent, careful, thorough, precise versus sloppy, negligent, reckless, lazy, irresponsible, absent-minded
  • Openness to Experience (aesthetic appreciation, inquisitiveness, creativity, unconventionality): intellectual, creative, unconventional, innovative, ironic versus shallow, unimaginative, conventional

One benefit of using the HEXACO lies in the neuroticism-related facets within the Emotionality factor: trait neuroticism has been shown to have a moderate positive correlation with anxiety and depression. Identifying trait neuroticism on a scale, paired with anxiety and/or depression, is useful in a clinical setting for introductory screening for some personality disorders. Because the HEXACO has facets that help identify traits of neuroticism, it is also a helpful indicator of the dark triad.

Pseudopsychology (pop psychology) in assessment

Although there have been many great advancements in the field of psychological evaluation, some problems have also developed. One of the main problems is pseudopsychology, also called pop psychology, and psychological evaluation is one of its biggest arenas. In a clinical setting, patients may be unaware that they are not receiving legitimate psychological treatment, and this unawareness is one of the foundations on which pseudopsychology rests. Pseudopsychology relies largely on the testimonials of previous patients, the avoidance of peer review (a critical component of any science), and poorly constructed tests, which can include confusing language or conditions left up to interpretation.

Pseudopsychology can also occur when people claim to be psychologists but lack qualifications. A prime example is found in quizzes that can lead to a variety of false conclusions. These can be found in magazines, online, or almost anywhere accessible to the public. They usually consist of a small number of questions designed to tell participants things about themselves. The problem is that they are usually written by people who know nothing about psychological assessment and have no research or evidence to back up any conclusion the quiz draws. Such quizzes can tarnish the reputation of genuine psychological assessment.

Ethics

Concerns about privacy, cultural biases, unvalidated tests, and inappropriate contexts have led groups such as the American Educational Research Association (AERA) and the American Psychological Association (APA) to publish assessment guidelines for examiners. The APA states that a client must give permission before any information obtained by a psychologist is released; the only exceptions are for minors, when clients are a danger to themselves or others, or when clients are applying for a job that requires the information. Privacy issues also arise during the assessment itself: the client has the right to say as much or as little as they would like, yet they may feel pressured to say more than they want, or may accidentally reveal information they would prefer to keep private.

Guidelines have been put in place to ensure that the psychologist giving the assessments maintains a professional relationship with the client, since that relationship can affect the outcomes of the assessment. The examiner's expectations may also influence the client's performance.

The validity and reliability of the tests being used can also affect assessment outcomes. When psychologists choose which assessments to use, they should pick those most effective for the question at hand. It is also important that psychologists be aware that clients may, consciously or unconsciously, fake answers, and consider using tests that include validity scales.
