Mathematical models can project how infectious diseases progress, show the likely outcome of an epidemic, and help inform public health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases, and use those parameters to calculate the effects of different interventions, such as mass vaccination programmes. Modelling can help decide which interventions to avoid and which to trial, or can predict future growth patterns.
History
The
modelling of infectious diseases is a tool that has been used to study
the mechanisms by which diseases spread, to predict the future course of
an outbreak and to evaluate strategies to control an epidemic.
The first scientist who systematically tried to quantify causes of death was John Graunt in his book Natural and Political Observations made upon the Bills of Mortality,
in 1662. The bills he studied were listings of numbers and causes of
deaths published weekly. Graunt's analysis of causes of death is
considered the beginning of the "theory of competing risks" which
according to Daley and Gani is "a theory that is now well established among modern epidemiologists".
The earliest account of mathematical modelling of spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase the life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory.
The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune
individuals in a population. The Kermack–McKendrick epidemic model was
successful in predicting the behavior of outbreaks very similar to that
observed in many recorded epidemics.
Assumptions
Models
are only as good as the assumptions on which they are based. If a model
makes predictions that are out of line with observed results and the
mathematics is correct, the initial assumptions must change to make the
model useful.
Rectangular and stationary age distribution, i.e., everybody in the population lives to age L and then dies, and for each age (up to L)
there is the same number of people in the population. This is often
well justified for developed countries, where infant mortality is low
and much of the population lives to the life expectancy.
Homogeneous mixing of the population, i.e., individuals of the population under scrutiny assort and make contact at random, rather than mixing mostly within smaller subgroups. This assumption is rarely justified, because social structure is widespread. For example, most people in London make contact only with other Londoners. Further, within London there are smaller subgroups, such as the Turkish community or teenagers (to give just two examples), who mix with each other more than with people outside their group. However, homogeneous mixing is a standard assumption to make the mathematics tractable.
Types of epidemic models
Stochastic
"Stochastic"
means being or having a random variable. A stochastic model is a tool
for estimating probability distributions of potential outcomes by
allowing for random variation in one or more inputs over time.
Stochastic models depend on the chance variations in risk of exposure,
disease and other illness dynamics.
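To make this concrete, here is a minimal sketch of a stochastic epidemic simulation in Python, in the spirit of the chain-binomial Reed–Frost model; the population size, per-contact probability and random seed are illustrative choices, not parameters from any real outbreak:

    import numpy as np

    rng = np.random.default_rng(42)

    def reed_frost(S0, I0, p, max_steps=100):
        """Chain-binomial epidemic: each susceptible independently escapes
        infection by each of the I current infectives with probability 1 - p."""
        S, I = S0, I0
        history = [(S, I)]
        for _ in range(max_steps):
            if I == 0:
                break                          # epidemic has died out
            p_inf = 1.0 - (1.0 - p) ** I       # chance a susceptible is infected this generation
            new_I = rng.binomial(S, p_inf)     # random variation enters here
            S, I = S - new_I, new_I
            history.append((S, I))
        return history

    # Example run: 990 susceptibles, 10 initial infectives, p = 0.0015
    for S, I in reed_frost(990, 10, 0.0015)[:5]:
        print(f"S={S:4d}  I={I:4d}")

Re-running with a different seed gives a different epidemic trajectory, which is exactly the chance variation a deterministic model averages away.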
Deterministic
When
dealing with large populations, as in the case of tuberculosis,
deterministic or compartmental mathematical models are often used. In a
deterministic model, individuals in the population are assigned to
different subgroups or compartments, each representing a specific stage
of the epidemic. Letters such as M, S, E, I, and R are often used to
represent different stages.
The transition rates from one class to another are mathematically
expressed as derivatives, hence the model is formulated using
differential equations. While building such models, it must be assumed
that the population size in a compartment is differentiable with respect
to time and that the epidemic process is deterministic. In other words,
the changes in population of a compartment can be calculated using only
the history that was used to develop the model.
Reproduction number
The basic reproduction number (denoted by R0)
is a measure of how transmissible a disease is. It is the average number
of people that a single infectious person will infect over the course
of their infection. This quantity determines whether the infection will
spread exponentially, die out, or remain constant: if R0 > 1, then each person on average infects more than one other person so the disease will spread; if R0 < 1, then each person infects fewer than one person on average so the disease will die out; and if R0 = 1, then each person will infect on average exactly one other person, so the disease will become endemic: it will move throughout the population but not increase or decrease.
The basic reproduction number can be computed as a ratio of known
rates over time: if an infectious individual contacts β other people
per unit time, if all of those people are assumed to contract the
disease, and if the disease has a mean infectious period of 1/γ, then
the basic reproduction number is just R0 = β/γ.
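A quick sketch of this threshold calculation (β and γ below are illustrative values, not estimates for any real disease):

    def basic_reproduction_number(beta, gamma):
        """R0 = beta / gamma: infecting contacts per unit time,
        times the mean infectious period 1/gamma."""
        return beta / gamma

    beta, gamma = 0.3, 0.1    # e.g. 0.3 infecting contacts/day, 10-day infectious period
    r0 = basic_reproduction_number(beta, gamma)
    if r0 > 1:
        verdict = "spreads"
    elif r0 < 1:
        verdict = "dies out"
    else:
        verdict = "remains endemic"
    print(f"R0 = {r0:.1f} -> infection {verdict}")   # R0 = 3.0 -> infection spreads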
Some diseases have multiple possible latency periods, in which case
the reproduction number for the disease overall is the sum of the
reproduction number for each transition time into the disease. For
example, Blower et al. model two forms of tuberculosis infection: in the fast case, the symptoms show up immediately after exposure; in the slow case, the symptoms develop years after the initial exposure (endogenous reactivation). The overall reproduction number is the sum of the two forms of contraction: R0 = R0,fast + R0,slow.
Endemic steady state
An infectious disease is said to be endemic
when it can be sustained in a population without the need for external
inputs. This means that, on average, each infected person is infecting exactly one other person (any more and the number of people infected will grow exponentially and there will be an epidemic; any less and the disease will die out). In mathematical terms, that is:

R0 × S = 1

The basic reproduction number (R0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible (S), must be one (since those who are not susceptible do not feature in our calculations, as they cannot contract the disease). Notice that this relation means that, for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the susceptibility proportion: for example, R0 = 0.5 would imply S = 2, a proportion that exceeds the whole population.
Assume the rectangular stationary age distribution and let also
the ages of infection have the same distribution for each birth year.
Let the average age of infection be A, for instance when individuals younger than A are susceptible and those older than A
are immune (or infectious). Then it can be shown by an easy argument
that the proportion of the population that is susceptible is given by:

S = A / L

We reiterate that L is the age at which, in this model, every individual is assumed to die. The mathematical definition of the endemic steady state can then be rearranged to give:

R0 = L / A

This allows the basic reproduction number of a disease to be estimated given A and L in either type of population distribution.
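A short numeric check of these relations (the ages are illustrative, not measurements for a real disease):

    def r0_from_ages(A, L):
        """Endemic steady state: R0 = L / A, so the susceptible
        proportion S = A / L satisfies R0 * S = 1."""
        return L / A

    A, L = 5.0, 70.0    # mean age of infection 5 y, lifespan 70 y (illustrative)
    r0 = r0_from_ages(A, L)
    S = A / L
    print(f"R0 = {r0:.1f}, S = {S:.3f}, R0*S = {r0 * S:.1f}")   # R0 = 14.0, S = 0.071, R0*S = 1.0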
Modelling epidemics
The SIR model is one of the more basic models used for modelling epidemics. There are many modifications to the model.
The SIR model
Diagram of the SIR model with initial values S(0), I(0), R(0), a rate β for infection and a rate γ for recovery.
Animation of the SIR model with given initial values, an initial rate β for infection and a constant rate γ for recovery.
If there is neither medicine nor vaccination available, it is only possible to reduce the infection rate (often referred to as "flattening the curve") by appropriate measures, e.g. "social distancing". The animation shows the impact of reducing the infection rate β by 76%.
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S(t); infected, I(t); and recovered, R(t). The compartments used for this model consist of three classes:

S(t) is used to represent the individuals not yet infected with the disease at time t, or those susceptible to the disease of the population.
I(t) denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category.
R(t) is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others.
The flow of this model may be considered as follows:

S → I → R
Using a fixed population, N = S(t) + I(t) + R(t), in the three functions resolves that the value N should remain constant within the simulation. The model is started with values of S(t=0), I(t=0) and R(t=0), the numbers of people in the susceptible, infected and removed categories at time zero. Subsequently, the flow model updates the three variables for every time point with set values for β and γ. The simulation first updates the infected from the susceptible, and then the removed category is updated from the infected category, for the next time point (t=1). This describes the flow of persons between the three categories. During an epidemic, the susceptible category is not shifted with this model; β changes over the course of the epidemic, and so does γ. These variables determine the length of the epidemic and would have to be updated with each cycle. The resulting equations are:

dS/dt = −βSI/N
dI/dt = βSI/N − γI
dR/dt = γI
Several assumptions were made in the formulation of these equations:
First, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease, with a rate a, and an equal number of people, b, that an individual makes contact with per unit time. Then, let β be the product of a and b: the transmission probability times the contact rate. An infected individual makes contact with b persons per unit time, of whom only a fraction S/N are susceptible. Thus, every infective can infect ab(S/N) = βS/N susceptible persons per unit time, and therefore the whole number of susceptibles infected by infectives per unit time is βSI/N.
For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, a number equal to the fraction γ (which represents the mean recovery/death rate, or 1/γ the mean infective period) of infectives are leaving this class per unit time to enter the removed class. These processes, which occur simultaneously, are referred to as the law of mass action, the widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned. Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths, and therefore these factors are ignored in this model.
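Putting the pieces together, the following is a minimal sketch of the SIR equations integrated numerically in Python with SciPy; the rates and initial values are illustrative, not fitted to any real epidemic:

    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma, N):
        """Kermack-McKendrick SIR equations:
        dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
        S, I, R = y
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        return [dS, dI, dR]

    N = 1000                    # fixed population: S + I + R = N throughout
    beta, gamma = 0.3, 0.1      # illustrative rates; R0 = beta/gamma = 3
    y0 = [997, 3, 0]            # S(0), I(0), R(0)

    sol = solve_ivp(sir, (0, 160), y0, args=(beta, gamma, N), dense_output=True)
    S, I, R = sol.sol(np.linspace(0, 160, 5))
    print(np.round(I))          # infected counts at a few sample times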
Steady-state solutions
The expected duration of susceptibility will be E[min(T_L, T_S)], where T_L reflects the time alive (life expectancy) and T_S reflects the time in the susceptible state before becoming infected. Writing μ for the per-capita death (and birth) rate and λ for the force of infection, this can be simplified to:

E[min(T_L, T_S)] = 1/(μ + λ)

such that the number of susceptible persons is the number entering the susceptible compartment, μN, times the duration of susceptibility:

S = μN/(μ + λ)

Analogously, the steady-state number of infected persons is the number entering the infected state from the susceptible state (number susceptible, times rate of infection λ) times the duration of infectiousness, 1/(μ + v), where v is the recovery rate:

I = μN/(μ + λ) × λ/(μ + v)
Other compartmental models
There
are many modifications of the SIR model, including those that include
births and deaths, where upon recovery there is no immunity (SIS model),
where immunity lasts only for a short period of time (SIRS), where
there is a latent period of the disease where the person is not
infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR).
Infectious disease dynamics
Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem.
If the proportion of the population that is immune exceeds the herd immunity
level for the disease, then the disease can no longer persist in the
population. Thus, if this level can be exceeded by vaccination, the
disease can be eliminated. An example of this being successfully
achieved worldwide is the global smallpox eradication, with the last wild case in 1977. The WHO is carrying out a similar vaccination campaign to eradicate polio.
The herd immunity level will be denoted q. Recall that, for a stable state:

R0 × S = 1

S will be (1 − q), since q is the proportion of the population that is immune and q + S must equal one (since in this simplified model, everyone is either susceptible or immune). Then:

R0 × (1 − q) = 1
q = 1 − 1/R0
Remember that this is the threshold level. If the proportion of immune individuals exceeds this level due to a mass vaccination programme, the disease will die out.
We have just calculated the critical immunisation threshold, denoted qc = 1 − 1/R0. It is the minimum proportion of the population that must be immunised at birth (or close to birth) in order for the infection to die out in the population.
Because the fraction of the final size of the population p that is never infected can be defined as:

p = lim(t→∞) S(t)

Hence,

ln p = −R0(1 − p)

Solving for R0, we obtain:

R0 = ln(p) / (p − 1)
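A small sketch tying these formulas together (the values of R0 and p are illustrative):

    import math

    def critical_immunisation_threshold(r0):
        """q_c = 1 - 1/R0: minimum immunised proportion for die-out."""
        return 1.0 - 1.0 / r0

    def r0_from_final_size(p):
        """Final-size relation: R0 = ln(p) / (p - 1), where p is the
        fraction of the population never infected."""
        return math.log(p) / (p - 1.0)

    print(f"qc for R0 = 4:       {critical_immunisation_threshold(4.0):.2f}")   # 0.75
    print(f"R0 if 20% escaped:   {r0_from_final_size(0.20):.2f}")               # ~2.01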
When mass vaccination cannot exceed the herd immunity
If the vaccine used is insufficiently effective or the required coverage cannot be reached (for example due to popular resistance), the programme may fail to exceed qc. Such a programme can, however, disturb the balance of the infection without eliminating it, often causing unforeseen problems.
Suppose that a proportion of the population q (where q < qc) is immunised at birth against an infection with R0 > 1. The vaccination programme changes R0 to Rq, where

Rq = R0(1 − q)

This change occurs simply because there are now fewer susceptibles in the population who can be infected. Rq is simply R0 minus those that would normally be infected but that cannot be now, since they are immune.
As a consequence of this lower basic reproduction number, the average age of infection A will also change to some new value Aq in those who have been left unvaccinated.
Recall the relation that linked R0, A and L. Assuming that life expectancy has not changed, now:

Rq = L / Aq

But R0 = L/A, so:

R0(1 − q) = L / Aq, and hence Aq = L / (R0(1 − q)) = A / (1 − q)
Thus the vaccination programme will raise the average age of
infection, another mathematical justification for a result that might
have been intuitively obvious. Unvaccinated individuals now experience a
reduced force of infection due to the presence of the vaccinated group.
However, it is important to consider this effect when vaccinating
against diseases that are more severe in older people. A vaccination
programme against such a disease that does not exceed qc
may cause more deaths and complications than there were before the
programme was brought into force as individuals will be catching the
disease later in life. These unforeseen outcomes of a vaccination
programme are called perverse effects.
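A sketch of this effect, using the relations above (R0, L and q are illustrative values):

    def post_vaccination(r0, L, q):
        """Partial coverage q < qc lowers Rq = R0*(1-q) and raises the
        average age of infection to Aq = A / (1 - q)."""
        A = L / r0                  # pre-vaccination average age of infection
        Rq = r0 * (1.0 - q)
        Aq = A / (1.0 - q)
        return Rq, A, Aq

    Rq, A, Aq = post_vaccination(r0=10.0, L=70.0, q=0.5)    # qc would be 0.9
    print(f"Rq = {Rq:.1f}, A = {A:.1f} y -> Aq = {Aq:.1f} y")   # Rq = 5.0, A = 7.0 -> Aq = 14.0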
When mass vaccination exceeds the herd immunity
If
a vaccination programme causes the proportion of immune individuals in a
population to exceed the critical threshold for a significant length of
time, transmission of the infectious disease in that population will
stop. This is known as elimination of the infection and is different
from eradication.
Elimination
Interruption of endemic transmission of an infectious disease, which occurs if each infected individual infects fewer than one other person, is achieved by maintaining vaccination coverage to keep the proportion of immune individuals above the critical immunisation threshold.
Eradication
Reduction of infective organisms in the wild worldwide to zero. So far, this has only been achieved for smallpox and rinderpest. To get to eradication, elimination in all world regions must be achieved.
The science of epidemiology has matured significantly from the times of Hippocrates, Semmelweis and John Snow.
The techniques for gathering and analyzing epidemiological data vary
depending on the type of disease being monitored but each study will
have overarching similarities.
Outline of the process of an epidemiological study
Establish that a problem exists
Full epidemiological studies are expensive and laborious
undertakings. Before any study is started, a case must be made for the
importance of the research.
Confirm the homogeneity of the events
Any conclusions drawn from inhomogeneous cases will be suspect. All events or occurrences of the disease must be true cases of the disease.
Collect all the events
It is important to collect as much information as possible about
each event in order to inspect a large number of possible risk factors.
The events may be collected from varied methods of epidemiological study or from censuses or hospital records.
Often, occurrence of a single disease entity is set as an event.
Given the inherently heterogeneous nature of any given disease (i.e., the unique disease principle), a single disease entity may be treated as a set of disease subtypes. This framework is well conceptualized in the interdisciplinary field of molecular pathological epidemiology (MPE).
Characterize the events as to epidemiological factors
Predisposing factors
Non-environmental factors that increase the likelihood of getting a disease. Genetic history, age, and gender are examples.
Enabling/disabling factors
Factors relating to the environment that either increase or
decrease the likelihood of disease. Exercise and good diet are examples
of disabling factors. A weakened immune system and poor nutrition are
examples of enabling factors.
Precipitating factors
This factor is the most important in that it identifies the source of exposure. It may be a germ, toxin or gene.
Reinforcing factors
These are factors that compound the likelihood of getting a disease. They may include repeated exposure or excessive environmental stresses.
Look for patterns and trends
Here one looks for similarities in the cases which may identify major risk factors for contracting the disease. Epidemic curves may be used to identify such risk factors.
Formulate a hypothesis
If a trend has been observed in the cases, the researcher may
postulate as to the nature of the relationship between the potential
disease-causing agent and the disease.
Test the hypothesis
Because epidemiological studies can rarely be conducted in a laboratory, the results are often polluted by uncontrollable variations in the cases. This often makes the results difficult to interpret. Two methods have evolved to assess the strength of the relationship between the disease-causing agent and the disease.
Koch's postulates
were the first criteria developed for epidemiological relationships.
Because they only work well for highly contagious bacteria and toxins,
this method is largely out of favor.
Bradford-Hill Criteria
are the current standards for epidemiological relationships. A
relationship may fill all, some, or none of the criteria and still be
true.
Publish the results.
Measures
Epidemiologists
are famous for their use of rates. Each measure serves to characterize
the disease, giving valuable information about contagiousness, incubation
period, duration, and mortality of the disease.
Epidemiological (and other observational) studies typically highlight associations
between exposures and outcomes, rather than causation. While some
consider this a limitation of observational research, epidemiological
models of causation (e.g. Bradford Hill criteria) contend that an entire body of evidence is needed before determining if an association is truly causal.
Moreover, many research questions are impossible to study in
experimental settings, due to concerns around ethics and study validity.
For example, the link between cigarette smoke and lung cancer was
uncovered largely through observational research; however research
ethics would certainly prohibit conducting a randomized trial of
cigarette smoking once it had already been identified as a potential
health threat.
Epidemiology is the study and analysis of the distribution (who, when, and where), patterns and determinants of health and disease conditions in defined populations.
Epidemiology, literally meaning "the study of what is upon the people", is derived from Greek epi, meaning 'upon, among', demos, meaning 'people, district', and logos,
meaning 'study, word, discourse', suggesting that it applies only to
human populations. However, the term is widely used in studies of
zoological populations (veterinary epidemiology), although the term "epizoology" is available, and it has also been applied to studies of plant populations (botanical or plant disease epidemiology).
The distinction between "epidemic" and "endemic" was first drawn by Hippocrates,
to distinguish between diseases that are "visited upon" a population
(epidemic) from those that "reside within" a population (endemic).
The term "epidemiology" appears to have first been used to describe the
study of epidemics in 1802 by the Spanish physician Villalba in Epidemiología Española. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic.
The term epidemiology is now widely applied to cover the description and causation of not only epidemic disease, but of disease in general, and even many non-disease, health-related conditions, such as high blood pressure, depression and obesity. Epidemiology in this broad sense is therefore concerned with how patterns of disease affect the functioning of human populations.
History
The Greek physician Hippocrates, known as the father of medicine,
sought a logic to sickness; he is the first person known to have
examined the relationships between the occurrence of disease and
environmental influences. Hippocrates believed sickness of the human body to be caused by an imbalance of the four humors
(black bile, yellow bile, blood, and phlegm). The cure to the sickness
was to remove or add the humor in question to balance the body. This
belief led to the application of bloodletting and dieting in medicine. He coined the terms endemic (for diseases usually found in some places but not in others) and epidemic (for diseases that are seen at some times but not others).
Modern era
In the middle of the 16th century, a doctor from Verona named Girolamo Fracastoro was the first to propose a theory that the very small, unseeable particles that cause disease were alive. They were considered able to spread by air, multiply by themselves and be destroyed by fire. In this way he refuted Galen's miasma theory (poison gas in sick people). In 1546 he wrote the book De contagione et contagiosis morbis, in which he was the first to promote personal and environmental hygiene to prevent disease. The development of a sufficiently powerful microscope by Antonie van Leeuwenhoek in 1675 provided visual evidence of living particles consistent with a germ theory of disease.
During the Ming Dynasty, Wu Youke (1582–1652) developed the idea that some diseases were caused by transmissible agents, which he called Li Qi (戾气, pestilential factors), when he observed various epidemics rage around him between 1641 and 1644. His book Wen Yi Lun (瘟疫论, Treatise on Pestilence/Treatise of Epidemic Diseases) can be regarded as the main etiological work that brought forward the concept. His concepts were still being considered in the analysis of the SARS outbreak by the WHO in 2004, in the context of traditional Chinese medicine.
Another pioneer, Thomas Sydenham
(1624–1689), was the first to distinguish the fevers of Londoners in
the later 1600s. His theories on cures of fevers met with much
resistance from traditional physicians at the time. He was not able to
find the initial cause of the smallpox fever he researched and treated.
John Graunt, a haberdasher and amateur statistician, published Natural and Political Observations ... upon the Bills of Mortality in 1662. In it, he analysed the mortality rolls in London before the Great Plague, presented one of the first life tables,
and reported time trends for many diseases, new and old. He provided
statistical evidence for many theories on disease, and also refuted some
widespread ideas on them.
John Snow
is famous for his investigations into the causes of the 19th-century
cholera epidemics, and is also known as the father of (modern)
epidemiology.
He began by noticing the significantly higher death rates in two
areas supplied by the Southwark Company. His identification of the Broad Street
pump as the cause of the Soho epidemic is considered the classic
example of epidemiology. Snow used chlorine in an attempt to clean the
water and removed the handle; this ended the outbreak. This has been
perceived as a major event in the history of public health and regarded as the founding event of the science of epidemiology, having helped shape public health policies around the world.
However, Snow's research and preventive measures to avoid further
outbreaks were not fully accepted or put into practice until after his
death.
Other pioneers include Danish physician Peter Anton Schleisner, who in 1849 related his work on the prevention of the epidemic of neonatal tetanus on the Vestmanna Islands in Iceland. Another important pioneer was Hungarian physician Ignaz Semmelweis,
who in 1847 brought down infant mortality at a Vienna hospital by
instituting a disinfection procedure. His findings were published in
1850, but his work was ill-received by his colleagues, who discontinued
the procedure. Disinfection did not become widely practiced until
British surgeon Joseph Lister 'discovered' antiseptics in 1865 in light of the work of Louis Pasteur.
In the late 20th century, with the advancement of biomedical
sciences, a number of molecular markers in blood, other biospecimens and
environment were identified as predictors of development or risk of a
certain disease. Epidemiology research to examine the relationship
between these biomarkers analyzed at the molecular level and disease was broadly named "molecular epidemiology". Specifically, "genetic epidemiology"
has been used for epidemiology of germline genetic variation and
disease. Genetic variation is typically determined using DNA from
peripheral blood leukocytes. Since the 2000s, genome-wide association studies (GWAS) have been commonly performed to identify genetic risk factors for many diseases and health conditions.
While most molecular epidemiology studies are still using conventional disease diagnosis
and classification systems, it is increasingly recognized that disease
progression represents inherently heterogeneous processes differing from
person to person. Conceptually, each individual has a unique disease
process different from any other individual ("the unique disease
principle"), considering uniqueness of the exposome
(a totality of endogenous and exogenous / environmental exposures) and
its unique influence on molecular pathologic process in each individual.
Studies to examine the relationship between an exposure and molecular
pathologic signature of disease (particularly cancer) became increasingly common throughout the 2000s. However, the use of molecular pathology in epidemiology posed unique challenges, including lack of research guidelines and standardized statistical methodologies, and paucity of interdisciplinary experts and training programs.
Furthermore, the concept of disease heterogeneity appears to conflict
with the long-standing premise in epidemiology that individuals with the
same disease name have similar etiologies and disease processes. To
resolve these issues and advance population health science in the era of
molecular precision medicine, "molecular pathology" and "epidemiology" were integrated to create a new interdisciplinary field of "molecular pathological epidemiology" (MPE), defined as "epidemiology of molecular pathology
and heterogeneity of disease". In MPE, investigators analyze the
relationships between (A) environmental, dietary, lifestyle and genetic
factors; (B) alterations in cellular or extracellular molecules; and (C)
evolution and progression of disease. A better understanding of
heterogeneity of disease pathogenesis will further contribute to elucidate etiologies of disease. The MPE approach can be applied to not only neoplastic diseases but also non-neoplastic diseases. The concept and paradigm of MPE have become widespread in the 2010s.
By 2012 it was recognized that many pathogens' evolution
is rapid enough to be highly relevant to epidemiology, and that
therefore much could be gained from an interdisciplinary approach to
infectious disease integrating epidemiology and molecular evolution to
"inform control strategies, or even patient treatment."
Types of studies
Epidemiologists employ a range of study designs from the
observational to experimental and generally categorized as descriptive,
analytic (aiming to further examine known associations or hypothesized
relationships), and experimental (a term often equated with clinical or
community trials of treatments and other interventions). In
observational studies, nature is allowed to "take its course," as
epidemiologists observe from the sidelines. Conversely, in experimental
studies, the epidemiologist is the one in control of all of the factors
entering a certain case study. Epidemiological studies are aimed, where possible, at revealing unbiased relationships between exposures such as alcohol or smoking, biological agents, stress, or chemicals and mortality or morbidity.
The identification of causal relationships between these exposures and
outcomes is an important aspect of epidemiology. Modern epidemiologists
use informatics as a tool.
Observational studies have two components, descriptive and
analytical. Descriptive observations pertain to the "who, what, where
and when of health-related state occurrence". However, analytical
observations deal more with the ‘how’ of a health-related event. Experimental epidemiology
contains three case types: randomized controlled trials (often used for
new medicine or drug testing), field trials (conducted on those at a
high risk of contracting a disease), and community trials (research on
diseases of social origin).
The term 'epidemiologic triad' is used to describe the intersection of Host, Agent, and Environment in analyzing an outbreak.
Case series
Case-series may refer to the qualitative study of the experience of a single patient, or of a small group of patients with a similar diagnosis, or to a statistical technique comparing periods during which patients are exposed to some factor with the potential to produce illness with periods when they are unexposed.
The former type of study is purely descriptive and cannot be used
to make inferences about the general population of patients with that
disease. These types of studies, in which an astute clinician identifies
an unusual feature of a disease or a patient's history, may lead to a
formulation of a new hypothesis. Using the data from the series,
analytic studies could be done to investigate possible causal factors.
These can include case-control studies or prospective studies. A
case-control study would involve matching comparable controls without
the disease to the cases in the series. A prospective study would
involve following the case series over time to evaluate the disease's
natural history.
The latter type, more formally described as self-controlled
case-series studies, divide individual patient follow-up time into
exposed and unexposed periods and use fixed-effects Poisson regression
processes to compare the incidence rate of a given outcome between
exposed and unexposed periods. This technique has been extensively used
in the study of adverse reactions to vaccination and has been shown in
some circumstances to provide statistical power comparable to that
available in cohort studies.
Case-control studies
Case-control studies
select subjects based on their disease status. It is a retrospective
study. A group of individuals that are disease positive (the "case"
group) is compared with a group of disease negative individuals (the
"control" group). The control group should ideally come from the same
population that gave rise to the cases. The case-control study looks
back through time at potential exposures that both groups (cases and
controls) may have encountered. A 2×2 table is constructed, displaying
exposed cases (A), exposed controls (B), unexposed cases (C) and
unexposed controls (D). The statistic generated to measure association
is the odds ratio
(OR), which is the ratio of the odds of exposure in the cases (A/C) to
the odds of exposure in the controls (B/D), i.e. OR = (AD/BC).
            Cases    Controls
Exposed       A         B
Unexposed     C         D
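As a sketch, the odds ratio can be computed from such a table as follows (the helper function is hypothetical; the counts are taken from the odds-ratio-1.5 example further below):

    def odds_ratio(a, b, c, d):
        """OR = (A/C) / (B/D) = AD / BC for a 2x2 exposure-disease table."""
        return (a * d) / (b * c)

    # a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls
    print(f"OR = {odds_ratio(103, 84, 84, 103):.2f}")   # ~1.50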
If the OR is significantly greater than 1, then the conclusion is
"those with the disease are more likely to have been exposed," whereas
if it is close to 1 then the exposure and disease are not likely
associated. If the OR is far less than one, then this suggests that the
exposure is a protective factor in the causation of the disease.
Case-control studies are usually faster and more cost-effective than cohort studies but are sensitive to bias (such as recall bias and selection bias).
The main challenge is to identify the appropriate control group; the
distribution of exposure among the control group should be
representative of the distribution in the population that gave rise to
the cases. This can be achieved by drawing a random sample from the
original population at risk. This has as a consequence that the control
group can contain people with the disease under study when the disease
has a high attack rate in a population.
A major drawback of case-control studies is that, in order to be considered statistically significant, the minimum number of cases required at the 95% confidence interval is related to the odds ratio by the equation:

total cases = A + C = 1.96² (1 + N) × (1/ln(OR))² × (OR + 2√OR + 1)/√OR

where N is the ratio of cases to controls. As the odds ratio approaches 1, ln(OR) approaches 0, rendering case-control studies all but useless for low odds ratios. For instance, for an odds ratio of 1.5 and cases = controls, the table shown above would look like this:
            Cases    Controls
Exposed      103        84
Unexposed     84       103
For an odds ratio of 1.1:

            Cases    Controls
Exposed     1732      1652
Unexposed   1652      1732
Cohort studies
Cohort studies
select subjects based on their exposure status. The study subjects
should be at risk of the outcome under investigation at the beginning of
the cohort study; this usually means that they should be disease free
when the cohort study starts. The cohort is followed through time to
assess their later outcome status. An example of a cohort study would be
the investigation of a cohort of smokers and non-smokers over time to
estimate the incidence of lung cancer. The same 2×2 table is constructed
as with the case control study. However, the point estimate generated
is the relative risk (RR), which is the probability of disease for a person in the exposed group, Pe = A / (A + B) over the probability of disease for a person in the unexposed group, Pu = C / (C + D), i.e. RR = Pe / Pu.
            Case    Non-case    Total
Exposed       A        B       (A + B)
Unexposed     C        D       (C + D)
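A corresponding sketch for the relative risk (the cohort counts are invented for illustration):

    def relative_risk(a, b, c, d):
        """RR = Pe / Pu, with Pe = A/(A+B) and Pu = C/(C+D)."""
        pe = a / (a + b)    # risk of disease in the exposed group
        pu = c / (c + d)    # risk of disease in the unexposed group
        return pe / pu

    # Illustrative cohort: 30/1000 exposed vs 10/1000 unexposed develop disease
    print(f"RR = {relative_risk(30, 970, 10, 990):.1f}")   # RR = 3.0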
As with the OR, a RR greater than 1 shows association, where the
conclusion can be read "those with the exposure were more likely to
develop disease."
Prospective studies have many benefits over case control studies.
The RR is a more powerful effect measure than the OR, as the OR is just
an estimation of the RR, since true incidence cannot be calculated in a
case control study where subjects are selected based on disease status.
Temporality can be established in a prospective study, and confounders
are more easily controlled for. However, they are more costly, and there
is a greater chance of losing subjects to follow-up based on the long
time period over which the cohort is followed.
Cohort studies also are limited by the same equation for the number of cases as case-control studies, but, if the base incidence rate in the study population is very low, the number of cases required is reduced by ½.
Causal inference
Although epidemiology is sometimes viewed as a collection of
statistical tools used to elucidate the associations of exposures to
health outcomes, a deeper understanding of this science is that of
discovering causal relationships.
"Correlation does not imply causation" is a common theme for much of the epidemiological literature. For epidemiologists, the key is in the term inference.
Correlation, or at least association between two variables, is a
necessary but not sufficient criterion for inference that one variable
causes the other. Epidemiologists use gathered data and a broad range of
biomedical and psychosocial theories in an iterative way to generate or
expand theory, to test hypotheses, and to make educated, informed
assertions about which relationships are causal, and about exactly how
they are causal.
Epidemiologists emphasize that the "one cause – one effect" understanding is a simplistic mis-belief. Most outcomes, whether disease or death, are caused by a chain or web consisting of many component causes.
Causes can be distinguished as necessary, sufficient or probabilistic
conditions. If a necessary condition can be identified and controlled
(e.g., antibodies to a disease agent, energy in an injury), the harmful
outcome can be avoided (Robertson, 2015).
Bradford Hill criteria
In 1965, Austin Bradford Hill proposed a series of considerations to help assess evidence of causation, which have come to be commonly known as the "Bradford Hill criteria".
In contrast to the explicit intentions of their author, Hill's
considerations are now sometimes taught as a checklist to be implemented
for assessing causality.
Hill himself said "None of my nine viewpoints can bring indisputable
evidence for or against the cause-and-effect hypothesis and none can be
required sine qua non."
Strength of Association: A small association does not
mean that there is not a causal effect, though the larger the
association, the more likely that it is causal.
Consistency of Data: Consistent findings observed by
different persons in different places with different samples strengthens
the likelihood of an effect.
Specificity: Causation is likely if there is a very specific population at a specific site with a specific disease and no other likely explanation. The more specific an association between a factor and an effect is, the bigger the probability of a causal relationship.
Temporality: The effect has to occur after the cause (and if
there is an expected delay between the cause and expected effect, then
the effect must occur after that delay).
Biological gradient: Greater exposure should generally lead
to greater incidence of the effect. However, in some cases, the mere
presence of the factor can trigger the effect. In other cases, an
inverse proportion is observed: greater exposure leads to lower
incidence.
Plausibility: A plausible mechanism between cause and effect
is helpful (but Hill noted that knowledge of the mechanism is limited by
current knowledge).
Coherence: Coherence between epidemiological and laboratory
findings increases the likelihood of an effect. However, Hill noted that
"... lack of such [laboratory] evidence cannot nullify the
epidemiological effect on associations".
Experiment: "Occasionally it is possible to appeal to experimental evidence".
Analogy: The effect of similar factors may be considered.
Legal interpretation
Epidemiological studies can only go to prove that an agent could have caused, but not that it did cause, an effect in any particular case:
"Epidemiology is concerned with the incidence
of disease in populations and does not address the question of the
cause of an individual's disease. This question, sometimes referred to
as specific causation, is beyond the domain of the science of
epidemiology. Epidemiology has its limits at the point where an
inference is made that the relationship between an agent and a disease
is causal (general causation) and where the magnitude of excess risk
attributed to the agent has been determined; that is, epidemiology
addresses whether an agent can cause a disease, not whether an agent did
cause a specific plaintiff's disease."
In United States law, epidemiology alone cannot prove that a causal
association does not exist in general. Conversely, it can be (and is in
some circumstances) taken by US courts, in an individual case, to
justify an inference that a causal association does exist, based upon a
balance of probability.
The subdiscipline of forensic epidemiology is directed at the
investigation of specific causation of disease or injury in individuals
or groups of individuals in instances in which causation is disputed or
is unclear, for presentation in legal settings.
Population-based health management
Epidemiological
practice and the results of epidemiological analysis make a significant
contribution to emerging population-based health management frameworks.
Population-based health management encompasses the ability to:
Assess the health states and health needs of a target population;
Implement and evaluate interventions that are designed to improve the health of that population; and
Efficiently and effectively provide care for members of that
population in a way that is consistent with the community's cultural,
policy and health resource values.
Modern population-based health management is complex, requiring a broad set of skills (medical, political, technological, mathematical, etc.) of which epidemiological practice and analysis is a core component, unified with management science to provide efficient and effective health care and health guidance to a population. This task
requires the forward-looking ability of modern risk management
approaches that transform health risk factors, incidence, prevalence and
mortality statistics (derived from epidemiological analysis) into
management metrics that not only guide how a health system responds to
current population health issues but also how a health system can be
managed to better respond to future potential population health issues.
Examples of organizations that use population-based health management leveraging the work and results of epidemiological practice include the Canadian Strategy for Cancer Control, the Health Canada Tobacco Control Programs, the Rick Hansen Foundation, and the Canadian Tobacco Control Research Initiative.
Each of these organizations uses a population-based health
management framework called Life at Risk that combines epidemiological
quantitative analysis with demographics, health agency operational
research and economics to perform:
Population Life Impacts Simulations: Measurement of the
future potential impact of disease upon the population with respect to
new disease cases, prevalence, premature death as well as potential
years of life lost from disability and death;
Labour Force Life Impacts Simulations: Measurement of the
future potential impact of disease upon the labour force with respect to
new disease cases, prevalence, premature death and potential years of
life lost from disability and death;
Economic Impacts of Disease Simulations: Measurement of the
future potential impact of disease upon private sector disposable income
impacts (wages, corporate profits, private health care costs) and
public sector disposable income impacts.
Applied field epidemiology
Applied
epidemiology is the practice of using epidemiological methods to
protect or improve the health of a population. Applied field
epidemiology can include investigating communicable and non-communicable
disease outbreaks, mortality and morbidity rates, and nutritional
status, among other indicators of health, with the purpose of
communicating the results to those who can implement appropriate
policies or disease control measures.
Humanitarian context
As
the surveillance and reporting of diseases and other health factors
becomes increasingly difficult in humanitarian crisis situations, the
methodologies used to report the data are compromised. One study found
that less than half (42.4%) of nutrition surveys sampled from
humanitarian contexts correctly calculated the prevalence of
malnutrition and only one-third (35.3%) of the surveys met the criteria
for quality. Among the mortality surveys, only 3.2% met the criteria for
quality. As nutritional status and mortality rates help indicate the
severity of a crisis, the tracking and reporting of these health factors
is crucial.
Vital registries are usually the most effective ways to collect
data, but in humanitarian contexts these registries can be non-existent,
unreliable, or inaccessible. As such, mortality is often inaccurately
measured using either prospective demographic surveillance or
retrospective mortality surveys. Prospective demographic surveillance
requires much manpower and is difficult to implement in a spread-out
population. Retrospective mortality surveys are prone to selection and
reporting biases. Other methods are being developed, but are not common
practice yet.
Validity: precision and bias
Different
fields in epidemiology have different levels of validity. One way to
assess the validity of findings is the ratio of false-positives (claimed
effects that are not correct) to false-negatives (studies which fail to
support a true effect). To take the field of genetic epidemiology,
candidate-gene studies produced over 100 false-positive findings for
each false-negative. By contrast, genome-wide association studies appear
close to the reverse, with only one false positive for every 100 or more
false-negatives.
This ratio has improved over time in genetic epidemiology as the field
has adopted stringent criteria. By contrast, other epidemiological
fields have not required such rigorous reporting and are much less
reliable as a result.
Random error
Random
error is the result of fluctuations around a true value because of
sampling variability. Random error is just that: random. It can occur
during data collection, coding, transfer, or analysis. Examples of
random error include: poorly worded questions, a misunderstanding in
interpreting an individual answer from a particular respondent, or a
typographical error during coding. Random error affects measurement in a
transient, inconsistent manner and it is impossible to correct for
random error.
There is random error in all sampling procedures. This is called sampling error.
Precision in epidemiological variables is a measure of random
error. Precision is also inversely related to random error, so that to
reduce random error is to increase precision. Confidence intervals are
computed to demonstrate the precision of relative risk estimates. The
narrower the confidence interval, the more precise the relative risk
estimate.
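As a sketch of how such a confidence interval is often computed for a relative risk estimate, here is the standard log-transform (Katz) method in Python; the 2×2 counts are invented for illustration:

    import math

    def rr_confidence_interval(a, b, c, d, z=1.96):
        """95% CI for RR via the log method: ln(RR) +/- z * SE,
        with SE = sqrt(1/A - 1/(A+B) + 1/C - 1/(C+D))."""
        rr = (a / (a + b)) / (c / (c + d))
        se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
        lo = math.exp(math.log(rr) - z * se)
        hi = math.exp(math.log(rr) + z * se)
        return rr, lo, hi

    rr, lo, hi = rr_confidence_interval(30, 970, 10, 990)
    print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")   # a wider CI means less precision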
There are two basic ways to reduce random error in an epidemiological study.
The first is to increase the sample size of the study. In other words,
add more subjects to your study. The second is to reduce the variability
in measurement in the study. This might be accomplished by using a more
precise measuring device or by increasing the number of measurements.
Note that if the sample size or number of measurements is
increased, or a more precise measuring tool is purchased, the costs of
the study are usually increased. There is usually an uneasy balance
between the need for adequate precision and the practical issue of study
cost.
Systematic error
A
systematic error or bias occurs when there is a difference between the
true value (in the population) and the observed value (in the study)
from any cause other than sampling variability. An example of systematic
error is if, unknown to you, the pulse oximeter
you are using is set incorrectly and adds two points to the true value
each time a measurement is taken. The measuring device could be precise but not accurate.
Because the error happens in every instance, it is systematic.
Conclusions you draw based on that data will still be incorrect. But the
error can be reproduced in the future (e.g., by using the same mis-set
instrument).
A mistake in coding that affects all responses for that particular question is another example of a systematic error.
The validity of a study is dependent on the degree of systematic error. Validity is usually separated into two components:
Internal validity
is dependent on the amount of error in measurements, including
exposure, disease, and the associations between these variables. Good
internal validity implies a lack of error in measurement and suggests
that inferences may be drawn at least as they pertain to the subjects
under study.
External validity
pertains to the process of generalizing the findings of the study to
the population from which the sample was drawn (or even beyond that
population to a more universal statement). This requires an
understanding of which conditions are relevant (or irrelevant) to the
generalization. Internal validity is clearly a prerequisite for external
validity.
Selection bias
Selection bias
occurs when study subjects are selected or become part of the study as a
result of a third, unmeasured variable which is associated with both
the exposure and outcome of interest.
For instance, it has repeatedly been noted that cigarette smokers and
non-smokers tend to differ in their study participation rates. (Sackett D
cites the example of Seltzer et al., in which 85% of non-smokers and
67% of smokers returned mailed questionnaires.)
It is important to note that such a difference in response will not
lead to bias if it is not also associated with a systematic difference
in outcome between the two response groups.
Information bias
Information bias is bias arising from systematic error in the assessment of a variable.
An example of this is recall bias. A typical example is again provided
by Sackett in his discussion of a study examining the effect of specific
exposures on fetal health: "in questioning mothers whose recent
pregnancies had ended in fetal death or malformation (cases) and a
matched group of mothers whose pregnancies ended normally (controls) it
was found that 28% of the former, but only 20% of the latter, reported
exposure to drugs which could not be substantiated either in earlier
prospective interviews or in other health records".
In this example, recall bias probably occurred as a result of women who
had had miscarriages having an apparent tendency to better recall and
therefore report previous exposures.
Confounding
Confounding
has traditionally been defined as bias arising from the co-occurrence
or mixing of effects of extraneous factors, referred to as confounders,
with the main effect(s) of interest. A more recent definition of confounding invokes the notion of counterfactual effects.
According to this view, when one observes an outcome of interest, say
Y=1 (as opposed to Y=0), in a given population A which is entirely
exposed (i.e. exposure X = 1 for every unit of the population) the risk of this event will be RA1. The counterfactual or unobserved risk RA0 corresponds to the risk which would have been observed if these same individuals had been unexposed (i.e. X = 0 for every unit of the population). The true effect of exposure therefore is: RA1 − RA0 (if one is interested in risk differences) or RA1/RA0 (if one is interested in relative risk). Since the counterfactual risk RA0 is unobservable we approximate it using a second population B and we actually measure the following relations: RA1 − RB0 or RA1/RB0. In this situation, confounding occurs when RA0 ≠ RB0. (NB: Example assumes binary outcome and exposure variables.)
Some epidemiologists prefer to think of confounding separately
from common categorizations of bias since, unlike selection and
information bias, confounding stems from real causal effects.
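A toy numeric sketch of this counterfactual definition (all risks are hypothetical):

    # Hypothetical risks for an entirely exposed population A and a
    # substitute unexposed population B.
    RA1 = 0.30   # observed risk in A under exposure (X = 1)
    RA0 = 0.10   # counterfactual risk in A had it been unexposed (unobservable)
    RB0 = 0.20   # observed risk in the substitute population B (X = 0)

    true_rr     = RA1 / RA0   # 3.0 - the causal effect we want
    measured_rr = RA1 / RB0   # 1.5 - what we actually estimate
    print(f"confounded: {RA0 != RB0}, true RR = {true_rr}, measured RR = {measured_rr}")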
The profession
Few universities have offered epidemiology as a course of study at the undergraduate level. One notable undergraduate program exists at Johns Hopkins University,
where students who major in public health can take graduate level
courses, including epidemiology, during their senior year at the Bloomberg School of Public Health.
As public health/health protection practitioners, epidemiologists
work in a number of different settings. Some epidemiologists work 'in
the field'; i.e., in the community, commonly in a public health/health
protection service, and are often at the forefront of investigating and
combating disease outbreaks. Others work for non-profit organizations,
universities, hospitals and larger government entities such as state and
local health departments, various Ministries of Health, Doctors without Borders, the Centers for Disease Control and Prevention (CDC), the Health Protection Agency, the World Health Organization (WHO), or the Public Health Agency of Canada.
Epidemiologists can also work in for-profit organizations such as
pharmaceutical and medical device companies in groups such as market
research or clinical development.
Covid-19
An April 2020 University of Southern California article noted that "The coronavirus epidemic...
thrust epidemiology – the study of the incidence, distribution and
control of disease in a population – to the forefront of scientific
disciplines across the globe and even made temporary celebrities out of
some of its practitioners."
On June 8, 2020, The New York Times published results of its survey of 511 epidemiologists
asked "when they expect to resume 20 activities of daily life"; 52% of
those surveyed expected to stop "routinely wearing a face covering" in
one year or more.
Visual agnosia
is an impairment in recognition of visually presented objects. It is
not due to a deficit in vision (acuity, visual field, and scanning),
language, memory, or intellect. While cortical blindness
results from lesions to primary visual cortex, visual agnosia is often
due to damage to more anterior cortex such as the posterior occipital and/or temporal lobe(s) in the brain. There are two types of visual agnosia: apperceptive agnosia and associative agnosia.
Recognition of visual objects
occurs at two primary levels. At an apperceptive level, the features of
the visual information from the retina are put together to form a
perceptual representation of an object. At an associative level, the
meaning of an object is attached to the perceptual representation and
the object is identified.
If a person is unable to recognize objects because they cannot perceive
correct forms of the objects, although their knowledge of the objects
is intact (i.e. they do not have anomia),
they have apperceptive agnosia. If a person correctly perceives the
forms and has knowledge of the objects, but cannot identify the objects,
they have associative agnosia.
Symptoms
While
most cases of visual agnosia are seen in older adults who have
experienced extensive brain damage, there are also cases of young
children who acquire the symptoms with less extensive brain damage
during the developmental years.
Commonly, visual agnosia presents as an inability to recognize an object in the absence of other explanations, such as blindness or partial blindness, anomia, memory loss, etc. Other common manifestations of visual agnosia that are generally tested for include difficulty identifying objects that look similar in shape, difficulty identifying line drawings of objects, and difficulty recognizing objects that are shown from less common views, such as a horse from a top-down view.
Within any given patient, a variety of symptoms can occur, and the impairment is not binary but can range in severity. For example, Patient SM is a prosopagnosic with a unilateral lesion to the left extrastriate cortex, due to an accident in his twenties, who displays behavior similar to congenital prosopagnosia.
Although he can recognize facial features and emotions – indeed he
sometimes uses a standout feature to recognize a face – face recognition
is almost impossible purely from visual stimuli, even for faces of
friends, family, and himself. The disorder also affects his memory of
faces, both in storing new memories of faces and recalling stored
memories.
Nevertheless, it is important to note that the symptoms reach into
other domains. SM's object recognition is similarly impaired, though not
entirely absent; when given line drawings to identify, he was able to give
names of objects with properties similar to the drawing, implying that
he is able to see the features of the drawing. Similarly, copying a
line drawing of a beach scene led to a simplified version of the
drawing, though the main features were accounted for. For recognition of
places, he is still impaired but familiar places are remembered and new
places can be stored into memory.
Pathophysiology
Visual
agnosia occurs after damage to visual association cortex or to parts of
the ventral stream of vision, known as the "what pathway" of vision for
its role in object recognition.
This occurs even when no damage has been done to the eyes or to the optic tract that carries visual information into the brain; in fact, visual agnosia is diagnosed only when symptoms cannot be explained by such damage. Damage to specific areas of the ventral stream impairs the ability to recognize certain categories of visual information, as in the case of prosopagnosia.
Patients with visual agnosia generally do not have damage to the dorsal stream of vision, known as the "where pathway" of vision because of its role in determining an object's position in space; this allows individuals with visual agnosia to show relatively normal visually guided behavior.
For example, patient DF had lesions to the ventral surface that gave her apperceptive agnosia. One of the tasks she was tested on required her to place a card through a thin slot that could be rotated into any orientation. Since an apperceptive agnosic cannot recognize the slot, she would be expected to be unable to place the card into it correctly. Indeed, when she was asked to report the direction of the slot, her responses were no better than chance. Yet when she was asked to place the card into the slot, her success was almost at the level of the controls. This implies that in the event of a ventral stream deficit, the dorsal stream can help with the processing of spatial information to aid movement, regardless of object recognition.
More specifically, the lateral occipital complex appears to respond to many different types of objects.
Prosopagnosia (the inability to recognize faces) is due to damage to the fusiform face area (FFA), an area in the fusiform gyrus of the temporal lobe that has been strongly associated with facial recognition.
However, this area is not exclusive to faces; recognition of other objects of expertise is also processed in this area. The extrastriate body area (EBA) was found to be activated by photographs, silhouettes, or stick drawings of human bodies.
The parahippocampal place area (PPA) of the limbic cortex has been
found to be activated by the sight of scenes and backgrounds.
Cerebral achromatopsia (the inability to discriminate between
different hues) is caused by damage to the V8 area of the visual
association cortex.
The left hemisphere seems to play a critical role in recognizing the meaning of common objects.
Diagnosis
Classification
Broadly, visual agnosia is divided into apperceptive and associative visual agnosia.
Apperceptive agnosia is a failure of object recognition even when basic visual functions (acuity, color, motion) and other mental processes, such as language and intelligence, are normal.
The brain must correctly integrate features such as edges, light
intensity, and color from sensory information to form a complete percept
of an object. If a failure occurs during this process, a percept of an
object is not fully formed and thus it cannot be recognized.
Tasks requiring copying, matching, or drawing simple figures can distinguish individuals with apperceptive agnosia, because they cannot perform such tasks.
Associative agnosia is an inability to identify objects even with
apparent perception and knowledge of them. It involves a higher level
of processing than apperceptive agnosia.
Individuals with associative agnosia can copy or match simple figures, indicating that they can perceive objects correctly. They also display knowledge of objects when tested with tactile or verbal information.
However, when tested visually, they cannot name or describe common
objects. This means that there is an impairment in associating the perception of objects with the stored knowledge of them.
Although visual agnosia can be general, there exist many variants that impair recognition of specific types of stimuli. These variants of visual agnosia include prosopagnosia (inability to recognize faces), pure word blindness (inability to recognize words, often called "agnosic alexia" or "pure alexia"), agnosia for colors (inability to differentiate colors), agnosia for the environment (inability to recognize landmarks or difficulty with the spatial layout of an environment, i.e. topographagnosia) and simultanagnosia (inability to sort out multiple objects in a visual scene).
Categories and subtypes of visual agnosia
The two main categories of visual agnosia are:
Apperceptive visual agnosia, impaired object recognition. Individuals with apperceptive visual agnosia cannot form a whole percept of visual information.
Associative visual agnosia, impaired object identification. Individuals with associative agnosia cannot give a meaning to a formed percept. The percept is created, but it has no meaning for individuals who have associative agnosia.
Prosopagnosia, an inability to recognize human faces.
Individuals with prosopagnosia know that they are looking at faces, but
cannot recognize people by the sight of their face, even people whom
they know well.
Simultanagnosia, an inability to recognize multiple objects in a scene, including distinguishing distinct objects within a spatial layout and distinguishing between "local" and "global" objects, such as being able to see a tree but not the forest, or vice versa.
Topographagnosia, an inability to process the spatial layout of an environment, including landmark agnosia (difficulty recognizing buildings and places), difficulty building mental maps of a location or scene, and/or an inability to discern the orientation between objects in space.
Pure alexia, an inability to read.
Orientation agnosia, an inability to judge or determine the orientation of objects.
Pantomime agnosia, an inability to understand pantomimes (gestures). Inferior regions of the visual cortex appear to be critical in recognizing pantomimes.
Patient CK
Background
Patient
C.K. was born in 1961 in England and emigrated to Canada in 1980. In
January 1988, C.K. sustained a head injury from a motor vehicle accident
while out for a jog. Following the accident, C.K. experienced a number of cognitive and mood problems, including poor memory, mood swings, and temper outbursts. C.K.
also had motor weakness on the left side and a left homonymous
hemianopia. He recovered well, retaining normal intelligence and normal
visual acuity. He was able to complete a master's degree in history, later working as a manager at a large corporation. Although his recovery was
successful in other areas of cognition, C.K. still struggles to make
sense of the visual world.
Associative visual agnosia
Magnetic resonance imaging (MRI) showed bilateral thinning of C.K.'s occipital lobe, which resulted in associative visual agnosia.
Patients who suffer from visual agnosia are unable to identify visually presented objects. They can identify these objects through other modalities, such as touch, but cannot do so when the objects are presented visually. Associative agnosic patients cannot create a detailed representation of the visual world in their brains; they can perceive only elements of whole objects. They also cannot form associations between objects or assign meaning to objects.
C.K. makes many mistakes when trying to identify objects. For
example, he called an abacus "skewers on a kebab" and a badminton
racquet a "fencer's mask". A dart was a "feather duster" and a
protractor was mistaken for a "cockpit". Despite this impairment in
visual object recognition, C.K. retained many abilities such as drawing,
visual imagery, and internal imagery. As a native of England, he was
tasked with drawing England, marking London and where he was born. His
accurate drawing of England is just one example of his excellent drawing
abilities.
As noted above, C.K. is able to identify parts of objects but cannot generate a whole representation. It is therefore not surprising that his visual imagery for object size, shape, and color is intact.
For example, when shown a picture of an animal, he can correctly answer
questions such as "are the ears up or down?" and "is the tail long or
short?" He can correctly identify colors, for example that the inside of
a cantaloupe is orange.
Finally, C.K. can generate internal images and perceive these generated objects. For example, Finke, Pinker, and Farah instructed C.K. to imagine a 'B' rotated 90 degrees to the left, a triangle placed below it, and the line in the middle removed. C.K. correctly identified the resulting object as a heart by picturing this transformation in his head.
Evidence for double dissociation between face and object processing
Patient
C.K. provided evidence for a double dissociation between face
processing and visual object processing. Patients with prosopagnosia
have damage to the Fusiform Face Area (FFA) and are unable to recognize
upright faces. C.K. has no difficulty with face processing and matches
the performance of controls when tasked with identifying upright famous
faces. However, when shown inverted faces of famous people, C.K. performs significantly worse than controls, because processing inverted faces relies on a piecemeal, feature-by-feature strategy. C.K.'s performance contrasts with that of patients with prosopagnosia, who are impaired in face processing but perform well at identifying inverted faces. This was the first evidence for a double dissociation between face and object processing, suggesting a face-specific processing system.
In popular culture
In the graphic novel Preacher,
the character Lorie suffers from an extreme version of agnosia
resulting from being born with a single eye. For example, she perceives
Arseface, a man with severe facial deformities, as resembling a young James Dean.
Val Kilmer's character suffers from visual agnosia in the film At First Sight.
In "Folie à Deux", a fifth-season episode of the X Files,
Mulder succumbs to the same belief as telemarketer Gary Lambert, that
his boss Greg Pincus is a monster who disguises his true appearance by
means of hypnosis.
Scully, although believing this notion preposterous, suggests that what
Mulder describes is analogous to an induced visual agnosia.
The short story Liking What You See: A Documentary by Ted Chiang examines the cultural effects of a noninvasive medical procedure that induces a visual agnosia toward physical beauty.