Map: grid parity for solar PV systems around the world, distinguishing regions that reached grid parity before 2014, after 2014, or only for peak prices, plus U.S. states poised to reach grid parity. Source: Deutsche Bank, as of February 2015.
Grid parity (or socket parity) occurs when an alternative energy source can generate power at a levelized cost of electricity (LCOE) that is less than or equal to the price of power from the electricity grid. The term is most commonly used when discussing renewable energy sources, notably solar power and wind power. Grid parity depends on whether the calculation is made from the point of view of a utility or of a retail consumer.
Reaching grid parity is considered to be the point at which an
energy source becomes a contender for widespread development without subsidies
or government support. It is widely believed that a wholesale shift in
generation to these forms of energy will take place when they reach grid
parity.
Germany was one of the first countries to reach grid parity for solar PV, doing so in 2011 for utility-scale plants and in 2012 for rooftop systems. By January 2014, grid parity for solar PV systems had been reached in at least nineteen countries.
Wind power reached grid parity in some places in Europe in the mid-2000s, and has continued to fall in price.
Overview
The
price of electricity from the grid is complex. Most power sources in
the developed world are generated in industrial scale plants developed
by private or public consortia. The company providing the power and the
company delivering that power to the customers are often separate
entities that enter into a Power Purchase Agreement setting a fixed rate for all of the power delivered by the plant. At the other end of the wire, the local distribution company (LDC) charges rates that cover its power purchases from the variety of producers it uses.
This relationship is not straightforward; for instance, an LDC may buy large amounts of base load power from a nuclear plant at a low fixed cost and then buy peaking power only as required from natural gas peakers at a much higher cost, perhaps five to six times as much. Depending on its billing policy, the LDC might bill the customer at a flat rate combining the two rates it pays, or alternatively under a time-based pricing policy that tries to match input costs with customer prices more closely.
As a result of these policies, the exact definition of "grid
parity" varies not only from location to location, but customer to
customer and even hour to hour.
For instance, wind power
connects to the grid on the distribution side (as opposed to the
customer side). This means it competes with other large forms of
industrial-scale power like hydro, nuclear or coal-fired plants, which
are generally inexpensive forms of power. Additionally, the generator
will be charged by the distribution operator to carry the power to the
markets, adding to their levelized costs.
Solar has the advantage of scaling easily from systems as small as a single solar panel placed on the customer's roof. In this case the system has to compete with the post-delivery retail price, which is generally much higher than the wholesale price at the same time.
It is also important to consider changes in grid pricing when
determining whether or not a source is at parity. For instance, the
introduction of time-of-use pricing and a general increase in power
prices in Mexico
during 2010 and 2011 resulted in many forms of renewable energy suddenly reaching grid parity. A drop in power prices, as happened in some locations due to the late-2000s recession, can likewise push systems that were formerly at parity back out of it.
In general terms, fuel prices continue to increase, while
renewable energy sources continue to reduce in up-front costs. As a
result, widespread grid parity for wind and solar was generally predicted for some time between 2015 and 2020.
Solar power
Grid parity is most commonly used in the field of solar power, and most specifically when referring to solar photovoltaics (PV). As PV systems do not use fuel and are largely maintenance-free, the levelized cost of electricity (LCOE) is dominated almost entirely by the capital cost of the system. With the assumption that the discount rate will be similar to the inflation rate
of grid power, the levelized cost can be calculated by dividing the
original capital cost by the total amount of electricity produced over
the system's lifetime.
As the LCOE of solar PV is dominated by the capital costs, and the capital costs by the panels, the wholesale prices of PV modules
are the main consideration when tracking grid parity. A 2015 study
shows price/kWh dropping by 10% per year since 1980, and predicts that
solar could contribute 20% of total electricity consumption by 2030,
whereas the International Energy Agency predicts 16% by 2050.
The price of electricity from these sources fell by a factor of roughly 25 between 1990 and 2010. The decline accelerated between late 2009 and mid-2011 due to oversupply, when the wholesale cost of solar modules dropped by approximately 70%.
These pressures have demanded efficiencies throughout the construction chain, so total installed costs have also fallen sharply. Adjusting for inflation, a solar module cost $96 per watt in the mid-1970s. Process improvements and a very large boost in production have brought that figure down 99 percent, to 68¢ per watt in February 2016, according to data from Bloomberg New Energy Finance. The downward move in pricing continues. Palo Alto, California, signed a wholesale purchase agreement in 2016 that secured solar power for 3.7 cents per kilowatt-hour. And in sunny Qatar, large-scale solar-generated electricity sold in 2020 for just $0.01567 per kWh, cheaper than any form of fossil-based electricity.
The average retail price of solar cells, as monitored by the Solarbuzz group, fell from $3.50/watt to $2.43/watt over the course of 2011, and a decline to prices below $2.00/watt seemed inevitable. Solarbuzz tracks retail prices, which include a large mark-up over wholesale prices, and systems are commonly installed by firms buying at the wholesale price. For this reason, total installation costs are commonly similar to the retail price of the panels alone. Recent total-system installation costs are around $2,500/kWp in Germany or $3,250/kWp in the UK. As of 2011, the capital cost of PV had fallen well below that of nuclear power and was set to fall further.
Knowing the expected production allows the calculation of the
LCOE. Modules are generally warranted for 25 years and suffer only minor
degradation during that time, so all that is needed to predict the
generation is the local insolation. According to PVWatts, a one-kilowatt system in Matsumoto, Nagano will produce 1,187 kilowatt-hours (kWh) of electricity a year. Over a 25-year lifetime, the system will
produce about 29,675 kWh (not accounting for the small effects of system
degradation, about 0.25% a year). If this system costs $5,000 to
install ($5 per watt),
very conservative compared to worldwide prices, the LCOE = 5,000/29,675
~= 17 cents per kWh. This is lower than the average Japanese
residential rate of ~19.5 cents, which means that, in this simple case
which skips the necessary time value of money calculation, PV had reached grid parity for residential users in Japan.
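The calculation above is simple enough to reproduce directly. Below is a minimal sketch in Python using the article's example figures; as in the text, it ignores module degradation and the time value of money:

```python
# Simplified LCOE: capital cost divided by lifetime generation.
# Assumes the discount rate roughly equals grid-price inflation,
# so no discounting, and ignores ~0.25%/year panel degradation.
capital_cost_usd = 5000       # installed cost of a 1 kW system
annual_output_kwh = 1187      # PVWatts estimate for Matsumoto, Nagano
lifetime_years = 25           # typical module warranty period

lifetime_output_kwh = annual_output_kwh * lifetime_years  # ~29,675 kWh
lcoe_usd_per_kwh = capital_cost_usd / lifetime_output_kwh

print(f"Lifetime output: {lifetime_output_kwh:,} kWh")
print(f"LCOE: {lcoe_usd_per_kwh * 100:.1f} cents/kWh")  # ~16.9 cents
```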
Reaching parity
Deciding
whether or not PV is at grid parity is more complex than for other sources,
due to a side-effect of one of its main advantages. Compared to most
sources, like wind turbines or hydro dams, PV can scale successfully to
systems as small as one panel or as large as millions. In the case of
small systems, they can be installed at the customer's location. In this
case the LCOE competes against the retail price of grid power,
which includes all upstream additions like transmission fees, taxes,
etc. In the example above, grid parity has been reached in Nagano.
However, retail prices are generally higher than wholesale prices, so
grid parity may not have been reached for the very same system installed
on the supply-side of the grid.
In order to encompass all of these possibilities, Japan's NEDO defines grid parity in three phases:
1st phase grid parity: residential grid-connected PV systems
2nd phase grid parity: industrial and transport sectors
3rd phase grid parity: commercial power generation
These categories are ranked in terms of the price of power they
displace; residential power is more expensive than commercial wholesale.
Thus, it is expected that the 1st phase would be reached earlier than
the 3rd phase.
Predictions from the 2006 time-frame expected retail grid parity for solar in the 2016 to 2020 era,
but due to rapid downward pricing changes, more recent calculations
have forced dramatic reductions in time scale, and the suggestion that
solar has already reached grid parity in a wide variety of locations. The European Photovoltaic Industry Association
(EPIA) calculated that PV would reach parity in many of the European
countries by 2020, with costs declining to about half of those of 2010.
However, this report was based on the prediction that prices would fall
36 to 51% between 2010 and 2020, a decrease that actually took place
during the year the report was authored. The parity line was claimed to
have been crossed in Australia in September 2011, and module prices have continued to fall since then.
Stanwell Corporation, an electricity generator owned by the Queensland government, made a loss in 2013 from its 4,000 MW of coal- and gas-fired generation. The company attributed this loss to the expansion of rooftop solar generation, which reduced the price of electricity during the day; on some days the price per MWh (usually A$40–50) fell to almost zero.
The Australian Government and Bloomberg New Energy Finance forecast the
production of energy by rooftop solar to rise sixfold between 2014 and
2024.
Since the early 2010s, photovoltaics have begun to compete in some places without subsidies. Shi Zhengrong said that, as of 2012, unsubsidised solar power was already competitive with fossil fuels in India, Hawaii, Italy and Spain. As PV system prices declined, it was inevitable that subsidies would end: "Solar power will be able to compete without subsidies against conventional power sources in half the world by 2015". In fact, recent evidence suggests that photovoltaic grid parity has already been reached in countries of the Mediterranean basin, such as Cyprus.
Predictions that a power source becomes self-supporting when
parity is reached appear to be coming true. According to many measures,
PV is the fastest growing source of power in the world:
For large-scale installations, prices below $1.00/watt are now
common. In some locations, PV has reached grid parity, the cost at which
it is competitive with coal or gas-fired generation. More generally, it
is now evident that, given a carbon price of $50/ton, which would raise
the price of coal-fired power by 5c/kWh, solar PV will be
cost-competitive in most locations. The declining price of PV has been
reflected in rapidly growing installations, totalling about 23 GW
in 2011. Although some consolidation is likely in 2012, as firms try to
restore profitability, strong growth seems likely to continue for the
rest of the decade. Already, by one estimate, total investment in
renewables for 2011 exceeded investment in carbon-based electricity
generation.
The dramatic price reductions in the PV industry have caused a number
of other power sources to become less interesting. Nevertheless, there
remained a widespread belief that concentrating solar power (CSP) would be even less expensive than PV, although it is suitable for industrial-scale projects only and thus has to compete at wholesale
pricing. One company stated in 2011 that CSP costs $0.12/kWh to produce
in Australia, and expected this to drop to $0.06/kWh by 2015 due to
improvements in technology and reductions in equipment manufacturing costs. Greentech Media predicted that LCOE of CSP and PV power would lower to $0.07–$0.12/kWh by 2020 in California.
Wind power
Grid
parity also applies to wind power where it varies according to wind
quality and existing distribution infrastructure. ExxonMobil predicted in 2011 that the real cost of wind power would approach parity with that of natural gas and coal without carbon sequestration, and be cheaper than natural gas and coal with carbon sequestration, by 2025.
Wind turbines reached grid parity in some areas of Europe in the
mid-2000s, and in the US around the same time. Falling prices continue
to drive the levelized cost down and it was suggested that it had
reached general grid parity in Europe in 2010, and would reach the same
point in the US around 2016 due to an expected reduction in capital
costs of about 12%.
Nevertheless, a significant amount of the wind power resource in North
America remained above grid parity due to the long transmission
distances involved. (see also OpenEI Database for cost of electricity by source).
Brain-reading or thought identification uses the responses of multiple voxels in the brain evoked by a stimulus, detected by fMRI, to decode the original stimulus. Advances in research have made this possible by using human neuroimaging to decode a person's conscious experience based on non-invasive measurements of an individual's brain activity.
Brain reading studies differ in the type of decoding
(i.e. classification, identification and reconstruction) employed, the
target (i.e. decoding visual patterns, auditory patterns, cognitive states), and the decoding algorithms (linear classification, nonlinear classification, direct reconstruction, Bayesian reconstruction, etc.) employed.
In 2024–2025, professor of neuropsychology Barbara Sahakian observed, "A lot of neuroscientists in the field are
very cautious and say we can't talk about reading individuals' minds,
and right now that is very true, but we're moving ahead so rapidly, it's
not going to be that long before we will be able to tell whether
someone's making up a story, or whether someone intended to do a crime
with a certain degree of certainty."
Applications
Natural images
Identification of complex natural images is possible using voxels from early visual cortex and the anterior visual areas forward of it (visual areas V3A, V3B, V4, and the lateral occipital area), together with Bayesian inference. This brain reading approach uses three components:
a structural encoding model that characterizes responses in early
visual areas; a semantic encoding model that characterizes responses in
anterior visual areas; and a Bayesian prior that describes the
distribution of structural and semantic scene statistics.
Experimentally, the procedure is for subjects to view 1,750 black-and-white natural images that are correlated with voxel activation in their brains. Then subjects view another 120 novel target images, and information from the earlier scans is used to reconstruct them. Natural images used include pictures of a seaside cafe and harbor, performers on a stage, and dense foliage.
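The identification step can be sketched with a toy linear encoding model: fit a mapping from image features to voxel responses on training images, then pick the novel image whose predicted response best matches the measured one. Everything below (data, dimensions, noise level) is synthetic and illustrative, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_voxels, n_features = 1750, 200, 50  # 1750 mirrors the training-set size above

# Synthetic training data: image features and the voxel responses they evoke.
X_train = rng.normal(size=(n_train, n_features))
W_true = rng.normal(size=(n_features, n_voxels))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))

# Fit a linear encoding model (least squares): features -> voxel responses.
W_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Identification: given the measured response to one of 120 novel images,
# choose the candidate whose predicted response correlates best with it.
candidates = rng.normal(size=(120, n_features))
true_idx = 42
measured = candidates[true_idx] @ W_true + 0.5 * rng.normal(size=n_voxels)

predicted = candidates @ W_hat
corr = [np.corrcoef(p, measured)[0, 1] for p in predicted]
print("Identified image:", int(np.argmax(corr)))  # prints 42
```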
In 2008 IBM
applied for a patent on how to extract mental images of human faces
from the human brain. It uses a feedback loop based on measurements of the fusiform gyrus, an area of the brain that activates in proportion to the degree of facial recognition.
In 2011, a team led by Shinji Nishimoto used only brain
recordings to partially reconstruct what volunteers were seeing. The researchers applied a new model of how moving-object information is processed in human brains while volunteers watched clips from several
videos. An algorithm searched through thousands of hours of external
YouTube video footage (none of the videos were the same as the ones the
volunteers watched) to select the clips that were most similar. The authors have uploaded demos comparing the watched and the computer-estimated videos.
In 2017 a face perception study in monkeys reported the reconstruction of human faces by analyzing electrical activity from 205 neurons.
In 2023 image reconstruction was reported utilizing Stable Diffusion on human brain activity obtained via fMRI.
In 2024, a study demonstrated that images imagined in the mind,
without visual stimulation, can be reconstructed from fMRI brain signals
utilizing machine learning and generative AI technology.
Another 2024 study reported the reconstruction of images from EEG.
Lie detector
Brain-reading has been suggested as an alternative to polygraph machines as a form of lie detection. Another alternative to polygraph machines is blood-oxygen-level-dependent functional MRI technology. This technique involves the interpretation
of the local change in the concentration of oxygenated hemoglobin in the
brain, although the relationship between this blood flow and neural
activity is not yet completely understood. Another technique to find concealed information is brain fingerprinting, which uses EEG to ascertain if a person has a specific memory or information by identifying P300 event related potentials.
A number of concerns have been raised about the accuracy and
ethical implications of brain-reading for this purpose. Laboratory
studies have found rates of accuracy of up to 85%; however, there are
concerns about what this means for false positive results among
non-criminal populations: "If the prevalence of "prevaricators" in the
group being examined is low, the test will yield far more false-positive
than true-positive results; about one person in five will be
incorrectly identified by the test."
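The quoted false-positive problem is ordinary base-rate arithmetic. A short illustration, assuming the 85% laboratory accuracy above applies as both sensitivity and specificity and that 5% of those examined are lying (both assumptions are for illustration; the quote's own figures differ):

```python
# Base-rate arithmetic for a lie-detection test.
# Illustrative assumptions: 85% sensitivity and specificity
# (the lab accuracy cited above) and a low 5% prevalence of liars.
sensitivity = 0.85
specificity = 0.85
prevalence = 0.05

true_pos = prevalence * sensitivity                # liars correctly flagged
false_pos = (1 - prevalence) * (1 - specificity)   # truthful people flagged

flagged = true_pos + false_pos
print(f"Share of population flagged: {flagged:.1%}")               # ~18.5%
print(f"False positives among those flagged: {false_pos/flagged:.1%}")  # ~77%
print(f"Truthful people incorrectly flagged: {false_pos:.1%}")     # ~1 in 7
```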
Ethical problems involved in the use of brain-reading as lie detection
include misapplications due to adoption of the technology before its
reliability and validity can be properly assessed and due to
misunderstanding of the technology, and privacy concerns due to unprecedented access to individuals' private thoughts.
However, it has been noted that the use of polygraph lie detection
carries similar concerns about the reliability of the results and violation of privacy.
Human–machine interfaces
The Emotiv Epoc is one way that users can give commands to devices using only thoughts.
Brain-reading has also been proposed as a method of improving human–machine interfaces, by the use of EEG to detect relevant brain states of a human.
In recent years, there has been a rapid increase in patents for
technology involved in reading brainwaves, rising from fewer than 400
from 2009–2012 to 1600 in 2014. These include proposed ways to control video games via brain waves and "neuro-marketing" to determine someone's thoughts about a new product or advertisement.
Emotiv Systems, an Australian electronics company, has demonstrated a headset
that can be trained to recognize a user's thought patterns for
different commands. Tan Le demonstrated the headset's ability to
manipulate virtual objects on screen, and discussed various future
applications for such brain-computer interface devices, from powering wheelchairs to replacing the mouse and keyboard.
Detecting attention
It is possible to track from fMRI signals which of two forms of rivalrous binocular illusion a person is subjectively experiencing.
When humans think of an object, such as a screwdriver, many
different areas of the brain activate. Marcel Just and his colleague,
Tom Mitchell, have used fMRI brain scans to teach a computer to identify
the various parts of the brain associated with specific thoughts.
This technology also yielded a discovery: similar thoughts in different
human brains are surprisingly similar neurologically. To illustrate
this, Just and Mitchell used their computer to predict, based on nothing
but fMRI data, which of several images a volunteer was thinking about.
The computer was 100% accurate, but so far the machine is only
distinguishing between 10 images.
Detecting thoughts
The category of event which a person freely recalls can be identified from fMRI before they say what they remembered.
On 16 December 2015, a study conducted by Toshimasa Yamazaki at Kyushu Institute of Technology found that during a rock-paper-scissors game a computer was able to determine the choice made by the subjects before they moved their hand. An EEG was used to measure activity in Broca's area to detect the words two seconds before they were uttered.
In 2023, researchers at the University of Texas at Austin trained a non-invasive brain decoder, based on the GPT-1 language model, to translate volunteers' brainwaves into text.
After lengthy training on each individual volunteer, the decoder
usually failed to reconstruct the exact words, but could nevertheless
reconstruct meanings close enough that the decoder could, most of the
time, identify what timestamp of a given book the subject was listening
to.
Detecting language
Statistical analysis of EEG brainwaves has been claimed to allow the recognition of phonemes, and (in 1999) of color and visual shape words at a 60% to 75% level.
On 31 January 2012, Brian Pasley and colleagues at the University of California, Berkeley published their paper in PLoS Biology
wherein subjects' internal neural processing of auditory information
was decoded and reconstructed as sound on computer by gathering and
analyzing electrical signals directly from subjects' brains.
The research team conducted their studies on the superior temporal
gyrus, a region of the brain that is involved in higher order neural
processing to make semantic sense from auditory information.
The research team used a computer model to analyze various parts of the
brain that might be involved in neural firing while processing auditory
signals. Using the computational model, scientists were able to
identify the brain activity involved in processing auditory information
when subjects were presented with recordings of individual words.
Later, the computer model of auditory information processing was used
to reconstruct some of the words back into sound based on the neural
processing of the subjects. However, the reconstructed sounds were not of good quality and could be recognized only when the audio wave patterns of the reconstructed sound were visually matched with the audio wave patterns of the original sound presented to the subjects. Nevertheless, this research marks a direction towards more precise identification of neural activity in cognition.
Some researchers in 2008 were able to predict, with 60% accuracy,
whether a subject was going to push a button with their left or right
hand. This is notable, not just because the accuracy is better than
chance, but also because the scientists were able to make these
predictions up to 10 seconds before the subject acted – well before the
subject felt they had decided.
This data is even more striking in light of other research suggesting
that the decision to move, and possibly the ability to cancel that movement at the last second, may be the result of unconscious processing.
John-Dylan Haynes has also demonstrated that fMRI can be used to
identify whether a volunteer is about to add or subtract two numbers in
their head.
Predictive processing in the brain
Neural decoding techniques have been used to test theories about the predictive brain, and to investigate how top-down predictions affect brain areas such as the visual cortex. Studies using fMRI decoding techniques have found that predictable sensory events and the expected consequences of our actions are better decoded in visual brain areas, suggesting that prediction 'sharpens' representations in line with expectations.
Virtual environments
It has also been shown that brain-reading can be achieved in a complex virtual environment.
Emotions
Just and Mitchell also claim they are beginning to be able to identify kindness, hypocrisy, and love in the brain.
Security
In
2013, a project led by University of California, Berkeley professor John Chuang published findings on the feasibility of brainwave-based computer authentication as a substitute for passwords. The use of biometrics for computer authentication has continually improved since the 1980s, but this research team was looking for a method faster and less intrusive than today's retina scans, fingerprinting, and voice recognition. The technology chosen to improve security measures is the electroencephalogram (EEG), a brainwave measurer, used to turn passwords into "pass thoughts." Using this method, Chuang and his team were able to customize tasks and their authentication thresholds to the point where they were able to reduce error rates to below 1%, significantly better than other recent methods. In order to better attract users to this new form of security, the team is still researching mental tasks that are enjoyable
for the user to perform while having their brainwaves identified. In the
future this method could be as cheap, accessible, and straightforward
as thought itself.
John-Dylan Haynes states that fMRI can also be used to identify
recognition in the brain. He provides the example of a criminal being
interrogated about whether he recognizes the scene of the crime or
murder weapons.
Methods of analysis
Classification
In
classification, a pattern of activity across multiple voxels is used to
determine the particular class from which the stimulus was drawn. Many studies have classified visual stimuli, but this approach has also been used to classify cognitive states.
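A minimal sketch of the approach on synthetic data, using a linear classifier and cross-validation (illustrative only; real studies work with preprocessed fMRI responses and more careful validation schemes):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 500

# Synthetic voxel activity for two stimulus classes (e.g. faces vs. houses):
# the second class shifts a small subset of voxels.
labels = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[labels == 1, :20] += 0.8  # class-dependent signal in 20 voxels

# Linear classification with cross-validation, as in many decoding studies.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.0%} (chance = 50%)")
```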
Reconstruction
In
reconstruction brain reading the aim is to create a literal picture of
the image that was presented. Early studies used voxels from early visual cortex areas (V1, V2, and V3) to reconstruct geometric stimuli made up of flickering checkerboard patterns.
EEG
EEG has also been used to identify recognition of specific information or memories by the P300 event related potential, which has been dubbed 'brain fingerprinting'.
Accuracy
Brain-reading
accuracy is increasing steadily as the quality of the data and the
complexity of the decoding algorithms improve. In one recent experiment
it was possible to identify which single image was being seen from a set
of 120.
In another it was possible to correctly identify, 90% of the time, which of two categories the stimulus came from, and to identify the specific semantic category (out of 23) of the target image 40% of the time.
Limitations
It
has been noted that so far brain-reading is limited. "In practice,
exact reconstructions are impossible to achieve by any reconstruction
algorithm on the basis of brain activity signals acquired by fMRI. This
is because all reconstructions will inevitably be limited by
inaccuracies in the encoding models and noise in the measured signals.
Our results demonstrate that the natural image prior is a powerful (if
unconventional) tool for mitigating the effects of these fundamental
limitations. A natural image prior with only six million images is
sufficient to produce reconstructions that are structurally and
semantically similar to a target image."
With brain scanning
technology becoming increasingly accurate, experts predict important
debates over how and when it should be used. One potential area of
application is criminal law. Haynes states that simply refusing to use
brain scans on suspects also prevents the wrongly accused from proving
their innocence. US scholars generally believe that involuntary brain reading, like involuntary polygraph tests, would violate the Fifth Amendment's right against self-incrimination.
One perspective is to consider whether brain imaging is like testimony,
or instead like DNA, blood, or semen. Paul Root Wolpe, director of the
Center for Ethics at Emory University in Atlanta predicts that this
question will be decided by a Supreme Court case.
In other countries outside the United States, thought
identification has already been used in criminal law. In 2008 an Indian
woman was convicted of murder after an EEG of her brain allegedly
revealed that she was familiar with the circumstances surrounding the
poisoning of her ex-fiancé.
Some neuroscientists and legal scholars doubt the validity of using
thought identification as a whole for anything past research on the
nature of deception and the brain.
The Economist
cautioned people to be "afraid" of the future impact, and some
ethicists argue that privacy laws should protect private thoughts. Legal
scholar Hank Greely argues that the court systems could benefit from such technology, and neuroethicist Julian Savulescu states that brain data is not fundamentally different from other types of evidence. In Nature,
journalist Liam Drew writes about emerging projects to attach
brain-reading devices to speech synthesizers or other output devices for
the benefit of tetraplegics.
Such devices could create concerns of accidentally broadcasting the
patient's "inner thoughts" rather than merely conscious speech.
History
MRI scanner that could be used for thought identification
Psychologist John-Dylan Haynes achieved breakthroughs in brain imaging research in 2006 by using fMRI. This research included new findings on visual object recognition, tracking dynamic mental processes, lie detecting,
and decoding unconscious processing. The combination of these four
discoveries revealed such a significant amount of information about an
individual's thoughts that Haynes termed it "brain reading".
fMRI has allowed research to expand significantly because it can track the activity in an individual's brain by measuring the brain's blood flow. It is currently thought to be the best method for measuring brain activity, which is why it has been used in multiple research experiments aimed at improving the understanding of how doctors and psychologists can identify thoughts.
In a 2020 study, AI using implanted electrodes could correctly
transcribe a sentence read aloud from a fifty-sentence test set 97% of
the time, given 40 minutes of training data per participant.
Future research
Experts
are unsure of how far thought identification can expand, but Marcel
Just believed in 2014 that within 3–5 years there would be a machine able to read complex thoughts such as 'I hate so-and-so'.
Donald Marks, founder and chief science officer of MMT, is working on playing back individuals' thoughts after they have been recorded.
Researchers at the University of California, Berkeley have already
been successful in forming, erasing, and reactivating memories in rats.
Marks says they are working on applying the same techniques to humans.
This discovery could be monumental for war veterans who suffer from PTSD.
Further research is also being done in analyzing brain activity during video games to detect criminals, neuromarketing, and using brain scans in government security checks.
In popular culture
A Captain Science comic panel showing a character using a device to read an alien's brain
The episode "Black Hole" of the American medical drama House,
which aired on 15 March 2010, featured an experimental "cognitive
imaging" device that supposedly allowed seeing into a patient's
subconscious mind. The patient was first put in a preparation phase of
six hours while watching video clips, attached to a neuroimaging device
looking like electroencephalography or functional near-infrared spectroscopy, to train the neuroimaging classifier. Then the patient was put under twilight anesthesia,
and the same device was used to try to infer what was going through the
patient's mind. The fictional episode somewhat anticipated the study by
Nishimoto et al. published the following year, in which fMRI was used instead.
A timeline of key developments in ancient protein analysis since the 1950s.
Ancient proteins are complex mixtures, and the term palaeoproteomics is used to characterise the study of proteomes in the past. Ancient proteins have been recovered from a wide range of archaeological materials, including bones, teeth, eggshells, leathers, parchments, ceramics, painting binders and well-preserved soft tissues like gut intestines. These preserved proteins have provided valuable information about taxonomic identification, evolutionary history (phylogeny), diet, health, disease, technology and social dynamics in the past.
Like modern proteomics, the study of ancient proteins has also
been enabled by technological advances. Various analytical techniques,
for example, amino acid profiling, racemisation dating, immunodetection, Edman sequencing, peptide mass fingerprinting, and tandem mass spectrometry have been used to analyse ancient proteins. The introduction of high-performance mass spectrometry (for example, Orbitrap) in 2000 has revolutionised the field, since the entire preserved sequences of complex proteomes can be characterised.
Over the past decade, the study of ancient proteins has evolved
into a well-established field in archaeological science. However, like
the research of aDNA
(ancient DNA preserved in archaeological remains), it has been limited
by several challenges such as the coverage of reference databases,
identification, contamination and authentication. Researchers have been working on standardising sampling, extraction, data analysis and reporting for ancient proteins. Novel computational tools such as de novo sequencing and open research may also improve the identification of ancient proteomes.
History: the pioneers of ancient protein studies
Philip Abelson, Edgar Hare and Thomas Hoering
Abelson, Hare and Hoering led the study of ancient proteins between the 1950s and the early 1970s.
Abelson directed the Geophysical Laboratory at the Carnegie Institution (Washington, DC) between 1953 and 1971, and he was the first
to discover amino acids in fossils. Hare joined the team and specialised in amino acid racemisation
(the conversion of L- to D-amino acids after the death of organisms).
D/L ratios were used to date various ancient tissues such as bones,
shells and marine sediments. Hoering was another prominent member, contributing to the advancement of isotopes and mass spectrometry. This golden trio drew many talented biologists, geologists, chemists and physicists to the field, including Marilyn Fogel, John Hedges and Noreen Tuross.
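The D/L dating principle mentioned above can be illustrated numerically. A minimal sketch, assuming idealised reversible first-order kinetics and an invented rate constant; real applications require calibration for the amino acid, tissue and thermal history:

```python
import math

def racemisation_age(d_over_l, k_per_year, d_over_l_initial=0.0):
    """Estimate age from a D/L ratio using reversible first-order kinetics:
    ln((1 + D/L) / (1 - D/L)) increases roughly linearly with time at rate 2k."""
    term = math.log((1 + d_over_l) / (1 - d_over_l))
    term0 = math.log((1 + d_over_l_initial) / (1 - d_over_l_initial))
    return (term - term0) / (2 * k_per_year)

# Hypothetical rate constant for a cool burial environment; illustrative only.
k = 1e-5  # per year
print(f"Estimated age at D/L = 0.3: {racemisation_age(0.3, k):,.0f} years")
```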
Ralph Wyckoff
Wyckoff was a pioneer in X-ray crystallography and electron microscopy. Using microscopic images, he demonstrated the variability and damage of collagen fibres in ancient bones and shells. His research contributed to the understanding of protein diagenesis
(degradation) in the late 1960s, and highlighted that ancient amino
acid profiles alone might not be sufficient for protein identification.
Margaret Jope and Peter Westbroek
Jope and Westbroek were leading experts in shell proteins and crystallisation. Westbroek later established a geobiochemistry laboratory at the University of Leiden, focusing on biomineralisation and how this process facilitated protein survival.
He also pioneered the use of antibodies for the study of ancient
proteins in the 1970s and 1980s, utilising different immunological
techniques such as Ouchterlony double immunodiffusion (interactions of antibodies and antigens in a gel).
Understanding how ancient proteins are formed and incorporated into archaeological materials is essential for sampling, evaluating contamination and planning analyses. Generally, ancient proteins in proteinaceous tissues, notably collagens in bones, keratins in wool, amelogenins in tooth enamel, and intracrystalline proteins in shells, might be incorporated at the time of tissue formation.
However, the formation of proteinaceous tissues is often complex, dynamic and affected by various factors such as pH, metals, ion concentration and diet, plus other biological, chemical and physical parameters. One of the most characterised phenomena is bone mineralisation, a process by which hydroxyapatite crystals are deposited within collagen fibres, forming a matrix.
Despite extensive research, bone scaffolding is still a challenge, and
the role of non-collagenous proteins (a wide range of proteoglycans and
other proteins) remains poorly understood.
Another category is complex and potentially mineralised tissues,
such as ancient human dental calculi and ceramic vessels. Dental calculi
are defined as calcified biofilms, created and mediated by interactions between calcium phosphate ions and a wide range of oral microbial, human, and food proteins during episodic biomineralisation.
Similarly, the minerals of a ceramic matrix might interact with food
proteins during food processing and cooking. This is best exemplified by calcite deposits adhering to the inside of archaeological ceramic vessels. These protein-rich mineralised deposits might be formed during repeated cooking using hard water and subsequent scaling.
Preservation
Organic (containing carbon) biomolecules like proteins are prone to degradation.
For example, experimental studies demonstrate that robust, fibrous and
hydrophobic keratins such as feathers and woollen fabrics decay quickly
at room temperature.
Indeed, ancient proteins are exceptional, and they are often recovered from extreme burial contexts, especially dry and cold environments. This is because the lack of water and low temperatures may slow down hydrolysis, microbial attack and enzymatic activities.
There are also proteins whose chemical and physical properties
may enable their preservation in the long term. The best example is Type 1 collagen; it is one of the most abundant proteins in skin (80-85%) and bone (80-90%) extracellular matrices. It is also mineralised, organised in a triple helix and stabilised by hydrogen bonding. Type 1 collagen
has been routinely extracted from ancient bones, leathers, and
parchments; these characteristics may contribute to its stability over
time. Another common protein in the archaeological record is milk beta-lactoglobulin, often recovered from ancient dental calculi. Beta-lactoglobulin is a small whey protein with a molecular mass of around 18,400 Da (daltons). It is resistant to heating and enzymatic degradation; structurally, it has a beta-barrel associated with binding to small hydrophobic molecules such as fatty acids, forming stable polymers.
Given that proteins vary in abundance, size, hydrophobicity
(water insolubility), structure, conformation (shape), function and
stability, understanding protein preservation is challenging. While there are common determinants of protein survival, including thermal history (temperature/time), burial conditions (pH/soil chemistry/water table) and protein properties (neighbouring amino acids/secondary structure/tertiary folding/proteome content), there is no clear answer and protein diagenesis is still an active research field.
Structure & damage patterns
Generally, proteins have four levels of structural complexity: quaternary (multiple polypeptides, or subunits), tertiary (the 3D folding of a polypeptide), secondary (alpha helices/beta sheets/random coils) and primary structure (linear amino acid sequences linked by peptide bonds). Ancient proteins are expected to lose their structural integrity over time, due to denaturation (protein unfolding) or other diagenetic processes.
Ancient proteins also tend to be fragmented, damaged and altered. Proteins can be cleaved into small fragments over time, since hydrolysis (the addition of water) breaks peptide bonds (covalent bonds between two neighbouring alpha-amino acids). In terms of post-translational modifications (changes occurring after translation), ancient proteins are often characterised by extensive damage such as oxidation (methionine), hydroxylation (proline), deamidation (glutamine/asparagine), citrullination (arginine), phosphorylation (serine/threonine/tyrosine), conversion of N-terminal glutamate to pyroglutamate, and the addition of advanced glycation products to lysine or arginine. Among these modifications, glutamine deamidation is one of the most time-dependent processes. Glutamine deamidation is mostly a non-enzymatic process by which glutamine is converted to glutamic acid (+0.98406 Da) via side-chain hydrolysis or the formation of a glutarimide ring. It is a slow conversion with a long half-time, depending on adjacent amino acids, secondary structure, 3D folding, pH, temperature and other factors. Bioinformatic tools are available to calculate bulk and site-specific deamidation rates of ancient proteins.
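As an arithmetic illustration of why deamidation can act as a rough clock: a minimal sketch assuming idealised first-order kinetics and a purely hypothetical half-time (real half-times vary by orders of magnitude with sequence context, pH and temperature, as noted above):

```python
import math

def fraction_deamidated(age_years, half_time_years):
    """First-order loss of intact glutamine: fraction converted
    to glutamic acid after a given time."""
    remaining = math.exp(-math.log(2) * age_years / half_time_years)
    return 1 - remaining

# Hypothetical half-time of 5,000 years for a given Gln site (illustrative).
for age in (500, 5_000, 50_000):
    print(f"{age:>6} years: {fraction_deamidated(age, 5_000):.1%} deamidated")
```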
The structural manifestation of these chemical changes within ancient
proteins was first documented using scanning electron microscopy (SEM).
Type-1 collagen protein fibrils of a permafrost-preserved woolly mammoth
(Yukon, Canada) were directly imaged and shown to retain their
characteristic banding pattern. These were compared against type-1
collagen fibrils from a temperate Columbian mammoth specimen (Montana,
U.S.A.). The Columbian mammoth collagen fibrils, unlike those of the
permafrost-frozen woolly mammoth, had lost their banding, indicating
substantial chemical degradation of the constituent peptide sequences.
This also constitutes the first time that collagen banding, or the
molecular structure for any ancient protein, has been directly imaged
with scanning electron microscopy.
Palaeoproteomics
Overview
Palaeoproteomics is a fast-developing field that combines archaeology, biology, chemistry and heritage studies. Comparable to its high-profile sister field, aDNA
analysis, the extraction, identification and authentication of ancient
proteins are challenging, since both ancient DNA and proteins tend to be
ultrashort, highly fragmented, extensively damaged and chemically
modified.
However, ancient proteins are still one of the most informative
biomolecules. Proteins tend to degrade more slowly than DNA, especially
biomineralised proteins. While ancient lipids can be used to differentiate between marine, plant and animal fats, ancient protein data is high-resolution with taxon- and tissue-specificities.
To date, ancient peptide sequences have been successfully
extracted and securely characterised from various archaeological
remains, including a 3.8 Ma (million year) ostrich eggshell, 1.77 Ma Homo erectus teeth, a 0.16 Ma Denisovan jawbone and several Neolithic (6000-5600 cal BC) pots. Hence, palaeoproteomics has provided valuable insight into past evolutionary relationships, extinct species and societies.
Extraction
Generally, there are two approaches: a digestion-free, top-down method and bottom-up proteomics. Top-down proteomics is seldom used to analyse ancient proteins due to analytical and computational difficulties. For bottom-up, or shotgun proteomics, ancient proteins are digested into peptides using enzymes, for example trypsin.
Mineralised archaeological remains such as bones, teeth, shells, dental
calculi and ceramics require an extra demineralisation step to release
proteins from mineral matrices. This is often achieved by using a weak acid (ethylenediaminetetraacetic acid, EDTA) or cold (4 °C) hydrochloric acid (HCl) to minimise chemical modifications that may be introduced during extraction.
After demineralisation, protein solubilisation, alkylation and
reduction, buffer exchange is needed to ensure that extracts are
compatible with downstream analysis. Currently, three protocols are widely used for this purpose with ancient proteins: gels (GASP), filters (FASP) and magnetic beads (SP3). Once buffer exchange is completed,
extracts are incubated with digestion enzymes, then concentrated,
purified and desalted.
For non-mineralised archaeological materials such as parchments,
leathers and paintings, demineralisation is not necessary, and protocols
can be changed depending on sample preservation and sampling size.
Instrumentation and data analysis
Nowadays, palaeoproteomics is dominated by two mass spectrometry-based techniques: MALDI-ToF (matrix-assisted laser desorption/ionisation-time-of-flight) and LC-MS/MS. MALDI-ToF is used to determine the mass-to-charge (m/z) ratios of ions and their peak patterns. Digested peptides are spotted on a MALDI plate, co-crystallise with a matrix (mainly α-cyano-4-hydroxycinnamic acid,
CHCA); a laser excites and ionises the matrix, then the ions' time to travel along a vacuum tube is measured and converted to a spectrum of m/z ratios and intensities.
Since only peak patterns, not entire amino acid sequences of
digested peptides are characterised, peptide markers are needed for
pattern matching and ancient protein identification. In archaeological contexts, MALDI-ToF has been routinely used for bones and collagens in a field known as ZooMS (zooarchaeology by mass spectrometry).
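In outline, this pattern matching amounts to comparing observed peak m/z values against reference marker lists within an instrument tolerance. A simplified sketch; the marker masses and spectrum below are invented placeholders, not published ZooMS markers:

```python
# Toy peptide-mass-fingerprint matching, in the spirit of ZooMS.
# Marker masses are placeholders, NOT published reference markers.
markers = {
    "taxon A": [1105.6, 1427.7, 2131.1, 2883.3],
    "taxon B": [1105.6, 1453.7, 2163.1, 2899.3],
}
observed_peaks = [1105.58, 1453.72, 2163.15, 2899.28]  # hypothetical spectrum
tolerance = 0.2  # m/z matching window

def count_matches(peaks, reference, tol):
    """Number of reference markers matched by any observed peak."""
    return sum(any(abs(p - m) <= tol for p in peaks) for m in reference)

scores = {t: count_matches(observed_peaks, m, tolerance) for t, m in markers.items()}
print(max(scores, key=scores.get), scores)  # taxon B matches best
```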
LC-MS/MS is another widely used approach. It is a powerful
analytical technique to separate, sequence and quantify complex protein
mixtures.
The first step in LC-MS/MS is liquid chromatography. Protein mixtures
are separated in a liquid mobile phase using a stationary column. How liquid analytes interact with the stationary phase depends on their size, charge, hydrophobicity and affinity. These differences lead to distinct elution and retention times (when a component of a mixture exits the column). After
chromatographic separation, protein components are ionised and
introduced into mass spectrometers.
During a first mass scan (MS1), the m/z ratios of precursor ions are
measured. Selected precursors are further fragmented and the m/z ratios
of fragment ions are determined in a second mass scan (MS2). There are
different fragmentation methods, for example, higher-energy C-trap dissociation (HCD) and collision-induced dissociation (CID), but b- and y-ions are frequently targeted.
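The b- and y-ions are the N-terminal and C-terminal backbone fragments, and their m/z values follow from standard monoisotopic residue masses. A minimal sketch for singly charged, unmodified ions (the peptide is a made-up example; real pipelines also handle modifications, higher charge states and isotopes):

```python
# Monoisotopic residue masses (Da) for a few amino acids.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
           "L": 113.08406, "K": 128.09496, "E": 129.04259, "R": 156.10111}
PROTON, WATER = 1.00728, 18.01056

def by_ions(peptide):
    """Singly charged b- and y-ion m/z values for an unmodified peptide."""
    masses = [RESIDUE[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(peptide))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(peptide))]
    return b, y

b, y = by_ions("GASVLK")  # hypothetical tryptic peptide ending in K
print("b-ions:", [round(m, 3) for m in b])
print("y-ions:", [round(m, 3) for m in y])
```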
Search engines and software tools are often used to process ancient MS/MS data, including MaxQuant, Mascot and PEAKS. Protein sequence data can be downloaded from public databases (UniProt/NCBI) and exported as FASTA files for sequencing algorithms.
Recently, open search engines such as MetaMorpheus, pFind and Fragpipe
have received attention, because they make it possible to identify all
modifications associated with peptide spectral matches (PSMs).
De novo sequencing
is also possible for the analysis of ancient MS/MS spectra. It is a
sequencing technique that assembles amino acid sequences directly from
spectra without reference databases. Advances in deep learning have also led to the development of multiple pipelines such as DeNovoGUI, DeepNovo2 and Casanovo.
However, it may be challenging to evaluate the outputs of de novo
sequences and optimisation may be required for ancient proteins to
minimise false positives and overfitting.
Applications
Palaeoproteomes
Bones.
Ancient bones are one of the most well-characterised and iconic
palaeoproteomes. Ancient bone proteomes have been sequenced from hominins, humans, mammoths, moas and now-extinct rhinoceros. Fibrillar collagens are the most abundant proteins in modern bones; similarly, Type 1 and Type III collagens are also common in the archaeological record. While modern bones contain about 10% non-collagenous proteins (NCPs), various NCPs have been recorded, including osteocalcin, biglycan and lumican. Generally, NCPs are excellent targets for studying evolutionary history, since they have higher turnover rates than collagens.
Given the abundance of ancient bone proteomes, a bottom-up proteomic
workflow known as SPIN (Species by Proteome INvestigation) is available
for the high-throughput analysis of 150 million mammalian bones.
Teeth. Tooth enamel is one of the hardest and most
mineralised tissues in the human body, since it is mainly composed of
hydroxyapatite crystals.
While an enamel proteome is small, ancient amelogenins and other
ameloblast-relevant proteins are often well-preserved in a mineralised,
closed system.
Ancient enamel proteins are useful when aDNA or other proteins do not
survive, and they have been analysed to understand extinct species and
evolution.
Shells. Archaeological shells also contain rich palaeoproteomes. Like tooth enamel, they are more or less closed systems that isolate proteins from water and other agents of degradation. Struthiocalcin-1 and -2 were securely identified in 3.8 Ma ostrich eggshell samples at the site of Laetoli in Tanzania. These C-type lectins are associated with biomineralisation, and they are also found in extinct giant bird shells collected from Australia. Given the age of the ostrich eggshell, it was verified by a combination of methods: analytical replication (same samples analysed in different labs), amino acid racemisation (D/L ratios), carry-over analysis (pre- and post-injection washes to evaluate the extent of carry-over in mass spectrometers), damage patterns (deamidation/oxidation/phosphorylation/amidation/decomposition) and aDNA studies. These independent procedures ensure the authenticity of the oldest peptide sequences.
Other complex mixtures
Ceramics & food crusts.
Various ancient dietary proteins have been characterised from ceramics
and associated food crusts (charred and calcite deposits on ceramic
vessels). Cow, sheep and goat milk beta-lactoglobulin proteins are predominant in this context, but there are also milk caseins (alpha-, beta- and kappa-casein), animal blood haemoglobins and a wide range of plant proteins (wheat glutenins, barley hordeins, legumins and other seed storage proteins).
The identification of these ancient foodstuffs may be used to
understand how food was prepared, cooked and consumed in the past. It is
also clear that archaeological ceramics and food crusts are complex
mixtures that contain metaproteomes (multiple proteomes).
Analytical challenges
While
palaeoproteomics is a useful tool for a wide array of research
questions, there are some analytical challenges that prevent the field
from reaching its potential. The first issue is preservation.
Mineral-binding seems to stabilise proteins, but this is a complex,
dynamic process that has not been systematically investigated in
different archaeological and burial contexts.
Destructive sampling is another problem that can cause
irreparable damage to archaeological materials. Although
minimally-destructive or non-destructive sampling methods are being
developed for parchments, bones, mummified tissues and leathers, it is
unclear if they are suitable for other types of remains such as dental
calculi, ceramics and food crusts.
It is equally difficult to extract mineral-bound proteins due to their low abundance, extensive degradation, and often strong intermolecular interactions (hydrogen bonding, dispersion, ion-dipole and dipole-dipole interactions) with mineral matrices.
Ancient proteins also vary in preservation states, hydrophobicity,
solubility and optimum pH values; methodological development is still
required to maximise protein recovery.
Ancient protein identification is still a challenge, because
database search algorithms are not optimised for low-intensity and
damaged ancient proteins, increasing the probability of false positives and false negatives. There is also the issue of dark proteomes (unknown protein regions that cannot be sequenced); approximately 44-54% of proteins in eukaryotes such as animals and plants are dark. Reference databases are also biased towards model organisms such as yeast and mice, and current sequence data may not cover all archaeological materials.
Lastly, while cytosine deamination (cytosine being converted to uracil over time, causing misreadings) has been widely used in the authentication of aDNA, there are no standardised procedures to authenticate ancient proteins. This authentication issue is highlighted by the claimed identification of 78 Ma Brachylophosaurus canadensis (hadrosaur) and 68 Ma Tyrannosaurus rex collagen peptides.
The lack of post-translational modifications, together with subsequent experimental studies, suggests that these sequences may derive from bacterial biofilms, cross-contamination of control samples or modern laboratory procedures.
Future directions
Despite
significant analytical challenges, palaeoproteomics is constantly
evolving and adopting new technology. The latest high-performance mass spectrometers, for example TimsToF (trapped ion mobility-time-of-flight) in DIA mode (data-independent acquisition), may help with the separation, selection and resolution of ancient MS/MS data. Novel extraction protocols such as DES (deep eutectic solvent)-assisted procedures may increase the numbers and types of extracted palaeoproteomes. Identification tools are also improving thanks to progress in bioinformatics, machine learning and artificial intelligence.
COVID-19 testing involves analyzing samples to assess the current or past presence of SARS-CoV-2, the virus that causes COVID-19 and is responsible for the COVID-19 pandemic. The two main types of tests detect either the presence of the virus or antibodies produced in response to infection.
Molecular tests for viral presence through its molecular components are
used to diagnose individual cases and to allow public health
authorities to trace and contain outbreaks. Antibody tests (serology
immunoassays) instead show whether someone once had the disease. They are less useful for diagnosing current infections because antibodies may not develop for weeks after infection. They are used to assess disease prevalence, which aids the estimation of the infection fatality rate.
Individual jurisdictions have adopted varied testing protocols,
including whom to test, how often to test, analysis protocols, sample
collection and the uses of test results.
This variation has likely significantly impacted reported statistics,
including case and test numbers, case fatality rates and case
demographics.
Because SARS-CoV-2 transmission occurs days after exposure (and before
onset of symptoms), there is an urgent need for frequent surveillance
and rapid availability of results.
Explanation of the underlying pathophysiology pertaining to diagnosis of COVID-19
Positive viral tests indicate a current infection, while positive antibody tests indicate a prior infection. Other techniques include a CT scan, checking for elevated body temperature, checking for low blood oxygen level, and detection by trained dogs.
Detection of the virus
Detection of the virus is usually done either by looking for the virus's inner RNA or for pieces of protein on the outside of the virus. Tests that look for the viral antigens (parts of the virus) are called antigen tests.
Reverse transcription polymerase chain reaction (RT-PCR) test
Polymerase chain reaction (PCR) is a process that amplifies (replicates) a small, well-defined segment of DNA many hundreds of thousands of times, creating enough of it for analysis. Test samples are treated with certain chemicals that allow DNA to be extracted. Reverse transcription converts RNA into DNA.
Reverse transcription polymerase chain reaction (RT-PCR) first uses reverse transcription to obtain DNA, followed by PCR to amplify that DNA, creating enough to be analyzed. RT-PCR can thereby detect SARS-CoV-2, which contains only RNA. The RT-PCR process generally requires a few hours. These tests are also referred to as molecular or genetic assays.
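The amplification is exponential: under ideal conditions each cycle doubles the number of target copies, which is how a handful of molecules becomes enough material to analyze. A quick illustration (ideal 100% efficiency assumed; real reactions amplify somewhat less per cycle):

```python
# Ideal PCR amplification: target copies double each thermal cycle.
initial_copies = 10          # e.g. a few viral cDNA molecules
for cycles in (20, 30, 40):  # typical qPCR runs use up to ~40 cycles
    copies = initial_copies * 2 ** cycles
    print(f"after {cycles} cycles: {copies:.2e} copies")
```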
Real-time PCR (qPCR)
provides advantages including automation, higher-throughput and more
reliable instrumentation. It has become the preferred method.
The combined technique has been described as real-time RT-PCR or quantitative RT-PCR and is sometimes abbreviated qRT-PCR, rRT-PCR or RT-qPCR, although sometimes RT-PCR or PCR are used. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines propose the term RT-qPCR, but not all authors adhere to this.
Average sensitivity for rapid molecular tests depends on the brand. For ID NOW, the average
sensitivity was 73.0% with an average specificity of 99.7%; for Xpert
Xpress the average sensitivity was 100% with an average specificity of
97.2%.
In a diagnostic test, sensitivity is a measure of how well a test can
identify true positives and specificity is a measure of how well a test
can identify true negatives. For all testing, both diagnostic and
screening, there is usually a trade-off between sensitivity and
specificity, such that higher sensitivities will mean lower
specificities and vice versa.
Sensitivity and Specificity
A 90% specific test will correctly identify 90% of those who are uninfected, leaving 10% with a false positive result.
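Sensitivity and specificity alone do not determine how trustworthy an individual result is; that also depends on prevalence. A brief sketch computing positive and negative predictive values from the ID NOW figures above, with a 10% prevalence assumed purely for illustration:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# ID NOW figures from the text; the 10% prevalence is an assumption.
ppv, npv = predictive_values(0.730, 0.997, 0.10)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # ~96% and ~97%
```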
Samples can be obtained by various methods, including a nasopharyngeal swab, sputum (coughed up material), throat swabs, deep airway material collected via suction catheter or saliva. Drosten
et al. remarked that for 2003 SARS, "from a diagnostic point of view,
it is important to note that nasal and throat swabs seem less suitable
for diagnosis, since these materials contain considerably less viral RNA
than sputum, and the virus may escape detection if only these materials
are tested."
Sensitivity of clinical samples by RT-PCR is 63% for nasal swab,
32% for pharyngeal swab, 48% for feces, 72–75% for sputum, and 93–95%
for bronchoalveolar lavage.
The likelihood of detecting the virus depends on collection
method and how much time has passed since infection. According to Drosten, tests performed with throat swabs are reliable only in the first
week. Thereafter the virus may abandon the throat and multiply in the
lungs. In the second week, sputum or deep airways collection is
preferred.
Collecting saliva may be as effective as nasal and throat swabs, although this is not certain. Sampling saliva may reduce the risk for health care professionals by eliminating close physical interaction. It is also more comfortable for the patient. Quarantined people can collect their own samples. A saliva test's diagnostic value depends on the sample site (deep throat, oral cavity, or salivary glands). Some studies have found that saliva yielded greater sensitivity and consistency when compared with swab samples.
On 15 August 2020, the US FDA granted an emergency use
authorization for a saliva test developed at Yale University that gives
results in hours.
On 4 January 2021, the US FDA issued an alert about the risk of false results, particularly false negative results, with the Curative SARS-Cov-2 Assay real-time RT-PCR test.
Viral burden measured in upper respiratory specimens declines after symptom onset.
Following recovery, many patients no longer have detectable viral RNA
in upper respiratory specimens. Among those who do, RNA concentrations
three days following recovery are generally below the range in which
replication-competent virus has been reliably isolated.
No clear correlation has been described between length of illness and
duration of post-recovery shedding of viral RNA in upper respiratory
specimens.
Other molecular tests
Isothermal nucleic acid amplification
tests also amplify the virus's genome. They are faster than PCR because
they do not involve repeated heating and cooling cycles. These tests
typically detect DNA using fluorescent tags, which are read out with specialized machines.
CRISPR gene editing technology was modified to perform the detection: if the CRISPR enzyme attaches to the sequence, it colors a paper strip. The researchers expect the resulting test to be cheap and easy to use in point-of-care settings. The test amplifies RNA directly, without the RNA-to-DNA conversion step of RT-PCR.
Antigen tests
COVID-19 antigen rapid test kit; the timer is provided by the user.
Mucus from the nose or throat in a test liquid is placed onto a COVID-19 rapid antigen diagnostic test device.
COVID-19 rapid testing in Rwanda
An antigen is the part of a pathogen that elicits an immune response. Antigen tests look for antigen proteins from the viral surface. In the case of a coronavirus, these are usually proteins from the surface spikes.
SARS-CoV-2 antigens can be detected before the onset of COVID-19 symptoms (as soon as SARS-CoV-2 virus particles appear), with more rapid test results, but with less sensitivity than PCR tests for the virus.
COVID-19 rapid antigen tests are lateral flow immunoassays that detect the presence of a specific viral antigen,
which indicates current viral infection. Antigen tests produce results
quickly (within approximately 15–30 minutes), and most can be used at
the point-of-care or as self-tests. Self-tests are rapid tests that can
be taken at home or anywhere, are easy to use, and produce rapid
results. Antigen tests can be performed on nasopharyngeal, nasal swab, or saliva specimens.
Antigen tests that can identify SARS-CoV-2 offer a faster and less expensive method to test for the virus.
Antigen tests are generally less sensitive than real-time reverse
transcription polymerase chain reaction (RT-PCR) and other nucleic acid
amplification tests (NAATs).
Antigen tests may be one way to scale up testing to much greater levels. Isothermal nucleic acid amplification tests can process only one sample at a time per machine. RT-PCR tests are accurate but require too much time, energy and trained personnel to run the tests.
"There will never be the ability on a [PCR] test to do 300 million
tests a day or to test everybody before they go to work or to school," Deborah Birx, head of the White House Coronavirus Task Force, said on 17 April 2020. "But there might be with the antigen test."
Samples may be collected via nasopharyngeal swab, a swab of the anterior nares, or from saliva (obtained by various methods including lollipop tests for children).
The sample is then exposed to paper strips containing artificial
antibodies designed to bind to coronavirus antigens. Antigens bind to
the strips and give a visual readout. The process takes less than 30
minutes, can deliver results at point of care, and does not require
expensive equipment or extensive training.
Swabs of respiratory viruses often lack enough antigen material to be detectable. This is especially true for asymptomatic patients who have little if any nasal discharge. Viral proteins are not amplified in an antigen test.
A Cochrane review based on 64 studies investigating the efficacy of 16
different antigen tests determined that they correctly identified
COVID-19 infection in an average of 72% of people with symptoms,
compared to 58% of people without symptoms.
Tests were most accurate (78%) when used in the first week after
symptoms first developed, likely because people have the most virus in
their system in the first days after they are infected. While some scientists doubt whether an antigen test can be useful against COVID-19,
others have argued that antigen tests are highly sensitive when viral
load is high and people are contagious, making them suitable for public
health screening.
Routine antigen tests can quickly identify when asymptomatic people are
contagious, while follow-up PCR can be used if confirmatory diagnosis
is needed.
Antibody tests
The body responds to a viral infection by producing antibodies that help neutralize the virus. Blood tests (also called serology tests or serology immunoassays) can detect the presence of such antibodies.
Antibody tests can be used to assess what fraction of a population has
once been infected, which can then be used to calculate the disease's mortality rate.
They can also be used to determine how much antibody is contained in a
unit of convalescent plasma, for COVID-19 treatment, or to verify if a
given vaccine generates an adequate immune response.
SARS-CoV-2 antibodies' potency and protective period have not been established. Therefore, a positive antibody test may not imply immunity to a future
infection. Further, whether mild or asymptomatic infections produce
sufficient antibodies for a test to detect has not been established. Antibodies for some diseases persist in the bloodstream for many years, while others fade away.
The most notable antibodies are IgM and IgG.
IgM antibodies are generally detectable several days after initial
infection, although levels over the course of infection and beyond are
not well characterized. IgG antibodies generally become detectable 10–14 days after infection and normally peak around 28 days after infection.
This pattern of antibody development, seen with other infections, often does not apply to SARS-CoV-2, however: IgM sometimes appears after IgG, together with IgG, or not at all.
Generally, however, median IgM detection occurs 5 days after symptom
onset, whereas IgG is detected a median 14 days after symptom onset. IgG levels significantly decline after two or three months.
Genetic tests verify infection earlier than antibody tests. Only
30% of those with a positive genetic test produced a positive antibody
test on day 7 of their infection.
Antibody Test Types
Rapid diagnostic test (RDT)
RDTs typically use a small, portable, positive/negative lateral flow assay
that can be executed at point of care. RDTs may process blood samples,
saliva samples, or nasal swab fluids. RDTs produce colored lines to
indicate positive or negative results.
Enzyme-linked immunosorbent assay (ELISA)
ELISAs can be qualitative or quantitative and generally require a lab. These tests usually use whole blood, plasma, or serum
samples. A plate is coated with a viral protein, such as a SARS-CoV-2
spike protein. Samples are incubated with the protein, allowing any
antibodies to bind to it. The antibody-protein complex can then be
detected with another wash of antibodies that produce a
color/fluorescent readout.
Neutralization assay
Neutralization assays assess whether sample antibodies prevent viral infection in test cells. These tests sample blood, plasma or serum. The test cultures cells that allow viral reproduction (e.g., Vero E6
cells). By varying antibody concentrations, researchers can visualize
and quantify how many test antibodies block virus replication.
Chemiluminescent immunoassay
Chemiluminescent immunoassays
are quantitative lab tests. They sample blood, plasma, or serum.
Samples are mixed with a known viral protein, buffer reagents and
specific, enzyme-labeled antibodies. The result is luminescent. A
chemiluminescent microparticle immunoassay uses magnetic, protein-coated
microparticles. Antibodies react to the viral protein, forming a
complex. Secondary enzyme-labeled antibodies are added and bind to these
complexes. The resulting chemical reaction produces light. The radiance
is used to calculate the number of antibodies. This test can identify
multiple types of antibodies, including IgG, IgM, and IgA.
Neutralizing vis-à-vis binding antibodies
Most if not all large-scale COVID-19 antibody testing looks for binding antibodies only and does not measure the more important neutralizing antibodies (NAbs). A NAb is an antibody that neutralizes the infectivity of a virus particle by blocking its attachment to or entry into a susceptible cell; enveloped viruses such as SARS-CoV-2 are neutralized by the blocking of steps in the replicative cycle up to and including membrane fusion.
A non-neutralizing antibody either does not bind to the crucial
structures on the virus surface or binds but leaves the virus particle
infectious; the antibody may still contribute to the destruction of
virus particles or infected cells by the immune system. It may even enhance infectivity by interacting with receptors on macrophages.
Since most COVID-19 antibody tests return a positive result if they
find only binding antibodies, these tests cannot indicate that the
subject has generated protective NAbs that protect against re-infection.
Binding antibodies are expected to imply the presence of NAbs, and for many viral diseases total antibody responses correlate somewhat with NAb responses, but this has not been established for COVID-19. A study of 175 recovered patients in China who experienced mild symptoms reported that 10 individuals had no detectable NAbs at discharge or thereafter. How these patients recovered without the help of NAbs, and whether they were at risk of re-infection, was not addressed. An additional source of uncertainty is that even if NAbs are present, viruses such as HIV can evade NAb responses.
Studies have indicated that NAbs to the original SARS virus (the predecessor to the current SARS-CoV-2) can remain active for two years and are gone after six years. Nevertheless, memory cells including memory B cells and memory T cells can last much longer and may have the ability to reduce reinfection severity.
A point-of-care test in Peru; a blood droplet is collected with a pipette.
A rapid diagnostic test showing IgG and IgM antibody reactions: the left tray shows a negative result, the right a positive one.
A home test with a positive result; the "C" line is the control and the "T" line is the test.
Other tests
Sniff tests
Sudden loss of smell can be used to screen people on a daily basis
for COVID-19. A study by the National Institutes of Health showed that
those infected with SARS-CoV-2 could not smell a 25% mixture of ethanol
and water.
Because various conditions can lead to the loss of the sense of smell,
a sniff test would not be definitive but indicate the need for a PCR
test. Because the loss of the sense of smell shows up before other
symptoms, there has been a call for widespread sniff testing.
Health care bureaucracies have generally ignored sniff tests even
though they are quick, easy and capable of being self-administered
daily. This has led some medical journals to write editorials supporting
the adoption of sniff testing.
Imaging
Typical visible features on CT initially include bilateral multilobar ground-glass opacities with a peripheral or posterior distribution. Some early studies reported that CT identified COVID-19 with greater sensitivity than initial RT-PCR testing.
Chest X-rays, computed tomography scans and ultrasounds are all ways the coronavirus disease can be detected.
Chest X-ray machines are lightweight and portable, and are typically more available than polymerase chain reaction testing or computed tomography scanners. An exposure takes only about 15 seconds per patient. This makes the chest X-ray readily accessible and inexpensive, with a quick turnaround time, and it can be a crucial piece of clinical equipment in the detection of coronavirus disease.
Computed tomography scans produce 3D images that can be viewed from various angles. CT is not as widely available as chest X-ray, but still takes only about 15 minutes per patient. Computed tomography is a routine scanning method for pneumonia diagnosis, and can therefore also be used to diagnose coronavirus disease. CT scans may also help with ongoing illness monitoring throughout treatment. Studies reported that patients with low-grade symptoms and high body temperatures showed significant lung involvement on chest CT, underscoring how important chest CT can be for determining the severity of a coronavirus disease infection.
Ultrasound is another tool that can detect coronavirus disease. An ultrasound is a type of imaging exam that produces images using sound waves. Unlike CT scans and X-rays, ultrasound does not use radiation; it is also inexpensive, simple to use, and repeatable. Using a hand-held mobile machine, ultrasound examinations can be performed in a variety of healthcare settings.
There are some downsides to using imaging, however. The equipment needed for computed tomography scans is not available in most hospitals, making CT less practical than some other tools used for detection of the coronavirus disease.
One of the difficult tasks in a pandemic is manually inspecting each
report, which takes numerous radiology professionals and time.
Early studies of using chest computed tomography scans for diagnosing coronavirus had several problems: they were skewed toward severe and hospitalized cases, the criteria for performing a chest CT scan were not defined, and positive chest CT results were not clearly characterized, so CT findings attributed to coronavirus could not be reliably distinguished from those of other conditions.
In a typical clinical setting, chest imaging is not advised for routine screening of COVID-19, and patients with asymptomatic to mild symptoms are not recommended to be assessed via chest CT. However, imaging remains crucial, particularly when determining complications or disease progression. Chest imaging is also not always the first route to take with patients who have high risk factors for COVID: in high-risk patients with mild symptoms, chest imaging findings were often limited. Although a computed tomography scan is a strong tool in the diagnosis of COVID-19, it is insufficient to identify COVID-19 alone, owing to poor specificity and the difficulty radiologists may experience in distinguishing COVID-19 from other viral pneumonias on chest CT.
Serology (CoLab score) tests
The standard blood test (quick scan) taken at the emergency room measures a range of values. From this quick scan, the CoLab score is calculated with an algorithm based on the changes the coronavirus causes in the blood. The software is intended for use in emergency rooms to quickly rule out the presence of the disease in incoming patients. A non-negative result is followed by a PCR (polymerase chain reaction) or LAMP (loop-mediated isothermal amplification) test.
Breath tests
The breath test by a Coronavirus breathalyzer
is a pre-screening test for people who have no or mild symptoms of
COVID-19. A non-negative result is followed by a PCR or LAMP test.
Animals
In May 2021, Reuters reported that Dutch researchers at Wageningen University
had shown that trained bees could detect the virus in infected samples
in seconds and this could benefit countries where test facilities are in
short supply.
A two-month study by the Necker-Cochin hospital in Paris, in conjunction with the French national veterinary school, reported in May 2021 that dogs were more reliable than current lateral flow tests.
Researchers in Paris in March 2022 reported in a preprint not yet peer-reviewed that trained dogs were very effective for rapidly detecting the presence of SARS-CoV-2 in people, whether displaying symptoms or not. The dogs were presented with sweat samples to smell
from 335 people, of whom 78 with symptoms and 31 without tested positive
by PCR. The dogs detected 97% of the symptomatic and 100% of the
asymptomatic infections. They were 91% accurate at identifying
volunteers who were not infected, and 94% accurate at ruling out the
infection in people without symptoms. The authors said "Canine testing
is non-invasive and provides immediate and reliable results. Further
studies will be focused on direct sniffing by dogs to evaluate sniffer
dogs for mass pre-test in airports, harbors, railways stations, cultural
activities or sporting events."
Functional assays
Tollotest is a molecular test that detects the activity of a SARS-CoV-2 protease, which is a biomarker for active infection.
Timeline of total number of tests in different countries
In January 2020, scientists from China published the first genetic sequences of SARS-CoV-2 via virological.org, a "hub for prepublication data designed to assist with public health activities and research".
Researchers around the world used that data to build molecular tests
for the virus. Antigen- and antibody-based tests were developed later.
Even once the first tests were created, the supply was limited.
As a result, no countries had reliable data on the prevalence of the
virus early in the pandemic. The WHO and other experts called for ramping up testing as the best way to slow the spread of the virus. Shortages of reagent and other testing supplies became a bottleneck for mass testing in the EU, the UK and the US. Early tests also encountered problems with reliability.
Testing protocols
Drive-through testing
In drive-through
testing, the person undergoing testing remains in a vehicle while a
healthcare professional approaches the vehicle and obtains a sample, all
while taking appropriate precautions such as wearing personal protective equipment (PPE). Drive-through centers helped South Korea accelerate its testing program.
Home collection
A Randox PCR home test kit in the UK, showing the swab and the multi-layer packaging used to deliver it to the lab.
A USPS package containing COVID-19 tests from the fifth round of free US distributions in the fall of 2023, with instructions regarding FDA extensions of test expiration dates.
In Hong Kong test subjects can stay home and receive a specimen tube. They spit into it, return it and later get the result.
Additionally, by the fall of 2023, the United States had conducted six
rounds of mailing free at-home COVID-19 tests to households nationwide.
The rapid antigen tests, while less accurate than PCR tests, did not
require mailing the tests back to labs for analysis.
Pooled testing
Pooled testing can improve turnaround time by combining a number of samples to be tested together, as sketched below. If the pool result is negative, all samples in the pool are negative. If the pool result is positive, each sample must be tested individually.
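The saving can be estimated with the classic two-stage (Dorfman) pooling arithmetic. The sketch below assumes independent infections and a perfectly accurate test, neither of which holds exactly in practice:

```python
# Expected number of tests per person under two-stage (Dorfman)
# pooling, assuming independent infections and a perfect test.
def tests_per_person(pool_size: int, prevalence: float) -> float:
    p_pool_negative = (1 - prevalence) ** pool_size
    # One pooled test always; pool_size individual retests if positive.
    expected_tests = 1 + (1 - p_pool_negative) * pool_size
    return expected_tests / pool_size

for n in (5, 10, 32, 64):
    print(n, round(tests_per_person(n, prevalence=0.01), 3))
# At 1% prevalence, pools of about 10 need roughly 0.2 tests per
# person, a five-fold saving; very large pools lose the advantage.
```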
In Israel, researchers at Technion and Rambam Hospital
developed a method for testing samples from 64 patients simultaneously,
by pooling the samples and only testing further if the combined sample
was positive. Pool testing was then adopted in Israel, Germany, Ghana, South Korea, Nebraska, China and the Indian states of Uttar Pradesh, West Bengal, Punjab, Chhattisgarh and Maharashtra.
Open source, multiplexed designs released by Origami Assays can test as many as 1122 patient samples using only 93 assays. These balanced designs can be run in small laboratories without robotic liquid handlers.
Multi-tiered testing
One study proposed a rapid immune response assay as a screening test,
with a confirmatory nucleic acid test for diagnosis, followed by a
rapid antibody test to determine course of action and assess population
exposure/herd immunity.
Required volume
Required testing levels are a function of disease spread. The more cases there are, the more tests are needed to manage the outbreak. COVID-19 tends to grow exponentially at the beginning of an outbreak, meaning that the number of required tests initially also grows exponentially. If testing is properly targeted and grows more rapidly than cases, the outbreak can be contained.
WHO recommends increasing testing until fewer than 10% are positive in any given jurisdiction.
United States
Number of tests done per day in the US as of April 2020, from CDC and public health labs; data were incomplete due to reporting lag, and testing at private labs was not shown, though its total exceeded 100,000 per day by 27 March.
Economist Paul Romer
reported that the US has the technical capacity to scale up to
20 million tests per day, which is his estimate of the scale needed to
fully remobilize the economy. The Edmond J. Safra Center for Ethics estimated on 4 April 2020 that this capacity could be available by late July 2020. Romer pointed to single-molecule real-time sequencing equipment from Pacific Biosciences and to the Ion Torrent Next-Generation Sequencing equipment from ThermoFisher Scientific.
According to Romer, "Recent research papers suggest that any one of
these has the potential to scale up to millions of tests per day." This
plan requires removing regulatory hurdles. Romer estimated that
$100 billion would cover the costs.
Romer also claimed that high test accuracy is not required if
tests are administered frequently enough. He ran model simulations in
which 7% of the population is tested every day using a test with a 20% false negative rate and a 1% false positive
rate. The average person would be tested roughly every two weeks. Those
who tested positive would go into quarantine. Romer's simulation
indicated that the fraction of the population that is infected at any
given time (known as the attack rate)
peaks at roughly 8% in about thirty days before gradually declining, in most runs reaching zero at 500 days, with cumulative prevalence remaining below 20%.
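A deterministic toy model in the spirit of that description illustrates the mechanism. This is not Romer's simulation; the transmission and recovery rates are assumptions chosen purely for illustration, and only the testing fraction and false-negative rate come from the text:

```python
# Toy compartment model: S(usceptible), I(nfectious, free),
# Q(uarantined infectious), R(ecovered), as population fractions.
# beta and gamma are assumed; test_frac and sensitivity are from
# the scenario above (7% tested daily, 20% false negatives).
S, I, Q, R = 0.999, 0.001, 0.0, 0.0
beta, gamma = 0.3, 0.1
test_frac, sensitivity = 0.07, 0.80

for day in range(500):
    new_infections = beta * S * I            # quarantined people do not transmit
    detected = test_frac * sensitivity * I   # infectious people caught by testing
    recovered_I, recovered_Q = gamma * I, gamma * Q
    S -= new_infections
    I += new_infections - detected - recovered_I
    Q += detected - recovered_Q
    R += recovered_I + recovered_Q
    if day % 100 == 0:
        print(day, round(I + Q, 4))   # currently infected fraction
```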
Snapshot mass-testing
A study found that, despite possibly suboptimal implementation, the snapshot mass-testing approach conducted by Slovakia, in which roughly 80% of its population was tested for COVID-19 within a weekend at the end of October 2020, was highly efficacious, decreasing observed prevalence by 58% within one week, and by 70% compared to a hypothetical scenario of no snapshot mass-testing. The significant reduction resulted from a set of complementary lockdown and quarantine measures whereby citizens who tested positive were quarantined synchronously in the weeks afterwards.
The country increased other countermeasures at the same time so the
inference was questionable. In the following months Slovakia's COVID-19
death rate per population increased to among the highest in the world.
Research on mass testing suggests that people who test negative think it
is safe to travel and come in contact with infected people. In the U.S.
the tracing system was overwhelmed. On 70 percent of days there were
more cases than tracers had time to contact and people contacted were
often uncooperative.
Wastewater surveillance
As of August 2020, the WHO recognizes wastewater surveillance of
SARS-CoV-2 as a potentially useful source of information on the
prevalence and temporal trends of COVID-19 in communities, while
highlighting that gaps in research such as viral shedding
characteristics should be addressed. Such aggregative testing may have detected early cases. Studies show that wastewater-based epidemiology has the potential for an early warning system and monitoring for COVID-19 infections.
This may prove particularly useful once large shares of regional
populations are vaccinated or recovered and do not need to conduct rapid
tests while in some cases being infectious nevertheless.
Available tests
Countries around the world developed tests independently and in partnership with others.
Tests developed in China, France, Germany, Hong Kong, Japan, the
United Kingdom, and the US targeted different parts of the viral genome.
WHO adopted the German system for manufacturing kits sent to low-income
countries without the resources to develop their own.
PowerChek Coronavirus looks for the "E" gene shared by all beta coronaviruses, and the RdRp gene specific to SARS-CoV-2.
Abbott Laboratories' ID Now nucleic acid test uses isothermal amplification technology. The assay amplifies a unique region of the virus's RdRp gene; the resulting copies are then detected with "fluorescently-labeled molecular beacons". The test kit uses the company's "toaster-size" ID Now device, which is widely deployed in the US. The device can be used in laboratories or in point of care settings, and provides results in 13 minutes or less.
Primerdesign offers its Genesig Real-Time PCR test system. Roche Molecular Systems offers the Cobas 6800/8800 systems; they are offered among others by the United Nations.
Antigen tests are readily available worldwide and have been approved by several health regulators.
Quidel's "Sofia2 SARS Antigen FIA" is a lateral flow test that uses monoclonal antibodies to detect the virus's nucleocapsid (N) protein. The result is read out by the company's Sofia2 device using immunofluorescence.
The test is simpler and cheaper but less accurate than nucleic acid
tests. It can be deployed in laboratories or at point of care and gives
results in 15 minutes.
A false negative result occurs if the sample contains antigen but at a level below the test's detection limit, requiring confirmation with a nucleic acid test.
The Innova SARS-CoV-2 Antigen Rapid Qualitative Test was never approved for use in the United States, but was being sold by the company anyway. The FDA inspected Innova facilities in California in March and April 2021, and found inadequate quality assurance of tests manufactured in China. On 23 April 2021, the company issued a recall. The FDA warned consumers to return or destroy the devices because the rates of false positives and false negatives found in clinical trials were higher than the rates claimed on the packaging. Over 1 billion tests from the company had been distributed in the UK, with £3 billion in funding as part of Operation Moonshot, and the MHRA authorized exceptional use until at least 28 August 2021.
Concerned experts pointed out that accuracy dropped significantly when
screening was conducted by the public instead of by a medical
professional, and that the test was not designed to screen asymptomatic
people.
A 2020 study found that the test detected 79% of positive cases when used by laboratory scientists, but only 58% when used by the general public and 40% when used for city-wide screening in Liverpool.
Serology (antibody) tests
Antibodies are usually detectable 14 days after the onset of the
infection. Multiple jurisdictions survey their populations using these
tests. The test requires a blood sample.
Private US labs including Quest Diagnostics and LabCorp offer antibody testing upon request.
Certain antibody tests are available in several European countries and also in the US.
A summary review in BMJ
has noted that while some "serological tests ... might be cheaper and
easier to implement at the point of care [than RT-PCR]", and such
testing can identify previously infected individuals, "caution is
warranted ... using serological tests for ... epidemiological
surveillance". The review called for higher quality studies assessing
accuracy with reference to a standard of "RT-PCR performed on at least
two consecutive specimens, and, when feasible, includ[ing] viral
cultures." CEBM researchers have called for in-hospital 'case definition' to record "CT lung findings and associated blood tests" and for the WHO to produce a "protocol to standardise the use and interpretation of PCR" with continuous re-calibration.
Accuracy
The location of sample collection impacts sensitivity for COVID-19 in 205 Wuhan patients
Sample source: positive rate
Bronchoalveolar lavage fluid specimens: 93% (14/15)
Sputum: 72% (75/104)
Nasal swabs: 63% (5/8)
Fibrobronchoscope brush biopsy: 46% (6/13)
Pharyngeal swabs: 32% (126/398)
Feces: 29% (44/153)
Blood: 1% (3/307)
Accuracy is measured in terms of sensitivity and specificity. Test errors can be false positives (the test is positive, but the virus is not present) or false negatives (the test is negative, but the virus is present). In a study of over 900,000 rapid antigen tests, false positives were found to occur at a rate of 0.05%, or 1 in 2,000.
Sensitivity indicates whether the test accurately identifies whether
the virus is present. Each test requires a minimum level of viral load
in order to produce a positive result. A 90% sensitive test will
correctly identify 90% of infections, missing the other 10% (a false
negative). Even relatively high sensitivity rates can produce high rates
of false negatives in populations with low incidence rates.
Low-specificity tests have a low positive predictive value (PPV) when prevalence is low. For example, suppose prevalence is 5%. Testing 100 people at random using a test that has a specificity of 95% would yield on average 5 people who are actually negative but who would incorrectly test positive. Since 5% of the subjects actually are positive, another five would also test positive correctly, totaling 10 positive results. Thus, the PPV is 50%, an outcome no different from a coin toss. In this situation, assuming that the result of a second test is independent of the first, retesting those with a first positive result increases the PPV to 94.5%, meaning that only 5.5% of the second positive results would be incorrect: on average less than 1 incorrect result.
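The same arithmetic is a direct application of Bayes' theorem. The figures quoted above are consistent with an assumed sensitivity of 90% alongside the stated 95% specificity and 5% prevalence; a minimal Python sketch under those assumptions:

```python
# Positive predictive value (PPV) after one or more consecutive,
# independent positive results. The 0.90 sensitivity is an assumption
# consistent with the 94.5% retest figure quoted above.
def ppv(prevalence: float, sensitivity: float, specificity: float,
        positives: int = 1) -> float:
    true_pos = prevalence * sensitivity ** positives
    false_pos = (1 - prevalence) * (1 - specificity) ** positives
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.05, 0.90, 0.95), 3))               # ~0.49, near a coin toss
print(round(ppv(0.05, 0.90, 0.95, positives=2), 3))  # ~0.945 after a retest
```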
Causes of test error
The time course of infection affects the accuracy of some tests.
Samples may be collected before the virus has a chance to establish
itself or after the body has begun to eliminate it. A May 2020 review of RT-PCR testing found that the median probability of a false-negative result decreased from 100% on day 1 to 67% on day 4. On the day of symptom onset, the probability was 38%, which decreased to 20% three days later.
PCR-based test
Detection of SARS-CoV-2 by nasal swab over six weeks in patients who experienced mild to moderate illness
RT-PCR is the most commonly-used diagnostic test.
PCR tests by nasopharyngeal swab have a sensitivity of 73%; specificity has not been systematically determined, owing to the lack of PCR studies with a control group.
In one study sensitivity was highest at week one (100%), followed
by 89.3%, 66.1%, 32.1%, 5.4% and zero by week six since symptom onset.
Sensitivity is also a function of the number of PCR cycles, as
well as time and temperature between sample collection and analysis. A cycle threshold of 20 cycles would be adequate to detect SARS-CoV-2 in a highly infective person. Cycle thresholds above 34 are increasingly likely to give false positives outside of high biosafety level facilities.
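Because each cycle ideally doubles the template, the gap between two cycle thresholds maps directly to a fold difference in starting viral load. A minimal illustration, assuming perfect doubling (real amplification efficiencies are below 100%):

```python
# Relative viral load implied by two cycle thresholds (Ct), under the
# idealized assumption of exact doubling each cycle.
def fold_difference(ct_strong: float, ct_weak: float) -> float:
    """How much more starting RNA the low-Ct (strong) sample contains."""
    return 2 ** (ct_weak - ct_strong)

# A highly infective sample at Ct 20 versus a borderline one at Ct 34:
print(fold_difference(20, 34))   # 2**14 = 16384-fold more starting RNA
```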
In July 2020, Dr. Anthony Fauci
of the US NIH indicated that positive results obtained from RT-PCR
tests run at more than 35 cycles were almost always "just dead
nucleotides".
In August 2020, it was reported that, "In three sets of testing data
that include cycle thresholds, compiled by officials in Massachusetts,
New York and Nevada ... most tests set the limit at 40 [cycles], a few
at 37" and that the CDC was examining the use of cycle threshold
measures "for policy decisions,"
On 21 July 2021, the CDC, in their "Real-Time RT-PCR Diagnostic Panel: Instructions for Use", indicated test results should be determined at 40 cycles.
A Dutch CDC-led laboratory investigation compared 7 PCR kits. Test kits made by BGI, R-Biopharm AG, KH Medical and Seegene showed high sensitivity.
High sensitivity kits are recommended to assess people without
symptoms, while lower sensitivity tests are adequate when diagnosing
symptomatic patients.
The University of Oxford's Centre for Evidence-Based Medicine (CEBM) has pointed to mounting evidence
that "a good proportion of 'new' mild cases and people re-testing
positives via RT-PCR after quarantine or discharge from hospital are not
infectious, but are simply clearing harmless virus particles which
their immune system has efficiently dealt with", and have called for "an
international effort to standardize and periodically calibrate
testing".
On 7 September, the UK government issued "guidance for procedures to be
implemented in laboratories to provide assurance of positive SARS-CoV-2
RNA results during periods of low prevalence, when there is a reduction
in the predictive value of positive test results".
Isothermal nucleic acid amplification test
One study reported that the ID Now COVID-19 test showed sensitivity
of 85.2%. Abbott responded that the issue could have been caused by
analysis delays. Another study rejected the test in their clinical setting because of this low sensitivity.
Confirmatory testing
The WHO recommends countries that do not have testing capacity and
national laboratories with limited experience on COVID-19 send their
first five positives and the first ten negative COVID-19 samples to one
of the 16 WHO reference laboratories for confirmatory testing.
Out of the sixteen reference laboratories, seven are in Asia, five in
Europe, two in Africa, one in North America and one in Australia.
National or regional responses
Iceland
Iceland managed the pandemic with aggressive contact tracing, inbound
travel restrictions, testing, and quarantining, but with less
aggressive lock-downs.
Italy
Researchers tested the entire population of Vo',
the site of Italy's first COVID-19 death. They tested about 3,400
people twice, at an interval of ten days. About half the people testing
positive had no symptoms. All discovered cases were quarantined. Along
with restricting travel to the commune, new infections were eliminated.
Japan
Unlike other Asian countries, Japan did not experience a pandemic of SARS or MERS, so the country's PCR testing system was not well developed. At the beginning, Japan preferentially tested patients with severe illness and their close contacts. Japan's Novel Coronavirus Expert Meeting chose cluster measures to identify infection clusters. The Expert Meeting analyzed the outbreak from Wuhan
and identified conditions leading to clusters (closed spaces, crowded
spaces and close-contact), and asked people to avoid them.
In January, contact tracers took action shortly after the first
infection was found. Only administrative tests were carried out at
first, until insurance began covering PCR tests on 6 March. Private
companies began to test, and the test system gradually expanded.
On 3 April, those with positive tests were legally permitted to
recuperate at home or in a hotel if they had asymptomatic or mild
illness, ending the hospital bed shortage. The first wave (from China) was contained, but a second wave (caused by returnees from Europe and the US) in mid-March led to spreading infection in April.
On 7 April, Japan declared a state of emergency (less strict than a lockdown, because it did not block cities or restrict outings). On 13 May, antigen test kits became covered by insurance, and were combined with a PCR test for diagnosis.
Japan's PCR test count per capita remained far smaller than in
some other countries even though its positive test rate was lower.
Excess mortality was observed in March.
The Expert Meeting stated, "The Japanese health care system originally
carries out pneumonia surveillance, allowing it to detect most of the
severely ill patients who develop pneumonia. There are a large number of
CT scanners in Japan and they have spread to small hospitals all over
the country, so pneumonia patients are rarely missed. In that sense, it
meets the same standards as other countries that mainly carry out PCR
tests." The group recommended using CT scans data and doctor's findings for diagnosis.On the Diamond Princess cruise ship, many people who initially tested
negative later tested positive. Half of coronavirus-positives there who
remained mild or asymptomatic had pneumonia findings on CT scans and
their CT image showed a frosted glass shadow that is characteristic of
infection.
As of 18 July, Japan's daily PCR testing capacity was about 32,000, more than three times the roughly 10,000 of April. When antigen tests are added, the capacity is about 58,000. The number of tests per 1,000 people was about 27 times higher in the United States than in Japan, 20 times higher in the UK, 8 times higher in Italy, and twice as high in South Korea (as of 26 July).
The number of people infected with coronavirus and the number of inpatients increased in July, but the number of serious cases did not. This is thought to reflect more adequate testing of those infected in July compared to April. In April, testing could not keep up with the increase in infections and the testing criteria were strict, so the test positive rate exceeded 30% at the peak, meaning that many infections went untested. Severe cases are thought to have been tested preferentially during the first wave, even though there were many mild cases and asymptomatic carriers, mainly among the young. In other words, the strengthened testing system made it possible to grasp the actual extent of infection much better than before.
At the end of July, accommodation facilities for mild and asymptomatic carriers became full, and the authorities requested hospitals to prepare beds for mild cases. However, with hospital beds occupied by patients with mild symptoms, it became difficult to treat patients with other illnesses and to maintain the ICU system, including its staffing.
Russia
In April 2020, Russia tested 3 million people and had 183,000 positive results. On 28 April Anna Popova, head of Federal Service for Surveillance in Healthcare
(Roszdravnadzor) stated that 506 laboratories were testing; that 45% of
those who tested positive had no symptoms; that 5% of patients had a
severe form; and 40% of infections were from family members. Illness
improved from six days to one day after symptoms appeared. Antibody
testing was carried out on 3,200 Moscow doctors, finding 20% immunity.
Singapore
With contact tracing, inbound travel restrictions, testing, and quarantining, Singapore arrested the initial spread without complete lockdown.
Slovakia
In October 2020, Slovakia tested 3.62 million people in a weekend, from a population of 5.4 million, representing 67% of the total (or 82% of the adult population); 38,359 tested positive, representing 1.06% of those tested. The government considered that the mass test would significantly assist in controlling the virus and avoiding a lockdown, and said it might repeat the exercise at a later date.
South Korea
South Korea's broad testing approach helped reduce spread. Testing
capacity, largely in private sector labs, was built up over several
years by the South Korean government in the early 2000s.
The government exploited the resident registration number
(RRN) system. Authorities mobilized young men who were eligible for
military service as social service agents, security and public health
doctors. Public health doctors were mainly dispatched to public health
centers and life treatment centers where mildly ill patients were
accommodated. They performed PCR tests and managed mild patients. Social
service agents worked in pharmacies to fill staff shortages. Korea's rate of 10,000 PCR tests per million residents was the world's highest as of 13 April, rising to 20,000 by mid-June. Twenty-seven Korean companies exported
test kits worth $48.6 million in March, and were asked to provide test
kits or humanitarian assistance by more than 120 countries. Korean
authorities set up a treatment center to isolate and manage patients
with asymptomatic and minor illnesses in one facility in order to vacate
hospital beds for the more severely ill.
Centers were sited mainly at national facilities and corporate
training centers. The failure of Korea's MERS quarantine in May 2015 left Korea more prepared for COVID-19 than countries that did not face that outbreak. Then-President Park Geun-hye
allowed Korean CDC-approved private sector testing for infectious
diseases in 2016. Korea already had a system for isolating, testing and
treating infectious disease patients separately from others. Patients
with respiratory illness but no epidemiological relevance were treated
at the National Hospital, and those with epidemiological relevance were
treated at selected clinics.
Korea established a large-scale drive-through/walk-through testing program. However, the most common method was "mobile examination". In Daegu City, 54% of samples had been collected at home or in hospital by 23 March. Collecting samples door-to-door avoided the risk of travel by possibly infected patients, but required additional staff.
Korea solved the problem by drafting more than 2,700 public insurance
doctors.
The government disclosed personal information to the public via
KCDC without patient consent. The authorities used digital surveillance
to trace possible spread.
United Arab Emirates
In January 2021, the COVID-19 testing results of the UAE came under scrutiny, as Denmark
suspended the Emirati flights for five days. The European nation said
that it barred the flights from the UAE due to growing suspicion of
irregularities in the testing process being followed in the Gulf nation.
Denmark's Minister of Transport, Benny Engelbrecht
said that they were taking time to ensure that the negative tests of
travelers from the Emirates were a real screening carried out
appropriately.
United States
New York State's control measures consisted of PCR tests,
stay-at-home measures and strengthening the healthcare system. On 29
February before its first case, the state allowed testing at the
Wadsworth Center. They managed to convince the CDC to approve tests at
state laboratories and the FDA to approve a test kit. As of 13 March the
state was conducting more than 1,000 daily tests, growing to 10,000/day
on 19 March. In April, the number exceeded 20,000. Many people queued
at hospitals to get tested. On 21 March New York City health officials
directed medical providers to test only those entering the hospital, for
lack of PPE.
Following an outbreak, 94% of the 4,800 aircraft carrier crew were
tested. Roughly 60 percent of the 600-plus sailors who tested positive
were asymptomatic. Five infected sailors who completed quarantine subsequently developed flu-like symptoms and again tested positive.
Nevada
In 2020, Nevada received a donation of 250,000 COVID-19 testing kits produced by China's leading genetics company, BGI Group. A UAE-based firm owned by Tahnoun bin Zayed Al Nahyan, Group 42, had partnered with BGI Group to supply the testing kits to Nevada. However, the US Department of Homeland Security and the State Department warned Nevada hospitals not to use the Chinese-made testing kits, citing concerns about the involvement of the Chinese government, test accuracy, and patient privacy.
Testing statistics by country
Testing strategies vary by country and over time, with some countries testing very widely, while others have at times focused narrowly on only testing the seriously ill.
The country that tests only people showing symptoms will have a higher
figure for "Confirmed"/"tested" than the country that also tests others.
If two countries are alike in every respect, including which people
they test, the one that tests more people will have a higher "Confirmed /
population". Studies have also found that countries that test more,
relative to the number of deaths, have lower estimated case fatality
rates and younger age distributions of cases.
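A toy calculation makes the denominator effect concrete; the numbers below are invented for two hypothetical countries and carry no empirical weight:

```python
# Two hypothetical countries with the same population but different
# testing strategies. All figures are invented for illustration.
narrow = {"tested": 10_000, "confirmed": 2_000, "population": 1_000_000}
broad = {"tested": 100_000, "confirmed": 5_000, "population": 1_000_000}

for name, c in (("narrow testing", narrow), ("broad testing", broad)):
    print(name,
          f"confirmed/tested = {c['confirmed'] / c['tested']:.1%},",
          f"confirmed/population = {c['confirmed'] / c['population']:.2%}")
# Narrow testing inflates confirmed/tested (20.0% vs 5.0%), while
# broader testing finds more cases and raises confirmed/population.
```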