Carbon monitoring refers to tracking how much carbon dioxide or methane
is produced by a particular activity at a particular point in time. For
example, it may refer to tracking methane emissions from agriculture, or
carbon dioxide emissions from land use changes, such as deforestation,
or from burning fossil fuels, whether in a power plant, automobile, or
other device. Because carbon dioxide is the greenhouse gas emitted in
the largest quantities, and methane is an even more potent greenhouse
gas, monitoring carbon emissions is widely seen as crucial to any effort
to reduce emissions and thereby slow climate change. Monitoring carbon
emissions is key to the cap-and-trade programs currently in use in
Europe and California, and will be necessary for any such program in
the future, as well as for international commitments such as the Paris Agreement. The lack of reliable sources of consistent data on carbon emissions is a significant barrier to efforts to reduce emissions.
Data sources
Sources of such emissions data include:
Carbon Monitoring for Action (CARMA)
– An online database provided by the Center for Global Development
that includes plant-level emissions for more than 50,000 power plants
and 4,000 power companies around the world, as well as the total
emissions from power generation of countries, provinces (or states), and
localities. Carbon emissions from power generation account for about 25
percent of global CO2 emissions.
ETSWAP
- An emissions monitoring and reporting system currently in use in the
UK and Ireland, which enables relevant organizations to monitor, verify
and report carbon emissions, as is required by the EU ETS (European Union Emissions Trading Scheme).
FMS - A system used in Germany to record and calculate annual emission reports for plant operators subject to the EU ETS.
In the United States
Almost
all climate change regulations in the US have stipulations to reduce
carbon dioxide and methane emissions by economic sector, so being able
to accurately monitor and assess these emissions is crucial to being
able to assess compliance with these regulations.
Emissions estimates at the national level have been shown to be fairly
accurate, but at the state level there is still much uncertainty.
As part of the Paris Agreement negotiated at COP21, the US pledged to decrease its GHG
emissions by 26–28% relative to 2005 levels by 2025. To meet this pledge, it is necessary to quantify emissions from specific source sectors.
A source sector is a sector of the economy that emits a particular
greenhouse gas, for example methane emissions from the oil and gas industry,
which the US has pledged to decrease by 40–45% relative to 2012 levels
by 2025 as a more specific action towards achieving its Paris Agreement contribution.
Currently, most governments, including the US government,
estimate carbon emissions with a "bottom-up" approach, using emission
factors which give the rate of carbon emissions per unit of a certain
activity, and data on how much of that activity has taken place.
For example, an emission factor can be determined for the amount of
carbon dioxide emitted per gallon of gasoline burned, and this can be
combined with data on gasoline sales to get an estimate of carbon
emissions from light duty vehicles.
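As a minimal sketch of this bottom-up arithmetic, the Python snippet below multiplies an activity figure by an emission factor; the factor of roughly 8.9 kg CO2 per gallon of gasoline and the sales figure are illustrative assumptions rather than values from any particular inventory.

```python
# Bottom-up estimate: emissions = activity data x emission factor.
# The factor (~8.9 kg CO2 per US gallon of gasoline) and the sales figure
# are illustrative assumptions only.

EMISSION_FACTOR_KG_CO2_PER_GALLON = 8.9  # assumed, approximate value

def bottom_up_emissions(activity_amount: float, emission_factor: float) -> float:
    """Return estimated emissions as activity data multiplied by an emission factor."""
    return activity_amount * emission_factor

gallons_sold = 1_000_000  # hypothetical gasoline sales figure
co2_kg = bottom_up_emissions(gallons_sold, EMISSION_FACTOR_KG_CO2_PER_GALLON)
print(f"Estimated CO2 from light-duty vehicles: {co2_kg / 1000:.0f} tonnes")
```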
Other examples include determining the number of cows in various
locations, or the mass of coal burned at power plants, and combining
these data with the appropriate emission factors to estimate methane or
carbon dioxide emissions. Sometimes "top-down" methods are used to
monitor carbon emissions. These involve measuring the concentration of a
greenhouse gas in the atmosphere and using these measurements to
infer the distribution of emissions that produced the observed
concentrations.
Accounting by sector can be complicated when there is a chance of
double counting. For example, when coal is gasified to produce
synthetic natural gas, which is then mixed with natural gas and burned
at a gas-fired power plant, the resulting emissions must be assigned to
only one sector: if they are reported under the natural gas sector, they
must be subtracted from the coal sector so that they are not counted
twice.
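A minimal sketch of this kind of reallocation, with entirely hypothetical sector totals, might look like the following; the point is simply that emissions moved into one sector must be removed from the other so the overall total is unchanged.

```python
# Hypothetical sector totals in kt CO2; the reallocation keeps the overall
# total constant while moving emissions from the coal sector to natural gas.

emissions_kt = {"coal": 500.0, "natural_gas": 300.0}  # assumed figures

def reallocate(emissions: dict, amount: float, from_sector: str, to_sector: str) -> dict:
    """Move an emissions amount from one sector to another without changing the total."""
    adjusted = dict(emissions)
    adjusted[from_sector] -= amount
    adjusted[to_sector] += amount
    return adjusted

# 40 kt CO2 from gasified coal is reported under natural gas, so it is
# subtracted from the coal sector to avoid double counting.
adjusted = reallocate(emissions_kt, 40.0, "coal", "natural_gas")
assert sum(adjusted.values()) == sum(emissions_kt.values())
print(adjusted)
```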
NASA Carbon Monitoring System (CMS)
NASA Carbon Monitoring System (CMS) is a climate research program
created by a congressional order in 2010 that provides grants of about
$500,000 a year for research projects that measure carbon dioxide and
methane emissions.
Using instruments on satellites and aircraft, CMS-funded research
projects provide data to the United States and other countries that help
track the progress of individual nations toward their Paris Agreement
emission-reduction commitments. For example, CMS projects measured carbon
emissions from deforestation and forest degradation. CMS projects "stitched together observations of sources and sinks into high-resolution models of the planet's flows of carbon." The 2019 federal budget specifically assured funding for CMS after the Trump administration ended funding in April 2018.
In the European Union
As part of the European Union Emission Trading Scheme (EU-ETS),
carbon monitoring is necessary in order to ensure compliance with the
cap-and-trade program. This carbon monitoring program has three main
components: atmospheric carbon dioxide measurements, bottom-up carbon
dioxide emissions maps, and an operational data-assimilation system to
synthesize the information from the first two components.
The top-down, atmospheric measurement approach involves satellite
data and in-situ measurements of carbon dioxide concentrations, as well
as atmospheric models that model atmospheric transport of carbon
dioxide. These have limited ability to determine carbon dioxide
emissions at highly resolved spatial scales and typically cannot
represent scales finer than a 1 km grid. The models must also resolve
the fluxes of carbon dioxide from anthropogenic sources like fossil fuel
burning, and from natural interactions like terrestrial ecosystems and
the ocean. Due to the complexities and limitations of the top-down approach, the EU combines this method with a bottom-up approach.
The current bottom-up data are based on information that is
self-reported by emitters in the trading scheme. However, the EU is
trying to improve this information source and has proposed plans for
improved bottom-up emissions maps, which will have greatly improved
spatial resolution and near real-time updates.
An operational data system to combine the information gathered
from the two aforementioned sources is also planned. The EU hopes that
by the 2030s, this will be operational and enable a highly sophisticated
carbon monitoring program across the European Union.
Satellites
Satellites
can be used to monitor carbon dioxide concentrations from outer space,
and have been shown to be as accurate as Earth-based measurement
systems.
NASA currently operates a satellite named the Orbiting Carbon
Observatory-2 (OCO-2), and Japan operates its own satellite, the
Greenhouse Gases Observing Satellite (GOSAT).
These satellites can provide valuable information to fill in data gaps
from emission inventories. The OCO-2 measured a strong flux of carbon
dioxide over the Middle East, which had not been represented in
emissions inventories, indicating that important sources were being
neglected in bottom-up estimates of emissions.
Both satellites currently have a measurement error of only 0.5%, but
the American and Japanese teams hope to bring that error down to 0.25%.
China launched its own satellite to monitor greenhouse gas
concentrations on Earth, TanSat, in December 2016. It has a three-year
mission planned and takes readings of carbon dioxide concentrations every 16 days.
Environmental monitoring
describes the processes and activities that need to take place to
characterize and monitor the quality of the environment. Environmental
monitoring is used in the preparation of environmental impact assessments, as well as in many circumstances in which human activities carry a risk of harmful effects on the natural environment.
All monitoring strategies and programs have reasons and justifications
which are often designed to establish the current status of an
environment or to establish trends in environmental parameters. In all
cases, the results of monitoring will be reviewed, analyzed statistically,
and published. The design of a monitoring program must therefore have
regard to the final use of the data before monitoring starts.
Air quality monitoring
Air quality monitoring station
Air pollutants are atmospheric substances, both naturally occurring and anthropogenic, which may potentially have a negative impact on the environment and on organism
health. The development of new chemicals and industrial processes
has introduced new pollutants, or elevated levels of existing ones, in the
atmosphere; together with the resulting environmental research and
regulation, this has increased the demand for air quality monitoring.
Air quality monitoring is challenging to enact as it requires the
effective integration of multiple environmental data sources, which
often originate from different environmental networks and institutions.
These challenges require specialized observation equipment and tools to
establish air pollutant concentrations, including sensor networks, geographic information system (GIS) models, and the Sensor Observation Service (SOS), a web service for querying real-time sensor data. Air dispersion models
that combine topographic, emissions, and meteorological data to predict
air pollutant concentrations are often helpful in interpreting air
monitoring data. Additionally, consideration of anemometer
data in the area between sources and the monitor often provides
insights on the source of the air contaminants recorded by an air
pollution monitor.
Air quality monitors are operated by citizens, regulatory agencies, and researchers
to investigate air quality and the effects of air pollution.
Interpretation of ambient air monitoring data often involves a
consideration of the spatial and temporal representativeness of the data gathered, and the health effects associated with exposure to the monitored levels.
If the interpretation reveals concentrations of multiple chemical
compounds, a unique "chemical fingerprint" of a particular air pollution
source may emerge from analysis of the data.
Air sampling
Passive or "diffusive" air sampling depends on meteorological conditions such as wind to diffuse air pollutants to a sorbent
medium. Passive samplers have the advantage of typically being small,
quiet, and easy to deploy, and they are particularly useful in air
quality studies that determine key areas for future continuous
monitoring.
Air pollution can also be assessed by biomonitoring with organisms that bioaccumulate air pollutants, such as lichens, mosses, fungi, and other biomass.
One of the benefits of this type of sampling is that quantitative
information can be obtained by measuring the accumulated compounds,
which are representative of the environment from which they came. However, careful
consideration must be given to choosing the particular organism, how
it is dispersed, and its relevance to the pollutant.
Other sampling methods include the use of a denuder, needle trap devices, and microextraction techniques.
Soil monitoring
Collecting a soil sample in Mexico for pathogen testing
Assessing threats and other risks to soil can be challenging due to a variety of factors, including soil's heterogeneity and complexity, the scarcity of toxicity data, limited understanding of a contaminant's fate, and variability in soil screening levels.
This requires a risk assessment approach and analysis techniques that
prioritize environmental protection, risk reduction, and, if necessary,
remediation methods.
Soil monitoring plays a significant role in that risk assessment, not
only aiding in the identification of at-risk and affected areas but also
in the establishment of baseline background values for soil.
Soil monitoring has historically focused on more classical conditions and contaminants, including toxic elements (e.g., mercury, lead, and arsenic) and persistent organic pollutants (POPs).
Historically, however, testing for these and other aspects of soil has
had its own set of challenges, as sampling is in most cases destructive
in nature, requiring multiple samples over time. Additionally,
procedural and analytical errors may be introduced due to variability
among references and methods, particularly over time.
However, as analytical techniques evolve and new knowledge about
ecological processes and contaminant effects disseminates, the focus of
monitoring will likely broaden over time and the quality of monitoring
will continue to improve.
Soil sampling
The
two primary types of soil sampling are grab sampling and composite
sampling. Grab sampling involves the collection of an individual sample
at a specific time and place, while composite sampling involves the
collection of a homogenized mixture of multiple individual samples at
either a specific place over different times or multiple locations at a
specific time.
Soil sampling may occur at either shallow or deep ground levels, with
collection methods varying with the depth sampled. Scoops,
augers, core barrel and solid-tube samplers, and other tools are used
at shallow ground levels, whereas split-tube, solid-tube, or hydraulic
methods may be used at depth.
Monitoring programs
A portable X-ray fluorescence (XRF) analyzer can be used in the field for testing soils for metal contamination
Soil contamination monitoring
Soil contamination monitoring helps researchers identify patterns and
trends in contaminant deposition, movement, and effect. Human-based
pressures such as tourism, industrial activity, urban sprawl,
construction work, and inadequate agriculture/forestry practices can
contribute to and worsen soil contamination,
leading to the soil becoming unfit for its intended use. Both
inorganic and organic pollutants may make their way to the soil, having a
wide variety of detrimental effects. Soil contamination monitoring is
therefore important to identify risk areas, set baselines, and identify
contaminated zones for remediation. Monitoring efforts may range from
local farms to nationwide efforts, such as those made by China in the
late 2000s,
providing details such as the nature of contaminants, their quantity,
effects, concentration patterns, and remediation feasibility.
Monitoring and analytical equipment will ideally have fast
response times, high levels of resolution and automation, and a certain
degree of self-sufficiency. Chemical techniques may be used to measure toxic elements and POPs using chromatography and spectrometry,
geophysical techniques may assess physical properties of large
terrains, and biological techniques may use specific organisms to gauge
not only contaminant level but also byproducts of contaminant
biodegradation. These techniques and others are becoming increasingly
efficient, and laboratory instrumentation is becoming more precise, resulting in more meaningful monitoring outcomes.
Soil erosion monitoring
Soil erosion monitoring helps researchers identify patterns and
trends in soil and sediment movement. Monitoring programs have varied
over the years, from long-term academic research on university plots to
reconnaissance-based surveys of biogeoclimatic areas. In most methods,
however, the general focus is on identifying and measuring all the
dominant erosion processes in a given area.
Additionally, soil erosion monitoring may attempt to quantify the
effects of erosion on crop productivity, though this is challenging "because of
the many complexities in the relationship between soils and plants and
their management under a variable climate."
Soil salinity monitoring
Soil salinity monitoring helps researchers identify patterns and trends in soil salt content. Both the natural process of seawater intrusion
and the human-induced processes of inappropriate soil and water
management can lead to salinity problems in soil, with up to one billion
hectares of land affected globally (as of 2013).
Salinity monitoring at the local level may look closely at the root
zone to gauge salinity impact and develop management options, whereas at
the regional and national level salinity monitoring may help identify
at-risk areas and aid policymakers in tackling the issue
before it spreads. The monitoring process itself may be performed using technologies such as remote sensing and geographic information systems
(GIS) to identify salinity via greenness, brightness, and whiteness at
the surface level. Direct analysis of soil up close, including the use
of electromagnetic induction techniques, may also be used to monitor soil salinity.
Water quality monitoring
Electrofishing
survey methods use a mild electric shock to temporarily stun fish for
capture, identification and counting. The fish are then returned to the
water unharmed.
Design of environmental monitoring programmes
Water quality
monitoring is of little use without a clear and unambiguous definition
of the reasons for the monitoring and the objectives that it will
satisfy. Almost all monitoring (except perhaps remote sensing)
is in some part invasive of the environment under study and extensive
and poorly planned monitoring carries a risk of damage to the
environment. This may be a critical consideration in wilderness areas or
when monitoring very rare organisms or those that are averse to human
presence. Some monitoring techniques, such as gill netting fish
to estimate populations, can be very damaging, at least to the local
population and can also degrade public trust in scientists carrying out
the monitoring.
Almost all mainstream environmental monitoring projects form
part of an overall monitoring strategy or research field, and these
fields and strategies are themselves derived from the high-level
objectives or aspirations of an organisation. Unless individual
monitoring projects fit into a wider strategic framework, the results
are unlikely to be published and the environmental understanding
produced by the monitoring will be lost.
Parameters
Chemical
Analyzing water samples for pesticides
The range of chemical parameters that have the potential to affect
any ecosystem is very large and in all monitoring programmes it is
necessary to target a suite of parameters based on local knowledge and
past practice for an initial review. The list can be expanded or reduced
based on developing knowledge and the outcome of the initial surveys.
Freshwater environments have been extensively studied for many
years and there is a robust understanding of the interactions between
chemistry and the environment across much of the world. However, as new
materials are developed and new pressures come to bear, revisions to
monitoring programmes will be required. In the last 20 years acid rain, synthetic hormone analogues, halogenated hydrocarbons, greenhouse gases and many others have required changes to monitoring strategies.
Biological
In
ecological monitoring, the monitoring strategy and effort is directed
at the plants and animals in the environment under review and is
specific to each individual study.
However, in more generalised environmental monitoring, many
animals act as robust indicators of the quality of the environment that
they are experiencing or have experienced in the recent past. One of the most familiar examples is the monitoring of numbers of Salmonid fish such as brown trout or Atlantic salmon
in river systems and lakes to detect slow trends in adverse
environmental effects. The steep decline in salmonid fish populations
was one of the early indications of the problem that later became known
as acid rain.
In recent years much more attention has been given to a more
holistic approach in which the ecosystem health is assessed and used as
the monitoring tool itself. It is this approach that underpins the monitoring protocols of the Water Framework Directive in the European Union.
Radiological
Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to ionizing radiation or radioactive substances, and the interpretation of the results.
The ‘measurement’ of dose often means the measurement of a dose
equivalent quantity as a proxy (i.e. substitute) for a dose quantity
that cannot be measured directly. Also, sampling may be involved as a
preliminary step to measurement of the content of radionuclides in
environmental media. The methodological and technical details of the
design and operation of monitoring programmes and systems for different
radionuclides, environmental media and types of facility are given in IAEA Safety Guide RS–G-1.8 and in IAEA Safety Report No. 64.
Microbiological
Bacteria and viruses
are the most commonly monitored groups of microbiological organisms, and
even these are only of great relevance where water in the aquatic
environment is subsequently used as drinking water or where water contact recreation such as swimming or canoeing is practised.
Although pathogens
are the primary focus of attention, the principal monitoring effort is
almost always directed at much more common indicator species such as Escherichia coli, supplemented by overall coliform bacteria counts. The rationale behind this monitoring strategy is that most human pathogens originate from other humans via the sewage stream. Many sewage treatment plants have no sterilisation final stage and therefore discharge an effluent
which, although having a clean appearance, still contains many millions
of bacteria per litre, the majority of which are relatively harmless
coliform bacteria. Counting the number of harmless (or less harmful)
sewage bacteria allows a judgement to be made about the probability of
significant numbers of pathogenic bacteria or viruses being present.
Where E. coli or coliform levels exceed pre-set trigger values, more intensive monitoring including specific monitoring for pathogenic species is then initiated.
Populations
Monitoring
strategies can produce misleading answers when relying on counts of
species, or on the presence or absence of particular organisms, if there is no
regard to population size. Understanding the population dynamics of an
organism being monitored is critical.
As an example, if presence or absence of a particular organism
within a 10 km square is the measure adopted by a monitoring strategy,
then a reduction in population from 10,000 per square to 10 per square
will go unnoticed, despite the very significant impact experienced by the
organism.
Monitoring programmes
All
scientifically reliable environmental monitoring is performed in line
with a published programme. The programme may include the overall
objectives of the organisation, references to the specific strategies
that help deliver the objectives, and details of specific projects or
tasks within those strategies. The key feature of any programme is the
listing of what is being monitored, how that monitoring is to take
place, and the time-scale over which it should all happen. Typically, and
often as an appendix, a monitoring programme will provide a table of
locations, dates and sampling methods that are proposed and which, if
undertaken in full, will deliver the published monitoring programme.
There are a number of commercial software
packages which can assist with the implementation of the programme,
monitor its progress and flag up inconsistencies or omissions but none
of these can provide the key building block which is the programme
itself.
Environmental monitoring data management systems
Given the multiple types and the increasing volume and importance of monitoring data, commercial software packages known as
Environmental Data Management Systems (EDMS) or E-MDMS are increasingly
in common use in regulated industries. They provide a means of managing
all monitoring data in a single central place. Quality validation,
compliance checking, verifying that all data has been received, and sending
alerts are generally automated. Typical interrogation functionality
enables comparison of data sets both temporally and spatially. They
will also generate regulatory and other reports.
Sampling methods
There is a wide range of sampling methods, which depend on the type of environment, the material being sampled and the subsequent analysis of the sample.
At its simplest a sample can be filling a clean bottle with river
water and submitting it for conventional chemical analysis. At the more
complex end, sample data may be produced by complex electronic sensing
devices taking sub-samples over fixed or variable time periods.
Judgmental sampling
In
judgmental sampling, the selection of sampling units (i.e., the number
and location and/or timing of collecting samples) is based on knowledge
of the feature or condition under investigation and on professional
judgment. Judgmental sampling is distinguished from probability-based
sampling in that inferences are based on professional judgment, not
statistical scientific theory. Therefore, conclusions about the target
population are limited and depend entirely on the validity and accuracy
of professional judgment; probabilistic statements about parameters are
not possible. As described in subsequent chapters, expert judgment may
also be used in conjunction with other sampling designs to produce
effective sampling for defensible decisions.
Simple random sampling
In
simple random sampling, particular sampling units (for example,
locations and/or times) are selected using random numbers, and all
possible selections of a given number of units are equally likely. For
example, a simple random sample of a set of drums can be taken by
numbering all the drums and randomly selecting numbers from that list or
by sampling an area by using pairs of random coordinates. This method
is easy to understand, and the equations for determining sample size are
relatively straightforward. An example is shown in Figure 2-2. This
figure illustrates a possible simple random sample for a square area of
soil. Simple random sampling is most useful when the population of
interest is relatively homogeneous; i.e., no major patterns of
contamination or “hot spots” are expected. The main advantages of this
design are:
It provides statistically unbiased estimates of the mean, proportions, and variability.
It is easy to understand and easy to implement.
Sample size calculations and data analysis are very straightforward.
In some cases, implementation of a simple random sample can be more
difficult than some other types of designs (for example, grid samples)
because of the difficulty of precisely identifying random geographic
locations. Additionally, simple random sampling can be more costly than
other plans if difficulties in obtaining samples due to location cause
an expenditure of extra effort.
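As a rough sketch of how such a design might be generated, assuming a rectangular site and uniformly random coordinates (the site dimensions and sample size are illustrative):

```python
import random

def simple_random_sample(width_m: float, height_m: float, n: int, seed: int = 0):
    """Return n (x, y) sampling locations drawn uniformly over a rectangular site."""
    rng = random.Random(seed)
    return [(rng.uniform(0, width_m), rng.uniform(0, height_m)) for _ in range(n)]

# Ten random locations over a hypothetical 100 m x 100 m square of soil.
for x, y in simple_random_sample(100.0, 100.0, n=10):
    print(f"sample at x = {x:.1f} m, y = {y:.1f} m")
```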
Stratified sampling
In stratified sampling,
the target population is separated into non-overlapping strata, or
subpopulations that are known or thought to be more homogeneous
(relative to the environmental medium or the contaminant), so that there
tends to be less variation among sampling units in the same stratum
than among sampling units in different strata. Strata may be chosen on
the basis of spatial or temporal proximity of the units, or on the basis
of preexisting information or professional judgment about the site or
process. Advantages of this sampling design are that it has potential
for achieving greater precision in estimates of the mean and variance,
and that it allows computation of reliable estimates for population
subgroups of special interest. Greater precision can be obtained if the
measurement of interest is strongly correlated with the variable used to
make the strata.
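A minimal sketch of drawing a fixed number of units from each stratum, with hypothetical stratum names and allocations, might look like this:

```python
import random

def stratified_sample(strata: dict, seed: int = 0) -> dict:
    """strata maps a stratum name to (candidate units, number of units to draw)."""
    rng = random.Random(seed)
    return {name: rng.sample(units, k) for name, (units, k) in strata.items()}

# Hypothetical strata: more units are allocated to the stratum expected to vary more.
strata = {
    "former lagoon": ([f"L{i}" for i in range(50)], 5),
    "undisturbed soil": ([f"U{i}" for i in range(200)], 3),
}
print(stratified_sample(strata))
```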
Systematic and grid sampling
In
systematic and grid sampling, samples are taken at regularly spaced
intervals over space or time. An initial location or time is chosen at
random, and then the remaining sampling locations are defined so that
all locations are at regular intervals over an area (grid) or time
(systematic). Examples of systematic grids include square,
rectangular, triangular, or radial grids (Cressie, 1993).
In random systematic sampling, an initial sampling
location (or time) is chosen at random and the remaining sampling sites
are specified so that they are located according to a regular pattern.
Random systematic sampling is used to search for hot spots and to infer
means, percentiles, or other parameters and is also useful for
estimating spatial patterns or trends over time. This design provides a
practical and easy method for designating sample locations and ensures
uniform coverage of a site, unit, or process.
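A sketch of placing a square grid with a random start over a hypothetical site (the dimensions and spacing below are illustrative assumptions):

```python
import random

def square_grid_sample(width_m: float, height_m: float, spacing_m: float, seed: int = 0):
    """Place sampling points on a square grid with a randomly chosen origin."""
    rng = random.Random(seed)
    x0 = rng.uniform(0, spacing_m)  # random start within the first grid cell
    y0 = rng.uniform(0, spacing_m)
    points, y = [], y0
    while y < height_m:
        x = x0
        while x < width_m:
            points.append((x, y))
            x += spacing_m
        y += spacing_m
    return points

# A hypothetical 100 m x 100 m site sampled on a 20 m square grid.
print(len(square_grid_sample(100.0, 100.0, spacing_m=20.0)), "grid locations")
```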
Ranked set sampling
Ranked set sampling is an innovative design that can be highly
useful and cost efficient in obtaining better estimates of mean
concentration levels in soil and other environmental media by explicitly
incorporating the professional judgment of a field investigator or a
field screening measurement method to pick specific sampling locations
in the field. Ranked set sampling uses a two-phase sampling design that
identifies sets of field locations, utilizes inexpensive measurements to
rank locations within each set, and then selects one location from each
set for sampling. In ranked set sampling, m sets (each of size r) of
field locations are identified using simple random sampling. The
locations are ranked independently within each set using professional
judgment or inexpensive, fast, or surrogate measurements. One sampling
unit from each set is then selected (based on the observed ranks) for
subsequent measurement using a more accurate and reliable (hence, more
expensive) method for the contaminant of interest. Relative to simple
random sampling, this design results in more representative samples and
so leads to more precise estimates of the population parameters. Ranked
set sampling is useful when the cost of locating and ranking locations
in the field is low compared to laboratory measurements. It is also
appropriate when an inexpensive auxiliary variable (based on expert
knowledge or measurement) is available to rank population units with
respect to the variable of interest. To use this design effectively, it
is important that the ranking method and analytical method are strongly
correlated.
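The two-phase logic can be sketched as follows; the cheap screening function here stands in for an inexpensive field measurement and simply returns a random value, so the example is illustrative only.

```python
import random

def cheap_screening(location, rng) -> float:
    """Stand-in for an inexpensive field screening measurement (random here)."""
    return rng.random()

def ranked_set_sample(population, m: int, seed: int = 0):
    """Choose m locations: the i-th ranked unit from the i-th randomly drawn set."""
    rng = random.Random(seed)
    chosen = []
    for i in range(m):
        field_set = rng.sample(population, m)  # one set of size r = m
        ranked = sorted(field_set, key=lambda loc: cheap_screening(loc, rng))
        chosen.append(ranked[i])  # keep the i-th ranked location for expensive analysis
    return chosen

population = [(x, y) for x in range(10) for y in range(10)]  # hypothetical grid of locations
print(ranked_set_sample(population, m=3))
```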
Adaptive cluster sampling
In adaptive cluster sampling,
samples are taken using simple random sampling, and additional samples
are taken at locations where measurements exceed some threshold value.
Several additional rounds of sampling and analysis may be needed.
Adaptive cluster sampling tracks the selection probabilities for later
phases of sampling so that an unbiased estimate of the population mean
can be calculated despite oversampling of certain areas. An example
application of adaptive cluster sampling is delineating the borders of a
plume of contamination. Adaptive sampling is useful for estimating or
searching for rare characteristics in a population and is appropriate
for inexpensive, rapid measurements. It enables delineating the
boundaries of hot spots, while also using all data collected with
appropriate weighting to give unbiased estimates of the population mean.
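A sketch of the idea on a grid of cells, with a simulated hot spot and an assumed threshold, is shown below; it omits the selection-probability bookkeeping needed for unbiased estimation.

```python
import random

def adaptive_cluster_sample(measure, grid_size: int, n_initial: int, threshold: float, seed: int = 0):
    """Sample cells at random, then add neighbours of any cell exceeding the threshold."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(grid_size) for y in range(grid_size)]
    to_visit = rng.sample(cells, n_initial)  # phase 1: simple random sample
    sampled = {}
    while to_visit:
        cell = to_visit.pop()
        if cell in sampled:
            continue
        sampled[cell] = measure(cell)
        if sampled[cell] > threshold:  # exceedance triggers sampling of neighbours
            x, y = cell
            for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if 0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size and nxt not in sampled:
                    to_visit.append(nxt)
    return sampled

# Hypothetical contamination field with a hot spot centred on cell (5, 5).
measure = lambda c: 100.0 if abs(c[0] - 5) + abs(c[1] - 5) <= 2 else 1.0
print(len(adaptive_cluster_sample(measure, grid_size=10, n_initial=8, threshold=50.0)), "cells sampled")
```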
Grab samples
Collecting a grab sample on a stream
Grab samples are samples taken of a homogeneous material, usually water, in a single vessel. Filling a clean bottle with river
water is a very common example. Grab samples provide a good snapshot
view of the quality of the sampled environment at the point of sampling
and at the time of sampling. Without additional monitoring, the results
cannot be extrapolated to other times or to other parts of the river, lake or ground-water.
In order to enable grab samples of rivers to be treated as representative, repeat transverse and longitudinal transect
surveys taken at different times of day and times of year are required
to establish that the grab-sample location is as representative as is
reasonably possible. For large rivers such surveys should also have
regard to the depth of the sample and how best to manage the sampling
locations at times of flood and drought.
In lakes grab samples are relatively simple to take using depth
samplers which can be lowered to a pre-determined depth and then closed,
trapping a fixed volume of water from the required depth. In all but the
shallowest lakes, there are major changes in the chemical composition
of lake water at different depths, especially during the summer months
when many lakes stratify into a warm, well oxygenated upper layer (epilimnion) and a cool de-oxygenated lower layer (hypolimnion).
In the open-sea marine environment, grab samples can establish a
wide range of base-line parameters such as salinity and a range of
cation and anion concentrations. However, where changing conditions are
an issue such as near river or sewage discharges, close to the effects
of volcanism or close to areas of freshwater input from melting ice, a
grab sample can only give a very partial answer when taken on its own.
Semi-continuous and continuous monitoring
An automated sampling station and data logger (to record temperature, specific conductance, and dissolved oxygen levels)
There is a wide range of specialized sampling equipment available
that can be programmed to take samples at fixed or variable time
intervals or in response to an external trigger. For example, a sampler
can be programmed to start taking samples of a river at 8-minute
intervals when the rainfall intensity rises above 1 mm / hour. The
trigger in this case may be a remote rain gauge communicating with the
sampler by using cell phone or meteor burst
technology. Samplers can also take individual discrete samples at each
sampling occasion or bulk samples up into composites, so that in the
course of one day such a sampler might produce 12 composite samples,
each composed of 6 sub-samples taken at 20-minute intervals.
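Combining the two examples above (a 1 mm/hour rainfall trigger and 20-minute sub-samples bulked into composites of six), a scheduling sketch with a simulated storm might look like this:

```python
from datetime import datetime, timedelta

def composite_schedule(start, rainfall_mm_per_hr, trigger=1.0,
                       subsample_interval_min=20, subsamples_per_composite=6):
    """Yield (composite number, sub-sample time) while rainfall exceeds the trigger."""
    t, n = start, 0
    while rainfall_mm_per_hr(t) > trigger:
        yield n // subsamples_per_composite, t
        n += 1
        t += timedelta(minutes=subsample_interval_min)

# Simulated storm: rainfall of 2.5 mm/hr from 08:00 until 12:00.
storm = lambda t: 2.5 if t < datetime(2024, 1, 1, 12, 0) else 0.0
for composite, when in composite_schedule(datetime(2024, 1, 1, 8, 0), storm):
    print(f"composite {composite}: sub-sample at {when:%H:%M}")
```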
Continuous or quasi-continuous monitoring involves having an
automated analytical facility close to the environment being monitored
so that results can, if required, be viewed in real time. Such systems
are often established to protect important water supplies such as in the
River Dee regulation system
but may also be part of an overall monitoring strategy on large
strategic rivers where early warning of potential problems is essential.
Such systems routinely provide data on parameters such as pH, dissolved oxygen, conductivity, turbidity and colour but it is also possible to operate gas liquid chromatography with mass spectrometry technologies (GLC/MS) to examine a wide range of potential organic
pollutants. In all examples of automated bank-side analysis there is a
requirement for water to be pumped from the river into the monitoring
station. Choosing a location for the pump inlet is equally as critical
as deciding on the location for a river grab sample. The pump and
pipework also require careful design to avoid artefacts being
introduced through the action of pumping the water. Dissolved oxygen
concentration is difficult to sustain through a pumped system and GLC/MS
facilities can detect micro-organic contaminants from the pipework and glands.
Remote surveillance
Although
on-site data collection using electronic measuring equipment is
common-place, many monitoring programmes also use remote surveillance
and remote access to data in real time. This requires the on-site
monitoring equipment to be connected to a base station via either a
telemetry network, land-line, cell phone network or other telemetry
system such as Meteor burst. The advantage of remote surveillance is
that many data feeds can come into a single base station for storing and
analysis. It also enables trigger levels or alert levels to be set for
individual monitoring sites and/or parameters so that immediate action
can be initiated if a trigger level is exceeded. The use of remote
surveillance also allows for the installation of very discreet
monitoring equipment which can often be buried, camouflaged or tethered
at depth in a lake or river with only a short whip aerial protruding. Use of such equipment tends to reduce vandalism and theft when monitoring in locations easily accessible by the public.
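A base-station alerting sketch, with invented site names, parameters and trigger levels, might look like the following:

```python
# Trigger levels per (site, parameter); names and values are invented.
TRIGGER_LEVELS = {
    ("intake_1", "turbidity_NTU"): 10.0,
    ("intake_1", "dissolved_oxygen_mg_l"): 5.0,  # alert when oxygen falls below this
}

def check_reading(site: str, parameter: str, value: float) -> bool:
    """Return True if this reading should raise an alert at the base station."""
    threshold = TRIGGER_LEVELS.get((site, parameter))
    if threshold is None:
        return False
    if parameter.startswith("dissolved_oxygen"):
        return value < threshold  # low dissolved oxygen is the problem
    return value > threshold      # otherwise high readings trigger the alert

print(check_reading("intake_1", "turbidity_NTU", 14.2))  # True: operator alerted
```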
Remote sensing
Environmental remote sensing uses aircraft or satellites to monitor the environment using multi-channel sensors.
There are two kinds of remote sensing. Passive sensors detect
natural radiation that is emitted or reflected by the object or
surrounding area being observed. Reflected sunlight is the most common
source of radiation measured by passive sensors and in environmental
remote sensing, the sensors used are tuned to specific wavelengths from
far infrared through visible light frequencies to the far ultraviolet.
The volumes of data that can be collected are very large and require
dedicated computational support. The outputs of data analysis from remote
sensing are false-colour images which differentiate small differences
in the radiation characteristics of the environment being monitored.
With a skilful operator choosing specific channels it is possible to
amplify differences which are imperceptible to the human eye. In
particular it is possible to discriminate subtle changes in chlorophyll a and chlorophyll b concentrations in plants and show areas of an environment with slightly different nutrient regimes.
Active remote sensing emits energy and uses a passive sensor to
detect and measure the radiation that is reflected or backscattered from
the target. LIDAR
is often used to acquire information about the topography of an area,
especially when the area is large and manual surveying would be
prohibitively expensive or difficult.
Remote sensing makes it possible to collect data on dangerous or
inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, the effects of climate change on glaciers and Arctic and Antarctic regions, and depth sounding of coastal and ocean waters.
Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum,
which in conjunction with larger scale aerial or ground-based sensing
and analysis, provides information to monitor trends such as El Niño and other natural long and short term phenomena. Other uses include different areas of the earth sciences such as natural resource management, land use planning and conservation.
Bio-monitoring
The use of living organisms as monitoring tools has many advantages.
Organisms living in the environment under study are constantly exposed
to the physical, biological and chemical influences of that
environment. Organisms that have a tendency to accumulate chemical species can often accumulate significant quantities of material from very low concentrations in the environment. Mosses have been used by many investigators to monitor heavy metal concentrations because of their tendency to selectively adsorb heavy metals.
Similarly, eels have been used to study halogenated organic chemicals, as these are absorbed into the fatty deposits within the eel.
Other sampling methods
Ecological
sampling requires careful planning to be representative and as
noninvasive as possible. For grasslands and other low-growing habitats,
a quadrat – a 1-metre square frame – is often used, with the numbers and types of organisms growing within each quadrat area counted.
Sediments and soils
require specialist sampling tools to ensure that the material recovered
is representative. Such samplers are frequently designed to recover a
specified volume of material and may also be designed to recover the
living biota of the sediment or soil as well, such as the Ekman grab sampler.
Data interpretations
The
interpretation of environmental data produced from a well designed
monitoring programme is a large and complex topic addressed by many
publications. Regrettably it is sometimes the case that scientists
approach the analysis of results with a pre-conceived outcome in mind
and use or misuse statistics to demonstrate that their own particular
point of view is correct.
Statistics remains a tool that is equally easy to use or to
misuse to demonstrate the lessons learnt from environmental monitoring.
Environmental quality indices
Since
the start of science-based environmental monitoring, a number of
quality indices have been devised to help classify and clarify the
meaning of the considerable volumes of data involved. Stating that a
river stretch is in "Class B" is likely to be much more informative than
stating that this river stretch has a mean BOD of 4.2, a mean dissolved
oxygen of 85%, etc. In the UK the Environment Agency
formerly employed a system called General Quality Assessment (GQA)
which classified rivers into six quality letter bands from A to F based
on chemical criteria and on biological criteria.
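The idea of collapsing several measurements into a single letter band can be sketched as follows; the band boundaries here are invented for illustration and are not the actual GQA criteria.

```python
def quality_band(mean_bod_mg_l: float, mean_do_percent: float) -> str:
    """Return an illustrative A-F band from mean BOD and dissolved oxygen."""
    if mean_bod_mg_l <= 2.5 and mean_do_percent >= 80:
        return "A"
    if mean_bod_mg_l <= 4.0 and mean_do_percent >= 70:
        return "B"
    if mean_bod_mg_l <= 6.0 and mean_do_percent >= 60:
        return "C"
    if mean_bod_mg_l <= 8.0 and mean_do_percent >= 50:
        return "D"
    if mean_bod_mg_l <= 15.0:
        return "E"
    return "F"

print(quality_band(mean_bod_mg_l=4.2, mean_do_percent=85))  # "C" under these invented cut-offs
```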
The Environment Agency and its devolved partners in Wales (Countryside
Council for Wales, CCW) and Scotland (Scottish Environmental Protection
Agency, SEPA) now employ a system of biological, chemical and physical
classification for rivers and lakes that corresponds with the EU Water
Framework Directive.
Environmental standards are administrative regulations or civil law rules
implemented for the treatment and maintenance of the environment.
Environmental standards are set by a government and can include
prohibition of specific activities, mandating the frequency and methods
of monitoring, and requiring permits for the use of land or water. Standards differ depending on the type of environmental activity.
Environmental standards produce quantifiable and enforceable laws
that promote environmental protection. The basis for the standards is
determined by scientific opinions from varying disciplines, the views of
the general population, and social context. As a result, the process of
determining and implementing the standards is complex and is usually
set within legal, administrative or private contexts.
The human environment is distinct from the natural environment.
The concept of the human environment considers that humans are
permanently interlinked with their surroundings, which are not just the
natural elements (air, water, and soil), but also culture,
communication, co-operation, and institutions. Environmental standards
should preserve nature and the environment, protect against damages, and
repair past damage caused by human activity.
Development of environmental standards
Historically, the development of environmental standards was influenced by two competing ideologies: ecocentrism and anthropocentrism.
Ecocentrism frames the environment as having an intrinsic value
divorced from human utility, while anthropocentrism frames the
environment as only having value if it helps humanity survive. This has
led to problems in establishing standards.
Within the past few decades, public sensitivity towards environmentalism
has increased. In turn, the demand for protecting the environment has
risen. This movement towards environmentalism was likely caused by the
increased understanding of medicine and science, as well as advances in
the measurement of factors contributing to environmental damage. This
improved measurement allows scientists to further understand the impact
of human-caused environmental destruction on human health
and the biodiversity which composes the natural environment. These
developments in science have been fundamental for the setting of
environmental standards.
Environmental standards often define the desired state (e.g. the
pH of a lake should be between 6.5 and 7.5) or limit alterations (e.g.,
no more than 50% of the natural forest may be damaged). Statistical
methods are used to determine the specific states and limits of the
enforceable environmental standard.
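A sketch of checking measurements against the two kinds of standard just described is shown below; the pH range and the forest-damage limit mirror the examples above, while the measured values are hypothetical.

```python
def lake_ph_compliant(ph: float, low: float = 6.5, high: float = 7.5) -> bool:
    """Desired-state standard: the pH must lie inside the permitted range."""
    return low <= ph <= high

def forest_compliant(damaged_ha: float, total_ha: float, max_fraction: float = 0.5) -> bool:
    """Alteration-limit standard: no more than 50% of the natural forest may be damaged."""
    return damaged_ha / total_ha <= max_fraction

print(lake_ph_compliant(7.1))           # True: within the 6.5-7.5 range
print(forest_compliant(620.0, 1000.0))  # False: 62% exceeds the 50% limit
```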
Where environmental issues are concerned, uncertainties should
always be taken into consideration. The first step to developing a
standard is the evaluation of the specific risk. The expected value of
the occurrence of the risk must be calculated. Then, possible damage
should be classified. Three different types of damage exist: changes
due to physiochemical environmental damage, ecological damage to plants and animals, and damage to human health.
To establish an acceptable risk, in view of the expected
collective benefit, the risk-induced costs and the costs of risk
avoidance must be socially balanced. The comparison is difficult to
express in monetary units. Furthermore, the risks have multiple
dimensions, which must be reconciled at the end of the
balancing process.
In the balancing process, the following steps should be considered:
Establishing objectives that serve both the protection of life,
health and the environment, and allow a rational allocation of social
resources.
Studying the possible outcomes of implementing these objectives.
Considering social costs or damages, including opportunity costs and
benefits which will arise when any of the available options are not
further pursued.
The balancing process should also take into account the fairness of
the distribution of risks and resilience with respect to sustaining the
productivity of the environment. In addition to the standard, an
implementation rule, indicating under what circumstances the standard
will be considered violated, is commonly part of the regulations.
Penalties and other procedures for dealing with regions out of
compliance with the standard may be part of the legislation.
Environmental
standards are set by many different institutions, and most of the
standards continue to be based on the principle of voluntary
self-commitment.
United Nations (UN)
The UN,
with 193 member states, is the largest intergovernmental organization.
The environmental policy of the UN has a huge impact on the setting of
international environmental standards. At the Earth Summit
held in Rio in 1992, the member states acknowledged their negative
impact on the environment for the first time. During this and the
following Millennium Declaration, the first development goals for environmental issues were set.
Since then, the risk of catastrophe caused by extreme weather has been increased by the overuse of natural resources and global warming. In 2015, the UN adopted 17 Goals
for sustainable development. Besides the fight against global poverty,
the main focus of the goals is the preservation of our planet. These
goals set a baseline for global environmentalism. The environmental areas of water, energy, oceans, ecosystems, sustainable production, consumer behavior and climate protection are covered by the goals. The goals include explanations of the means required to reach them.
Whether the member states fulfill the agreed goals is
questionable. Some members perceive inspection or any other control from
external parties as an intervention in their internal affairs. For this
reason, implementation and follow-up are controlled only through Voluntary National Reviews. The main control is exercised through statistical values, called indicators, which show whether the goals are being reached.
European Union
Within the Treaty on the Functioning of the European Union,
the Union integrates a self-commitment towards the environment. In
Title XX, Article 191.1, it is stated: “Union policy on the environment
shall contribute to the pursuit of the following objectives: —
preserving, protecting and improving the quality of the environment, —
protecting human health, — prudent and rational utilization of natural
resources, — promoting measures at international level to deal with
regional or worldwide environmental problems, and in particular
combating climate change.” All environmental actions are based on this
article and lead to a suite of environmental laws. European
environmental regulation covers air, biotechnology, chemicals, climate
change, environmental economics, health, industry and technology, land
use, nature and biodiversity, noise, protection of the ozone layer, soil, sustainable development, waste, and water.
The environmental standards set by European legislation include
precise parametric concentrations of pollutants and also include target
environmental concentrations to be achieved by specific dates.
United States
In the United States,
the development of standards is decentralized. These standards were
developed by more than a hundred different institutions, many of which
are private. The method of handling environmental standards is a partly
fragmented plural system, which is mainly affected by the market. Under
the Trump Administration, climate standards have increasingly become a scene of conflict in the politics of global warming.
States may set their own ambient standards, so long as they are stricter than (i.e., set lower pollutant levels than) the national standard. The NAAQS regulate six criteria air pollutants: sulfur dioxide (SO2), particulate matter (PM10), carbon monoxide (CO), ozone (O3), nitrogen dioxide (NO2), and lead (Pb).
To ensure that the ambient standards are met, the EPA uses the Federal
Reference Method (FRM) and Federal Equivalent Method (FEM) systems to
measure the levels of pollutants in the air and check that they are
within the legal limits.
Air emission standards
Emission
standards are national regulations managed by the EPA that control the
amount and concentration of pollutants that can be released into the
atmosphere to maintain air quality, human health, and regulate the
release of greenhouse gases such as carbon dioxide (CO2), oxides of nitrogen and oxides of sulfur.
The standards are established in two phases to stay up-to-date,
with final projections aiming to collectively save Americans $1.7
trillion in fuel costs and reduce the amount of greenhouse gas emissions (GHG) by 6 billion metric tons.
Similar to the ambient standards, individual states may also tighten
regulations. For example, California set its own emissions standards
through the California Air Resources Board (CARB), and these standards have been adopted by some other states. Emission standards also regulate the amount of pollutants released by heavy industry and by electricity generation.
The technological standards set by the EPA do not necessarily
enforce the use of specific technologies, but set minimum performance
levels for different industries.
The EPA often encourages technological improvement by setting standards
that are not achievable with current technologies. These standards are
always set based on the industry's top performers to promote the overall
improvement of the industry as a whole.
Impact of non-governmental organizations on environmental standards
International Organization for Standardization
The International Organization for Standardization
(ISO) develops a large number of voluntary standards. With 163 national
member bodies, it has a comprehensive outreach. The standards set by ISO
are often adopted as national standards by different nations.
About 363,000 companies and organizations worldwide hold ISO 14001 certification,
a standard for environmental management created to improve an
organization's environmental performance and legal compliance as well
as its achievement of environmental aims. Most national and international
environmental management standards build on the ISO 14000 series. In light of the UN Sustainable Development Goals, ISO has identified several families of standards which help meet SDG 13, which is focused on climate action against global warming.
Greenpeace
Greenpeace
is a popular non-governmental organization that deals with biodiversity
and the environment. Their activities have had a great global impact on
environmental issues. Greenpeace encourages public attention and
pressures governments and companies to adopt and set environmental
standards through campaigns highlighting specific environmental issues.
Their main focus is on forests, the sea, climate change, and toxic
chemicals. For example, the organization set a standard on toxic
chemicals together with the textiles sector, creating a 2020 concept
that aimed to banish all toxic chemicals from textile production by
2020.
World Wildlife Fund
The World Wildlife Fund
(WWF) focuses on how to produce the maximum yield in agriculture while
conserving biodiversity. It works through education, protection, policy
changes, and incentives to achieve these goals.
Economy
Environmental
standards in the economy are set through external motivation. First,
companies need to fulfill the environmental law of the countries in
which they operate. Moreover, environmental standards are based on
voluntary self-commitment which means companies implement standards for
their business. These standards should exceed the level of the
requirements of governmental regulations. Companies that set
further-reaching standards are usually trying to fulfill the wishes of stakeholders.
In the process of setting environmental standards, three
different stakeholders have the main influence. The first stakeholder,
the government, is the strongest determinant, followed by the
influence of customers. Nowadays, an increasing number of
people consider environmental factors in their purchasing decisions. The third stakeholder forcing companies to set environmental standards is other industry participants.
If companies are part of industrial networks, they are forced to
fulfill the codes of conduct of these networks. This code of conduct is
often set to improve the collective reputation of an industry. Another
driving force for industry participants could be a reaction to a
competitor's actions.
The environmental standards set by companies themselves can be
divided into two dimensions: operational environmental policies and the
message sent in advertising and public communications.
Operational environmental policies
These can include environmental management systems, audits, controls, or technologies. In this dimension, the regulations tend to be closely connected with other functional areas, e.g. lean production. Furthermore, multinational companies
tend to harmonize their environmental policies across countries
and therefore reach a higher performance level of
environmental standards.
It is often argued that companies focus on the second dimension: the message sent in advertising
and public communications. To satisfy stakeholders' requirements,
companies often focus on the public impression of their environmental
self-commitment standards, while the real implementation often plays a
less important role.
Many companies assign the responsibility for the
implementation to low-budget departments. The workers in
charge of the standards often lack the time and financial resources to
guarantee real implementation. Furthermore, goal conflicts arise during
implementation. The biggest concern of companies is that
environmental protection is more expensive than the
benefits gained. However, there are many positive
cost-benefit calculations for environmental standards set by companies
themselves. It is observed that companies often set environmental
standards after a public crisis. Sometimes environmental standards are
set by companies pre-emptively to avoid public crises. Whether
environmental self-commitment standards are effective is controversial.
Emission standards are the legal requirements governing air pollutants released into the atmosphere. Emission standards set quantitative limits on the permissible amount of specific air pollutants
that may be released from specific sources over specific timeframes.
They are generally designed to achieve air quality standards and to
protect human life.
Regulated sources
Many emissions standards focus on regulating pollutants released by automobiles (motor cars) and other powered vehicles. Others regulate emissions from industry, power plants, small equipment such as lawn mowers and diesel generators, and other sources of air pollution.
The first automobile emissions standards were enacted in 1963 in the United States, mainly as a response to Los Angeles' smog problems.
Three years later Japan enacted their first emissions rules, followed
between 1970 and 1972 by Canada, Australia, and several European
nations. The early standards mainly concerned carbon monoxide (CO) and hydrocarbons (HC). Regulations on nitrogen oxide emissions (NOx) were introduced in the United States, Japan, and Canada in 1973 and 1974, with Sweden following in 1976 and the European Economic Community in 1977. These standards gradually grew more and more stringent but have never been unified.
There are three main sets of standards: United States, Japanese, and European, with most other markets using one of these as their base.
Sweden, Switzerland, and Australia had separate emissions standards for
many years but have since adopted the European standards. India, China,
and other newer markets have also begun enforcing vehicle emissions
standards (derived from the European requirements) in the twenty-first
century, as growing vehicle fleets have given rise to severe air quality
problems there, too.
Vehicle emission performance standard
An emission performance standard is a limit that sets thresholds above which a different type of vehicle emissions control technology might be needed. While emission performance standards have been used to dictate limits for conventional pollutants such as oxides of nitrogen and oxides of sulphur (NOx and SOx),[3] this regulatory technique may also be used to regulate greenhouse gases, particularly carbon dioxide (CO2). In the US, such limits are expressed in pounds of carbon dioxide per megawatt-hour (lb CO2/MWh), and in kilograms of CO2 per megawatt-hour (kg CO2/MWh) elsewhere.
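The two units above differ only by the pound-to-kilogram conversion. A minimal sketch, assuming only the standard conversion factor (1 kg is about 2.20462 lb); the 1,000 lb CO2/MWh figure is a hypothetical example, not a quoted limit.

# Convert a CO2 emission performance standard from lb CO2/MWh to kg CO2/MWh.
LBS_PER_KG = 2.20462

def lbs_per_mwh_to_kg_per_mwh(lbs_per_mwh: float) -> float:
    return lbs_per_mwh / LBS_PER_KG

# Example: a hypothetical 1,000 lb CO2/MWh limit is roughly 454 kg CO2/MWh.
print(round(lbs_per_mwh_to_kg_per_mwh(1000), 1))  # 453.6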
North America
Canada
In Canada,
the Canadian Environmental Protection Act, 1999 (CEPA 1999) transfers
the legislative authority for regulating emissions from on-road vehicles
and engines to Environment Canada
from Transport Canada's Motor Vehicle Safety Act.
The Regulations align emission standards with the U.S. federal standards
and apply to light-duty vehicles (e.g., passenger cars), light-duty
trucks (e.g., vans, pickup trucks, sport utility vehicles), heavy-duty
vehicles (e.g., trucks and buses), heavy-duty engines and motorcycles.
United States of America
The United States has its own set of emissions standards that all new vehicles must meet. In the United States, emissions standards are managed by the Environmental Protection Agency (EPA). Under federal law, the state of California
is allowed to promulgate more stringent vehicle emissions standards
(subject to EPA approval), and other states may choose to follow either
the national or California standards. California had established air quality standards before the EPA was created, prompted by severe air quality problems in the Los Angeles metropolitan area. Los Angeles is the country's second-largest city, relies much more heavily on automobiles, and has less favorable meteorological conditions than the largest and third-largest cities (New York and Chicago).
California's emissions standards are set by the California Air Resources Board, known locally by its acronym "CARB". By mid-2009, 16 other states had adopted CARB rules;
given the size of the California market plus these other states, many
manufacturers choose to build to the CARB standard when selling in all
50 states. CARB's policies have also influenced EU emissions standards.
California is attempting to regulate greenhouse gas
emissions from automobiles, but faces a court challenge from the
federal government. The states are also attempting to compel the
federal EPA to regulate greenhouse gas emissions, which as of 2007 it had declined to do. On May 19, 2009, news reports indicated that the federal EPA would largely adopt California's standards on greenhouse gas emissions.
California and several other western states have passed bills requiring performance-based regulation of greenhouse gases from electricity generation.
The California ARB standard for light vehicle emissions is a
regulation of equipment first, with verification of emissions second.
The vehicle's owner is not permitted to modify, improve, or innovate in order to pass a true emissions-only standard set for their vehicle. Therefore, California's attempt at regulation of
emissions is a regulation of equipment, not of air quality. This form of
regulation prevents vehicle modifications that may assist in cheating
emissions tests, but it also prevents grassroots or creative individuals
from participating in the math, science, and engineering that could
lead to breakthroughs in this area of research. They are wholly excluded
from modifying their property in any way that has not been extensively
researched and approved by CARB.
Europe
Before the European Union began streamlining emissions standards, there were several differing sets of rules. Members of the European Economic Community (EEC) had a unified set of rules, considerably more lax than those of the United States or Japan. These were tightened gradually, beginning with cars of over two liters displacement, as the price increase would have less of an impact in this segment.
The ECE 15/05 norms (also known as the Luxembourg accord, strict enough
to essentially require catalytic converters) began taking effect
gradually: the initial step applied to cars of over 2000 cc in two
stages, in October 1988 and October 1989.
There followed cars between 1.4 and 2.0 liters, in October 1991 and
then October 1993. Cars of under 1400 cc had to meet two subsequent sets
of regulations that applied in October 1992 and October 1994
respectively.
French and Italian car manufacturers, strongly represented in the small
car category, had been lobbying heavily against these regulations
throughout the 1980s.
Within the EEC, Germany was a leader in regulating automobile
emissions. Germany gave financial incentives to buyers of cars that met
US or ECE standards, with lesser credits available to those that
partially fulfilled the requirements. These incentives had a strong
impact; only 6.5 percent of new cars registered in Germany in 1988 did
not meet any emissions requirements and 67.3 percent were compliant with
the strictest US or ECE standards.
Sweden was one of the first countries to institute stricter rules (for 1975), which severely limited the number of vehicle models available there. These standards also caused drivability problems and steeply increased fuel consumption, in part because manufacturers could not justify the expenditure of meeting specific regulations that applied only in one very small market. In 1982, the European Community calculated that the Swedish standards increased fuel consumption by 9 percent while making cars 2.5 percent more expensive.
For 1983 Switzerland (and then Australia) joined in the same set of
regulations, which gradually increased the number of certified engines.
One problem with the strict standards was that they did not account for
catalyzed engines, meaning that vehicles thus equipped had to have the
catalytic converters removed before they could be legally registered.
In 1985 the first catalyzed cars entered certain European markets
such as Germany. At first, the availability of unleaded petrol was
limited and sales were small. In Sweden, catalyzed vehicles became
allowed in 1987, benefitting from a tax rebate to boost sales.
By 1989 the Swiss/Swedish emissions rules were tightened to the point
that non-catalyzed cars were no longer able to be sold. In early 1989
the BMW Z1
was introduced, only available with catalyzed engines. This was a
problem in some places like Portugal, where unleaded fuel was still
almost non-existent, although European standards required unleaded
gasoline to be "available" in every country by 1 October 1989.
European Union
The European Union has its own set of emissions standards that all
new vehicles must meet. Currently, standards are set for all road
vehicles, trains, barges and 'nonroad mobile machinery' (such as
tractors). No standards apply to seagoing ships or airplanes.
EU Regulation No 443/2009 sets an average CO2
emissions target for new passenger cars of 130 grams per kilometre. The
target was gradually phased in between 2012 and 2015. A target of 95
grams per kilometre will apply from 2021.
For light commercial vehicles, an emissions target of 175 g/km applies from 2017, and 147 g/km from 2020, a reduction of 16%.
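As a rough illustration, the sketch below reproduces the arithmetic behind the light-commercial-vehicle figures quoted above (175 g/km to 147 g/km is a 16% reduction) and shows a simple, unweighted fleet-average check against the 95 g/km passenger-car target. It ignores the phase-in, weighting, and derogation rules of the actual regulation, and the example fleet values are made up.

def percent_reduction(old: float, new: float) -> float:
    # Relative reduction, expressed as a percentage of the old value.
    return (old - new) / old * 100

print(round(percent_reduction(175, 147)))  # 16

def fleet_average_co2(emissions_g_per_km: list[float]) -> float:
    # Simple (unweighted) average of per-vehicle CO2 figures in g/km.
    return sum(emissions_g_per_km) / len(emissions_g_per_km)

# Hypothetical fleet of three models checked against the 95 g/km target.
fleet = [88.0, 96.5, 101.0]
print(fleet_average_co2(fleet) <= 95)  # False (average is about 95.2 g/km)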
The EU introduced Euro 4 effective January 1, 2008, Euro 5
effective January 1, 2010 and Euro 6 effective January 1, 2014. These
dates had been postponed for two years to give oil refineries the
opportunity to modernize their plants.
UK
Several local authorities
in the UK have introduced Euro 4 or Euro 5 emissions standards for
taxis and licensed private hire vehicles to operate in their area.
Emissions tests on diesel cars have not been carried out during MOTs in Northern Ireland for 12 years, despite being legally required.
Germany
According to the German federal automotive office, 37.3% (15.4 million) of the cars in Germany (total car population 41.3 million) conformed to the Euro 4 standard as of January 2009.
Asia
China
Due to rapidly expanding wealth and prosperity, the number of coal power plants
and cars on China's roads is rapidly growing, creating an ongoing
pollution problem. China enacted its first emissions controls on
automobiles in 2000, equivalent to Euro I standards. China's State
Environmental Protection Administration (SEPA) upgraded emission
controls again on July 1, 2004 to the Euro II standard. More stringent emission standard, National Standard III, equivalent to Euro III standards, went into effect on July 1, 2007.
Plans were for Euro IV standards to take effect in 2010. Beijing
introduced the Euro IV standard in advance on January 1, 2008, becoming
the first city in mainland China to adopt this standard.
Hong Kong
From January 1, 2006, all new passenger cars with spark-ignition engines in Hong Kong must meet either the Euro IV petrol standard, the Japanese Heisei 17 standard, or the US EPA Tier 2 Bin 5 standard. New passenger cars with compression-ignition engines must meet the US EPA Tier 2 Bin 5 standard.
India
Bharat stage emission standards are emission standards instituted by
the Government of India to regulate the output of air pollutants from
internal combustion engine equipment, including motor vehicles. The
standards and the timeline for implementation are set by the Central
Pollution Control Board under the Ministry of Environment & Forests.
The standards, based on European regulations, were first introduced in 2000. Progressively more stringent norms have been rolled out since then, and all new vehicles manufactured after the implementation of a norm must comply with it. By 2014, the country was under a combination of Euro 3- and Euro 4-based norms, with Euro 4 standards partly implemented in 13 major cities. Since April 2017, the entire country has been under BS IV norms, which are based on Euro 4.
Currently, manufacture and registration of BS IV vehicles continues; from April 2020, all new vehicles will be required to comply with BS VI norms.
Japan
Background
Starting June 10, 1968, the Japanese Government passed the Air Pollution Control Act, which regulated all sources of air pollutants. As a result of the 1968 law, dispute resolution procedures were established under the 1970 Air Pollution Dispute Resolution Act.
As a result of the 1970 law, in 1973 the first installment of four sets
of new emissions standards were introduced. Interim standards were
introduced on January 1, 1975 and again for 1976. The final set of
standards were introduced for 1978.
Although the standards were introduced, they were not made immediately mandatory; instead, tax breaks were offered for cars which passed them. The standards were based on those adopted by the original US Clean Air Act of 1970, but the test cycle included more slow city driving to better reflect Japanese conditions. The 1978 limits for mean emissions during a "Hot Start Test" were 2.1 grams per kilometre (3.38 g/mi) of CO, 0.25 grams per kilometre (0.40 g/mi) of HC, and 0.25 grams per kilometre (0.40 g/mi) of NOx. The maximum limits were 2.7 grams per kilometre (4.35 g/mi) of CO, 0.39 grams per kilometre (0.63 g/mi) of HC, and 0.48 grams per kilometre (0.77 g/mi) of NOx. The "10-15 Mode Hot Cycle" test, used to determine individual fuel economy ratings and the emissions observed from the vehicle being tested, follows a specific testing regime.
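The paired g/km and g/mi figures above differ only by the kilometre-to-mile conversion. A small sketch, assuming only that 1 mile = 1.60934 km, reproduces the imperial values from the metric mean limits.

# Conversion factor: kilometres per mile.
KM_PER_MILE = 1.60934

def g_per_km_to_g_per_mi(g_per_km: float) -> float:
    return g_per_km * KM_PER_MILE

mean_limits_g_per_km = {"CO": 2.1, "HC": 0.25, "NOx": 0.25}
for pollutant, limit in mean_limits_g_per_km.items():
    print(pollutant, round(g_per_km_to_g_per_mi(limit), 2), "g/mi")
# CO 3.38 g/mi, HC 0.4 g/mi, NOx 0.4 g/mi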
In 1992, to cope with NOx pollution problems from existing
vehicle fleets in highly populated metropolitan areas, the Ministry of
the Environment adopted the "Law Concerning Special Measures to Reduce the Total Amount of Nitrogen Oxides Emitted from Motor Vehicles in Specified Areas", called in short the Motor Vehicle NOx Law. The regulation designated a
total of 196 communities in the Tokyo, Saitama, Kanagawa, Osaka and
Hyogo Prefectures as areas with significant air pollution due to
nitrogen oxides emitted from motor vehicles. Under the Law, several
measures had to be taken to control NOx from in-use vehicles, including
enforcing emission standards for specified vehicle categories.
The regulation was amended in June 2001 to tighten the existing
NOx requirements and to add PM control provisions. The amended rule is
called the "Law Concerning Special Measures to Reduce the Total Amount
of Nitrogen Oxides and Particulate Matter Emitted from Motor Vehicles in
Specified Areas", or in short the Automotive NOx and PM Law.
Emission Standards
The NOx and PM Law introduces emission standards for specified
categories of in-use highway vehicles including commercial goods (cargo)
vehicles such as trucks and vans, buses, and special purpose motor
vehicles, irrespective of the fuel type. The regulation also applies to
diesel powered passenger cars (but not to gasoline cars).
In-use vehicles in the specified categories must meet 1997/98
emission standards for the respective new vehicle type (in the case of
heavy duty engines NOx = 4.5 g/kWh, PM = 0.25 g/kWh). In other words,
the 1997/98 new vehicle standards are retroactively applied to older
vehicles already on the road. Vehicle owners have two methods to comply:
Replace old vehicles with newer, cleaner models
Retrofit old vehicles with approved NOx and PM control devices
Vehicles have a grace period, between 8 and 12 years from the initial
registration, to comply. The grace period depends on the vehicle type,
as follows:
Light commercial vehicles (GVW ≤ 2500 kg): 8 years
Heavy commercial vehicles (GVW > 2500 kg): 9 years
Micro buses (11-29 seats): 10 years
Large buses (≥ 30 seats): 12 years
Special vehicles (based on a cargo truck or bus): 10 years
Diesel passenger cars: 9 years
Furthermore, the regulation allows fulfillment of its requirements to
be postponed by an additional 0.5-2.5 years, depending on the age of
the vehicle. This delay was introduced in part to harmonize the NOx and
PM Law with the Tokyo diesel retrofit program.
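As an illustrative sketch only (not an official calculator), the following applies the grace periods listed above: the compliance deadline is the initial registration year plus the grace period for the vehicle type, plus any postponement of 0.5 to 2.5 years. The category labels and the example postponement value are assumptions for the example.

# Grace periods (years) by vehicle category, as listed above.
GRACE_PERIOD_YEARS = {
    "light commercial (GVW <= 2500 kg)": 8,
    "heavy commercial (GVW > 2500 kg)": 9,
    "micro bus (11-29 seats)": 10,
    "large bus (>= 30 seats)": 12,
    "special vehicle": 10,
    "diesel passenger car": 9,
}

def compliance_deadline(registration_year: float, vehicle_type: str,
                        postponement_years: float = 0.0) -> float:
    # Year by which the vehicle must comply (be replaced or retrofitted).
    return registration_year + GRACE_PERIOD_YEARS[vehicle_type] + postponement_years

# Example: a diesel passenger car first registered in 1995, with a
# hypothetical 1.5-year postponement, must comply by mid-2005.
print(compliance_deadline(1995, "diesel passenger car", 1.5))  # 2005.5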
The NOx and PM Law is enforced in connection with Japanese
vehicle inspection program, where non-complying vehicles cannot undergo
the inspection in the designated areas. This, in turn, may trigger an
injunction on the vehicle operation under the Road Transport Vehicle
Law.
Israel
Since January 2012 vehicles which do not comply with Euro 6 emission values are not allowed to be imported to Israel.
Turkey
Diesel and gasoline sulphur content is regulated at 10 ppm. Turkey currently follows Euro VI for heavy-duty commercial vehicles, and in 2016, a couple of years after the EU, adopted Euro 6 for new types of light-duty vehicles (LDVs) and new types of passenger cars. Turkey is planning to use the Worldwide harmonized light vehicles test procedure (WLTP).
However, despite these tailpipe emission standards for new vehicle types, there are many older diesel vehicles, no low-emission zones, and no national limit on PM2.5 particulates, so local pollution, including from older vehicles, is still a major health risk in some cities, such as Ankara. Concentrations of PM2.5 average 41 µg/m3 in Turkey, making it the country with the worst air pollution in Europe. The regulation for testing of existing vehicle exhaust gases is Official Newspaper number 30004, published 11 March 2017.
An average of 135 g CO2/km for LDVs compared well with other countries in 2015; however, unlike the EU, Turkey has no limit on carbon dioxide emissions.
Africa
South Africa
South Africa's first clean fuels programme was implemented in 2006 with the banning of lead from petrol and the reduction of sulphur levels in diesel from 3,000 parts per million (ppm) to 500 ppm, along with a niche grade of 50 ppm.
The Clean Fuels 2 standard, expected to begin in 2017, includes the reduction of sulphur to 10 ppm; the lowering of benzene from 5 percent to 1 percent of volume; the reduction of aromatics from 50 percent to 35 percent of volume; and the specification of olefins at 18 percent of volume.
Oceania
Australia
Australian
emission standards are based on European regulations for light-duty and
heavy-duty (heavy goods) vehicles, with acceptance of selected US and
Japanese standards. The current policy is to fully harmonize Australian
regulations with United Nations (UN) and Economic Commission for Europe
(ECE) standards. In November 2013, the first stage of the stringent Euro
5 emission standards for light vehicles was introduced, which includes
cars and light commercial vehicles.
The development of emission standards for highway vehicles and engines
is coordinated by the National Transport Commission (NTC) and the
regulations—Australian Design Rules (ADR)—are administered by the
Department of Infrastructure and Transport.
All new vehicles manufactured or sold in the country must comply
with the standards, which are tested by running the vehicle or engine in
a standardized test cycle.