
Tuesday, November 5, 2019

Ecological footprint

From Wikipedia, the free encyclopedia
 
World map of countries by their raw ecological footprint, relative to the world average biocapacity (2007).
 
National ecological surplus or deficit, measured as a country's biocapacity per person (in global hectares) minus its ecological footprint per person (also in global hectares). Data from 2013. (Colour scale runs from −9 to +8 gha per person.)

The ecological footprint measures human demand on nature, i.e., the quantity of nature it takes to support people or an economy. It tracks this demand through an ecological accounting system. The accounts contrast the biologically productive area people use for their consumption with the biologically productive area available within a region or the world (biocapacity, the productive area that can regenerate what people demand from nature). In short, it is a measure of human impact on Earth's ecosystems and reveals the dependence of the human economy on natural capital.

Footprint and biocapacity can be compared at the individual, regional, national or global scale. Both footprint and biocapacity change every year with number of people, per person consumption, efficiency of production, and productivity of ecosystems. At a global scale, footprint assessments show how big humanity's demand is compared to what planet Earth can renew. Since 2003, Global Footprint Network has calculated the ecological footprint from UN data sources for the world as a whole and for over 200 nations (known as the National Footprint Accounts). Every year the calculations are updated with the newest data. The time series are recalculated with every update since UN statistics also change historical data sets. As shown in Lin et al. (2018) the time trends for countries and the world have stayed consistent despite data updates. Also, a recent study by the Swiss Ministry of Environment independently recalculated the Swiss trends and reproduced them within 1–4% for the time period that they studied (1996–2015). Global Footprint Network estimates that, as of 2014, humanity has been using natural capital 1.7 times as fast as Earth can renew it. This means humanity's ecological footprint corresponds to 1.7 planet Earths. 
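The headline "number of Earths" figure falls directly out of this accounting: it is simply total footprint divided by total biocapacity. The sketch below is illustrative only; ecological_balance is a made-up helper, and the global totals are the National Footprint Accounts figures quoted later in this article (which come from slightly different reference years).

def ecological_balance(footprint_gha, biocapacity_gha):
    """Compare demand (footprint) with supply (biocapacity), both in global hectares.

    Returns the ecological reserve (positive) or deficit (negative), and the
    number of planet-equivalents the footprint implies.
    """
    reserve = biocapacity_gha - footprint_gha
    planet_equivalents = footprint_gha / biocapacity_gha
    return reserve, planet_equivalents

# Global totals cited in this article, in global hectares.
reserve, earths = ecological_balance(footprint_gha=20.6e9, biocapacity_gha=12.2e9)
print(round(earths, 2))  # ~1.69, i.e. roughly the "1.7 planet Earths" figure

The same comparison applied to one country's per-person figures gives the national surplus or deficit shown in the map above.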

Ecological footprint analysis is widely used around the Earth in support of sustainability assessments. It enables people to measure and manage the use of resources throughout the economy and explore the sustainability of individual lifestyles, goods and services, organizations, industry sectors, neighborhoods, cities, regions and nations. A first set of ecological footprint standards, detailing both communication and calculation procedures, has existed since 2006; the latest version is the updated standards from 2009.

Footprint measurements and methodology

The natural resources of Earth are finite, and unsustainably strained by current levels of human activity.
 
For 2014, Global Footprint Network estimated humanity's ecological footprint as 1.7 planet Earths. This means that, according to their calculations, humanity's demands were 1.7 times what the planet's ecosystems renewed.

Ecological footprints can be calculated at any scale: for an activity, a person, a community, a city, a town, a region, a nation, or humanity as a whole. Cities, due to their population concentration, have large ecological footprints and have become ground zero for footprint reduction.

The ecological footprint accounting method at the national level is described on the web page of Global Footprint Network or, in greater detail, in academic papers, including Borucke et al.

The National Accounts Review Committee has also published a research agenda on how to improve the accounts.

Overview

The first academic publication about ecological footprints was by William Rees in 1992. The ecological footprint concept and calculation method were developed as the PhD dissertation of Mathis Wackernagel, under Rees' supervision at the University of British Columbia in Vancouver, Canada, from 1990 to 1994. Originally, Wackernagel and Rees called the concept "appropriated carrying capacity". To make the idea more accessible, Rees came up with the term "ecological footprint", inspired by a computer technician who praised his new computer's "small footprint on the desk". In early 1996, Wackernagel and Rees published the book Our Ecological Footprint: Reducing Human Impact on the Earth with illustrations by Phil Testemale.

Footprint values at the end of a survey are categorized for Carbon, Food, Housing, and Goods and Services, as well as the total footprint expressed as the number of Earths needed to sustain the world's population at that level of consumption. This approach can also be applied to an activity such as the manufacturing of a product or the driving of a car. This resource accounting is similar to life-cycle analysis, wherein the consumption of energy, biomass (food, fiber), building material, water and other resources is converted into a normalized measure of land area called global hectares (gha).

The focus of Ecological Footprint accounting is biological resources. It is biological resources, rather than non-renewable resources such as oil or minerals, that are materially the most limiting for the human enterprise. For instance, while the amount of fossil fuel still underground is limited, even more limiting is the biosphere's ability to cope with the CO2 emitted when burning it. This ability is one of the competing uses of the planet's biocapacity. Similarly, minerals are limited by the energy available to extract them from the lithosphere and concentrate them. The limits of ecosystems' ability to renew biomass are set by factors such as water availability, climate, soil fertility, solar energy, technology and management practices. This capacity to renew, driven by photosynthesis, is called biocapacity.

Per capita ecological footprint (EF), or ecological footprint analysis (EFA), is a means of comparing consumption and lifestyles, and checking this against biocapacity, nature's ability to provide for this consumption. The tool can inform policy by examining to what extent a nation uses more (or less) than is available within its territory, or to what extent the nation's lifestyle would be replicable worldwide. The footprint can also be a useful tool to educate people about carrying capacity and overconsumption, with the aim of altering personal behavior. Ecological footprints may be used to argue that many current lifestyles are not sustainable. Such a global comparison also clearly shows the inequalities of resource use on this planet at the beginning of the twenty-first century.

In 2007, the average biologically productive area per person worldwide was approximately 1.8 global hectares (gha) per capita. The U.S. footprint per capita was 9.0 gha, and that of Switzerland was 5.6 gha, while China's was 1.8 gha. The WWF claims that the human footprint has exceeded the biocapacity (the available supply of natural resources) of the planet by 20%. Wackernagel and Rees originally estimated that the available biological capacity for the 6 billion people on Earth at that time was about 1.3 hectares per person, which is smaller than the 1.8 global hectares published for 2006, because the initial studies neither used global hectares nor included bioproductive marine areas.

A number of NGOs offer ecological footprint calculators.

Global trends in humanity's ecological footprint

According to the 2018 edition of the National Footprint Accounts, humanity's total ecological footprint has exhibited an increasing trend since 1961, growing an average of 2.1% per year (SD = 1.9). Humanity's ecological footprint was 7.0 billion gha in 1961 and increased to 20.6 billion gha in 2014. The world-average ecological footprint in 2014 was 2.8 global hectares per person. The carbon footprint is the fastest-growing part of the ecological footprint and currently accounts for about 60% of humanity's total ecological footprint.

The Earth's biocapacity has not increased at the same rate as the ecological footprint; its growth averaged only 0.5% per year (SD = 0.7). Largely because of agricultural intensification, biocapacity grew from 9.6 billion gha in 1961 to 12.2 billion gha in 2016.

The Earth has therefore been in ecological overshoot, with humanity using resources and generating waste faster than ecosystems can regenerate and absorb them, since the 1970s. In 2018, Earth Overshoot Day, the date by which humanity has used more from nature than the planet can renew in the entire year, was estimated to be August 1. More than 85% of humanity now lives in countries that run an ecological deficit, meaning their ecological footprint for consumption exceeds the biocapacity of that country.
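Earth Overshoot Day follows from the same two totals: it is the day of the year on which cumulative demand reaches a full year of regeneration, commonly computed as (biocapacity / footprint) × 365. A rough sketch using the illustrative totals from the earlier example; the officially announced date (1 August 2018) is based on the newest accounts, so this toy calculation will not match it exactly.

import datetime

def earth_overshoot_day(biocapacity_gha, footprint_gha, year):
    """Approximate Earth Overshoot Day as (biocapacity / footprint) * 365."""
    day_of_year = int(biocapacity_gha / footprint_gha * 365)
    return datetime.date(year, 1, 1) + datetime.timedelta(days=day_of_year - 1)

# Illustrative global totals in global hectares (see the trend figures above).
print(earth_overshoot_day(biocapacity_gha=12.2e9, footprint_gha=20.6e9, year=2018))
# -> a date in early August; newer accounts data moves the date earlier.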

Studies in the United Kingdom

The UK's average ecological footprint is 5.45 global hectares per capita (gha) with variations between regions ranging from 4.80 gha (Wales) to 5.56 gha (East England).

Two recent studies have examined relatively low-impact small communities. BedZED, a 96-home mixed-income housing development in South London, was designed by Bill Dunster Architects and sustainability consultants BioRegional for the Peabody Trust. Despite being populated by relatively "mainstream" home-buyers, BedZED was found to have a footprint of 3.20 gha due to on-site renewable energy production, energy-efficient architecture, and an extensive green lifestyles program that included London's first on-site carsharing club. The report did not measure the added footprint of the 15,000 visitors who have toured BedZED since its completion in 2002. Findhorn Ecovillage, a rural intentional community in Moray, Scotland, had a total footprint of 2.56 gha, including both the many guests and visitors who travel to the community to undertake residential courses there and the nearby campus of Cluny Hill College. The residents alone have a footprint of 2.71 gha, a little over half the UK national average and one of the lowest ecological footprints of any community measured so far in the industrialized world. Keveral Farm, an organic farming community in Cornwall, was found to have a footprint of 2.4 gha, though with substantial differences in footprints among community members.

Ecological footprint at the individual level

In a 2012 study of consumers acting "green" vs. "brown" (where green people are "expected to have significantly lower ecological impact than 'brown' consumers"), the conclusion was that "the research found no significant difference between the carbon footprints of green and brown consumers". A 2013 study concluded the same.

A 2017 study published in Environmental Research Letters posited that the most significant way individuals could reduce their own carbon footprint is to have fewer children, followed by living without a vehicle, forgoing air travel and adopting a plant-based diet. The 2017 World Scientists' Warning to Humanity: A Second Notice, co-signed by over 15,000 scientists around the globe, urges human beings to "re-examine and change our individual behaviors, including limiting our own reproduction (ideally to replacement level at most) and drastically diminishing our per capita consumption of fossil fuels, meat, and other resources." According to a 2018 study published in Science, avoiding animal products altogether, including meat and dairy, is the most significant way individuals can reduce their overall ecological impact on earth, meaning "not just greenhouse gases, but global acidification, eutrophication, land use and water use".

Reviews and critiques

Early criticism was published by van den Bergh and Verbruggen in 1999 and was updated in 2014. Another criticism was published in 2008. A more complete review commissioned by the Directorate-General for the Environment (European Commission) was published in June 2008. The review found Ecological Footprint "a useful indicator for assessing progress on the EU's Resource Strategy", and the authors noted that Ecological Footprint analysis was unique "in its ability to relate resource use to the concept of carrying capacity." The review noted that further improvements in data quality, methodologies and assumptions were needed.

A more recent critique of the concept was published by Blomqvist et al. (2013a), with a reply from Rees and Wackernagel (2013) and a rejoinder by Blomqvist et al. (2013b).

An additional strand of criticism comes from Giampietro and Saltelli (2014a), with a reply from Goldfinger et al. (2014), a rejoinder by Giampietro and Saltelli (2014b), and additional comments from van den Bergh and Grazi (2015).

A number of countries have engaged in research collaborations to test the validity of the method. These include Switzerland, Germany, the United Arab Emirates, and Belgium.

Grazi et al. (2007) have performed a systematic comparison of the ecological footprint method with spatial welfare analysis that includes environmental externalities, agglomeration effects and trade advantages. They find that the two methods can lead to very distinct, and even opposite, rankings of different spatial patterns of economic activity. However, this should not be surprising, since the two methods address different research questions.

Newman (2006) has argued that the ecological footprint concept may have an anti-urban bias, as it does not consider the opportunities created by urban growth. Calculating the ecological footprint for densely populated areas, such as a city or small country with a comparatively large population — e.g. New York and Singapore respectively — may lead to the perception of these populations as "parasitic". This is because these communities have little intrinsic biocapacity, and instead must rely upon large hinterlands. Critics argue that this is a dubious characterization since mechanized rural farmers in developed nations may easily consume more resources than urban inhabitants, due to transportation requirements and the unavailability of economies of scale. Furthermore, such moral conclusions seem to be an argument for autarky. Some even take this train of thought a step further, claiming that the Footprint denies the benefits of trade. Therefore, the critics argue that the Footprint can only be applied globally.

The method seems to reward the replacement of original ecosystems with high-productivity agricultural monocultures by assigning a higher biocapacity to such regions. For example, replacing ancient woodlands or tropical forests with monoculture forests or plantations may improve the ecological footprint. Similarly, if organic farming yields were lower than those of conventional methods, this could result in the former being "penalized" with a larger ecological footprint. Of course, this insight, while valid, stems from using the footprint as one's only metric. If the use of ecological footprints is complemented with other indicators, such as one for biodiversity, the problem might be solved. Indeed, WWF's Living Planet Report complements the biennial Footprint calculations with the Living Planet Index of biodiversity. Manfred Lenzen and Shauna Murray have created a modified Ecological Footprint that takes biodiversity into account for use in Australia.

Although the ecological footprint model prior to 2008 treated nuclear power in the same manner as coal power, the real-world effects of the two are radically different. A life cycle analysis centered on the Swedish Forsmark Nuclear Power Plant estimated carbon dioxide emissions at 3.10 g/kW⋅h, with 5.05 g/kW⋅h estimated in 2002 for the Torness Nuclear Power Station. This compares to 11 g/kW⋅h for hydroelectric power, 950 g/kW⋅h for installed coal, 900 g/kW⋅h for oil and 600 g/kW⋅h for natural gas generation in the United States in 1999. Figures released by Mark Hertsgaard, however, show that because of the delays in building nuclear plants and the costs involved, investments in energy efficiency and renewable energies have seven times the return on investment of investments in nuclear energy.

The Vattenfall study found nuclear, hydro, and wind power to have far lower greenhouse gas emissions than the other sources represented.

The Swedish utility Vattenfall did a study of full life-cycle greenhouse-gas emissions of energy sources the utility uses to produce electricity, namely: Nuclear, Hydro, Coal, Gas, Solar Cell, Peat and Wind. The net result of the study was that nuclear power produced 3.3 grams of carbon dioxide per kW⋅h of produced power. This compares to 400 for natural gas and 700 for coal (according to this study). The study also concluded that nuclear power produced the smallest amount of CO2 of any of their electricity sources.

The problems of nuclear waste, it is claimed, do not come close to the problems of fossil fuel waste. A 2004 article from the BBC states: "The World Health Organization (WHO) says 3 million people are killed worldwide by outdoor air pollution annually from vehicles and industrial emissions, and 1.6 million indoors through using solid fuel." In the U.S. alone, fossil fuel waste kills 20,000 people each year. A coal power plant releases 100 times as much radiation as a nuclear power plant of the same wattage. It is estimated that during 1982, US coal burning released 155 times as much radioactivity into the atmosphere as the Three Mile Island incident. In addition, fossil fuel waste causes global warming, which leads to increased deaths from hurricanes, flooding, and other weather events. The World Nuclear Association provides a comparison of deaths due to accidents among different forms of energy production. In their comparison, deaths per TW-yr of electricity produced (in the UK and USA) from 1970 to 1992 are quoted as 885 for hydropower, 342 for coal, 85 for natural gas, and 8 for nuclear.

The Western Australian government's State of the Environment Report included an Ecological Footprint measure for the average Western Australian of about 15 hectares, seven times the average footprint per person on the planet in 2007.

Footprint by country

Ecological footprint for different nations compared to their Human Development Index.

The world-average ecological footprint in 2013 was 2.8 global hectares per person. The average per country ranges from under 1 to over 10 global hectares per person. There is also high variation within countries, based on individual lifestyle and economic possibilities.

The GHG footprint, or the narrower carbon footprint, is a component of the ecological footprint. Often, when only the carbon footprint is reported, it is expressed in weight of CO2 (or CO2e, representing greenhouse gas warming potential), but it can also be expressed in land areas like ecological footprints. Both can be applied to products, people or whole societies.

Implications

. . . the average world citizen has an eco-footprint of about 2.7 global average hectares while there are only 2.1 global hectares of bioproductive land and water per capita on earth. This means that humanity has already overshot global biocapacity by 30% and now lives unsustainably by depleting stocks of "natural capital".

Paris Agreement

From Wikipedia, the free encyclopedia
 
Paris Agreement under the United Nations Framework Convention on Climate Change
Map legend: state parties; signatories; parties covered by EU ratification
Drafted: 30 November – 12 December 2015 in Le Bourget, France
Signed: 22 April 2016
Location: New York City, United States
Sealed: 12 December 2015
Effective: 4 November 2016
Condition: Ratification and accession by 55 UNFCCC parties, accounting for 55% of global greenhouse gas emissions
Signatories: 195
Parties: 187
Depositary: Secretary-General of the United Nations
Languages: Arabic, Chinese, English, French, Russian and Spanish

The Paris Agreement (French: Accord de Paris) is an agreement within the United Nations Framework Convention on Climate Change (UNFCCC), dealing with greenhouse-gas-emissions mitigation, adaptation, and finance, signed in 2016. The agreement's language was negotiated by representatives of 196 state parties at the 21st Conference of the Parties of the UNFCCC in Le Bourget, near Paris, France, and adopted by consensus on 12 December 2015. As of March 2019, 195 UNFCCC members have signed the agreement, and 187 have become party to it.

The Paris Agreement's long-term temperature goal is to keep the increase in global average temperature to well below 2 °C above pre-industrial levels; and to pursue efforts to limit the increase to 1.5 °C, recognizing that this would substantially reduce the risks and impacts of climate change. This should be done by peaking emissions as soon as possible, in order to "achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases" in the second half of the 21st century. It also aims to increase the ability of parties to adapt to the adverse impacts of climate change, and make "finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development."

Under the Paris Agreement, each country must determine, plan, and regularly report on the contribution that it undertakes to mitigate global warming. No mechanism forces a country to set a specific target by a specific date, but each target should go beyond previously set targets. In June 2017, U.S. President Donald Trump announced his intention to withdraw the United States from the agreement. Under the agreement, the earliest effective date of withdrawal for the U.S. is November 2020, shortly before the end of President Trump's current term. In practice, changes in United States policy that are contrary to the Paris Agreement have already been put in place.

Content

Aims

The aim of the agreement is to slow global warming. As described in its Article 2, it does so by "enhancing the implementation" of the UNFCCC through:
(a) Holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;
(b) Increasing the ability to adapt to the adverse impacts of climate change and foster climate resilience and low greenhouse gas emissions development, in a manner that does not threaten food production;
(c) Making finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development.
This strategy involved energy and climate policy, including the so-called 20/20/20 targets: a 20% reduction in carbon dioxide (CO2) emissions, an increase of renewable energy's market share to 20%, and a 20% increase in energy efficiency.

Countries furthermore aim to reach "global peaking of greenhouse gas emissions as soon as possible". The agreement has been described as an incentive for and driver of fossil fuel divestment.
The Paris deal is the world's first comprehensive climate agreement.

Nationally determined contributions

Global carbon dioxide emissions by jurisdiction.
 
Contributions each individual country should make to achieve the worldwide goal are determined by all countries individually and are called nationally determined contributions (NDCs). Article 3 requires them to be "ambitious", "represent a progression over time" and set "with the view to achieving the purpose of this Agreement". The contributions should be reported every five years and are to be registered by the UNFCCC Secretariat. Each successive contribution should be more ambitious than the previous one, a principle known as 'progression'. Countries can cooperate and pool their nationally determined contributions. The Intended Nationally Determined Contributions pledged during the 2015 Climate Change Conference serve, unless provided otherwise, as the initial nationally determined contribution.

The level of each country's NDC will set that country's targets. However, the 'contributions' themselves are not binding as a matter of international law, as they lack the specificity, normative character, or obligatory language necessary to create binding norms. Furthermore, there will be no mechanism to force a country to set a target in its NDC by a specific date and no enforcement if a set target in an NDC is not met. There will be only a "name and shame" system or, as János Pásztor, the U.N. assistant secretary-general on climate change, told CBS News (US), a "name and encourage" plan. As the agreement provides no consequences if countries do not meet their commitments, consensus of this kind is fragile. A trickle of nations exiting the agreement could trigger the withdrawal of more governments, bringing about a total collapse of the agreement.

The NDC Partnership was launched at COP22 in Marrakesh to enhance cooperation so that countries have access to the technical knowledge and financial support they need to achieve large-scale climate and sustainable development targets. The NDC Partnership is guided by a Steering Committee composed of developed and developing nations and international institutions, and facilitated by a Support Unit hosted by World Resources Institute and based in Washington, DC and Bonn, Germany. The NDC Partnership is co-chaired by the governments of Costa Rica and the Netherlands and includes 93 member countries, 21 institutional partners and ten associate members.

Effects on global temperature

The negotiators of the agreement, however, stated that the NDCs and the target of no more than 2 °C increase were insufficient; instead, a target of 1.5 °C maximum increase is required, noting "with concern that the estimated aggregate greenhouse gas emission levels in 2025 and 2030 resulting from the intended nationally determined contributions do not fall within least-cost 2 °C scenarios but rather lead to a projected level of 55 gigatonnes in 2030", and recognizing furthermore "that much greater emission reduction efforts will be required in order to hold the increase in the global average temperature to below 2 °C by reducing emissions to 40 gigatonnes or to 1.5 °C."

Though these are not the sustained long-term temperatures that the Agreement addresses, average temperatures in the first half of 2016 were about 1.3 °C (2.3 °F) above the average in 1880, when global record-keeping began.

When the agreement achieved enough signatures to cross the threshold on 5 October 2016, US President Barack Obama claimed that "Even if we meet every target ... we will only get to part of where we need to go." He also said that "this agreement will help delay or avoid some of the worst consequences of climate change. It will help other nations ratchet down their emissions over time, and set bolder targets as technology advances, all under a strong system of transparency that allows each nation to evaluate the progress of all other nations."

Global stocktake

Map of cumulative per capita anthropogenic atmospheric CO2 emissions by country. Cumulative emissions include land use change, and are measured between the years 1950 and 2000.
 
The global stocktake will kick off with a "facilitative dialogue" in 2018. At this convening, parties will evaluate how their NDCs stack up to the nearer-term goal of peaking global emissions and the long-term goal of achieving net zero emissions by the second half of this century.

The implementation of the agreement by all member countries together will be evaluated every 5 years, with the first evaluation in 2023. The outcome is to be used as input for new nationally determined contributions of member states. The stocktake will not be of contributions/achievements of individual countries but a collective analysis of what has been achieved and what more needs to be done. 

The stocktake works as part of the Paris Agreement's effort to create a "ratcheting up" of ambition in emissions cuts. Because analysts have agreed that the current NDCs will not limit rising temperatures to below 2 degrees Celsius, the global stocktake reconvenes parties to assess how their new NDCs must evolve so that they continually reflect a country's "highest possible ambition".

While ratcheting up the ambition of NDCs is a major aim of the global stocktake, it assesses efforts beyond mitigation. The 5-year reviews will also evaluate adaptation, climate finance provisions, and technology development and transfer.

Structure

The Paris Agreement has a 'bottom up' structure in contrast to most international environmental law treaties, which are 'top down', characterised by standards and targets set internationally, for states to implement. Unlike its predecessor, the Kyoto Protocol, which sets commitment targets that have legal force, the Paris Agreement, with its emphasis on consensus-building, allows for voluntary and nationally determined targets. The specific climate goals are thus politically encouraged, rather than legally bound. Only the processes governing the reporting and review of these goals are mandated under international law. This structure is especially notable for the United States—because there are no legal mitigation or finance targets, the agreement is considered an "executive agreement rather than a treaty". Because the UNFCCC treaty of 1992 received the consent of the Senate, this new agreement does not require further legislation from Congress for it to take effect.

Another key difference between the Paris Agreement and the Kyoto Protocol is their scopes. While the Kyoto Protocol differentiated between Annex-1 and non-Annex-1 countries, this bifurcation is blurred in the Paris Agreement, as all parties will be required to submit emissions reductions plans. While the Paris Agreement still emphasizes the principle of "Common but Differentiated Responsibility and Respective Capabilities"—the acknowledgement that different nations have different capacities and duties to climate action—it does not provide a specific division between developed and developing nations. It therefore appears that negotiators will have to continue to deal with this issue in future negotiation rounds, even though the discussion on differentiation may take on a new dynamic.

Mitigation provisions and carbon markets

Article 6 has been flagged as containing some of the key provisions of the Paris Agreement. Broadly, it outlines the cooperative approaches that parties can take in achieving their nationally determined carbon emissions reductions. In doing so, it helps establish the Paris Agreement as a framework for a global carbon market.

Linkage of trading systems and international transfer of mitigation outcomes (ITMOs)

Paragraphs 6.2 and 6.3 establish a framework to govern the international transfer of mitigation outcomes (ITMOs). The Agreement recognizes the rights of Parties to use emissions reductions outside of their own jurisdiction toward their NDC, in a system of carbon accounting and trading.

This provision requires the "linkage" of various carbon emissions trading systems—because measured emissions reductions must avoid "double counting", transferred mitigation outcomes must be recorded as a gain of emission units for one party and a reduction of emission units for the other. Because the NDCs, and domestic carbon trading schemes, are heterogeneous, the ITMOs will provide a format for global linkage under the auspices of the UNFCCC. The provision thus also creates a pressure for countries to adopt emissions management systems—if a country wants to use more cost-effective cooperative approaches to achieve their NDCs, they will need to monitor carbon units for their economies.
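To make the double-counting rule concrete, the toy ledger below records a single ITMO transfer with a corresponding adjustment: the seller's accounted emissions rise by the transferred amount while the buyer's fall, so the same reduction is never claimed twice. This is a conceptual sketch only; the country names, the figures, and the transfer_itmo helper are invented for illustration and do not reflect the UNFCCC's actual accounting rules, which were still being negotiated.

def transfer_itmo(ledger, seller, buyer, tonnes_co2e):
    """Apply a corresponding adjustment for one transferred mitigation outcome.

    The seller can no longer count the reduction toward its NDC (its adjusted
    balance rises); the buyer counts it instead (its adjusted balance falls).
    """
    ledger[seller] += tonnes_co2e
    ledger[buyer] -= tonnes_co2e
    return ledger

# Adjusted emission balances in Mt CO2e; purely illustrative numbers.
ledger = {"Country A": 500.0, "Country B": 800.0}
transfer_itmo(ledger, seller="Country A", buyer="Country B", tonnes_co2e=25.0)
print(ledger)  # {'Country A': 525.0, 'Country B': 775.0} -- net total unchanged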

Sustainable Development Mechanism

Paragraphs 6.4-6.7 establish a mechanism "to contribute to the mitigation of greenhouse gases and support sustainable development". Though there is no specific name for the mechanism as yet, many Parties and observers have informally coalesced around the name "Sustainable Development Mechanism" or "SDM". The SDM is considered to be the successor to the Clean Development Mechanism, a flexible mechanism under the Kyoto Protocol, by which parties could collaboratively pursue emissions reductions for their Intended Nationally Determined Contributions. The Sustainable Development Mechanism lays the framework for the future of the Clean Development Mechanism post-Kyoto (in 2020). 

In its basic aim, the SDM will largely resemble the Clean Development Mechanism, with the dual mission to (1) contribute to global GHG emissions reductions and (2) support sustainable development. Though the structure and processes governing the SDM are not yet determined, certain similarities and differences from the Clean Development Mechanism can already be seen. Notably, the SDM, unlike the Clean Development Mechanism, will be available to all parties as opposed to only Annex-1 parties, making it much wider in scope.

Since the Kyoto Protocol went into force, the Clean Development Mechanism has been criticized for failing to produce either meaningful emissions reductions or sustainable development benefits in most instances. It has also suffered from the low price of Certified Emission Reductions (CERs), creating less demand for projects. These criticisms have motivated the recommendations of various stakeholders, who have provided, through working groups and reports, new elements they hope to see in the SDM that will bolster its success. The specifics of the governance structure, project proposal modalities, and overall design were expected to come during the 2016 Conference of the Parties in Marrakesh.

Adaptation provisions

Adaptation issues garnered more focus in the formation of the Paris Agreement. Collective, long-term adaptation goals are included in the Agreement, and countries must report on their adaptation actions, making adaptation a parallel component of the agreement with mitigation. The adaptation goals focus on enhancing adaptive capacity, increasing resilience, and limiting vulnerability.

Ensuring finance

At the Paris Conference in 2015 where the Agreement was negotiated, the developed countries reaffirmed the commitment to mobilize $100 billion a year in climate finance by 2020, and agreed to continue mobilizing finance at the level of $100 billion a year until 2025. The commitment refers to the pre-existing plan to provide US$100 billion a year in aid to developing countries for actions on climate change adaptation and mitigation.

Though both mitigation and adaptation require increased climate financing, adaptation has typically received lower levels of support and has mobilised less action from the private sector. A 2014 report by the OECD found that just 16 percent of global climate finance was directed toward adaptation in 2014. The Paris Agreement called for a balance of climate finance between adaptation and mitigation, specifically underscoring the need to increase adaptation support for parties most vulnerable to the effects of climate change, including Least Developed Countries and Small Island Developing States. The agreement also reminds parties of the importance of public grants, because adaptation measures receive less investment from the private sector. John Kerry, as Secretary of State, announced that the U.S. would double its grant-based adaptation finance by 2020.

Some specific outcomes of the elevated attention to adaptation financing in Paris include the G7 countries' announcement to provide US$420 million for Climate Risk Insurance, and the launching of a Climate Risk and Early Warning Systems (CREWS) Initiative. In early March 2016, the Obama administration gave a $500 million grant to the "Green Climate Fund" as "the first chunk of a $3 billion commitment made at the Paris climate talks." The Green Climate Fund has now received over $10 billion in pledges. Notably, the pledges come from developed nations like France, the US, and Japan, but also from developing countries such as Mexico, Indonesia, and Vietnam.

Loss and damage

A new issue that emerged as a focal point in the Paris negotiations rose from the fact that many of the worst effects of climate change will be too severe or come too quickly to be avoided by adaptation measures. The Paris Agreement specifically acknowledges the need to address loss and damage of this kind, and aims to find appropriate responses. It specifies that loss and damage can take various forms—both as immediate impacts from extreme weather events, and slow onset impacts, such as the loss of land to sea-level rise for low-lying islands.

The push to address loss and damage as a distinct issue in the Paris Agreement came from the Alliance of Small Island States and the Least Developed Countries, whose economies and livelihoods are most vulnerable to the negative impacts of climate change. Developed countries, however, worried that classifying the issue as one separate and beyond adaptation measures would create yet another climate finance provision, or might imply legal liability for catastrophic climate events. 

In the end, all parties acknowledged the need for "averting, minimizing, and addressing loss and damage" but notably, any mention of compensation or liability is excluded. The agreement also adopts the Warsaw International Mechanism for Loss and Damage, an institution that will attempt to address questions about how to classify, address, and share responsibility for loss.

Enhanced transparency framework

While each Party's NDC is not legally binding, the Parties are legally bound to have their progress tracked by technical expert review to assess achievement toward the NDC, and to determine ways to strengthen ambition. Article 13 of the Paris Agreement articulates an "enhanced transparency framework for action and support" that establishes harmonized monitoring, reporting, and verification (MRV) requirements. Thus, both developed and developing nations must report every two years on their mitigation efforts, and all parties will be subject to both technical and peer review.

Flexibility mechanisms

While the enhanced transparency framework is universal, along with the global stocktaking to occur every 5 years, the framework is meant to provide "built-in flexibility" to distinguish between developed and developing countries' capacities. In conjunction with this, the Paris Agreement has provisions for an enhanced framework for capacity building. The agreement recognizes the varying circumstances of some countries, and specifically notes that the technical expert review for each country consider that country's specific capacity for reporting. The agreement also develops a Capacity-Building Initiative for Transparency to assist developing countries in building the necessary institutions and processes for complying with the transparency framework.

There are several ways that flexibility mechanisms can be incorporated into the enhanced transparency framework. The scope, level of detail, or frequency of reporting may all be adjusted and tiered based on a country's capacity. The requirement for in-country technical reviews could be lifted for some less developed or small island developing countries. Ways to assess capacity include financial and human resources in a country necessary for NDC review.

Adoption

The Paris Agreement was opened for signature on 22 April 2016 (Earth Day) at a ceremony in New York. After several European Union states ratified the agreement in October 2016, enough countries producing enough of the world's greenhouse gases had ratified for the agreement to enter into force. The agreement went into effect on 4 November 2016.

Negotiations

Within the United Nations Framework Convention on Climate Change, legal instruments may be adopted to reach the goals of the convention. For the period from 2008 to 2012, greenhouse gas reduction measures were agreed in the Kyoto Protocol in 1997. The scope of the protocol was extended until 2020 with the Doha Amendment to that protocol in 2012.

During the 2011 United Nations Climate Change Conference, the Durban Platform (and the Ad Hoc Working Group on the Durban Platform for Enhanced Action) was established with the aim to negotiate a legal instrument governing climate change mitigation measures from 2020. The resulting agreement was to be adopted in 2015.

Adoption

Heads of delegations at the 2015 United Nations Climate Change Conference in Paris.
 
At the conclusion of COP 21 (the 21st meeting of the Conference of the Parties, which guides the Conference), on 12 December 2015, the final wording of the Paris Agreement was adopted by consensus by all of the 195 UNFCCC participating member states and the European Union. In the 12-page Agreement, the members promised to reduce their carbon output "as soon as possible" and to do their best to keep global warming "to well below 2 °C" (3.6 °F).

Signature and entry into force

Signing by John Kerry in United Nations General Assembly Hall for the United States
 
The Paris Agreement was open for signature by states and regional economic integration organizations that are parties to the UNFCCC (the Convention) from 22 April 2016 to 21 April 2017 at the UN Headquarters in New York.

The agreement stated that it would enter into force (and thus become fully effective) only if 55 countries that produce at least 55% of the world's greenhouse gas emissions (according to a list produced in 2015) ratify, accept, approve or accede to the agreement. On 1 April 2016, the United States and China, which together represent almost 40% of global emissions, issued a joint statement confirming that both countries would sign the Paris Climate Agreement. 175 Parties (174 states and the European Union) signed the agreement on the first date it was open for signature. On the same day, more than 20 countries issued a statement of their intent to join as soon as possible with a view to joining in 2016. With ratification by the European Union, the Agreement obtained enough parties to enter into effect as of 4 November 2016.

European Union and its member states

Both the EU and its member states are individually responsible for ratifying the Paris Agreement. A strong preference was reported that the EU and its 28 member states deposit their instruments of ratification at the same time, to ensure that neither the EU nor its member states engage themselves to fulfilling obligations that strictly belong to the other, and there were fears that disagreement over each individual member state's share of the EU-wide reduction target, as well as Britain's vote to leave the EU, might delay the Paris pact. However, the European Parliament approved ratification of the Paris Agreement on 4 October 2016, and the EU deposited its instruments of ratification on 5 October 2016, along with several individual EU member states.

Implementation

The process of translating the Paris Agreement into national agendas and implementation has started. One example is the commitment of the least developed countries (LDCs). The LDC Renewable Energy and Energy Efficiency Initiative for Sustainable Development, known as LDC REEEI, is set to bring sustainable, clean energy to millions of energy-starved people in LDCs, facilitating improved energy access, the creation of jobs and contributing to the achievement of the Sustainable Development Goals.

According to analysis from the Intergovernmental Panel on Climate Change (IPCC), a carbon "budget" based upon total carbon dioxide emissions in the atmosphere (rather than the rate of annual emissions) to limit global warming to 1.5 °C was estimated at 2.25 trillion tonnes of cumulative emitted carbon dioxide since 1870. This is a notable increase over the roughly 2 trillion tonnes estimated at the time of the original Paris accord, a limit that would be reached around 2020 at current rates of emission. Annual carbon dioxide emissions are currently estimated at about 40 billion tonnes per year. The revised IPCC budget was based upon the CMIP5 climate models. Estimates using different base years also provide other, slightly adjusted estimates of the carbon "budget".
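The budget framing reduces to simple arithmetic: subtract what has already been emitted from the total budget and divide by the annual emission rate to see how long the remainder lasts. In the sketch below, the 2.25 trillion tonne total and the ~40 Gt/year rate come from the paragraph above, while emitted_to_date_gt is a placeholder the reader must supply from a cumulative-emissions dataset; the value shown is an assumption for illustration, not a figure from this article.

def years_remaining(total_budget_gt, emitted_to_date_gt, annual_emissions_gt):
    """Years until a carbon budget is exhausted at a constant emission rate."""
    remaining = max(total_budget_gt - emitted_to_date_gt, 0.0)
    return remaining / annual_emissions_gt

# Budget and annual rate from the text (Gt CO2); cumulative emissions assumed.
print(years_remaining(total_budget_gt=2250.0,
                      emitted_to_date_gt=2200.0,   # assumption, not sourced here
                      annual_emissions_gt=40.0))   # -> 1.25 years in this toy case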

Parties and signatories

As of February 2019, 194 states and the European Union have signed the Agreement. 186 states and the EU, representing more than 87% of global greenhouse gas emissions, have ratified or acceded to the Agreement, including China, the United States and India, three of the four largest emitters among UNFCCC members (together about 42% of global emissions).

Withdrawal from Agreement

Article 28 of the agreement enables parties to withdraw from the agreement after sending a withdrawal notification to the depositary, but notice can be given no earlier than three years after the agreement goes into force for the country. Withdrawal is effective one year after the depositary is notified. Alternatively, the Agreement stipulates that withdrawal from the UNFCCC, under which the Paris Agreement was adopted, would also withdraw the state from the Paris Agreement. The conditions for withdrawal from the UNFCCC are the same as for the Paris Agreement. The agreement does not specify provisions for non-compliance. 

On 4 August 2017, the Trump administration delivered an official notice to the United Nations that the U.S. intends to withdraw from the Paris Agreement as soon as it is legally eligible to do so. The formal notice of withdrawal cannot be submitted until the agreement is in force for 3 years for the US, in 2019. In accordance with Article 28, as the agreement entered into force in the United States on 4 November 2016, the earliest possible effective withdrawal date for the United States is 4 November 2020 if notice is provided on 4 November 2019. If it chooses to withdraw by way of withdrawing from the UNFCCC, notice could be given immediately (the UNFCCC entered into force for the US in 1994), and be effective one year later.

Criticism

Effectiveness

Global CO2 emissions and probabilistic temperature outcomes of Paris
 
Paris climate accord emission reduction targets and current real-life reductions offered
 
A pair of studies in Nature found that, as of 2017, none of the major industrialized nations were implementing the policies they had envisioned, nor had they met their pledged emission reduction targets, and that even if they had, the sum of all member pledges (as of 2016) would not keep global temperature rise "well below 2 °C". According to UNEP, the emission cut targets pledged as of November 2016 would result in a temperature rise of 3 °C above pre-industrial levels, far above the 2 °C of the Paris climate agreement.

In addition, an MIT News article from 22 April 2016 discussed recent MIT studies on the impact that the Paris Agreement would have on global temperature increase. Using their Integrated Global System Modeling (IGSM) framework to project temperature increases in 2100, the researchers examined a wide range of scenarios, from no effort toward climate change past 2030 to full extension of the Paris Agreement past 2030. They concluded that the Paris Agreement would reduce warming by about 0.6 to 1.1 degrees Celsius relative to a no-effort scenario, with only a 0.1 °C difference by 2050 across all scenarios. They also concluded that, although beneficial, there was strong evidence that the goal set by the Paris Agreement could not be met; under all scenarios, warming would be at least 3.0 °C by 2100.

How well each individual country is on track toward achieving its Paris Agreement commitments can be followed continuously online.

A study published in 2018 points to a threshold at which temperatures could rise to 4 or 5 degrees above pre-industrial levels through self-reinforcing feedbacks in the climate system, and suggests this threshold is below the 2-degree temperature target agreed upon in the Paris climate deal. Study author Katherine Richardson stresses, "We note that the Earth has never in its history had a quasi-stable state that is around 2 °C warmer than the pre-industrial and suggest that there is substantial risk that the system, itself, will 'want' to continue warming because of all of these other processes – even if we stop emissions. This implies not only reducing emissions but much more."

At the same time, another study published in 2018 notes that even at 1.5 °C of warming, important increases in the occurrence of high river flows would be expected in India, South and Southeast Asia. Yet the same study points out that under 2.0 °C of warming, various areas in South America, central Africa, western Europe, and the Mississippi area in the United States would see more high flows, thus increasing flood risks.

Lack of binding enforcement mechanism

Although the agreement was lauded by many, including French President François Hollande and UN Secretary General Ban Ki-moon, criticism has also surfaced. For example, James Hansen, a former NASA scientist and a climate change expert, voiced anger that most of the agreement consists of "promises" or aims and not firm commitments. He called the Paris talks a fraud with 'no action, just promises' and feels that only an across-the-board tax on CO2 emissions, something not part of the Paris Agreement, would force CO2 emissions down fast enough to avoid the worst effects of global warming.

Institutional asset owners associations and think-tanks have also observed that the stated objectives of the Paris Agreement are implicitly "predicated upon an assumption – that member states of the United Nations, including high polluters such as China, the US, India, Russia, Japan, Germany, South Korea, Iran, Saudi Arabia, Canada, Indonesia and Mexico, which generate more than half the world's greenhouse gas emissions, will somehow drive down their carbon pollution voluntarily and assiduously without any binding enforcement mechanism to measure and control CO2 emissions at any level from factory to state, and without any specific penalty gradation or fiscal pressure (for example a carbon tax) to discourage bad behaviour." Emissions taxes (such as a carbon tax) can, however, be integrated into a country's NDC.

Digital imaging

From Wikipedia, the free encyclopedia
 
Digital imaging or digital image acquisition is the creation of a digitally encoded representation of the visual characteristics of an object, such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to make copies, and copies of copies, indefinitely without any loss of image quality.

Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).

History

Before digital imaging, the first photograph ever produced was made in 1826 by Frenchman Joseph Nicéphore Niépce. When Joseph was 28, he and his brother Claude discussed the possibility of reproducing images with light. His focus on this new innovation began in 1816, although he was in fact more interested in creating an engine for a boat. The brothers worked on that for quite some time, and Claude moved to England to promote the invention. Joseph was then able to focus on the photograph, and in 1826 he finally produced his first photograph of a view through his window. It took 8 hours of exposure to light to process it; with digital imaging, photos no longer take that long to produce. (Brown, B. (2002, November). The First Photograph. Abbey Newsletter, V26, N3.)

The first digital image was produced in 1920 by the Bartlane cable picture transmission system. British inventors Harry G. Bartholomew and Maynard D. McFarlane developed this method. The process consisted of "a series of negatives on zinc plates that were exposed for varying lengths of time, thus producing varying densities". The Bartlane cable picture transmission system generated at both its transmitter and its receiver end a punched data card or tape that was recreated as an image.

In 1957, Russell A. Kirsch produced a device that generated digital data that could be stored in a computer; this used a drum scanner and photomultiplier tube.

Digital imaging was developed in the 1960s and 1970s, largely to avoid the operational weaknesses of film cameras, for scientific and military missions including the KH-11 program. As digital technology became cheaper in later decades, it replaced the old film methods for many purposes. 

In the early 1960s, while developing compact, lightweight, portable equipment for the onboard nondestructive testing of naval aircraft, Frederick G. Weighart and James F. McNulty (U.S. radio engineer) at Automation Industries, Inc., then in El Segundo, California, co-invented the first apparatus to generate a digital image in real time, which was a fluoroscopic digital radiograph. Square wave signals were detected on the fluorescent screen of a fluoroscope to create the image.

Digital image sensors

The basis for digital image sensors is metal-oxide-semiconductor (MOS) technology, which originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor.

The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they applied a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.

Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. It was a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993. By 2007, sales of CMOS sensors had surpassed CCD sensors.

Digital image compression

An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972. DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet. Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos, with several billion JPEG images produced every day as of 2015.
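For readers curious what the DCT actually does, the sketch below computes the 8×8 two-dimensional DCT-II, the transform JPEG applies to each level-shifted block before quantization and entropy coding. It is a minimal, unoptimized illustration (real encoders use fast factorized DCTs), and the sample block is invented for the example.

import math

def dct_2d_8x8(block):
    """Naive 8x8 2-D DCT-II: F(u,v) = 1/4 * C(u)C(v) * sum f(x,y)cos(..)cos(..)."""
    n = 8
    coeffs = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            total = 0.0
            for x in range(n):
                for y in range(n):
                    total += (block[x][y]
                              * math.cos((2 * x + 1) * u * math.pi / 16)
                              * math.cos((2 * y + 1) * v * math.pi / 16))
            coeffs[u][v] = 0.25 * cu * cv * total
    return coeffs

# A smooth gradient block (already level-shifted by 128, as JPEG does):
block = [[(x + y) * 8 - 128 for y in range(8)] for x in range(8)]
dct = dct_2d_8x8(block)
# Energy concentrates in the low-frequency corner, which is what makes the
# coefficients easy to quantize and compress aggressively.
print(round(dct[0][0], 1), round(dct[7][7], 1))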

Digital cameras

These different scanning ideas were the basis of the first designs of the digital camera. Early cameras took a long time to capture an image and were poorly suited for consumer purposes. It wasn’t until the adoption of the CCD (charge-coupled device) that the digital camera really took off. The CCD became part of the imaging systems used in telescopes and in the first black-and-white digital cameras of the 1980s. Color was eventually added to the CCD and is now a standard feature of cameras.

Changing environment

Great strides have been made in the field of digital imaging. Negatives and exposure are foreign concepts to many, and the first digital image in 1920 led eventually to cheaper equipment, increasingly powerful yet simple software, and the growth of the Internet.

The constant advancement and production of physical equipment and hardware related to digital imaging has affected the environment surrounding the field. From cameras and webcams to printers and scanners, the hardware is becoming sleeker, thinner, faster, and cheaper. As the cost of equipment decreases, the market for new enthusiasts widens, allowing more consumers to experience the thrill of creating their own images.

Everyday personal laptops, family desktops, and company computers are able to handle photographic software. Our computers are more powerful machines with increasing capacities for running programs of any kind—especially digital imaging software. And that software is quickly becoming both smarter and simpler. Although functions on today’s programs reach the level of precise editing and even rendering 3-D images, user interfaces are designed to be friendly to advanced users as well as first-time fans.

The Internet allows editing, viewing, and sharing digital photos and graphics. A quick browse around the web can easily turn up graphic artwork from budding artists, news photos from around the world, corporate images of new products and services, and much more. The Internet has clearly proven itself a catalyst in fostering the growth of digital imaging.

Online photo sharing of images changes the way we understand photography and photographers. Online sites such as Flickr, Shutterfly, and Instagram give billions the capability to share their photography, whether they are amateurs or professionals. Photography has gone from being a luxury medium of communication and sharing to more of a fleeting moment in time. Subjects have also changed. Pictures used to be primarily taken of people and family. Now, we take them of anything. We can document our day and share it with everyone with the touch of our fingers.

Since 1826, when Niépce developed the first photograph that used light to reproduce an image, photography has advanced dramatically. Everyone is now a photographer in their own way, whereas during the 1800s and early 1900s lasting photographs were an expense highly valued and appreciated by consumers and producers. As a magazine article on five ways the digital camera changed us states: “The impact on professional photographers has been dramatic. Once upon a time a photographer wouldn’t dare waste a shot unless they were virtually certain it would work.” The use of digital imaging (photography) has changed the way we interact with our environment. Part of the world is experienced differently through visual imaging of lasting memories; it has become a new form of communication with friends, family and loved ones around the world without face-to-face interaction. Through photography it is easy to see people you have never met and to feel their presence without them being around. For example, Instagram is a form of social media where anyone is allowed to shoot, edit, and share photos of whatever they want with friends and family. Facebook, Snapchat, Vine and Twitter are also ways people express themselves with little or no words and capture every moment that is important. Lasting memories that were once hard to capture are now easy to make, because everyone can take pictures and edit them on a phone or laptop. Photography has become a new way to communicate, and its use is growing rapidly, which has affected the world around us.

A study by Basey, Maines, Francis, and Melbourne found that drawings used in class have a significant negative effect on lower-order content in students’ lab reports, on their perspectives of labs, on excitement, and on the time efficiency of learning, whereas documentation-style learning has no significant effects on students in these areas. They also found that students were more motivated and excited to learn when using digital imaging.

Field advancements

In the field of education:
  • As digital projectors, screens, and graphics find their way into the classroom, teachers and students alike are benefitting from the increased convenience and communication they provide, although their theft can be a common problem in schools.[26] In addition, acquiring a basic digital imaging education is becoming increasingly important for young professionals. Reed, a design production expert from Western Washington University, stressed the importance of using “digital concepts to familiarize students with the exciting and rewarding technologies found in one of the major industries of the 21st century”.
In the field of medical imaging:
  • Medical imaging, a branch of digital imaging that seeks to assist in the diagnosis and treatment of diseases, is growing at a rapid rate. A recent study by the American Academy of Pediatrics suggests that proper imaging of children who may have appendicitis may reduce the number of appendectomies needed. Further advancements include highly detailed and accurate imaging of the brain, lungs, tendons, and other parts of the body—images that can be used by health professionals to better serve patients.
  • According to Vidar, as more countries take on this new way of capturing an image, it has been found that image digitalization in medicine has been increasingly beneficial for both patients and medical staff. Positive ramifications of going paperless and heading towards digitization include an overall reduction in the cost of medical care, as well as increased global, real-time accessibility of these images. (http://www.vidar.com/film/images/stories/PDFs/newsroom/Digital%20Transition%20White%20Paper%20hi-res%20GFIN.pdf)
  • There is a standard called Digital Imaging and Communications in Medicine (DICOM) that is changing the medical world as we know it. DICOM is not only a system for taking high-quality images of the aforementioned internal organs, but is also helpful in processing those images. It is a universal system that incorporates image processing, sharing, and analysis for the convenience of patient comfort and understanding. This service is all-encompassing and is becoming a necessity.
In the field of technology, digital image processing has become more useful than analog image processing in light of modern technological advancement.
  • Image sharpening and restoration
    • Image sharpening and restoration is the processing of images captured by a modern camera to improve them or to manipulate them toward a desired result. It includes zooming, blurring, sharpening, grayscale-to-color conversion, image recovery, and image recognition (a minimal sharpening sketch is given after this list).
  • Facial recognition
    • Face recognition is a computer technology that determines the positions and sizes of human faces in arbitrary digital images. It detects facial features and ignores anything else, such as buildings, trees and bodies.
  • Remote sensing
    • Remote sensing is the small- or large-scale acquisition of information about an object or phenomenon, using recording or real-time sensing devices that are not in physical or close contact with the object. In practice, remote sensing is stand-off data collection using a variety of devices to gather information about a specific object or area.
  • Pattern recognition
    • Pattern recognition is a field of study closely tied to image processing. In pattern recognition, image processing is used to identify objects in images, and machine learning is then used to train a system on variations in a pattern. Pattern recognition is used in computer-aided diagnosis, handwriting recognition, image identification, and more.
  • Color processing
    • Color processing comprises the processing of color images and the various color spaces used. It also involves the study of how color images are transmitted, stored, and encoded.
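
As a concrete illustration of the sharpening operation listed above, the following is a minimal sketch in Python with NumPy; the 3×3 kernel weights and the small made-up image are illustrative only and are not tied to any particular camera or software.

import numpy as np

def filter2d(image, kernel):
    # Naive "same"-size 2-D filtering with zero padding (the kernel is symmetric,
    # so correlation and convolution give the same result here).
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Classic sharpening kernel: boost the centre pixel, subtract its four neighbours.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

image = np.array([[10, 10, 10, 10, 10],
                  [10, 50, 50, 50, 10],
                  [10, 50, 90, 50, 10],
                  [10, 50, 50, 50, 10],
                  [10, 10, 10, 10, 10]], dtype=float)

sharpened = np.clip(filter2d(image, sharpen), 0, 255)   # clamp to the 8-bit display range
print(sharpened)

The same filtering routine, with different kernels, also gives blurring or edge enhancement, which is why convolution is a workhorse of image sharpening and restoration.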

Theoretical application

Although theories are quickly becoming realities in today’s technological society, the range of possibilities for digital imaging is wide open. One major application that is still in the works is that of child safety and protection. How can we use digital imaging to better protect our kids? Kodak’s program, Kids Identification Digital Software (KIDS) may answer that question. The beginnings include a digital imaging kit to be used to compile student identification photos, which would be useful during medical emergencies and crimes. More powerful and advanced versions of applications such as these are still developing, with increased features constantly being tested and added.

But parents and schools aren’t the only ones who see benefits in databases such as these. Criminal investigation offices, such as police precincts, state crime labs, and even federal bureaus have realized the importance of digital imaging in analyzing fingerprints and evidence, making arrests, and maintaining safe communities. As the field of digital imaging evolves, so does our ability to protect the public.

Digital imaging can be closely related to the social presence theory especially when referring to the social media aspect of images captured by our phones. There are many different definitions of the social presence theory but two that clearly define what it is would be "the degree to which people are perceived as real" (Gunawardena, 1995), and "the ability to project themselves socially and emotionally as real people" (Garrison, 2000). Digital imaging allows one to manifest their social life through images in order to give the sense of their presence to the virtual world. The presence of those images acts as an extension of oneself to others, giving a digital representation of what it is they are doing and who they are with. Digital imaging in the sense of cameras on phones helps facilitate this effect of presence with friends on social media. Alexander (2012) states, "presence and representation is deeply engraved into our reflections on images...this is, of course, an altered presence...nobody confuses an image with the representation reality. But we allow ourselves to be taken in by that representation, and only that 'representation' is able to show the liveliness of the absentee in a believable way." Therefore, digital imaging allows ourselves to be represented in a way so as to reflect our social presence.

Photography is a medium used to capture specific moments visually. Through photography our culture has been given the chance to send information (such as appearance) with little or no distortion. The Media Richness Theory provides a framework for describing a medium’s ability to communicate information without loss or distortion. This theory has provided the chance to understand human behavior in communication technologies. The article written by Daft and Lengel (1984,1986) states the following:

Communication media fall along a continuum of richness. The richness of a medium comprises four aspects: the availability of instant feedback, which allows questions to be asked and answered; the use of multiple cues, such as physical presence, vocal inflection, body gestures, words, numbers and graphic symbols; the use of natural language, which can be used to convey an understanding of a broad set of concepts and ideas; and the personal focus of the medium (pp. 83).

The more a medium is able to communicate accurate appearance, social cues and other such characteristics, the richer it becomes. Photography has become a natural part of how we communicate. For example, most phones have the ability to send pictures in text messages. Apps such as Snapchat and Vine have become increasingly popular for communicating. Sites like Instagram and Facebook have also allowed users to reach a deeper level of richness because of their ability to reproduce information (Sheer, V. C. (January–March 2011). Teenagers’ use of MSN features, discussion topics, and online friendship development: the impact of media richness and communication control. Communication Quarterly, 59(1)).

Methods

A digital photograph may be created directly from a physical scene by a camera or similar device. Alternatively, a digital image may be obtained from another image in an analog medium, such as photographs, photographic film, or printed paper, by an image scanner or similar device. Many technical images—such as those acquired with tomographic equipment, side-scan sonar, or radio telescopes—are actually obtained by complex processing of non-image data. Weather radar maps as seen on television news are a commonplace example. The digitalization of analog real-world data is known as digitizing, and involves sampling (discretization) and quantization. Projectional imaging in digital radiography can be done with X-ray detectors that directly convert the image to digital format. Alternatively, in phosphor plate radiography the image is first recorded on a photostimulable phosphor (PSP) plate, which is subsequently scanned by a mechanism called photostimulated luminescence.
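
To make the sampling and quantization steps concrete, here is a minimal sketch in Python with NumPy; the sine-wave "signal", sample rate, and bit depth are arbitrary choices for illustration, not part of any particular imaging standard.

import numpy as np

def digitize(analog, sample_rate, duration, bits=8):
    # Sampling (discretization): evaluate the continuous signal at regular instants.
    t = np.arange(0, duration, 1.0 / sample_rate)
    samples = analog(t)
    # Quantization: map each sample onto one of 2**bits discrete levels.
    levels = 2 ** bits
    lo, hi = samples.min(), samples.max()
    codes = np.round((samples - lo) / (hi - lo) * (levels - 1))
    return t, codes.astype(np.uint8)

# Example: a 5 Hz signal sampled at 100 Hz for one second, stored as 8-bit values.
t, codes = digitize(lambda t: np.sin(2 * np.pi * 5 * t), sample_rate=100, duration=1.0)
print(codes[:10])   # first ten digitized sample values

A scanner or camera does the same thing in two dimensions: it samples the scene on a grid of pixels and quantizes each measurement to a fixed number of intensity levels.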

Finally, a digital image can also be computed from a geometric model or mathematical formula. In this case the name image synthesis is more appropriate, and the process is more often known as rendering.

Digital image authentication is an issue for the providers and producers of digital images such as health care organizations, law enforcement agencies and insurance companies. There are methods emerging in forensic photography to analyze a digital image and determine if it has been altered.

Digital imaging previously depended on chemical and mechanical processes; now all of these processes have been converted to electronics. A few things need to take place for digital imaging to occur. First, the light energy is converted to electrical energy: think of a grid with millions of little solar cells, each of which generates a specific electrical charge. The charges from each of these "solar cells" are transported and communicated to the firmware to be interpreted; the firmware is what understands and translates the color and other light qualities. Pixels come next: with varying intensities they create different colors, forming a picture or image. Finally, the firmware records the information for future and further reproduction.
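
A rough sketch of that pipeline, purely for illustration (the charge values and the gain factor here are invented, not taken from any real sensor or firmware):

import numpy as np

def charges_to_pixels(charges, gain=2.0, offset=0.0):
    # "Firmware" step: interpret raw photosite charges and record them
    # as displayable 8-bit pixel intensities.
    interpreted = charges * gain + offset
    return np.clip(interpreted, 0, 255).astype(np.uint8)

# A tiny 3x3 "sensor" of charge readings (arbitrary units).
charges = np.array([[ 10,  40,  90],
                    [ 60, 120,  70],
                    [ 20,  80, 130]], dtype=float)

print(charges_to_pixels(charges))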

Advantages

There are several benefits of digital imaging. First, the process enables easy access to photographs and word documents. Google is at the forefront of this ‘revolution,’ with its mission to digitize the world’s books. Such digitization will make the books searchable, thus making participating libraries, such as Stanford University and the University of California, Berkeley, accessible worldwide. Digital imaging also benefits the medical world because it “allows the electronic transmission of images to third-party providers, referring dentists, consultants, and insurance carriers via a modem”. The process “is also environmentally friendly since it does not require chemical processing”. Digital imaging is also frequently used to help document and record historical, scientific and personal life events.

Benefits also exist regarding photographs. Digital imaging will reduce the need for physical contact with original images. Furthermore, digital imaging creates the possibility of reconstructing the visual contents of partially damaged photographs, thus eliminating the potential that the original would be modified or destroyed. In addition, photographers will be “freed from being ‘chained’ to the darkroom,” will have more time to shoot and will be able to cover assignments more effectively. Digital imaging ‘means’ that “photographers no longer have to rush their film to the office, so they can stay on location longer while still meeting deadlines”.

Another advantage of digital photography is that it has been extended to camera phones. We are able to take cameras with us wherever we go, as well as send photos to others instantly. Camera phones are easy for people to use and also help the younger generation in the process of self-identification.

Drawbacks

Critics of digital imaging cite several negative consequences. An increased “flexibility in getting better quality images to the readers” will tempt editors, photographers and journalists to manipulate photographs. In addition, “staff photographers will no longer be photojournalists, but camera operators…as editors have the power to decide what they want ‘shot’”. Legal constraints, including copyright, pose another concern: will copyright infringement occur as documents are digitized and copying becomes easier?

Butane

From Wikipedia, the free encyclopedia ...