
Wednesday, February 16, 2022

Growth of photovoltaics

From Wikipedia, the free encyclopedia
 
Recent and estimated capacity (GWp)
Year-end            2016    2017    2018    2019    2020    2021E   2022F
Cumulative          306.5   403.3   512     630     774     957     1185
Annual new          76.8    99      109     118     144     183     228
Cumulative growth   32%     32%     27%     24%     23%     24%     24%
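
The "Cumulative growth" row is simply the year-over-year increase of the cumulative figures. A minimal Python check of that arithmetic (the 2021E and 2022F values are estimates; small deviations from the table come from rounding of the cumulative numbers):

```python
# Quick check of the "Cumulative growth" row: each entry is the
# year-over-year increase in cumulative capacity (GWp).
cumulative = {2016: 306.5, 2017: 403.3, 2018: 512, 2019: 630,
              2020: 774, 2021: 957, 2022: 1185}  # 2021/2022 values are estimates

years = sorted(cumulative)
for prev, curr in zip(years, years[1:]):
    growth = cumulative[curr] / cumulative[prev] - 1
    print(f"{curr}: {growth:.0%}")  # e.g. 2017: 32%, 2018: 27%
```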
Installed PV in watts per capita

Worldwide PV capacity in watts per capita by country in 2013 (map legend: none or unknown, 0.1–10, 10–100, 100–200, 200–400, and 400–600 watts per capita)
History of cumulative PV capacity worldwide

Exponential growth curve on a semi-log scale, appearing as a straight line since 1992

Grid parity for solar PV systems around the world (map: reached before 2014, reached after 2014, reached only for peak prices, or predicted for some U.S. states)

Added PV capacity by country in 2019 (by percent of world total, clustered by region)

  China (39.16%)
  Vietnam (9.23%)
  Japan (4.35%)
  South Korea (2.08%)
  India (3.29%)
  Australia (3.48%)
  United States (11.72%)
  Brazil (2.60%)
  Germany (3.76%)
  Netherlands (2.49%)
  Spain (2.24%)
  Poland (1.90%)
  Rest of Europe (6.22%)
  Rest of the World (7.56%)

Worldwide growth of photovoltaics has been close to exponential between 1992 and 2018. During this period of time, photovoltaics (PV), also known as solar PV, evolved from a niche market of small-scale applications to a mainstream electricity source.

When solar PV systems were first recognized as a promising renewable energy technology, subsidy programs, such as feed-in tariffs, were implemented by a number of governments in order to provide economic incentives for investments. For several years, growth was mainly driven by Japan and pioneering European countries. As a consequence, the cost of solar declined significantly due to experience curve effects like improvements in technology and economies of scale. Several national programs were instrumental in increasing PV deployment, such as the Energiewende in Germany, the Million Solar Roofs project in the United States, and China's 2011 five-year plan for energy production. Since then, deployment of photovoltaics has gained momentum on a worldwide scale, increasingly competing with conventional energy sources. In the early 21st century, a market for utility-scale plants emerged to complement rooftop and other distributed applications. By 2015, some 30 countries had reached grid parity.

Since the 1950s, when the first solar cells were commercially manufactured, there has been a succession of countries leading the world as the largest producer of electricity from solar photovoltaics. First it was the United States, then Japan, followed by Germany, and currently China.

By the end of 2018, global cumulative installed PV capacity reached about 512 gigawatts (GW), of which about 180 GW (35%) were utility-scale plants. Solar power supplied about 3% of global electricity demand in 2019. In 2018, solar PV contributed between 7% and 8% to the annual domestic consumption in Italy, Greece, Germany, and Chile. The largest penetration of solar power in electricity production is found in Honduras (14%). Solar PV contribution to electricity in Australia is edging towards 11%, while in the United Kingdom and Spain it is close to 4%. China and India moved above the world average of 2.55%, while, in descending order, the United States, South Korea, France and South Africa are below the world's average.

Projections for photovoltaic growth are difficult and burdened with many uncertainties. Official agencies, such as the International Energy Agency (IEA), have consistently increased their estimates for decades, while still falling far short of projecting actual deployment in every forecast. Bloomberg NEF projects global solar installations to grow in 2019, adding another 125–141 GW resulting in a total capacity of 637–653 GW by the end of the year. By 2050, the IEA foresees solar PV reaching 4.7 terawatts (4,674 GW) in its high-renewable scenario, of which more than half will be deployed in China and India, making solar power the world's largest source of electricity.

Solar PV nameplate capacity

Nameplate capacity denotes the peak power output of power stations in watts, prefixed as convenient, e.g. kilowatts (kW), megawatts (MW) and gigawatts (GW). Because power output for variable renewable sources is unpredictable, a source's average generation is generally significantly lower than the nameplate capacity. To estimate the average power output, the capacity can be multiplied by a suitable capacity factor, which takes varying conditions into account: weather, nighttime, latitude and maintenance. Worldwide, the average solar PV capacity factor is 11%. In addition, depending on context, the stated peak power may be prior to a subsequent conversion to alternating current, e.g. for a single photovoltaic panel, or may include this conversion and its loss for a grid-connected photovoltaic power station.
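
As a rough illustration of this capacity-factor arithmetic, the sketch below multiplies nameplate capacity by the worldwide average PV capacity factor of 11%. The function names and the 512 GW example figure are purely illustrative; actual yields vary strongly with location and technology.

```python
# A rough illustration: average output and annual energy implied by a
# nameplate capacity and a capacity factor (11% is the worldwide PV
# average quoted above; actual values depend on location and technology).

def average_output_gw(nameplate_gw, capacity_factor=0.11):
    """Estimated average power output in GW."""
    return nameplate_gw * capacity_factor

def annual_energy_twh(nameplate_gw, capacity_factor=0.11):
    """Estimated annual energy in TWh (average GW times 8760 hours / 1000)."""
    return average_output_gw(nameplate_gw, capacity_factor) * 8760 / 1000

print(average_output_gw(512))   # ~56 GW average output from 512 GW installed
print(annual_energy_twh(512))   # ~493 TWh per year
```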

Wind power has different characteristics, e.g. a higher capacity factor and about four times the 2015 electricity production of solar power. Compared with wind power, photovoltaic power production correlates well with power consumption for air-conditioning in warm countries. As of 2017 a handful of utilities have started combining PV installations with battery banks, thus obtaining several hours of dispatchable generation to help mitigate problems associated with the duck curve after sunset.

Current status

Worldwide

In 2017, photovoltaic capacity increased by 95 GW, with a 29% growth year-on-year of new installations. Cumulative installed capacity exceeded 401 GW by the end of the year, sufficient to supply 2.1 percent of the world's total electricity consumption.

Regions

As of 2018, Asia was the fastest growing region, with almost 75% of global installations. China alone accounted for more than half of worldwide deployment in 2017. In terms of cumulative capacity, Asia was the most developed region with more than half of the global total of 401 GW in 2017. Europe continued to decline as a percentage of the global PV market. In 2017, Europe represented 28% of global capacity, the Americas 19% and the Middle East 2%. However, with respect to per capita installation, the European Union has more than twice the capacity of China and 25% more than the US.

Solar PV covered 3.5% of European electricity demand and 7% of European peak electricity demand in 2014.

Countries and territories

Worldwide growth of photovoltaics is extremely dynamic and varies strongly by country. The top installers of 2019 were China, the United States, and India. There are 37 countries around the world with a cumulative PV capacity of more than one gigawatt. The available solar PV capacity in Honduras is sufficient to supply 14.8% of the nation's electrical power while 8 countries can produce between 7% and 9% of their respective domestic electricity consumption.

PV capacity growth in China
 
Growth of PV in Europe 1992-2014

History of leading countries

The United States was the leader of installed photovoltaics for many years, and its total capacity was 77 megawatts in 1996, more than any other country in the world at the time. From the late 1990s, Japan was the world's leader of solar electricity production until 2005, when Germany took the lead and by 2016 had a capacity of over 40 gigawatts. In 2015, China surpassed Germany to become the world's largest producer of photovoltaic power, and in 2017 became the first country to surpass 100 GW of installed capacity.

United States (1954–1996)

The United States, where modern solar PV was invented, led installed capacity for many years. Based on preceding work by Swedish and German engineers, the American engineer Russell Ohl at Bell Labs patented the first modern solar cell in 1946. It was also there at Bell Labs where the first practical c-silicon cell was developed in 1954. Hoffman Electronics, the leading manufacturer of silicon solar cells in the 1950s and 1960s, improved on the cell's efficiency, produced solar radios, and equipped Vanguard I, the first solar powered satellite launched into orbit in 1958.

In 1977, US President Jimmy Carter installed solar hot water panels on the White House (later removed by President Reagan) to promote solar energy, and the National Renewable Energy Laboratory, originally named the Solar Energy Research Institute, was established at Golden, Colorado. In the 1980s and early 1990s, most photovoltaic modules were used in stand-alone power systems or powered consumer products such as watches, calculators and toys, but from around 1995, industry efforts focused increasingly on developing grid-connected rooftop PV systems and power stations. By 1996, solar PV capacity in the US amounted to 77 megawatts, more than any other country in the world at the time. Then, Japan moved ahead.

Japan (1997–2004)

Japan took the lead as the world's largest producer of PV electricity, after the city of Kobe was hit by the Great Hanshin earthquake in 1995. Kobe experienced severe power outages in the aftermath of the earthquake, and PV systems were then considered as a temporary supplier of power during such events, as the disruption of the electric grid paralyzed the entire infrastructure, including gas stations that depended on electricity to pump gasoline. Moreover, in December of that same year, an accident occurred at the multibillion-dollar experimental Monju Nuclear Power Plant. A sodium leak caused a major fire and forced a shutdown (classified as INES 1). There was massive public outrage when it was revealed that the semigovernmental agency in charge of Monju had tried to cover up the extent of the accident and resulting damage. Japan remained world leader in photovoltaics until 2004, when its capacity amounted to 1,132 megawatts. Then, focus on PV deployment shifted to Europe.

Germany (2005–2014)

In 2005, Germany took the lead from Japan. With the introduction of the Renewable Energy Act in 2000, feed-in tariffs were adopted as a policy mechanism. This policy established that renewables have priority on the grid, and that a fixed price must be paid for the produced electricity over a 20-year period, providing a guaranteed return on investment irrespective of actual market prices. As a consequence, a high level of investment security led to a soaring number of new photovoltaic installations, which peaked in 2011, while investment costs in renewable technologies were brought down considerably. In 2016, Germany's installed PV capacity was over the 40 GW mark.

China (2015–present)

China surpassed Germany's capacity by the end of 2015, becoming the world's largest producer of photovoltaic power. China's rapid PV growth continued in 2016, with 34.2 GW of solar photovoltaics installed. Rapidly falling feed-in tariff rates at the end of 2015 motivated many developers to secure tariff rates before mid-2016, as they correctly anticipated further cuts. During the course of the year, China announced its goal of installing 100 GW during the next Chinese Five Year Economic Plan (2016–2020). China expected to spend ¥1 trillion ($145B) on solar construction during that period. Much of China's PV capacity was built in the relatively less populated west of the country, whereas the main centres of power consumption were in the east (such as Shanghai and Beijing). Due to a lack of adequate power transmission lines to carry the power from the solar power plants, China had to curtail its PV-generated power.

History of market development

Prices and costs (1977–present)

Swanson's law – the PV learning curve
 
Price decline of c-Si solar cells
 
Type of cell or module Price per Watt
Multi-Si Cell (≥18.6%) $0.071
Mono-Si Cell (≥20.0%) $0.090
G1 Mono-Si Cell (>21.7%) $0.099
M6 Mono-Si Cell (>21.7%) $0.100
275W - 280W (60P) Module $0.176
325W - 330W (72P) Module $0.188
305W - 310W Module $0.240
315W - 320W Module $0.190
>325W - >385W Module $0.200
Source: EnergyTrend, price quotes, average prices, 13 July 2020 

The average price per watt dropped drastically for solar cells in the decades leading up to 2017. While in 1977 prices for crystalline silicon cells were about $77 per watt, average spot prices in August 2018 were as low as $0.13 per watt, nearly 600 times lower than forty years earlier. Prices for thin-film solar cells and for c-Si solar panels were around $0.60 per watt. Module and cell prices declined even further after 2014 (see price quotes in table).

This price trend was seen as evidence supporting Swanson's law (an observation similar to the famous Moore's law), which states that the per-watt cost of solar cells and panels falls by 20 percent for every doubling of cumulative photovoltaic production. A 2015 study showed price/kWh dropping by 10% per year since 1980, and predicted that solar could contribute 20% of total electricity consumption by 2030.
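
A hedged sketch of the relationship Swanson's law describes: a 20% price reduction per doubling of cumulative production. The $77/W starting price is taken from the paragraph above, but the production volumes here are purely illustrative placeholders, not the actual industry history.

```python
import math

# Swanson's law as stated above: price falls ~20% per doubling of
# cumulative production (learning rate 0.20). Volumes below are
# illustrative placeholders, not the real production history.

def swanson_price(initial_price, initial_volume, current_volume, learning_rate=0.20):
    doublings = math.log2(current_volume / initial_volume)
    return initial_price * (1 - learning_rate) ** doublings

# Ten doublings from an initial price of $77/W (the 1977 figure above):
print(round(swanson_price(77.0, 1, 2**10), 2))  # about $8.27 per watt
```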

In its 2014 edition of the Technology Roadmap: Solar Photovoltaic Energy report, the International Energy Agency (IEA) published prices for residential, commercial and utility-scale PV systems for eight major markets as of 2013 (see table below). However, DOE's SunShot Initiative report states lower prices than the IEA report, although both reports were published at the same time and referred to the same period. After 2014 prices fell further. For 2014, the SunShot Initiative modeled U.S. system prices to be in the range of $1.80 to $3.29 per watt. Other sources identified similar price ranges of $1.70 to $3.50 for the different market segments in the U.S. In the highly penetrated German market, prices for residential and small commercial rooftop systems of up to 100 kW declined to $1.36 per watt (€1.24/W) by the end of 2014. In 2015, Deutsche Bank estimated costs for small residential rooftop systems in the U.S. around $2.90 per watt. Costs for utility-scale systems in China and India were estimated as low as $1.00 per watt.

Typical PV system prices in 2013 in selected countries (USD/W)
               Australia  China  France  Germany  Italy  Japan  United Kingdom  United States
Residential    1.8        1.5    4.1     2.4      2.8    4.2    2.8             4.9¹
Commercial     1.7        1.4    2.7     1.8      1.9    3.6    2.4             4.5¹
Utility-scale  2.0        1.4    2.2     1.4      1.5    2.9    1.9             3.3¹
Source: IEA – Technology Roadmap: Solar Photovoltaic Energy report, September 2014
¹ U.S. figures are lower in DOE's Photovoltaic System Pricing Trends

According to the International Renewable Energy Agency, a "sustained, dramatic decline" in utility-scale solar PV electricity cost driven by lower solar PV module and system costs continued in 2018, with global weighted average levelized cost of energy of solar PV falling to US$0.085 per kilowatt-hour, or 13% lower than projects commissioned the previous year, resulting in a decline from 2010 to 2018 of 77%.

Technologies (1990–present)

Market-share of PV technologies since 1990

There were significant advances in conventional crystalline silicon (c-Si) technology in the years leading up to 2017. The falling price of polysilicon since 2009, which followed a period of severe shortage of silicon feedstock (see below), increased pressure on manufacturers of commercial thin-film PV technologies, including amorphous thin-film silicon (a-Si), cadmium telluride (CdTe), and copper indium gallium diselenide (CIGS), and led to the bankruptcy of several thin-film companies that had once been highly touted. The sector faced price competition from Chinese crystalline silicon cell and module manufacturers, and some companies, together with their patents, were sold below cost.

Global PV market by technology in 2013.

  CdTe (5.1%)
  a-Si (2.0%)
  CIGS (2.0%)
  mono-Si (36.0%)
  multi-Si (54.9%)

In 2013 thin-film technologies accounted for about 9 percent of worldwide deployment, while 91 percent was held by crystalline silicon (mono-Si and multi-Si). With 5 percent of the overall market, CdTe held more than half of the thin-film market, leaving 2 percent each to CIGS and amorphous silicon.

Copper indium gallium selenide (CIGS) is the name of the semiconductor material on which the technology is based. One of the largest producers of CIGS photovoltaics in 2015 was the Japanese company Solar Frontier, with manufacturing capacity on the gigawatt scale. Their CIS line technology included modules with conversion efficiencies of over 15%. The company profited from the booming Japanese market and attempted to expand its international business. However, several prominent manufacturers could not keep up with the advances in conventional crystalline silicon technology. The company Solyndra ceased all business activity and filed for Chapter 11 bankruptcy in 2011, and Nanosolar, also a CIGS manufacturer, closed its doors in 2013. Although both companies produced CIGS solar cells, it has been pointed out that the failures were not due to the technology but rather to the companies themselves, which used flawed architectures such as Solyndra's cylindrical substrates.
The U.S. company First Solar, a leading manufacturer of CdTe, built several of the world's largest solar power stations, such as the Desert Sunlight Solar Farm and Topaz Solar Farm, both in the Californian desert with 550 MW capacity each, as well as the 102 MW(AC) Nyngan Solar Plant in Australia (the largest PV power station in the Southern Hemisphere at the time), commissioned in mid-2015. The company was reported in 2013 to be successfully producing CdTe panels with a steadily increasing efficiency and declining cost per watt. CdTe had the lowest energy payback time of all mass-produced PV technologies, which could be as short as eight months in favorable locations. The company Abound Solar, also a manufacturer of cadmium telluride modules, went bankrupt in 2012.
In 2012, ECD Solar, once one of the world's leading manufacturers of amorphous silicon (a-Si) technology, filed for bankruptcy in Michigan, United States. Swiss OC Oerlikon divested its solar division, which produced a-Si/μc-Si tandem cells, to Tokyo Electron Limited. Other companies that left the amorphous silicon thin-film market include DuPont, BP, Flexcell, Inventux, Pramac, Schuco, Sencera, EPV Solar, NovaSolar (formerly OptiSolar) and Suntech Power, which stopped manufacturing a-Si modules in 2010 to focus on crystalline silicon solar panels. In 2013, Suntech filed for bankruptcy in China.

Silicon shortage (2005–2008)

Polysilicon prices since 2004. As of July 2020, the ASP for polysilicon stands at $6.956/kg

In the early 2000s, prices for polysilicon, the raw material for conventional solar cells, were as low as $30 per kilogram and silicon manufacturers had no incentive to expand production.

However, there was a severe silicon shortage in 2005, when governmental programmes caused a 75% increase in the deployment of solar PV in Europe. In addition, the demand for silicon from semiconductor manufacturers was growing. Since the amount of silicon needed for semiconductors makes up a much smaller portion of production costs, semiconductor manufacturers were able to outbid solar companies for the available silicon in the market.

Initially, the incumbent polysilicon producers were slow to respond to rising demand for solar applications, because of their painful experience with over-investment in the past. Silicon prices rose sharply to about $80 per kilogram, and reached as much as $400/kg for long-term contracts and spot prices. In 2007, the constraints on silicon became so severe that the solar industry was forced to idle about a quarter of its cell and module manufacturing capacity, an estimated 777 MW of the then available production capacity. The shortage also provided silicon specialists with both the cash and an incentive to develop new technologies, and several new producers entered the market. Early responses from the solar industry focused on improvements in the recycling of silicon; when this potential was exhausted, companies took a harder look at alternatives to the conventional Siemens process.

As it takes about three years to build a new polysilicon plant, the shortage continued until 2008. Prices for conventional solar cells remained constant or even rose slightly during the period of silicon shortage from 2005 to 2008. This is notably seen as a "shoulder" that sticks out in the Swanson's PV-learning curve and it was feared that a prolonged shortage could delay solar power becoming competitive with conventional energy prices without subsidies.

In the meantime, the solar industry lowered the number of grams per watt by reducing wafer thickness and kerf loss, increasing yields in each manufacturing step, reducing module loss, and raising panel efficiency. Finally, the ramp-up of polysilicon production relieved worldwide markets from the scarcity of silicon in 2009 and subsequently led to overcapacity with sharply declining prices in the photovoltaic industry in the following years.

Solar overcapacity (2009–2013)

As the polysilicon industry had started to build additional large production capacities during the shortage period, prices dropped as low as $15 per kilogram forcing some producers to suspend production or exit the sector. Prices for silicon stabilized around $20 per kilogram and the booming solar PV market helped to reduce the enormous global overcapacity from 2009 onwards. However, overcapacity in the PV industry continued to persist. In 2013, global record deployment of 38 GW (updated EPIA figure) was still much lower than China's annual production capacity of approximately 60 GW. Continued overcapacity was further reduced by significantly lowering solar module prices and, as a consequence, many manufacturers could no longer cover costs or remain competitive. As worldwide growth of PV deployment continued, the gap between overcapacity and global demand was expected in 2014 to close in the next few years.

IEA-PVPS published in 2014 historical data for the worldwide utilization of solar PV module production capacity that showed a slow return to normalization in manufacture in the years leading up to 2014. The utilization rate is the ratio of actual production output to production capacity for a given year. A low of 49% was reached in 2007 and reflected the peak of the silicon shortage that idled a significant share of the module production capacity. As of 2013, the utilization rate had recovered somewhat and increased to 63%.

Anti-dumping duties (2012–present)

After anti-dumping petitions were filed and investigations carried out, the United States imposed tariffs of 31 percent to 250 percent on solar products imported from China in 2012. A year later, the EU also imposed definitive anti-dumping and anti-subsidy measures on imports of solar panels from China, at an average of 47.7 percent for a two-year time span.

Shortly thereafter, China, in turn, levied duties on U.S. polysilicon imports, the feedstock for the production of solar cells. In January 2014, the Chinese Ministry of Commerce set its anti-dumping tariff on U.S. polysilicon producers, such as Hemlock Semiconductor Corporation, to 57%, while other major polysilicon producing companies, such as German Wacker Chemie and Korean OCI, were much less affected. All this caused controversy between proponents and opponents of the duties and was the subject of debate.

History of deployment

2016-2020 development of the Bhadla Solar Park (India), documented on Sentinel-2 satellite imagery

Deployment figures on a global, regional and nationwide scale are well documented since the early 1990s. While worldwide photovoltaic capacity grew continuously, deployment figures by country were much more dynamic, as they depended strongly on national policies. A number of organizations release comprehensive reports on PV deployment on a yearly basis. They include annual and cumulative deployed PV capacity, typically given in watt-peak, a break-down by markets, as well as in-depth analysis and forecasts about future trends.

Timeline of the largest PV power stations in the world
Year(a)  Name of PV power station                Country        Capacity (MW)
1982     Lugo                                    United States  1
1985     Carrisa Plain                           United States  5.6
2005     Bavaria Solarpark (Mühlhausen)          Germany        6.3
2006     Erlasee Solar Park                      Germany        11.4
2008     Olmedilla Photovoltaic Park             Spain          60
2010     Sarnia Photovoltaic Power Plant         Canada         97
2011     Huanghe Hydropower Golmud Solar Park    China          200
2012     Agua Caliente Solar Project             United States  290
2014     Topaz Solar Farm(b)                     United States  550
2015     Longyangxia Dam Solar Park              China          850
2016     Tengger Desert Solar Park               China          1547
2019     Pavagada Solar Park                     India          2050
2020     Bhadla Solar Park                       India          2245
Also see list of photovoltaic power stations and list of noteworthy solar parks
(a) year of final commissioning  (b) capacity given in MW(AC), otherwise in MW(DC)

Worldwide annual deployment

Due to the exponential nature of PV deployment, most of the overall capacity has been installed in the years leading up to 2017 (see pie chart). Since the 1990s, each year has been a record-breaking year in terms of newly installed PV capacity, except for 2012. Contrary to some earlier predictions, early 2017 forecasts were that 85 gigawatts would be installed in 2017. Near end-of-year figures, however, raised estimates to 95 GW for 2017 installations.

Worldwide cumulative

Worldwide cumulative PV capacity on a semi-log chart since 1992

Worldwide growth of solar PV capacity was an exponential curve between 1992 and 2017. Tables below show global cumulative nominal capacity by the end of each year in megawatts, and the year-to-year increase in percent. In 2014, global capacity was expected to grow by 33 percent from 139 to 185 GW. This corresponded to an exponential growth rate of 29 percent, or about 2.4 years for the worldwide PV capacity at the time to double: with P(t) = P0 · e^(rt), P0 = 139 GW and growth rate r = 0.29 per year, the doubling time is ln(2)/r ≈ 2.4 years.
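
The growth-rate arithmetic above can be reproduced directly from P(t) = P0·e^(rt); a minimal check in Python:

```python
import math

# Reproducing the arithmetic above: capacity growing from 139 GW to
# 185 GW in one year under P(t) = P0 * exp(r * t).
P0, P1 = 139.0, 185.0
r = math.log(P1 / P0)              # continuous growth rate per year
doubling_time = math.log(2) / r

print(round(r, 2))                 # 0.29
print(round(doubling_time, 1))     # 2.4 years
```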

Deployment by country

See section Forecast for projected photovoltaic deployment in 2017
Grid parity for solar PV systems around the world (map: reached before 2014, reached after 2014, reached only for peak prices, or predicted for some U.S. states). Source: Deutsche Bank, as of February 2015

Population ecology

From Wikipedia, the free encyclopedia
 
Map of population trends of native and invasive species of jellyfish (legend: increase with high certainty, increase with low certainty, stable/variable, decrease, no data)

Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, for example through birth and death rates and through immigration and emigration.

The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.

History

In the 1940s ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge") refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), so that there were four subdivisions of ecology.

Terminology

A population is defined as a group of interacting organisms of the same species. Populations are often quantified by their demographic structure. The total number of individuals in a population is the population size, and the number of individuals per unit area is the population density. A population also has a geographic range, whose limits are set by the conditions the species can tolerate (such as temperature).

Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Birth, death, emigration, and immigration rates all play a significant role in the growth rate. The maximum per capita growth rate for a population is known as the intrinsic rate of increase.

The carrying capacity of a population is the maximum population size of the species that the environment can sustain, which is determined by the resources available. In many classic population models, r represents the intrinsic growth rate, K is the carrying capacity, and N0 is the initial population size.

Terms used to describe natural groups of individuals in ecological studies
Term Definition
Species population All individuals of a species.
Metapopulation A set of spatially disjunct populations, among which there is some migration.
Population A group of conspecific individuals that is demographically, genetically, or spatially disjunct from other groups of individuals.
Aggregation A spatially clustered group of individuals.
Deme A group of individuals more genetically similar to each other than to other individuals, usually with some degree of spatial isolation as well.
Local population A group of individuals within an investigator-delimited area smaller than the geographic range of the species and often within a population (as defined above). A local population could be a disjunct population as well.
Subpopulation An arbitrary spatially delimited subset of individuals from within a population (as defined above).
Immigration The number of individuals that join a population over time.
Emigration The number of individuals that leave a population over time.

Population dynamics

The development of population ecology owes much to the mathematical models known as population dynamics, which were originally formulae derived from demography at the end of the 18th and beginning of 19th century.

The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (ceteris paribus), a population will grow (or decline) exponentially. This principle provided the basis for subsequent predictive theories, such as the demographic studies of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model.

A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations.

Exponential vs. Logistic Growth

When describing growth models, there are two types of models that can be used: exponential and logistic.

When the per capita rate of increase takes the same positive value regardless of population size, then it shows exponential growth.

When the per capita rate of increase decreases as the population increases towards a maximum limit, then the graph shows logistic growth.
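
A minimal sketch, using the symbols introduced earlier in this section (r, K, N0), of how the two growth models differ numerically; the parameter values are arbitrary illustrations:

```python
import math

# Exponential growth: dN/dt = r*N, giving N(t) = N0 * exp(r*t).
def exponential(N0, r, t):
    return N0 * math.exp(r * t)

# Logistic growth: dN/dt = r*N*(1 - N/K); closed-form solution below.
def logistic(N0, r, K, t):
    return K / (1 + ((K - N0) / N0) * math.exp(-r * t))

# Illustrative parameters: N0 = 10, r = 0.5 per unit time, K = 100.
for t in range(0, 11, 2):
    print(t, round(exponential(10, 0.5, t)), round(logistic(10, 0.5, 100, t)))
# The exponential curve keeps accelerating; the logistic curve levels
# off as the population approaches the carrying capacity K.
```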

Fisheries and wildlife management

In fisheries and wildlife management, population is affected by three dynamic rate functions.

  • Natality or birth rate, often recruitment, which means reaching a certain size or reproductive stage. Usually refers to the age a fish can be caught and counted in nets.
  • Population growth rate, which measures the growth of individuals in size and length. More important in fisheries, where population is often measured in biomass.
  • Mortality, which includes harvest mortality and natural mortality. Natural mortality includes non-human predation, disease and old age.

If N1 is the number of individuals at time 1, then

N1 = N0 + B − D + I − E

where N0 is the number of individuals at time 0, B is the number of individuals born, D the number that died, I the number that immigrated, and E the number that emigrated between time 0 and time 1.

If we measure these rates over many time intervals, we can determine how a population's density changes over time. Immigration and emigration are present, but are usually not measured.
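
A small bookkeeping sketch of this balance equation applied over several successive intervals; all numbers are hypothetical:

```python
# Hypothetical bookkeeping with the balance equation N1 = N0 + B - D + I - E,
# applied over three successive time intervals.
intervals = [
    # (births, deaths, immigrants, emigrants) in each interval
    (120, 80, 10, 5),
    (130, 95, 8, 12),
    (110, 100, 15, 9),
]

N = 1000  # initial population size (hypothetical)
for B, D, I, E in intervals:
    N = N + B - D + I - E
    print(N)  # 1045, 1076, 1092
```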

All of these are measured to determine the harvestable surplus, which is the number of individuals that can be harvested from a population without affecting long-term population stability or average population size. The harvest within the harvestable surplus is termed "compensatory" mortality, where the harvest deaths are substituted for the deaths that would have occurred naturally. Harvest above that level is termed "additive" mortality, because it adds to the number of deaths that would have occurred naturally. These terms are not necessarily judged as "good" and "bad," respectively, in population management. For example, a fish & game agency might aim to reduce the size of a deer population through additive mortality. Bucks might be targeted to increase buck competition, or does might be targeted to reduce reproduction and thus overall population size.

For the management of many fish and other wildlife populations, the goal is often to achieve the largest possible long-run sustainable harvest, also known as maximum sustainable yield (or MSY). Given a population dynamic model, such as any of the ones above, it is possible to calculate the population size that produces the largest harvestable surplus at equilibrium. While the use of population dynamic models along with statistics and optimization to set harvest limits for fish and game is controversial among some scientists, it has been shown to be more effective than the use of human judgment in computer experiments where both incorrect models and natural resource management students competed to maximize yield in two hypothetical fisheries. To give an example of a non-intuitive result, fisheries produce more fish when there is a nearby refuge from human predation in the form of a nature reserve, resulting in higher catches than if the whole area was open to fishing.
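
For the logistic model introduced earlier, the harvestable surplus at population size N is r·N·(1 − N/K), which is maximized at N = K/2 and there equals r·K/4, the maximum sustainable yield. A hedged sketch with illustrative parameters:

```python
# Surplus production for the logistic model: r*N*(1 - N/K), maximized
# at N = K/2, where it equals the maximum sustainable yield r*K/4.
def surplus_production(N, r, K):
    return r * N * (1 - N / K)

r, K = 0.5, 10_000        # illustrative parameters
N_msy = K / 2
print(N_msy, surplus_production(N_msy, r, K))  # 5000.0 1250.0
```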

r/K selection

At its most elementary level, interspecific competition involves two species utilizing a similar resource. It rapidly gets more complicated, but stripping the phenomenon of all its complications, this is the basic principle: two consumers consuming the same resource.

An important concept in population ecology is the r/K selection theory. For example, an animal may produce many offspring or only a few, and may invest a lot of effort or very little effort in each offspring; these are all examples of trade-offs. In order for species to thrive, they must balance such trade-offs, leading to a distinction between r-selected and K-selected species.

The first variable is r (the intrinsic rate of natural increase in population size, density independent) and the second variable is K (the carrying capacity of a population, density dependent). An r-selected species (e.g., many kinds of insects, such as aphids) is one that has high rates of fecundity, low levels of parental investment in the young, and high rates of mortality before individuals reach maturity. Evolution favors productivity in r-selected species.

In contrast, a K-selected species (such as humans) has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Evolution in K-selected species favors efficiency in the conversion of more resources into fewer offspring. K-selected species generally experience stronger competition, where populations generally live near carrying capacity. These species have heavy investment in offspring, resulting in longer lived organisms, and longer period of maturation. Offspring of K-selected species generally have a higher probability of survival, due to heavy parental care and nurturing.

Top-Down and Bottom-Up Controls

Top-Down Controls

In some populations, organisms in lower trophic levels are controlled by organisms at the top. This is known as top-down control.

For example, the presence of top carnivores keeps herbivore populations in check. If there were no top carnivores in the ecosystem, herbivore populations would rapidly increase, leading to all plants being eaten. This ecosystem would eventually collapse.

Bottom-Up Controls

Bottom-up controls, on the other hand, are driven by producers in the ecosystem. If plant populations change, then the populations of all species above them are affected.

For example, if plant populations decreased significantly, the herbivore populations would decrease, which would lead to a carnivore population decreasing too. Therefore, if all of the plants disappeared, then the ecosystem would collapse. Another example would be if there were too many plants available, then two herbivore populations may compete for the same food. The competition would lead to an eventual removal of one population.

Do all ecosystems have to be either top-down or bottom-up?

An ecosystem does not have to be either top-down or bottom-up. There are occasions where an ecosystem could be bottom-up sometimes, such as a marine ecosystem, but then have periods of top-down control due to fishing.

Survivorship curves

Survivorship curves show the proportion of a population surviving to each age. They make it possible to compare generations, populations, or even different species.

Humans and most other mammals have a type I survivorship because death occurs in older years. Typically, Type I survivorship curves characterize K-selected species.

Type II survivorship shows that death at any age is equally probable.

Type III curves indicate that few individuals survive their younger years, but after a certain age, individuals are much more likely to survive. Type III survivorship typically characterizes r-selected species.

Metapopulation

Populations are also studied and conceptualized through the "metapopulation" concept. The metapopulation concept was introduced in 1969:

"as a population of populations which go extinct locally and recolonize."

Metapopulation ecology simplifies the landscape into patches of varying levels of quality. Patches are either occupied or they are not. Migrants moving among the patches are structured into metapopulations either as sources or sinks. Source patches are productive sites that generate a seasonal supply of migrants to other patch locations. Sink patches are unproductive sites that only receive migrants. In metapopulation terminology there are emigrants (individuals that leave a patch) and immigrants (individuals that move into a patch). Metapopulation models examine patch dynamics over time to answer questions about spatial and demographic ecology. An important concept in metapopulation ecology is the rescue effect, where small patches of lower quality (i.e., sinks) are maintained by a seasonal influx of new immigrants. Metapopulation structure changes from year to year: some patches are sinks in dry years and become sources when conditions are more favorable. Ecologists utilize a mixture of computer models and field studies to explain metapopulation structure.

Journals

The first journal publication of the Society of Population Ecology, titled Population Ecology (originally called Researches on Population Ecology), was released in 1952.

Scientific articles on population ecology can also be found in the Journal of Animal Ecology, Oikos and other journals.

Tuesday, February 15, 2022

Enactivism

From Wikipedia, the free encyclopedia

Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ...this domain does not exist "out there" in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198).  "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems...participate in the generation of meaning ...engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.

The term 'enactivism' is close in meaning to 'enaction', defined as "the manner in which a subject of perception creatively matches its actions to the requirements of its situation". The introduction of the term enaction in this context is attributed to Francisco Varela, Evan Thompson, and Eleanor Rosch in The Embodied Mind (1991), who proposed the name to "emphasize the growing conviction that cognition is not the representation of a pre-given world by a pre-given mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs". This was further developed by Thompson and others, to place emphasis upon the idea that experience of the world is a result of mutual interaction between the sensorimotor capacities of the organism and its environment. However, some writers maintain that there remains a need for some degree of the mediating function of representation in this new approach to the science of the mind.

The initial emphasis of enactivism upon sensorimotor skills has been criticized as "cognitively marginal", but it has been extended to apply to higher level cognitive activities, such as social interactions. "In the enactive view,... knowledge is constructed: it is constructed by an agent through its sensorimotor interactions with its environment, co-constructed between and within living species through their meaningful interaction with each other. In its most abstract form, knowledge is co-constructed between human individuals in socio-linguistic interactions...Science is a particular form of social knowledge construction...[that] allows us to perceive and predict events beyond our immediate cognitive grasp...and also to construct further, even more powerful scientific knowledge."

Enactivism is closely related to situated cognition and embodied cognition, and is presented as an alternative to cognitivism, computationalism, and Cartesian dualism.

Philosophical aspects

Enactivism is one of a cluster of related theories sometimes known as the 4Es. As described by Mark Rowlands, mental processes are:

  • Embodied: involving more than the brain, including a more general involvement of bodily structures and processes.
  • Embedded: functioning only in a related external environment.
  • Enacted: involving not only neural processes, but also things an organism does.
  • Extended: reaching into the organism's environment.

Enactivism proposes an alternative to dualism as a philosophy of mind, in that it emphasises the interactions between mind, body and the environment, seeing them all as inseparably intertwined in mental processes. The self arises as part of the process of an embodied entity interacting with the environment in precise ways determined by its physiology. In this sense, individuals can be seen to "grow into" or arise from their interactive role with the world.

"Enaction is the idea that organisms create their own experience through their actions. Organisms are not passive receivers of input from the environment, but are actors in the environment such that what they experience is shaped by how they act."

In The Tree of Knowledge, Maturana & Varela proposed the term enactive "to evoke the view of knowledge that what is known is brought forth, in contraposition to the more classical views of either cognitivism or connectionism." They see enactivism as providing a middle ground between the two extremes of representationalism and solipsism. They seek to "confront the problem of understanding how our existence - the praxis of our living - is coupled to a surrounding world which appears filled with regularities that are at every instant the result of our biological and social histories.... to find a via media: to understand the regularity of the world we are experiencing at every moment, but without any point of reference independent of ourselves that would give certainty to our descriptions and cognitive assertions. Indeed the whole mechanism of generating ourselves, as describers and observers tells us that our world, as the world which we bring forth in our coexistence with others, will always have precisely that mixture of regularity and mutability, that combination of solidity and shifting sand, so typical of human experience when we look at it up close." [Tree of Knowledge, p. 241]

Another important notion relating to enactivism is autopoiesis. The word refers to a system that is able to reproduce and maintain itself. Maturana & Varela write that "This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems". Using the term autopoiesis, they argue that any closed system that has autonomy, self-reference and self-construction (or, that has autopoietic activities) has cognitive capacities. Therefore, cognition is present in all living systems. This view is also called autopoietic enactivism.

Radical enactivism is another enactivist view of cognition. Radical enactivists often adopt a thoroughly non-representational, enactive account of basic cognition. Basic cognitive capacities mentioned by Hutto and Myin include perceiving, imagining and remembering. They argue that those forms of basic cognition can be explained without positing mental representations. With regard to complex forms of cognition such as language, they think mental representations are needed, because explanations of content are needed there. In human beings' public practices, they claim that "such intersubjective practices and sensitivity to the relevant norms comes with the mastery of the use of public symbol systems" (2017, p. 120), and so "as it happens, this appears only to have occurred in full form with construction of sociocultural cognitive niches in the human lineage" (2017, p. 134). They conclude that basic cognition, as well as cognition in simple organisms such as bacteria, is best characterized as non-representational.

Enactivism also addresses the hard problem of consciousness, referred to by Thompson as part of the explanatory gap in explaining how consciousness and subjective experience are related to brain and body. "The problem with the dualistic concepts of consciousness and life in standard formulations of the hard problem is that they exclude each other by construction". Instead, according to Thompson's view of enactivism, the study of consciousness or phenomenology as exemplified by Husserl and Merleau-Ponty is to complement science and its objectification of the world. "The whole universe of science is built upon the world as directly experienced, and if we want to subject science itself to rigorous scrutiny and arrive at a precise assessment of its meaning and scope, we must begin by reawakening the basic experience of the world of which science is the second-order expression" (Merleau-Ponty, The phenomenology of perception as quoted by Thompson, p. 165). In this interpretation, enactivism asserts that science is formed or enacted as part of humankind's interactivity with its world, and by embracing phenomenology "science itself is properly situated in relation to the rest of human life and is thereby secured on a sounder footing."

Enaction has been seen as a move to conjoin representationalism with phenomenalism, that is, as adopting a constructivist epistemology, an epistemology centered upon the active participation of the subject in constructing reality. However, 'constructivism' focuses upon more than a simple 'interactivity' that could be described as a minor adjustment to 'assimilate' reality or 'accommodate' to it. Constructivism looks upon interactivity as a radical, creative, revisionist process in which the knower constructs a personal 'knowledge system' based upon their experience and tested by its viability in practical encounters with their environment. Learning is a result of perceived anomalies that produce dissatisfaction with existing conceptions.

Shaun Gallagher also points out that pragmatism is a forerunner of enactive and extended approaches to cognition. According to him, enactive conceptions of cognition can be found in many pragmatists such as Charles Sanders Peirce and John Dewey. For example, Dewey says that "The brain is essentially an organ for effecting the reciprocal adjustment to each other of the stimuli received from the environment and responses directed upon it" (1916, pp. 336–337). This view is fully consistent with enactivist arguments that cognition is not just a matter of brain processes and that the brain is only one part of a body engaged in dynamical regulation. Robert Brandom, a neo-pragmatist, comments that "A founding idea of pragmatism is that the most fundamental kind of intentionality (in the sense of directedness towards objects) is the practical involvement with objects exhibited by a sentient creature dealing skillfully with its world" (2008, p. 178).

How does constructivism relate to enactivism? From the above remarks it can be seen that Glasersfeld expresses an interactivity between the knower and the known quite acceptable to an enactivist, but does not emphasize the structured probing of the environment by the knower that leads to the "perturbation relative to some expected result" that then leads to a new understanding. It is this probing activity, especially where it is not accidental but deliberate, that characterizes enaction, and invokes affect, that is, the motivation and planning that lead to doing and to fashioning the probing, both observing and modifying the environment, so that "perceptions and nature condition one another through generating one another." The questioning nature of this probing activity is not an emphasis of Piaget and Glasersfeld.

Sharing enactivism's stress upon both action and embodiment in the incorporation of knowledge, but giving Glasersfeld's mechanism of viability an evolutionary emphasis, is evolutionary epistemology. Inasmuch as an organism must reflect its environment well enough to be able to survive in it, and to be competitive enough to reproduce at a sustainable rate, the structure and reflexes of the organism itself embody knowledge of its environment. This biology-inspired theory of the growth of knowledge is closely tied to universal Darwinism, and is associated with evolutionary epistemologists such as Karl Popper, Donald T. Campbell, Peter Munz, and Gary Cziko. According to Munz, "an organism is an embodied theory about its environment... Embodied theories are also no longer expressed in language, but in anatomical structures or reflex responses, etc."

One objection to enactive approaches to cognition is the so-called "scale-up objection". According to this objection, enactive theories have only limited value because they cannot "scale up" to explain more complex cognitive capacities like human thought. Those phenomena are extremely difficult to explain without positing representation. Recently, however, some philosophers have tried to respond to this objection. For example, Adrian Downey (2020) provides a non-representational account of obsessive-compulsive disorder, and then argues that ecological-enactive approaches can respond to the "scaling up" objection.

Psychological aspects

McGann & others argue that enactivism attempts to mediate between the explanatory role of the coupling between cognitive agent and environment and the traditional emphasis on brain mechanisms found in neuroscience and psychology. In the interactive approach to social cognition developed by De Jaegher & others, the dynamics of interactive processes are seen to play significant roles in coordinating interpersonal understanding, processes that in part include what they call participatory sense-making. Recent developments of enactivism in the area of social neuroscience involve the proposal of The Interactive Brain Hypothesis where social cognition brain mechanisms, even those used in non-interactive situations, are proposed to have interactive origins.

Enactive views of perception

In the enactive view, perception "is not conceived as the transmission of information but more as an exploration of the world by various means. Cognition is not tied into the workings of an 'inner mind', some cognitive core, but occurs in directed interaction between the body and the world it inhabits."

Alva Noë in advocating an enactive view of perception sought to resolve how we perceive three-dimensional objects, on the basis of two-dimensional input. He argues that we perceive this solidity (or 'volumetricity') by appealing to patterns of sensorimotor expectations. These arise from our agent-active 'movements and interaction' with objects, or 'object-active' changes in the object itself. The solidity is perceived through our expectations and skills in knowing how the object's appearance would change with changes in how we relate to it. He saw all perception as an active exploration of the world, rather than being a passive process, something which happens to us.

Noë's idea of the role of 'expectations' in three-dimensional perception has been opposed by several philosophers, notably by Andy Clark. Clark points to difficulties of the enactive approach. He points to internal processing of visual signals, for example, in the ventral and dorsal pathways, the two-streams hypothesis. This results in an integrated perception of objects (their recognition and location, respectively) yet this processing cannot be described as an action or actions. In a more general criticism, Clark suggests that perception is not a matter of expectations about sensorimotor mechanisms guiding perception. Rather, although the limitations of sensorimotor mechanisms constrain perception, this sensorimotor activity is drastically filtered to fit current needs and purposes of the organism, and it is these imposed 'expectations' that govern perception, filtering for the 'relevant' details of sensorimotor input (called "sensorimotor summarizing").

These sensorimotor-centered and purpose-centered views appear to agree on the general scheme but disagree on the dominance issue: is the dominant component peripheral or central? Another view, closed-loop perception, assigns equal a priori dominance to the peripheral and central components. In closed-loop perception, perception emerges through the process of inclusion of an item in a motor-sensory-motor loop, i.e., a loop (or loops) connecting the peripheral and central components that are relevant to that item. The item can be a body part (in which case the loops are in steady state) or an external object (in which case the loops are perturbed and gradually converge to a steady state). These enactive loops are always active, switching dominance as needed.

Another application of enaction to perception is analysis of the human hand. The many remarkably demanding uses of the hand are not learned by instruction, but through a history of engagements that lead to the acquisition of skills. According to one interpretation, it is suggested that "the hand [is]...an organ of cognition", not a faithful subordinate working under top-down instruction, but a partner in a "bi-directional interplay between manual and brain activity." According to Daniel Hutto: "Enactivists are concerned to defend the view that our most elementary ways of engaging with the world and others - including our basic forms of perception and perceptual experience - are mindful in the sense of being phenomenally charged and intentionally directed, despite being non-representational and content-free." Hutto calls this position 'REC' (Radical Enactive Cognition): "According to REC, there is no way to distinguish neural activity that is imagined to be genuinely content involving (and thus truly mental, truly cognitive) from other non-neural activity that merely plays a supporting or enabling role in making mind and cognition possible."

Participatory sense-making

Hanne De Jaegher and Ezequiel Di Paolo (2007) have extended the enactive concept of sense-making into the social domain. The idea takes as its departure point the process of interaction between individuals in a social encounter. De Jaegher and Di Paolo argue that the interaction process itself can take on a form of autonomy (operationally defined). This allows them to define social cognition as the generation of meaning and its transformation through interacting individuals.

The notion of participatory sense-making has led to the proposal that interaction processes can sometimes play constitutive roles in social cognition (De Jaegher, Di Paolo, Gallagher, 2010). It has been applied to research in social neuroscience and autism.

In a similar vein, "an inter-enactive approach to agency holds that the behavior of agents in a social situation unfolds not only according to their individual abilities and goals, but also according to the conditions and constraints imposed by the autonomous dynamics of the interaction process itself". According to Torrance, enactivism involves five interlocking themes related to the question "What is it to be a (cognizing, conscious) agent?" It is:

1. to be a biologically autonomous (autopoietic) organism
2. to generate significance or meaning, rather than to act via...updated internal representations of the external world
3. to engage in sense-making via dynamic coupling with the environment
4. to 'enact' or 'bring forth' a world of significances by mutual co-determination of the organism with its enacted world
5. to arrive at an experiential awareness via lived embodiment in the world.

Torrance adds that "many kinds of agency, in particular the agency of human beings, cannot be understood separately from understanding the nature of the interaction that occurs between agents." That view introduces the social applications of enactivism. "Social cognition is regarded as the result of a special form of action, namely social interaction...the enactive approach looks at the circular dynamic within a dyad of embodied agents."

In cultural psychology, enactivism is seen as a way to uncover cultural influences upon feeling, thinking and acting. Baerveldt and Verheggen argue that "It appears that seemingly natural experience is thoroughly intertwined with sociocultural realities." They suggest that the social patterning of experience is to be understood through enactivism, "the idea that the reality we have in common, and in which we find ourselves, is neither a world that exists independently from us, nor a socially shared way of representing such a pregiven world, but a world itself brought forth by our ways of communicating and our joint action....The world we inhabit is manufactured of 'meaning' rather than 'information'."

Luhmann attempted to apply Maturana and Varela's notion of autopoiesis to social systems. "A core concept of social systems theory is derived from biological systems theory: the concept of autopoiesis. Chilean biologist Humberto Maturana came up with the concept to explain how biological systems such as cells are a product of their own production." "Systems exist by way of operational closure and this means that they each construct themselves and their own realities."

Educational aspects

The first definition of enaction came from psychologist Jerome Bruner, who introduced enaction as 'learning by doing' in his discussion of how children learn, and how they can best be helped to learn. He associated enaction with two other ways of knowledge organization: iconic and symbolic.

"Any domain of knowledge (or any problem within that domain of knowledge) can be represented in three ways: by a set of actions appropriate for achieving a certain result (enactive representation); by a set of summary images or graphics that stand for a concept without defining it fully (iconic representation); and by a set of symbolic or logical propositions drawn from a symbolic system that is governed by rules or laws for forming and transforming propositions (symbolic representation)"

The term 'enactive framework' was elaborated upon by Francisco Varela and Humberto Maturana.

Sriraman argues that enactivism provides "a rich and powerful explanatory theory for learning and being", and that it is closely related both to the ideas of cognitive development of Piaget and to the social constructivism of Vygotsky. Piaget focused on the child's immediate environment, and suggested that cognitive structures like spatial perception emerge as a result of the child's interaction with the world. According to Piaget, children construct knowledge, using what they know in new ways and testing it, and the environment provides feedback concerning the adequacy of their construction. In a cultural context, Vygotsky suggested that the kind of cognition that can take place is not dictated by the engagement of the isolated child, but is also a function of social interaction and dialogue that is contingent upon a sociohistorical context. Enactivism in educational theory "looks at each learning situation as a complex system consisting of teacher, learner, and context, all of which frame and co-create the learning situation." Enactivism in education is very closely related to situated cognition, which holds that "knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used." This approach challenges the "separating of what is learned from how it is learned and used."

Artificial intelligence aspects

The ideas of enactivism regarding how organisms engage with their environment have interested those involved in robotics and man-machine interfaces. The analogy is drawn that a robot can be designed to interact with and learn from its environment in a manner similar to the way an organism does, and that a human can interact with a computer-aided design tool or database using an interface that creates an enactive environment for the user; that is, the user's tactile, auditory, and visual capabilities are all enlisted in a mutually explorative engagement, capitalizing upon all the user's abilities rather than being limited to cerebral engagement. In these areas it is common to refer to affordances as a design concept: the idea that an environment or an interface affords opportunities for enaction, and that good design involves optimizing the role of such affordances.

The activity in the AI community has influenced enactivism as a whole. Referring extensively to modeling techniques for evolutionary robotics by Beer, the modeling of learning behavior by Kelso, and the modeling of sensorimotor activity by Saltzman, McGann, De Jaegher, and Di Paolo discuss how this work makes the dynamics of coupling between an agent and its environment, the foundation of enactivism, "an operational, empirically observable phenomenon." That is, work in AI provides concrete examples that, although not as complex as living organisms, isolate and illuminate the basic principles of enactivism.
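
To give a flavour of what such modeling looks like, the following minimal sketch (written in Python, with entirely hypothetical dynamics that are not drawn from Beer's, Kelso's, or Saltzman's actual models) simulates an agent whose internal state is driven by what it senses while its motor output perturbs the environment in turn, so that the agent-environment coupling becomes an observable trajectory rather than an abstract claim.

    import math

    def simulate(steps=1000, dt=0.01):
        # Hypothetical toy dynamics: one agent variable, one environment variable.
        x = 0.0   # agent internal state
        e = 1.0   # environment variable the agent senses and acts upon
        trajectory = []
        for t in range(steps):
            sensor = e                    # sensing: environment -> agent
            motor = math.tanh(x)          # acting: agent -> environment
            dx = (-x + sensor) * dt       # agent state relaxes toward its sensory input
            de = (math.sin(t * dt) - 0.5 * motor) * dt  # environment drifts, perturbed by the agent
            x += dx
            e += de
            trajectory.append((t * dt, x, e))
        return trajectory

    if __name__ == "__main__":
        # Print a few samples of the coupled trajectory.
        for time, agent_state, env_state in simulate()[::200]:
            print(f"t={time:4.1f}  agent={agent_state:+.3f}  env={env_state:+.3f}")

Even in this stripped-down form, neither variable's trajectory can be explained in isolation: each is partly a product of the other, which is the sense in which such models make agent-environment coupling empirically observable.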

Animal ethics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Animal_ethics

Animal ethics is a branch of ethics which examines human-animal relationships, the moral consideration of animals and how nonhuman animals ought to be treated. The subject matter includes animal rights, animal welfare, animal law, speciesism, animal cognition, wildlife conservation, wild animal suffering, the moral status of nonhuman animals, the concept of nonhuman personhood, human exceptionalism, the history of animal use, and theories of justice. Several different theoretical approaches have been proposed to examine this field, in accordance with the different theories currently defended in moral and political philosophy. There is no theory which is completely accepted due to the differing understandings of what is meant by the term ethics; however, there are theories that are more widely accepted by society such as animal rights and utilitarianism.

History

The history of the regulation of animal research was a fundamental step in the development of animal ethics, as this was when the term "animal ethics" first emerged. Initially, the term "animal ethics" was associated solely with cruelty, only changing in the late 20th century when it was deemed inadequate for modern society. The United States Animal Welfare Act of 1966 attempted to tackle the problems of animal research; however, its effects were considered largely ineffectual. Many did not support the act because it communicated that if human benefit resulted from the tests, the suffering of the animals was justifiable. It was not until the establishment of the animal rights movement that people started supporting and voicing their opinions in public. Animal ethics was expressed through this movement, which led to significant changes in the power and meaning of animal ethics.

Animal rights

The first animal rights laws were introduced between 1635 and 1780. In 1635, Ireland became the first country to pass animal protection legislation, "An Act against Plowing by the Tayle, and pulling the Wooll off living Sheep". In 1641, the Massachusetts colony's legal code, the Body of Liberties, included a regulation against any "Tirranny or Crueltie" towards animals. In 1687, Japan reintroduced a ban on eating meat and killing animals. In 1789, philosopher Jeremy Bentham argued in An Introduction to the Principles of Morals and Legislation that an animal's capacity to suffer, not its intelligence, meant that it should be granted rights: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?"

Between 1822 and 1892, more laws were passed to protect animals. In 1822, the British Parliament passed the Cruel Treatment of Cattle Act. In 1824, the first animal rights society, the Society for the Prevention of Cruelty to Animals (which later became the RSPCA), was founded in England by Richard Martin, Arthur Broome, Lewis Gompertz and William Wilberforce. The same year, Gompertz published Moral Inquiries on the Situation of Man and of Brutes, one of the first books advocating what would, more than a century later, become known as veganism. In 1835, Britain passed the first Cruelty to Animals Act. In 1866, the American Society for the Prevention of Cruelty to Animals was founded by New Yorker Henry Bergh. In 1875, Frances Power Cobbe established the National Anti-Vivisection Society in Britain. In 1892, English social reformer Henry Stephens Salt published Animal Rights: Considered in Relation to Social Progress.

In 1970, Richard D. Ryder coined the term speciesism to describe discrimination against animals based on their species membership. The term was popularized by the philosopher and ethicist Peter Singer in his 1975 book Animal Liberation. The late 1970s marked the beginnings of the animal rights movement, which advanced the belief that animals must be recognised as sentient beings and protected from unnecessary harm. Since the 18th century, many groups have been organised to support different aspects of animal rights, carrying out their support in differing ways. On one hand, the Animal Liberation Front is an English group that took the law into its own hands, orchestrating the Penn break-in; on the other, a group such as People for the Ethical Treatment of Animals, founded in the US, supports the same goals but aims for legislative gains.

Animal testing

Animal testing for biomedical research dates to the writings of the ancient Greeks. Physician-scientists such as Aristotle and Erasistratus are understood to have carried out experiments on living animals. After them came Galen, a Greek who resided in Rome, who carried out experiments on living animals to improve knowledge of anatomy, physiology, pathology, and pharmacology. Animal testing has since evolved considerably and is still carried out in the modern day, with millions of experimental animals used around the world. In recent years, however, it has come under severe criticism from the public and animal activist groups. Those against argue that the benefits animal testing provides for humanity do not justify the suffering of those animals. Those in favour argue that animal testing is fundamental to the advancement of biomedical knowledge.

Drug testing on animals expanded dramatically in the 20th century. In 1937, a US pharmaceutical company created an infamous drug called "Elixir Sulfanilamide". The drug contained diethylene glycol (DEG), which is toxic to humans but at the time was not known to be harmful. Without precautions, the drug was released to the public and caused a mass poisoning; the DEG killed over a hundred people, provoking public uproar. As a result, in 1938 the United States passed the Federal Food, Drug, and Cosmetic Act, administered by the Food and Drug Administration (FDA). This required drugs to be tested on animals before marketing, to confirm that they would have no harmful effects on humans.

However, since the regulations were put in place, deaths from animal testing have increased. More than one million animals are killed in testing every year in the US, and the ways in which these animals die are widely considered sickening: inhaling toxic gas, having skin burned off, or having holes drilled into their skulls.

The 3 Rs

Laboratory rat with a brain implant being fed

The 3 Rs were first introduced in a 1959 book called "The Principles of Humane Experimental Technique" by zoologist W. M. S. Russell and microbiologist R. L. Burch. The 3 Rs stand for Replacement, Reduction, and Refinement and are the guiding principles for the ethical treatment of animals used for testing and experimentation:

  1. Replacement: Avoiding using an animal for testing by switching out the animal for something non-living, such as a computer model, or an animal which is less susceptible to pain in relation to the experiment.
  2. Reduction: Devising a plan to use the fewest animals possible, both by designing experiments so that fewer animals yield sufficient data and by maximising the amount of data obtained from each animal.
  3. Refinement: A decrease in any unnecessary pain inflicted on the animal; adapting experimental procedures to minimise suffering.

The Three Rs principles are now widely accepted in many countries and are applied in any practice that involves experimentation on animals.

Ethical guidelines for animal research

There is a wide range of ethical assessments regarding animals used in research. The general view is that animals have moral status and that how they are treated should be subject to ethical consideration; some of the positions include:

  • Animals have intrinsic values that must be respected.
  • Animals can feel pain and their interests must be taken into consideration.
  • Our treatment of animals, including laboratory animals, reflects our attitudes and shapes us as moral beings.

The Norwegian National Committee for Research Ethics in Science and Technology (NENT) has a set of ethical guidelines for the use of animals in research:

  1. Respect Animal Dignity: Researchers must respect the animals' worth, regardless of their utility value, and the animals' interests as living, sentient creatures. Researchers must show this respect when choosing their topics and methods, and when disseminating their research. Researchers must also provide care that is adapted to the needs of each laboratory animal.
  2. Responsibility for considering options (Replace): When alternatives are available, researchers are responsible for studying whether those alternatives can replace animal experimentation. When no good alternatives are available, researchers must consider whether the research can be postponed until a good alternative is developed. In order to justify experiments on animals, researchers must be able to account for the absence of alternative options and for the urgency of obtaining the knowledge.
  3. The principle of proportionality: responsibility for considering and balancing suffering and benefit: Researchers must consider the risks of pain and suffering that laboratory animals will face and weigh them against the value of the research for animals, people, and the environment. Researchers are responsible for assessing whether the research will lead to improvements for animals, people, or the environment. All of the possible benefits of the study have to be considered, substantiated, and specified in both the short and the long run. This responsibility also entails an obligation to consider both the scientific quality of the experiment and whether it will have relevant scientific benefits. Suffering may only be inflicted on animals if it is counterbalanced by substantial and probable benefits for animals, people, or the environment. Since there are many methods of analysing harm and benefit, research institutions have to provide training on suitable models, and researchers have the responsibility to use these methods of analysis when planning any experiment on animals (see guideline 5).
  4. Responsibility for considering reducing the number of animals (Reduce): Researchers are responsible for considering whether the number of animals an experiment plans to use can be reduced, and for including only the number necessary to maintain the scientific quality of the experiment and the relevance of its results. Before the experiment, researchers must conduct literature studies, consider alternative designs, and perform the calculations needed before beginning an experiment.
  5. Responsibility for minimizing the risk of suffering and improving animal welfare (Refine): Researchers have the responsibility to assess the expected effects on laboratory animals. Researchers must lessen the risk of suffering and provide good animal welfare. Suffering includes pain, hunger, malnutrition, thirst, abnormal cold or heat, fear, stress, illness, injury, and restrictions that prevent the animal from behaving naturally and normally. The assessment of what constitutes considerable suffering should be based on the animal that suffers the most; if there is any doubt regarding the suffering the animals will face, consideration for the animals must be the deciding factor. Researchers must consider not only the direct suffering the animal might endure during an experiment, but also the risk of suffering before and after it, including during breeding, transportation, trapping, euthanizing, labeling, anesthetizing, and stabling. This means that all researchers must take into account the need for periods of adaptation before and after an experiment.
  6. Responsibility for maintaining biological diversity: Researchers are also responsible for ensuring that the use of laboratory animals does not disrupt or endanger biological diversity. This means that researchers have to consider the consequences for the stock and for the ecosystem as a whole. The use of endangered species must be reduced to a minimum. When there is credible but uncertain knowledge that the inclusion of animals in research, or the use of certain methods, may have ethically unacceptable consequences for the stock or the ecosystem as a whole, researchers must observe the precautionary principle.
  7. Responsibility when intervening in a habitat: Researchers have a responsibility to reduce disruption of, and any impact on, the natural behaviour of animals, including animals that are not direct test subjects in the research, as well as on populations and their surroundings. Many research and technology-related projects, such as those regarding environmental technology and surveillance, may affect animals and their living conditions. In such cases, researchers must seek to observe the principle of proportionality and to minimize possible negative impact (see guideline 3).
  8. Responsibility for openness and sharing of data and material: Researchers have a responsibility to ensure the transparency of research findings and to facilitate the sharing of data and materials from all animal experiments. Transparency and sharing are important in order to avoid repeating experiments on animals unnecessarily. Transparency is also important for releasing results to the public, and is part of researchers' responsibility for dissemination. Negative results of experiments on animals should be made public. Releasing negative results to other researchers can tell them which experiments are not worth pursuing, shed light on unfortunate research designs, and help reduce the number of animals used in research.
  9. Requirement of expertise on animals: Researchers and other parties who work with and handle live animals are required to have adequate and up-to-date documented expertise on the animals. This includes knowledge about the biology of the animal species in question, and a willingness and ability to take proper care of the animals.
  10. Requirement of due care: There are many laws, rules, international conventions, and agreements regarding laboratory animals that both researchers and research managers must comply with. Anyone who wants to use animals in experiments should familiarize themselves with the current rules.

Ethical theories

Ethical thinking has influenced the way society perceives animal ethics in at least three ways. The first is the original rise of animal ethics and of concern for how animals should be treated. The second is the evolution of animal ethics as people began to realise that the issue was not as simple as first proposed. The third is the set of challenges humans face in applying these ethics: the consistency of our morals and the justification of particular cases.

Consequentialism

Consequentialism is a collection of ethical theories which judge the rightness or wrongness of an action by its consequences: if an action brings more benefit than harm, it is good; if it brings more harm than benefit, it is bad. The best-known type of consequentialist theory is utilitarianism.

The publication of Peter Singer's book Animal Liberation in 1975 gathered sizeable traction and provided him with a platform to speak his mind on the issues of animal rights. Due to the attention Singer received, his views were the most accessible, and therefore the best known by the public. He supported the theory of utilitarianism, which remains a controversial but highly regarded foundation for thinking about animal research. The theory of utilitarianism states that "an action is right if and only if it produces a better balance of benefits and harms than available alternative actions"; it thus determines whether something is right by weighing the pleasure against the suffering that results. It is not concerned with the process itself, only with how the consequences weigh against it: while consequentialism in general judges whether an action is good or bad, utilitarianism focuses solely on the balance of benefit in the outcome. While this can be applied to some animal research and to raising animals for food, several objections have been raised against utilitarianism. Singer based his position on sentience, selecting that capacity, rather than self-consciousness, autonomy, or the ability to act morally, as the morally relevant difference between humans and animals; this came to be called "the argument from marginal cases". Critics allege, however, that not all morally relevant beings fall under this category, for instance some people in a persistent vegetative state who have no awareness of themselves or their surroundings. Based on Singer's arguments, it would be as (or more) justified to carry out medical experiments on these non-sentient humans as on other (sentient) animals. Another limitation of applying utilitarianism to animal ethics is that it is difficult to accurately measure and compare the suffering of the harmed animals with the gains of the beneficiaries, for instance in medical experiments.

Jeff Sebo argues that utilitarianism has three main implications for animal ethics: "First, utilitarianism plausibly implies that all vertebrates and at least some invertebrates morally matter, and that large animals like elephants matter more on average and that small animals like ants might matter more in total. Second, utilitarianism plausibly implies that we morally ought to attempt to both promote animal welfare and respect animal rights in many real-life cases. Third, utilitarianism plausibly implies that we should prioritize farmed and wild animal welfare and pursue a variety of interventions at once to make progress on these issues".

Deontology

Deontology is a theory that evaluates moral actions based only on doing one's duty, not on the consequences of the actions. This means that if it is your duty to carry out a task, doing so is morally right regardless of the consequences, and failing to do your duty is morally wrong. There are many types of deontological theories; the one most commonly recognised is associated with Immanuel Kant. This ethical theory can be invoked from conflicting sides: a researcher may think it is their duty to make an animal suffer in order to find a cure for a disease affecting millions of humans, which according to deontology is morally correct, while an animal activist might think that saving the animals being tested on is their duty, creating a contradiction. A further difficulty arises when one must choose between two competing moral duties, such as deciding whether to lie about where an escaped chicken went or to tell the truth and send the chicken to its death: lying violates a moral duty, but so does sending the chicken to its death.

A frequently highlighted flaw in Kant's theory is that it applies only to humans and not to non-human animals. The theory opposes utilitarianism in the sense that it concerns itself with duty rather than consequences. Both, however, are fundamental theories that contribute to animal ethics.

Virtue ethics

Virtue ethics focuses neither on the consequences nor on the duty of an action, but on whether the action is what a virtuous person would do. It asks whether such an action would stem from a virtuous person or from someone of vicious character: if it would stem from someone virtuous, it is said to be morally right; if from a vicious person, it is immoral behaviour. A virtuous person is said to hold qualities such as respect, tolerance, justice, and equality. One advantage this theory has over the others is that it takes into account the human emotions that affect a moral decision, which are absent from the previous two. A flaw, however, is that people's conceptions of a virtuous person are highly subjective and can therefore drastically affect a person's moral compass. Because of this underlying issue, this ethical theory cannot be applied to all cases.

Relationship with environmental ethics

Differing conceptions of the treatment of and duties towards animals, particularly those living in the wild, within animal ethics and environmental ethics have been a source of conflict between the two ethical positions; some philosophers have made a case that the two positions are incompatible, while others have argued that such disagreements can be overcome.

United States labor law

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Uni...