
Projections of population growth

From Wikipedia, the free encyclopedia
 
World population growth 1700–2100

Population projections are attempts to show how the human population statistics might change in the future. These projections are an important input to forecasts of the population's impact on this planet and humanity's future well-being. Models of population growth take trends in human development and apply projections into the future. These models use trend-based assumptions about how populations will respond to economic, social and technological forces to understand how those forces will affect fertility and mortality, and thus population growth.

The 2019 projections from the United Nations Population Division (made before the COVID-19 pandemic) show that annual world population growth peaked at 2.1% in 1968, has since dropped to 1.1%, and could drop further to 0.1% by 2100, a growth rate not seen since before the industrial revolution. Based on this, the UN Population Division's medium-variant projection has the world population, 7.8 billion as of 2020, leveling out around 2100 at 10.9 billion, assuming a continuing decrease in the global average fertility rate from 2.5 births per woman during the 2015–2020 period to 1.9 in 2095–2100. A 2014 projection, by contrast, had the population continuing to grow into the next century.

However, estimates from outside the United Nations have put forward alternative models based on additional downward pressure on fertility (such as successful implementation of the education and family-planning goals in the Sustainable Development Goals), which could result in the population peaking during the 2060–2070 period rather than later.

According to the UN, about two-thirds of the predicted growth in population between 2020 and 2050 will take place in Africa. It is projected that 50% of births in the five-year period 2095–2100 will be in Africa. Other organizations project lower levels of population growth in Africa, based particularly on improvements in women's education and successful implementation of family planning.

By 2100, the UN projects the population in Sub-Saharan Africa will reach 3.8 billion, IHME projects 3.1 billion, and IIASA is the lowest at 2.6 billion. In contrast to the UN projections, the models of fertility developed by IHME and IIASA incorporate women's educational attainment, and in the case of IHME, also assume successful implementation of family planning.

World population prospects, 2019

Because of population momentum, the global population will continue to grow, although at a steadily slower rate, for the remainder of this century; the main driver of long-term future population growth, however, will be the evolution of the global average fertility rate.
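To make the mechanism concrete, the following minimal sketch (illustrative numbers, not UN data) demonstrates population momentum with a three-age-class Leslie matrix in Python: fertility is set exactly at replacement level, yet a young-skewed population keeps growing before leveling off.

    import numpy as np

    # Leslie matrix for three age classes. Top row: fertility (0.5 offspring
    # per person in each of age classes 1 and 2, i.e. exactly replacement);
    # sub-diagonal: survival between age classes (100% for simplicity).
    L = np.array([
        [0.0, 0.5, 0.5],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
    ])

    pop = np.array([100.0, 50.0, 20.0])   # young-skewed starting structure
    print(f"year  0: total = {pop.sum():.1f}")
    for year in range(1, 61):
        pop = L @ pop                     # advance one projection step
        if year in (5, 20, 60):
            print(f"year {year:2d}: total = {pop.sum():.1f}")

Even though each cohort merely replaces itself, the total settles near 192, about 13% above the starting 170, because the large young cohorts still have their childbearing years ahead of them.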

Table of UN projections

The United Nations Population Division also publishes high and low estimates, breakdowns by sex, and population density.

UN World Population Projections (medium-variant estimates, 2019 revision)
Year Total population
2021 7,874,965,732
2022 7,953,952,577
2023 8,031,800,338
2024 8,108,605,256
2025 8,184,437,453
2026 8,259,276,651
2027 8,333,078,318
2028 8,405,863,301
2029 8,477,660,723
2030 8,548,487,371
2031 8,618,349,454
2032 8,687,227,873
2033 8,755,083,512
2034 8,821,862,705
2035 8,887,524,229
2036 8,952,048,885
2037 9,015,437,616
2038 9,077,693,645
2039 9,138,828,562
2040 9,198,847,382
2041 9,257,745,483
2042 9,315,508,153
2043 9,372,118,247
2044 9,427,555,382
2045 9,481,803,272
2046 9,534,854,673
2047 9,586,707,749
2048 9,637,357,320
2049 9,686,800,146
2050 9,735,033,900
2051 9,782,061,758
2052 9,827,885,441
2053 9,872,501,562
2054 9,915,905,251
2055 9,958,098,746
2056 9,999,085,167
2057 10,038,881,262
2058 10,077,518,080
2059 10,115,036,360
2060 10,151,469,683
2061 10,186,837,209
2062 10,221,149,040
2063 10,254,419,004
2064 10,286,658,354
2065 10,317,879,315
2066 10,348,098,079
2067 10,377,330,830
2068 10,405,590,532
2069 10,432,889,136
2070 10,459,239,501
2071 10,484,654,858
2072 10,509,150,402
2073 10,532,742,861
2074 10,555,450,003
2075 10,577,288,195
2076 10,598,274,172
2077 10,618,420,909
2078 10,637,736,819
2079 10,656,228,233
2080 10,673,904,454
2081 10,690,773,335
2082 10,706,852,426
2083 10,722,171,375
2084 10,736,765,444
2085 10,750,662,353
2086 10,763,874,023
2087 10,776,402,019
2088 10,788,248,948
2089 10,799,413,366
2090 10,809,892,303
2091 10,819,682,643
2092 10,828,780,959
2093 10,837,182,077
2094 10,844,878,798
2095 10,851,860,145
2096 10,858,111,587
2097 10,863,614,776
2098 10,868,347,636
2099 10,872,284,134
2100 10,875,393,719
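As a quick check on the table, a short Python sketch (values copied from the rows above) shows the implied annual growth rate falling from about 1.0% in the early 2020s to near zero by the end of the century:

    populations = {
        2021: 7_874_965_732, 2022: 7_953_952_577,
        2099: 10_872_284_134, 2100: 10_875_393_719,
    }

    def annual_growth(p_start, p_end):
        """Year-over-year growth, as a percentage."""
        return (p_end / p_start - 1) * 100

    print(f"2021 to 2022: {annual_growth(populations[2021], populations[2022]):.2f}%")
    print(f"2099 to 2100: {annual_growth(populations[2099], populations[2100]):.2f}%")
    # prints roughly 1.00% and 0.03%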

History of population projections

Walter Greiling projected in the 1950s that world population would reach a peak of about nine billion in the 21st century and then stop growing, after a readjustment of the Third World and the sanitation of the tropics.

Estimates published in the 2000s tended to predict that the population of Earth would stop increasing around 2070. In a 2004 long-term prospective report, the United Nations Population Division projected the world population would peak at 9.22 billion in 2075. After reaching this maximum, it would decline slightly and then resume a slow increase, reaching a level of 8.97 billion by 2300, about the same as the projected 2050 figure.

This prediction was revised in the 2010s, to the effect that no maximum will likely be reached in the 21st century. The main reason for the revision was that the ongoing rapid population growth in Africa had been underestimated. A 2014 paper by demographers from several universities and the United Nations Population Division forecast that the world's population would reach about 10.9 billion in 2100 and continue growing thereafter. In 2017 the UN predicted a decline of global population growth rate from +1.0% in 2020 to +0.5% in 2050 and to +0.1% in 2100.

Jørgen Randers, one of the authors of the seminal 1972 long-term simulations in The Limits to Growth, offered an alternative scenario in a 2012 book, arguing that traditional projections insufficiently take into account the downward impact of global urbanization on fertility. Randers' "most likely scenario" predicts a peak in the world population in the early 2040s at about 8.1 billion people, followed by decline.

Drivers of population change

The population of a country or area grows or declines through the interaction of three demographic drivers: fertility, mortality, and migration.

Fertility

Map of countries by fertility rate (2020), according to the Population Reference Bureau

Fertility is expressed as the total fertility rate (TFR), a measure of the number of children on average that a woman will bear in her lifetime. With longevity trending towards uniform and stable values worldwide, the main driver of future population growth will be the evolution of the fertility rate.

Where fertility is high, demographers generally assume that fertility will decline and eventually stabilize at about two children per woman.

During the period 2015–2020, the average world fertility rate was 2.5 children per woman, about half the level in 1950–1955 (5 children per woman). In the medium variant, global fertility is projected to decline further to 2.2 in 2045–2050 and to 1.9 in 2095–2100.

Mortality

Where the mortality rate is relatively high and life expectancy therefore relatively low, changes in mortality can have a material impact on population growth. Where the mortality rate is low and life expectancy has therefore risen, a change in mortality will have much less of an effect.

Because child mortality has declined substantially over the last several decades, global life expectancy at birth, which is estimated to have risen from 47 years in 1950–1955 to 67 years in 2000–2005, is expected to keep rising to reach 77 years in 2045–2050. In the More Developed regions, the projected increase is from 79 years today to 83 years by mid-century. Among the Least Developed countries, where life expectancy today is just under 65 years, it is expected to be 71 years in 2045–2050.

The population of 31 countries or areas, including Ukraine, Romania, Japan and most of the successor states of the Soviet Union, is expected to be lower in 2050 than in 2005.

Migration

Migration can have a significant effect on population change. Migration between countries of the Global South accounts for 38% of total migration, and migration from the Global South to the Global North for 34%. For example, the United Nations reports that during the period 2010–2020, fourteen countries will have seen a net inflow of more than one million migrants each, while ten countries will have seen a net outflow of similar magnitude. The largest migratory outflows have been in response to demand for workers in other countries (Bangladesh, Nepal and the Philippines) or to insecurity in the home country (Myanmar, Syria and Venezuela). Belarus, Estonia, Germany, Hungary, Italy, Japan, the Russian Federation, Serbia and Ukraine have experienced a net inflow of migrants over the decade, helping to offset population losses caused by negative natural increase (births minus deaths).

World population

Estimates of population levels in different continents between 1950 and 2050, according to the United Nations (2011 edition). The vertical axis is logarithmic and is in millions of people.
 
UN estimates (as of 2017) for world population by continent in 2000 and in 2050 (pie chart size to scale): Asia, Africa, Europe, Latin America, Northern America, Oceania.
 
World population estimates from 1800 to 2100, based on "high", "medium" and "low" United Nations projections in 2010 (colored red, orange and green) and US Census Bureau historical estimates (in black). Actual recorded population figures (as of 2010) are colored in blue. According to the highest estimate, the world population may rise to 16 billion by 2100; according to the lowest estimate, it may decline to 7.2 billion.

2050

The median scenario of the UN 2019 World Population Prospects predicts the following populations per region in 2050 (compared to population in 2000), in billions:


Region                              2000   2050   Growth   %/yr
Asia                                3.74   5.29   +41%     +0.7%
Africa                              0.81   2.49   +207%    +2.3%
Europe                              0.73   0.71   −3%      −0.1%
South/Central America + Caribbean   0.52   0.76   +46%     +0.8%
North America                       0.31   0.43   +39%     +0.7%
Oceania                             0.03   0.06   +100%    +1.4%
World                               6.14   9.74   +60%     +0.9%
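The "%/yr" column is the 50-year compound annual growth rate, which the short sketch below reproduces from the 2000 and 2050 columns (a worked check on the table, not part of the UN source):

    # Population in billions, (2000, 2050), copied from the table above.
    regions = {
        "Asia": (3.74, 5.29), "Africa": (0.81, 2.49),
        "Europe": (0.73, 0.71), "World": (6.14, 9.74),
    }

    for name, (p2000, p2050) in regions.items():
        cagr = (p2050 / p2000) ** (1 / 50) - 1   # compound annual growth rate
        print(f"{name}: {100 * cagr:+.1f}%/yr")
    # Asia +0.7, Africa +2.3, Europe -0.1, World +0.9, matching the table.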

After 2050

Projections of population reaching more than one generation into the future are highly speculative. The United Nations Department of Economic and Social Affairs report of 2004 projected the world population to peak at 9.22 billion in 2075 and then stabilise at a value close to 9 billion. By contrast, a 2014 projection by the United Nations Population Division predicted a population close to 11 billion by 2100, with no declining trend in the foreseeable future.

United Nations projections

The UN Population Division report of 2019 projects world population to continue growing, although at a steadily decreasing rate, and to reach 10.9 billion in 2100 with a growth rate at that time of close to zero.

This projected growth of population, like all others, depends on assumptions about vital rates. For example, the UN Population Division assumes that the total fertility rate (TFR) will continue to decline, at varying paces depending on circumstances in individual regions, to a below-replacement level of 1.9 by 2100. Between 2020 and 2100, regions with a TFR currently below this rate (e.g. Europe) are expected to see TFR rise, while regions with a TFR above this rate will see TFR continue to decline.

Total fertility rate for six regions and the world, 1950–2100

Other projections

  • A 2020 study published by The Lancet from researchers funded by the Global Burden of Disease Study promotes a lower growth scenario, projecting that world population will peak in 2064 at 9.7 billion and then decline to 8.8 billion in 2100. This projection assumes further advancement of women's rights globally. In this case TFR is assumed to decline more rapidly than the UN's projection, to reach 1.7 in 2100.
  • An analysis from the Wittgenstein Centre (IIASA) predicts global population to peak in 2070 at 9.4 billion and then decline to 9.0 billion in 2100.
  • The Institute for Health Metrics and Evaluation (IHME) and the International Institute for Applied Systems Analysis (IIASA) project lower fertility in Sub-Saharan Africa (SSA) in 2100 than the UN. By 2100, the UN projects the population in SSA will reach 3.8 billion, IHME projects 3.1 billion, and IIASA projects 2.6 billion. IHME and IIASA incorporate women's educational attainment in their models of fertility, and in the case of IHME, also consider met need for family planning.

Other assumptions can produce other results. Some of the authors of the 2004 UN report assumed that life expectancy would rise slowly and continuously, with no upper limit, though at a slowing pace depending on circumstances in individual countries. By 2100, the report assumed life expectancy would be 66 to 97 years, and by 2300, 87 to 106 years, depending on the country. Based on that assumption, the authors expected rising life expectancy to produce small but continuing population growth by the end of the projections, ranging from 0.03 to 0.07 percent annually. The hypothetical feasibility (and wide availability) of life extension by technological means would further contribute to long-term (beyond 2100) population growth.

Evolutionary biology also suggests the demographic transition may reverse itself and global population may continue to grow in the long term. In addition, recent evidence suggests birth rates may be rising in the 21st century in the developed world.

Growth regions

The table below shows that from 2020 to 2050, the bulk of the world's population growth is predicted to take place in Africa: of the additional 1.9 billion people projected between 2020 and 2050, 1.2 billion will be added in Africa, 0.7 billion in Asia and zero in the rest of the world. Africa's share of global population is projected to grow from 17% in 2020 to 26% in 2050 and 39% by 2100, while the share of Asia will fall from 59% in 2020 to 55% in 2050 and 43% in 2100. The strong growth of the African population will happen regardless of the rate of decrease of fertility, because of the exceptional proportion of young people already living today. For example, the UN projects that the population of Nigeria will surpass that of the United States by about 2050.

Projected regional populations, 2020–2100
Region            2020 (bn)   % of total   2050 (bn)   % of total   Change 2020–50 (bn)   2100 (bn)   % of total
Africa            1.3         17           2.5         26           1.2                   4.3         39
Asia              4.6         59           5.3         55           0.7                   4.7         43
Other             1.9         24           1.9         20           0.0                   1.9         17

More Developed    1.3         17           1.3         13           0.0                   1.3         12
Less Developed    6.5         83           8.4         87           1.9                   9.6         88

World             7.8         100          9.7         100          1.9                   10.9        100

Thus the population of the More Developed regions is slated to remain mostly unchanged, at 1.3 billion, for the remainder of the 21st century; all population growth will come from the Less Developed regions.

The table below breaks out the UN's future population growth predictions by region.

Projected annual % changes in population for three periods in the future
Region 2020–25 2045–50 2095–2100
Africa 2.4 1.8 0.6
Asia 0.8 0.1 −0.4
Europe 0.0 −0.3 −0.1
Latin America & the Caribbean 0.8 0.2 −0.5
Northern America 0.6 0.3 0.2
Oceania 1.2 0.8 0.4
World 1.0 0.5 0.0

The UN projects that between 2020 and 2100 population growth rates will decline in all six regions; by 2100 three of them will be undergoing population decline, and the world as a whole will have reached zero population growth.

Most populous nations by 2050 and 2100

The UN Population Division has calculated the future population of the world's countries, based on current demographic trends. The current (2020) world population is 7.8 billion. The 2019 report projects world population in 2050 to be 9.7 billion people, possibly reaching as high as 11 billion by 2100, with the following estimates for the countries ranking among the top 14 in 2020, 2050, or 2100:

Projected population and world rank of countries ranking in the top 14 in 2020, 2050 or 2100
Country   Population (millions): 2020  2050  2100   Rank: 2020  2050  2100
China 1,439 1,402 1,065 1 2 2
India 1,380 1,639 1,447 2 1 1
United States 331 379 434 3 4 4
Indonesia 273 331 321 4 6 7
Pakistan 221 338 403 5 5 5
Brazil 212 229 181 6 7 12
Nigeria 206 401 733 7 3 3
Bangladesh 165 192 151 8 10 14
Russia 146 136 126 9 14 19
Mexico 129 155 141 10 12 17
Japan 126 106 75 11 17 36
Ethiopia 115 205 294 12 8 8
Philippines 110 144 146 13 13 15
Egypt 102 160 225 14 11 10
Democratic Republic of the Congo 90 194 362 16 9 6
Tanzania 60 135 286 24 15 9
Niger 24 66 165 56 30 13
Angola 33 77 188 44 24 11
World 7,795 9,735 10,875

From 2017 to 2050, nine countries are expected to account for half of the world's projected population increase: India, Nigeria, the Democratic Republic of the Congo, Pakistan, Ethiopia, Tanzania, the United States, Uganda, and Indonesia, listed according to the expected size of their contribution to that projected population growth.

Physicalism

From Wikipedia, the free encyclopedia

In philosophy, physicalism is the metaphysical thesis that "everything is physical", that there is "nothing over and above" the physical, or that everything supervenes on the physical.[2] Physicalism is a form of ontological monism—a "one substance" view of the nature of reality as opposed to a "two-substance" (dualism) or "many-substance" (pluralism) view. Both the definition of "physical" and the meaning of physicalism have been debated.

Physicalism is closely related to materialism. Physicalism grew out of materialism with advancements of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law).

According to a 2009 survey, physicalism is the majority view among philosophers, but there remains significant opposition to it. Neuroplasticity has been used as an argument in support of a non-physicalist view. The philosophical zombie argument is another attempt to challenge physicalism.

Definition of physicalism

The word "physicalism" was introduced into philosophy in the 1930s by Otto Neurath and Rudolf Carnap.

The use of "physical" in physicalism is a philosophical concept and can be distinguished from alternative definitions found in the literature (e.g. Karl Popper defined a physical proposition to be one which can at least in theory be denied by observation). A "physical property", in this context, may be a metaphysical or logical combination of properties which are physical in the ordinary sense. It is common to express the notion of "metaphysical or logical combination of properties" using the notion of supervenience: A property A is said to supervene on a property B if any change in A necessarily implies a change in B. Since any change in a combination of properties must consist of a change in at least one component property, we see that the combination does indeed supervene on the individual properties. The point of this extension is that physicalists usually suppose the existence of various abstract concepts which are non-physical in the ordinary sense of the word; so physicalism cannot be defined in a way that denies the existence of these abstractions. Also, physicalism defined in terms of supervenience does not entail that all properties in the actual world are type identical to physical properties. It is, therefore, compatible with multiple realizability.

From the notion of supervenience, we see that, assuming that mental, social, and biological properties supervene on physical properties, two hypothetical worlds cannot be identical in their physical properties while differing in their mental, social, or biological properties.

Two common approaches to defining "physicalism" are the theory-based and object-based approaches. The theory-based conception of physicalism proposes that "a property is physical if and only if it either is the sort of property that physical theory tells us about or else is a property which metaphysically (or logically) supervenes on the sort of property that physical theory tells us about". Likewise, the object-based conception claims that "a property is physical if and only if: it either is the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents or else is a property which metaphysically (or logically) supervenes on the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents".

Physicalists have traditionally opted for a "theory-based" characterization of the physical either in terms of current physics, or a future (ideal) physics. These two theory-based conceptions of the physical represent both horns of Hempel's dilemma (named after the late philosopher of science and logical empiricist Carl Gustav Hempel): an argument against theory-based understandings of the physical. Very roughly, Hempel's dilemma is that if we define the physical by reference to current physics, then physicalism is very likely to be false, as it is very likely (by pessimistic meta-induction) that much of current physics is false. But if we instead define the physical in terms of a future (ideal) or completed physics, then physicalism is hopelessly vague or indeterminate.

While the force of Hempel's dilemma against theory-based conceptions of the physical remains contested, alternative "non-theory-based" conceptions of the physical have also been proposed. Frank Jackson (1998) for example, has argued in favour of the aforementioned "object-based" conception of the physical. An objection to this proposal, which Jackson himself noted in 1998, is that if it turns out that panpsychism or panprotopsychism is true, then such a non-materialist understanding of the physical gives the counterintuitive result that physicalism is, nevertheless, also true since such properties will figure in a complete account of paradigmatic examples of the physical.

David Papineau and Barbara Montero have advanced and subsequently defended a "via negativa" characterization of the physical. The gist of the via negativa strategy is to understand the physical in terms of what it is not: the mental. In other words, the via negativa strategy understands the physical as "the non-mental". An objection to the via negativa conception of the physical is that (like the object-based conception) it doesn't have the resources to distinguish neutral monism (or panprotopsychism) from physicalism. Further, Restrepo (2012) argues that this conception of the physical makes core non-physical entities of non-physicalist metaphysics, like God, Cartesian souls and abstract numbers, physical and thus either false or trivially true: "God is non-mentally-and-non-biologically identifiable as the thing that created the universe. Supposing emergentism is true, non-physical emergent properties are non-mentally-and-non-biologically identifiable as non-linear effects of certain arrangements of matter. The immaterial Cartesian soul is non-mentally-and-non-biologically identifiable as one of the things that interact causally with certain particles (coincident with the pineal gland). The Platonic number eight is non-mentally-and-non-biologically identifiable as the number of planets orbiting the Sun".

Supervenience-based definitions of physicalism

Adopting a supervenience-based account of the physical, the definition of physicalism as "all properties are physical" can be unraveled to:

1) Physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is also a duplicate of w simpliciter.

Applied to the actual world (our world), statement 1 above is the claim that physicalism is true at the actual world if and only if at every possible world in which the physical properties and laws of the actual world are instantiated, the non-physical (in the ordinary sense of the word) properties of the actual world are instantiated as well. To borrow a metaphor from Saul Kripke (1972), the truth of physicalism at the actual world entails that once God has instantiated or "fixed" the physical properties and laws of our world, then God's work is done; the rest comes "automatically".

Unfortunately, statement 1 fails to capture even a necessary condition for physicalism to be true at a world w. To see this, imagine a world in which there are only physical properties—if physicalism is true at any world it is true at this one. But one can conceive physical duplicates of such a world that are not also duplicates simpliciter of it: worlds that have the same physical properties as our imagined one, but with some additional property or properties. A world might contain "epiphenomenal ectoplasm", some additional pure experience that does not interact with the physical components of the world and is not necessitated by them (does not supervene on them). To handle the epiphenomenal ectoplasm problem, statement 1 can be modified to include a "that's-all" or "totality" clause or be restricted to "positive" properties. Adopting the former suggestion here, we can reformulate statement 1 as follows:

2) Physicalism is true at a possible world w if and only if any world that is a minimal physical duplicate of w is a duplicate of w simpliciter.

Applied in the same way, statement 2 is the claim that physicalism is true at a possible world w if and only if any world that is a minimal physical duplicate of w (a physical duplicate with nothing extra added) is a duplicate of w without qualification. This allows a world in which there are only physical properties to be counted as one at which physicalism is true, since worlds in which there is some extra stuff are not "minimal" physical duplicates of such a world, nor are they minimal physical duplicates of worlds that contain some non-physical properties that are metaphysically necessitated by the physical.

But while statement 2 overcomes the problem of worlds at which there is some extra stuff (sometimes referred to as the "epiphenomenal ectoplasm problem") it faces a different challenge: the so-called "blockers problem". Imagine a world where the relation between the physical and non-physical properties at this world (call the world w1) is slightly weaker than metaphysical necessitation, such that a certain kind of non-physical intervener—"a blocker"—could, were it to exist at w1, prevent the non-physical properties in w1 from being instantiated by the instantiation of the physical properties at w1. Since statement 2 rules out worlds which are physical duplicates of w1 that also contain non-physical interveners by virtue of the minimality, or that's-all clause, statement 2 gives the (allegedly) incorrect result that physicalism is true at w1. One response to this problem is to abandon statement 2 in favour of the alternative possibility mentioned earlier in which supervenience-based formulations of physicalism are restricted to what David Chalmers (1996) calls "positive properties". A positive property is one that "...if instantiated in a world W, is also instantiated by the corresponding individual in all worlds that contain W as a proper part." Following this suggestion, we can then formulate physicalism as follows:

3) Physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is a positive duplicate of w.

On the face of it, statement 3 seems able to handle both the epiphenomenal ectoplasm problem and the blockers problem. With regard to the former, statement 3 gives the correct result that a purely physical world is one at which physicalism is true, since worlds in which there is some extra stuff are positive duplicates of a purely physical world. With regard to the latter, statement 3 appears to have the consequence that worlds in which there are blockers are worlds where positive non-physical properties of w1 will be absent, hence w1 will not be counted as a world at which physicalism is true. Daniel Stoljar (2010) objects to this response to the blockers problem on the basis that since the non-physical properties of w1 aren't instantiated at a world in which there is a blocker, they are not positive properties in Chalmers' (1996) sense, and so statement 3 will count w1 as a world at which physicalism is true after all.
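For ease of comparison, the three formulations can be written schematically. The predicate abbreviations below are introduced here for convenience and are not standard notation: v ranges over possible worlds, PhysDup(v, w) says v is a physical duplicate of w, MinPhysDup(v, w) that v is a minimal physical duplicate of w, Dup(v, w) that v is a duplicate of w simpliciter, and PosDup(v, w) that v is a positive duplicate of w.

    (1)\quad \mathrm{Physicalism}(w) \iff \forall v\,\bigl(\mathrm{PhysDup}(v,w) \rightarrow \mathrm{Dup}(v,w)\bigr)
    (2)\quad \mathrm{Physicalism}(w) \iff \forall v\,\bigl(\mathrm{MinPhysDup}(v,w) \rightarrow \mathrm{Dup}(v,w)\bigr)
    (3)\quad \mathrm{Physicalism}(w) \iff \forall v\,\bigl(\mathrm{PhysDup}(v,w) \rightarrow \mathrm{PosDup}(v,w)\bigr)

The progression makes the dialectic visible: statement 2 restricts which duplicates must match w, while statement 3 weakens what they must duplicate.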

A further problem for supervenience-based formulations of physicalism is the so-called "necessary beings problem". A necessary being in this context is a non-physical being that exists in all possible worlds (for example what theists refer to as God). A necessary being is compatible with all the definitions provided, because it is supervenient on everything; yet it is usually taken to contradict the notion that everything is physical. So any supervenience-based formulation of physicalism will at best state a necessary but not sufficient condition for the truth of physicalism.

Additional objections have been raised to the above definitions of supervenience physicalism: one could imagine an alternate world that differs only by the presence of a single ammonium molecule (or physical property), and yet, based on statement 1, such a world might be completely different in terms of its distribution of mental properties. Furthermore, there is disagreement concerning the modal status of physicalism: whether it is a necessary truth, or true only in a world which conforms to certain conditions (i.e. those of physicalism).

Realisation physicalism

Closely related to supervenience physicalism is realisation physicalism, the thesis that every instantiated property is either physical or realised by a physical property.

Token physicalism

Token physicalism is the proposition that "for every actual particular (object, event or process) x, there is some physical particular y such that x = y". It is intended to capture the idea of "physical mechanisms". Token physicalism is compatible with property dualism, in which all substances are "physical", but physical objects may have mental properties as well as physical properties. Token physicalism is not, however, equivalent to supervenience physicalism. First, token physicalism does not imply supervenience physicalism, because the former does not rule out the possibility of non-supervenient properties (provided that they are associated only with physical particulars). Second, supervenience physicalism does not imply token physicalism, for the former allows supervenient objects (such as a "nation" or "soul") that are not identical to any physical object.

Reductionism and emergentism

Reductionism

There are multiple versions of reductionism. In the context of physicalism, the reductions referred to are of a "linguistic" nature, allowing discussions of, say, mental phenomena to be translated into discussions of physics. In one formulation, every concept is analysed in terms of a physical concept. One counter-argument to this supposes there may be an additional class of expressions which is non-physical but which increases the expressive power of a theory. Another version of reductionism is based on the requirement that one theory (mental or physical) be logically derivable from a second.

The combination of reductionism and physicalism is usually called reductive physicalism in the philosophy of mind. The opposite view is non-reductive physicalism. Reductive physicalism is the view that mental states are both nothing over and above physical states and reducible to physical states. One version of reductive physicalism is type physicalism or mind-body identity theory. Type physicalism asserts that "for every actually instantiated property F, there is some physical property G such that F=G". Unlike token physicalism, type physicalism entails supervenience physicalism.

Reductive versions of physicalism are increasingly unpopular because, critics argue, they do not account for mental lives. On this position the brain, as a physical substance, has only physical attributes, such as a particular volume, a particular mass, a particular density, a particular location, a particular shape, and so on; it does not have any mental attributes. The brain is not overjoyed or unhappy. The brain is not in pain. When a person's back aches and he or she is in pain, it is not the brain that is suffering, even though the brain is associated with the neural circuitry that provides the experience of pain. Reductive physicalism, on this view, therefore cannot explain mental lives. In the event of fear, for example, there is doubtless neural activity corresponding to the experience of fear. However, the brain itself is not fearful. Fear cannot be reduced to a physical brain state even though it corresponds with neural activity in the brain. For this reason, reductive physicalism is argued to be indefensible, as it cannot be reconciled with mental experience.

Another common argument against type physicalism is multiple realizability, the possibility that a psychological process (say) could be instantiated by many different neurological processes (even non-neurological processes, in the case of machine or alien intelligence). For in this case, the neurological terms translating a psychological term must be disjunctions over the possible instantiations, and it is argued that no physical law can use these disjunctions as terms. Type physicalism was the original target of the multiple realizability argument, and it is not clear that token physicalism is susceptible to objections from multiple realizability.

Emergentism

There are two versions of emergentism, the strong version and the weak version. Supervenience physicalism has been seen as a strong version of emergentism, in which the subject's psychological experience is considered genuinely novel. Non-reductive physicalism, on the other hand, is a weak version of emergentism, because it does not require that the subject's psychological experience be novel. The strong version of emergentism is incompatible with physicalism: since there are novel mental states, mental states are not nothing over and above physical states. The weak version of emergentism, however, is compatible with physicalism.

Emergentism is thus actually a very broad view. Some forms of emergentism appear either incompatible with physicalism or equivalent to it (e.g. a posteriori physicalism); others appear to merge both dualism and supervenience. Emergentism compatible with dualism claims that mental states and physical states are metaphysically distinct while maintaining the supervenience of mental states on physical states. This proposition, however, contradicts supervenience physicalism, which asserts a denial of dualism.

A priori versus a posteriori physicalism

Physicalists hold that physicalism is true. A natural question for physicalists, then, is whether the truth of physicalism is deducible a priori from the nature of the physical world (i.e., the inference is justified independently of experience, even though the nature of the physical world can itself only be determined through experience) or can only be deduced a posteriori (i.e., the justification of the inference itself is dependent upon experience). So-called "a priori physicalists" hold that from knowledge of the conjunction of all physical truths, a totality or that's-all truth (to rule out non-physical epiphenomena, and enforce the closure of the physical world), and some primitive indexical truths such as "I am A" and "now is B", the truth of physicalism is knowable a priori. Let "P" stand for the conjunction of all physical truths and laws, "T" for a that's-all truth, "I" for the indexical "centering" truths, and "N" for any [presumably non-physical] truth at the actual world. We can then, using the material conditional "→", represent a priori physicalism as the thesis that PTI → N is knowable a priori. An important wrinkle here is that the concepts in N must be possessed non-deferentially in order for PTI → N to be knowable a priori. The suggestion, then, is that possession of the concepts in the consequent, plus the empirical information in the antecedent is sufficient for the consequent to be knowable a priori.

An "a posteriori physicalist", on the other hand, will reject the claim that PTI → N is knowable a priori. Rather, they would hold that the inference from PTI to N is justified by metaphysical considerations that in turn can be derived from experience. So the claim then is that "PTI and not N" is metaphysically impossible.

One commonly issued challenge to a priori physicalism and to physicalism in general is the "conceivability argument", or zombie argument. At a rough approximation, the conceivability argument runs as follows:

P1) PTI and not Q (where "Q" stands for the conjunction of all truths about consciousness, or some "generic" truth about someone being "phenomenally" conscious [i.e., there is "something it is like" to be a person x] ) is conceivable (i.e., it is not knowable a priori that PTI and not Q is false).

P2) If PTI and not Q is conceivable, then PTI and not Q is metaphysically possible.

P3) If PTI and not Q is metaphysically possible then physicalism is false.

C) Physicalism is false.

Here proposition P3 is a direct application of the supervenience of consciousness, and hence of any supervenience-based version of physicalism: If PTI and not Q is possible, there is some possible world where it is true. This world differs from [the relevant indexing on] our world, where PTIQ is true. But the other world is a minimal physical duplicate of our world, because PT is true there. So there is a possible world which is a minimal physical duplicate of our world, but not a full duplicate; this contradicts the definition of physicalism that we saw above.
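Schematically, writing \Diamond_c for conceivability and \Diamond_m for metaphysical possibility (notation introduced here for convenience), the argument is two applications of modus ponens:

    P1:\quad \Diamond_c(PTI \land \lnot Q)
    P2:\quad \Diamond_c(PTI \land \lnot Q) \rightarrow \Diamond_m(PTI \land \lnot Q)
    P3:\quad \Diamond_m(PTI \land \lnot Q) \rightarrow \lnot\mathrm{Physicalism}
    C:\quad\ \lnot\mathrm{Physicalism}

From P1 and P2, PTI and not Q is metaphysically possible; with P3, physicalism is false. Any resistance to the argument must therefore be directed at one of the premises.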

Since a priori physicalists hold that PTI → N is a priori, they are committed to denying P1) of the conceivability argument. The a priori physicalist, then, must argue that PTI and not Q, on ideal rational reflection, is incoherent or contradictory.

A posteriori physicalists, on the other hand, generally accept P1) but deny P2), the move from conceivability to metaphysical possibility. Some a posteriori physicalists think that, unlike the possession of most if not all other empirical concepts, the possession of the concept of consciousness has the special property that the presence of PTI and the absence of consciousness will be conceivable, even though, according to them, it is knowable a posteriori that PTI and not Q is not metaphysically possible. These a posteriori physicalists endorse some version of what Daniel Stoljar (2005) has called "the phenomenal concept strategy". Roughly speaking, the phenomenal concept strategy is a label for those a posteriori physicalists who attempt to show that it is only the concept of consciousness, not the property, that is in some way "special" or sui generis. Other a posteriori physicalists eschew the phenomenal concept strategy, and argue that even ordinary macroscopic truths such as "water covers 60% of the earth's surface" are not knowable a priori from PTI and a non-deferential grasp of the concepts "water" and "earth", et cetera. If this is correct, then we should (arguably) conclude that conceivability does not entail metaphysical possibility, and P2) of the conceivability argument against physicalism is false.

Other views

Realistic physicalism

Galen Strawson's realistic physicalism or realistic monism entails panpsychism, or at least micropsychism. Strawson does not take into account procedurality or cause-and-effect relationships, nor the fact that ontological states (how things are) are unaffected by human knowledge or interpretation (except in human interactions, where knowledge plays an active role). Nor does he address the claim that the soul theory is non-procedural and ontologically undefinable: if the soul were definable but merely unknown to humans, it could in principle have been described scientifically, and the inability to physically access the soul does not preclude the creation of a formal theory about it (though that theory would have to be logical in order to be scientific). Strawson argues that "many—perhaps most—of those who call themselves physicalists or materialists [are mistakenly] committed to the thesis that physical stuff is, in itself, in its fundamental nature, something wholly and utterly non-experiential... even when they are prepared to admit with Eddington that physical stuff has, in itself, 'a nature capable of manifesting itself as mental activity', i.e. as experience or consciousness". Because experiential phenomena allegedly cannot be emergent from wholly non-experiential phenomena, philosophers are driven to substance dualism, property dualism, eliminative materialism and "all other crazy attempts at wholesale mental-to-non-mental reduction".

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an 'inference to the best explanation'... Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. 'And were the inmost essence of things laid open to us' I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism... So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of 'substance dualism'... Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?

Neuromorphic engineering

From Wikipedia, the free encyclopedia

Neuromorphic engineering, also known as neuromorphic computing, is the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python-based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.
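As a rough sketch of the software route mentioned above, the fragment below steps a single leaky integrate-and-fire layer through time with snnTorch (it follows snnTorch's published tutorial pattern; exact signatures may vary between versions, so treat it as illustrative rather than definitive):

    import torch
    import snntorch as snn
    from snntorch import surrogate

    # Leaky integrate-and-fire layer: beta is the membrane decay factor, and
    # the surrogate gradient makes the spiking threshold differentiable so
    # the layer can be trained with ordinary error backpropagation.
    lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())
    mem = lif.init_leaky()                 # initial membrane potential

    inputs = torch.rand(100, 1) * 2.0      # 100 time steps of input current
    for step in range(inputs.shape[0]):
        spk, mem = lif(inputs[step], mem)  # one step: spike output, new state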

A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.

Neuromorphic engineering is an interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems. It was developed by Carver Mead in the late 1980s.

Neurological inspiration

Neuromorphic engineering is set apart by the inspiration it takes from what we know about the structure and operations of the brain. Neuromorphic engineering translates what we know about the brain's function into computer systems. Work has mostly focused on replicating the analog nature of biological computation and the role of neurons in cognition.

The biological processes of neurons and their synapses are dauntingly complex, and thus very difficult to simulate artificially. A key feature of biological brains is that all of the processing in neurons uses analog chemical signals. This makes it hard to replicate brains in computers because the current generation of computers is completely digital. However, the characteristics of these parts can be abstracted into mathematical functions that closely capture the essence of the neuron's operations.
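One such abstraction is the leaky integrate-and-fire (LIF) model: the membrane voltage decays toward rest, integrates incoming current, and emits a spike on crossing a threshold. The plain-Python sketch below illustrates the idea (all parameter values are assumptions chosen for demonstration, not taken from any particular chip):

    import numpy as np

    def lif_neuron(current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
        """Simulate one LIF neuron; return the time steps at which it spikes."""
        v, spikes = v_rest, []
        for t, i_in in enumerate(current):
            # Leaky integration: decay toward rest plus the input drive.
            v += (dt / tau) * ((v_rest - v) + i_in)
            if v >= v_thresh:      # threshold crossing produces a spike
                spikes.append(t)
                v = v_rest         # reset the membrane after the spike
        return spikes

    # A constant supra-threshold current yields a regular spike train;
    # stronger input shortens the interval between spikes (rate coding).
    print(lif_neuron(np.full(200, 2.0)))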

The goal of neuromorphic computing is not to perfectly mimic the brain and all of its functions, but instead to extract what is known of its structure and operations to be used in a practical computing system. No neuromorphic system will claim nor attempt to reproduce every element of neurons and synapses, but all adhere to the idea that computation is highly distributed throughout a series of small computing elements analogous to a neuron. While this sentiment is standard, researchers chase this goal with different methods.

Examples

As early as 2006, researchers at Georgia Tech published a field programmable neural array. This chip was the first in a line of increasingly complex arrays of floating gate transistors that allowed programmability of charge on the gates of MOSFETs to model the channel-ion characteristics of neurons in the brain and was one of the first cases of a silicon programmable array of neurons.

In November 2011, a group of MIT researchers created a computer chip that mimics the analog, ion-based communication in a synapse between two neurons using 400 transistors and standard CMOS manufacturing techniques.

In June 2012, spintronic researchers at Purdue University presented a paper on the design of a neuromorphic chip using lateral spin valves and memristors. They argue that the architecture works similarly to neurons and can therefore be used to test methods of reproducing the brain's processing. In addition, these chips are significantly more energy-efficient than conventional ones.

Research at HP Labs on Mott memristors has shown that while they can be non-volatile, the volatile behavior exhibited at temperatures significantly below the phase transition temperature can be exploited to fabricate a neuristor, a biologically-inspired device that mimics behavior found in neurons. In September 2013, they presented models and simulations that show how the spiking behavior of these neuristors can be used to form the components required for a Turing machine.

Neurogrid, built by Brains in Silicon at Stanford University, is an example of hardware designed using neuromorphic engineering principles. The circuit board is composed of 16 custom-designed chips, referred to as NeuroCores. Each NeuroCore's analog circuitry is designed to emulate neural elements for 65536 neurons, maximizing energy efficiency. The emulated neurons are connected using digital circuitry designed to maximize spiking throughput.

A research project with implications for neuromorphic engineering is the Human Brain Project, which is attempting to simulate a complete human brain in a supercomputer using biological data. It is made up of a group of researchers in neuroscience, medicine, and computing. Henry Markram, the project's co-director, has stated that the project proposes to establish a foundation to explore and understand the brain and its diseases, and to use that knowledge to build new computing technologies. The three primary goals of the project are to better understand how the pieces of the brain fit and work together, to understand how to objectively diagnose and treat brain diseases, and to use the understanding of the human brain to develop neuromorphic computers. The expectation that simulating a complete human brain will require a supercomputer a thousand times more powerful than today's machines encourages the current focus on neuromorphic computers. The European Commission has allocated $1.3 billion to the project.

Other research with implications for neuromorphic engineering involves the BRAIN Initiative and the TrueNorth chip from IBM. Neuromorphic devices have also been demonstrated using nanocrystals, nanowires, and conducting polymers. There is also development of a memristive device for quantum neuromorphic architectures.

Intel unveiled its neuromorphic research chip, called “Loihi”, in October 2017. The chip uses an asynchronous spiking neural network (SNN) to implement adaptive, self-modifying, event-driven, fine-grained parallel computations for learning and inference with high efficiency.

IMEC, a Belgium-based nanoelectronics research center, demonstrated the world's first self-learning neuromorphic chip. The brain-inspired chip, based on OxRAM technology, is capable of self-learning and has been demonstrated composing music. IMEC released the 30-second tune composed by the prototype. The chip was sequentially loaded with songs in the same time signature and style: old Belgian and French flute minuets, from which the chip learned the rules at play and then applied them.

The Blue Brain Project, led by Henry Markram, aims to build biologically detailed digital reconstructions and simulations of the mouse brain. The Blue Brain Project has created in silico models of rodent brains, while attempting to replicate as many details about its biology as possible. The supercomputer-based simulations offer new perspectives on understanding the structure and functions of the brain.

The European Union funded a series of projects at the University of Heidelberg, which led to the development of BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems), a hybrid analog neuromorphic supercomputer located at Heidelberg University, Germany. It was developed as part of the Human Brain Project neuromorphic computing platform and is the complement to the SpiNNaker supercomputer (which is based on digital technology). The architecture used in BrainScaleS mimics biological neurons and their connections on a physical level; additionally, since the components are made of silicon, these model neurons operate on average 864 times faster than their biological counterparts (24 hours of real time corresponds to 100 seconds in the machine simulation).

Neuromorphic sensors

The concept of neuromorphic systems can be extended to sensors (not just to computation). An example of this applied to detecting light is the retinomorphic sensor or, when employed in an array, the event camera.

Ethical considerations

While the interdisciplinary concept of neuromorphic engineering is relatively new, many of the same ethical considerations apply to neuromorphic systems as apply to human-like machines and artificial intelligence in general. However, the fact that neuromorphic systems are designed to mimic a human brain gives rise to unique ethical questions surrounding their usage.

However, a practical counterpoint is that neuromorphic hardware, like artificial "neural networks", is an immensely simplified model of how the brain operates and processes information, with much lower complexity in terms of size and functional technology and a much more regular structure in terms of connectivity. Comparing neuromorphic chips to the brain is a very crude comparison, similar to comparing a plane to a bird just because they both have wings and a tail. The fact remains that neural cognitive systems are many orders of magnitude more energy- and compute-efficient than current state-of-the-art AI, and neuromorphic engineering is an attempt to narrow this gap by taking inspiration from the brain's mechanisms, just as many engineering designs have bio-inspired features.

Democratic concerns

Significant ethical limitations may be placed on neuromorphic engineering due to public perception. Special Eurobarometer 382: Public Attitudes Towards Robots, a survey conducted by the European Commission, found that 60% of European Union citizens wanted a ban of robots in the care of children, the elderly, or the disabled. Furthermore, 34% were in favor of a ban on robots in education, 27% in healthcare, and 20% in leisure. The European Commission classifies these areas as notably “human.” The report cites increased public concern with robots that are able to mimic or replicate human functions. Neuromorphic engineering, by definition, is designed to replicate the function of the human brain.

The democratic concerns surrounding neuromorphic engineering are likely to become even more profound in the future. The European Commission found that EU citizens between the ages of 15 and 24 are more likely to think of robots as human-like (as opposed to instrument-like) than EU citizens over the age of 55. When presented an image of a robot that had been defined as human-like, 75% of EU citizens aged 15–24 said it corresponded with the idea they had of robots while only 57% of EU citizens over the age of 55 responded the same way. The human-like nature of neuromorphic systems, therefore, could place them in the categories of robots many EU citizens would like to see banned in the future.

Personhood

As neuromorphic systems have become increasingly advanced, some scholars have advocated for granting personhood rights to these systems. If the brain is what grants humans their personhood, to what extent does a neuromorphic system have to mimic the human brain to be granted personhood rights? Critics of technology development in the Human Brain Project, which aims to advance brain-inspired computing, have argued that advancement in neuromorphic computing could lead to machine consciousness or personhood. If these systems are to be treated as people, critics argue, then many tasks humans perform using neuromorphic systems, including the act of termination of neuromorphic systems, may be morally impermissible as these acts would violate the autonomy of the neuromorphic systems.

Dual use (military applications)

The Joint Artificial Intelligence Center, a branch of the U.S. military, is a center dedicated to the procurement and implementation of AI software and neuromorphic hardware for combat use. Specific applications include smart headsets/goggles and robots. JAIC intends to rely heavily on neuromorphic technology to connect "every fighter every shooter" within a network of neuromorphic-enabled units.

Legal considerations

Skeptics have argued that there is no way to apply the electronic personhood, the concept of personhood that would apply to neuromorphic technology, legally. In a letter signed by 285 experts in law, robotics, medicine, and ethics opposing a European Commission proposal to recognize “smart robots” as legal persons, the authors write, “A legal status for a robot can’t derive from the Natural Person model, since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the right to remuneration or the right to citizenship, thus directly confronting the Human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms.”

Ownership and property rights

There is significant legal debate around property rights and artificial intelligence. In Acohs Pty Ltd v. Ucorp Pty Ltd, Justice Christopher Jessup of the Federal Court of Australia found that the source code for Material Safety Data Sheets could not be copyrighted as it was generated by a software interface rather than a human author. The same question may apply to neuromorphic systems: if a neuromorphic system successfully mimics a human brain and produces a piece of original work, who, if anyone, should be able to claim ownership of the work?

Neuromemristive systems

Neuromemristive systems are a subclass of neuromorphic computing systems that focus on the use of memristors to implement neuroplasticity. While neuromorphic engineering focuses on mimicking biological behavior, neuromemristive systems focus on abstraction. For example, a neuromemristive system may replace the details of a cortical microcircuit's behavior with an abstract neural network model.

Several neuron-inspired threshold logic functions implemented with memristors have applications in high-level pattern recognition. Recently reported applications include speech recognition, face recognition and object recognition. They also find use in replacing conventional digital logic gates.

For ideal passive memristive circuits there is an exact equation (the Caravelli–Traversa–Di Ventra equation) for the internal memory of the circuit:

    \frac{d}{dt}\vec{w}(t) = \alpha\vec{w}(t) - \frac{1}{\beta}\left(I + \xi\,\Omega W(t)\right)^{-1}\Omega\,\vec{s}(t)

as a function of the properties of the physical memristive network and the external sources. In the equation above, \alpha is the "forgetting" time-scale constant, \xi = r - 1 with r = R_{\text{off}}/R_{\text{on}} is the ratio of the off and on values of the limit resistances of the memristors, \vec{s} is the vector of the sources of the circuit, and \Omega is a projector on the fundamental loops of the circuit. The constant \beta has the dimension of a voltage and is associated with the properties of the memristor; its physical origin is the charge mobility in the conductor. The diagonal matrix W = \operatorname{diag}(\vec{w}) and the vector \vec{w} are, respectively, the internal values of the memristors, with values between 0 and 1. This equation thus requires adding extra constraints on the memory values in order to be reliable.
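For intuition, the scalar special case of this equation (a single memristor in a single loop, so \Omega = 1 and W = w; the parameter values below are illustrative assumptions, not physical constants) can be integrated with a simple Euler step in Python:

    import numpy as np

    alpha, beta, xi = 1.0, 1.0, 9.0   # forgetting rate, voltage scale, r - 1
    dt, steps = 1e-3, 5000
    w = 0.5                           # internal memory value, kept in [0, 1]

    for n in range(steps):
        s = np.sin(2 * np.pi * n * dt)               # external source term
        dw = alpha * w - (s / beta) / (1.0 + xi * w)  # scalar right-hand side
        w = min(max(w + dt * dw, 0.0), 1.0)           # enforce w in [0, 1]

    print(f"final memory value: {w:.3f}")

The clipping step implements the extra constraint mentioned above: the raw equation does not by itself keep the memory value inside the physical range [0, 1].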
