
Sunday, October 12, 2014

Plant communities in Holy Land can cope with climate change of 'biblical' dimensions

Oct 09, 2014
Original link: http://phys.org/news/2014-10-holy-cope-climate-biblical-dimensions.html

An international research team comprised of German, Israeli and American ecologists, including Dr. Claus Holzapfel, Dept. of Biological Sciences, Rutgers University-Newark, has conducted unique long-term experiments in Israel to test predictions of climate change, and has concluded that plant communities in the Holy Land can cope with climate change of "biblical" dimensions. Their findings appear in the current issue of Nature Communications.

When taking global climate change into account, many scientists predict dire ecological consequences around the world. The Middle East in particular has been thought to be vulnerable, since east Mediterranean ecosystems not only are hotspots of biodiversity, but also contain many of the wild ancestors of important crop plants and therefore harbor a rich genetic reservoir for them.

In a region with the lowest per-capita water availability, rainfall is predicted to decrease further in the near future, and could spell extreme hardship for the function of these unique ecosystems and possibly endanger the survival of important genetic resources.

For nine years the research team of German, Israeli and American ecologists subjected extremely species-rich plant communities to experimental drought designed to correspond to predicted future climate scenarios. For this, the study used four different ecosystems aligned along a steep, natural aridity gradient that ranges from extreme desert (3-4" annual rainfall) to moist Mediterranean woodland (32").


The recently published study demonstrates that, in contrast to predicted changes, no measurable changes were seen in the vegetation even after nine years of rainfall manipulations. None of the crucial vegetation characteristics changed appreciably: neither species richness and composition, nor density or biomass, a particularly important trait for these ecosystems, which are traditionally used as rangelands.

These conclusions were reached regardless of whether the sites were subjected to more or less rain.

"Based on our study, the going hypothesis that all arid regions will react strongly to climate change needs to be amended," stated Dr. Katja Tielbörger (University of Tübingen in Germany), the lead author of the study.

One of the reasons for the high resilience of the ecosystems studied is likely the high natural variability in rainfall for which the region has been known throughout history. The climate scenarios tested included a decrease of rainfall to about 30% of the current values. That amount of rainfall seems to fall within the natural "comfort zone" of wild-growing plants. Archeological sources (and similar descriptions in the Bible) speak of such dramatic variation in climate over the course of centuries.

The team of scientists implemented a novel experimental approach: irrigation and rain-out shelters were used to compare plots with changed climate against un-manipulated controls within each site, while the placement of sites along the steep aridity gradient also allowed a test of the long-standing assumption that, with climate change, species will track their climate zone and their ranges will simply shift.

Such shifts, commonly assumed by numerous climate-envelope models, have now for the first time been scientifically tested and have not been confirmed.

"Our experiment is likely the most extensive climate change study ever done, because of the number of sites involved, the long duration of experimental manipulations, and the immense species richness", stated Dr. Claus Holzapfel of Rutgers University-Newark, adding: "These facts add to the robustness of our results."

The study tempers the "doomsday" scenario of climate change for the arid Middle East, although the conclusions reached by the research team apply only to the specific regions studied. The authors of the study caution that these results should not be used to address global issues of climate change. However, the researchers maintain that their results are important for understanding and countering specific consequences of climate change in the Middle East.


More information: Katja Tielbörger, Mark. C. Bilton, Johannes Metz, Jaime Kigel, Claus Holzapfel, Edwin Lebrija-Trejos, Irit Konsens, Hadas A. Parag, Marcelo Sternberg: Middle-Eastern plant communities tolerate nine years of drought in a large-scale climate change experiment. Nature Communications Oct. 2014 www.nature.com/ncomms/2014/141… 2/pdf/ncomms6102.pdf

Journal reference: Nature Communications

Provided by Rutgers University

Monday, October 6, 2014

Everyone calm down, there is no “bee-pocalypse”

Shawn Regan
July 10, 2013
Original link:  http://qz.com/101585/everyone-calm-down-there-is-no-bee-pocalypse/
The media is abuzz once again with stories about dying bees. According to a new report from the USDA, scientists have been unable to pinpoint the cause of colony collapse disorder (CCD), the mysterious affliction causing honey bees to disappear from their hives. Possible factors include parasites, viruses, and a class of pesticides known as neonicotinoids. Whatever the cause, the results of a recent beekeeper survey suggest that the problem is not going away. For yet another year, nearly one-third of US honey bee colonies did not make it through the winter.

Given the variety of crops that rely on honey bees for pollination, the colony collapse story is an important one. But if you were to rely on media reports alone, you might believe that honey bees are in short supply. NPR recently declared that we may have reached “a crisis point for crops.” Others warned of an impending “beepocalypse” or a “beemageddon.”

In a rush to identify the culprit of the disorder, many journalists have made exaggerated claims about the impacts of CCD. Most have uncritically accepted that continued bee losses would be a disaster for America’s food supply. Others speculate about the coming of a second “silent spring.” Worse yet, many depict beekeepers as passive, unimaginative onlookers who stand idly by as their colonies vanish.

This sensational reporting has confused rather than informed discussions over CCD. Yes, honey bees are dying in above average numbers, and it is important to uncover what’s causing the losses, but it hardly spells disaster for bees or America’s food supply.

Consider the following facts about honey bees and CCD.

For starters, US honey bee colony numbers are stable, and they have been since before CCD hit the scene in 2006. In fact, colony numbers were higher in 2010 than in any year since 1999. How can this be? Commercial beekeepers, far from being passive victims, have actively rebuilt their colonies in response to increased mortality from CCD. Although average winter mortality rates have increased from around 15% before 2006 to more than 30%, beekeepers have been able to adapt to these changes and maintain colony numbers.


Source: USDA NASS Honey Production Report
Rebuilding colonies is a routine part of modern beekeeping. The most common method involves splitting healthy colonies into multiple hives. The new hives, known as “nucs,” require a new queen bee, which can be purchased readily from commercial queen breeders for about $15-$25 each. Many beekeepers split their hives late in the year in anticipation of winter losses. The new hives quickly produce a new brood and often replace more bees than are lost over the winter. Other methods of rebuilding colonies include buying packaged bees (about $55 for 12,000 worker bees and a fertilized queen) or replacing the queen to improve the health of the hive.

“The state of the honey bee population—numbers, vitality, and economic output—are the products of not just the impact of disease but also the economic decisions made by beekeepers and farmers,” economists Randal Rucker and Walter Thurman write in a summary of their working paper on the impacts of CCD. Searching through a number of economic measures, the researchers came to a surprising conclusion: CCD has had almost no discernible economic impact.

But you don’t need to rely on their study to see that CCD has had little economic effect. Data on colonies and honey production are publicly available from the USDA. Like honey bee numbers, US honey production has shown no pattern of decline since CCD was first detected. In 2010, honey production was 14% greater than it was in 2006. (To be clear, US honey production and colony numbers are lower today than they were 30 years ago, but as Rucker and Thurman explain, this gradual decline happened prior to 2006 and cannot be attributed to CCD).


Source: USDA NASS Honey Production Report
What about the prices of queen bees and packaged bees? Because of higher winter losses, beekeepers are forced to purchase more packaged queen and worker bees to rebuild their lost hives. Yet even these prices seem unaffected. Commercial queen breeders are able to rear large numbers of queen bees quickly, often in less than a month, putting little to no upward pressure on bee prices following CCD.

And what about the prices consumers pay for crops pollinated by honey bees? Are these skyrocketing along with fears of the beepocalypse? Rucker and Thurman find that the cost of CCD on almonds, one of the most important crops from a honey bee pollinating perspective, is trivial. The implied increase in the shelf price of a pound of Smokehouse Almonds is a mere 2.8 cents, and the researchers consider that to be an upper-bound estimate of the impact on fruits and vegetables.

There is, however, one measure that has been significantly affected by CCD—and that’s the pollination fees beekeepers charge almond producers. These fees have more than doubled in recent years, though the fees began rising a few years before CCD was reported. Rucker and Thurman attribute a portion of this increase to the onset of CCD. But even this impact has a bright side: For many beekeepers, the increase in almond pollination fees has more than offset the costs they have incurred rebuilding their lost colonies.

Overcoming CCD is not without its challenges, but beekeepers have thus far proven themselves adept at navigating such changing conditions. Honey bees have long been afflicted with a variety of diseases. The Varroa mite, a blood-thirsty bee parasite, has been a scourge of beekeepers since the 1980s. While CCD has resulted in larger and more mysterious losses, the resourcefulness of beekeepers remains.

Hannah Nordhaus, author of The Beekeeper’s Lament, warned that the scare stories evoked by CCD should serve as a cautionary tale to environmental journalists. “By engaging in simplistic and sometimes misleading environmental narratives—by exaggerating the stakes and brushing over the inconvenient facts that stand in the way of foregone conclusions­­—we do our field, and our subjects, a disservice,” she wrote in her 2011 essay “An Environmental Journalist’s Lament.”

“The overblown response to CCD in the media stems from a failure to appreciate the resilience of markets in accommodating shocks of various sorts,” write Rucker and Thurman. The ability of beekeepers and other market forces to adapt has kept food on the shelves, honey in the cupboard, and honey bees buzzing. Properly understood, the story of CCD is not one of doom and gloom, but one of the triumph and perseverance of beekeepers.

Sunday, September 21, 2014

Functionalism (philosophy of mind)

From Wikipedia, the free encyclopedia
Functionalism is a theory of the mind in contemporary philosophy, developed largely as an alternative to both the identity theory of mind and behaviorism. Its core idea is that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, by their causal relations to other mental states, sensory inputs, and behavioral outputs.[1] Functionalism is a theoretical level between the physical implementation and behavioral output.[2] Therefore, it is different from its predecessors of Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (declaring only physical substances) because it is only concerned with the effective functions of the brain, through its organization or its "software programs".

Since mental states are identified by a functional role, they are said to be realized on multiple levels; in other words, they are able to be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions. Just as computers are physical devices with an electronic substrate that perform computations on inputs to give outputs, so brains are physical devices with a neural substrate that perform computations on inputs and produce behaviors.
While functionalism has its advantages, there have been several arguments against it, claiming that it is an insufficient account of the mind.

Multiple realizability

An important part of some accounts of functionalism is the idea of multiple realizability. Since, according to standard functionalist theories, mental states are the corresponding functional role, mental states can be sufficiently explained without taking into account the underlying physical medium (e.g. the brain, neurons, etc.) that realizes such states; one need only take into account the higher-level functions in the cognitive system. Since mental states are not limited to a particular medium, they can be realized in multiple ways, including, theoretically, within non-biological systems, such as computers. In other words, a silicon-based machine could, in principle, have the same sort of mental life that a human being has, provided that its cognitive system realized the proper functional roles. Thus, mental states are individuated much like a valve; a valve can be made of plastic or metal or whatever material, as long as it performs the proper function (say, controlling the flow of liquid through a tube by blocking and unblocking its pathway).
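The valve analogy maps neatly onto a familiar programming idea (an analogy of ours, not from the original text): a functional role is like an interface, and any class that implements it counts as a realizer, whatever it is "made of". A minimal Python sketch:

    # Analogy only: a "valve" is whatever plays the valve role,
    # regardless of the material (here, the class) that realizes it.
    from typing import Protocol

    class Valve(Protocol):
        def block(self) -> None: ...
        def unblock(self) -> None: ...

    class PlasticValve:
        def block(self) -> None: print("plastic gate closes the tube")
        def unblock(self) -> None: print("plastic gate opens the tube")

    class MetalValve:
        def block(self) -> None: print("metal gate closes the tube")
        def unblock(self) -> None: print("metal gate opens the tube")

    def drain(valve: Valve) -> None:
        # Callers depend only on the role, never on the substrate.
        valve.unblock()
        valve.block()

    drain(PlasticValve())  # both realizers satisfy the same functional role
    drain(MetalValve())

On this picture, asking what material a mental state is "made of" is like asking what material drain() requires: the question does not arise at the level of the role.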
However, there have been some functionalist theories that combine with the identity theory of mind, which deny multiple realizability. Such Functional Specification Theories (FSTs) (Levin, § 3.4), as they are called, were most notably developed by David Lewis[3] and David Malet Armstrong.[4]
According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process that realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race with a cognitive system composed of significantly different material from humans' (e.g., silicon-based) but performed the same functions as human mental states (e.g., they tend to yell "Yowzas!" when poked with sharp objects, etc.) then we would say that their type of mental state is perhaps similar to ours, but too different to say it's the same. For some, this may be a disadvantage to FSTs. Indeed, one of Hilary Putnam's[5][6] arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types of functionalism

Machine-state functionalism


Artistic representation of a Turing machine.

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam.[5][6] This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" or computers capable of computing any given algorithm which were developed by Alan Turing (called Universal Turing machines).

In non-technical terms, a Turing machine can be visualized as an indefinitely long tape divided into rectangles (the memory) with a box-shaped scanning device that sits over and scans one component of the memory at a time. Each unit is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:
  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.
An extremely simple example is a Turing machine which writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:


      State One                  State Two                  State Three
B     write 1; stay in state 1   write 1; stay in state 2   write 1; stay in state 3
1     go right; go to state 2    go right; go to state 3    [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, it will halt.
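The table is small enough to run directly. Here is a minimal Python sketch (our illustration, not part of the original article) that encodes the machine table as a dictionary mapping (state, symbol) pairs to actions, and steps a scanning head along the tape:

    # Minimal sketch of the three-state Turing machine described above.
    # Each entry maps (state, symbol) to (symbol to write, head move, next state);
    # None means [halt]. "Go right" rewrites the same symbol and moves the head.
    TABLE = {
        (1, "B"): ("1", 0, 1),  # write 1; stay in state 1
        (1, "1"): ("1", 1, 2),  # go right; go to state 2
        (2, "B"): ("1", 0, 2),  # write 1; stay in state 2
        (2, "1"): ("1", 1, 3),  # go right; go to state 3
        (3, "B"): ("1", 0, 3),  # write 1; stay in state 3
        (3, "1"): None,         # [halt]
    }

    def run(tape, state=1, head=0, max_steps=100):
        tape = list(tape)
        for _ in range(max_steps):
            action = TABLE[(state, tape[head])]
            if action is None:  # halt
                break
            symbol, move, state = action
            tape[head] = symbol
            head += move
        return "".join(tape)

    print(run("BBB"))  # prints 111

Notice that each state is fully specified by the lookup table alone; nothing about the machine's material construction enters into it, which is exactly the point made next.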

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

According to machine-state functionalism, the nature of a mental state is just like the nature of the automaton states described above. Just as state one simply is the state in which, given an input B, such and such happens, so being in pain is the state which disposes one to cry "ouch", become distracted, wonder what the cause is, and so forth.

Psychofunctionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psychofunctionalism.

The fundamental idea of psychofunctionalism is that psychology is an irreducibly complex science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances, and so on—this is what accounts for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur and not by intrinsic properties of the phonemes they comprise. In the case of ordinary language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:
Mental state M is the state that is caused by P and causes Q.
For example, the state of pain is caused by sitting on a tack and causes loud cries as well as higher-order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the submental states and the (largely fictitious) propositional attitudes they describe.
Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in the determination of which mental state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics.
Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Given the transitivity of supervenience, if M1 supervenes on M2, and M2 supervenes on P (physical base), then M1 and M2 both supervene on P, even though they are (allegedly) totally different sets of mental facts.

Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Functionalism and physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's Mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.

Criticism

China brain

Ned Block[7] argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron (forming what has come to be called a "Blockhead"). According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. However, Block argues, this is patently absurd, so there must be something wrong with the thesis of functionalism since it would allow this to be a legitimate description of a mind.
Some functionalists believe China would have qualia but that due to the size it is impossible to imagine China being conscious.[8] Indeed, it may be the case that we are constrained by our theory of mind[9] and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will exist across all hardware, or they will not exist at all and are illusory.[10]

The Chinese room

The Chinese room argument by John Searle[11] is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. In short, Searle describes a person who only speaks English who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then ordered by people outside of the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside of the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply based on these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules; that is, functionalism is an insufficient theory of the mind.
As noted above, in connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. Of course, Searle responds that there is nothing more than syntax going on at the higher-level as well, so this reply is subject to the same initial problems. Furthermore, Searle suggests the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most specifically proposed as an objection to functionalism by Ned Block.[7][12] This thought experiment involves supposing that there is a person, call her Jane, who was born with a condition which makes her see the opposite spectrum of light that is normally perceived. Unlike "normal" people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, one can see that all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.[13]
David Chalmers tries to show[14] that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own, would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. The general idea is that since it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least in the human case. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile just could be maintained, irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious.

The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that it begs the central question: How could Chalmers know that functional profile can be preserved, for example while the conscious subject's brain is being supplanted with a silicon substitute, unless he already assumes that the subject's possibly changing qualia would not be a determining factor? And while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to. Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: His point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible, and it is perfectly possible that we might subsequently discover this fact by other means, Chalmers' argument fails to demonstrate that they are impossible.

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam,[15] is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H₂O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H₂O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H₂O, while your duplicate's is of XYZ.
Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of inputs and outputs to include the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and thereby the Twin Earth argument does not constitute a genuine argument: the assumption entails a flat denial of functionalism itself (which would say that the two waters would not produce different mental states, because the functional relationships would remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor[12] referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counter-intuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.

Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic-synthetic distinction, as many seek to avoid).

Triviality arguments

Hilary Putnam,[16] John Searle,[17] and others[18][19] have offered arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that either functionalism turns out to reduce to behaviorism, or to complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system are always at least slightly unique, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
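The mapping step of the triviality argument can be illustrated with a toy sketch (ours, purely to show the argument's structure): if the physical states in a trajectory are all distinct, a "realizing" map onto any desired sequence of mental states can always be constructed by brute pairing:

    # Toy version of the triviality worry: distinct physical states can be
    # mapped onto ANY desired run of functional ("mental") states.
    physical_trajectory = ["p0", "p1", "p2", "p3"]  # unique states over time
    desired_run = ["hungry", "deciding", "eating", "content"]

    # The "realization" is nothing more than pairing the two sequences off.
    realization = dict(zip(physical_trajectory, desired_run))

    for p in physical_trajectory:
        print(p, "->", realization[p])  # the system "implements" the mind

Because such a pairing exists for any physical system whose states never exactly repeat, the functionalist needs some further constraint (causal, counterfactual, or behavioral) to rule it out, which is the dilemma the argument presses.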

Peter Godfrey-Smith has argued further[20] that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and at each possible input, such a mapping will always exist so there will be a transducer layer which will produce whatever physical behavior is desired.

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.[20]

Hard problem of consciousness

From Wikipedia, the free encyclopedia
The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes.[1] David Chalmers, who introduced the term "hard problem" of consciousness,[2] contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]

The existence of a "hard problem" is controversial and has been disputed by some philosophers.[4][5] An answer could lie in understanding the roles that physical processes play in creating consciousness and the extent to which these processes create our subjective qualities of experience.[3]

Several questions about consciousness must be resolved in order to acquire a full understanding of it. These questions include, but are not limited to, whether being conscious could be wholly described in physical terms, such as the aggregation of neural processes in the brain. If consciousness cannot be explained exclusively by physical events, it must transcend the capabilities of physical systems and require an explanation by nonphysical means. For philosophers who assert that consciousness is nonphysical in nature, there remains a question about what outside of physical theory is required to explain consciousness.

Formulation of the problem

Chalmers' formulation

In Facing Up to the Problem of Consciousness, Chalmers wrote:[3]

Easy problems

Chalmers contrasts the hard problem with a number of (relatively) easy problems that consciousness presents. He emphasizes that what the easy problems have in common is that they all represent some ability, or the performance of some function or behavior:
  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

Other formulations

Various formulations of the "hard problem" include:
  • "How is it that some organisms are subjects of experience?"
  • "Why does awareness of sensory information exist at all?"
  • "Why do qualia exist?"
  • "Why is there a subjective component to experience?"
  • "Why aren't we philosophical zombies?"
James Trefil notes that "it is the only major question in the sciences that we don't even know how to ask."[6]

Historical predecessors

The hard problem has scholarly antecedents considerably earlier than Chalmers.

Gottfried Leibniz wrote, as an example also known as Leibniz's gap:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.[7]
Isaac Newton wrote in a letter to Henry Oldenburg:
to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.[8]
T.H. Huxley remarked:
how it is that any thing so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp.[9]

Responses

Scientific attempts

There have been scientific attempts to explain subjective aspects of consciousness, which is related to the binding problem in neuroscience. Many eminent theorists, including Francis Crick and Roger Penrose, have worked in this field. Nevertheless, even as sophisticated accounts are given, it is unclear if such theories address the hard problem. Eliminative materialist philosopher Patricia Smith Churchland has famously remarked about Penrose's theories that "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."[10]

Consciousness is fundamental or elusive

Some philosophers, including David Chalmers and Alfred North Whitehead, argue that conscious experience is a fundamental constituent of the universe, a form of panpsychism sometimes referred to as panexperientialism. Chalmers argues that a "rich inner life" is not logically reducible to the functional properties of physical processes. He states that consciousness must be described using nonphysical means. This description involves a fundamental ingredient capable of clarifying phenomena that have not been explained using physical means. Use of this fundamental property, Chalmers argues, is necessary to explain certain functions of the world, much like other fundamental features, such as mass and time, and to explain significant principles in nature.

Thomas Nagel has posited that experiences are essentially subjective (accessible only to the individual undergoing them), while physical states are essentially objective (accessible to multiple individuals). So at this stage, we have no idea what it could even mean to claim that an essentially subjective state just is an essentially non-subjective state. In other words, we have no idea of what reductivism really amounts to.[11]

New mysterianism, such as that of Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[12]

Deflationary accounts

Some philosophers, such as Daniel Dennett,[4] Stanislas Dehaene,[5] and Peter Hacker,[13] oppose the idea that there is a hard problem. These theorists argue that once we really come to understand what consciousness is, we will realize that the hard problem is unreal. For instance, Dennett asserts that the so-called hard problem will be solved in the process of answering the easy ones.[4] In contrast with Chalmers, he argues that consciousness is not a fundamental feature of the universe and instead will eventually be fully explained by natural phenomena. Instead of involving the nonphysical, he says, consciousness merely plays tricks on people so that it appears nonphysical—in other words, it simply seems like it requires nonphysical features to account for its powers. In this way, Dennett compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[14]

To show how people might be commonly fooled into overstating the powers of consciousness, Dennett describes a normal phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images.[15] He uses this concept to argue that the overestimation of the brain's visual processing implies that the conception of our consciousness is likely not as pervasive as we make it out to be. He claims that this error of making consciousness more mysterious than it is could be a misstep in any developments toward an effective explanatory theory. Critics such as Galen Strawson reply that, in the case of consciousness, even a mistaken experience retains the essential face of experience that needs to be explained, contra Dennett.

To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[4] He states that consciousness itself is driven simply by these functions, and to strip them away would wipe out any ability to identify thoughts, feelings, and consciousness altogether. So, unlike Chalmers and other dualists, Dennett says that the easy problems and the hard problem cannot be separated from each other. To him, the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[14]

Dehaene's argument has similarities with those of Dennett. He says Chalmers' 'easy problems of consciousness' are actually the hard problems and the 'hard problems' are based only upon intuitions that, according to Dehaene, are continually shifting as understanding evolves. "Once our intuitions are educated ...Chalmers' hard problem will evaporate" and "qualia...will be viewed as a peculiar idea of the prescientific era, much like vitalism...[Just as science dispatched vitalism] the science of consciousness will eat away at the hard problem of consciousness until it vanishes."[5]

Like Dennett, Peter Hacker argues that the hard problem is fundamentally incoherent and that "consciousness studies," as it exists today, is "literally a total waste of time:"[13]
“The whole endeavour of the consciousness studies community is absurd – they are in pursuit of a chimera. They misunderstand the nature of consciousness. The conception of consciousness which they have is incoherent. The questions they are asking don’t make sense. They have to go back to the drawing board and start all over again.”
Critics of Dennett's approach, such as David Chalmers and Thomas Nagel, argue that Dennett's argument misses the point of the inquiry by merely re-defining consciousness as an external property and ignoring the subjective aspect completely. This has led detractors to refer to Dennett's book Consciousness Explained as Consciousness Ignored or Consciousness Explained Away.[4] Dennett discussed this at the end of his book with a section entitled Consciousness Explained or Explained Away?[15]

Glenn Carruthers and Elizabeth Schier argue that the main arguments for the existence of a hard problem (philosophical zombies, Mary's room, and Nagel's bats) are only persuasive if one already assumes that "consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem." Hence, the arguments beg the question. The authors suggest that "instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments."[16] Contrary to this line of argument, Chalmers says: "Some may be led to deny the possibility [of zombies] in order to make some theory come out right, but the justification of such theories should ride on the question of possibility, rather than the other way round".[17]:96

A notable deflationary account is the Higher-Order Thought theories of consciousness.[18][19] Peter Carruthers discusses "recognitional concepts of experience", that is, "a capacity to recognize [a] type of experience when it occurs in one's own mental life", and suggests such a capacity does not depend upon qualia.[20] The most common argument against deflationary accounts and eliminative materialism is the argument from qualia: that conscious experiences are irreducible to physical states, or that current popular definitions of "physical" are incomplete. The deflationary reply is that one and the same reality can appear in different ways, and that the numerical difference of these ways is consistent with a unitary mode of existence of the reality. Critics of the deflationary approach object that qualia are a case where a single reality cannot have multiple appearances. As John Searle points out: "where consciousness is concerned, the existence of the appearance is the reality."[21]

Massimo Pigliucci distances himself from eliminativism, but he insists that the hard problem is still misguided, resulting from a "category mistake":[22]
Of course an explanation isn't the same as an experience, but that’s because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.

Quantum mind

From Wikipedia, the free encyclopedia
The quantum mind or quantum consciousness[1] hypothesis proposes that classical mechanics cannot explain consciousness, while quantum mechanical phenomena, such as quantum entanglement and superposition, may play an important part in the brain's function, and could form the basis of an explanation of consciousness. It is not a single theory, but rather a collection of distinct ideas.

A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, whereas quantum mechanics can. The idea that quantum theory has something to do with the workings of the mind goes back to Eugene Wigner, who assumed that the wave function collapses due to its interaction with consciousness. However, most contemporary physicists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.[2] Physicist Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons."[3]

The philosopher David Chalmers has argued against quantum consciousness, though he has discussed how quantum mechanics may relate to dualistic consciousness.[4] Indeed, Chalmers is skeptical of the ability of any new physics to resolve the hard problem of consciousness.[5][6]

Description of main quantum mind approaches

David Bohm

David Bohm took the view that quantum theory and relativity contradicted one another, and that this contradiction implied that there existed a more fundamental level in the physical universe.[7] He claimed that both quantum theory and relativity pointed towards this deeper theory, which he formulated in terms of a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed implicate order applies both to matter and consciousness, and he suggests that it could explain the relationship between them. Mind and matter are here seen as projections into our explicate order from the underlying reality of the implicate order. Bohm claims that when we look at matter in space, we find nothing in these concepts that helps us to understand consciousness.
In trying to describe the nature of consciousness, Bohm discusses the experience of listening to music. He believed that the feeling of movement and change that makes up our experience of music derives from the immediate past and the present being held in the brain together, with the notes from the past seen as transformations rather than memories. The notes that were implicate in the immediate past are seen as becoming explicate in the present. Bohm views this as consciousness emerging from the implicate order.

Bohm sees the movement, change or flow, and also the coherence of experiences such as listening to music, as a manifestation of the implicate order. He claims to derive evidence for this from the work of Jean Piaget[8] in studying infants. He states that these studies show that young children have to learn about time and space because they are part of the explicate order, but have a "hard-wired" understanding of movement because it is part of the implicate order. He compares this "hard-wiring" to Chomsky's theory that grammar is "hard-wired" into young human brains.

In his writings, Bohm never proposed any specific brain mechanism by which his implicate order could emerge in a way that was relevant to consciousness, nor any means by which the propositions could be tested or falsified.

Roger Penrose and Stuart Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately, and only later collaborated to produce Orch-OR in the early 1990s. The theory was reviewed and updated by the original authors in late 2013.[9][10]
Penrose's controversial argument began from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal proof system cannot prove its own consistency, Gödel-unprovable results are provable by human mathematicians. He took this disparity to mean that human mathematicians are not describable as formal proof systems and are therefore not running a computable algorithm.
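
The logical shape of the Gödel step, in its standard textbook form (a sketch of the usual presentation, not a quotation of Penrose's full argument): for any consistent formal system F strong enough to express arithmetic, one can construct a sentence G_F that asserts its own unprovability in F, so that

    F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F,

yet, on the assumption that F is consistent, G_F is true. Penrose's contested further claim is that human mathematicians can see that G_F is true, and therefore cannot be equivalent to any such F.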

Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, Penrose proposed a new form of wave function collapse that occurs in isolation, called objective reduction. He suggested that each quantum superposition has its own piece of spacetime curvature and that, when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing, but instead a non-computable influence in spacetime geometry, from which mathematical understanding and, by later extension, consciousness derive.
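
Penrose's criterion is usually stated quantitatively as a collapse-time estimate (a standard summary of objective reduction rather than a detail given in the text above), with E_G denoting the gravitational self-energy of the difference between the superposed mass distributions:

    \tau \approx \frac{\hbar}{E_G}

The larger the mass displacement in the superposition, the larger E_G and the sooner the self-collapse; for macroscopic objects \tau is minuscule, while for well-isolated microscopic systems it can exceed the duration of any experiment.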

Originally, Penrose lacked a detailed proposal for how quantum processing could be implemented in the brain. However, Hameroff read Penrose's work, and suggested that microtubules would be suitable candidates.

Microtubules are composed of tubulin protein dimer subunits. The tubulin dimers each have hydrophobic pockets that are 8 nm apart and which may contain delocalised pi electrons. Tubulins have other smaller non-polar regions that contain pi electron-rich indole rings separated by only about 2 nm. Hameroff proposes that these electrons are close enough to become quantum entangled.[11] Hameroff originally suggested the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited.[12] He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too has been experimentally discredited.[13]

Furthermore, he proposed that condensates in one neuron could extend to many others via gap junctions between neurons, thus forming a macroscopic quantum feature across an extended area of the brain. The collapse of the wave function of this extended condensate was suggested to provide non-computational access to mathematical understanding and, ultimately, conscious experience, which are hypothetically embedded in the geometry of spacetime.

However, Orch-OR made numerous false biological predictions and is considered to be an extremely poor model of brain physiology. The proposed predominance of 'A' lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al.,[14][15] who showed that all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified.[16] Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs); however, De Zeeuw et al. showed this to be impossible,[17] since DLBs are located micrometers away from gap junctions.[18]

In January 2014, Hameroff and Penrose announced that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013[19] confirms the hypothesis of the Orch-OR theory.[10][20]

Umezawa, Vitiello, Freeman, Kak

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman have proposed a dialog model of the mind, where this dialog takes place between the classical and the quantum parts of the brain.[21][22] Quantum field theory models of brain dynamics are fundamentally different from the Penrose-Hameroff theory. Subhash Kak has proposed that the physical substratum to neural networks has a quantum basis,[23] but he also points out that the quantum mind will still have machine-like limitations.[24] He points to a role for quantum theory in the distinction between machine intelligence and biological intelligence.[25][26]

Henry Stapp

Henry Stapp favors the idea that quantum waves are reduced only when they interact with consciousness. He argues, from the Orthodox Quantum Mechanics of John von Neumann, that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. On this view, the collapse takes place in connection with the expectations of the observer associated with the state.
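
The von Neumann ingredient that Stapp builds on is the projection postulate of standard quantum mechanics (the textbook rule, not anything unique to Stapp's interpretation): when the alternative labelled k is selected, the state updates as

    |\psi\rangle \mapsto \frac{P_k |\psi\rangle}{\lVert P_k |\psi\rangle \rVert}

where P_k is the projection operator onto that alternative. Stapp's controversial step is to identify the selection event itself with a conscious choice.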

His theory of how mind may interact with matter via quantum processes in the brain differs from that of Penrose and Hameroff.[27]

Criticism by Max Tegmark

The main argument against the quantum mind proposition is that quantum states in the brain would decohere before they reached a spatial or temporal scale at which they could be useful for neural processing. This argument was elaborated by the physicist Max Tegmark. Based on his calculations, Tegmark concluded that quantum systems in the brain decohere at sub-picosecond timescales, commonly assumed to be too short to control brain function.[29][30] (In photosynthetic organisms, by contrast, quantum coherence is involved in the efficient transfer of energy within the timescales Tegmark calculated; see quantum biology.[28])
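
The orders of magnitude usually quoted from Tegmark's calculation make the gap concrete (figures summarized from his published estimates, not stated in the paragraph above):

    \tau_{\mathrm{dec}} \sim 10^{-20}\text{--}10^{-13}\ \mathrm{s} \qquad \text{versus} \qquad \tau_{\mathrm{neural}} \sim 10^{-3}\text{--}10^{-1}\ \mathrm{s}

a mismatch of at least ten orders of magnitude between how long quantum coherence would survive in the warm, wet brain and how long neural processes take.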

Romanization (cultural)
