
Friday, March 10, 2023

Visual space

From Wikipedia, the free encyclopedia

Visual space is the experience of space by an aware observer. It is the subjective counterpart of the space of physical objects. There is a long history, first in philosophy and later in psychology, of writings describing visual space and its relationship to the space of physical objects. A partial list of authors would include René Descartes, Immanuel Kant, Hermann von Helmholtz, and William James.

Object Space and Visual Space.

Space of physical objects

The location and shape of physical objects can be accurately described with the tools of geometry. For practical purposes the space we occupy is Euclidean. It is three-dimensional and measurable using tools such as rulers. It can be quantified using coordinate systems such as the Cartesian x, y, z, or polar coordinates with angles of elevation and azimuth and a distance from an arbitrary origin.
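To make the two coordinate descriptions concrete, here is a minimal sketch (the function names and the sample point are illustrative, not from the article) converting a point in object space between Cartesian x, y, z and the azimuth/elevation/distance form mentioned above.

    import math

    def cartesian_to_spherical(x, y, z):
        """Return (azimuth, elevation, distance) for a point in object space."""
        distance = math.sqrt(x * x + y * y + z * z)
        azimuth = math.atan2(y, x)                        # angle in the horizontal plane
        elevation = math.asin(z / distance) if distance else 0.0
        return azimuth, elevation, distance

    def spherical_to_cartesian(azimuth, elevation, distance):
        """Inverse transform back to Cartesian x, y, z."""
        x = distance * math.cos(elevation) * math.cos(azimuth)
        y = distance * math.cos(elevation) * math.sin(azimuth)
        z = distance * math.sin(elevation)
        return x, y, z

    # Example: a point 2 m ahead, 1 m to the left, 0.5 m up.
    az, el, d = cartesian_to_spherical(2.0, 1.0, 0.5)
    print(spherical_to_cartesian(az, el, d))              # recovers (2.0, 1.0, 0.5) up to rounding

Either description picks out the same physical point; only the bookkeeping differs.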

Space of visual percepts

Percepts, the counterparts in the aware observer's conscious experience of objects in physical space, constitute an ordered ensemble, but, as Ernst Cassirer explained, visual space cannot be measured with rulers. Historically, philosophers used introspection and reasoning to describe it. With the development of psychophysics, beginning with Gustav Fechner, there has been an effort to develop suitable experimental procedures that allow objective descriptions of visual space, including geometric descriptions, to be developed and tested. An example illustrates the relationship between the concepts of object and visual space. Two straight lines are presented to an observer who is asked to set them so that they appear parallel. When this has been done, the lines are parallel in visual space. A comparison is then possible with the actual measured layout of the lines in physical space. Good precision can be achieved using these and other psychophysical procedures with human observers, or behavioral ones with trained animals.

Visual Space and the Visual Field

The visual field, the area or extent of physical space that is being imaged on the retina, should be distinguished from the perceptual space in which visual percepts are located, which we call visual space. Confusion is caused by the use of Sehraum in the German literature for both. There is no doubt that Ewald Hering and his followers meant visual space in their writings.

Spaces: formal, physical, perceptual

The fundamental distinction between three kinds of space, which he called formal, physical and perceptual, was made by Rudolf Carnap. Mathematicians, for example, deal with ordered structures, ensembles of elements for which rules of logico-deductive relationships hold, limited solely by the requirement that they not be self-contradictory. These are the formal spaces. According to Carnap, studying physical space means examining the relationship between empirically determined objects. Finally, there is the realm of what students of Kant know as Anschauungen, immediate sensory experiences, often awkwardly translated as "apperceptions", which belong to perceptual spaces.

Visual space and geometry

Geometry is the discipline devoted to the study of space and the rules relating its elements to each other. For example, in Euclidean space the Pythagorean theorem provides a rule to compute distances from Cartesian coordinates. In a two-dimensional space of constant curvature, like the surface of a sphere, the rule is somewhat more complex but applies everywhere. On the two-dimensional surface of a football, the rule is more complex still and has different values depending on location. In well-behaved spaces such rules for measurement, called metrics, are classically handled by the mathematics developed by Riemann. Object space belongs to that class.
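As a concrete illustration of how the distance rule depends on the space, the sketch below (points and radius are arbitrary illustrations) contrasts the Pythagorean rule of flat Euclidean space with the great-circle rule on a sphere of constant positive curvature.

    import math

    def euclidean_distance(p, q):
        """Pythagorean rule in flat (Euclidean) space."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def great_circle_distance(lat1, lon1, lat2, lon2, radius=1.0):
        """Distance rule on a sphere of constant positive curvature."""
        central_angle = math.acos(
            math.sin(lat1) * math.sin(lat2)
            + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2)
        )
        return radius * central_angle

    # The same pair of nearby points on a unit sphere: the chord through flat
    # space is slightly shorter than the arc measured within the curved surface.
    p = (1.0, 0.0, 0.0)                                   # latitude 0, longitude 0
    q = (math.cos(0.1), math.sin(0.1), 0.0)               # latitude 0, longitude 0.1 rad
    print(euclidean_distance(p, q))                       # ~0.09996 (chord)
    print(great_circle_distance(0.0, 0.0, 0.0, 0.1))      # 0.1 exactly (arc)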

To the extent that it is reachable by scientifically acceptable probes, visual space as defined is also a candidate for such considerations. The first and remarkably prescient analysis was published by Ernst Mach in 1901. Under the heading "On Physiological, as Distinguished from Geometrical, Space," Mach states that "Both spaces are threefold manifoldnesses" but that the former is "...neither constituted everywhere and in all directions alike, nor infinite in extent, nor unbounded." A notable attempt at a rigorous formulation was made in 1947 by Rudolf Luneburg, who preceded his essay on the mathematical analysis of vision with a profound analysis of the underlying principles. When features are sufficiently singular and distinct, there is no problem about a correspondence between an individual item A in object space and its correlate A' in visual space. Questions can be asked and answered such as "If visual percepts A', B', C' are correlates of physical objects A, B, C, and if C lies between A and B, does C' lie between A' and B'?" In this manner, the possibility of visual space being metrical can be approached. If the exercise is successful, a great deal can be said about the nature of the mapping of physical space onto visual space.

On the basis of fragmentary psychophysical data from previous generations, Luneburg concluded that visual space was hyperbolic with constant curvature, meaning that elements can be moved throughout the space without changing shape. One of Luneburg's major arguments is that, in accord with a common observation, the transformation to a hyperbolic space maps infinity onto a dome (the sky). The Luneburg proposition gave rise to discussions and to attempts at corroborating experiments, which on the whole did not favor it.
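For readers who want the formal statement, a standard way of writing a three-dimensional metric of constant curvature K is the conformal line element below; this is the textbook Riemannian form, not Luneburg's own parametrization, and is shown only to make "constant curvature" precise.

    ds^{2} = \frac{dx^{2} + dy^{2} + dz^{2}}{\left(1 + \tfrac{K}{4}\,(x^{2} + y^{2} + z^{2})\right)^{2}},
    \qquad K < 0 \ \text{(hyperbolic)}, \quad K = 0 \ \text{(Euclidean)}, \quad K > 0 \ \text{(spherical)}.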

Basic to the problem, and underestimated by Luneburg the mathematician, is the question of how likely a mathematically viable formulation of the relationship between objects in physical space and percepts in visual space is to succeed. Any scientific investigation of visual space is colored by the kind of access we have to it, and by the precision, repeatability and generality of measurements. Insightful questions can be asked about the mapping of visual space to object space, but answers are mostly limited in the range of their validity. If the physical setting that satisfies the criterion of, say, apparent parallelism varies from observer to observer, or from day to day, or from context to context, so does the geometrical nature of, and hence the mathematical formulation for, visual space.

All these arguments notwithstanding, there is a major concordance between the locations of items in object space and their correlates in visual space. It is adequately veridical for us to navigate very effectively in the world; deviations from this situation are sufficiently notable to warrant special consideration. Visual space agnosia is a recognized neurological condition, and the many common distortions, called geometrical-optical illusions, are widely demonstrated but of minor consequence.

Neural representation of space

Fechner's inner and outer psychophysics

Its founder, Gustav Theodor Fechner, defined the mission of the discipline of psychophysics as the study of the functional relationship between the mental and material worlds—in this particular case, the visual and object spaces—but he acknowledged an intermediate step, which has since blossomed into the major enterprise of modern neuroscience. In distinguishing between inner and outer psychophysics, Fechner recognized that a physical stimulus generates a percept by way of an effect on the organism's sensory and nervous systems. Hence, without denying that its essence is the arc between object and percept, the inquiry can concern itself with the neural substrate of visual space.

Retinotopy and beyond

The topography of the retinal image is maintained through the visual pathway to the primary visual cortex.

Two major concepts dating back to the middle of the 19th century set the parameters of the discussion here. Johannes Müller emphasized that what matters in a neural path is the connection it makes, and Hermann Lotze, from psychological considerations, enunciated the principle of local sign. Put together in modern neuroanatomical terms, they mean that a nerve fiber from a fixed retinal location informs its target neurons in the brain about the presence of a stimulus in the location of the eye's visual field that is imaged there. The orderly array of retinal locations is preserved in the passage from the retina to the brain, and provides what is aptly called a "retinotopic" mapping in the primary visual cortex. Thus, in the first instance, brain activity retains the relative spatial ordering of the objects and lays the foundation for a neural substrate of visual space.
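A frequently used first approximation to such a retinotopic mapping is the complex-logarithm (log-polar) model associated with Schwartz. The toy sketch below is offered only to illustrate how an orderly, neighborhood-preserving mapping can be written down; the parameter a and the units are assumptions, not values from the text.

    import cmath

    def retina_to_cortex(x, y, a=0.5):
        """Map a retinal position (x, y) to a model 'cortical' position via w = log(z + a)."""
        w = cmath.log(complex(x, y) + a)
        return w.real, w.imag          # (eccentricity-like axis, polar-angle-like axis)

    # Neighboring retinal points remain neighbors in the model cortex (order is
    # preserved), while central (foveal) locations receive an expanded representation.
    print(retina_to_cortex(0.10, 0.00))
    print(retina_to_cortex(0.10, 0.05))
    print(retina_to_cortex(5.00, 0.00))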

Unfortunately, simplicity and transparency end here. Right at the outset, visual signals are analyzed not only for their position but also, separately in parallel channels, for many other attributes such as brightness, color, orientation and depth. No single neuron, or even neuronal center or circuit, represents both the nature of a target feature and its accurate location. The unitary mapping of object space into the coherent visual space, without internal contradictions or inconsistencies, that we as observers automatically experience demands concepts of conjoint activity in several parts of the nervous system that are at present beyond the reach of neurophysiological research.

Place cells

Though the details of the process by which the experience of visual space emerges remain opaque, a startling finding gives hope for future insights. Neural units have been demonstrated in the brain structure called the hippocampus that are active only when the animal is in a specific place in its environment.

Space and its content

Only on an astronomical scale are physical space and its contents interdependent. This major proposition of the general theory of relativity is of no concern in vision. For us, distances in object space are independent of the nature of the objects.

But this is not so simple in visual space. At a minimum, an observer judges the relative location of a few points of light in an otherwise dark visual field, a simplistic extension from object space that enabled Luneburg to make some statements about the geometry of visual space. In a more richly textured visual world, the various visual percepts carry with them prior perceptual associations which often affect their relative spatial disposition. Identical separations in physical space can look quite different (are quite different in visual space) depending on the features that demarcate them. This is particularly so in the depth dimension, because the apparatus by which values in the third visual dimension are assigned is fundamentally different from that for the height and width of objects.

Even in monocular vision, which physiologically has only two dimensions, cues of size, perspective, relative motion, etc., are used to assign depth differences to percepts. Looked at as a mathematical/geometrical problem, expanding a two-dimensional object manifold into a three-dimensional visual world is "ill-posed," i.e., not capable of a unique solution, yet it is accomplished quite effectively by the human observer.

The problem becomes less ill-posed when binocular vision allows actual determination of relative depth by stereoscopy, but the linkage of stereoscopy to the evaluation of distance in the other two dimensions is uncertain (see: stereoscopic depth rendition). Hence, the uncomplicated three-dimensional visual space of everyday experience is the product of many perceptual and cognitive layers superimposed on the physiological representation of the physical world of objects.

Philosophy of perception

From Wikipedia, the free encyclopedia
 
Do we see what is really there? The two areas of the image marked A and B, and the rectangle connecting them, are all of the same shade: our eyes automatically "correct" for the shadow of the cylinder.

The philosophy of perception is concerned with the nature of perceptual experience and the status of perceptual data, in particular how they relate to beliefs about, or knowledge of, the world. Any explicit account of perception requires a commitment to one of a variety of ontological or metaphysical views. Philosophers distinguish internalist accounts, which assume that perceptions of objects, and knowledge or beliefs about them, are aspects of an individual's mind, and externalist accounts, which state that they constitute real aspects of the world external to the individual. The position of naïve realism—the 'everyday' impression of physical objects constituting what is perceived—is to some extent contradicted by the occurrence of perceptual illusions and hallucinations and the relativity of perceptual experience as well as certain insights in science. Realist conceptions include phenomenalism and direct and indirect realism. Anti-realist conceptions include idealism and skepticism. Recent philosophical work has expanded on the philosophical features of perception by going beyond the single paradigm of vision (for instance, by investigating the uniqueness of olfaction).

Categories of perception

We may categorize perception as internal or external.

  • Internal perception (proprioception) tells us what is going on in our bodies; where our limbs are, whether we are sitting or standing, whether we are depressed, hungry, tired and so forth.
  • External or sensory perception (exteroception), tells us about the world outside our bodies. Using our senses of sight, hearing, touch, smell, and taste, we perceive colors, sounds, textures, etc. of the world at large. There is a growing body of knowledge of the mechanics of sensory processes in cognitive psychology.
  • Mixed internal and external perception (e.g., emotion and certain moods) tells us about what is going on in our bodies and about the perceived cause of our bodily perceptions.

The philosophy of perception is mainly concerned with exteroception.

Scientific accounts of perception

An object at some distance from an observer will reflect light in all directions, some of which will fall upon the corneae of the eyes, where it will be focussed upon each retina, forming an image. The disparity between the electrical output of these two slightly different images is resolved either at the level of the lateral geniculate nucleus or in a part of the visual cortex called 'V1'. The resolved data is further processed in the visual cortex where some areas have specialised functions, for instance area V5 is involved in the modelling of motion and V4 in adding colour. The resulting single image that subjects report as their experience is called a 'percept'. Studies involving rapidly changing scenes show the percept derives from numerous processes that involve time delays. Recent fMRI studies show that dreams, imaginings and perceptions of things such as faces are accompanied by activity in many of the same areas of brain as are involved with physical sight. Imagery that originates from the senses and internally generated imagery may have a shared ontology at higher levels of cortical processing.

Sound is analyzed in terms of pressure waves sensed by the cochlea in the ear. Data from the eyes and ears are combined to form a 'bound' percept. The problem of how this bound percept is produced is known as the binding problem.

Perception is analyzed as a cognitive process in which information processing is used to transfer information into the mind where it is related to other information. Some psychologists propose that this processing gives rise to particular mental states (cognitivism) whilst others envisage a direct path back into the external world in the form of action (radical behaviourism). Behaviourists such as John B. Watson and B.F. Skinner have proposed that perception acts largely as a process between a stimulus and a response but have noted that Gilbert Ryle's "ghost in the machine of the brain" still seems to exist. "The objection to inner states is not that they do not exist, but that they are not relevant in a functional analysis". This view, in which experience is thought to be an incidental by-product of information processing, is known as epiphenomenalism.

In contrast to the behaviourist approach to understanding the elements of cognitive processes, gestalt psychology sought to understand their organization as a whole, studying perception as a process of figure and ground.

Philosophical accounts of perception

Important philosophical problems derive from the epistemology of perception—how we can gain knowledge via perception—such as the question of the nature of qualia. Within the biological study of perception naive realism is unusable. However, outside biology modified forms of naive realism are defended. Thomas Reid, the eighteenth-century founder of the Scottish School of Common Sense, formulated the idea that sensation was composed of a set of data transfers but also declared that there is still a direct connection between perception and the world. This idea, called direct realism, has again become popular in recent years with the rise of postmodernism.

The succession of data transfers involved in perception suggests that sense data are somehow available to a perceiving subject that is the substrate of the percept. Indirect realism, the view held by John Locke and Nicolas Malebranche, proposes that we can only be aware of mental representations of objects. However, this may imply an infinite regress (a perceiver within a perceiver within a perceiver...), though a finite regress is perfectly possible. It also assumes that perception is entirely due to data transfer and information processing, an argument that can be avoided by proposing that the percept does not depend wholly upon the transfer and rearrangement of data. This still involves basic ontological issues of the sort raised by Leibniz, Locke, Hume, Whitehead and others, which remain outstanding particularly in relation to the binding problem, the question of how different perceptions (e.g. color and contour in vision) are "bound" to the same object when they are processed by separate areas of the brain.

Indirect realism (representational views) provides an account of issues such as perceptual contents, qualia, dreams, imaginings, hallucinations, illusions, the resolution of binocular rivalry, the resolution of multistable perception, the modelling of motion that allows us to watch TV, the sensations that result from direct brain stimulation, the update of the mental image by saccades of the eyes and the referral of events backwards in time. Direct realists must either argue that these experiences do not occur or else refuse to define them as perceptions.

Idealism holds that reality is limited to mental qualities, while skepticism challenges our ability to know anything outside our minds. One of the most influential proponents of idealism was George Berkeley, who maintained that everything was mind or dependent upon mind. Berkeley's idealism has two main strands: phenomenalism, in which physical events are viewed as a special kind of mental event, and subjective idealism. David Hume is probably the most influential proponent of skepticism.

A fourth theory of perception in opposition to naive realism, enactivism, attempts to find a middle path between direct realist and indirect realist theories, positing that cognition is a process of dynamic interplay between an organism's sensory-motor capabilities and the environment it brings forth. Instead of seeing perception as a passive process determined entirely by the features of an independently existing world, enactivism suggests that organism and environment are structurally coupled and co-determining. The theory was first formalized by Francisco Varela, Evan Thompson, and Eleanor Rosch in "The Embodied Mind".

Spatial representation

An aspect of perception that is common to both realists and anti-realists is the idea of mental or perceptual space. David Hume concluded that things appear extended because they have attributes of colour and solidity. A popular modern philosophical view is that the brain cannot contain images, so our sense of space must be due to the actual space occupied by physical things. However, as René Descartes noticed, perceptual space has a projective geometry: things within it appear as if they are viewed from a point. The phenomenon of perspective was closely studied by artists and architects in the Renaissance, who relied mainly on the 11th-century polymath Alhazen (Ibn al-Haytham), who affirmed the visibility of perceptual space in geometric structuring projections. Mathematicians now know of many types of projective geometry, such as complex Minkowski space, that might describe the layout of things in perception (see Peters (2000)), and it has also emerged that parts of the brain contain patterns of electrical activity that correspond closely to the layout of the retinal image (this is known as retinotopy). How or whether these become conscious experience is still unknown (see McGinn (1995)).

Beyond spatial representation

Traditionally, the philosophical investigation of perception has focused on the sense of vision as the paradigm of sensory perception. However, studies of the other sensory modalities, such as the sense of smell, can challenge what we consider characteristic or essential features of perception. Take olfaction as an example. Spatial representation relies on a "mapping" paradigm that maps the spatial structure of the stimuli onto discrete neural structures and representations. However, olfactory science has shown that perception is also a matter of associative learning, observational refinement, and a context-dependent decision-making process. One of the consequences of these discoveries for the philosophy of perception is that common perceptual effects such as conceptual imagery turn more on the neural architecture and its development than on the topology of the stimulus itself.

Binding problem

From Wikipedia, the free encyclopedia

The consciousness and binding problem is the problem of how objects, background and abstract or emotional features are combined into a single experience.

The binding problem refers to how our brain circuits combine the encoding of decisions, actions, and perception. It encompasses a wide range of different circuits and can be divided into the subproblems discussed below. It is considered a "problem" because no complete model of it exists.

The binding problem can be subdivided into four problems of perception, studied in neuroscience, cognitive science and philosophy of mind: general considerations on coordination, feature binding, the subjective unity of perception, and variable binding.

General Considerations on Coordination

Summary of problem

Attention is crucial in determining which phenomena appear to be bound together, noticed, and remembered (Vroomen and Keetels, 2010). This specific binding problem is generally referred to as temporal synchrony. At the most basic level, all neural firing and its adaptation depend on specific timing (Feldman, 2010). At a much larger scale, recurring patterns in large-scale neural activity are a major diagnostic and scientific tool.

Synchronization theory and research

A popular hypothesis, mentioned by Peter Milner in his 1974 article A Model for Visual Shape Recognition, has been that features of individual objects are bound/segregated via synchronization of the activity of different neurons in the cortex. The theory, called binding-by-synchrony (BBS), is hypothesized to occur through the transient mutual synchronization of neurons located in different regions of the brain when the stimulus is presented. Empirical testing of the idea gained prominence when von der Malsburg proposed that feature binding posed a special problem that could not be solved simply by cellular firing rates. However, it has been argued that this may not be a problem, since the relevant modules were shown to code jointly for multiple features, countering the feature-binding issue. Temporal synchrony has been shown to be most relevant to the first problem, "General Considerations on Coordination," because it is an effective way to take in surroundings and is good for grouping and segmentation. A number of studies suggested that there is indeed a relationship between rhythmic synchronous firing and feature binding. This rhythmic firing appears to be linked to intrinsic oscillations in neuronal somatic potentials, typically in the gamma range around 40–60 Hz. The positive arguments for a role of rhythmic synchrony in resolving the segregational object-feature binding problem have been summarized by Singer. There is certainly extensive evidence for synchronization of neural firing as part of responses to visual stimuli.
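As an illustration of what "synchronization" means operationally, the minimal sketch below (illustrative only, not the method of any study cited here) computes a phase-locking value for a pair of 40 Hz signals with a fixed phase relation and for a pair with a random one.

    import numpy as np

    def phase_locking_value(phase1, phase2):
        """Magnitude of the mean unit vector of the phase difference (1 = locked, 0 = unrelated)."""
        return np.abs(np.mean(np.exp(1j * (phase1 - phase2))))

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 1.0, 1e-3)                  # one second sampled at 1 kHz
    phase_a = 2 * np.pi * 40.0 * t                 # a 40 Hz "gamma" oscillation

    phase_b_locked = phase_a + 0.3                                   # constant lag: synchronized
    phase_b_random = phase_a + rng.uniform(0, 2 * np.pi, t.size)     # no consistent relation

    print(phase_locking_value(phase_a, phase_b_locked))   # close to 1.0
    print(phase_locking_value(phase_a, phase_b_random))   # close to 0.0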

However, there is inconsistency between findings from different laboratories. Moreover, a number of recent reviewers, including Shadlen and Movshon, and Merker, have raised concerns that the theory may be untenable. Thiele and Stoner found that perceptual binding of two moving patterns had no effect on the synchronization of the neurons responding to the two patterns (coherent and noncoherent plaids). In the primary visual cortex, Dong et al. found that whether two neurons were responding to contours of the same shape or of different shapes had no effect on neural synchrony: synchrony was independent of binding condition.

Shadlen and Movshon raise a series of doubts about both the theoretical and the empirical basis for the idea of segregational binding by temporal synchrony. There is no biophysical evidence that cortical neurons are selective to synchronous input at this level of precision, and cortical activity with synchrony this precise is rare. Synchronization is also connected to endorphin activity. It has been shown that precise spike timing may not be necessary for a mechanism of visual binding and is prominent only in the modeling of certain neuronal interactions. In contrast, Seth describes an artificial brain-based robot that demonstrates multiple, separate, widely distributed neural circuits firing at different phases, showing that regular brain oscillations at specific frequencies are essential to the neural mechanisms of binding.

Goldfarb and Treisman point out that a logical problem appears to arise for binding solely via synchrony if there are several objects that share some of their features and not others. At best synchrony can facilitate segregation supported by other means (as von der Malsburg acknowledges).

A number of neuropsychological studies suggest that the association of color, shape and movement as "features of an object" is not simply a matter of linking or "binding"; it is inefficient not to bind associated elements into groups, and these studies give extensive evidence for top-down feedback signals that ensure that sensory data are handled as features of (sometimes wrongly) postulated objects early in processing. Pylyshyn has also emphasized the way the brain seems to pre-conceive objects to which features are to be allocated and which are attributed continuing existence even if features such as color change. This is because visual integration increases over time and indexing visual objects helps to ground visual concepts.

Feature integration theory

Summary of problem

The visual feature binding problem refers to the question of why we do not confuse a red circle and a blue square with a blue circle and a red square. Understanding of the brain circuits recruited for visual feature binding is increasing. A binding process is required because the various visual features are encoded in separate cortical areas.

In her feature integration theory, Treisman suggested that one of the first stages of binding between features is mediated by the features' links to a common location. The second stage, combining the individual features of an object, requires attention, and the selection of that object occurs within a "master map" of locations. Psychophysical demonstrations of binding failures under conditions of full attention provide support for the idea that binding is accomplished through common location tags.

An implication of these approaches is that sensory data such as color or motion may not normally exist in "unallocated" form. For Merker: "The 'red' of a red ball does not float disembodied in an abstract color space in V4." If color information allocated to a point in the visual field is converted directly, via the instantiation of some form of propositional logic (analogous to that used in computer design), into color information allocated to an "object identity" postulated by a top-down signal, as suggested by Purves and Lotto (e.g. there is blue here + Object 1 is here = Object 1 is blue), then no special computational task of "binding together" by means such as synchrony may exist. (Although von der Malsburg poses the problem in terms of binding "propositions" such as "triangle" and "top", these, in isolation, are not propositional.)

How signals in the brain come to have propositional content, or meaning, is a much larger issue. However, both Marr and Barlow suggested, on the basis of what was known about neural connectivity in the 1970s, that the final integration of features into a percept would be expected to resemble the way words operate in sentences.

The role of synchrony in segregational binding remains controversial. Merker has recently suggested that synchrony may be a feature of areas of activation in the brain that relates to an "infrastructural" feature of the computational system analogous to increased oxygen demand indicated via BOLD signal contrast imaging. Apparent specific correlations with segregational tasks may be explainable on the basis of interconnectivity of the areas involved. As a possible manifestation of a need to balance excitation and inhibition over time it might be expected to be associated with reciprocal re-entrant circuits as in the model of Seth et al. (Merker gives the analogy of the whistle from an audio amplifier receiving its own output.)

Experimental Work

Visual feature binding is suggested to depend on selective attention to the locations of the objects. If spatial attention does indeed play a role in binding integration, it will do so primarily when object location acts as a binding cue. One study's functional MRI findings indicate that regions of the parietal cortex involved in spatial attention are engaged in feature-conjunction tasks but less so in single-feature tasks. In that study, multiple objects shown simultaneously at different locations activated the parietal cortex, whereas when multiple objects were shown sequentially at the same location the parietal cortex was less engaged.

Behavioral Experiments

Defoulzi et al. investigated feature binding across two feature dimensions, to disambiguate whether a specific combination of color and motion direction is perceived as bound or unbound. Two behaviorally relevant features, color and motion, belonging to the same object define the "bound" condition, whereas in the "unbound" condition the features belong to different objects. Local field potentials were recorded from the lateral prefrontal cortex (lPFC) in monkeys and were monitored during different stimulus configurations. The findings suggest a neural representation of visual feature binding in the 4–12 Hz frequency band. They also suggest that binding information is relayed through different lPFC neural subpopulations. The data show a behavioral relevance of binding information, linked to the animal's reaction time. This includes the involvement of the prefrontal cortex, targeted by the dorsal and ventral visual streams, in binding visual features from different dimensions (color and motion).

It is suggested that visual feature binding involves two different mechanisms in visual perception. One mechanism relies on familiarity with possible combinations of features, integrated over several temporal integration windows; it is speculated that this process is mediated by neural synchronization processes and temporal synchronization in the visual cortex. The second mechanism is mediated by familiarity with the stimulus and is supported by attentional top-down input from familiar objects.

Consciousness and Binding

Summary of Problem

Smythies defines the combination problem, also known as the subjective unity of perception, as "How do the brain mechanisms actually construct the phenomenal object?". Revonsuo equates this to "consciousness-related binding", emphasizing the entailment of a phenomenal aspect. As Revonsuo explores in 2006, there are nuances of difference beyond the basic BP1:BP2 division. Smythies speaks of constructing a phenomenal object ("local unity" for Revonsuo) but philosophers such as Descartes, Leibniz, Kant and James (see Brook and Raymont) have typically been concerned with the broader unity of a phenomenal experience ("global unity" for Revonsuo) – which, as Bayne illustrates, may involve features as diverse as seeing a book, hearing a tune and feeling an emotion. Further discussion will focus on this more general problem of how sensory data that may have been segregated into, for instance, "blue square" and "yellow circle" are to be re-combined into a single phenomenal experience of a blue square next to a yellow circle, plus all other features of their context. There is a wide range of views on just how real this "unity" is, but the existence of medical conditions in which it appears to be subjectively impaired, or at least restricted, suggests that it is not entirely illusory.

There are many neurobiological theories about the subjective unity of perception. Different visual features such as color, size, shape, and motion are computed by largely distinct neural circuits but we experience an integrated whole. The different visual features interact with each other in various ways. For example, shape discrimination of objects is strongly affected by orientation but only slightly affected by object size. Some theories suggest that global perception of the integrated whole involves higher order visual areas. There is also evidence that the posterior parietal cortex is responsible for perceptual scene segmentation and organization. Bodies facing each other are processed as a single unit and there is increased coupling of the extrastriate body area (EBA) and the posterior superior temporal sulcus (pSTS) when bodies are facing each other. This suggests that the brain is biased towards grouping humans in twos or dyads.

History

Early philosophers Descartes and Leibniz noted that the apparent unity of our experience is an all-or-none qualitative characteristic that does not appear to have an equivalent in the known quantitative features, like proximity or cohesion, of composite matter. William James, in the nineteenth century, considered the ways the unity of consciousness might be explained by known physics and found no satisfactory answer. He coined the term "combination problem" in the specific context of a "mind-dust theory", in which it is proposed that a full human conscious experience is built up from proto- or micro-experiences in the way that matter is built up from atoms. James claimed that such a theory was incoherent, since no causal physical account could be given of how distributed proto-experiences would "combine". He favoured instead a concept of "co-consciousness" in which there is one "experience of A, B and C" rather than combined experiences. A detailed discussion of subsequent philosophical positions is given by Brook and Raymont. However, these do not generally include physical interpretations.

Whitehead proposed a fundamental ontological basis for a relation consistent with James's idea of co-consciousness, in which many causal elements are co-available or "compresent" in a single event or "occasion" that constitutes a unified experience. Whitehead did not give physical specifics but the idea of compresence is framed in terms of causal convergence in a local interaction consistent with physics. Where Whitehead goes beyond anything formally recognized in physics is in the "chunking" of causal relations into complex but discrete "occasions". Even if such occasions can be defined, Whitehead's approach still leaves James's difficulty with finding a site, or sites, of causal convergence that would make neurobiological sense for "co-consciousness". Sites of signal convergence do clearly exist throughout the brain but there is a concern to avoid re-inventing what Dennett calls a Cartesian Theater or single central site of convergence of the form that Descartes proposed.

Descartes's central "soul" is now rejected because neural activity closely correlated with conscious perception is widely distributed throughout the cortex. The remaining choices appear to be either separate involvement of multiple distributed causally convergent events or a model that does not tie a phenomenal experience to any specific local physical event but rather to some overall "functional" capacity. Whichever interpretation is taken, as Revonsuo indicates, there is no consensus on what structural level we are dealing with – whether the cellular level, that of cellular groups as "nodes", "complexes" or "assemblies" or that of widely distributed networks. There is probably only general agreement that it is not the level of the whole brain, since there is evidence that signals in certain primary sensory areas, such as the V1 region of the visual cortex (in addition to motor areas and cerebellum), do not contribute directly to phenomenal experience.

Experimental Work on the Biological Basis of Binding

fMRI work

Stoll and colleagues conducted an fMRI experiment to see whether participants would view a dynamic bistable stimulus globally or locally. Responses in lower visual cortical regions were suppressed when participants viewed the stimulus globally. However, if global perception was without shape grouping, higher cortical regions were suppressed. This experiment shows that higher order cortex is important in perceptual grouping.

Grassi and colleagues used three different motion stimuli to investigate scene segmentation or how meaningful entities are grouped together and separated from other entities in a scene. Across all stimuli, scene segmentation was associated with increased activity in the posterior parietal cortex and decreased activity in lower visual areas. This suggests that the posterior parietal cortex is important for viewing an integrated whole.

EEG work

Mersad and colleagues used an EEG frequency tagging technique to differentiate between brain activity for the integrated whole object and brain activity for parts of the object. The results showed that the visual system binds two humans in close proximity as part of an integrated whole. These results are consistent with evolutionary theories that face-to-face bodies are one of the earliest representations of social interaction. It also supports other experimental work showing that body-selective visual areas respond more strongly to facing bodies.

Electron tunneling

Experiments have shown that ferritin and neuromelanin in fixed human substantia nigra pars compacta (SNc) tissue are able to support widespread electron tunneling. Further experiments have shown that ferritin structures similar to ones found in SNc tissue are able to conduct electrons over distances as great as 80 microns, and that they behave in accordance with Coulomb blockade theory to perform a switching or routing function. Both of these observations are consistent with earlier predictions that are part of a hypothesis that ferritin and neuromelanin can provide a binding mechanism associated with an action selection mechanism, although the hypothesis itself has not yet been directly investigated. The hypothesis and these observations have been applied to Integrated Information Theory.

Modern theories

Dennett has proposed that our sense that our experiences are single events is illusory and that, instead, at any one time there are "multiple drafts" of sensory patterns at multiple sites. Each would cover only a fragment of what we think we experience. Arguably, Dennett is claiming that consciousness is not unified and there is no phenomenal binding problem. Most philosophers have difficulty with this position (see Bayne), but some physiologists agree with it. In particular, the demonstration of perceptual asynchrony in psychophysical experiments by Moutoussis and Zeki, in which color is perceived before the orientation of lines and before motion by 40 and 80 ms respectively, constitutes an argument that different attributes are consciously perceived at different times over these very short periods, and hence that, at least over such brief intervals after visual stimulation, different events are not bound to each other: a disunity of consciousness. Dennett's view might be in keeping with evidence from recall experiments and change blindness purporting to show that our experiences are much less rich than we sense them to be – what has been called the Grand Illusion. However, few, if any, other authors suggest the existence of multiple partial "drafts". Moreover, also on the basis of recall experiments, Lamme has challenged the idea that richness is illusory, emphasizing that phenomenal content cannot be equated with content to which there is cognitive access.

Dennett does not tie drafts to biophysical events. Multiple sites of causal convergence are invoked in specific biophysical terms by Edwards and Sevush. In this view the sensory signals to be combined in phenomenal experience are available, in full, at each of multiple sites. To avoid non-causal combination each site/event is placed within an individual neuronal dendritic tree. The advantage is that "compresence" is invoked just where convergence occurs neuro-anatomically. The disadvantage, as for Dennett, is the counter-intuitive concept of multiple "copies" of experience. The precise nature of an experiential event or "occasion", even if local, also remains uncertain.

The majority of theoretical frameworks for the unified richness of phenomenal experience adhere to the intuitive idea that experience exists as a single copy, and draw on "functional" descriptions of distributed networks of cells. Baars has suggested that certain signals, encoding what we experience, enter a "Global Workspace" within which they are "broadcast" to many sites in the cortex for parallel processing. Dehaene, Changeux and colleagues have developed a detailed neuro-anatomical version of such a workspace. Tononi and colleagues have suggested that the level of richness of an experience is determined by the narrowest information interface "bottleneck" in the largest sub-network or "complex" that acts as an integrated functional unit. Lamme has suggested that networks supporting reciprocal signaling rather than those merely involved in feed-forward signaling support experience. Edelman and colleagues have also emphasized the importance of re-entrant signaling. Cleeremans emphasizes meta-representation as the functional signature of signals contributing to consciousness.

In general, such network-based theories are not explicitly theories of how consciousness is unified, or "bound" but rather theories of functional domains within which signals contribute to unified conscious experience. A concern about functional domains is what Rosenberg has called the boundary problem; it is hard to find a unique account of what is to be included and what excluded. Nevertheless, this is, if anything is, the consensus approach.

Within the network context, a role for synchrony has been invoked as a solution to the phenomenal binding problem as well as the computational one. In his book The Astonishing Hypothesis, Crick appears to be offering a solution to BP2 as much as BP1. Even von der Malsburg introduces detailed computational arguments about object feature binding with remarks about a "psychological moment". The Singer group also appear to be interested as much in the role of synchrony in phenomenal awareness as in computational segregation.

The apparent incompatibility of using synchrony to both segregate and unify might be explained by sequential roles. However, Merker points out what appears to be a contradiction in attempts to solve the subjective unity of perception in terms of a functional (effectively meaning computational) rather than a local biophysical, domain, in the context of synchrony.

Functional arguments for a role for synchrony are in fact underpinned by analysis of local biophysical events. However, Merker points out that the explanatory work is done by the downstream integration of synchronized signals in post-synaptic neurons: "It is, however, by no means clear what is to be understood by 'binding by synchrony' other than the threshold advantage conferred by synchrony at, and only at, sites of axonal convergence onto single dendritic trees..." In other words, although synchrony is proposed as a way of explaining binding on a distributed, rather than a convergent, basis the justification rests on what happens at convergence. Signals for two features are proposed as bound by synchrony because synchrony effects downstream convergent interaction. Any theory of phenomenal binding based on this sort of computational function would seem to follow the same principle. The phenomenality would entail convergence, if the computational function does.

The assumption in many of the quoted models is that computational and phenomenal events, at least at some point in the sequence of events, parallel each other in some way. The difficulty remains in identifying what that way might be. Merker's analysis suggests either (1) that both computational and phenomenal aspects of binding are determined by convergence of signals on neuronal dendritic trees, or (2) that our intuitive ideas about the need for "binding" in a "holding together" sense in both computational and phenomenal contexts are misconceived. We may be looking for something extra that is not needed. Merker, for instance, argues that the homotopic connectivity of sensory pathways does the necessary work.

Cognitive Science and Binding

In modern connectionism, cognitive neuroarchitectures are developed (e.g. "Oscillatory Networks", "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture", "Holographic Reduced Representations (HRRs)", "Neural Engineering Framework (NEF)") that solve the binding problem by means of integrative synchronization mechanisms (e.g. the (phase-)synchronized "binding-by-synchrony (BBS)" mechanism) at two levels:

  • In perceptual cognition ("low-level cognition"): the neurocognitive performance by which a perceived object or event (e.g. a visual object) is dynamically "bound together" from its properties (e.g. shape, contour, texture, color, direction of motion) as a mental representation, i.e. can be experienced in the mind as a unified "Gestalt" in the sense of Gestalt psychology ("feature binding", "feature linking").
  • In language cognition ("high-level cognition"): the neurocognitive performance by which a linguistic unit (e.g. a sentence) is generated by relating semantic concepts and syntactic roles to each other in a dynamic way, so that one can generate systematic and compositional symbol structures and propositions that are experienced as complex mental representations in the mind ("variable binding"). A minimal sketch of such variable binding is given below.
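The following sketch illustrates variable binding in Holographic Reduced Representations, one of the architectures named above: a role vector and a filler vector are bound by circular convolution and approximately recovered by circular correlation. The dimensionality and the example vectors are arbitrary illustrations, not parameters from the literature.

    import numpy as np

    rng = np.random.default_rng(1)
    d = 1024                                    # vector dimensionality (arbitrary)

    def random_vector():
        return rng.normal(0.0, 1.0 / np.sqrt(d), d)

    def bind(a, b):
        """Circular convolution, computed via the FFT."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def unbind(trace, cue):
        """Approximate inverse: circular correlation (bind with the involution of the cue)."""
        cue_involution = np.concatenate(([cue[0]], cue[1:][::-1]))
        return bind(trace, cue_involution)

    color_role, shape_role = random_vector(), random_vector()
    red, circle = random_vector(), random_vector()

    # "Red circle" as a single vector: the sum of role-filler bindings.
    obj = bind(color_role, red) + bind(shape_role, circle)

    # Querying the object's color yields a noisy vector that is much closer to
    # "red" than to "circle" (compare cosine similarities).
    retrieved = unbind(obj, color_role)
    for name, v in (("red", red), ("circle", circle)):
        cosine = np.dot(retrieved, v) / (np.linalg.norm(retrieved) * np.linalg.norm(v))
        print(name, round(float(cosine), 3))

The point of the sketch is that a structured combination (which filler occupies which role) can be held in a single distributed vector and interrogated later, which is one way connectionist architectures address variable binding.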

Physicalism

From Wikipedia, the free encyclopedia

In philosophy, physicalism is the metaphysical thesis that "everything is physical", that there is "nothing over and above" the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a "one substance" view of the nature of reality as opposed to a "two-substance" (dualism) or "many-substance" (pluralism) view. Both the definition of "physical" and the meaning of physicalism have been debated.

Physicalism is closely related to materialism. Physicalism grew out of materialism with advancements of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law).

According to a 2009 survey, physicalism is the majority view among philosophers, but there remains significant opposition to physicalism. Neuroplasticity has been used as an argument in support of a non-physicalist view. The philosophical zombie argument is another attempt to challenge physicalism.

Alternatively, outside of philosophy, physicalism could also refer to the preference or viewpoint that physics should be considered the best and only way to render truth about the world or reality.

Definition of physicalism in philosophy

The word "physicalism" was introduced into philosophy in the 1930s by Otto Neurath and Rudolf Carnap.

The use of "physical" in physicalism is a philosophical concept and can be distinguished from alternative definitions found in the literature (e.g. Karl Popper defined a physical proposition to be one which can at least in theory be denied by observation). A "physical property", in this context, may be a metaphysical or logical combination of properties which are physical in the ordinary sense. It is common to express the notion of "metaphysical or logical combination of properties" using the notion of supervenience: A property A is said to supervene on a property B if any change in A necessarily implies a change in B. Since any change in a combination of properties must consist of a change in at least one component property, we see that the combination does indeed supervene on the individual properties. The point of this extension is that physicalists usually suppose the existence of various abstract concepts which are non-physical in the ordinary sense of the word; so physicalism cannot be defined in a way that denies the existence of these abstractions. Also, physicalism defined in terms of supervenience does not entail that all properties in the actual world are type identical to physical properties. It is, therefore, compatible with multiple realizability.

From the notion of supervenience, we see that, assuming that mental, social, and biological properties supervene on physical properties, two hypothetical worlds cannot be identical in their physical properties while differing in their mental, social or biological properties.
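Stated in the possible-worlds idiom used below, this supervenience claim can be written schematically as follows (a compact rendering, not a formula from the source, where the two relations abbreviate "indiscernible with respect to physical properties" and "indiscernible with respect to mental, social and biological properties"):

    \forall w_{1}, w_{2}:\quad \bigl(w_{1} \sim_{\mathrm{phys}} w_{2}\bigr) \;\rightarrow\; \bigl(w_{1} \sim_{\mathrm{mental}} w_{2}\bigr)

That is, no two worlds agree in every physical respect while differing in some mental, social or biological respect.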

Two common approaches to defining "physicalism" are the theory-based and object-based approaches. The theory-based conception of physicalism proposes that "a property is physical if and only if it either is the sort of property that physical theory tells us about or else is a property which metaphysically (or logically) supervenes on the sort of property that physical theory tells us about". Likewise, the object-based conception claims that "a property is physical if and only if: it either is the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents or else is a property which metaphysically (or logically) supervenes on the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents".

Physicalists have traditionally opted for a "theory-based" characterization of the physical either in terms of current physics, or a future (ideal) physics. These two theory-based conceptions of the physical represent both horns of Hempel's dilemma (named after the late philosopher of science and logical empiricist Carl Gustav Hempel): an argument against theory-based understandings of the physical. Very roughly, Hempel's dilemma is that if we define the physical by reference to current physics, then physicalism is very likely to be false, as it is very likely (by pessimistic meta-induction) that much of current physics is false. But if we instead define the physical in terms of a future (ideal) or completed physics, then physicalism is hopelessly vague or indeterminate.

While the force of Hempel's dilemma against theory-based conceptions of the physical remains contested, alternative "non-theory-based" conceptions of the physical have also been proposed. Frank Jackson (1998) for example, has argued in favour of the aforementioned "object-based" conception of the physical. An objection to this proposal, which Jackson himself noted in 1998, is that if it turns out that panpsychism or panprotopsychism is true, then such a non-materialist understanding of the physical gives the counterintuitive result that physicalism is, nevertheless, also true since such properties will figure in a complete account of paradigmatic examples of the physical.

David Papineau and Barbara Montero have advanced and subsequently defended a "via negativa" characterization of the physical. The gist of the via negativa strategy is to understand the physical in terms of what it is not: the mental. In other words, the via negativa strategy understands the physical as "the non-mental". An objection to the via negativa conception of the physical is that (like the object-based conception) it doesn't have the resources to distinguish neutral monism (or panprotopsychism) from physicalism. Further, Restrepo (2012) argues that this conception of the physical makes core non-physical entities of non-physicalist metaphysics, like God, Cartesian souls and abstract numbers, physical and thus either false or trivially true: "God is non-mentally-and-non-biologically identifiable as the thing that created the universe. Supposing emergentism is true, non-physical emergent properties are non-mentally-and-non-biologically identifiable as non-linear effects of certain arrangements of matter. The immaterial Cartesian soul is non-mentally-and-non-biologically identifiable as one of the things that interact causally with certain particles (coincident with the pineal gland). The Platonic number eight is non-mentally-and-non-biologically identifiable as the number of planets orbiting the Sun".

Supervenience-based definitions of physicalism

Adopting a supervenience-based account of the physical, the definition of physicalism as "all properties are physical" can be unraveled to:

1) Physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is also a duplicate of w simpliciter.

Applied to the actual world (our world), statement 1 above is the claim that physicalism is true at the actual world if and only if at every possible world in which the physical properties and laws of the actual world are instantiated, the non-physical (in the ordinary sense of the word) properties of the actual world are instantiated as well. To borrow a metaphor from Saul Kripke (1972), the truth of physicalism at the actual world entails that once God has instantiated or "fixed" the physical properties and laws of our world, then God's work is done; the rest comes "automatically".

Unfortunately, statement 1 fails to capture even a necessary condition for physicalism to be true at a world w. To see this, imagine a world in which there are only physical properties—if physicalism is true at any world it is true at this one. But one can conceive physical duplicates of such a world that are not also duplicates simpliciter of it: worlds that have the same physical properties as our imagined one, but with some additional property or properties. A world might contain "epiphenomenal ectoplasm", some additional pure experience that does not interact with the physical components of the world and is not necessitated by them (does not supervene on them). To handle the epiphenomenal ectoplasm problem, statement 1 can be modified to include a "that's-all" or "totality" clause or be restricted to "positive" properties. Adopting the former suggestion here, we can reformulate statement 1 as follows:

2) Physicalism is true at a possible world w if and only if any world that is a minimal physical duplicate of w is a duplicate of w simpliciter.

Applied in the same way, statement 2 is the claim that physicalism is true at a possible world w if and only if any world that is a minimal physical duplicate of w (a physical duplicate without any further additions) is a duplicate of w without qualification. This allows a world in which there are only physical properties to be counted as one at which physicalism is true, since worlds in which there is some extra stuff are not "minimal" physical duplicates of such a world, nor are they minimal physical duplicates of worlds that contain some non-physical properties that are metaphysically necessitated by the physical.
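Written schematically (a compact rendering, not a formula from the source, with PhysDup, MinPhysDup and Dup abbreviating "physical duplicate", "minimal physical duplicate" and "duplicate simpliciter"), statements 1 and 2 read:

    (1)\quad \mathrm{Physicalism}(w) \;\leftrightarrow\; \forall v\,\bigl(\mathrm{PhysDup}(v, w) \rightarrow \mathrm{Dup}(v, w)\bigr)
    (2)\quad \mathrm{Physicalism}(w) \;\leftrightarrow\; \forall v\,\bigl(\mathrm{MinPhysDup}(v, w) \rightarrow \mathrm{Dup}(v, w)\bigr)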

But while statement 2 overcomes the problem of worlds at which there is some extra stuff (sometimes referred to as the "epiphenomenal ectoplasm problem") it faces a different challenge: the so-called "blockers problem". Imagine a world (call it w1) where the relation between the physical and non-physical properties is slightly weaker than metaphysical necessitation, such that a certain kind of non-physical intervener (a "blocker") could, were it to exist at w1, prevent the non-physical properties of w1 from being instantiated by the instantiation of the physical properties at w1. Since statement 2 rules out worlds which are physical duplicates of w1 but also contain non-physical interveners, by virtue of the minimality or that's-all clause, statement 2 gives the (allegedly) incorrect result that physicalism is true at w1. One response to this problem is to abandon statement 2 in favour of the alternative possibility mentioned earlier, in which supervenience-based formulations of physicalism are restricted to what David Chalmers (1996) calls "positive properties". A positive property is one that "...if instantiated in a world W, is also instantiated by the corresponding individual in all worlds that contain W as a proper part." Following this suggestion, we can then formulate physicalism as follows:

3) Physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is a positive duplicate of w.

On the face of it, statement 3 seems able to handle both the epiphenomenal ectoplasm problem and the blockers problem. With regard to the former, statement 3 gives the correct result that a purely physical world is one at which physicalism is true, since worlds in which there is some extra stuff still instantiate all the positive properties of that purely physical world and so count as positive duplicates of it. With regard to the latter, statement 3 appears to have the consequence that worlds in which there are blockers are worlds where the positive non-physical properties of w1 are absent, hence w1 will not be counted as a world at which physicalism is true. Daniel Stoljar (2010) objects to this response to the blockers problem on the basis that since the non-physical properties of w1 are not instantiated at a world in which there is a blocker, they are not positive properties in Chalmers' (1996) sense, and so statement 3 will count w1 as a world at which physicalism is true after all.
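In the same schematic notation as above (again only a sketch; "PosDup" is a label introduced here), statement 3 reads:

\[
(3)\quad \text{Physicalism is true at } w \;\Leftrightarrow\; \forall w'\,\big(\mathrm{PhysDup}(w',w) \rightarrow \mathrm{PosDup}(w',w)\big)
\]

where \(\mathrm{PosDup}(w',w)\) holds when every positive property instantiated at w is also instantiated, by the corresponding individuals, at \(w'\).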

A further problem for supervenience-based formulations of physicalism is the so-called "necessary beings problem". A necessary being in this context is a non-physical being that exists in all possible worlds (for example what theists refer to as God). A necessary being is compatible with all the definitions provided, because it is supervenient on everything; yet it is usually taken to contradict the notion that everything is physical. So any supervenience-based formulation of physicalism will at best state a necessary but not sufficient condition for the truth of physicalism.

Additional objections have been raised to the above definitions provided for supervenience physicalism: one could imagine an alternate world that differs only by the presence of a single ammonium molecule (or physical property), and yet, based on statement 1, such a world might be completely different in terms of its distribution of mental properties. Furthermore, there is disagreement concerning the modal status of physicalism: whether it is a necessary truth, or is true only in worlds which conform to certain conditions (i.e. those of physicalism).

Realisation physicalism

Closely related to supervenience physicalism is realisation physicalism, the thesis that every instantiated property is either physical or realised by a physical property.

Token physicalism

Token physicalism is the proposition that "for every actual particular (object, event or process) x, there is some physical particular y such that x = y". It is intended to capture the idea of "physical mechanisms". Token physicalism is compatible with property dualism, in which all substances are "physical", but physical objects may have mental properties as well as physical properties. Token physicalism is not, however, equivalent to supervenience physicalism. Firstly, token physicalism does not imply supervenience physicalism because the former does not rule out the possibility of non-supervenient properties (provided that they are associated only with physical particulars). Secondly, supervenience physicalism does not imply token physicalism, for the former allows supervenient objects (such as a "nation", or "soul") that are not identical to any physical object.

Reductionism and emergentism

Reductionism

There are multiple versions of reductionism. In the context of physicalism, the reductions referred to are of a "linguistic" nature, allowing discussions of, say, mental phenomena to be translated into discussions of physics. In one formulation, every concept is analysed in terms of a physical concept. One counter-argument to this supposes there may be an additional class of expressions which is non-physical but which increases the expressive power of a theory. Another version of reductionism is based on the requirement that one theory (mental or physical) be logically derivable from a second.

The combination of reductionism and physicalism is usually called reductive physicalism in the philosophy of mind. The opposite view is non-reductive physicalism. Reductive physicalism is the view that mental states are both nothing over and above physical states and reducible to physical states. One version of reductive physicalism is type physicalism or mind-body identity theory. Type physicalism asserts that "for every actually instantiated property F, there is some physical property G such that F=G". Unlike token physicalism, type physicalism entails supervenience physicalism.
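Put schematically (a sketch only; the quantifiers range over actual particulars and actually instantiated properties, as in the formulations quoted above):

\[
\textbf{Token physicalism:}\quad \forall x\,\exists y\,\big(\mathrm{Physical}(y) \wedge x = y\big)
\]
\[
\textbf{Type physicalism:}\quad \forall F\,\exists G\,\big(\mathrm{Physical}(G) \wedge F = G\big)
\]

Because type physicalism identifies every instantiated property with some physical property, fixing the physical properties of a world fixes all of its instantiated properties, which is why type physicalism, unlike token physicalism, entails supervenience physicalism.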

Reductive versions of physicalism are increasingly unpopular, the objection being that they do not account for mental lives. On this position the brain, as a physical substance, has only physical attributes, such as a particular volume, a particular mass, a particular density, a particular location, a particular shape, and so on; it does not have any mental attributes. The brain is not overjoyed or unhappy. The brain is not in pain. When a person's back aches and he or she is in pain, it is not the brain that is suffering, even though the brain is associated with the neural circuitry that provides the experience of pain. Reductive physicalism, the objection continues, therefore cannot explain mental lives. In the case of fear, for example, there is doubtless neural activity corresponding to the experience of fear, but the brain itself is not fearful. Fear cannot be reduced to a physical brain state, even though it corresponds to neural activity in the brain. For this reason, reductive physicalism is argued to be indefensible, as it cannot be reconciled with mental experience.

Another common argument against type physicalism is multiple realizability, the possibility that a psychological process (say) could be instantiated by many different neurological processes (even non-neurological processes, in the case of machine or alien intelligence). For in this case, the neurological terms translating a psychological term must be disjunctions over the possible instantiations, and it is argued that no physical law can use these disjunctions as terms. Type physicalism was the original target of the multiple realizability argument, and it is not clear that token physicalism is susceptible to objections from multiple realizability.

Emergentism

There are two versions of emergentism, the strong version and the weak version. Supervenience physicalism has been seen as a strong version of emergentism, in which the subject's psychological experience is considered genuinely novel. Non-reductive physicalism, on the other hand, is a weak version of emergentism, because it does not require that the subject's psychological experience be novel. The strong version of emergentism is incompatible with physicalism: if there are genuinely novel mental states, then mental states are something over and above physical states. The weak version of emergentism, however, is compatible with physicalism.

We can see that emergentism is actually a very broad view. Some forms of emergentism appear either incompatible with physicalism or equivalent to it (e.g. a posteriori physicalism), while others appear to combine dualism with supervenience. Emergentism compatible with dualism claims that mental states and physical states are metaphysically distinct while maintaining the supervenience of mental states on physical states. This proposition, however, contradicts supervenience physicalism, which denies dualism.

A priori versus a posteriori physicalism

Physicalists hold that physicalism is true. A natural question for physicalists, then, is whether the truth of physicalism is deducible a priori from the nature of the physical world (i.e., the inference is justified independently of experience, even though the nature of the physical world can itself only be determined through experience) or can only be deduced a posteriori (i.e., the justification of the inference itself is dependent upon experience). So-called "a priori physicalists" hold that from knowledge of the conjunction of all physical truths, a totality or that's-all truth (to rule out non-physical epiphenomena, and enforce the closure of the physical world), and some primitive indexical truths such as "I am A" and "now is B", the truth of physicalism is knowable a priori. Let "P" stand for the conjunction of all physical truths and laws, "T" for a that's-all truth, "I" for the indexical "centering" truths, and "N" for any [presumably non-physical] truth at the actual world. We can then, using the material conditional "→", represent a priori physicalism as the thesis that PTI → N is knowable a priori. An important wrinkle here is that the concepts in N must be possessed non-deferentially in order for PTI → N to be knowable a priori. The suggestion, then, is that possession of the concepts in the consequent, plus the empirical information in the antecedent is sufficient for the consequent to be knowable a priori.

An "a posteriori physicalist", on the other hand, will reject the claim that PTI → N is knowable a priori. Rather, they would hold that the inference from PTI to N is justified by metaphysical considerations that in turn can be derived from experience. So the claim then is that "PTI and not N" is metaphysically impossible.

One commonly issued challenge to a priori physicalism and to physicalism in general is the "conceivability argument", or zombie argument. At a rough approximation, the conceivability argument runs as follows:

P1) PTI and not Q (where "Q" stands for the conjunction of all truths about consciousness, or some "generic" truth about someone being "phenomenally" conscious [i.e., there is "something it is like" to be a person x] ) is conceivable (i.e., it is not knowable a priori that PTI and not Q is false).

P2) If PTI and not Q is conceivable, then PTI and not Q is metaphysically possible.

P3) If PTI and not Q is metaphysically possible then physicalism is false.

C) Physicalism is false.[45]

Here proposition P3 is a direct application of the supervenience of consciousness, and hence of any supervenience-based version of physicalism: If PTI and not Q is possible, there is some possible world where it is true. This world differs from [the relevant indexing on] our world, where PTIQ is true. But the other world is a minimal physical duplicate of our world, because PT is true there. So there is a possible world which is a minimal physical duplicate of our world, but not a full duplicate; this contradicts the definition of physicalism that we saw above.
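Schematically, with \(\Diamond\) standing for metaphysical possibility (again only a sketch of the argument as stated above):

\[
\begin{aligned}
&\text{P1)}\;\; \mathrm{Conceivable}(PTI \wedge \neg Q)\\
&\text{P2)}\;\; \mathrm{Conceivable}(PTI \wedge \neg Q) \rightarrow \Diamond(PTI \wedge \neg Q)\\
&\text{P3)}\;\; \Diamond(PTI \wedge \neg Q) \rightarrow \neg\,\mathrm{Physicalism}\\
&\text{C)}\;\;\; \neg\,\mathrm{Physicalism}
\end{aligned}
\]

The conclusion follows from P1–P3 by two applications of modus ponens.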

Since a priori physicalists hold that PTI → N is a priori, they are committed to denying P1) of the conceivability argument. The a priori physicalist, then, must argue that PTI and not Q, on ideal rational reflection, is incoherent or contradictory.

A posteriori physicalists, on the other hand, generally accept P1) but deny P2), the move from conceivability to metaphysical possibility. Some a posteriori physicalists think that, unlike the possession of most if not all other empirical concepts, the possession of the concept of consciousness has the special property that the presence of PTI and the absence of consciousness will be conceivable—even though, according to them, it is knowable a posteriori that PTI and not Q is not metaphysically possible. These a posteriori physicalists endorse some version of what Daniel Stoljar (2005) has called "the phenomenal concept strategy". Roughly speaking, the phenomenal concept strategy is a label for those a posteriori physicalists who attempt to show that it is only the concept of consciousness—not the property—that is in some way "special" or sui generis. Other a posteriori physicalists eschew the phenomenal concept strategy, and argue that even ordinary macroscopic truths such as "water covers 60% of the earth's surface" are not knowable a priori from PTI and a non-deferential grasp of the concepts "water" and "earth" et cetera. If this is correct, then we should (arguably) conclude that conceivability does not entail metaphysical possibility, and P2) of the conceivability argument against physicalism is false.

Other views

Realistic physicalism

Galen Strawson's realistic physicalism or realistic monism entails panpsychism – or at least micropsychism. Strawson argues that "many—perhaps most—of those who call themselves physicalists or materialists [are mistakenly] committed to the thesis that physical stuff is, in itself, in its fundamental nature, something wholly and utterly non-experiential... even when they are prepared to admit with Eddington that physical stuff has, in itself, 'a nature capable of manifesting itself as mental activity', i.e. as experience or consciousness". Because experiential phenomena allegedly cannot be emergent from wholly non-experiential phenomena, philosophers are driven to substance dualism, property dualism, eliminative materialism and "all other crazy attempts at wholesale mental-to-non-mental reduction".

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an 'inference to the best explanation'... Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. 'And were the inmost essence of things laid open to us' I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism... So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of 'substance dualism'... Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?

Entropy (statistical thermodynamics)

From Wikipedia, the free encyclopedia