
Sunday, April 6, 2025

Solubility equilibrium

From Wikipedia, the free encyclopedia

Solubility equilibrium is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation, or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent solubility product which functions like an equilibrium constant. Solubility equilibria are important in pharmaceutical, environmental and many other scenarios.

Definitions

A solubility equilibrium exists when a chemical compound in the solid state is in chemical equilibrium with a solution containing the compound. This type of equilibrium is an example of dynamic equilibrium in that some individual molecules migrate between the solid and solution phases such that the rates of dissolution and precipitation are equal to one another. When equilibrium is established and the solid has not all dissolved, the solution is said to be saturated. The concentration of the solute in a saturated solution is known as the solubility. Units of solubility may be molar (mol dm−3) or expressed as mass per unit volume, such as μg mL−1. Solubility is temperature dependent. A solution containing a higher concentration of solute than the solubility is said to be supersaturated. A supersaturated solution may be induced to come to equilibrium by the addition of a "seed" which may be a tiny crystal of the solute, or a tiny solid particle, which initiates precipitation.

There are three main types of solubility equilibria.

  1. Simple dissolution.
  2. Dissolution with dissociation reaction. This is characteristic of salts. The equilibrium constant is known in this case as a solubility product.
  3. Dissolution with ionization reaction. This is characteristic of the dissolution of weak acids or weak bases in aqueous media of varying pH.

In each case an equilibrium constant can be specified as a quotient of activities. This equilibrium constant is dimensionless as activity is a dimensionless quantity. However, use of activities is very inconvenient, so the equilibrium constant is usually divided by the quotient of activity coefficients, to become a quotient of concentrations. See Equilibrium chemistry § Equilibrium constant for details. Moreover, the activity of a solid is, by definition, equal to 1 so it is omitted from the defining expression.

For a chemical equilibrium

    ApBq ⇌ pA + qB

the solubility product, Ksp, for the compound ApBq is defined by

    Ksp = [A]^p [B]^q

where [A] and [B] are the concentrations of A and B in a saturated solution. A solubility product has a similar functionality to an equilibrium constant, though formally Ksp has the dimension of (concentration)^(p+q).

Effects of conditions

Temperature effect

Solubility is sensitive to changes in temperature. For example, sugar is more soluble in hot water than in cool water. This is because solubility products, like other types of equilibrium constants, are functions of temperature. In accordance with Le Chatelier's principle, when the dissolution process is endothermic (heat is absorbed), solubility increases with rising temperature. This effect is the basis for the process of recrystallization, which can be used to purify a chemical compound. When dissolution is exothermic (heat is released), solubility decreases with rising temperature. Sodium sulfate shows increasing solubility with temperature below about 32.4 °C, but decreasing solubility at higher temperature. This is because the solid phase is the decahydrate (Na2SO4·10H2O) below the transition temperature, but a different hydrate above that temperature.

The dependence on temperature of solubility for an ideal solution (achieved for low-solubility substances) is given by the following expression containing the enthalpy of melting, ΔmH, and the mole fraction xi of the solute at saturation:

    (∂ln xi / ∂T)sat = (H̄i,aq − Hi,cr) / (RT^2)

where H̄i,aq is the partial molar enthalpy of the solute at infinite dilution and Hi,cr the enthalpy per mole of the pure crystal; for an ideal solution the difference H̄i,aq − Hi,cr equals the enthalpy of melting, ΔmH, of the solute.

This differential expression for a non-electrolyte can be integrated on a temperature interval to give:

    ln xi(T) = ln xi(T0) + (ΔmH/R)(1/T0 − 1/T)

For nonideal solutions the activity of the solute at saturation appears instead of the mole-fraction solubility in the derivative with respect to temperature:

    (∂ln ai / ∂T)sat = (H̄i,aq − Hi,cr) / (RT^2)
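As a rough numerical illustration of the integrated expression above, the following Python sketch estimates the mole-fraction solubility at a new temperature from a known value at a reference temperature. The function name and the numbers are illustrative, and ideal-solution behaviour is assumed.

```python
# Sketch: ideal-solution temperature dependence of solubility, using the
# integrated expression ln x(T) = ln x(T0) + (dH_melt/R)(1/T0 - 1/T).
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def mole_fraction_solubility(T, T0, x0, dH_melt):
    """Mole-fraction solubility at T (K), given the value x0 at T0 (K) and the
    enthalpy of melting dH_melt (J/mol), assuming ideal-solution behaviour."""
    return x0 * math.exp((dH_melt / R) * (1.0 / T0 - 1.0 / T))

# Illustrative numbers: dH_melt = 20 kJ/mol, x = 0.01 at 298 K, estimated at 318 K
print(mole_fraction_solubility(318.0, 298.0, 0.01, 20e3))   # ≈ 0.017
```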

Common-ion effect

The common-ion effect is the decrease in solubility of one salt when another salt that has an ion in common with it is also present. For example, the solubility of silver chloride, AgCl, is lowered when sodium chloride, a source of the common ion chloride, is added to a suspension of AgCl in water. The solubility, S, in the absence of a common ion can be calculated as follows. The concentrations [Ag+] and [Cl−] are equal because one mole of AgCl would dissociate into one mole of Ag+ and one mole of Cl−. Let the concentration of [Ag+(aq)] be denoted by x. Then

    Ksp = [Ag+][Cl−] = x^2

Ksp for AgCl is equal to 1.77×10−10 mol2 dm−6 at 25 °C, so the solubility is x = √Ksp = 1.33×10−5 mol dm−3.

Now suppose that sodium chloride is also present, at a concentration of 0.01 mol dm−3 = 0.01 M. The solubility, ignoring any possible effect of the sodium ions, is now calculated by

    Ksp = [Ag+][Cl−] = x(0.01 + x)

This is a quadratic equation in x, which is also equal to the solubility:

    x^2 + 0.01x − Ksp = 0

In the case of silver chloride, x^2 is very much smaller than 0.01x, so the first term can be ignored. Therefore

    x ≈ Ksp/0.01 = 1.77×10−8 mol dm−3,

a considerable reduction from 1.33×10−5 mol dm−3. In gravimetric analysis for silver, the reduction in solubility due to the common-ion effect is used to ensure "complete" precipitation of AgCl.
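The reduction can be checked numerically. The following Python sketch solves the full quadratic and compares it with the approximation used above; the variable names are illustrative.

```python
# Sketch: common-ion effect for AgCl in the presence of 0.01 M chloride.
# Solves Ksp = x*(c_common + x) exactly and compares with x ≈ Ksp/c_common.
import math

Ksp = 1.77e-10       # mol^2 dm^-6, AgCl at 25 °C
c_common = 0.01      # mol dm^-3, added chloride

# Exact positive root of x^2 + c_common*x - Ksp = 0
x_exact = (-c_common + math.sqrt(c_common**2 + 4 * Ksp)) / 2
x_approx = Ksp / c_common

print(x_exact, x_approx)    # both ≈ 1.77e-08 mol dm^-3
print(math.sqrt(Ksp))       # ≈ 1.33e-05 mol dm^-3 without the common ion
```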

Particle size effect

The thermodynamic solubility constant is defined for large monocrystals. Solubility will increase with decreasing size of solute particle (or droplet) because of the additional surface energy. This effect is generally small unless particles become very small, typically smaller than 1 μm. The effect of the particle size on the solubility constant can be quantified as follows:

    ln *KA = ln *KA→0 + (2γAm)/(3RT)

where *KA is the solubility constant for the solute particles with the molar surface area A, *KA→0 is the solubility constant for a substance with molar surface area tending to zero (i.e., when the particles are large), γ is the surface tension of the solute particle in the solvent, Am is the molar surface area of the solute (in m2/mol), R is the universal gas constant, and T is the absolute temperature.
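The size of the effect can be estimated from the relation above. A minimal Python sketch follows; the surface tension and molar surface area are illustrative values, not data for a specific substance.

```python
# Sketch: solubility enhancement for small particles, ln(*K_A / *K_(A->0)) = 2*gamma*A_m/(3RT).
import math

R = 8.314  # J mol^-1 K^-1

def solubility_enhancement(gamma, molar_surface_area, T):
    """Ratio *K_A / *K_(A->0) for particles with molar surface area (m^2/mol),
    surface tension gamma (J/m^2) and temperature T (K)."""
    return math.exp(2 * gamma * molar_surface_area / (3 * R * T))

# Illustrative: gamma = 0.1 J/m^2, molar surface area = 2e4 m^2/mol (very fine particles), 298 K
print(solubility_enhancement(0.1, 2e4, 298.0))   # ≈ 1.7, i.e. a noticeable increase
```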

Salt effects

The salt effects (salting-in and salting-out) refer to the fact that the presence of a salt which has no ion in common with the solute affects the ionic strength of the solution, and hence the activity coefficients, so that the equilibrium constant, expressed as a concentration quotient, changes.

Phase effect

Equilibria are defined for specific crystal phases. Therefore, the solubility product is expected to be different depending on the phase of the solid. For example, aragonite and calcite will have different solubility products even though they both have the same chemical identity (calcium carbonate). Under any given conditions one phase will be thermodynamically more stable than the other; therefore, this phase will form when thermodynamic equilibrium is established. However, kinetic factors may favor the formation of the unfavorable precipitate (e.g. aragonite), which is then said to be in a metastable state.

In pharmacology, the metastable state is sometimes referred to as the amorphous state. Amorphous drugs have higher solubility than their crystalline counterparts due to the absence of the long-distance interactions inherent in the crystal lattice. Thus, it takes less energy to solvate the molecules in the amorphous phase. The effect of the amorphous phase on solubility is widely used to make drugs more soluble.

Pressure effect

For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:

    (∂ln xi / ∂P)T = −(V̄i,aq − Vi,cr) / (RT)

where xi is the mole fraction of the i-th component in the solution, P is the pressure, T is the absolute temperature, V̄i,aq is the partial molar volume of the i-th component in the solution, Vi,cr is the partial molar volume of the i-th component in the dissolving solid, and R is the universal gas constant.
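The "typically weak" claim can be illustrated with the relation above, treating the molar volume change as constant over the pressure interval. The numbers in this Python sketch are illustrative, not measured data.

```python
# Sketch: pressure dependence of solubility under the ideal-solution relation above.
import math

R = 8.314  # J mol^-1 K^-1

def solubility_ratio(delta_V, delta_P, T):
    """x(P + delta_P)/x(P) for a molar volume change delta_V = V_aq - V_cr (m^3/mol),
    pressure change delta_P (Pa) and temperature T (K)."""
    return math.exp(-delta_V * delta_P / (R * T))

# Illustrative: delta_V = +5e-6 m^3/mol (5 cm^3/mol), delta_P = 10 MPa, 298 K
print(solubility_ratio(5e-6, 1e7, 298.0))   # ≈ 0.98, i.e. only about a 2% decrease
```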

The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time.

Quantitative aspects

Simple dissolution

Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms. For example, when sucrose (table sugar) forms a saturated solution

    C12H22O11(s) ⇌ C12H22O11(aq)

An equilibrium expression for this reaction can be written, as for any chemical reaction (products over reactants):

    Ko = {C12H22O11(aq)} / {C12H22O11(s)}

where Ko is called the thermodynamic solubility constant. The braces indicate activity. The activity of a pure solid is, by definition, unity. Therefore

    Ko = {C12H22O11(aq)}

The activity of a substance, A, in solution can be expressed as the product of the concentration, [A], and an activity coefficient, γ. When Ko is divided by γ, the solubility constant, Ks = [C12H22O11(aq)], is obtained. This is equivalent to defining the standard state as the saturated solution so that the activity coefficient is equal to one. The solubility constant is a true constant only if the activity coefficient is not affected by the presence of any other solutes that may be present. The unit of the solubility constant is the same as the unit of the concentration of the solute. For sucrose Ks = 1.971 mol dm−3 at 25 °C. This shows that the solubility of sucrose at 25 °C is nearly 2 mol dm−3 (540 g/L). Sucrose is unusual in that it does not easily form a supersaturated solution at higher concentrations, as most other carbohydrates do.

Dissolution with dissociation

Ionic compounds normally dissociate into their constituent ions when they dissolve in water. For example, for silver chloride:

    AgCl(s) ⇌ Ag+(aq) + Cl−(aq)

The expression for the equilibrium constant for this reaction is:

    Ko = {Ag+(aq)}{Cl−(aq)} / {AgCl(s)}

where Ko is the thermodynamic equilibrium constant and braces indicate activity. The activity of a pure solid is, by definition, equal to one.

When the solubility of the salt is very low the activity coefficients of the ions in solution are nearly equal to one. By setting them to be actually equal to one this expression reduces to the solubility product expression:

    Ksp = [Ag+][Cl−]

For 2:2 and 3:3 salts, such as CaSO4 and FePO4, the general expression for the solubility product is the same as for a 1:1 electrolyte

    Ksp = [M][A]

(electrical charges are omitted in general expressions, for simplicity of notation)

With an unsymmetrical salt like Ca(OH)2 the solubility expression is given by

    Ksp = [Ca2+][OH−]^2

Since the concentration of hydroxide ions is twice the concentration of calcium ions, [OH−] = 2[Ca2+], this reduces to

    Ksp = 4[Ca2+]^3

In general, with the chemical equilibrium

    MpAq ⇌ pM + qA     ([M] = pS, [A] = qS)

the following table, showing the relationship between the solubility of a compound and the value of its solubility product, can be derived.

Salt          p   q   Solubility, S
AgCl          1   1   Ksp^(1/2)
Ca(SO4)       1   1   Ksp^(1/2)
Fe(PO4)       1   1   Ksp^(1/2)
Na2(SO4)      2   1   (Ksp/4)^(1/3)
Ca(OH)2       1   2   (Ksp/4)^(1/3)
Na3(PO4)      3   1   (Ksp/27)^(1/4)
FeCl3         1   3   (Ksp/27)^(1/4)
Al2(SO4)3     2   3   (Ksp/108)^(1/5)
Ca3(PO4)2     3   2   (Ksp/108)^(1/5)
Mp(An)q       p   q   (Ksp/(p^p·q^q))^(1/(p+q))
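The general relation in the last row can be evaluated directly. A minimal Python sketch follows; the function name is illustrative, and the Ca(OH)2 value is an assumed round figure.

```python
# Sketch: solubility S of a salt MpAq from its solubility product,
# using Ksp = (pS)^p * (qS)^q  =>  S = (Ksp / (p^p * q^q))^(1/(p+q)).
def solubility_from_ksp(ksp, p, q):
    return (ksp / (p**p * q**q)) ** (1.0 / (p + q))

# AgCl (p = q = 1), Ksp = 1.77e-10 mol^2 dm^-6
print(solubility_from_ksp(1.77e-10, 1, 1))   # ≈ 1.33e-05 mol dm^-3

# Ca(OH)2 (p = 1, q = 2), with an assumed Ksp of 5.5e-6
print(solubility_from_ksp(5.5e-6, 1, 2))     # ≈ 1.1e-02 mol dm^-3
```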

Solubility products are often expressed in logarithmic form. Thus, for calcium sulfate, with Ksp = 4.93×10−5 mol2 dm−6, log Ksp = −4.32. The smaller the value of Ksp, or the more negative the log value, the lower the solubility.

Some salts are not fully dissociated in solution. Examples include MgSO4, famously discovered by Manfred Eigen to be present in seawater as both an inner sphere complex and an outer sphere complex. The solubility of such salts is calculated by the method outlined in dissolution with reaction.

Hydroxides

The solubility product for the hydroxide of a metal ion, Mn+, is usually defined as follows:

    M(OH)n(s) ⇌ Mn+(aq) + n OH−(aq),   Ksp = [Mn+][OH−]^n

However, general-purpose computer programs are designed to use hydrogen ion concentrations with the alternative definition

    M(OH)n(s) + n H+(aq) ⇌ Mn+(aq) + n H2O,   K*sp = [Mn+][H+]^−n

For hydroxides, solubility products are often given in a modified form, K*sp, using hydrogen ion concentration in place of hydroxide ion concentration. The two values are related by the self-ionization constant for water, Kw: since [OH−] = Kw/[H+], Ksp = K*sp × Kw^n. For example, at ambient temperature, for calcium hydroxide, Ca(OH)2, lg Ksp is ca. −5 and lg K*sp ≈ −5 + 2 × 14 ≈ 23.
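A one-line check of that conversion, as a sketch (pKw ≈ 14 at ambient temperature; the function name is illustrative):

```python
# Sketch: log K*sp = log Ksp + n*pKw, from Ksp = K*sp * Kw^n for a hydroxide M(OH)n.
def log_kstar_sp(log_ksp, n, pKw=14.0):
    return log_ksp + n * pKw

print(log_kstar_sp(-5.0, 2))   # ≈ 23, matching the Ca(OH)2 example above
```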

Dissolution with reaction

When a concentrated solution of ammonia is added to a suspension of silver chloride, dissolution occurs because a complex of Ag+ is formed:

    AgCl(s) + 2 NH3(aq) ⇌ [Ag(NH3)2]+(aq) + Cl−(aq)

A typical reaction with dissolution involves a weak base, B, dissolving in an acidic aqueous solution. This reaction is very important for pharmaceutical products. Dissolution of weak acids in alkaline media is similarly important. The uncharged molecule usually has lower solubility than the ionic form, so solubility depends on pH and the acid dissociation constant of the solute. The term "intrinsic solubility" is used to describe the solubility of the un-ionized form in the absence of acid or alkali.
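To make the pH dependence concrete, a standard relation (not spelled out in the text above) for a weak acid HA with intrinsic solubility S0 is S = S0(1 + Ka/[H+]); for a weak base B the corresponding expression is S = S0(1 + [H+]/Ka), with Ka that of the conjugate acid BH+. The Python sketch below uses the weak-acid form with illustrative numbers, not data for any real drug.

```python
# Sketch: pH-dependent solubility of a weak acid HA, S = S0 * (1 + Ka / [H+]).
# S0 (intrinsic solubility) and pKa are illustrative values only.
def weak_acid_solubility(pH, S0, pKa):
    h = 10.0 ** (-pH)
    ka = 10.0 ** (-pKa)
    return S0 * (1.0 + ka / h)

for pH in (2.0, 5.0, 7.4):
    print(pH, weak_acid_solubility(pH, S0=1e-4, pKa=4.5))
# Solubility rises steeply once the pH is well above the pKa.
```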

Leaching of aluminium salts from rocks and soil by acid rain is another example of dissolution with reaction: alumino-silicates are bases which react with the acid to form soluble species, such as Al3+(aq).

Formation of a chemical complex may also change solubility. A well-known example is the addition of a concentrated solution of ammonia to a suspension of silver chloride, in which dissolution is favoured by the formation of an ammine complex. When sufficient ammonia is added to a suspension of silver chloride, the solid dissolves. The addition of water softeners to washing powders to inhibit the formation of soap scum provides an example of practical importance.

Experimental determination

The determination of solubility is fraught with difficulties. First and foremost is the difficulty in establishing that the system is in equilibrium at the chosen temperature. This is because both precipitation and dissolution reactions may be extremely slow. If the process is very slow solvent evaporation may be an issue. Supersaturation may occur. With very insoluble substances, the concentrations in solution are very low and difficult to determine. The methods used fall broadly into two categories, static and dynamic.

Static methods

In static methods a mixture is brought to equilibrium and the concentration of a species in the solution phase is determined by chemical analysis. This usually requires separation of the solid and solution phases. In order to do this the equilibration and separation should be performed in a thermostatted room. Very low concentrations can be measured if a radioactive tracer is incorporated in the solid phase.

A variation of the static method is to add a solution of the substance in a non-aqueous solvent, such as dimethyl sulfoxide, to an aqueous buffer mixture. Immediate precipitation may occur giving a cloudy mixture. The solubility measured for such a mixture is known as "kinetic solubility". The cloudiness is due to the fact that the precipitate particles are very small resulting in Tyndall scattering. In fact the particles are so small that the particle size effect comes into play and kinetic solubility is often greater than equilibrium solubility. Over time the cloudiness will disappear as the size of the crystallites increases, and eventually equilibrium will be reached in a process known as precipitate ageing.

Dynamic methods

Solubility values of organic acids, bases, and ampholytes of pharmaceutical interest may be obtained by a process called "Chasing equilibrium solubility". In this procedure, a quantity of substance is first dissolved at a pH where it exists predominantly in its ionized form and then a precipitate of the neutral (un-ionized) species is formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution is monitored and strong acid and base titrants are added to adjust the pH to discover the equilibrium conditions when the two rates are equal. The advantage of this method is that it is relatively fast, as the quantity of precipitate formed is quite small. However, the performance of the method may be affected by the formation of supersaturated solutions.

Connectionism

From Wikipedia, the free encyclopedia

A 'second wave' connectionist (ANN) model with a hidden layer

Connectionism is an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks.

Connectionism has had many "waves" since its beginnings. The first wave appeared in 1943 with Warren Sturgis McCulloch and Walter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach, and with Frank Rosenblatt, who published the 1958 paper "The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain" in Psychological Review while working at the Cornell Aeronautical Laboratory. The first wave ended with the 1969 book about the limitations of the original perceptron idea, written by Marvin Minsky and Seymour Papert, which contributed to discouraging major funding agencies in the US from investing in connectionist research. With a few noteworthy deviations, most connectionist research entered a period of inactivity until the mid-1980s. The term connectionist model was reintroduced in a 1982 paper in the journal Cognitive Science by Jerome Feldman and Dana Ballard.

The second wave blossomed in the late 1980s, following a 1987 book about Parallel Distributed Processing by James L. McClelland, David E. Rumelhart et al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and used a sigmoid activation function instead of the old "all-or-nothing" function. Their work built upon that of John Hopfield, who was a key figure investigating the mathematical characteristics of sigmoid activation functions. From the late 1980s to the mid-1990s, connectionism took on an almost revolutionary tone when Schneider, Terence Horgan and Tienson posed the question of whether connectionism represented a fundamental shift in psychology and so-called "good old-fashioned AI," or GOFAI. Some advantages of the second wave connectionist approach included its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity for graceful degradation. Its disadvantages included the difficulty in deciphering how ANNs process information or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level.

The current (third) wave has been marked by advances in deep learning, which have made possible the creation of large language models. The success of deep-learning networks in the past decade has greatly increased the popularity of this approach, but the complexity and scale of such networks has brought with them increased interpretability problems.

Basic principle

The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses, as in the human brain. This principle has been seen as an alternative to GOFAI and the classical theories of mind based on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception.

Activation function

The internal state of any network changes over time as neurons send signals to a succeeding layer of neurons, in the case of a feedforward network, or to a previous layer, in the case of a recurrent network. The discovery of non-linear activation functions enabled the second wave of connectionism.
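A minimal sketch of such a signal being passed forward through a sigmoid non-linearity; the layer sizes and weights are arbitrary illustrative values.

```python
# Sketch: one feedforward step with a sigmoid (logistic) activation function.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # activations of an input layer of 3 units
W = rng.normal(size=(4, 3))       # connection weights to a layer of 4 units
b = np.zeros(4)                   # biases of the receiving layer

hidden = sigmoid(W @ x + b)       # activations propagated to the next layer
print(hidden)
```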

Memory and learning

Neural networks follow two basic principles:

  1. Any mental state can be described as an n-dimensional vector of numeric activation values over neural units in a network.
  2. Memory and learning are created by modifying the 'weights' of the connections between neural units, generally represented as an n×m matrix. The weights are adjusted according to some learning rule or algorithm, such as Hebbian learning.
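A minimal sketch of these two principles, with activation vectors and the Hebbian learning rule mentioned in point 2; all sizes, values and the learning rate are illustrative.

```python
# Sketch: a mental state as an activation vector and learning as a Hebbian weight update.
import numpy as np

n, m = 4, 3
pre = np.array([0.9, 0.1, 0.0, 0.7])   # activations of n presynaptic units
post = np.array([0.2, 0.8, 0.5])       # activations of m postsynaptic units
W = np.zeros((m, n))                   # connection weights between the two groups

eta = 0.1                              # learning rate
W += eta * np.outer(post, pre)         # Hebbian rule: strengthen co-active connections
print(W)
```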

Most of the variety among the models comes from:

  • Interpretation of units: Units can be interpreted as neurons or groups of neurons.
  • Definition of activation: Activation can be defined in a variety of ways. For example, in a Boltzmann machine, the activation is interpreted as the probability of generating an action potential spike, and is determined via a logistic function on the sum of the inputs to a unit.
  • Learning algorithm: Different networks modify their connections differently. In general, any mathematically defined change in connection weights over time is referred to as the "learning algorithm".

Biological realism

Connectionist work in general does not need to be biologically realistic. One area where connectionist models are thought to be biologically implausible is the error-propagation networks that are needed to support learning. However, error propagation can explain some of the biologically generated electrical activity seen at the scalp in event-related potentials such as the N400 and P600, and this provides some biological support for one of the key assumptions of connectionist learning procedures. Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve toward fully continuous, high-dimensional, non-linear, dynamic systems approaches.

Precursors

Precursors of the connectionist principles can be traced to early work in psychology, such as that of William James. Psychological theories based on knowledge about the human brain were fashionable in the late 19th century. As early as 1869, the neurologist John Hughlings Jackson argued for multi-level, distributed systems. Following this lead, Herbert Spencer's Principles of Psychology, 3rd edition (1872), and Sigmund Freud's Project for a Scientific Psychology (composed 1895) propounded connectionist or proto-connectionist theories. These tended to be speculative theories. But by the early 20th century, Edward Thorndike was writing about human learning in terms that posited a connectionist-type network.

Hopfield networks had precursors in the Ising model due to Wilhelm Lenz (1920) and Ernst Ising (1925), though the Ising model as they conceived it did not involve time. Monte Carlo simulation of the Ising model had to await the advent of computers in the 1950s.

The first wave

The first wave began in 1943 with Warren Sturgis McCulloch and Walter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach. McCulloch and Pitts showed how neural systems could implement first-order logic: their classic paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) is important in this development. They were influenced by the work of Nicolas Rashevsky in the 1930s and by symbolic logic in the style of Principia Mathematica.

Hebb contributed greatly to speculations about neural functioning, and proposed a learning principle, Hebbian learning. Lashley argued for distributed representations as a result of his failure to find anything like a localized engram in years of lesion experiments. Friedrich Hayek independently conceived the model, first in a brief unpublished manuscript in 1920, then expanded into a book in 1952.

The Perceptron machines were proposed and built by Frank Rosenblatt, who published the 1958 paper “The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain” in Psychological Review, while working at the Cornell Aeronautical Laboratory. He cited Hebb, Hayek, Uttley, and Ashby as main influences.

Another form of connectionist model was the relational network framework developed by the linguist Sydney Lamb in the 1960s.

The research group led by Widrow empirically searched for methods to train two-layered ADALINE networks (MADALINE), with limited success.

A method to train multilayered perceptrons with arbitrary levels of trainable weights was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965, called the Group Method of Data Handling. This method employs incremental layer by layer training based on regression analysis, where useless units in hidden layers are pruned with the help of a validation set.

The first multilayered perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes.

In 1972, Shun'ichi Amari produced an early example of a self-organizing network.

The neural network winter

There was some conflict among artificial intelligence researchers as to what neural networks are useful for. Around the late 1960s, there was a widespread lull in research and publications on neural networks, "the neural network winter", which lasted through the 1970s, during which the field of artificial intelligence turned towards symbolic methods. The publication of Perceptrons (1969) is typically regarded as a catalyst of this event.

The second wave

The second wave began in the early 1980s. Some key publications included (John Hopfield, 1982), which popularized Hopfield networks; the 1986 paper that popularized backpropagation; and the 1987 two-volume book about Parallel Distributed Processing (PDP) by James L. McClelland, David E. Rumelhart et al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and the use of a sigmoid activation function instead of the old "all-or-nothing" function.
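A minimal sketch (not from the article) combining these second-wave ingredients: one hidden layer of sigmoid units trained by backpropagation on a toy XOR task. All sizes, data and hyperparameters are illustrative.

```python
# Sketch: a tiny one-hidden-layer network with sigmoid units trained by backpropagation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output
lr = 0.5

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                    # output activations
    d_out = (out - y) * out * (1 - out)           # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)            # error propagated back to the hidden layer
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # typically approaches [[0], [1], [1], [0]]; depends on initialization
```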

Hopfield approached the field from the perspective of statistical mechanics, providing some early forms of mathematical rigor that increased the perceived respectability of the field. Another important series of publications proved that neural networks are universal function approximators, which also provided some mathematical respectability.

Some early popular demonstration projects appeared during this time. NETtalk (1987) learned to pronounce written English. It achieved popular success, appearing on the Today show. TD-Gammon (1992) reached top human level in backgammon.

Connectionism vs. computationalism debate

As connectionism became increasingly popular in the late 1980s, some researchers (including Jerry Fodor, Steven Pinker and others) reacted against it. They argued that connectionism, as then developing, threatened to obliterate what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach of computationalism. Computationalism is a specific form of cognitivism that argues that mental activity is computational, that is, that the mind operates by performing purely formal operations on symbols, like a Turing machine. Some researchers argued that the trend in connectionism represented a reversion toward associationism and the abandonment of the idea of a language of thought, something they saw as mistaken. In contrast, those very tendencies made connectionism attractive for other researchers.

Connectionism and computationalism need not be at odds, but the debate in the late 1980s and early 1990s led to opposition between the two approaches. Throughout the debate, some researchers have argued that connectionism and computationalism are fully compatible, though full consensus on this issue has not been reached. Differences between the two approaches include the following:

  • Computationalists posit symbolic models that are not structurally similar to underlying brain structure, whereas connectionists engage in "low-level" modeling, trying to ensure that their models resemble neurological structures.
  • Computationalists in general focus on the structure of explicit symbols (mental models) and syntactical rules for their internal manipulation, whereas connectionists focus on learning from environmental stimuli and storing this information in a form of connections between neurons.
  • Computationalists believe that internal mental activity consists of manipulation of explicit symbols, whereas connectionists believe that the manipulation of explicit symbols provides a poor model of mental activity.
  • Computationalists often posit domain specific symbolic sub-systems designed to support learning in specific areas of cognition (e.g., language, intentionality, number), whereas connectionists posit one or a small set of very general learning-mechanisms.

Despite these differences, some theorists have proposed that the connectionist architecture is simply the manner in which organic brains happen to implement the symbol-manipulation system. This is logically possible, as it is well known that connectionist models can implement symbol-manipulation systems of the kind used in computationalist models, as indeed they must be able to do if they are to explain the human ability to perform symbol-manipulation tasks. Several cognitive models combining both symbol-manipulative and connectionist architectures have been proposed, among them Paul Smolensky's Integrated Connectionist/Symbolic Cognitive Architecture (ICS) and Ron Sun's CLARION cognitive architecture. But the debate rests on whether this symbol manipulation forms the foundation of cognition in general, so this is not a potential vindication of computationalism. Nonetheless, computational descriptions may be helpful high-level descriptions of the cognition of logic, for example.

The debate was largely centred on logical arguments about whether connectionist networks could produce the syntactic structure observed in this sort of reasoning. This was later achieved, although by using fast-variable binding abilities outside of those standardly assumed in connectionist models.

Part of the appeal of computational descriptions is that they are relatively easy to interpret, and thus may be seen as contributing to our understanding of particular mental processes, whereas connectionist models are in general more opaque, to the extent that they may be describable only in very general terms (such as specifying the learning algorithm, the number of units, etc.), or in unhelpfully low-level terms. In this sense, connectionist models may instantiate, and thereby provide evidence for, a broad theory of cognition (i.e., connectionism), without representing a helpful theory of the particular process that is being modelled. In this sense, the debate might be considered as to some extent reflecting a mere difference in the level of analysis in which particular theories are framed. Some researchers suggest that the analysis gap is the consequence of connectionist mechanisms giving rise to emergent phenomena that may be describable in computational terms.

In the 2000s, the popularity of dynamical systems in philosophy of mind added a new perspective on the debate; some authors now argue that any split between connectionism and computationalism is more conclusively characterized as a split between computationalism and dynamical systems.

In 2014, Alex Graves and others from DeepMind published a series of papers describing a novel Deep Neural Network structure called the Neural Turing Machine able to read symbols on a tape and store symbols in memory. Relational Networks, another Deep Network module published by DeepMind, are able to create object-like representations and manipulate them to answer complex questions. Relational Networks and Neural Turing Machines are further evidence that connectionism and computationalism need not be at odds.

Symbolism vs. connectionism debate

Smolensky's Subsymbolic Paradigm has to meet the Fodor-Pylyshyn challenge formulated by classical symbol theory for a convincing theory of cognition in modern connectionism. In order to be an adequate alternative theory of cognition, Smolensky's Subsymbolic Paradigm would have to explain the existence of systematicity or systematic relations in language cognition without the assumption that cognitive processes are causally sensitive to the classical constituent structure of mental representations. The subsymbolic paradigm, or connectionism in general, would thus have to explain the existence of systematicity and compositionality without relying on the mere implementation of a classical cognitive architecture. This challenge implies a dilemma: If the Subsymbolic Paradigm could contribute nothing to the systematicity and compositionality of mental representations, it would be insufficient as a basis for an alternative theory of cognition. However, if the Subsymbolic Paradigm's contribution to systematicity requires mental processes grounded in the classical constituent structure of mental representations, the theory of cognition it develops would be, at best, an implementation architecture of the classical model of symbol theory and thus not a genuine alternative (connectionist) theory of cognition. The classical model of symbolism is characterized by (1) a combinatorial syntax and semantics of mental representations and (2) mental operations as structure-sensitive processes, based on the fundamental principle of syntactic and semantic constituent structure of mental representations as used in Fodor's "Language of Thought (LOT)". This can be used to explain the following closely related properties of human cognition, namely its (1) productivity, (2) systematicity, (3) compositionality, and (4) inferential coherence.

This challenge has been met in modern connectionism, for example, not only by Smolensky's "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture", but also by Werning and Maye's "Oscillatory Networks". An overview of this is given for example by Bechtel & Abrahamsen, Marcus and Maurer.

Neutron star

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neutron_star Central neutron star...