Saturday, August 9, 2014

Nonlinear systems

In physics and other sciences, a nonlinear system, in contrast to a linear system, is a system which does not satisfy the superposition principle – meaning that the output of a nonlinear system is not directly proportional to the input.

In mathematics, a nonlinear system of equations is a set of simultaneous equations in which the unknowns (or the unknown functions, in the case of differential equations) appear as variables of a polynomial of degree higher than one, or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Known functions may appear nonlinearly without affecting this classification: in particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if it is nonlinear in terms of the other variables appearing in it.

Typically, the behavior of a nonlinear system is described by a nonlinear system of equations.
Nonlinear problems are of interest to engineers, physicists, mathematicians and many other scientists because most systems are inherently nonlinear. As nonlinear equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization).

This approximation works well up to some accuracy and over some range of input values, but interesting phenomena such as chaos[1] and singularities are hidden by linearization. It follows that some aspects of the behavior of a nonlinear system commonly appear to be chaotic, unpredictable or counterintuitive. Although such chaotic behavior may resemble random behavior, it is in fact deterministic, not random.

For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.

Definition

In mathematics, a linear function (or map) f(x) is one which satisfies both of the following properties:

  • Additivity: f(x + y) = f(x) + f(y);
  • Homogeneity: f(\alpha x) = \alpha f(x).

(Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity; for example, an antilinear map is additive but not homogeneous.) The conditions of additivity and homogeneity are often combined in the superposition principle
f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)
An equation written as
f(x) = C
is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0.

The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.
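
As a concrete illustration, here is a minimal Python sketch (our own, not from the original article; the function names are invented for the example) that spot-checks the superposition principle numerically for a few maps:

    def satisfies_superposition(f, xs, ys, coeffs, tol=1e-9):
        """Spot-check f(a*x + b*y) == a*f(x) + b*f(y) at sample points."""
        for x in xs:
            for y in ys:
                for a, b in coeffs:
                    if abs(f(a * x + b * y) - (a * f(x) + b * f(y))) > tol:
                        return False
        return True

    samples = [0.5, 1.0, -2.0, 3.7]
    coeffs = [(1.0, 1.0), (2.0, -0.5), (0.0, 3.0)]

    print(satisfies_superposition(lambda x: 3 * x, samples, samples, coeffs))   # True: linear
    print(satisfies_superposition(lambda x: x ** 2, samples, samples, coeffs))  # False: nonlinear
    print(satisfies_superposition(lambda x: x + 1, samples, samples, coeffs))   # False: affine

Note that x + 1 fails: adding a constant breaks additivity, which is why such maps are called affine rather than linear.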

Nonlinear algebraic equations

Nonlinear algebraic equations, which are also called polynomial equations, are defined by equating polynomials to zero. For example,
x^2 + x - 1 = 0.
For a single polynomial equation, root-finding algorithms can be used to find solutions to the equation (i.e., sets of values for the variables that satisfy the equation). However, systems of algebraic equations are more complicated; their study is one motivation for the field of algebraic geometry, a difficult branch of modern mathematics. It is even difficult to decide whether a given algebraic system has complex solutions (see Hilbert's Nullstellensatz). Nevertheless, in the case of systems with a finite number of complex solutions, these systems of polynomial equations are now well understood and efficient methods exist for solving them.[2]
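
As a sketch of the root-finding idea (our own illustrative code, using Newton's method, one of many such algorithms), applied to the example above, whose exact roots are (-1 ± √5)/2:

    def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
        """Find a root of f near x0 by Newton iteration x <- x - f(x)/f'(x)."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / dfdx(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    f = lambda x: x ** 2 + x - 1
    dfdx = lambda x: 2 * x + 1

    print(newton(f, dfdx, x0=1.0))   # ~0.6180339887 = (sqrt(5) - 1)/2
    print(newton(f, dfdx, x0=-2.0))  # ~-1.6180339887 = -(sqrt(5) + 1)/2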

Nonlinear recurrence relations

A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures.[3] These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
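
A minimal sketch of the logistic map (our own illustrative code), showing the sensitivity to initial conditions that is typical of chaotic nonlinear recurrences:

    def logistic_orbit(r, x0, n):
        """Return the first n iterates of x_{k+1} = r * x_k * (1 - x_k)."""
        xs = [x0]
        for _ in range(n - 1):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    # For r = 3.9 the map is chaotic: two orbits starting 1e-5 apart
    # soon disagree completely.
    a = logistic_orbit(3.9, 0.20000, 30)
    b = logistic_orbit(3.9, 0.20001, 30)
    for k in (0, 10, 20, 29):
        print(k, round(a[k], 5), round(b[k], 5))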

Nonlinear differential equations

A system of differential equations is said to be nonlinear if it is not a linear system. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.

One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations; however, the lack of a superposition principle prevents the construction of new solutions.
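
To make the heat-transport example concrete: for the one-dimensional heat equation u_t = k u_{xx} on 0 < x < L with u(0, t) = u(L, t) = 0, each mode
e^{-k n^2 \pi^2 t / L^2} \sin(n \pi x / L)
is a solution, and linearity lets us superpose them,
u(x, t) = \sum_{n=1}^{\infty} b_n e^{-k n^2 \pi^2 t / L^2} \sin(n \pi x / L)
with the coefficients b_n chosen (as a Fourier sine series) to match the initial condition.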

Ordinary differential equations

First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
\frac{du}{dx} = -u^2
has u = \frac{1}{x + C} as a general solution (and also u = 0 as a particular solution, corresponding to the limit of the general solution as C tends to infinity). The equation is nonlinear because it may be written as
\frac{du}{dx} + u^2 = 0
and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u^2 term were replaced with u, the problem would be linear (the exponential decay problem).
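
For reference, the general solution above follows by separation of variables: for u \neq 0 we may divide by -u^2 and integrate,
-\frac{du}{u^2} = dx \quad\Rightarrow\quad \frac{1}{u} = x + C \quad\Rightarrow\quad u = \frac{1}{x + C}
and differentiating u = 1/(x + C) confirms du/dx = -1/(x + C)^2 = -u^2.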

Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.

Common methods for the qualitative analysis of nonlinear ordinary differential equations include:

  • Examination of any conserved quantities, especially in Hamiltonian systems
  • Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
  • Linearization via Taylor expansion
  • Change of variables into something easier to study
  • Bifurcation theory
  • Perturbation methods (which can be applied to algebraic equations too)

Partial differential equations

The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly even linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equations are solvable.
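
For example, substituting the product ansatz u(x, t) = X(x) T(t) into the heat equation u_t = k u_{xx} gives
\frac{T'(t)}{k T(t)} = \frac{X''(x)}{X(x)} = -\lambda
and since the left side depends only on t while the middle depends only on x, both must equal a constant -\lambda, splitting the partial differential equation into the two ordinary differential equations T' = -\lambda k T and X'' = -\lambda X.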

Another common (though less mathematical) tactic, often seen in fluid mechanics and heat transfer, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier–Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one-dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one-dimensional and also yields the simplified equation.

Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.

Pendula

 
[Figure: illustration of a pendulum, and linearizations of a pendulum]

A classic, extensively studied nonlinear problem is the dynamics of a pendulum under influence of gravity. Using Lagrangian mechanics, it may be shown[4] that the motion of a pendulum can be described by the dimensionless nonlinear equation
\frac{d^2 \theta}{d t^2} + \sin(\theta) = 0
where gravity points "downwards" and \theta is the angle the pendulum forms with its rest position, as shown in the figure. One approach to "solving" this equation is to use d\theta/dt as an integrating factor, which would eventually yield
\int \frac{d \theta}{\sqrt{C_0 + 2 \cos(\theta)}} = t + C_1
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary even if C_0 = 0).
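
To see where this comes from, multiply the equation of motion by d\theta/dt and recognize a total derivative:
\frac{d\theta}{dt} \frac{d^2 \theta}{dt^2} + \sin(\theta) \frac{d\theta}{dt} = \frac{d}{dt} \left[ \frac{1}{2} \left( \frac{d\theta}{dt} \right)^2 - \cos(\theta) \right] = 0
so (d\theta/dt)^2 = C_0 + 2 \cos(\theta) for some constant C_0, and separating variables yields the elliptic integral above.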

Another way to approach the problem is to linearize any nonlinearities (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at \theta = 0, called the small angle approximation, is
\frac{d^2 \theta}{d t^2} + \theta = 0
since \sin(\theta) \approx \theta for \theta \approx 0. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at \theta = \pi, corresponding to the pendulum being straight up:
\frac{d^2 \theta}{d t^2} + \pi - \theta = 0
since \sin(\theta) \approx \pi - \theta for \theta \approx \pi. The solution to this problem involves hyperbolic sinusoids; note that, unlike the small angle approximation, this linearization is unstable, meaning that |\theta| will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.

One more interesting linearization is possible around \theta = \pi/2, around which \sin(\theta) \approx 1:
\frac{d^2 \theta}{d t^2} + 1 = 0.
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure. Other techniques may be used to find (exact) phase portraits and approximate periods.
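
A small numerical sketch (our own illustrative code, with a hand-rolled Runge–Kutta step) of how the full nonlinear pendulum drifts away from its small angle linearization at moderate amplitudes:

    import math

    def rk4_step(deriv, state, dt):
        """Advance state' = deriv(state) by one classical Runge-Kutta step."""
        k1 = deriv(state)
        k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
        k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
        k4 = deriv([s + dt * k for s, k in zip(state, k3)])
        return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

    full = lambda s: [s[1], -math.sin(s[0])]  # state s = (theta, dtheta/dt)
    linear = lambda s: [s[1], -s[0]]          # small angle approximation

    a = [1.0, 0.0]  # both start at theta = 1 rad (~57 degrees), at rest
    b = [1.0, 0.0]
    dt = 0.001
    for _ in range(int(20.0 / dt)):  # integrate to t = 20
        a = rk4_step(full, a, dt)
        b = rk4_step(linear, b, dt)
    print(round(a[0], 3), round(b[0], 3))  # the two angles have drifted visibly apart

The drift occurs because the true period of the pendulum grows with amplitude, while the linearized oscillator's period is amplitude-independent.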

Types of nonlinear behaviors

  • Classical chaos – the behavior of a system cannot be predicted.
  • Multistability – alternating between two or more exclusive states.
  • Aperiodic oscillations – functions that do not repeat values after some period (otherwise known as chaotic oscillations or chaos).
  • Amplitude death – any oscillations present in the system cease due to some kind of interaction with another system or feedback by the same system.
  • Solitons – self-reinforcing solitary waves.

Examples of nonlinear equations

See also the list of nonlinear partial differential equations

Linearity

From Wikipedia, the free encyclopedia
 
In common usage, linearity refers to a mathematical relationship or function that can be graphically represented as a straight line, as in two quantities that are directly proportional to each other, such as voltage and current in a simple DC circuit, or the mass and weight of an object.

A crude but simple example of this concept can be observed in the volume control of an audio amplifier. While our ears may (roughly) perceive a relatively even gradation of volume as the control goes from 1 to 10, the electrical power delivered to the speaker rises geometrically with each numerical increment. The perceived loudness is roughly proportional to the volume number (a linear relationship), while the wattage doubles with every unit increase (a non-linear, exponential relationship).
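
To make the contrast concrete: if the power doubles at each step, going from 1 to 10 multiplies it by 2^9 = 512, yet the steps sound roughly even because hearing is approximately logarithmic; each doubling of power corresponds to an increase of about 10 log10(2) ≈ 3 dB.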

In mathematics, a linear map or linear function f(x) is a function that satisfies the following two properties:

  • Additivity: f(x + y) = f(x) + f(y);
  • Homogeneity: f(\alpha x) = \alpha f(x).

The homogeneity and additivity properties together are called the superposition principle. It can be shown that additivity implies homogeneity in all cases where α is rational; this is done by proving the case where α is a natural number by mathematical induction and then extending the result to arbitrary rational numbers. If f is assumed to be continuous as well, then this can be extended to show homogeneity for any real number α, using the fact that the rationals form a dense subset of the reals.
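
A sketch of that induction argument: additivity gives f(2x) = f(x + x) = 2 f(x), and repeating it gives f(n x) = n f(x) for every natural number n. Writing x = q (x/q) then gives f(x) = q f(x/q), so f(x/q) = f(x)/q, and combining the two steps, f((p/q) x) = (p/q) f(x) for any rational p/q. (Additivity also forces f(0) = f(0 + 0) = 2 f(0), so f(0) = 0 and hence f(-x) = -f(x), covering negative rationals.)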

In this definition, x is not necessarily a real number, but can in general be a member of any vector space. A more specific definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics.

The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and many constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it is generally straightforward to solve by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.
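
A quick symbolic check of that linearity (our own illustrative snippet, using the sympy library):

    from sympy import Function, diff, symbols

    x, a, b = symbols('x a b')
    f = Function('f')(x)
    g = Function('g')(x)

    # The derivative distributes over a linear combination:
    # d/dx(a*f + b*g) = a*f' + b*g'
    lhs = diff(a * f + b * g, x)
    rhs = a * diff(f, x) + b * diff(g, x)
    print((lhs - rhs).simplify() == 0)  # True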

Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear transformations (also called linear maps), and systems of linear equations.

The word linear comes from the Latin word linearis, which means pertaining to or resembling a line. For a description of linear and nonlinear equations, see linear equation. Nonlinear equations and functions are of interest to physicists and mathematicians because they can be used to represent many natural phenomena, including chaos.

Integral linearity

For a device that converts a quantity to another quantity there are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. In each case, linearity defines how well the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured in terms of a deviation, or non-linearity, from an ideal straight line, and it is typically expressed in terms of percent of full scale, or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain or offset errors that may be present in the actual device's performance characteristics.

Often a device's specifications will simply refer to linearity, with no other explanation as to which type of linearity is intended. In cases where a specification is expressed simply as linearity, it is assumed to imply independent linearity.

Independent linearity is probably the most commonly used linearity definition and is often found in the specifications for DMMs and ADCs, as well as devices like potentiometers. Independent linearity is defined as the maximum deviation of actual performance relative to a straight line, located such that it minimizes the maximum deviation. In that case there are no constraints placed upon the positioning of the straight line and it may be wherever necessary to minimize the deviations between it and the device's actual performance characteristic.

Zero-based linearity forces the lower range value of the straight line to be equal to the actual lower range value of the device's characteristic, but it does allow the line to be rotated to minimize the maximum deviation. In this case, since the positioning of the straight line is constrained by the requirement that the lower range values of the line and the device's characteristic be coincident, the non-linearity based on this definition will generally be larger than for independent linearity.

For terminal linearity, there is no flexibility allowed in the placement of the straight line in order to minimize the deviations. The straight line must be located such that each of its end-points coincides with the device's actual upper and lower range values. This means that the non-linearity measured by this definition will typically be larger than that measured by the independent, or the zero-based linearity definitions. This definition of linearity is often associated with ADCs, DACs and various sensors.

A fourth linearity definition, absolute linearity, is sometimes also encountered. Absolute linearity is a variation of terminal linearity, in that it allows no flexibility in the placement of the straight line; however, in this case the gain and offset errors of the actual device are included in the linearity measurement, making this the most difficult measure of a device's performance. For absolute linearity the end points of the straight line are defined by the ideal upper and lower range values for the device, rather than the actual values. The linearity error in this instance is the maximum deviation of the actual device's performance from ideal.
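
The distinctions are easy to see numerically. In this illustrative Python sketch (the device data are made up, and a least-squares line stands in for the strictly minimax line of independent linearity), the more constrained definitions report larger non-linearity:

    import numpy as np

    x = np.linspace(0.0, 10.0, 11)   # input quantity
    y = x + 0.002 * x * (10.0 - x)   # hypothetical device output with a gentle bow

    def max_dev_pct_fs(y_line):
        """Maximum deviation from a reference line, as percent of full scale."""
        return 100.0 * np.max(np.abs(y - y_line)) / (y.max() - y.min())

    # Terminal (end-point) linearity: the line must join the measured end points.
    m_t = (y[-1] - y[0]) / (x[-1] - x[0])
    terminal = max_dev_pct_fs(y[0] + m_t * (x - x[0]))

    # Zero-based linearity: anchored at the lower range value, but the slope may
    # rotate; a crude scan picks the slope minimizing the maximum deviation.
    slopes = np.linspace(0.9 * m_t, 1.1 * m_t, 2001)
    zero_based = min(max_dev_pct_fs(y[0] + m * (x - x[0])) for m in slopes)

    # Unconstrained least-squares line (stand-in for independent linearity).
    m_ls, b_ls = np.polyfit(x, y, 1)
    independent = max_dev_pct_fs(m_ls * x + b_ls)

    print(terminal, zero_based, independent)  # here: terminal largest, independent smallest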

Linear polynomials

In a usage different from the above definitions, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a straight line.

Over the reals, a linear function is one of the form:
f(x) = m x + b
where m is often called the slope or gradient, and b the y-intercept, which gives the point of intersection between the graph of the function and the y-axis.

Note that this usage of the term linear is not the same as the above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if b = 0. Hence, if b ≠ 0, the function is often called an affine function (see in greater generality affine transformation).

Boolean functions

In Boolean algebra, a linear function is a function f for which there exist a_0, a_1, \ldots, a_n \in \{0,1\} such that
f(b_1, \ldots, b_n) = a_0 \oplus (a_1 \land b_1) \oplus \cdots \oplus (a_n \land b_n) for all b_1, \ldots, b_n \in \{0,1\}.
A Boolean function is linear if one of the following holds for the function's truth table:
  1. In every row in which the truth value of the function is 'T', there is an odd number of 'T's assigned to the arguments, and in every row in which the function is 'F' there is an even number of 'T's assigned to the arguments. Specifically, f('F', 'F', ..., 'F') = 'F', and these functions correspond to linear maps over the Boolean vector space.
  2. In every row in which the value of the function is 'T', there is an even number of 'T's assigned to the arguments of the function, and in every row in which the truth value of the function is 'F', there is an odd number of 'T's assigned to the arguments. In this case, f('F', 'F', ..., 'F') = 'T'.
Another way to express this is that each variable always makes a difference in the truth-value of the operation or it never makes a difference.

Negation, logical biconditional, exclusive or, tautology, and contradiction are linear functions.
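
A brute-force Python sketch (our own illustrative code) that decides linearity by searching over all coefficient choices in the definition above:

    from itertools import product

    def is_linear(f, n):
        """True if some XOR-of-ANDs form a_0 ^ (a_1 & b_1) ^ ... ^ (a_n & b_n)
        reproduces f on all 2^n inputs."""
        for coeffs in product([0, 1], repeat=n + 1):
            a0, rest = coeffs[0], coeffs[1:]
            if all(f(*bits) == (a0 ^ (sum(a & b for a, b in zip(rest, bits)) % 2))
                   for bits in product([0, 1], repeat=n)):
                return True
        return False

    print(is_linear(lambda p, q: p ^ q, 2))  # True: exclusive or is linear
    print(is_linear(lambda p: 1 - p, 1))     # True: negation (a_0 = 1, a_1 = 1)
    print(is_linear(lambda p, q: p & q, 2))  # False: conjunction is not linear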

Physics

In physics, linearity is a property of the differential equations governing many systems; for instance, the Maxwell equations or the diffusion equation.

Linearity of a differential equation means that if two functions f and g are solutions of the equation, then any linear combination af + bg is, too.

Electronics

In electronics, the linear operating region of a device, for example a transistor, is where a dependent variable (such as the transistor collector current) is directly proportional to an independent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment is a high fidelity audio amplifier, which must amplify a signal without changing its waveform. Others are linear filters, linear regulators, and linear amplifiers in general.

In most scientific and technological, as distinct from mathematical, applications, something may be described as linear if the characteristic is approximately but not exactly a straight line; and linearity may be valid only within a certain operating region—for example, a high-fidelity amplifier may distort even a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity); and may distort very badly if the input exceeds a certain value, taking it away from the approximately linear part of the transfer function.

Military tactical formations

In military tactical formations, "linear formations" were adapted from phalanx-like formations of pike protected by handgunners towards shallow formations of handgunners protected by progressively fewer pikes. This kind of formation would get thinner until its extreme in the age of Wellington with the 'Thin Red Line'. It would eventually be replaced by skirmish order at the time of the invention of the breech-loading rifle that allowed soldiers to move and fire independently of the large-scale formations and fight in small, mobile units.

Art

Linear is one of the five categories proposed by Swiss art historian Heinrich Wölfflin to distinguish "Classic", or Renaissance art, from the Baroque. According to Wölfflin, painters of the fifteenth and early sixteenth centuries (Leonardo da Vinci, Raphael or Albrecht Dürer) are more linear than "painterly" Baroque painters of the seventeenth century (Peter Paul Rubens, Rembrandt, and Velázquez) because they primarily use outline to create shape.[1] Linearity in art can also be referenced in digital art. For example, hypertext fiction can be an example of nonlinear narrative, but there are also websites designed to go in a specified, organized manner, following a linear path.

Music

In music the linear aspect is succession, either intervals or melody, as opposed to simultaneity or the vertical aspect.

Measurement

In measurement, the term "linear foot" refers to the number of feet in a straight line of material (such as lumber or fabric), generally without regard to the width. It is sometimes incorrectly referred to as "lineal feet"; however, "lineal" is typically reserved for usage when referring to ancestry or heredity.[1] The words "linear"[2] and "lineal"[3] both descend from the same root meaning, the Latin word for line, which is "linea".

Colligative properties

From Wikipedia, the free encyclopedia
 
In chemistry, colligative properties are properties of solutions that depend upon the ratio of the number of solute particles to the number of solvent molecules in a solution, and not on the type of chemical species present.[1] This number ratio can be related to the various units for concentration of solutions. Here we shall consider only those properties which result from the dissolution of a nonvolatile solute in a volatile liquid solvent.[2] They are independent of the nature of the solute particles, and are due essentially to the dilution of the solvent by the solute. The word colligative is derived from the Latin colligatus, meaning bound together.[3]

Colligative properties include:
  1. Relative lowering of vapor pressure
  2. Elevation of boiling point
  3. Depression of freezing point
  4. Osmotic pressure.
Measurement of colligative properties for a dilute solution of a non-ionized solute such as urea or glucose in water or another solvent can lead to determinations of relative molar masses, both for small molecules and for polymers which cannot be studied by other means. Alternatively, measurements for ionized solutes can lead to an estimation of the percentage of ionization taking place.

Colligative properties are mostly studied for dilute solutions, whose behavior may often be approximated as that of an ideal solution.

Relative lowering of vapor pressure

The vapor pressure of a liquid is the pressure of a vapor in equilibrium with the liquid phase. The vapor pressure of a solvent is lowered by addition of a non-volatile solute to form a solution.
For an ideal solution, the equilibrium vapor pressure is given by Raoult's law as
p = p^{\star}_{\rm A} x_{\rm A} + p^{\star}_{\rm B} x_{\rm B} + \cdots
where p^{\star}_{\rm i} is the vapor pressure of the pure component (i = A, B, ...) and x_{\rm i} is the mole fraction of the component in the solution.
For a solution with a solvent (A) and one non-volatile solute (B), p^{\star}_{\rm B} = 0, so p = p^{\star}_{\rm A} x_{\rm A}.

The vapor pressure lowering relative to pure solvent is \Delta p = p^{\star}_{\rm A} - p = p^{\star}_{\rm A} (1 - x_{\rm A}) = p^{\star}_{\rm A} x_{\rm B}, which is proportional to the mole fraction of solute.
If the solute dissociates in solution, then the vapor pressure lowering is increased by the van 't Hoff factor i, which represents the true number of solute particles for each formula unit. For example, the strong electrolyte MgCl2 dissociates into one Mg2+ ion and two Cl- ions, so that if ionization is complete, i = 3 and \Delta p = 3 p^{\star}_{\rm A} x_{\rm B}. The measured colligative properties show that i is somewhat less than 3 due to ion association.
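
As a rough worked example (assuming complete dissociation and ideal behavior): dissolving 1 mol of MgCl2 in 1 kg of water (about 55.5 mol) gives an effective particle mole fraction x_B ≈ 3/58.5 ≈ 0.051, so at 25 °C, where p^{\star} ≈ 23.8 mmHg for pure water, the vapor pressure is lowered by roughly 23.8 × 0.051 ≈ 1.2 mmHg.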

Boiling point and freezing point

Addition of solute to form a solution stabilizes the solvent in the liquid phase, and lowers the solvent chemical potential so that solvent molecules have less tendency to move to the gas or solid phases.
As a result, liquid solutions slightly above the solvent boiling point at a given pressure become stable, which means that the boiling point increases. Similarly, liquid solutions slightly below the solvent freezing point become stable meaning that the freezing point decreases. Both the boiling point elevation and the freezing point depression are proportional to the lowering of vapor pressure in a dilute solution.

These properties are colligative in systems where the solute is essentially confined to the liquid phase. Boiling point elevation (like vapor pressure lowering) is colligative for non-volatile solutes where the solute presence in the gas phase is negligible. Freezing point depression is colligative for most solutes since very few solutes dissolve appreciably in solid solvents.

Boiling point elevation (ebullioscopy)

The boiling point of a liquid is the temperature (T_{\rm b}) at which its vapor pressure is equal to the external pressure. The normal boiling point is the boiling point at a pressure equal to 1 atmosphere.
The boiling point of a pure solvent is increased by the addition of a non-volatile solute, and the elevation can be measured by ebullioscopy. It is found that
\Delta T_{\rm b} = T_{\rm b}(solution) - T_{\rm b}(solvent) = i\cdot K_b \cdot m
Here i is the van 't Hoff factor as above, Kb is the ebullioscopic constant of the solvent (equal to 0.512 °C kg/mol for water), and m is the molality of the solution.
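
For example (assuming complete dissociation, so i ≈ 2): a solution of 0.50 mol of NaCl per kilogram of water boils about \Delta T_{\rm b} = 2 × 0.512 × 0.50 ≈ 0.51 °C above pure water.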

The boiling point is the temperature at which there is equilibrium between liquid and gas phases. At the boiling point, the number of gas molecules condensing to liquid equals the number of liquid molecules evaporating to gas. Adding a solute dilutes the concentration of the liquid molecules and reduces the rate of evaporation. To compensate for this and re-attain equilibrium, the boiling point occurs at a higher temperature.

If the solution is assumed to be an ideal solution, Kb can be evaluated from the thermodynamic condition for liquid-vapor equilibrium. At the boiling point the chemical potential μA of the solvent in the solution phase equals the chemical potential in the pure vapor phase above the solution.
\mu_A(T_b) = \mu_A^{\star}(T_b) + RT_b \ln x_A = \mu_A^{\star}(g, 1\ \mathrm{atm})
where the asterisks indicate pure phases. This leads to the result K_b = RMT_b^2/\Delta H_{\mathrm{vap}}, where R is the molar gas constant, M is the solvent molar mass and ΔHvap is the solvent molar enthalpy of vaporization.[4]
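
Plugging in handbook values for water (assumed here: ΔHvap ≈ 40.66 kJ/mol at the boiling point) reproduces the quoted ebullioscopic constant; a quick check in Python:

    R = 8.314         # J/(K mol), molar gas constant
    M = 0.018015      # kg/mol, molar mass of water
    T_b = 373.15      # K, normal boiling point of water
    dH_vap = 40660.0  # J/mol, molar enthalpy of vaporization near T_b

    K_b = R * M * T_b ** 2 / dH_vap
    print(round(K_b, 3))  # ~0.513 K kg/mol, matching the quoted 0.512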

Freezing point depression (cryoscopy)

The freezing point (T_{\rm f}) of a pure solvent is lowered by the addition of a solute which is insoluble in the solid solvent, and the measurement of this difference is called cryoscopy. It is found that
\Delta T_{\rm f} = T_{\rm f}(solution) - T_{\rm f}(solvent) = - i\cdot K_f \cdot m
Here Kf is the cryoscopic constant, equal to 1.86 °C kg/mol for the freezing point of water. Again "i" is the van 't Hoff factor and m the molality.

In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer molecules are available to freeze. Re-establishment of equilibrium is achieved at a lower temperature at which the rate of freezing becomes equal to the rate of liquefying. At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well. The equality of chemical potentials permits the evaluation of the cryoscopic constant as K_f = RMT_f^2/\Delta H_{\mathrm{fus}}, where ΔHfus is the solvent molar enthalpy of fusion.[4]
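
The same handbook-value check works here: taking ΔHfus ≈ 6.01 kJ/mol for ice, K_f = R M T_f^2 / \Delta H_{\mathrm{fus}} = 8.314 × 0.018015 × (273.15)^2 / 6010 ≈ 1.86 °C kg/mol, matching the quoted constant.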

Osmotic pressure

The osmotic pressure of a solution is the difference in pressure between the solution and the pure liquid solvent when the two are in equilibrium across a semipermeable membrane, which allows the passage of solvent molecules but not of solute particles. If the two phases are at the same initial pressure, there is a net transfer of solvent across the membrane into the solution known as osmosis.
The process stops and equilibrium is attained when the pressure difference equals the osmotic pressure.
Two laws governing the osmotic pressure of a dilute solution were discovered by the German botanist W. F. P. Pfeffer and the Dutch chemist J. H. van 't Hoff:
  1. The osmotic pressure of a dilute solution at constant temperature is directly proportional to its concentration.
  2. The osmotic pressure of a solution is directly proportional to its absolute temperature.
These are analogous to Boyle's law and Charles's Law for gases. Similarly, the combined ideal gas law, pV = nRT, has as analog for ideal solutions \Pi V = n R T i, where \Pi is osmotic pressure; V is the volume; n is the number of moles of solute; R is the molar gas constant 8.314 J K−1 mol−1; T is absolute temperature; and i is the Van 't Hoff factor.
The osmotic pressure is then proportional to the molar concentration c = n/V, since
\Pi = \frac {n R T i}{V} = c R T i
The osmotic pressure is proportional to the concentration of solute particles ci and is therefore a colligative property.
As with the other colligative properties, this equation is a consequence of the equality of solvent chemical potentials of the two phases in equilibrium. In this case the phases are the pure solvent at pressure P and the solution at total pressure P + π.[5]
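
A worked example (under the ideal-solution assumption): for 0.100 mol of glucose (i = 1) per litre of solution at 298 K, c = 100 mol/m^3 and \Pi = c R T i = 100 × 8.314 × 298 ≈ 2.5 × 10^5 Pa, about 2.4 atm, a surprisingly large pressure for such a dilute solution.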

History

The word colligative (German: kolligativ) was introduced in 1891 by Wilhelm Ostwald. Ostwald classified solute properties in three categories:[6][7]
  1. colligative properties which depend only on solute concentration and temperature, and are independent of the nature of the solute particles
  2. additive properties such as mass, which are the sums of properties of the constituent particles and therefore depend also on the composition (or molecular formula) of the solute, and
  3. constitutional properties which depend further on the molecular structure of the solute.
