Friday, August 8, 2014

Enthalpy

From Wikipedia, the free encyclopedia
            
Enthalpy is a defined thermodynamic potential, designated by the letter "H", that consists of the internal energy of the system (U) plus the product of pressure (p) and volume (V) of the system:[1]
H = U + pV
Since enthalpy, H, consists of internal energy, U, plus the product of pressure (p) and the volume (V) of the system, which are all functions of the state of the thermodynamic system, enthalpy is a state function.
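As a concrete numerical illustration, the short Python sketch below evaluates H = U + pV for one mole of a monatomic ideal gas; the numbers are assumptions chosen for the example, not values from the article.

# Illustrative sketch: H = U + pV for one mole of a monatomic ideal gas.
# All numbers below are assumed example values, not data from the text.
R = 8.314          # gas constant, J/(mol K)
n = 1.0            # amount of substance, mol
T = 300.0          # temperature, K
p = 1.0e5          # pressure, Pa

U = 1.5 * n * R * T        # internal energy of a monatomic ideal gas, J
V = n * R * T / p          # volume from the ideal-gas law, m^3
H = U + p * V              # enthalpy, J

print(f"U = {U:.1f} J, pV = {p*V:.1f} J, H = {H:.1f} J")
# For an ideal gas H = U + nRT depends only on the state (here T), not on the path.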

The unit of measurement for enthalpy in the International System of Units (SI) is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie.
The enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements, because it simplifies certain descriptions of energy transfer. Enthalpy change accounts for energy transferred to the environment at constant pressure through expansion or heating.
The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. The change ΔH is positive in endothermic reactions, and negative in heat-releasing exothermic processes.
For processes under constant pressure, ΔH is equal to the change in the internal energy of the system, plus the work that the system has done on its surroundings.[2] This means that the change in enthalpy under such conditions is the heat absorbed (or released) by the material through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure assume standard state: most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature (see standard state), but expressions for enthalpy generally reference the standard heat of formation at 25 °C.

Enthalpy of ideal gases and incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.

Origins

The word enthalpy is based on the Greek enthalpein (ἐνθάλπειν), which means "to warm in".[3] It comes from the Classical Greek prefix ἐν- en-, meaning "to put into", and the verb θάλπειν thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed[citation needed] to Benoît Paul Émile Clapeyron and Rudolf Clausius through the 1850 publication of their Clausius–Clapeyron relation. This misconception was popularized by the 1927 publication of The Mollier Steam Tables and Diagrams. However, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death.

The earliest writings to contain the concept of enthalpy did not appear until 1875,[4] when Josiah Willard Gibbs introduced "a heat function for constant pressure". However, Gibbs did not use the word "enthalpy" in his writings.[note 1]

The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton. According to that publication, Heike Kamerlingh Onnes (1853-1926) actually coined the word.[5]

Over the years, many different symbols were used to denote enthalpy. It was not until 1922 that Alfred W. Porter proposed the symbol "H" as the accepted standard,[6] thus finalizing the terminology still in use today.

Formal definition

The enthalpy of a homogeneous system is defined as:[7][8]
H = U + pV
where
H is the enthalpy of the system
U is the internal energy of the system
p is the pressure of the system
V is the volume of the system.
The enthalpy is an extensive property. This means that, for homogeneous systems, the enthalpy is proportional to the size of the system. It is convenient to introduce the specific enthalpy h = H/m, where m is the mass of the system, or the molar enthalpy Hm = H/n, where n is the number of moles (h and Hm are intensive properties). For inhomogeneous systems the enthalpy is the sum of the enthalpies of the composing subsystems:
H = \sum_k H_k
where the label k refers to the various subsystems. In case of continuously varying p, T, and/or composition the summation becomes an integral:
H = \int \rho h \mathrm{d}V,
where ρ is the density.
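As a minimal numerical sketch of the volume integral above (Python; the density and specific-enthalpy profiles are hypothetical, chosen only for illustration), the integral can be approximated by a sum over small cells:

# Sketch: total enthalpy as a discretized volume integral, H ~= sum(rho_k * h_k * dV_k).
# The profiles are hypothetical example data, not values from the text.
import numpy as np

N = 100                                  # number of volume cells
dV = 1.0e-3 / N                          # cell volume for a 1-litre system, m^3
rho = np.linspace(1.10, 1.20, N)         # assumed density in each cell, kg/m^3
h = np.linspace(4.60e5, 4.70e5, N)       # assumed specific enthalpy in each cell, J/kg

H = np.sum(rho * h * dV)                 # total enthalpy, J
print(f"H = {H:.2f} J")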

The enthalpy H(S,p) of homogeneous systems can be derived as a characteristic function of the entropy S and the pressure p as follows: we start from the first law of thermodynamics for closed systems for an infinitesimal process
\mathrm{d} U = \delta Q -\delta W.
Here, δQ is a small amount of heat added to the system and δW a small amount of work performed by the system. In a homogeneous system only reversible processes can take place so the second law of thermodynamics gives δQ = TdS with T the absolute temperature of the system. Furthermore, if only pV work is done, δW = pdV. As a result
\mathrm{d} U = T\mathrm{d}S-p\mathrm{d}V.
Adding d(pV) to both sides of this expression gives
\mathrm{d}U+ \mathrm{d}(pV) = T\mathrm{d}S-p\mathrm{d}V+ \mathrm{d}(pV)
or
\mathrm{d} (U + pV) = T\mathrm{d}S+V\mathrm{d}p.
So
\mathrm{d} H(S,p) = T\mathrm{d} S + V \mathrm{d} p.

Other expressions

The expression of dH in terms of entropy and pressure may be unfamiliar to many readers. However, there are expressions in terms of more familiar variables such as temperature and pressure[9][10]
\mathrm{d}H = C_p\mathrm{d}T+V(1-\alpha T)\mathrm{d}p.
Here Cp is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion
\alpha=\frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p.
With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T.
Notice that for an ideal gas, \alpha T = 1,[note 2] so that:
\mathrm{d}H = C_p\mathrm{d}T
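A minimal Python sketch of this ideal-gas relation: heating nitrogen from 300 K to 380 K at roughly constant Cp (the value of about 29.1 J/(mol K) is an approximate, assumed constant, not a figure from the text).

# Sketch: for an (approximately) ideal gas, dH = Cp dT, so for roughly constant Cp
# the molar enthalpy change is just Cp * (T2 - T1).
Cp = 29.1              # assumed approximate Cp of N2, J/(mol K)
T1, T2 = 300.0, 380.0  # K

dH = Cp * (T2 - T1)    # molar enthalpy change, J/mol
print(f"dH ~= {dH:.0f} J/mol")   # about 2.3 kJ/mol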
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes:
\mathrm{d}H = T\mathrm{d}S+V\mathrm{d}p + \sum_i \mu_i \mathrm{d}N_i
where μi is the chemical potential per particle for an i-type particle, and Ni is the number of such particles. The last term can also be written as μidni (with dni the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as μidmi (with dmi the mass of component i added to the system and, in this case, μi the specific chemical potential).

Enthalpy versus internal energy

The U term can be interpreted as the energy required to create the system, and the pV term as the energy that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure.

In basic physics and statistical mechanics it may be more interesting to study the internal properties of the system and therefore the internal energy is used.[11][12] In basic chemistry, experiments are often conducted at atmospheric pressure and H is therefore more useful for reaction energy calculations. Furthermore the enthalpy is the workhorse of engineering thermodynamics as we will see later.

Relationship to heat

In order to discuss the relation between the enthalpy increase and heat supply we return to the first law for closed systems: dU = δQ - δW. We apply it to the special case that the pressure at the surface is uniform. In this case the work term can be split into two contributions, the so-called pV work, given by pdV (where here p is the pressure at the surface and dV is the increase of the volume of the system), and other types of work δW′, such as work done by a shaft or by electromagnetic interaction. So we write δW = pdV + δW′. In this case the first law reads
\mathrm{d} U = \delta Q -p\mathrm{d}V-\delta W^\prime
or
\mathrm{d} H = \delta Q +V\mathrm{d}p-\delta W^\prime.
From this relation we see that the increase in enthalpy of a system is equal to the added heat
\mathrm{d} H = \delta Q
provided that the system is under constant pressure (dp = 0) and that the only work done by the system is expansion work (δW ' = 0)[13]
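A minimal Python sketch of this result: heating water in an open (constant-pressure) vessel, where the heat supplied equals the enthalpy change. The specific heat cp ≈ 4.18 kJ/(kg K) is an approximate textbook value assumed for illustration.

# Sketch: at constant pressure, with only expansion work, dH = delta-Q.
# Heating 0.5 kg of liquid water from 20 C to 80 C at atmospheric pressure.
m = 0.5              # mass of water, kg (assumed)
cp = 4.18e3          # approximate specific heat at constant pressure, J/(kg K)
dT = 80.0 - 20.0     # temperature rise, K

Q = m * cp * dT      # heat supplied at constant pressure, J
dH = Q               # enthalpy change equals the heat supplied, J
print(f"Q = dH = {dH/1e3:.1f} kJ")   # about 125 kJ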

Applications

In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the constancy of conditions present at the creation of the thermodynamic system.

Internal energy, U, must be supplied to remove particles from the surroundings in order to allow space for the creation of the system, provided that environmental variables, such as the pressure p, remain constant. This internal energy also includes the energy required for activation and the breaking of bonded compounds into gaseous species.

This process is calculated within enthalpy calculations as U + pV, to label the amount of energy or work required to "set aside space for" and "create" the system; describing the work done by both the reaction or formation of systems, and the surroundings. For systems at constant pressure, the change in enthalpy is the heat received by the system.

Therefore, the change in enthalpy can be devised or represented without the need for compressive or expansive mechanics; for a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.[this quote needs a citation]

The term pV is the work required to displace the surrounding atmosphere in order to vacate the space to be occupied by the system.

Heat of reaction

The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:

\Delta H = H_f - H_i

where

\Delta H is the "enthalpy change"

H_f is the final enthalpy of the system, expressed in joules. In a chemical reaction, H_f is the enthalpy of the products.

H_i is the initial enthalpy of the system, expressed in joules. In a chemical reaction, H_i is the enthalpy of the reactants.

For an exothermic reaction at constant pressure, the system's change in enthalpy equals the energy released in the reaction, including the energy retained in the system and lost through expansion against its surroundings. In a similar manner, for an endothermic reaction, the system's change in enthalpy is equal to the energy absorbed in the reaction, including the energy lost by the system and gained from compression by its surroundings. A relatively easy way to determine whether a reaction is exothermic or endothermic is to check the sign of ΔH. If ΔH is positive, the reaction is endothermic, that is, heat is absorbed by the system because the products of the reaction have a greater enthalpy than the reactants. On the other hand, if ΔH is negative, the reaction is exothermic, that is, the overall decrease in enthalpy is achieved by the generation of heat.
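As a numerical sketch of the sign convention (Python), consider the combustion of methane evaluated from standard enthalpies of formation. The ΔfH° values used are approximate textbook numbers assumed for illustration, not data from this article.

# Sketch: sign of delta-H from standard enthalpies of formation (approximate
# textbook values in kJ/mol, assumed inputs for illustration).
# Reaction: CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
dHf = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

H_products  = dHf["CO2(g)"] + 2 * dHf["H2O(l)"]
H_reactants = dHf["CH4(g)"] + 2 * dHf["O2(g)"]
dH = H_products - H_reactants          # kJ per mole of CH4

print(f"dH = {dH:.1f} kJ/mol")         # about -890 kJ/mol
print("exothermic" if dH < 0 else "endothermic")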

Although enthalpy is commonly used in engineering and science, it is impossible to measure directly, as enthalpy has no datum (reference point); only changes in enthalpy are meaningful. Therefore enthalpy can only accurately be used within a closed system. However, few real-world applications exist in closed isolation, and it is for this reason that two or more closed systems cannot correctly be compared using enthalpy as a basis.

Specific enthalpy

As noted before, the specific enthalpy of a uniform system is defined as h = H/m where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities by h = u + pv, where u is the specific internal energy, p is the pressure, and v is specific volume, which is equal to 1/ρ, where ρ is the density.
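A minimal Python sketch of the relation h = u + pv = u + p/ρ; the numbers are hypothetical, chosen only to illustrate the arithmetic.

# Sketch: specific enthalpy h = u + p*v = u + p/rho.
# Hypothetical example values, not data from the text.
u   = 2.1e5      # specific internal energy, J/kg
p   = 1.0e5      # pressure, Pa
rho = 1.2        # density, kg/m^3

v = 1.0 / rho    # specific volume, m^3/kg
h = u + p * v    # specific enthalpy, J/kg
print(f"h = {h/1e3:.1f} kJ/kg")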

Enthalpy changes

An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products, and the initial enthalpy of the system, i.e. the reactants. These processes are reversible and the enthalpy for the reverse process is the negative value of the forward change.

A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.

When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of 'process'. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:
  • A temperature of 25 °C or 298 K,
  • A pressure of one atmosphere (1 atm or 101.325 kPa),
  • A concentration of 1.0 M when the element or compound is present in solution,
  • Elements or compounds in their normal physical states, i.e. standard state.
For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.

Chemical properties:
  • Enthalpy of reaction, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
  • Enthalpy of formation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
  • Enthalpy of combustion, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
  • Enthalpy of hydrogenation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
  • Enthalpy of atomization, defined as the enthalpy change required to atomize one mole of compound completely.
  • Enthalpy of neutralization, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
  • Standard enthalpy of solution, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
  • Standard enthalpy of denaturation (biochemistry), defined as the enthalpy change required to denature one mole of compound.
  • Enthalpy of hydration, defined as the enthalpy change observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous ions.
Physical properties:
  • Enthalpy of fusion, defined as the enthalpy change required to completely change the state of one mole of substance between solid and liquid states.
  • Enthalpy of vaporization, defined as the enthalpy change required to completely change the state of one mole of substance between liquid and gaseous states.
  • Enthalpy of sublimation, defined as the enthalpy change required to completely change the state of one mole of substance between solid and gaseous states.
  • Lattice enthalpy, defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
  • Enthalpy of mixing, defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.

Open systems

In thermodynamic open systems, matter may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system. The first law for open systems is given by:
\mathrm{d}U = \delta Q + \mathrm{d}U_{in} - \mathrm{d}U_{out} - \delta W
where U_{in} is the average internal energy entering the system and U_{out} is the average internal energy leaving the system.
Fig.1 During steady, continuous operation, an energy balance applied to an open system equates shaft work performed by the system to heat added plus net enthalpy added

The region of space enclosed by open system boundaries is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of matter into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of matter out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and shaft work, which may be performed on some mechanical device.

These two types of work are expressed in the equation:
\delta W = \mathrm{d}(p_{out}V_{out}) - \mathrm{d}(p_{in}V_{in}) + \delta W_{shaft}
Substitution into the equation above for the control volume cv yields:
\mathrm{d}U_{cv} = \delta Q + \mathrm{d}U_{in} + \mathrm{d}(p_{in}V_{in}) - \mathrm{d}U_{out} - \mathrm{d}(p_{out}V_{out}) - \delta W_{shaft}.
The definition of enthalpy, H, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems:
\mathrm{d}U_{cv} = \delta Q + \mathrm{d}H_{in} - \mathrm{d}H_{out} - \delta W_{shaft}.
This expression is described by Fig.1. If we allow also the system boundary to move (e.g. due to moving pistons) we get a rather general form of the first law for open systems.[14] In terms of time derivatives it reads
\frac{\mathrm{d}U}{\mathrm{d}t} = \sum_k \dot Q_k + \sum_k \dot H_k - \sum_k p_k\frac{\mathrm{d}V_k}{\mathrm{d}t} - P,
where \sum represents an algebraic sum and the indices k refer to the various places where heat is supplied, matter flows into the system, and boundaries are moving. The \dot H_k terms represent enthalpy flows, which can be written as
\dot H_k = h_k\dot m_k = H_m\dot n_k
with \dot m_k the mass flow and \dot n_k the molar flow at position k respectively. The term dVk/dt represents the rate of change of the system volume at position k that results in pV power done by the system.
The parameter P represents all other forms of power done by the system such as shaft power, but it can also be e.g. electric power produced by an electrical power plant. Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions
P = \sum_k \left\langle \dot Q_k \right\rangle + \sum_k \left\langle \dot H_k \right\rangle - \sum_k \left\langle p_k\frac{\mathrm{d}V_k}{\mathrm{d}t} \right\rangle
where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
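As a minimal Python sketch of this steady-state balance for a device with fixed boundaries (dU/dt = 0, no moving walls), the power delivered reduces to P = Q̇ + ṁ(h_in − h_out), as for a turbine-like device. All numbers are hypothetical, chosen only for illustration.

# Sketch: steady-state power balance for an open system with fixed boundaries,
# P = Q_dot + m_dot*(h_in - h_out).  Hypothetical example values.
m_dot  = 0.2        # mass flow, kg/s
h_in   = 530.0e3    # specific enthalpy at the inlet, J/kg
h_out  = 461.0e3    # specific enthalpy at the outlet, J/kg
Q_dot  = -2.0e3     # heat flow to the system (negative: heat lost), W

P = Q_dot + m_dot * (h_in - h_out)   # power delivered by the system, W
print(f"P = {P/1e3:.1f} kW")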
Fig.2 Ts diagram of nitrogen. The red curve at the left is the melting curve. The red dome represents the two-phase region with the low-entropy side the saturated liquid and the high-entropy side the saturated gas. The black curves give the Ts relation along isobars. The pressures are indicated in bar. The blue curves are isenthalps (curves of constant enthalpy). The values are indicated in blue in kJ/kg. The specific points a, b, etc., are treated in the main text.

Diagrams

Nowadays the enthalpy values of important substances can be obtained via commercial software.
Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as hT diagrams, which give the specific enthalpy as a function of temperature for various pressures, and hp diagrams, which give h as a function of p for various T.
One of the most common diagrams is the temperature-entropy diagram (Ts-diagram). An example is Fig.2, which is the Ts-diagram of nitrogen.[15] It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.
Fig.3 Two open systems in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is \dot m. a: schematic diagram of the throttling process. b: schematic diagram of a compressor. A power P is applied and a heat flow \dot Q is released to the surroundings at ambient temperature Ta.

Some basic applications

The points a through h in Fig.2 play a role in the discussion in this Section.
  • a: T = 300 K, p = 1 bar, s = 6.85 kJ/(kg K), h = 461 kJ/kg;
  • b: T = 380 K, p = 2 bar, s = 6.85 kJ/(kg K), h = 530 kJ/kg;
  • c: T = 300 K, p = 200 bar, s = 5.16 kJ/(kg K), h = 430 kJ/kg;
  • d: T = 270 K, p = 1 bar, s = 6.79 kJ/(kg K), h = 430 kJ/kg;
  • e: T = 108 K, p = 13 bar, s = 3.55 kJ/(kg K), h = 100 kJ/kg (saturated liquid at 13 bar);
  • f: T = 77.2 K, p = 1 bar, s = 3.75 kJ/(kg K), h = 100 kJ/kg;
  • g: T = 77.2 K, p = 1 bar, s = 2.83 kJ/(kg K), h = 28 kJ/kg (saturated liquid at 1 bar);
  • h: T = 77.2 K, p = 1 bar, s = 5.41 kJ/(kg K), h = 230 kJ/kg (saturated gas at 1 bar).

Throttling

One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule-Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in Fig.3a. This process is very important since it is at the heart of domestic refrigerators where it is responsible for the temperature drop between ambient temperature and the interior of the fridge. It is also the final stage in many types of liquefiers.

In the first law for open systems (see above), applied to the system in Fig.3a, all terms are zero except the terms for the enthalpy flow. Hence
0=\dot m h_1 - \dot m h_2.
Since the mass flow is constant the specific enthalpies at the two sides of the flow resistance are the same
h_1 = h_2
that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using Fig.2. Point c in Fig.2 is at 200 bar and room temperature (300 K). A Joule-Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in Fig.2) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve there is a lot of friction and a lot of entropy is produced, but still the final temperature is below the starting value!

Point e is chosen so that it is on the saturated liquid line with h = 100 kJ/kg. It corresponds roughly with p = 13 bar and T = 108 K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f (hf) is equal to the enthalpy in g (hg) multiplied by the liquid fraction in f (xf) plus the enthalpy in h (hh) multiplied by the gas fraction in f (1 - xf). So
 h_f = x_f h_g+(1-x_f)h_h.
With numbers: 100 = xf 28 + (1 - xf)230 so xf = 0.64. This means that the mass fraction of the liquid in the liquid-gas mixture that leaves the throttling valve is 64%.
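The same balance can be checked with a short Python sketch that solves h_f = x_f h_g + (1 - x_f) h_h for the liquid fraction, using the Fig.2 values quoted in the text.

# Sketch: liquid fraction after throttling, from the enthalpy balance
# h_f = x_f*h_g + (1 - x_f)*h_h, with the Fig.2 values quoted above.
h_f = 100.0   # specific enthalpy entering the two-phase region, kJ/kg
h_g = 28.0    # saturated liquid enthalpy at 1 bar, kJ/kg
h_h = 230.0   # saturated gas enthalpy at 1 bar, kJ/kg

x_f = (h_h - h_f) / (h_h - h_g)   # liquid mass fraction
print(f"x_f = {x_f:.2f}")          # about 0.64, i.e. 64 % liquid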

Compressors

Fig.3b is a schematic drawing of a compressor. A power P is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds to a vertical line in Fig.2. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature Ta, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is \dot Q. Since the system is in the steady state the first law gives
0=-\dot Q + \dot m h_1 - \dot m h_2 + P.
The minimum power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives
0=-\frac{\dot Q}{T_a} + \dot m s_1 - \dot m s_2.
Eliminating \dot Q gives for the minimum power
\frac{P_{\rm min}}{\dot m} = h_2-h_1 - T_a(s_2-s_1).
For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (hc - ha) - Ta(sc - sa). With the data obtained from Fig.2, we find a value of (430 - 461) - 300 × (5.16 - 6.85) = 476 kJ/kg.
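The same arithmetic in a short Python sketch, using the point-a and point-c values quoted from Fig.2:

# Sketch: minimum specific compressor work P_min/m_dot = (h2 - h1) - Ta*(s2 - s1),
# evaluated with the point-a and point-c values quoted from Fig.2.
h1, s1 = 461.0, 6.85    # point a: 1 bar, 300 K   (kJ/kg, kJ/(kg K))
h2, s2 = 430.0, 5.16    # point c: 200 bar, 300 K (kJ/kg, kJ/(kg K))
Ta = 300.0              # ambient temperature, K

w_min = (h2 - h1) - Ta * (s2 - s1)          # kJ/kg
print(f"P_min/m_dot = {w_min:.0f} kJ/kg")   # about 476 kJ/kg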

The relation for the power can be further simplified by writing it as
\frac{P_{\rm min}}{\dot m} = \int_1^2(\mathrm{d}h-T_a\mathrm{d}s).
With dh = Tds + vdp this results in the final relation
\frac{P_{\rm min}}{\dot m} = \int_1^2v\mathrm{d}p.

Gibbs free energy

From Wikipedia, the free encyclopedia
   
In thermodynamics, the Gibbs free energy (IUPAC recommended name: Gibbs energy or Gibbs function; also known as free enthalpy[1] to distinguish it from Helmholtz free energy) is a thermodynamic potential that measures the "usefulness" or process-initiating work obtainable from a thermodynamic system at constant temperature and pressure (isothermal, isobaric). Just as potential energy in mechanics is defined as a capacity to do work, the different thermodynamic potentials have different meanings. The Gibbs free energy (SI unit: joule; the molar Gibbs energy is expressed in J/mol) is the maximum amount of non-expansion work that can be extracted from a closed system; this maximum can be attained only in a completely reversible process. When a system changes from a well-defined initial state to a well-defined final state, the change in Gibbs free energy ΔG equals the work exchanged by the system with its surroundings, minus the work of the pressure forces, during a reversible transformation of the system from the same initial state to the same final state.[2]
Gibbs energy (also referred to as ∆G) is also the chemical potential that is minimized when a system reaches equilibrium at constant pressure and temperature. Its derivative with respect to the reaction coordinate of the system vanishes at the equilibrium point. As such, it is a convenient criterion of spontaneity for processes with constant pressure and temperature.
The Gibbs free energy, originally called available energy, was developed in the 1870s by the American mathematician Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as
the greatest amount of mechanical work which can be obtained from a given quantity of a certain substance in a given initial state, without increasing its total volume or allowing heat to pass to or from external bodies, except such as at the close of the processes are left in their initial condition.[3]
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes." In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical free energy in full.

Overview


The reaction C(s, diamond) → C(s, graphite) has a negative change in Gibbs free energy and is therefore thermodynamically favorable at 25 °C and 1 atm. However, even though favorable, it is so slow that it is not observed. Whether a reaction is thermodynamically favorable does not determine its rate.

As a simple rule of thumb for reacting systems at standard temperature and pressure, every system seeks to achieve a minimum of free energy.

Out of this general natural tendency, a quantitative measure of how near or far a potential reaction is from this minimum is the change ΔG in Gibbs free energy: if the calculated energetics of the process indicate that ΔG is negative, the reaction is favoured and will release energy. The energy released equals the maximum amount of work that can be performed as a result of the chemical reaction. In contrast, if conditions indicate a positive ΔG, then energy, in the form of work, would have to be added to the reacting system for the reaction to occur.

The equation can also be seen from the perspective of both the system and its surroundings (the universe). For the purposes of calculation, we assume the reaction is the only reaction going on in the universe. Thus the entropy released or absorbed by the system is actually the entropy that the environment must absorb or release, respectively. Thus the reaction will only be allowed if the total entropy change of the universe is equal to zero (an equilibrium state) or positive. The input of heat into an "endergonic" chemical reaction (e.g. the elimination of cyclohexanol to cyclohexene) can be seen as coupling an inherently unfavourable reaction (elimination) to a favourable one (such as the burning of coal or another heat source) such that the total entropy change of the universe is greater than or equal to zero, making the Gibbs free energy of the coupled reaction negative.

In traditional use, the term "free" was attached to Gibbs free energy for systems at constant pressure and temperature to mean "available in the form of useful work."[2] For Gibbs free energy, we add the qualification that it is the energy free for non-volume work.[4] (A similar meaning applies to the Helmholtz free energy, for systems at constant volume and temperature.) However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective "free" was supposedly banished.[5][6][7] This standard, however, has not yet been universally adopted.

History

The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions.

In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue.

Hence, in 1882, the German scientist Hermann von Helmholtz stated that affinity is the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (Internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.

Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.

Graphical interpretation

Gibbs free energy was originally defined graphically. In 1873, American engineer Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance.[8] Thus, in order to understand the very difficult concept of Gibbs free energy one must be able to understand its interpretation as Gibbs defined originally by section AB on his figure 3 and as Maxwell sculpted that section on his 3D surface figure.

American engineer Willard Gibbs' 1873 figures two and three (above left and middle) were used by Scottish physicist James Clerk Maxwell in 1874 to create a three-dimensional entropy (x), volume (y), energy (z) thermodynamic surface diagram for a fictitious water-like substance. Maxwell transposed the two figures of Gibbs (above right) onto the volume-entropy coordinates (transposed to the bottom of the cube) and the energy-entropy coordinates (flipped upside down and transposed to the back of the cube), respectively, of a three-dimensional Cartesian coordinate system. The region AB is the first-ever three-dimensional representation of Gibbs free energy, or what Gibbs called "available energy"; the region AC is its capacity for entropy, what Gibbs defined as "the amount by which the entropy of the body can be increased without changing the energy of the body or increasing its volume."

Definitions


Willard Gibbs’ 1873 available energy (free energy) graph, which shows a plane perpendicular to the axis of v (volume) and passing through point A, which represents the initial state of the body. MN is the section of the surface of dissipated energy. Qε and Qη are sections of the planes η = 0 and ε = 0, and therefore parallel to the axes of ε (internal energy) and η (entropy), respectively. AD and AE are the energy and entropy of the body in its initial state, AB and AC its available energy (Gibbs free energy) and its capacity for entropy (the amount by which the entropy of the body can be increased without changing the energy of the body or increasing its volume) respectively.
The Gibbs free energy is defined as:
G(p,T) = U + pV - TS
which is the same as:
G(p,T) = H - TS
where U is the internal energy (SI unit: joule), p is the pressure, V is the volume, T is the temperature, S is the entropy, and H is the enthalpy.
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system subjected to the operation of external forces (for instance electrical or magnetic forces) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the First Law for reversible processes:
T\mathrm{d}S= \mathrm{d}U + p\mathrm{d}V-\sum_{i=1}^k \mu_i \,\mathrm{d}N_i + \sum_{i=1}^n X_i \,\mathrm{d}a_i + \cdots
\mathrm{d}(TS) - S\mathrm{d}T= \mathrm{d}U + \mathrm{d}(pV) - V\mathrm{d}p-\sum_{i=1}^k \mu_i \,\mathrm{d}N_i + \sum_{i=1}^n X_i \,\mathrm{d}a_i + \cdots
\mathrm{d}(U-TS+pV)=V\mathrm{d}p-S\mathrm{d}T+\sum_{i=1}^k \mu_i \,\mathrm{d}N_i - \sum_{i=1}^n X_i \,\mathrm{d}a_i + \cdots
\mathrm{d}G =V\mathrm{d}p-S\mathrm{d}T+\sum_{i=1}^k \mu_i \,\mathrm{d}N_i - \sum_{i=1}^n X_i \,\mathrm{d}a_i + \cdots
where μi is the chemical potential of the i-th chemical component, Ni is the number of particles (or moles) of that component, Xi is the i-th external force, and ai is the corresponding external parameter.
This is one form of Gibbs fundamental equation.[10] In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system. For a closed system, this term may be dropped.

Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term fdl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψde, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.[11]
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.

The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation and its pressure dependence is given by:
\frac{G}{N} = \left(\frac{G}{N}\right)^\circ + kT\ln \frac{p}{p^\circ}
if the volume is known rather than the pressure, then it becomes:
\frac{G}{N} = \left(\frac{G}{N}\right)^\circ + kT\ln \frac{V^\circ}{V}
or more conveniently as its chemical potential:
\frac{G}{N} = \mu = \mu^\circ + kT\ln \frac{p}{p^\circ}.
In non-ideal systems, fugacity comes into play.
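A minimal Python sketch of the ideal-gas pressure dependence above, evaluated per mole (so R replaces k); the reference value μ° and the chosen pressures are assumptions for illustration only.

# Sketch: ideal-gas pressure dependence of the chemical potential,
# mu = mu0 + R*T*ln(p/p0) (per-mole form).  mu0 and p are assumed example values.
import math

R  = 8.314       # J/(mol K)
T  = 298.15      # K
p0 = 1.0e5       # standard pressure, Pa (1 bar)
p  = 2.0e5       # actual pressure, Pa (assumed)
mu0 = 0.0        # reference chemical potential, J/mol (arbitrary zero)

mu = mu0 + R * T * math.log(p / p0)
print(f"mu - mu0 = {mu:.0f} J/mol")   # about +1.7 kJ/mol at double the pressure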

Derivation

The Gibbs free energy total differential natural variables may be derived via Legendre transforms of the internal energy.
\mathrm{d}U = T\mathrm{d}S - p \,\mathrm{d}V + \sum_i \mu_i \,\mathrm{d} N_i\,.
Because S, V, and Ni are extensive variables, Euler's homogeneous function theorem allows easy integration of dU:[12]
U = T S - p V + \sum_i \mu_i N_i\,.
The definition of G from above is
G = U + p V - T S\,.
Taking the total differential, we have
\mathrm{d} G = \mathrm{d}U + p\,\mathrm{d}V + V\mathrm{d}p - T\mathrm{d}S - S\mathrm{d}T\,.
Replacing dU with the result from the first law gives[12]
\begin{align}
\mathrm{d} G &= T\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i \,\mathrm{d} N_i + p \,\mathrm{d}V + V\mathrm{d}p - T\mathrm{d}S - S\mathrm{d}T\\
&= V\mathrm{d}p - S\mathrm{d}T + \sum_i \mu_i \,\mathrm{d} N_i
\end{align}.
The natural variables of G are then p, T, and {Ni}.

Homogeneous systems

Because some of the natural variables are intensive, dG may not be integrated using Euler integrals as is the case with internal energy. However, simply substituting the Gibbs-Duhem relation result for U into the definition of G gives a standard expression for G:[12]
\begin{align}
G &= T S - p V + \sum_i \mu_i N_i + p V - T S\\
&= \sum_i \mu_i N_i
\end{align}.
This result applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.[13]

Gibbs free energy of reactions

To derive the Gibbs free energy equation for an isolated system, let Stot be the total entropy of the isolated system, that is, a system that cannot exchange heat or mass with its surroundings. According to the second law of thermodynamics:
 \Delta S_{tot} \ge 0 \,
and if ΔStot = 0 then the process is reversible. The heat transfer Q vanishes for an adiabatic system. Any adiabatic process that is also reversible is called an isentropic  \left( {Q\over T} = \Delta S = 0 \right) \, process.
Now consider a system having internal entropy Sint. Such a system is thermally connected to its surroundings, which have entropy Sext. The entropy form of the second law applies only to the closed system formed by both the system and its surroundings. Therefore a process is possible only if
 \Delta S_{int} + \Delta S_{ext} \ge 0 \,.
If Q is the heat transferred to the system from the surroundings, then −Q is the heat lost by the surroundings, so that \Delta S_{ext} = - {Q \over T}, corresponds to the entropy change of the surroundings.
We now have:
 \Delta S_{int} - {Q \over T} \ge 0  \,
Multiplying both sides by T:
 T \Delta S_{int} - Q \ge 0 \,
Q is the heat transferred to the system; if the process is now assumed to be isobaric, then Qp = ΔH:
 T \Delta S_{int} - \Delta H \ge 0\,
ΔH is the enthalpy change of reaction (for a chemical reaction at constant pressure). Then:
 \Delta H - T \Delta S_{int} \le 0 \,
for a possible process. Let the change ΔG in Gibbs free energy be defined as
 \Delta G = \Delta H - T \Delta S_{int} \, (eq.1)
Notice that it is not defined in terms of any external state functions, such as ΔSext or ΔStot. Then the second law, which also tells us about the spontaneity of the reaction, becomes:
 \Delta G < 0 \, favoured reaction (Spontaneous)
 \Delta G = 0 \, Neither the forward nor the reverse reaction prevails (Equilibrium)
 \Delta G > 0 \, disfavoured reaction (Nonspontaneous)
Gibbs free energy G itself is defined as
 G = H - T S_{int} \, (eq.2)
but notice that to obtain equation (1) from equation (2) we must assume that T is constant. Thus, Gibbs free energy is most useful for thermochemical processes at constant temperature and pressure: both isothermal and isobaric. Such processes do not move on a P-V diagram; an example is the phase change of a pure substance, which takes place at the saturation pressure and temperature. Chemical reactions, however, do undergo changes in chemical potential, which is a state function. Thus, thermodynamic processes are not confined to the two-dimensional P-V diagram; there is a third dimension, n, the quantity of gas. For the study of explosive chemicals, the processes are not necessarily isothermal and isobaric; for these studies, Helmholtz free energy is used.
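As a numerical sketch of ΔG = ΔH − TΔS at constant temperature and pressure (Python), consider the melting of ice; the values ΔH_fus ≈ 6.01 kJ/mol and ΔS_fus ≈ 22 J/(mol K) are approximate textbook figures assumed for illustration, not data from this article.

# Sketch: spontaneity from delta-G = delta-H - T*delta-S, for melting of ice.
dH = 6010.0    # approximate enthalpy of fusion, J/mol (assumed)
dS = 22.0      # approximate entropy of fusion, J/(mol K) (assumed)

for T in (263.15, 273.15, 298.15):          # -10 C, 0 C, +25 C
    dG = dH - T * dS
    print(f"T = {T:6.2f} K: dG = {dG:7.1f} J/mol",
          "(spontaneous)" if dG < 0 else "(not spontaneous)")
# dG changes sign close to 273 K, the melting point, where dG ~ 0 (equilibrium).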

If an isolated system (Q = 0) is at constant pressure (Q = ΔH), then
 \Delta H = 0  \,
Therefore the Gibbs free energy of an isolated system is
 \Delta G = -T \Delta S \,
and if ΔG ≤ 0 then this implies that ΔS ≥ 0, back to where we started the derivation of ΔG.

Useful identities

\Delta G = \Delta H - T \Delta S \, (for constant temperature)
\Delta_r G^\circ = -R T \ln K \,
\Delta_r G = \Delta_r G^\circ + R T \ln Q_r \, (see Chemical equilibrium)
\Delta G = -nFE \,
\Delta G^\circ = -nFE^\circ \,
and rearranging gives
nFE^\circ = RT \ln K \,
nFE = nFE^\circ - R T \ln Q_r \, \,
E = E^\circ - \frac{R T}{n F} \ln Q_r \, \,
which relates the electrical potential of a reaction to its reaction quotient (the Nernst equation),
where n is the number of moles of electrons transferred, F is the Faraday constant, E is the cell potential (E° the standard cell potential), K is the equilibrium constant, and Qr is the reaction quotient.
Moreover, we also have:
K_{eq}=e^{- \frac{\Delta_r G^\circ}{RT}}
\Delta_r G^\circ = -RT(\ln K_{eq}) = -2.303\,RT(\log_{10} K_{eq})
which relates the equilibrium constant with Gibbs free energy.
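A minimal Python sketch of two of the identities above: the equilibrium constant from a standard reaction Gibbs energy, and the Nernst equation. The values of Δ_rG°, n, E°, and Q_r are hypothetical example inputs, not data from the text.

# Sketch: K from delta_rG0 = -RT ln K, and E from the Nernst equation.
import math

R, F, T = 8.314, 96485.0, 298.15      # J/(mol K), C/mol, K

dG0 = -20.0e3                         # standard reaction Gibbs energy, J/mol (assumed)
K = math.exp(-dG0 / (R * T))          # equilibrium constant
print(f"K = {K:.1f}")                 # about 3.2e3

n, E0, Q = 2, 1.10, 0.01              # electrons, standard potential (V), reaction quotient (assumed)
E = E0 - (R * T) / (n * F) * math.log(Q)   # Nernst equation
print(f"E = {E:.3f} V")               # about 1.16 V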

Gibbs free energy, the second law of thermodynamics, and metabolism

A particular chemical reaction is said to proceed spontaneously if the hypothetical total change in entropy of the universe due to that reaction is greater than or equal to zero. As discussed in the Overview, under certain assumptions Gibbs free energy can be thought of as a negative proxy for the change in total entropy of the universe (negative because the change in Gibbs free energy is negative when the change in total entropy of the universe is positive, and vice versa). Thus, a reaction with a positive Gibbs free energy change will not proceed spontaneously. However, in biological systems, energy inputs from other energy sources (including the sun and exothermic chemical reactions) are "coupled" with reactions that are not entropically favored (i.e. have a Gibbs free energy change above zero). Taken together, two (or more) coupled reactions always increase the total entropy of the universe. This coupling allows endergonic reactions, such as photosynthesis and DNA synthesis, to proceed without decreasing the total entropy of the universe. Thus biological systems do not violate the second law of thermodynamics.

Standard energy change of formation

The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, at their standard states (the most stable form of the element at 25 degrees Celsius and 101.3 kilopascals). Its symbol is ΔfG˚.

All elements in their standard states (oxygen gas, graphite, etc.) have 0 standard Gibbs free energy change of formation, as there is no change involved.
ΔrG = ΔrG˚ + RT ln Qr; Qr is the reaction quotient.
At equilibrium, ΔrG = 0 and Qr = K so the equation becomes ΔrG˚ = −RT ln K; K is the equilibrium constant.

Table of selected substances[14]

Substance   State   ΔfG˚ (kJ/mol)   ΔfG˚ (kcal/mol)
NO          g              87.6            20.9
NO2         g              51.3            12.3
N2O         g             103.7            24.78
H2O         g            -228.6           -54.64
H2O         l            -237.1           -56.67
CO2         g            -394.4           -94.26
CO          g            -137.2           -32.79
CH4         g             -50.5           -12.1
C2H6        g             -32.0            -7.65
C3H8        g             -23.4            -5.59
C6H6        g             129.7            31.00
C6H6        l             124.5            29.76
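The table above can be used together with ΔrG° = Σ ΔfG°(products) − Σ ΔfG°(reactants); a short Python sketch for the oxidation of carbon monoxide (the reaction is chosen here as an illustration, using only values from the table and the rule that elements in their standard state contribute zero):

# Sketch: standard reaction Gibbs energy for CO(g) + 1/2 O2(g) -> CO2(g),
# delta_rG0 = sum(products) - sum(reactants), with values from the table above.
dGf = {"CO2(g)": -394.4, "CO(g)": -137.2, "O2(g)": 0.0}   # kJ/mol

dG_rxn = dGf["CO2(g)"] - (dGf["CO(g)"] + 0.5 * dGf["O2(g)"])
print(f"delta_rG0 = {dG_rxn:.1f} kJ/mol")    # about -257 kJ/mol, strongly favoured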

Ontogeny recapitulates phylogeny?

Recapitulation theory

From Wikipedia, the free encyclopedia
   
The theory of recapitulation, also called the biogenetic law or embryological parallelism— often expressed in Ernst Haeckel's phrase as "ontogeny recapitulates phylogeny"—is a largely discredited biological hypothesis that in developing from embryo to adult, animals go through stages resembling or representing successive stages in the evolution of their remote ancestors. With different formulations, such ideas have been applied and extended to several fields and areas, including the origin of language, religion, biology, cognition and mental activities,[1] anthropology,[2] education theory[3] and developmental psychology.[4] While examples of embryonic stages show that molecular features of ancestral organisms exist, the theory of recapitulation itself has been viewed within the field of developmental biology as a historical side-note rather than as dogma.[5][6][7]

In contrast, outside biology there is no such consensus against the theory's validity. Recapitulation theory is still considered plausible and is applied by some researchers in fields such as the study of the origin of language,[8] cognitive development,[9] and behavioral development in animal species.[10]

Origins

The earliest recorded trace of a recapitulation theory is from the Egyptian Pharaoh Psamtik I (664 – 610 BCE), who used it as a hypothesis on the origin of language.[11][12] The concept of recapitulation was first formulated outside the field of biology. It was widely held among traditional theories of the origin of language (glottology), being assumed as a premise that children's use of language gives insights on its origin and evolution.[13]

The idea was reprised in 1720 by Giambattista Vico in his influential Scienza Nuova.[13][14][15] It was first formulated in biology in the 1790s among the German Natural philosophers,[16] after which, Marcel Danesi states, it soon gained the status of a supposed biogenetic law.[13]
The first formal formulation was proposed by Étienne Serres in 1824–26 as what became known as the "Meckel–Serres Law"; it attempted to provide a link between comparative embryology and a "pattern of unification" in the organic world. It was supported by Étienne Geoffroy Saint-Hilaire and became a prominent part of his ideas, which suggested that past transformations of life could have had environmental causes working on the embryo, rather than on the adult as in Lamarckism. These naturalistic ideas led to disagreements with Georges Cuvier. The theory was widely supported in the Edinburgh and London schools of higher anatomy around 1830, notably by Robert Edmond Grant, but was opposed by Karl Ernst von Baer's ideas of divergence, and attacked by Richard Owen in the 1830s.[17]

Haeckel

George Romanes's 1892 copy of Ernst Haeckel's controversial embryo drawings (this version of the figure is often attributed incorrectly to Haeckel).[18]

Ernst Haeckel attempted to synthesize the ideas of Lamarckism and Goethe's Naturphilosophie with Charles Darwin's concepts. Although Haeckel is often seen as rejecting Darwin's theory of branching evolution in favour of a more linear Lamarckian "biogenetic law" of progressive evolution, this is not accurate: Haeckel used the Lamarckian picture to describe the ontogenetic and phylogenetic history of individual species, but agreed with Darwin about the branching of all species from one, or a few, original ancestors.[19] Since early in the twentieth century, Haeckel's "biogenetic law" has been refuted on many fronts.[7]
Haeckel formulated his theory as "Ontogeny recapitulates phylogeny". The notion later became simply known as the recapitulation theory. Ontogeny is the growth (size change) and development (shape change) of an individual organism; phylogeny is the evolutionary history of a species. Haeckel claimed that the development of advanced species passes through stages represented by adult organisms of more primitive species.[7] Otherwise put, each successive stage in the development of an individual represents one of the adult forms that appeared in its evolutionary history.

For example, Haeckel proposed that the pharyngeal grooves between the pharyngeal arches in the neck of the human embryo not only roughly resembled gill slits of fish, but directly represented an adult "fishlike" developmental stage, signifying a fishlike ancestor. Embryonic pharyngeal slits, which form in many animals when the thin branchial plates separating pharyngeal pouches and pharyngeal grooves perforate, open the pharynx to the outside. Pharyngeal arches appear in all tetrapod embryos: in mammals, the first pharyngeal arch develops into the lower jaw (Meckel's cartilage), the malleus and the stapes. But these embryonic pharyngeal arches, grooves, pouches, and slits in human embryos can not at any stage carry out the same function as the gills of an adult fish.

Haeckel produced several embryo drawings that often overemphasized similarities between embryos of related species. The misinformation was propagated through many biology textbooks and persists in popular knowledge even today. Modern biology rejects the literal and universal form of Haeckel's theory.[20]
Haeckel's drawings were disputed by Wilhelm His, who had developed a rival theory of embryology.[21] His developed a "causal-mechanical theory" of human embryonic development.[22]
Darwin's view, that early embryonic stages are similar to the same embryonic stage of related species but not to the adult stages of these species, has been confirmed by modern evolutionary developmental biology[citation needed].

Modern status

The Haeckelian form of recapitulation theory is considered defunct.[23] However, embryos do undergo a period where their morphology is strongly shaped by their phylogenetic position, rather than selective pressures.[24]
"Embryos do reflect the course of evolution, but that course is far more intricate and quirky than Haeckel claimed. Different parts of the same embryo can even evolve in different directions. As a result, the Biogenetic Law was abandoned, and its fall freed scientists to appreciate the full range of embryonic changes that evolution can produce—an appreciation that has yielded spectacular results in recent years as scientists have discovered some of the specific genes that control development."[25]

Influence

Cognitive development

Although Haeckel's specific form of recapitulation theory is now discredited among biologists, the strong influence it had on social and educational theories of the late 19th century still resonates in the 21st century. Research in the late 20th century confirmed that "both biological evolution and the stages in the child’s cognitive development follow much the same progression of evolutionary stages as that suggested in the archaeological record."[9]

English philosopher Herbert Spencer was one of the most energetic promoters of evolutionary ideas to explain many phenomena. He compactly expressed the basis for a cultural recapitulation theory of education in the following claim, published in 1861, five years before Haeckel first published on the subject:[3]
If there be an order in which the human race has mastered its various kinds of knowledge, there will arise in every child an aptitude to acquire these kinds of knowledge in the same order.... Education is a repetition of civilization in little.[26]
— Herbert Spencer
G. Stanley Hall used Haeckel's theories as the basis for his theories of child development.
Developmental psychologist Jean Piaget favored a weaker version of the formula, according to which ontogeny parallels phylogeny because the two are subject to similar external constraints.[27]

The Austrian pioneer in psychoanalysis, Sigmund Freud, also favored Haeckel's doctrine. He was trained as a biologist under the influence of recapitulation theory at the time of its domination, and retained a Lamarckian outlook with justification from the recapitulation theory.[28] He also distinguished between physical and mental recapitulation, in which the differences would become an essential argument for his theory of neuroses.[28]

Art criticism

More recently, several art historians, most prominently musicologist Richard Taruskin, have applied the term "ontogeny becomes phylogeny" to the process of creating and recasting art history, often to assert a perspective or argument. For example, the peculiar development of the works by modernist composer Arnold Schoenberg (here an "ontogeny") is generalized in many histories into a "phylogeny" – a historical development ("evolution") of Western Music toward atonal styles of which Schoenberg is a representative. Such historiographies of the "collapse of traditional tonality" are faulted by art historians as asserting a rhetorical rather than historical point about tonality's "collapse".[29]

Taruskin also developed a variation of the motto into the pun "ontogeny recapitulates ontology" to refute the concept of "absolute music" advancing the socio-artistic theories of Carl Dahlhaus. Ontology is the investigation of what exactly something is, and Taruskin asserts that an art object becomes that which society and succeeding generations made of it. For example, composer Johann Sebastian Bach's St. John Passion, composed in the 1720s, was appropriated by the Nazi regime in the 1930s for propaganda. Taruskin claims the historical development of the Passion (its ontogeny) as a work with an anti-Semitic message does, in fact, inform the work's identity (its ontology), even though that was an unlikely concern of the composer. Music or even an abstract visual artwork cannot be truly autonomous ("absolute") because it is defined by its historical and social reception.[29]

Archie Bunker

From Wikipedia, the free encyclopedia
   
Bunker holding his grandson, Joey Stivic, 1975
First appearance"Meet the Bunkers"
(All in the Family)
Last appearance"I'm Torn Here"
(Archie Bunker's Place)
Created byNorman Lear
Based on Alf Garnett, a character created by Johnny Speight
Portrayed byCarroll O'Connor
Information
OccupationBlue-collar worker (loading dock foreman, janitor, and taxi driver)
Bar Owner (1977-)
FamilyDavid Bunker (father)
Sarah Bunker, née Longstreet (mother)
Michael Stivic (son-in-law)
Joey Stivic (grandson)
Alfred "Fred" Bunker (brother)
Philip Bunker (brother)
Alma Bunker (sister)
Linda Bunker (niece)
Barbara Lee "Billie" Bunker (niece)
Katherine Bunker (sister-in-law)
Oscar (cousin)
Lou (cousin)
Fred (cousin)
Spouse(s)Edith Bunker (1948-1980, her death[1])
ChildrenGloria Bunker Stivic (daughter)

Archibald "Archie" Bunker is a fictional New Yorker in the 1970s top-rated American television sitcom All in the Family and its spin-off Archie Bunker's Place, played to acclaim by Carroll O'Connor. Bunker, a principal character of the series, is a veteran of World War II, reactionary, conservative, blue-collar worker, and family man. The Bunker character was first seen by the American public when All in the Family premiered on January 12, 1971, where he was depicted as the head of a family. In 1979, the show was retooled and renamed Archie Bunker’s Place, finally going off the air in 1983. Bunker lived at the fictional address of 704 Hauser Street in the borough of Queens in New York City.

All in the Family got many of its laughs by playing on Archie's bigotry, although the dynamic tension between Archie and liberal Mike provided an ongoing political and social sounding board for a variety of topics. Archie appears in all but seven episodes of the series (three were missed because of a contractual dispute between Carroll O'Connor and Norman Lear in Season 5).

In 1999 TV Guide ranked him number 5 on its 50 Greatest TV Characters of All Time list.[2] In 2005, Archie Bunker was listed as number 1 on Bravo's 100 Greatest TV Characters,[3] defeating runners-up such as Ralph Kramden, Lucy Ricardo, Arthur Fonzarelli, and Homer Simpson.

Archie's armchair is in the permanent collection of the National Museum of American History.

Character traits

Famous for his gruff, ignorant, bigoted persona—blacks, Hispanics, "Communists," hippies, gays, Jews, Catholics, "women's libbers", and Polish-Americans are frequent targets of his barbs—Archie is in fact a complex character. Rather than being motivated by malice, he is portrayed as hardworking, a loving father and husband, and a basically decent man whose views are merely a product of the era and working-class environment in which he had been raised. Nevertheless, Archie is bad-tempered and frequently tells his long-suffering, scatter-brained wife Edith to "Stifle yourself" and "Dummy up". Series creator Norman Lear admitted that this is how his father treated Lear's mother.[4]

As the series progresses, Archie mellows somewhat, albeit often out of necessity. In one episode, he expresses revulsion for a Ku Klux Klan-like organization that he accidentally joins.[5] On another occasion, when asked to speak at the funeral of his friend Stretch Cunningham, Archie, surprised to learn that his friend was Jewish, overcomes his initial discomfort and delivers a moving eulogy, closing with a heartfelt "Shalom." Most crucially, in 1978 the character becomes the guardian of Edith's step-cousin Floyd's nine-year-old daughter, Stephanie (Danielle Brisebois), and comes to accept her Jewish faith, even buying her a Star of David pendant.[6]

Archie is also known for his frequent malapropisms and spoonerisms. For example, he refers to Edith's gynecologist as a "groinacologist", and to Catholic priests who go around sprinkling "incest" (incense) on their congregation. By the show's second season, these had been dubbed "Bunkerisms", "Archie Bunkerisms", or simply "Archie-isms".[7][8]

Bunker's own ethnicity is never explicitly stated, other than that he is a WASP. (Archie's character voice was created from a mix of accents Carroll O'Connor heard while studying acting in New York City.[citation needed]) Archie mocks the British for their accents and refers to England as a "fag country." He also refers to Germans as "Krauts", the Irish as "Micks", the Japanese as "Japs", Italians as "Dagos", the Chinese as "Chinks", Polish people as "Polacks", Hispanics or Latinos as "Spics", and Jewish people as "Hebes." He often uses the words "colored" or "spade" in reference to African-Americans.

Archie often misquotes the Bible. He takes pride in being religious, although he rarely attends church services and constantly mispronounces the name of his minister, Reverend Felcher, as "Reverend Fletcher."

The inspiration for Archie Bunker was Alf Garnett, the character from the BBC1 sitcom Till Death Us Do Part, on which All in the Family was based.[9]

Character biography

When first introduced on All in the Family in 1971, Archie is the head of a family consisting of his wife Edith (Jean Stapleton), his adult daughter Gloria (Sally Struthers), and his liberal son-in-law, college student Michael "Mike" Stivic (Rob Reiner), with whom Archie disagrees on virtually everything; he frequently characterizes Mike as a "dumb Polack" and usually addresses him as "Meathead" because, in Archie's words, he is "dead from the neck up". During the show's first five seasons, Mike and Gloria live with Archie and Edith so that Mike can put himself through college. They later move into their own home, though it turns out to be next door, allowing Archie and Mike to interact nearly as much as they did when they lived in the same house.

Archie was born on May 20, 1924,[10] to parents David and Sarah.[11] Information on his siblings is inconsistent: three are mentioned, but he is also stated to be an only child. Archie celebrates his 50th birthday in a 1974 episode, and a later series shows him to be still alive on April 4, 1983. He is a Taurus.[12]

While locked in the storeroom of Archie's Place with Mike in the episode "Two's a Crowd", a drunk Archie confides that his family was desperately poor when he was a child and that he was teased at school because he wore a shoe on one foot and a boot on the other, earning him the nickname "Shoe-Booty". In the same episode, Mike learns that Archie was mentally and physically abused by his father, who was the source of his bigoted views. Yet Archie then vehemently defends his father, who he claims loved him and taught him "right from wrong." The only clue to his father's occupation is a railroad watch that Archie receives from his formerly long-estranged brother, Alfred, or "Fred", played by Richard McKenzie, who appears in the episodes "Archie's Brother" and "The Return of Archie's Brother".

As revealed when Fred visits in the episode "Archie's Brother", the two had not seen each other in the 29 years since Archie and Edith's wedding, although they had apparently kept in touch by phone; their long estrangement stemmed from a petty argument rooted in a childhood sibling rivalry. Fred visits Archie for support because he is about to enter the hospital for a major operation, and the two seem to patch things up. On his return visit, however, Fred arrives with a beautiful 18-year-old wife named Katherine. A heated discussion erupts into an argument between the brothers over May–September romances, placing a new strain on their relationship, and Fred storms angrily out of the Bunker home with his teen bride.

Archie is a World War II veteran who was based in Foggia, Italy, for twenty-two months. During a doctor's visit it is stated that he had an undistinguished military record, having served in a non-combat ground role in the Air Corps, which at the time was a branch subordinate to the Army Air Forces. Archie often insisted that he was a member of the Air Corps. He received the Good Conduct Medal,[13] and in the episode "Archie's Civil Rights" it is disclosed that he also received the Purple Heart after being hit in the buttocks by shrapnel.

He married Edith Bunker 22 years before the first season. Later recollections of their mid-1940s courtship do not form a consistent timeline. In the flashback episode showing Mike and Gloria's wedding, Archie tells Mike that his courtship of Edith lasted two years, and hints that their relationship was not consummated until a month after their wedding night. Edith elsewhere recollects that Archie fell asleep on their wedding night, and blurts out that their sex life has not been very active in recent years. On another occasion, Edith reveals Archie's history of gambling addiction, which caused problems in the early years of their marriage. Archie also reveals that when Edith was in labor with Gloria, he took her to Bayside Hospital on the Q5 bus because "the subway don't run to Bayside."

According to Edith, Archie's resentment of Mike stemmed primarily from the fact that Mike was attending college, while Archie had been forced to drop out of high school during the Great Depression to help support his family. Archie does not take advantage of the GI Bill to further his education, although he does attend night school to earn a high school diploma in 1973. Archie is also revealed to have been an outstanding baseball player in his youth; his dream was to pitch for the New York Yankees. He had to give up this dream when he left high school to enter the workforce. His uncle got him a job on a loading dock after World War II, and by the 1970s he was a foreman.

A Protestant, Archie seldom attends church, despite professing strong Christian views. The original pilot mentions that in the 22 years Archie and Edith were married, Archie had only attended church seven times (including their wedding day), and that Archie had walked out of the sermon the most recent time, disgusted with the preacher's message (which he perceived as leftist). Archie's religiosity often translates into knee-jerk opposition to atheism or agnosticism (which Mike and Gloria variously espoused), Catholicism, and, until late in the series, Judaism.

Archie is a Republican[14] and an outspoken supporter of Richard Nixon, as well as an early (1976) supporter of Ronald Reagan, correctly predicting his election in 1980. During the Vietnam War, he dismisses peace protesters as unpatriotic and has little good to say about the Civil Rights Movement. Despite having an adversarial relationship with his black neighbors, the Jeffersons, he forms an unlikely friendship with their son Lionel, who performs various odd jobs for the Bunkers and tolerates Archie's patronizing racial views.

The later spinoff series 704 Hauser features a new, black family moving into Bunker's old home. The series is set in 1994, but does not indicate whether Bunker, who would be 70 by this time, is still alive. His grandson, Joey Stivic, appears briefly in the first episode of the series, but makes no statement one way or the other on this point.

Viewer reactions

Such was the name recognition and societal influence of the Bunker character that by 1972 commentators were discussing the "Archie Bunker vote" (i.e., the voting bloc of urban, white, working-class men) in that year's presidential election; the same year saw a parody election campaign, complete with T-shirts, campaign buttons, and bumper stickers, advocating "Archie Bunker for President." The character's imprint on American culture is such that his name was still being invoked in the media in 2008 to describe a segment of voters in that year's U.S. presidential election.[15][16]

Norman Lear originally intended for Bunker to be strongly disliked by audiences and was shocked when the character quietly became a beloved figure to much of middle America; Lear considered Bunker's opinions on race, sex, marriage, and religion so wrong as to amount to a parody of right-wing bigotry. Sammy Davis, Jr., who was both black and Jewish, genuinely liked the character. He felt that Bunker's bigotry was rooted in his rough, working-class life experiences, and that Bunker was honest and forthright in his opinions, open to changing his views if an individual treated him right. Davis in fact appeared in an episode of All in the Family.

Child abandonment

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Child_abandonment ...