Tuesday, July 8, 2025

Boltzmann equation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Boltzmann_equation
The place of the Boltzmann kinetic equation on the stairs of model reduction from microscopic dynamics to macroscopic continuum dynamics (illustration to the content of the book)

The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872. The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.

The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element d³r) centered at the position r, and has momentum nearly equal to a given momentum vector p (thus occupying a very small region of momentum space d³p), at an instant of time.

The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.

The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.

Overview

The phase space and density function

The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component px, py, pz. The entire space is 6-dimensional: a point in this space is (r, p) = (x, y, z, px, py, pz), and each coordinate is parameterized by time t. A relevant differential element is written

    d³r d³p = dx dy dz dpx dpy dpz

Since the probability of N molecules, which all have r and p within d³r d³p, is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: f(r, p, t), defined so that

    dN = f(r, p, t) d³r d³p

is the number of molecules which all have positions lying within a volume element d³r about r and momenta lying within a momentum space element d³p about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:

    N = ∫∫ f(r, p, t) d³r d³p,

which is a 6-fold integral. While f is associated with a number of particles, the phase space is that of a single particle (not of all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to rN, pN for particle N.
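
To make the definition concrete, here is a minimal numerical sketch of the integral above, assuming a spatially uniform gas with a Maxwell–Boltzmann momentum distribution; every parameter value (density, temperature, molecular mass, the 1 cm³ volume and the ±3σ momentum box) is an illustrative assumption, not a value from the article.

    import numpy as np

    # Illustrative parameters (assumptions): a nitrogen-like gas at room temperature
    kB = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0           # temperature, K
    m = 4.65e-26        # molecular mass, kg (roughly N2)
    n = 2.5e25          # number density, 1/m^3

    # Spatially uniform f(r, p, t) = n (2 pi m kB T)^(-3/2) exp(-|p|^2 / (2 m kB T))
    def f(px, py, pz):
        norm = n * (2 * np.pi * m * kB * T) ** -1.5
        return norm * np.exp(-(px**2 + py**2 + pz**2) / (2 * m * kB * T))

    # 6-fold integral: sum f over a momentum box, times a 1 cm^3 spatial volume
    p0 = np.sqrt(m * kB * T)                 # thermal momentum scale
    grid = np.linspace(-3 * p0, 3 * p0, 80)
    dp = grid[1] - grid[0]
    PX, PY, PZ = np.meshgrid(grid, grid, grid, indexing="ij")
    N = f(PX, PY, PZ).sum() * dp**3 * 1e-6   # d^3p sum times d^3r = 1e-6 m^3
    print(f"molecules in that phase-space region: {N:.3e}")  # close to n * 1e-6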

It is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each; see below.

Principal statement

The general equation can then be written as

    df/dt = (∂f/∂t)_force + (∂f/∂t)_diff + (∂f/∂t)_coll,

where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.

Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = mv.

The force and diffusion terms

Consider particles described by f, each experiencing an external force F not due to other particles (see the collision term for the latter treatment).

Suppose at time t some number of particles all have position r within element d³r and momentum p within d³p. If a force F instantly acts on each particle, then at time t + Δt their position will be r + Δr = r + (p/m)Δt and momentum p + Δp = p + FΔt. Then, in the absence of collisions, f must satisfy

    f(r + (p/m)Δt, p + FΔt, t + Δt) d³r d³p = f(r, p, t) d³r d³p

Note that we have used the fact that the phase space volume element d³r d³p is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume changes, so

    f(r + (p/m)Δt, p + FΔt, t + Δt) d³r d³p = f(r, p, t) d³r d³p + Δf d³r d³p     (1)

where Δf is the total change in f. Dividing (1) by d³r d³p Δt and taking the limits Δt → 0 and Δf → 0, we have

    df/dt = (∂f/∂t)_coll     (2)

The total differential of f is:

    df = (∂f/∂t) dt + (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz + (∂f/∂px) dpx + (∂f/∂py) dpy + (∂f/∂pz) dpz
       = (∂f/∂t) dt + ∇f · dr + ∇_p f · dp     (3)

where ∇ is the gradient operator, · is the dot product, ∇_p = êx ∂/∂px + êy ∂/∂py + êz ∂/∂pz is a shorthand for the momentum analogue of ∇, and êx, êy, êz are Cartesian unit vectors.

Final statement

Dividing (3) by dt and substituting into (2) gives:

    ∂f/∂t + (p/m) · ∇f + F · ∇_p f = (∂f/∂t)_coll

In this context, F(r, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.

This equation is more useful than the principal one above, yet still incomplete, since f cannot be solved for unless the collision term is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
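
In the collisionless, force-free case the equation reduces to free streaming, ∂f/∂t + (p/m) · ∇f = 0, which is simple enough to solve numerically. The following is a minimal one-dimensional sketch, a semi-Lagrangian update on a periodic x-grid; the grid sizes, time step and Gaussian initial condition are illustrative assumptions.

    import numpy as np

    # Free streaming: df/dt + (p/m) df/dx = 0 on periodic x in [0, 1).
    # Exact characteristic solution: f(x, p, t + dt) = f(x - (p/m) dt, p, t).
    m = 1.0
    nx, np_ = 128, 64
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    p = np.linspace(-2.0, 2.0, np_)

    # Illustrative initial condition: density bump, Maxwellian-like in momentum
    f = np.exp(-(x[:, None] - 0.5) ** 2 / 0.01) * np.exp(-p[None, :] ** 2)

    dt = 0.005
    for _ in range(100):
        for j, pj in enumerate(p):
            xsrc = (x - pj / m * dt) % 1.0                     # trace characteristics back
            f[:, j] = np.interp(xsrc, x, f[:, j], period=1.0)  # periodic interpolation

    density = f.sum(axis=1) * (p[1] - p[0])  # n(x) = integral of f over p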

The collision term (Stosszahlansatz) and molecular chaos

Two-body collision term

A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:

    (∂f/∂t)_coll = ∫∫ g I(g, Ω) [f(r, p′A, t) f(r, p′B, t) − f(r, pA, t) f(r, pB, t)] dΩ d³pB

where pA and pB are the momenta of any two particles (labeled as A and B for convenience) before a collision, p′A and p′B are the momenta after the collision, g = |pB − pA| is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turns through an angle θ into the element of the solid angle dΩ, due to the collision.

Simplifications to the collision term

Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:

    ∂f/∂t + (p/m) · ∇f + F · ∇_p f = ν (f0 − f),

where ν is the molecular collision frequency, and f0 is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called the "relaxation time approximation".
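
A minimal sketch of this relaxation dynamics, for a spatially homogeneous gas (so the streaming and force terms drop out and df/dt = ν(f0 − f) remains); the velocity grid, collision frequency and the initial "box" distribution are illustrative assumptions.

    import numpy as np

    # Homogeneous BGK relaxation: df/dt = nu * (f0 - f), with f0 the Maxwellian
    # sharing the density, mean velocity and temperature of the current f.
    nu = 1.0                                         # collision frequency (assumed)
    v = np.linspace(-5.0, 5.0, 201)
    dv = v[1] - v[0]
    f = np.where(np.abs(v - 1.0) < 0.5, 1.0, 0.0)    # far-from-equilibrium start

    def local_maxwellian(f):
        # Match the first three velocity moments of f (units with kB/m = 1)
        n = f.sum() * dv
        u = (f * v).sum() * dv / n
        T = (f * (v - u) ** 2).sum() * dv / n
        return n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

    dt = 0.01
    for _ in range(500):
        f += dt * nu * (local_maxwellian(f) - f)
    # f has relaxed toward the Maxwellian with the initial density, momentum, energy

Because f0 is rebuilt from the current moments at every step, density, momentum and energy are conserved by the relaxation, which is exactly what the BGK term is designed to do.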

General equation (for a mixture)

For a mixture of chemical species labelled by indices i = 1, 2, 3, ..., n the equation for species i is

    ∂fi/∂t + (pi/mi) · ∇fi + F · ∇_pi fi = (∂fi/∂t)_coll,

where fi = fi(r, pi, t), and the collision term is

    (∂fi/∂t)_coll = ∑_{j=1}^{n} ∫∫ gij Iij(gij, Ω) [f′i f′j − fi fj] dΩ d³pj,

where f′ = f′(p′i, t), the magnitude of the relative momenta is

    gij = |pi − pj| = |p′i − p′j|,

and Iij is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element.

Applications and extensions

Conservation equations

The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a fluid consisting of only one kind of particle, the number density n is given by

    n = ∫ f d³p.

The average value of any function A is

    ⟨A⟩ = (1/n) ∫ A f d³p.

Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus x ↦ xi and p ↦ pi = mwi, where wi is the particle velocity vector. Define A(pi) as some function of momentum pi only, whose total value is conserved in a collision. Assume also that the force Fi is a function of position only, and that f is zero for pi → ±∞. Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as

    ∂(n⟨A⟩)/∂t + ∂(n⟨(pj/m) A⟩)/∂xj − nFj ⟨∂A/∂pj⟩ = ∫ A (∂f/∂t)_coll d³p,

where the last term is zero, since A is conserved in a collision. The values of A correspond to moments of velocity wi (and momentum pi, as they are linearly dependent).
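
A minimal numerical sketch of taking such moments, on a one-dimensional velocity grid with a drifting Maxwellian as the distribution; the grid and the values n = 2, V = 0.5, T = 1.2 are illustrative assumptions (units with kB = 1).

    import numpy as np

    m = 1.0
    w = np.linspace(-8.0, 8.0, 400)   # velocity grid
    dw = w[1] - w[0]

    # Illustrative distribution: drifting Maxwellian with n = 2, V = 0.5, T = 1.2
    n0, V0, T0 = 2.0, 0.5, 1.2
    f = n0 / np.sqrt(2 * np.pi * T0 / m) * np.exp(-m * (w - V0) ** 2 / (2 * T0))

    n = f.sum() * dw                       # zeroth moment: number density
    rho = m * n                            # mass density
    V = (f * w).sum() * dw / n             # first moment: average fluid velocity
    c = w - V                              # peculiar (thermal) velocity
    P = m * (f * c**2).sum() * dw          # second moment: pressure
    u = 0.5 * m * (f * c**2).sum() * dw    # kinetic thermal energy density
    q = 0.5 * m * (f * c**3).sum() * dw    # heat flux (zero for a Maxwellian)
    print(n, V, P, q)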

Zeroth moment

Letting A = m, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:

    ∂ρ/∂t + ∂(ρVj)/∂xj = 0,

where ρ = mn is the mass density, and Vi = ⟨wi⟩ is the average fluid velocity.

First moment

Letting A = pi = mwi, the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:

    ∂(ρVi)/∂t + ∂(ρViVj + Pij)/∂xj − nFi = 0,

where Pij = ρ⟨(wi − Vi)(wj − Vj)⟩ is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).

Second moment

Letting A = (m/2)wiwi, the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:

    ∂(u + ½ρViVi)/∂t + ∂((u + ½ρViVi)Vj + Jqj + PijVi)/∂xj − nFjVj = 0,

where u = ½ρ⟨(wi − Vi)(wi − Vi)⟩ is the kinetic thermal energy density, and Jqi = ½ρ⟨(wi − Vi)(wk − Vk)(wk − Vk)⟩ is the heat flux vector.

Hamiltonian mechanics

In Hamiltonian mechanics, the Boltzmann equation is often written more generally as

    L̂[f] = C[f],

where L̂ is the Liouville operator (there is an inconsistent definition between the Liouville operator as defined here and the one in the article linked) describing the evolution of a phase space volume and C is the collision operator. The non-relativistic form of L̂ is

    L̂_NR = ∂/∂t + (p/m) · ∇ + F · ∇_p.

Quantum theory and violation of particle number conservation

It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density f. However, for a wide class of applications a well-defined generalization of f exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.

General relativity and astronomy

The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.

Its generalization in general relativity is

    L̂_GR = p^α ∂/∂x^α − Γ^α_βγ p^β p^γ ∂/∂p^α,

where Γ^α_βγ is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant (x^i, p_i) phase space as opposed to fully contravariant (x^i, p^i) phase space.

In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically, studies of processes in the early universe often attempt to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase space distribution f that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis.

Solving the equation

Exact solutions to the Boltzmann equations have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.

Instead, numerical methods (including finite elements and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity; the result is, to leading order, identical to the semiclassical result.
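
To give a flavour of the lattice Boltzmann approach, here is a minimal sketch of a D2Q9 BGK lattice Boltzmann scheme on a doubly periodic grid, initialized with a small shear wave that decays viscously; the grid size, relaxation time τ and wave amplitude are illustrative assumptions.

    import numpy as np

    # D2Q9 lattice: discrete velocities c_a and weights w_a
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    nx = ny = 64
    tau = 0.8                      # BGK relaxation time; viscosity = (tau - 0.5)/3

    def equilibrium(rho, ux, uy):
        feq = np.empty((9, ny, nx))
        usq = ux**2 + uy**2
        for a in range(9):
            cu = c[a, 0]*ux + c[a, 1]*uy
            feq[a] = w[a] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        return feq

    # Initial state: unit density plus a small shear wave ux(y)
    Y = np.arange(ny, dtype=float)[:, None] + np.zeros((1, nx))
    rho = np.ones((ny, nx))
    ux, uy = 0.05*np.sin(2*np.pi*Y/ny), np.zeros((ny, nx))
    f = equilibrium(rho, ux, uy)

    for _ in range(200):
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau            # collide (BGK)
        for a in range(9):                                   # stream
            f[a] = np.roll(f[a], shift=(c[a, 1], c[a, 0]), axis=(0, 1))
    # ux now holds the viscously decayed shear wave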

Close to local equilibrium, solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of developing mathematically the limiting processes, which lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua, is an important part of Hilbert's sixth problem.

Limitations and further uses of the Boltzmann equation

The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without a finite size. There exists a generalization of the Boltzmann equation called the Enskog equation, in which the collision term is modified so that the particles have a finite size, for example by modelling them as spheres of fixed radius.

No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.

Many real fluids, like liquids or dense gases, exhibit, besides the features mentioned above, more complex forms of collisions: there will be not only binary but also ternary and higher-order collisions. These must be treated using the BBGKY hierarchy.

Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.

Absolute zero

From Wikipedia, the free encyclopedia
Zero kelvin (−273.15 °C) is defined as absolute zero.

Absolute zero is the lowest possible temperature, a state at which a system's internal energy, and in ideal cases entropy, reach their minimum values. It is defined as 0 K on the Kelvin scale, equivalent to −273.15 °C on the Celsius scale and −459.67 °F on the Fahrenheit scale. The Kelvin and Rankine temperature scales set their zero points at absolute zero by design. This limit can be estimated by extrapolating the ideal gas law to the temperature at which the volume or pressure of a classical gas becomes zero.
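
A minimal sketch of the scale relationships just quoted, as plain conversion functions (the formulas K − 273.15 and K × 9/5 − 459.67 are standard; the printed values match the definitions above).

    def kelvin_to_celsius(t_k: float) -> float:
        return t_k - 273.15

    def kelvin_to_fahrenheit(t_k: float) -> float:
        return t_k * 9.0 / 5.0 - 459.67

    print(kelvin_to_celsius(0.0))      # -273.15 (absolute zero in Celsius)
    print(kelvin_to_fahrenheit(0.0))   # -459.67 (absolute zero in Fahrenheit)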

At absolute zero, there is no thermal motion. However, due to quantum effects, the particles still exhibit minimal motion mandated by the Heisenberg uncertainty principle and, for a system of fermions, the Pauli exclusion principle. Even if absolute zero could be achieved, this residual quantum motion would persist.

Although absolute zero can be approached, it cannot be reached. Some isentropic processes, such as adiabatic expansion, can lower the system's temperature without relying on a colder medium. Nevertheless, the third law of thermodynamics implies that no physical process can reach absolute zero in a finite number of steps. As a system nears this limit, further reductions in temperature become increasingly difficult, regardless of the cooling method used. In the 21st century, scientists have achieved temperatures below 100 picokelvin (pK). At low temperatures, matter displays exotic quantum phenomena such as superconductivity, superfluidity, and Bose–Einstein condensation.

Ideal gas laws

Pressure–temperature plots for three different gas samples, measured at constant volume, all extrapolate to zero at the same point, the absolute zero.

For an ideal gas, the pressure at constant volume decreases linearly with temperature, and the volume at constant pressure also decreases linearly with temperature. When these relationships are expressed using the Celsius scale, both pressure and volume extrapolate to zero at approximately −273.15 °C. This implies the existence of a lower bound on temperature, beyond which the gas would have negative pressure or volume—an unphysical result.

To resolve this, the concept of absolute temperature is introduced, with 0 kelvin defined as the point at which pressure or volume would vanish in an ideal gas. This temperature corresponds to −273.15 °C and is referred to as absolute zero. The ideal gas law is therefore formulated in terms of absolute temperature to remain consistent with observed gas behavior and physical limits.
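
A minimal sketch of the extrapolation described above: fit a straight line to constant-volume pressure–temperature data and solve for the temperature at which the pressure vanishes. The data points are synthetic, generated from ideal-gas behaviour purely for illustration.

    import numpy as np

    # Synthetic constant-volume measurements: P proportional to (T_c + 273.15)
    T_c = np.array([0.0, 25.0, 50.0, 75.0, 100.0])     # temperatures, deg C
    P = 101325.0 * (T_c + 273.15) / 273.15             # pressures, Pa

    slope, intercept = np.polyfit(T_c, P, 1)           # linear fit P = a*T + b
    T_zero = -intercept / slope                        # temperature where P -> 0
    print(f"extrapolated absolute zero: {T_zero:.2f} deg C")   # about -273.15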

Absolute temperature scales

Absolute temperature is conventionally measured on the Kelvin scale (using Celsius-scaled increments) and, more rarely, on the Rankine scale (using Fahrenheit-scaled increments). Absolute temperature measurement is uniquely determined up to a multiplicative constant which specifies the size of the degree, so the ratio of two absolute temperatures, T2/T1, is the same on all scales.

Absolute temperature also emerges naturally in statistical mechanics. In the Maxwell–Boltzmann, Fermi–Dirac, and Bose–Einstein distributions, absolute temperature appears in the exponential factor that determines how particles populate energy states. Specifically, the relative number of particles at a given energy E depends exponentially on E/kT, where k is the Boltzmann constant and T is the absolute temperature.
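
A minimal sketch of this occupation factor; the energy gap and temperatures are illustrative assumptions, chosen only to show how sharply the exponential suppresses high-energy states as T falls.

    import numpy as np

    kB = 1.380649e-23       # Boltzmann constant, J/K
    eV = 1.602176634e-19    # electronvolt, J

    def boltzmann_factor(delta_E: float, T: float) -> float:
        """Relative population of a state delta_E above another: exp(-E/kT)."""
        return np.exp(-delta_E / (kB * T))

    # Illustrative: a 0.1 eV energy gap at room temperature and at 30 K
    print(boltzmann_factor(0.1 * eV, 300.0))   # about 2e-2
    print(boltzmann_factor(0.1 * eV, 30.0))    # about 2e-17, essentially empty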

Unattainability of absolute zero

Left: absolute zero could be reached in a finite number of steps if S(0, X1) ≠ S(0, X2). Right: an infinite number of steps is needed, since S(0, X1) = S(0, X2). Here, X is some controllable parameter of the system, such as its volume or pressure.

The third law of thermodynamics concerns the behavior of entropy as temperature approaches absolute zero. It states that the entropy of a system approaches a constant minimum at 0 K. For a perfect crystal, this minimum is taken to be zero, since the system would be in a state of perfect order with only one microstate available. In some systems, there may be more than one microstate at minimum energy and there is some residual entropy at 0 K.

Several other formulations of the third law exist. The Nernst heat theorem holds that the change in entropy for any constant-temperature process tends to zero as the temperature approaches zero. A key consequence is that absolute zero cannot be reached, since removing heat becomes increasingly inefficient and entropy changes vanish. This unattainability principle means no physical process can cool a system to absolute zero in a finite number of steps or finite time.

Thermal properties at low temperatures

Using the Debye model, the specific heat and entropy of a pure crystal are proportional to T³, while the enthalpy and chemical potential are proportional to T⁴ (Guggenheim, p. 111). These quantities drop toward their T = 0 limiting values, which they approach with zero slope. For the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals. Likewise for the coefficient of thermal expansion. Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated.
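
A minimal sketch of the Debye T³ law in its standard low-temperature limiting form, C ≈ (12π⁴/5) R (T/θ_D)³ per mole; the copper-like Debye temperature is an illustrative assumption.

    import numpy as np

    R = 8.31446            # molar gas constant, J/(mol K)

    def debye_heat_capacity_low_T(T: float, theta_D: float) -> float:
        """Molar specific heat in the T << theta_D limit, J/(mol K)."""
        return (12 * np.pi**4 / 5) * R * (T / theta_D) ** 3

    theta_D = 343.0        # assumed Debye temperature (roughly copper), K
    for T in (1.0, 2.0, 5.0, 10.0):
        print(T, debye_heat_capacity_low_T(T, theta_D))   # scales as T**3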

One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas. The electrons, being fermions, must be in different quantum states, which leads the electrons to get very high typical velocities, even at absolute zero. The maximum energy that electrons can have at absolute zero is called the Fermi energy. The Fermi temperature is defined as this maximum energy divided by the Boltzmann constant, and is on the order of 80,000 K for typical electron densities found in metals. For temperatures significantly below the Fermi temperature, the electrons behave in almost the same way as at absolute zero. This explains the failure of the classical equipartition theorem for metals that eluded classical physicists in the late 19th century.
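
A minimal sketch of the free-electron estimate behind those numbers, using the standard relations E_F = (ħ²/2mₑ)(3π²n)^(2/3) and T_F = E_F/kB; the copper-like electron density is an illustrative assumption.

    import numpy as np

    hbar = 1.054571817e-34   # reduced Planck constant, J s
    m_e = 9.1093837015e-31   # electron mass, kg
    kB = 1.380649e-23        # Boltzmann constant, J/K

    def fermi_temperature(n: float) -> float:
        """Fermi temperature of a free-electron gas with number density n (1/m^3)."""
        E_F = hbar**2 / (2 * m_e) * (3 * np.pi**2 * n) ** (2 / 3)
        return E_F / kB

    print(fermi_temperature(8.5e28))   # copper-like density: roughly 8e4 K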

Gibbs free energy

Since the relation between changes in Gibbs free energy (G), the enthalpy (H) and the entropy (S) is

    ΔG = ΔH − TΔS,

thus, as T decreases, ΔG and ΔH approach each other (so long as ΔS is bounded). Experimentally, it is found that all spontaneous processes (including chemical reactions) result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction. However, this is not required; endothermic reactions can proceed spontaneously if the TΔS term is large enough.
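
A minimal sketch evaluating ΔG = ΔH − TΔS across temperatures, showing both that ΔG → ΔH as T → 0 and that an endothermic process can become spontaneous once TΔS outweighs ΔH; the numbers are illustrative assumptions.

    # Illustrative endothermic process: dH > 0 with a positive entropy change
    dH = 50_000.0   # J/mol (assumed)
    dS = 200.0      # J/(mol K) (assumed)

    for T in (0.1, 100.0, 250.0, 300.0):
        dG = dH - T * dS
        print(f"T = {T:6.1f} K   dG = {dG:9.1f} J/mol   spontaneous: {dG < 0}")
    # As T -> 0, dG -> dH; above T = dH/dS = 250 K the process becomes spontaneous.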

Moreover, the slopes of the derivatives of ΔG and ΔH converge and are equal to zero at T = 0. This ensures that ΔG and ΔH are nearly the same over a considerable range of temperatures and justifies the approximate empirical Principle of Thomsen and Berthelot, which states that the equilibrium state to which a system proceeds is the one that evolves the greatest amount of heat, i.e., an actual process is the most exothermic one (Callen, pp. 186–187).

Zero-point energy

Probability densities and energies (indicated by an offset) of the four lowest energy eigenstates of a quantum harmonic oscillator. ZPE denotes the zero-point energy.

Even at absolute zero, a quantum system retains a minimum amount of energy due to the Heisenberg uncertainty principle, which prevents particles from having both perfectly defined position and momentum. This residual energy is known as zero-point energy. In the case of the quantum harmonic oscillator, a standard model for vibrations in atoms and molecules, the uncertainty in a particle's momentum implies it must retain some kinetic energy, while the uncertainty in its position contributes to potential energy. As a result, such a system has a nonzero energy at absolute zero.
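
A minimal sketch of the quantum harmonic oscillator energy ladder, E_n = ħω(n + ½), whose n = 0 value is the zero-point energy mentioned above; the vibrational frequency is an illustrative assumption.

    hbar = 1.054571817e-34   # reduced Planck constant, J s

    def oscillator_energy(n: int, omega: float) -> float:
        """Energy of level n of a quantum harmonic oscillator: hbar*omega*(n + 1/2)."""
        return hbar * omega * (n + 0.5)

    omega = 5.0e13           # assumed angular frequency, rad/s (molecular scale)
    print(oscillator_energy(0, omega))   # zero-point energy: nonzero even at T = 0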

Zero-point energy helps explain certain physical phenomena. For example, liquid helium does not solidify at normal pressure, even at temperatures near absolute zero. The large zero-point motion of helium atoms, caused by their low mass and weak interatomic forces, prevents them from settling into a solid structure. Only under high pressure does helium solidify, as the atoms are forced closer together and the interatomic forces grow stronger.

History

Robert Boyle pioneered the idea of an absolute zero.

One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 New Experiments and Observations touching Cold articulated the dispute known as the primum frigidum. The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, "There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality."

Limit to the "degree of cold"

The question of whether there is a limit to the degree of coldness possible, and, if so, where the zero must be placed, was first addressed by the French physicist Guillaume Amontons in 1703, in connection with his improvements in the air thermometer. His instrument indicated temperatures by the height at which a certain mass of air sustained a column of mercury—the pressure, or "spring", of the air varying with temperature. Amontons therefore argued that the zero of his thermometer would be that temperature at which the spring of the air was reduced to nothing. He used a scale that marked the boiling point of water at +73 and the melting point of ice at +51½, so that the zero was equivalent to about −240 on the Celsius scale. Amontons held that the absolute zero cannot be reached, so never attempted to compute it explicitly. The value of −240 °C, or "431 divisions [in Fahrenheit's thermometer] below the cold of freezing water", was published by George Martine in 1740.

This close approximation to the modern value of −273.15 °C for the zero of the air thermometer was further improved upon in 1779 by Johann Heinrich Lambert, who observed that −270 °C (−454.00 °F; 3.15 K) might be regarded as absolute cold.

Values of this order for the absolute zero were not, however, universally accepted about this period. Pierre-Simon Laplace and Antoine Lavoisier, in their 1780 treatise on heat, arrived at values ranging from 1,500 to 3,000 below the freezing point of water, and thought that in any case it must be at least 600 below. John Dalton in his Chemical Philosophy gave ten calculations of this value, and finally adopted −3,000 °C as the natural zero of temperature.

Charles's law

From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted linearly in volume (Charles's law) by about 1/273 part per degree Celsius of temperature change, between 0 °C and 100 °C. This suggested that the volume of a gas cooled to about −273 °C would reach zero.

Lord Kelvin's work

After James Prescott Joule had determined the mechanical equivalent of heat, Lord Kelvin approached the question from an entirely different point of view, and in 1848 devised a scale of absolute temperature that was independent of the properties of any particular substance and was based on Carnot's theory of the Motive Power of Heat and data published by Henri Victor Regnault. It followed from the principles on which this scale was constructed that its zero was placed at −273 °C, at almost precisely the same point as the zero of the air thermometer, where the air volume would reach "nothing". This value was not immediately accepted; values ranging from −271.1 °C (−455.98 °F) to −274.5 °C (−462.10 °F), derived from laboratory measurements and observations of astronomical refraction, remained in use in the early 20th century.

The race to absolute zero

Commemorative plaque in Leiden

With a better theoretical understanding of absolute zero, scientists were eager to reach this temperature in the lab. By 1845, Michael Faraday had managed to liquefy most gases then known to exist, and he set a new low-temperature record by reaching −130 °C (−202 °F; 143 K). Faraday believed that certain gases, such as oxygen, nitrogen, and hydrogen, were permanent gases and could not be liquefied. Decades later, in 1873, the Dutch theoretical scientist Johannes Diderik van der Waals demonstrated that these gases could be liquefied, but only under conditions of very high pressure and very low temperature. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air at −195 °C (−319.0 °F; 78.1 K). This was followed in 1883 by the production of liquid oxygen at −218 °C (−360.4 °F; 55.1 K) by the Polish professors Zygmunt Wróblewski and Karol Olszewski.

Scottish chemist and physicist James Dewar and Dutch physicist Heike Kamerlingh Onnes took on the challenge of liquefying the remaining gases, hydrogen and helium. In 1898, after 20 years of effort, Dewar was the first to liquefy hydrogen, reaching a new low-temperature record of −252 °C (−421.6 °F; 21.1 K). However, Kamerlingh Onnes, his rival, was the first to liquefy helium, in 1908, using several precooling stages and the Hampson–Linde cycle. He lowered the temperature to the boiling point of helium, −269 °C (−452.20 °F; 4.15 K). By reducing the pressure of the liquid helium, he achieved an even lower temperature, near 1.5 K. These were the coldest temperatures achieved on Earth at the time, and his achievement earned him the Nobel Prize in 1913. Kamerlingh Onnes would continue to study the properties of materials at temperatures near absolute zero, describing superconductivity and superfluidity for the first time.

Negative temperatures

Temperatures below zero on the Celsius or Fahrenheit scales are simply colder than the zero points of those scales. In contrast, certain isolated systems can achieve negative thermodynamic temperatures (in kelvins), which are not colder than absolute zero, but paradoxically hotter than any positive temperature. If a negative-temperature system and a positive-temperature system come in contact, heat flows from the negative to the positive-temperature system.

Negative temperatures can only occur in systems that have an upper limit to the energy they can contain. In these cases, adding energy can decrease entropy, reversing the usual relationship between energy and temperature. This leads to a negative thermodynamic temperature. However, such conditions only arise in specialized, quasi-equilibrium systems such as collections of spins in a magnetic field. In contrast, ordinary systems with translational or vibrational motion have no upper energy limit, so their temperatures are always positive.
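
A minimal sketch of why an energy ceiling produces negative temperature, using a two-level spin ensemble where 1/T = dS/dE; the system size and energy gap are illustrative assumptions (units with kB = 1).

    import numpy as np

    # N two-level spins with gap eps: entropy S(E) peaks at half filling,
    # so dS/dE (= 1/T) changes sign once more than half the spins are excited.
    N, eps = 1000, 1.0

    def entropy(n_excited: int) -> float:
        """S = ln(multiplicity) via Stirling's approximation, in units of kB."""
        p = n_excited / N
        if p in (0.0, 1.0):
            return 0.0
        return -N * (p * np.log(p) + (1 - p) * np.log(1 - p))

    for n in (100, 400, 500, 600, 900):
        dS_dE = (entropy(n + 1) - entropy(n - 1)) / (2 * eps)  # numerical 1/T
        T = np.inf if abs(dS_dE) < 1e-12 else 1.0 / dS_dE
        print(f"excited fraction {n/N:.1f}: T = {T:.3g}")
    # Below half filling T > 0; at half filling T diverges; above it T < 0,
    # and such a state is "hotter" than any positive temperature.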

Very low temperatures

The rapid expansion of gases leaving the Boomerang Nebula, a bi-polar, filamentary, likely proto-planetary nebula in Centaurus, has a temperature of 1 K, the lowest observed outside of a laboratory.
Velocity-distribution data of a gas of rubidium atoms at a temperature within a few billionths of a degree above absolute zero. Left: just before the appearance of a Bose–Einstein condensate. Center: just after the appearance of the condensate. Right: after further evaporation, leaving a sample of nearly pure condensate.

The average temperature of the universe today is approximately 2.73 K (−270.42 °C; −454.76 °F), based on measurements of cosmic microwave background radiation. Standard models of the future expansion of the universe predict that the average temperature of the universe is decreasing over time. This temperature is calculated as the mean density of energy in space; it should not be confused with the mean electron temperature (total energy divided by particle count) which has increased over time.

Absolute zero cannot be achieved, although it is possible to reach temperatures close to it through the use of evaporative cooling, cryocoolers, dilution refrigerators, and nuclear adiabatic demagnetization. The use of laser cooling has produced temperatures of less than a billionth of a kelvin. At very low temperatures in the vicinity of absolute zero, matter exhibits many unusual properties, including superconductivity, superfluidity, and Bose–Einstein condensation. To study such phenomena, scientists have worked to obtain even lower temperatures.

  • In November 2000, nuclear spin temperatures below 100 picokelvin were reported for an experiment at the Helsinki University of Technology's Low Temperature Lab in Espoo, Finland. However, this was the temperature of one particular degree of freedom—a quantum property called nuclear spin—not the overall average thermodynamic temperature for all possible degrees of freedom.
  • In February 2003, the Boomerang Nebula was observed to have been releasing gases at a speed of 500,000 km/h (310,000 mph) for the last 1,500 years. This has cooled it down to approximately 1 K, as deduced by astronomical observation, which is the lowest natural temperature ever recorded.
  • In November 2003, 90377 Sedna was discovered; it is one of the coldest known objects in the Solar System, with an average surface temperature of −240 °C (33 K; −400 °F), due to its extremely distant orbit of 903 astronomical units.
  • In May 2005, the European Space Agency proposed research in space to achieve femtokelvin temperatures.
  • In May 2006, the Institute of Quantum Optics at the University of Hannover gave details of technologies and benefits of femtokelvin research in space.
  • In January 2013, physicist Ulrich Schneider of the University of Munich in Germany reported achieving temperatures formally below absolute zero ("negative temperature") in gases. The gas is artificially forced out of equilibrium into a high-potential-energy state, which is, however, cold. When it then emits radiation it approaches equilibrium, and can continue emitting despite reaching formal absolute zero; thus, the temperature is formally negative.
  • In September 2014, scientists in the CUORE collaboration at the Laboratori Nazionali del Gran Sasso in Italy cooled a copper vessel with a volume of one cubic meter to 0.006 K (−273.144 °C; −459.659 °F) for 15 days, setting a record for the lowest temperature in the known universe over such a large contiguous volume.
  • In June 2015, experimental physicists at MIT cooled molecules in a gas of sodium potassium to a temperature of 500 nanokelvin, and it is expected to exhibit an exotic state of matter by cooling these molecules somewhat further.
  • In 2017, the Cold Atom Laboratory (CAL), an experimental instrument, was developed for launch to the International Space Station (ISS) in 2018. The instrument has created extremely cold conditions in the microgravity environment of the ISS, leading to the formation of Bose–Einstein condensates. In this space-based laboratory, temperatures as low as 1 picokelvin are projected to be achievable, which could further the exploration of unknown quantum mechanical phenomena and test some of the most fundamental laws of physics.
  • The current world record for effective temperatures was set in 2021 at 38 picokelvin through matter-wave lensing of rubidium Bose–Einstein condensates.

Psychoanalytic theory

From Wikipedia, the free encyclopedia

Psychoanalytic theory is the theory of the innate structure of the human soul and the dynamics of personality development relating to the practice of psychoanalysis, a method of research and of treating mental disorders (psychopathology). First laid out by Sigmund Freud in the late 19th century (see The Interpretation of Dreams), the theory and practice of psychoanalysis were developed by him until his death in 1939. Since then, it has been further refined and divided into various sub-areas, but independently of this, Freud's structural distinction of the soul into three functionally interlocking instances has been largely retained.

Psychoanalysis with its theoretical core came to full prominence in the last third of the twentieth century, as part of the flow of critical discourse regarding psychological treatments in the 1970s. Freud himself had ceased his physiological research of neural brain organisation in 1906 (cf. history), shifting his focus to psychology and the treatment of mental health issues by using free associations and the phenomenon of transference. Psychoanalysis is based on the distinction between unconscious and conscious processes, and emphasizes the recognition of childhood events that influence the mental functioning of adults. Freud's consideration of human evolutionary history (genetics), and then of individual psychological development in cultural contexts, gave psychoanalytic theory its characteristics.

Definition

Psychoanalytic and psychoanalytical are both used in English. The latter is the older term, and at first simply meant 'relating to the analysis of the human psyche.' But with the emergence of psychoanalysis as a distinct clinical practice, both terms came to describe that practice. Although both are still used, today the usual adjective is psychoanalytic.

Psychoanalysis is defined in the Oxford English Dictionary as

A therapeutic method, originated by Sigmund Freud, for treating mental disorders by investigating the interaction of conscious and unconscious elements in the patient's mind and bringing repressed fears and conflicts into the conscious mind, using techniques such as dream interpretation and free association. Also: a system of psychological theory associated with this method.

The beginnings

Freud began his studies on psychoanalysis in collaboration with Dr. Josef Breuer, most notably in relation to the case study of Anna O. Anna O. was subject to a number of psychosomatic disturbances, such as not being able to drink out of fear. Breuer and Freud found that hypnosis was a great help in discovering more about Anna O. and her treatment. Freud frequently referred to the study on Anna O. in his lectures on the origin and development of psychoanalysis.

Observations in the Anna O. case led Freud to theorize that the problems faced by hysterical patients could be associated with painful childhood experiences that could not be recalled. The influence of these lost memories shaped the feelings, thoughts, and behaviors of patients. These studies contributed to the development of the psychoanalytic theory.

The unconscious

In psychoanalytic theory, the unconscious mind consists of ideas and drives that have been subject to the mechanism of repression: anxiety-producing impulses in childhood are barred from consciousness, but do not cease to exist, and exert a constant pressure in the direction of consciousness. However, the content of the unconscious is only knowable to consciousness through its representation in a disguised or distorted form, by way of dreams and neurotic symptoms, as well as in slips of the tongue and jokes. The psychoanalyst seeks to interpret these conscious manifestations in order to understand the nature of the repressed. In psychoanalytic terms, the unconscious does not include all that is not conscious, but rather that which is actively repressed from conscious thought. Freud viewed the unconscious as a repository for socially unacceptable ideas, anxiety-producing wishes or desires, traumatic memories, and painful emotions put out of consciousness by the mechanism of repression. Such unconscious mental processes can only be recognized through analysis of their effects in consciousness. Unconscious thoughts are not directly accessible to ordinary introspection, but they are capable of partially evading the censorship mechanism of repression in a disguised form, manifesting, for example, as dream elements or neurotic symptoms. Dreams and symptoms are supposed to be capable of being "interpreted" during psychoanalysis, with the help of methods such as free association, dream analysis, and analysis of verbal slips.

Personality structure

In Freud's model the psyche consists of three different elements: the id, the ego, and the superego. The id is the aspect of personality that is driven by internal and basic drives and needs, such as hunger, thirst, and the drive for sex, or libido. The id acts in accordance with the pleasure principle. Due to the instinctual quality of the id, it is impulsive and unaware of the implications of actions. The superego is driven by the morality principle. It enforces the morality of social thought and action on an intrapsychic level. It employs morality, judging wrong and right and using guilt to discourage socially unacceptable behavior. The ego is driven by the reality principle. The ego seeks to balance the conflicting aims of the id and superego, by trying to satisfy the id's drives in ways that are compatible with reality. The ego is how we view ourselves: it is what we refer to as 'I' (Freud's word is the German ich, which simply means 'I').

Defense mechanisms

The ego balances demands of the id, the superego, and of reality to maintain a healthy state of consciousness, where there is only minimal intrapsychic conflict. It thus reacts to protect the individual from stressors and from anxiety by distorting internal or external reality to a lesser or greater extent. This prevents threatening unconscious thoughts and material from entering the consciousness. The ten different defence mechanisms initially enumerated by Anna Freud are: repression, regression, reaction formation, isolation of affect, undoing, projection, introjection, turning against the self, reversal into the opposite, and sublimation. In the same work, however, she details other manoeuvres such as identification with the aggressor and intellectualisation that would later come to be considered defence mechanisms in their own right. Furthermore, this list has been greatly expanded upon by other psychoanalysts, with some authors claiming to enumerate in excess of one hundred defence mechanisms.

Psychology theories

Psychosexual development

Psychosexual development is Freud's take on the development of the personality (psyche): a stage theory which holds that progress occurs through stages as the libido is directed to different body parts. The stages, listed in order of progression, are Oral, Anal, Phallic (Oedipus complex), Latency, and Genital. The Genital stage is achieved if people meet all their needs throughout the other stages with enough available sexual energy. Individuals who do not meet their needs in a given stage become fixated, or "stuck", in that stage.

Neo-analytic theory

Freud's theory and work with psychosexual development led to the Neo-Analytic/Neo-Freudians, who also believed in the importance of the unconscious, dream interpretation, defense mechanisms, and the integral influence of childhood experiences, but who had objections to the theory as well. They do not support the idea that personality development stops at age 6; instead, they believe development spreads across the lifespan. They extended Freud's work and encompassed more influence from the environment and the importance of conscious thought alongside the unconscious. The most important theorists are Erik Erikson (psychosocial development), Anna Freud, Carl Jung, Alfred Adler and Karen Horney, along with the school of object relations. Erikson's psychosocial development theory is based on eight stages of development: trust vs. mistrust, autonomy vs. shame, initiative vs. guilt, industry vs. inferiority, identity vs. confusion, intimacy vs. isolation, generativity vs. stagnation, and integrity vs. despair. These are important to psychoanalytic theory because they describe the different stages that people go through in life. Each stage has a major impact on life outcomes, since people face conflicts at each stage, and whichever route they take will have certain outcomes.

Criticisms

Some claim that the theory is lacking in empirical data and too focused on pathology. Other criticisms are that the theory lacks consideration of culture and its influence on personality.

Psychoanalytic theory comes from Freud and is focused on childhood. This can be an issue, since many believe that studies of children are inconclusive. One major concern is whether an observed personality trait will persist throughout life or whether the child will shed it later.

Application to the arts and humanities

Psychoanalytic theory is a major influence in Continental philosophy and in aesthetics in particular. Freud is sometimes considered a philosopher. The psychoanalyst Jacques Lacan, and the philosophers Michel Foucault, and Jacques Derrida, have written extensively on how psychoanalysis informs philosophical analysis. Other philosophers such as Alain Badiou and Rafael Holmberg have argued that the meaning of psychoanalysis for philosophy was not immediately clear, but that they have come to reciprocally define each other.

When analyzing literary texts, the psychoanalytic theory is sometimes used (often specifically with regard to the motives of the author and the characters) to reveal purported concealed meanings or to purportedly better understand the author's intentions.

Anti-war movement

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Anti-war_...