Wednesday, July 9, 2025

Computational fluid dynamics

Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.

CFD is applied to a range of research and engineering problems in multiple fields of study and industries, including aerodynamics and aerospace analysis, hypersonics, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, engine and combustion analysis, and visual effects for film and games.

Background and history

A computer simulation of high velocity air flow around the Space Shuttle during re-entry
A simulation of the Hyper-X scramjet vehicle in operation at Mach-7

The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define a number of single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.

Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil were developed in the 1930s.

Among the earliest calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that they used finite differences and divided physical space into cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book.

The available computer power paced the development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method, the fluid-in-cell method, the vorticity–stream function method, and the marker-and-cell method. Fromm's vorticity–stream function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world.

The first paper describing a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher-order codes, using higher-order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower-order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of a number of submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high-speed trains and racing yachts. The NASA PMARC code derives from an early version of VSAERO, and a derivative of PMARC, named CMARC, is also commercially available.

In the two-dimensional realm, a number of Panel Codes have been developed for airfoil analysis and design. The codes typically have a boundary layer analysis included, so that viscous effects can be modeled. Richard Eppler developed the PROFILE code, partly with NASA funding, which became available in the early 1980s. This was soon followed by Mark Drela's XFOIL code. Both PROFILE and XFOIL incorporate two-dimensional panel codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses a conformal transformation method for inverse airfoil design, while XFOIL has both a conformal transformation and an inverse panel method for airfoil design.

An intermediate step between Panel Codes and Full Potential codes was provided by codes that used the Transonic Small Disturbance equations. In particular, the three-dimensional WIBCO code, developed by Charlie Boppe of Grumman Aircraft in the early 1980s, has seen heavy use.

A simulation of the SpaceX Starship during re-entry

Developers turned to Full Potential codes, as panel methods could not calculate the non-linear flow present at transonic speeds. The first description of a means of using the Full Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970. Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were widely used, the most important being named Program H. A further growth of Program H was developed by Bob Melnik and his group at Grumman Aerospace as Grumfoil. Antony Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with David Caughey to develop the important three-dimensional Full Potential code FLO22 in 1975. A number of Full Potential codes emerged after this, culminating in Boeing's Tranair (A633) code, which still sees heavy use.

The next step was the Euler equations, which promised to provide more accurate solutions of transonic flows. The methodology used by Jameson in his three-dimensional FLO57 code (1981) was used by others to produce such programs as Lockheed's TEAM program and IAI/Analytical Methods' MGAERO program. MGAERO is unique in being a structured cartesian mesh code, while most other such codes use structured body-fitted grids (with the exception of NASA's highly successful CART3D code, Lockheed's SPLITFLOW code and Georgia Tech's NASCART-GT). Antony Jameson also developed the three-dimensional AIRPLANE code which made use of unstructured tetrahedral grids.

In the two-dimensional realm, Mark Drela and Michael Giles, then graduate students at MIT, developed the ISES Euler program (actually a suite of programs) for airfoil design and analysis. This code first became available in 1986 and has been further developed to design, analyze and optimize single or multi-element airfoils, as the MSES program. MSES sees wide use throughout the world. A derivative of MSES, for the design and analysis of airfoils in a cascade, is MISES, developed by Harold Youngren while he was a graduate student at MIT.

The Navier–Stokes equations were the ultimate target of development. Two-dimensional codes, such as NASA Ames' ARC2D code first emerged. A number of three-dimensional codes were developed (ARC3D, OVERFLOW, CFL3D are three successful NASA contributions), leading to numerous commercial packages.

Recently, CFD methods have gained traction for modeling the flow behavior of granular materials in various chemical engineering processes. This approach has emerged as a cost-effective alternative, offering a nuanced understanding of complex flow phenomena while minimizing the expenses associated with traditional experimental methods.

Hierarchy of fluid flow equations

CFD can be seen as a group of computational methodologies (discussed below) used to solve equations governing fluid flow. In the application of CFD, a critical step is to decide which set of physical assumptions and related equations need to be used for the problem at hand. To illustrate this step, the following summarizes the physical assumptions and simplifications made in the equations of a flow that is single-phase (see multiphase flow and two-phase flow), single-species (i.e., it consists of one chemical species), non-reacting, and (unless said otherwise) compressible. Thermal radiation is neglected, and body forces due to gravity are considered (unless said otherwise). The discussion below highlights the resulting hierarchy of flow equations solved with CFD. Note that some of the following equations could be derived in more than one way.

  • Conservation laws (CL): These are the most fundamental equations considered with CFD in the sense that, for example, all the following equations can be derived from them. For a single-phase, single-species, compressible flow one considers the conservation of mass, conservation of linear momentum, and conservation of energy.
  • Continuum conservation laws (CCL): Start with the CL. Assume that mass, momentum and energy are locally conserved: These quantities are conserved and cannot "teleport" from one place to another but can only move by a continuous flow (see continuity equation). Another interpretation is that one starts with the CL and assumes a continuum medium (see continuum mechanics). The resulting system of equations is unclosed since to solve it one needs further relationships/equations: (a) constitutive relationships for the viscous stress tensor; (b) constitutive relationships for the diffusive heat flux; (c) an equation of state (EOS), such as the ideal gas law; and, (d) a caloric equation of state relating temperature with quantities such as enthalpy or internal energy.
  • Compressible Navier-Stokes equations (C-NS): Start with the CCL. Assume a Newtonian viscous stress tensor (see Newtonian fluid) and a Fourier heat flux (see heat flux). The C-NS need to be augmented with an EOS and a caloric EOS to have a closed system of equations.
  • Incompressible Navier-Stokes equations (I-NS): Start with the C-NS. Assume that density is always and everywhere constant. Another way to obtain the I-NS is to assume that the Mach number is very small and that temperature differences in the fluid are very small as well. As a result, the mass-conservation and momentum-conservation equations are decoupled from the energy-conservation equation, so one only needs to solve for the first two equations.
  • Compressible Euler equations (EE): Start with the C-NS. Assume a frictionless flow with no diffusive heat flux.
  • Weakly compressible Navier-Stokes equations (WC-NS): Start with the C-NS. Assume that density variations depend only on temperature and not on pressure. For example, for an ideal gas, use $\rho = p_0/(R T)$, where $p_0$ is a conveniently defined reference pressure that is always and everywhere constant, $\rho$ is the density, $R$ is the specific gas constant, and $T$ is the temperature (a short numerical sketch of this relation appears after this list). As a result, the WC-NS do not capture acoustic waves. It is also common in the WC-NS to neglect the pressure-work and viscous-heating terms in the energy-conservation equation. The WC-NS are also called the C-NS with the low-Mach-number approximation.
  • Boussinesq equations: Start with the C-NS. Assume that density variations are always and everywhere negligible except in the gravity term of the momentum-conservation equation (where density multiplies the gravitational acceleration). Also assume that various fluid properties such as viscosity, thermal conductivity, and heat capacity are always and everywhere constant. The Boussinesq equations are widely used in microscale meteorology.
  • Compressible Reynolds-averaged Navier–Stokes equations and compressible Favre-averaged Navier-Stokes equations (C-RANS and C-FANS): Start with the C-NS. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = \bar{f} + f''$, where $\bar{f}$ is the ensemble-average of any flow variable, and $f''$ is a perturbation or fluctuation from this average. $f''$ is not necessarily small. If $\bar{f}$ is a classic ensemble-average (see Reynolds decomposition) one obtains the Reynolds-averaged Navier–Stokes equations. And if $\bar{f}$ is a density-weighted ensemble-average one obtains the Favre-averaged Navier-Stokes equations. As a result, and depending on the Reynolds number, the range of scales of motion is greatly reduced, something which leads to much faster solutions in comparison to solving the C-NS. However, information is lost, and the resulting system of equations requires the closure of various unclosed terms, notably the Reynolds stress.
  • Ideal flow or potential flow equations: Start with the EE. Assume zero fluid-particle rotation (zero vorticity) and zero flow expansion (zero divergence). The resulting flowfield is entirely determined by the geometrical boundaries. Ideal flows can be useful in modern CFD to initialize simulations.
  • Linearized compressible Euler equations (LEE): Start with the EE. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = f_0 + f'$, where $f_0$ is the value of the flow variable at some reference or base state, and $f'$ is a perturbation or fluctuation from this state. Furthermore, assume that this perturbation is very small in comparison with some reference value. Finally, assume that $f_0$ satisfies "its own" equation, such as the EE. The LEE and its multiple variations are widely used in computational aeroacoustics.
  • Sound wave or acoustic wave equation: Start with the LEE. Neglect all gradients of the base-state variables $\rho_0$ and $\mathbf{v}_0$, and assume that the Mach number at the reference or base state is very small. The resulting equations for density, momentum and energy can be manipulated into a pressure equation, giving the well-known sound wave equation.
  • Shallow water equations (SW): Consider a flow near a wall where the wall-parallel length-scale of interest is much larger than the wall-normal length-scale of interest. Start with the EE. Assume that density is always and everywhere constant, neglect the velocity component perpendicular to the wall, and consider the velocity parallel to the wall to be spatially-constant.
  • Boundary layer equations (BL): Start with the C-NS (I-NS) for compressible (incompressible) boundary layers. Assume that there are thin regions next to walls where spatial gradients perpendicular to the wall are much larger than those parallel to the wall.
  • Bernoulli equation: Start with the EE. Assume that density variations depend only on pressure variations. See Bernoulli's Principle.
  • Steady Bernoulli equation: Start with the Bernoulli Equation and assume a steady flow. Or start with the EE and assume that the flow is steady and integrate the resulting equation along a streamline.
  • Stokes Flow or creeping flow equations: Start with the C-NS or I-NS. Neglect the inertia of the flow. Such an assumption can be justified when the Reynolds number is very low. The resulting set of equations is linear, which greatly simplifies their solution.
  • Two-dimensional channel flow equation: Consider the flow between two infinite parallel plates. Start with the C-NS. Assume that the flow is steady, two-dimensional, and fully developed (i.e., the velocity profile does not change along the streamwise direction). Note that this widely used, fully developed assumption can be inadequate in some instances, such as some compressible, microchannel flows, in which case it can be supplanted by a locally fully developed assumption.
  • One-dimensional Euler equations or one-dimensional gas-dynamic equations (1D-EE): Start with the EE. Assume that all flow quantities depend only on one spatial dimension.
  • Fanno flow equation: Consider the flow inside a duct with constant area and adiabatic walls. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the momentum-conservation equation an empirical term to recover the effect of wall friction (neglected in the EE). To close the Fanno flow equation, a model for this friction term is needed. Such a closure involves problem-dependent assumptions.
  • Rayleigh flow equation. Consider the flow inside a duct with constant area and either non-adiabatic walls without volumetric heat sources or adiabatic walls with volumetric heat sources. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the energy-conservation equation an empirical term to recover the effect of wall heat transfer or the effect of the heat sources (neglected in the EE).
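
As a small illustration of the weakly compressible (WC-NS) item above, the sketch below evaluates the ideal-gas relation $\rho = p_0/(R T)$ at a fixed reference pressure. The numerical property values are ordinary air constants chosen for the example; they are not taken from the text.

```python
# Minimal sketch (assumed values, not from the text): density of air under the
# weakly compressible assumption, where density varies with temperature only at a
# fixed, constant reference pressure p0.

R_AIR = 287.05   # specific gas constant of air [J/(kg K)]
P0 = 101325.0    # constant reference pressure [Pa]

def wc_density(temperature_k: float) -> float:
    """Density from the ideal-gas law rho = p0 / (R * T)."""
    return P0 / (R_AIR * temperature_k)

if __name__ == "__main__":
    for T in (273.15, 293.15, 373.15):
        print(f"T = {T:7.2f} K  ->  rho = {wc_density(T):.4f} kg/m^3")
```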

Methodology

In all of these approaches the same basic procedure is followed; a minimal sketch of these steps appears after the list below.

  • During preprocessing
    • The geometry and physical bounds of the problem can be defined using computer aided design (CAD). From there, data can be suitably processed (cleaned-up) and the fluid volume (or fluid domain) is extracted.
    • The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform, structured or unstructured, consisting of a combination of hexahedral, tetrahedral, prismatic, pyramidal or polyhedral elements.
    • The physical modeling is defined – for example, the equations of fluid motion + enthalpy + radiation + species conservation
    • Boundary conditions are defined. This involves specifying the fluid behaviour and properties at all bounding surfaces of the fluid domain. For transient problems, the initial conditions are also defined.
  • The simulation is started and the equations are solved iteratively as a steady-state or transient.
  • Finally a postprocessor is used for the analysis and visualization of the resulting solution.
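
The steps above can be summarized in code form. The toy problem below (steady heat conduction on a uniform 2D grid) stands in for the flow equations purely to make the preprocess/solve/postprocess structure concrete; it is a schematic sketch, not a CFD solver.

```python
# Schematic of the workflow listed above, using a toy steady-conduction problem.
import numpy as np

# --- Preprocessing: define the domain and divide it into discrete cells (the mesh) ---
nx, ny = 50, 50                      # uniform structured mesh
T = np.zeros((ny, nx))               # one unknown per cell

# --- Boundary conditions on all bounding surfaces (the "physics" of this toy case) ---
T[0, :] = 1.0                        # hot wall
T[-1, :] = 0.0; T[:, 0] = 0.0; T[:, -1] = 0.0

# --- Solve: iterate the discretized equations toward a steady state ---
for _ in range(2000):                # Jacobi-style sweeps over the interior cells
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2])

# --- Postprocessing: analyze/visualize the resulting solution ---
print("field min/max:", float(T.min()), float(T.max()))
```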

Discretization methods

The stability of the selected discretisation is generally established numerically rather than analytically as with simple linear problems. Special care must also be taken to ensure that the discretisation handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.

Some of the discretization methods being used are:

Finite volume method

The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion).

In the finite volume method, the governing partial differential equations (typically the Navier-Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume equation yields governing equations in the form

$$\frac{\partial}{\partial t} \iiint Q \, dV + \oiint F \, dA = 0,$$

where $Q$ is the vector of conserved variables, $F$ is the vector of fluxes (see Euler equations or Navier–Stokes equations), $V$ is the volume of the control volume element, and $A$ is the surface area of the control volume element.
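
A minimal illustration of the finite volume idea, assuming a 1D linear advection problem with periodic boundaries and a first-order upwind flux (this example is mine, not from any code named in the text): each face flux is added to one cell and subtracted from its neighbour, so the total of the conserved variable is preserved exactly.

```python
# Finite-volume sketch: 1D conservation law dq/dt + d(a*q)/dx = 0, upwind fluxes,
# periodic boundaries. Cell averages are updated only through face fluxes.
import numpy as np

a = 1.0                                   # advection speed (a > 0)
nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
q = np.exp(-200.0 * (x - 0.3) ** 2)       # initial cell averages
dt = 0.4 * dx / a                         # CFL-limited time step

for _ in range(int(0.3 / dt)):
    flux = a * q                          # upwind flux leaving each cell through its right face
    q += dt / dx * (np.roll(flux, 1) - flux)   # inflow from the left face minus outflow

print("total of q (conserved):", q.sum() * dx)
```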

Finite element method

The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. FEM also provides more accurate solutions for smooth problems compared to FVM. Another advantage of FEM is that it can handle complex geometries and boundary conditions. However, FEM can require more memory and has slower solution times than the FVM.

In this method, a weighted residual equation is formed:

$$R_i = \iiint W_i \, Q \, dV^{e},$$

where $R_i$ is the equation residual at an element vertex $i$, $Q$ is the conservation equation expressed on an element basis, $W_i$ is the weight factor, and $V^{e}$ is the volume of the element.
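
To make the weighted residual idea concrete, here is a generic one-dimensional Galerkin finite element sketch for a model diffusion problem (not a flow solver, and not tied to any particular code): the weights are taken equal to the basis functions, and element contributions are assembled into a global linear system.

```python
# Galerkin FEM sketch: solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 using
# piecewise-linear elements; assemble K u = b element by element.
import numpy as np

ne = 20                                          # number of elements
h = 1.0 / ne
n = ne + 1                                       # number of nodes
K = np.zeros((n, n))
b = np.zeros(n)

ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h    # element stiffness matrix
fe = np.array([0.5, 0.5]) * h                    # element load vector for f = 1

for e in range(ne):                              # assembly loop over elements
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    b[idx] += fe

K[0, :] = K[-1, :] = 0.0                         # homogeneous Dirichlet conditions
K[0, 0] = K[-1, -1] = 1.0
b[0] = b[-1] = 0.0

u = np.linalg.solve(K, b)
print("max u (exact value 1/8):", u.max())
```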

Finite difference method

The finite difference method (FDM) has historical importance and is simple to program. It is currently only used in a few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid).

In conservative form, the governing equations are written as

$$\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} = 0,$$

where $Q$ is the vector of conserved variables, and $F$, $G$, and $H$ are the fluxes in the $x$, $y$, and $z$ directions respectively.
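
A small finite-difference illustration (a generic model problem of my own choosing, not one of the codes named above): central differences in space and forward Euler in time for the 1D heat equation.

```python
# Finite-difference sketch: du/dt = alpha * d2u/dx2 on a uniform grid,
# explicit time stepping with the usual stability restriction on dt.
import numpy as np

alpha = 1.0
nx = 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha            # stable: dt <= dx^2 / (2 * alpha)
u = np.zeros(nx)
u[nx // 2] = 1.0 / dx                 # narrow initial "hot spot"

for _ in range(500):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                # fixed-value (Dirichlet) boundaries

print("peak value after diffusion:", u.max())
```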

Spectral element method

Spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary - they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh; this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low-order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form $v(x,y) = ax + by + cxy + d$. In a spectral element method, however, the interpolating and test functions are chosen to be polynomials of a very high order (typically e.g. of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is large. Thus, high-order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations. At present there are some academic CFD codes based on the spectral element method, and more are under development, as new time-stepping schemes arise in the scientific world.
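
Since the paragraph above emphasizes high-order Gauss quadrature, the following short sketch shows why so few points suffice: an $n$-point Gauss-Legendre rule integrates polynomials up to degree $2n-1$ exactly. The example uses NumPy's standard leggauss routine; the test integrands are arbitrary.

```python
# Gauss-Legendre quadrature sketch: a 10-point rule is exact for polynomials of
# degree up to 19 on [-1, 1].
import numpy as np

def gauss_integrate(f, deg=10):
    """Integrate f over [-1, 1] with a 'deg'-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(deg)
    return float(np.sum(w * f(x)))

print(gauss_integrate(lambda x: x ** 8))   # exact value 2/9 ≈ 0.2222
print(gauss_integrate(np.cos))             # ≈ 2*sin(1) ≈ 1.6829
```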

Lattice Boltzmann method

The lattice Boltzmann method (LBM), with its simplified kinetic picture on a lattice, provides a computationally efficient description of hydrodynamics. Unlike the traditional CFD methods, which solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid as consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice mesh. In this method, one works with a version of the kinetic evolution equation, discrete in space and time, in the Boltzmann Bhatnagar-Gross-Krook (BGK) form.
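
The propagation-and-collision cycle can be illustrated with a deliberately minimal example. The sketch below is a one-dimensional, two-velocity (D1Q2) BGK model for pure diffusion, chosen for brevity; it is a toy of my own construction rather than a hydrodynamic lattice Boltzmann code.

```python
# Minimal D1Q2 BGK lattice Boltzmann sketch for diffusion: populations stream along
# two lattice directions and relax toward a local equilibrium each step.
import numpy as np

nx = 200
tau = 0.8                                  # relaxation time; D = (tau - 0.5) in lattice units
w = np.array([0.5, 0.5])                   # weights for lattice velocities +1 and -1

rho = np.zeros(nx)
rho[nx // 2] = 1.0                         # initial concentration spike
f = w[:, None] * rho[None, :]              # start the populations at equilibrium

for _ in range(1000):
    feq = w[:, None] * rho[None, :]        # local equilibrium (diffusion only)
    f += -(f - feq) / tau                  # BGK collision
    f[0] = np.roll(f[0], 1)                # stream the +1 population to the right
    f[1] = np.roll(f[1], -1)               # stream the -1 population to the left
    rho = f.sum(axis=0)                    # macroscopic density from the populations

print("mass conserved:", bool(np.isclose(rho.sum(), 1.0)))
```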

Vortex method

The vortex method, also known as the Lagrangian vortex particle method, is a meshfree technique for the simulation of incompressible turbulent flows. In it, vorticity is discretized onto Lagrangian particles, these computational elements being called vortices, vortons, or vortex particles. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require a means for rapidly computing velocities from the vortex elements – in other words, they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). The breakthrough came in the 1980s with the development of the Barnes-Hut and fast multipole method (FMM) algorithms. These paved the way to practical computation of the velocities from the vortex elements.
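
The core operation that Barnes-Hut and the FMM accelerate is the direct N-body velocity summation. The sketch below shows the O(N²) version for 2D vortex blobs with a small smoothing radius; the particle positions, circulations and the smoothing parameter are made-up illustration values.

```python
# Direct Biot-Savart summation for 2D vortex particles (the O(N^2) kernel that fast
# tree/multipole methods accelerate). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
pos = rng.uniform(-1.0, 1.0, size=(n, 2))     # particle positions
gamma = rng.normal(0.0, 1.0, size=n)          # particle circulations
delta = 0.05                                  # smoothing (blob) radius

def induced_velocity(pos, gamma, delta):
    """Velocity induced at every particle by all particles (self-term vanishes)."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx * dx + dy * dy + delta * delta    # smoothed squared distance
    k = gamma[None, :] / (2.0 * np.pi * r2)
    u = -(k * dy).sum(axis=1)                 # x-velocity contribution of all vortices
    v = (k * dx).sum(axis=1)                  # y-velocity contribution of all vortices
    return np.stack([u, v], axis=1)

vel = induced_velocity(pos, gamma, delta)
print("velocity of the first particle:", vel[0])
```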

Software based on the vortex method offers a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of the problem geometry and setting of boundary and initial conditions. Among the significant advantages of this modern technology:

  • It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES.
  • All problems are treated identically. No modeling or calibration inputs are required.
  • Time-series simulations, which are crucial for correct analysis of acoustics, are possible.
  • The small scale and large scale are accurately simulated at the same time.

Boundary element method

In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.

High-resolution discretization schemes

High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
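
A flux limiter can be illustrated with a few lines. The sketch below applies the minmod limiter to slope reconstruction of 1D cell averages; it is a generic illustration of the TVD idea, not a particular scheme from the text.

```python
# Minmod-limited slope reconstruction: the limited slope falls to zero near jumps,
# which is what suppresses spurious oscillations in high-resolution schemes.
import numpy as np

def minmod(a, b):
    """Smaller-magnitude slope if a and b agree in sign, otherwise zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(q, dx):
    """Minmod-limited slope in each interior cell of an array of cell averages."""
    left = (q[1:-1] - q[:-2]) / dx        # backward difference
    right = (q[2:] - q[1:-1]) / dx        # forward difference
    return minmod(left, right)

q = np.array([0.0, 0.5, 1.0, 2.0, 2.0, 2.0])   # smooth ramp followed by a plateau
print(limited_slopes(q, dx=1.0))               # non-zero in the ramp, zero at the kink
```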

Turbulence models

In computational modeling of turbulent flows, one common objective is to obtain a model that can predict quantities of interest, such as fluid velocity, for use in engineering designs of the system being modeled. For turbulent flows, the range of length scales and complexity of phenomena involved in turbulence make most modeling approaches prohibitively expensive; the resolution required to resolve all scales involved in turbulence is beyond what is computationally possible. The primary approach in such cases is to create numerical models to approximate unresolved phenomena. This section lists some commonly used computational models for turbulent flows.

Turbulence models can be classified based on computational expense, which corresponds to the range of scales that are modeled versus resolved (the more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost). If a majority or all of the turbulent scales are modeled rather than resolved, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.

In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.

Reynolds-averaged Navier–Stokes

External aerodynamics of the DrivAer model, computed using URANS (top) and DDES (bottom)
A simulation of the aerodynamic package of a Porsche Cayman (987.2)

Reynolds-averaged Navier–Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second-order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.

RANS models can be divided into two broad approaches:

Boussinesq hypothesis
This method involves using an algebraic equation for the Reynolds stresses which includes determining the turbulent viscosity and, depending on the level of sophistication of the model, solving transport equations for determining the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), Mixing Length Model (Prandtl), and Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the k-ε model is a "Two Equation" model because two transport equations (one for $k$ and one for $\varepsilon$) are solved. A short sketch of the algebraic eddy-viscosity relation used in this approach is given after these descriptions.
Reynolds stress model (RSM)
This approach attempts to actually solve transport equations for the Reynolds stresses. This means the introduction of several transport equations for all the Reynolds stresses, and hence this approach is much more costly in CPU effort.
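
As mentioned in the Boussinesq-hypothesis item above, the Reynolds stresses are closed through a turbulent (eddy) viscosity computed algebraically from the transported quantities. The sketch below shows the standard k-ε relation $\nu_t = C_\mu k^2/\varepsilon$ with the conventional constant $C_\mu = 0.09$; the example input values are invented.

```python
# Eddy-viscosity closure used with the Boussinesq hypothesis (standard k-epsilon form).

C_MU = 0.09   # conventional model constant

def eddy_viscosity(k: float, eps: float) -> float:
    """Turbulent kinematic viscosity [m^2/s] from k [m^2/s^2] and epsilon [m^2/s^3]."""
    return C_MU * k * k / eps

# Example with made-up turbulence levels typical of an internal flow.
print(eddy_viscosity(k=0.5, eps=10.0))   # ~2.25e-3 m^2/s
```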

Large eddy simulation

Volume rendering of a non-premixed swirl flame as simulated by LES

Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
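
The subgrid-scale model is left unspecified above; one classical choice (the Smagorinsky model, named here as an assumption rather than taken from the text) computes a subgrid eddy viscosity from the resolved strain rate, as sketched below for a 2D velocity field on a uniform grid.

```python
# Smagorinsky subgrid-scale viscosity sketch: nu_sgs = (Cs * Delta)^2 * |S|,
# with |S| = sqrt(2 S_ij S_ij). Arrays are indexed as [x, y] on a uniform grid.
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Subgrid eddy viscosity for a 2D resolved field (central-difference gradients)."""
    dudx, dudy = np.gradient(u, dx)
    dvdx, dvdy = np.gradient(v, dx)
    sxx, syy = dudx, dvdy
    sxy = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (sxx**2 + syy**2 + 2.0 * sxy**2))
    return (cs * dx) ** 2 * s_mag

x = np.linspace(0.0, 2.0 * np.pi, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)     # divergence-free test field
print("max nu_sgs:", smagorinsky_viscosity(u, v, x[1] - x[0]).max())
```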

Detached eddy simulation

Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Philippe R. Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001) by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart-Allmaras model based DES acts as LES with a wall model, DES based on other models (like two equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solutions.

IDDES simulation of the Karel Motorsports BMW. This is a type of DES simulation completed in OpenFOAM. The plot shows the pressure coefficient.

Direct numerical simulation

Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to $Re^{3}$. DNS is intractable for flows with complex geometries or flow configurations.
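
A rough sense of that cost can be obtained from the standard Kolmogorov estimates (quoted here as textbook scalings, not from the text above): the number of grid points grows roughly as $Re^{9/4}$ and the total work, including the time-step restriction, roughly as $Re^{3}$.

```python
# Back-of-the-envelope DNS cost scaling (illustrative orders of magnitude only).
for re in (1e3, 1e5, 1e7):
    points = re ** 2.25          # ~ Re^(9/4) grid points
    cost = re ** 3               # ~ Re^3 total work (arbitrary units)
    print(f"Re = {re:8.0e}: ~{points:.1e} grid points, relative cost ~{cost:.1e}")
```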

Coherent vortex simulation

The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and an incoherent part, which is the random background flow. This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but it differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the same energy spectrum as the total flow and corresponded to coherent structures (vortex tubes), while the incoherent part of the flow was composed of homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the SFS flow field's coherent portion.

PDF methods

Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity, $f_V(\boldsymbol{v};\boldsymbol{x},t)\,d\boldsymbol{v}$, which gives the probability of the velocity at point $\boldsymbol{x}$ being between $\boldsymbol{v}$ and $\boldsymbol{v} + d\boldsymbol{v}$. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.

Vorticity confinement method

The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave-like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved, as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.

Linear eddy model

The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.

Two-phase flow

Simulation of a bubble swarm using the volume of fluid method

The modeling of two-phase flow is still under development. Different methods have been proposed, including the Volume of fluid method, the level-set method and front tracking. These methods often involve a tradeoff between maintaining a sharp interface or conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface.

Solution algorithms

Discretization in the space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
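
The Krylov-plus-preconditioner pattern described above can be sketched with SciPy on a toy nonsymmetric sparse system (a small convection-diffusion-like matrix invented for the example, not one produced by a real CFD discretization): an incomplete LU factorization serves as the preconditioner inside GMRES.

```python
# Preconditioned GMRES sketch: ILU preconditioner applied to a toy nonsymmetric system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
diffusion = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
convection = sp.diags([-0.5, 0.5], [-1, 1], shape=(n, n))      # makes the system nonsymmetric
A = (diffusion + convection).tocsc()
b = np.ones(n)

ilu = spla.spilu(A)                                            # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)              # wrap it as a preconditioner

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else "not converged",
      "| residual norm:", np.linalg.norm(A @ x - b))
```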

Multigrid has the advantage of asymptotically optimal performance on a number of problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require a number of iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.

For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.

Unsteady aerodynamics

CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code to model oscillating airfoils based on transonic small perturbation theory, developed by Ballhaus and associates. It uses a Murman-Cole switch algorithm for modeling the moving shock waves. It was later extended to 3-D with the use of a rotated difference scheme by AFWAL/Boeing, which resulted in LTRAN3.

Biomedical engineering

Simulation of blood flow in a human aorta

CFD investigations are used to clarify the characteristics of aortic flow in detail beyond the capabilities of experimental measurements. To analyze these conditions, CAD models of the human vascular system are extracted employing modern imaging techniques such as MRI or computed tomography. A 3D model is reconstructed from this data and the fluid flow can be computed. Blood properties such as density and viscosity, and realistic boundary conditions (e.g. systemic pressure), have to be taken into consideration. This makes it possible to analyze and optimize the flow in the cardiovascular system for different applications.

CPU versus GPU

Traditionally, CFD simulations are performed on CPUs.

In a more recent trend, simulations are also performed on GPUs. These typically contain slower but more numerous processors. For CFD algorithms that feature good parallel performance (i.e., good speed-up from adding more cores), this can greatly reduce simulation times. Fluid-implicit particle and lattice-Boltzmann methods are typical examples of codes that scale well on GPUs.

Unconscious mind

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Unconscious_mind

In psychoanalysis and other psychological theories, the unconscious mind (or the unconscious) is the part of the psyche that is not available to introspection. Although these processes exist beneath the surface of conscious awareness, they are thought to exert an effect on conscious thought processes and behavior. The term was coined by the 18th-century German Romantic philosopher Friedrich Schelling and later introduced into English by the poet and essayist Samuel Taylor Coleridge.

The emergence of the concept of the unconscious in psychology and general culture was mainly due to the work of Austrian neurologist and psychoanalyst Sigmund Freud. In psychoanalytic theory, the unconscious mind consists of ideas and drives that have been subject to the mechanism of repression: anxiety-producing impulses in childhood are barred from consciousness, but do not cease to exist, and exert a constant pressure in the direction of consciousness. However, the content of the unconscious is only knowable to consciousness through its representation in a disguised or distorted form, by way of dreams and neurotic symptoms, as well as in slips of the tongue and jokes. The psychoanalyst seeks to interpret these conscious manifestations in order to understand the nature of the repressed.

The unconscious mind can be seen as the source of dreams and automatic thoughts (those that appear without any apparent cause), the repository of forgotten memories (that may still be accessible to consciousness at some later time), and the locus of implicit knowledge (the things that we have learned so well that we do them without thinking). Phenomena related to semi-consciousness include awakening, implicit memory, subliminal messages, trances, hypnagogia and hypnosis. While sleep, sleepwalking, dreaming, delirium and comas may signal the presence of unconscious processes, these processes are seen as symptoms rather than the unconscious mind itself.

Some critics have doubted the existence of the unconscious altogether.

Historical overview

German

The term "unconscious" (German: unbewusst) was coined by the 18th-century German Romantic philosopher Friedrich Schelling (in his System of Transcendental Idealism, ch. 6, § 3) and later introduced into English by the poet and essayist Samuel Taylor Coleridge (in his Biographia Literaria). Some rare earlier instances of the term "unconsciousness" (Unbewußtseyn) can be found in the work of the 18th-century German physician and philosopher Ernst Platner.

Vedas

Influences on thinking that originate from outside an individual's consciousness were reflected in the ancient ideas of temptation, divine inspiration, and the predominant role of the gods in affecting motives and actions. The idea of internalised unconscious processes in the mind was present in antiquity, and has been explored across a wide variety of cultures. Unconscious aspects of mentality were referred to between 2,500 and 600 BC in the Hindu texts known as the Vedas, found today in Ayurvedic medicine.

Paracelsus

Paracelsus is credited as the first to make mention of an unconscious aspect of cognition in his work Von den Krankheiten (translates as "About illnesses", 1567), and his clinical methodology created a cogent system that is regarded by some as the beginning of modern scientific psychology.

Shakespeare

William Shakespeare explored the role of the unconscious in many of his plays, without naming it as such.

Philosophy

In his work Anthropology, philosopher Immanuel Kant was one of the first to discuss the subject of unconscious ideas.

Western philosophers such as Arthur Schopenhauer, Baruch Spinoza, Gottfried Wilhelm Leibniz, Johann Gottlieb Fichte, Georg Wilhelm Friedrich Hegel, Karl Robert Eduard von Hartmann, Carl Gustav Carus, Søren Aabye Kierkegaard, Friedrich Wilhelm Nietzsche and Thomas Carlyle used the word unconscious.

In 1880 at the Sorbonne, Edmond Colsenet defended a philosophy thesis (PhD) on the unconscious. Elie Rabier and Alfred Fouillee performed syntheses of the unconscious "at a time when Freud was not interested in the concept".

Psychology

Nineteenth century

According to historian of psychology Mark Altschule, "It is difficult—or perhaps impossible—to find a nineteenth-century psychologist or psychiatrist who did not recognize unconscious cerebration as not only real but of the highest importance." In 1890, when psychoanalysis was still unheard of, William James, in his monumental treatise on psychology (The Principles of Psychology), examined the way Schopenhauer, von Hartmann, Janet, Binet and others had used the terms 'unconscious' and 'subconscious'. German psychologists Gustav Fechner and Wilhelm Wundt had begun to use the term in their experimental psychology, in the context of manifold, jumbled sense data that the mind organizes at an unconscious level before revealing it as a cogent totality in conscious form. Eduard von Hartmann published a book dedicated to the topic, Philosophy of the Unconscious, in 1869.

Freud

The iceberg metaphor proposed by G. T. Fechner is often used to provide a visual representation of Freud's theory that most of the human mind operates unconsciously.

Sigmund Freud and his followers developed an account of the unconscious mind. He worked with the unconscious mind to develop an explanation for mental illness.

For Freud, the unconscious is not merely that which is not conscious. He refers to that as the descriptive unconscious and it is only the starting postulate for real investigation into the psyche. He further distinguishes the unconscious from the pre-conscious: the pre-conscious is merely latent – thoughts, memories, etc. that are not present to consciousness but are capable of becoming so; the unconscious consists of psychic material that is made completely inaccessible to consciousness by the act of repression. The distinctions and inter-relationships between these three regions of the psyche—the conscious, the pre-conscious, and the unconscious—form what Freud calls the topographical model of the psyche. He later sought to respond to the perceived ambiguity of the term "unconscious" by developing what he called the structural model of the psyche, in which unconscious processes were described in terms of the id and the superego in their relation to the ego.

In the psychoanalytic view, unconscious mental processes can only be recognized through analysis of their effects in consciousness. Unconscious thoughts are not directly accessible to ordinary introspection, but they are capable of partially evading the censorship mechanism of repression in a disguised form, manifesting, for example, as dream elements or neurotic symptoms. Such symptoms are supposed to be capable of being "interpreted" during psychoanalysis, with the help of methods such as free association, dream analysis, and analysis of verbal slips and other unintentional manifestations in conscious life.

Jung

Carl Gustav Jung agreed with Freud that the unconscious is a determinant of personality, but he proposed that the unconscious be divided into two layers: the personal unconscious and the collective unconscious. The personal unconscious is a reservoir of material that was once conscious but has been forgotten or suppressed, much like Freud's notion. The collective unconscious, however, is the deepest level of the psyche, containing the accumulation of inherited psychic structures and archetypal experiences. Archetypes are not memories but energy centers or psychological functions that are apparent in the culture's use of symbols. The collective unconscious is therefore said to be inherited and contain material of an entire species rather than of an individual. The collective unconscious is, according to Jung, "[the] whole spiritual heritage of mankind's evolution, born anew in the brain structure of every individual".

In addition to the structure of the unconscious, Jung differed from Freud in that he did not believe that sexuality was at the base of all unconscious thoughts.

Dreams

Freud

The purpose of dreams, according to Freud, is to fulfill repressed wishes while simultaneously allowing the dreamer to remain asleep. The dream is a disguised fulfillment of the wish because the unconscious desire in its raw form would disturb the sleeper and can only avoid censorship by associating itself with elements that are not subject to repression. Thus Freud distinguished between the manifest content and latent content of the dream. The manifest content consists of the plot and elements of a dream as they appear to consciousness, particularly upon waking, as the dream is recalled. The latent content refers to the hidden or disguised meaning of the events and elements of the dream. It represents the unconscious psychic realities of the dreamer's current issues and childhood conflicts, the nature of which the analyst is seeking to understand through interpretation of the manifest content. In Freud's theory, dreams are instigated by the events and thoughts of everyday life. In what he called the "dream-work", these events and thoughts, governed by the rules of language and the reality principle, become subject to the "primary process" of unconscious thought, which is governed by the pleasure principle, wish gratification and the repressed sexual scenarios of childhood. The dream-work involves a process of disguising these unconscious desires in order to preserve sleep. This process occurs primarily by means of what Freud called condensation and displacement. Condensation is the focusing of the energy of several ideas into one, and displacement is the surrender of one idea's energy to another more trivial representative. The manifest content is thus thought to be a highly significant simplification of the latent content, capable of being deciphered in the analytic process, potentially allowing conscious insight into unconscious mental activity.

Neurobiological theory of dreams

Allan Hobson and colleagues developed what they called the activation-synthesis hypothesis, which proposes that dreams are simply the side effects of the neural activity in the brain that produces beta brain waves during REM sleep that are associated with wakefulness. According to this hypothesis, neurons fire periodically during sleep in the lower brain levels and thus send random signals to the cortex. The cortex then synthesizes a dream in reaction to these signals in order to try to make sense of why the brain is sending them. However, the hypothesis does not state that dreams are meaningless; it just downplays the role that emotional factors play in determining dreams.

Contemporary cognitive psychology

Research

There is an extensive body of research in contemporary cognitive psychology devoted to mental activity that is not mediated by conscious awareness. Most of this research on unconscious processes has been done in the academic tradition of the information processing paradigm. The cognitive tradition of research into unconscious processes does not rely on the clinical observations and theoretical bases of the psychoanalytic tradition; instead it is mostly data driven. Cognitive research reveals that individuals automatically register and acquire more information than they are consciously aware of or can consciously remember and report.

Much research has focused on the differences between conscious and unconscious perception. There is evidence that whether something is consciously perceived depends both on the incoming stimulus (bottom up strength) and on top-down mechanisms like attention. Recent research indicates that some unconsciously perceived information can become consciously accessible if there is cumulative evidence. Similarly, content that would normally be conscious can become unconscious through inattention (e.g. in the attentional blink) or through distracting stimuli like visual masking.

Unconscious processing of information about frequency

An extensive line of research conducted by Hasher and Zacks has demonstrated that individuals register information about the frequency of events automatically (outside conscious awareness and without engaging conscious information processing resources). Moreover, perceivers do this unintentionally, truly "automatically", regardless of the instructions they receive, and regardless of the information processing goals they have. The ability to unconsciously and relatively accurately tally the frequency of events appears to have little or no relation to the individual's age, education, intelligence, or personality. Thus it may represent one of the fundamental building blocks of human orientation in the environment and possibly the acquisition of procedural knowledge and experience, in general.

Criticism of the Freudian concept

The notion that the unconscious mind exists at all has been disputed.

Franz Brentano rejected the concept of the unconscious in his 1874 book Psychology from an Empirical Standpoint, although his rejection followed largely from his definitions of consciousness and unconsciousness.

Jean-Paul Sartre offers a critique of Freud's theory of the unconscious in Being and Nothingness, based on the claim that consciousness is essentially self-conscious. Sartre also argues that Freud's theory of repression is internally flawed. Philosopher Thomas Baldwin argues that Sartre's argument is based on a misunderstanding of Freud.

Erich Fromm contends that "The term 'the unconscious' is actually a mystification (even though one might use it for reasons of convenience, as I am guilty of doing in these pages). There is no such thing as the unconscious; there are only experiences of which we are aware, and others of which we are not aware, that is, of which we are unconscious. If I hate a man because I am afraid of him, and if I am aware of my hate but not of my fear, we may say that my hate is conscious and that my fear is unconscious; still my fear does not lie in that mysterious place: 'the' unconscious."

John Searle has offered a critique of the Freudian unconscious. He argues that the Freudian cases of shallow, consciously held mental states would be best characterized as 'repressed consciousness,' while the idea of more deeply unconscious mental states is more problematic. He contends that the very notion of a collection of "thoughts" that exist in a privileged region of the mind such that they are in principle never accessible to conscious awareness, is incoherent. This is not to imply that there are not "nonconscious" processes that form the basis of much of conscious life. Rather, Searle simply claims that to posit the existence of something that is like a "thought" in every way except for the fact that no one can ever be aware of it (can never, indeed, "think" it) is an incoherent concept. To speak of "something" as a "thought" either implies that it is being thought by a thinker or that it could be thought by a thinker. Processes that are not causally related to the phenomenon called thinking are more appropriately called the nonconscious processes of the brain.

Other critics of the Freudian unconscious include David Stannard, Richard Webster, Ethan Watters, Richard Ofshe, and Eric Thomas Weber.

Some scientific researchers proposed the existence of unconscious mechanisms that are very different from the Freudian ones. They speak of a "cognitive unconscious" (John Kihlstrom), an "adaptive unconscious" (Timothy Wilson), or a "dumb unconscious" (Loftus and Klinger), which executes automatic processes but lacks the complex mechanisms of repression and symbolic return of the repressed, and the "deep unconscious system" of Robert Langs.

In modern cognitive psychology, many researchers have sought to strip the notion of the unconscious from its Freudian heritage, and alternative terms such as "implicit" or "automatic" have been used. These traditions emphasize the degree to which cognitive processing happens outside the scope of cognitive awareness, and show that things we are unaware of can nonetheless influence other cognitive processes as well as behavior. Active research traditions related to the unconscious include implicit memory (for example, priming), and Pawel Lewicki's nonconscious acquisition of knowledge.

Tuesday, July 8, 2025

Atmospheric refraction

From Wikipedia, the free encyclopedia
Diagram showing displacement of the Sun's image at sunrise and sunset
Comparison of inferior and superior mirages due to differing air refractive indices, n

Atmospheric refraction is the deviation of light or other electromagnetic wave from a straight line as it passes through the atmosphere due to the variation in air density as a function of height. This refraction is due to the velocity of light through air decreasing (the refractive index increases) with increased density. Atmospheric refraction near the ground produces mirages. Such refraction can also raise or lower, or stretch or shorten, the images of distant objects without involving mirages. Turbulent air can make distant objects appear to twinkle or shimmer. The term also applies to the refraction of sound. Atmospheric refraction is considered in measuring the position of both celestial and terrestrial objects.

Astronomical or celestial refraction causes astronomical objects to appear higher above the horizon than they actually are. Terrestrial refraction usually causes terrestrial objects to appear higher than they actually are, although in the afternoon when the air near the ground is heated, the rays can curve upward making objects appear lower than they actually are.

Refraction not only affects visible light rays, but all electromagnetic radiation, although in varying degrees. For example, in the visible spectrum, blue is more affected than red. This may cause astronomical objects to appear dispersed into a spectrum in high-resolution images.

The atmosphere refracts the image of a waxing crescent Moon as it sets into the horizon.[2]

Whenever possible, astronomers will schedule their observations around the times of culmination, when celestial objects are highest in the sky. Likewise, sailors will not shoot a star below 20° above the horizon. If observations of objects near the horizon cannot be avoided, it is possible to equip an optical telescope with control systems to compensate for the shift caused by the refraction. If the dispersion is also a problem (in case of broadband high-resolution observations), atmospheric refraction correctors (made from pairs of rotating glass prisms) can be employed as well.

Since the amount of atmospheric refraction is a function of the temperature gradient, temperature, pressure, and humidity (the amount of water vapor, which is especially important at mid-infrared wavelengths), the amount of effort needed for a successful compensation can be prohibitive. Surveyors, on the other hand, will often schedule their observations in the afternoon, when the magnitude of refraction is minimum.

Atmospheric refraction becomes more severe when temperature gradients are strong, and refraction is not uniform when the atmosphere is heterogeneous, as when turbulence occurs in the air. This causes suboptimal seeing conditions, such as the twinkling of stars and various deformations of the Sun's apparent shape soon before sunset or after sunrise.

Astronomical refraction

Atmospheric refraction distorting the Sun's disk into an uneven shape as it sets on the horizon.

Astronomical refraction deals with the angular position of celestial bodies, their appearance as a point source, and through differential refraction, the shape of extended bodies such as the Sun and Moon.

Atmospheric refraction of the light from a star is zero in the zenith, less than 1′ (one arc-minute) at 45° apparent altitude, and still only 5.3′ at 10° altitude; it quickly increases as altitude decreases, reaching 9.9′ at 5° altitude, 18.4′ at 2° altitude, and 35.4′ at the horizon; all values are for 10 °C and 1013.25 hPa in the visible part of the spectrum.

On the horizon, refraction is slightly greater than the apparent diameter of the Sun, so when the bottom of the Sun's disc appears to touch the horizon, the Sun's true altitude is negative. If the atmosphere suddenly vanished at this moment, the Sun could not be seen, as it would be entirely below the horizon. By convention, sunrise and sunset refer to times at which the Sun's upper limb appears on or disappears from the horizon, and the standard value for the Sun's true altitude is −50′: −34′ for the refraction and −16′ for the Sun's semi-diameter. The altitude of a celestial body is normally given for the center of the body's disc. In the case of the Moon, additional corrections are needed for the Moon's horizontal parallax and its apparent semi-diameter; both vary with the Earth–Moon distance.
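
As a small illustration of the arithmetic behind this convention, the following Python sketch uses only the standard values quoted above (it is a plain illustration, not an astronomical library routine) to show that the whole disc is geometrically below the horizon at apparent sunset:

# Standard values from the sunrise/sunset convention, in arcminutes.
REFRACTION_AT_HORIZON = 34.0   # refraction lifts the image by about 34'
SUN_SEMI_DIAMETER = 16.0       # mean apparent semi-diameter of the Sun

# True altitude of the Sun's centre at the moment the upper limb appears
# to touch the horizon: -(34' + 16') = -50'.
true_altitude_of_centre = -(REFRACTION_AT_HORIZON + SUN_SEMI_DIAMETER)

# Even the upper limb is geometrically below the horizon, by about 34'.
true_altitude_of_upper_limb = true_altitude_of_centre + SUN_SEMI_DIAMETER

print(true_altitude_of_centre, true_altitude_of_upper_limb)   # -50.0 -34.0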

Refraction near the horizon is highly variable, principally because of the variability of the temperature gradient near the Earth's surface and the geometric sensitivity of the nearly horizontal rays to this variability. As early as 1830, Friedrich Bessel had found that even after applying all corrections for temperature and pressure (but not for the temperature gradient) at the observer, highly precise measurements of refraction varied by ±0.19′ at two degrees above the horizon and by ±0.50′ at a half degree above the horizon. At and below the horizon, values of refraction significantly higher than the nominal value of 35.4′ have been observed in a wide range of climates. Georg Constantin Bouris measured refraction of as much as 4° for stars on the horizon at the Athens Observatory and, during his ill-fated Endurance expedition, Sir Ernest Shackleton recorded refraction of 2°37′:

“The sun which had made ‘positively his last appearance’ seven days earlier surprised us by lifting more than half its disk above the horizon on May 8. A glow on the northern horizon resolved itself into the sun at 11 am that day. A quarter of an hour later the unreasonable visitor disappeared again, only to rise again at 11:40 am, set at 1 pm, rise at 1:10 pm and set lingeringly at 1:20 pm. These curious phenomena were due to refraction which amounted to 2° 37′ at 1:20 pm. The temperature was 15° below 0° Fahr., and we calculated that the refraction was 2° above normal.”

Day-to-day variations in the weather will affect the exact times of sunrise and sunset as well as moon-rise and moon-set, and for that reason it generally is not meaningful to give rise and set times to greater precision than the nearest minute. More precise calculations can be useful for determining day-to-day changes in rise and set times that would occur with the standard value for refraction if it is understood that actual changes may differ because of unpredictable variations in refraction.

Because atmospheric refraction is nominally 34′ on the horizon, but only 29′ at 0.5° above it, the setting or rising sun seems to be flattened by about 5′ (about 1/6 of its apparent diameter).

Calculating refraction

Young distinguished several regions where different methods for calculating astronomical refraction were applicable. In the upper portion of the sky, with a zenith distance of less than 70° (or an altitude over 20°), various simple refraction formulas based on the index of refraction (and hence on the temperature, pressure, and humidity) at the observer are adequate. Between 20° and 5° of the horizon the temperature gradient becomes the dominant factor and numerical integration, using a method such as that of Auer and Standish and employing the temperature gradient of the standard atmosphere and the measured conditions at the observer, is required. Closer to the horizon, actual measurements of the changes with height of the local temperature gradient need to be employed in the numerical integration. Below the astronomical horizon, refraction is so variable that only crude estimates of astronomical refraction can be made; for example, the observed time of sunrise or sunset can vary by several minutes from day to day. As The Nautical Almanac notes, "the actual values of …the refraction at low altitudes may, in extreme atmospheric conditions, differ considerably from the mean values used in the tables."

Plot of refraction vs. altitude using Bennett's 1982 formula

Many different formulas have been developed for calculating astronomical refraction; they are reasonably consistent, differing among themselves by a few minutes of arc at the horizon and becoming increasingly consistent as they approach the zenith. The simpler formulations involved nothing more than the temperature and pressure at the observer, powers of the cotangent of the apparent altitude of the astronomical body and, in the higher-order terms, the height of a fictional homogeneous atmosphere. The simplest version of this formula, which Smart held to be accurate only within 45° of the zenith, is:

R = (n0 − 1) cot ha

where R is the refraction in radians, n0 is the index of refraction at the observer (which depends on the temperature, pressure, and humidity), and ha is the apparent altitude angle of the astronomical body.
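
A minimal sketch of this one-term formula, assuming a typical sea-level index of refraction of about 1.000293 for visible light (the value and the function name are illustrative, not taken from the text above):

import math

def simple_refraction_arcmin(apparent_altitude_deg, n0=1.000293):
    # R = (n0 - 1) * cot(ha), with R in radians; n0 = 1.000293 is an assumed
    # typical index of refraction of air at sea level for visible light.
    refraction_rad = (n0 - 1.0) / math.tan(math.radians(apparent_altitude_deg))
    return math.degrees(refraction_rad) * 60.0   # convert radians to arcminutes

# About 1.0' at 45 degrees apparent altitude, close to the value quoted
# earlier in the article; the one-term formula slightly overestimates it.
print(round(simple_refraction_arcmin(45.0), 2))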

An early simple approximation of this form, which directly incorporated the temperature and pressure at the observer, was developed by George Comstock:

where R is the refraction in seconds of arc, b is the atmospheric pressure in millimeters of mercury, and t is the temperature in Celsius. Comstock considered that this formula gave results within one arcsecond of Bessel's values for refraction from 15° above the horizon to the zenith.

A further expansion in terms of the third power of the cotangent of the apparent altitude incorporates H0, the height of the homogeneous atmosphere, in addition to the usual conditions at the observer:

A version of this formula is used in the International Astronomical Union's Standards of Fundamental Astronomy; a comparison of the IAU's algorithm with more rigorous ray-tracing procedures indicated an agreement within 60 milliarcseconds at altitudes above 15°.

Bennett developed another simple empirical formula for calculating refraction from the apparent altitude which gives the refraction R in arcminutes:

R = cot(ha + 7.31 / (ha + 4.4))

where ha is the apparent altitude in degrees. This formula is used in the U.S. Naval Observatory's Vector Astrometry Software, and is reported to be consistent with Garfinkel's more complex algorithm within 0.07′ over the entire range from the zenith to the horizon. Sæmundsson developed an inverse formula for determining refraction from true altitude; if h is the true altitude in degrees, the refraction R in arcminutes is given by

R = 1.02 cot(h + 10.3 / (h + 5.11));

the formula is consistent with Bennett's to within 0.1′. The formulas of Bennett and Sæmundsson assume an atmospheric pressure of 101.0 kPa and a temperature of 10 °C; for a different pressure P (in kilopascals) and temperature T (in degrees Celsius), the refraction calculated from these formulas is multiplied by

(P / 101.0) × (283 / (273 + T))

Refraction increases approximately 1% for every 0.9 kPa increase in pressure, and decreases approximately 1% for every 0.9 kPa decrease in pressure. Similarly, refraction increases approximately 1% for every 3 °C decrease in temperature, and decreases approximately 1% for every 3 °C increase in temperature.
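
The sketch below shows how these two empirical formulas and the pressure–temperature scaling are typically combined in code; the function names are illustrative, and this is not the Vector Astrometry Software implementation:

import math

def bennett_refraction_arcmin(apparent_alt_deg, pressure_kpa=101.0, temp_c=10.0):
    # Bennett: refraction in arcminutes from the apparent altitude in degrees.
    r = 1.0 / math.tan(math.radians(apparent_alt_deg + 7.31 / (apparent_alt_deg + 4.4)))
    # Scale for pressure and temperature as described above.
    return r * (pressure_kpa / 101.0) * (283.0 / (273.0 + temp_c))

def saemundsson_refraction_arcmin(true_alt_deg, pressure_kpa=101.0, temp_c=10.0):
    # Saemundsson: refraction in arcminutes from the true altitude in degrees.
    r = 1.02 / math.tan(math.radians(true_alt_deg + 10.3 / (true_alt_deg + 5.11)))
    return r * (pressure_kpa / 101.0) * (283.0 / (273.0 + temp_c))

# On the horizon Bennett's formula gives close to the nominal 34',
# and at 45 degrees the refraction is roughly 1'.
print(round(bennett_refraction_arcmin(0.0), 1))    # about 34.5
print(round(bennett_refraction_arcmin(45.0), 2))   # about 0.99

Colder air or higher pressure increases the computed refraction, in line with the percentage sensitivities noted above.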

Random refraction effects

The animated image of the Moon's surface shows the effects of atmospheric turbulence on the view.

Turbulence in Earth's atmosphere scatters the light from stars, making them appear brighter and fainter on a time-scale of milliseconds. The slowest components of these fluctuations are visible as twinkling (also called scintillation).

Turbulence also causes small, sporadic motions of the star image, and produces rapid distortions in its structure. These effects are not visible to the naked eye, but can be easily seen even in small telescopes. They perturb astronomical seeing conditions. Some telescopes employ adaptive optics to reduce this effect.

Terrestrial refraction

Terrestrial refraction, sometimes called geodetic refraction, deals with the apparent angular position and measured distance of terrestrial bodies. It is of special concern for the production of precise maps and surveys. Since the line of sight in terrestrial refraction passes near the earth's surface, the magnitude of refraction depends chiefly on the temperature gradient near the ground, which varies widely with the time of day, the season of the year, the nature of the terrain, the state of the weather, and other factors.

As a common approximation, terrestrial refraction is considered as a constant bending of the ray of light or line of sight, in which the ray can be considered as describing a circular path. A common measure of refraction is the coefficient of refraction. Unfortunately there are two different definitions of this coefficient. One is the ratio of the radius of the Earth to the radius of the line of sight, the other is the ratio of the angle that the line of sight subtends at the center of the Earth to the angle of refraction measured at the observer. Since the latter definition only measures the bending of the ray at one end of the line of sight, it is one half the value of the former definition.

The coefficient of refraction is directly related to the local vertical temperature gradient and the atmospheric temperature and pressure. The larger version of the coefficient k, measuring the ratio of the radius of the Earth to the radius of the line of sight, is given by:

where temperature T is given in kelvins, pressure P in millibars, and height h in meters. The angle of refraction increases with the coefficient of refraction and with the length of the line of sight.

Although the straight line from your eye to a distant mountain might be blocked by a closer hill, the ray may curve enough to make the distant peak visible. A convenient method to analyze the effect of refraction on visibility is to consider an increased effective radius of the Earth, Reff, given by

Reff = R / (1 − k)

where R is the radius of the Earth and k is the coefficient of refraction. Under this model the ray can be considered a straight line on an Earth of increased radius.
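
As a small sketch of the effective-radius model, the Python lines below estimate how far the visible horizon extends; the coefficient k ≈ 0.13 used here is only a commonly assumed typical value for near-ground sightlines, not one given in the text:

import math

EARTH_RADIUS_M = 6_371_000.0   # mean radius of the Earth in meters

def effective_radius_m(k=0.13):
    # Reff = R / (1 - k); with k = 0 the model reduces to the real Earth.
    return EARTH_RADIUS_M / (1.0 - k)

def horizon_distance_km(eye_height_m, k=0.13):
    # Distance to the apparent horizon, treating the refracted ray as a
    # straight line over an Earth of radius Reff.
    return math.sqrt(2.0 * effective_radius_m(k) * eye_height_m) / 1000.0

# Refraction pushes the horizon out by roughly 7% for a 2 m eye height.
print(round(horizon_distance_km(2.0, k=0.0), 1))    # about 5.0 km, no refraction
print(round(horizon_distance_km(2.0, k=0.13), 1))   # about 5.4 km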

The curvature of the refracted ray in arc seconds per meter can be computed using the relationship

where 1/σ is the curvature of the ray in arcsec per meter, P is the pressure in millibars, T is the temperature in kelvins, and β is the angle of the ray to the horizontal. Multiplying half the curvature by the length of the ray path gives the angle of refraction at the observer. For a line of sight near the horizon, cos β differs little from unity and the factor can be omitted. This yields

where L is the length of the line of sight in meters and Ω is the refraction at the observer measured in arc seconds.

A simple approximation is to consider that a mountain's apparent altitude at your eye (in degrees) will exceed its true altitude by its distance in kilometers divided by 1500. This assumes a fairly horizontal line of sight and ordinary air density; if the mountain is very high (so much of the sightline is in thinner air) divide by 1600 instead.
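
A one-line version of this rule of thumb (the divisors 1500 and 1600 are exactly those quoted above; the function name is just illustrative):

def apparent_lift_deg(distance_km, thin_air=False):
    # Extra apparent altitude of a distant peak, in degrees: distance in km
    # divided by 1500, or by 1600 when much of the sightline is in thinner air.
    return distance_km / (1600.0 if thin_air else 1500.0)

# A peak 100 km away appears about 0.067 degrees (about 4') higher than it is.
print(round(apparent_lift_deg(100.0), 3))
print(round(apparent_lift_deg(100.0) * 60.0, 1))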

Ballistics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Ballistics Trajectories of thr...