Saturday, November 27, 2021

Computational fluid dynamics

Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.

CFD is applied to a wide range of research and engineering problems in many fields of study and industries, including aerodynamics and aerospace analysis, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, and engine and combustion analysis.

Background and history

A computer simulation of high velocity air flow around the Space Shuttle during re-entry.
 
A simulation of the Hyper-X scramjet vehicle in operation at Mach-7

The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define many single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.
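To make this hierarchy concrete, the momentum equations at the first two levels can be written in one common textbook form (the notation here is chosen purely for illustration: $\mathbf{u}$ is velocity, $p$ pressure, $\rho$ density, $\boldsymbol{\tau}$ the viscous stress tensor, $\mathbf{g}$ gravity, $\phi$ the velocity potential, and $M_\infty$ the free-stream Mach number). The Navier–Stokes momentum equation reads

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\mathbf{g},$$

dropping the viscous stress $\boldsymbol{\tau}$ gives the Euler momentum equation

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \rho\mathbf{g},$$

and, for small perturbations about a uniform free stream, the potential description reduces further to a linearized (Prandtl–Glauert type) potential equation

$$(1 - M_\infty^2)\,\phi_{xx} + \phi_{yy} + \phi_{zz} = 0.$$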

Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil were developed in the 1930s.

Among the earliest calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that they used finite differences and divided the physical space into cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book.

The computer power available paced the development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method, the fluid-in-cell method, the vorticity–stream-function method, and the marker-and-cell method. Fromm's vorticity–stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world.

The first paper with a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher order codes, using higher order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of many submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high speed trains and racing yachts. The NASA PMARC code was derived from an early version of VSAERO, and a derivative of PMARC, named CMARC, is also commercially available.

In the two-dimensional realm, a number of Panel Codes have been developed for airfoil analysis and design. The codes typically have a boundary layer analysis included, so that viscous effects can be modeled. Richard Eppler developed the PROFILE code, partly with NASA funding, which became available in the early 1980s. This was soon followed by Mark Drela's XFOIL code. Both PROFILE and XFOIL incorporate two-dimensional panel codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses a conformal transformation method for inverse airfoil design, while XFOIL has both a conformal transformation and an inverse panel method for airfoil design.

An intermediate step between Panel Codes and Full Potential codes was provided by codes that used the Transonic Small Disturbance equations. In particular, the three-dimensional WIBCO code, developed by Charlie Boppe of Grumman Aircraft in the early 1980s, has seen heavy use.

Developers turned to Full Potential codes, as panel methods could not calculate the non-linear flow present at transonic speeds. The first description of a means of using the Full Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970. Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were widely used, the most important being named Program H. Program H was further developed by Bob Melnik and his group at Grumman Aerospace as Grumfoil. Antony Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with David Caughey to develop the important three-dimensional Full Potential code FLO22 in 1975. Many Full Potential codes emerged after this, culminating in Boeing's Tranair (A633) code, which still sees heavy use.

The next step was the Euler equations, which promised to provide more accurate solutions of transonic flows. The methodology used by Jameson in his three-dimensional FLO57 code (1981) was used by others to produce such programs as Lockheed's TEAM program and IAI/Analytical Methods' MGAERO program. MGAERO is unique in being a structured Cartesian mesh code, while most other such codes use structured body-fitted grids (with the exception of NASA's highly successful CART3D code, Lockheed's SPLITFLOW code and Georgia Tech's NASCART-GT). Antony Jameson also developed the three-dimensional AIRPLANE code, which made use of unstructured tetrahedral grids.

In the two-dimensional realm, Mark Drela and Michael Giles, then graduate students at MIT, developed the ISES Euler program (actually a suite of programs) for airfoil design and analysis. This code first became available in 1986 and has been further developed to design, analyze and optimize single or multi-element airfoils, as the MSES program. MSES sees wide use throughout the world. A derivative of MSES, for the design and analysis of airfoils in a cascade, is MISES, developed by Harold Youngren while he was a graduate student at MIT.

The Navier–Stokes equations were the ultimate target of development. Two-dimensional codes, such as NASA Ames' ARC2D code first emerged. A number of three-dimensional codes were developed (ARC3D, OVERFLOW, CFL3D are three successful NASA contributions), leading to numerous commercial packages.

Hierarchy of fluid flow equations

CFD can be seen as a group of computational methodologies (discussed below) used to solve equations governing fluid flow. In the application of CFD, a critical step is to decide which set of physical assumptions and related equations need to be used for the problem at hand. To illustrate this step, the following summarizes the physical assumptions/simplifications taken in equations of a flow that is single-phase (see multiphase flow and two-phase flow), single-species (i.e., it consists of one chemical species), non-reacting, and (unless said otherwise) compressible. Thermal radiation is neglected, and body forces due to gravity are considered (unless said otherwise). In addition, for this type of flow, the next discussion highlights the hierarchy of flow equations solved with CFD. Note that some of the following equations could be derived in more than one way.

  • Conservation laws (CL): These are the most fundamental equations considered with CFD in the sense that, for example, all the following equations can be derived from them. For a single-phase, single-species, compressible flow one considers the conservation of mass, conservation of linear momentum, and conservation of energy.
  • Continuum conservation laws (CCL): Start with the CL. Assume that mass, momentum and energy are locally conserved: These quantities are conserved and cannot "teleport" from one place to another but can only move by a continuous flow (see continuity equation). Another interpretation is that one starts with the CL and assumes a continuum medium (see continuum mechanics). The resulting system of equations is unclosed since to solve it one needs further relationships/equations: (a) constitutive relationships for the viscous stress tensor; (b) constitutive relationships for the diffusive heat flux; (c) an equation of state (EOS), such as the ideal gas law; and, (d) a caloric equation of state relating temperature with quantities such as enthalpy or internal energy.
  • Compressible Navier–Stokes equations (C-NS): Start with the CCL. Assume a Newtonian viscous stress tensor (see Newtonian fluid) and a Fourier heat flux (see heat flux). The C-NS need to be augmented with an EOS and a caloric EOS to have a closed system of equations.
  • Incompressible Navier-Stokes equations (I-NS): Start with the C-NS. Assume that density is always and everywhere constant. Another way to obtain the I-NS is to assume that the Mach number is very small and that temperature differences in the fluid are very small as well. As a result, the mass-conservation and momentum-conservation equations are decoupled from the energy-conservation equation, so one only needs to solve for the first two equations.
  • Compressible Euler equations (EE): Start with the C-NS. Assume a frictionless flow with no diffusive heat flux.
  • Weakly compressible Navier–Stokes equations (WC-NS): Start with the C-NS. Assume that density variations depend only on temperature and not on pressure. For example, for an ideal gas, use $\rho = p_0/(RT)$, where $p_0$ is a conveniently defined reference pressure that is always and everywhere constant, $\rho$ is density, $R$ is the specific gas constant, and $T$ is temperature. As a result, the WC-NS do not capture acoustic waves. It is also common in the WC-NS to neglect the pressure-work and viscous-heating terms in the energy-conservation equation. The WC-NS are also called the C-NS with the low-Mach-number approximation.
  • Boussinesq equations: Start with the C-NS. Assume that density variations are always and everywhere negligible except in the gravity term of the momentum-conservation equation (where density multiplies the gravitational acceleration). Also assume that various fluid properties such as viscosity, thermal conductivity, and heat capacity are always and everywhere constant. The Boussinesq equations are widely used in microscale meteorology.
  • Compressible Reynolds-averaged Navier–Stokes equations and compressible Favre-averaged Navier-Stokes equations (C-RANS and C-FANS): Start with the C-NS. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = F + f''$, where $F$ is an ensemble-average of the flow variable, and $f''$ is a perturbation or fluctuation from this average. $f''$ is not necessarily small. If $F$ is a classic ensemble-average (see Reynolds decomposition) one obtains the Reynolds-averaged Navier–Stokes equations. And if $F$ is a density-weighted ensemble-average one obtains the Favre-averaged Navier-Stokes equations. As a result, and depending on the Reynolds number, the range of scales of motion is greatly reduced, something which leads to much faster solutions in comparison to solving the C-NS. However, information is lost, and the resulting system of equations requires the closure of various unclosed terms, notably the Reynolds stress.
  • Ideal flow or potential flow equations: Start with the EE. Assume zero fluid-particle rotation (zero vorticity) and zero flow expansion (zero divergence). The resulting flowfield is entirely determined by the geometrical boundaries. Ideal flows can be useful in modern CFD to initialize simulations.
  • Linearized compressible Euler equations (LEE): Start with the EE. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = f_0 + f'$, where $f_0$ is the value of the flow variable at some reference or base state, and $f'$ is a perturbation or fluctuation from this state. Furthermore, assume that this perturbation is very small in comparison with some reference value. Finally, assume that $f_0$ satisfies "its own" equation, such as the EE. The LEE and its many variations are widely used in computational aeroacoustics.
  • Sound wave or acoustic wave equation: Start with the LEE. Neglect all gradients of the base-state flow variables, and assume that the Mach number at the reference or base state is very small. The resulting equations for density, momentum and energy can be manipulated into a pressure equation, giving the well-known sound wave equation.
  • Shallow water equations (SW): Consider a flow near a wall where the wall-parallel length-scale of interest is much larger than the wall-normal length-scale of interest. Start with the EE. Assume that density is always and everywhere constant, neglect the velocity component perpendicular to the wall, and consider the velocity parallel to the wall to be spatially-constant.
  • Boundary layer equations (BL): Start with the C-NS (I-NS) for compressible (incompressible) boundary layers. Assume that there are thin regions next to walls where spatial gradients perpendicular to the wall are much larger than those parallel to the wall.
  • Bernoulli equation: Start with the EE. Assume that density variations depend only on pressure variations.
  • Steady Bernoulli equation: Start with the Bernoulli Equation and assume a steady flow. Or start with the EE and assume that the flow is steady and integrate the resulting equation along a streamline.
  • Stokes Flow or creeping flow equations: Start with the C-NS or I-NS. Neglect the inertia of the flow. Such an assumption can be justified when the Reynolds number is very low. As a result, the resulting set of equations is linear, which greatly simplifies their solution.
  • Two-dimensional channel flow equation: Consider the flow between two infinite parallel plates. Start with the C-NS. Assume that the flow is steady, two-dimensional, and fully developed (i.e., the velocity profile does not change along the streamwise direction). Note that this widely-used fully-developed assumption can be inadequate in some instances, such as some compressible, microchannel flows, in which case it can be supplanted by a locally fully-developed assumption.
  • One-dimensional Euler equations or one-dimensional gas-dynamic equations (1D-EE): Start with the EE. Assume that all flow quantities depend only on one spatial dimension.
  • Fanno flow equation: Consider the flow inside a duct with constant area and adiabatic walls. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the momentum-conservation equation an empirical term to recover the effect of wall friction (neglected in the EE). To close the Fanno flow equation, a model for this friction term is needed. Such a closure involves problem-dependent assumptions.
  • Rayleigh flow equation. Consider the flow inside a duct with constant area and either non-adiabatic walls without volumetric heat sources or adiabatic walls with volumetric heat sources. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the energy-conservation equation an empirical term to recover the effect of wall heat transfer or the effect of the heat sources (neglected in the EE).

Methodology

In all of these approaches the same basic procedure is followed.

  • During preprocessing
    • The geometry and physical bounds of the problem can be defined using computer aided design (CAD). From there, data can be suitably processed (cleaned-up) and the fluid volume (or fluid domain) is extracted.
    • The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform, structured or unstructured, consisting of a combination of hexahedral, tetrahedral, prismatic, pyramidal or polyhedral elements.
    • The physical modeling is defined – for example, the equations of fluid motion + enthalpy + radiation + species conservation
    • Boundary conditions are defined. This involves specifying the fluid behaviour and properties at all bounding surfaces of the fluid domain. For transient problems, the initial conditions are also defined.
  • The simulation is started and the equations are solved iteratively as a steady-state or transient.
  • Finally a postprocessor is used for the analysis and visualization of the resulting solution.
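As a toy end-to-end illustration of this procedure, the short script below sets up a uniform mesh, applies boundary conditions, iterates a steady Laplace problem to convergence, and then reports a result. It is a minimal sketch of the workflow under simplifying assumptions (a square domain, a single scalar field, Jacobi iteration), not a production CFD solver.

```python
# Minimal end-to-end illustration of the basic CFD procedure on a toy problem:
# a steady 2-D Laplace equation for a scalar field psi, solved by Jacobi iteration.
import numpy as np

# --- Preprocessing: define the domain and a uniform structured mesh ---------
nx, ny = 41, 41                      # number of grid points in x and y
psi = np.zeros((ny, nx))             # scalar field, initial guess = 0

# --- Boundary conditions: hold the top wall at a fixed value ----------------
psi[-1, :] = 1.0                     # top boundary held at psi = 1
# (remaining walls stay at psi = 0)

# --- Solve: iterate until the solution stops changing (steady state) --------
for iteration in range(10_000):
    psi_old = psi.copy()
    # Jacobi update of interior points from the four neighbours
    psi[1:-1, 1:-1] = 0.25 * (psi_old[1:-1, 2:] + psi_old[1:-1, :-2] +
                              psi_old[2:, 1:-1] + psi_old[:-2, 1:-1])
    residual = np.max(np.abs(psi - psi_old))
    if residual < 1e-6:
        break

# --- Postprocessing: report a few numbers instead of a full visualization ---
print(f"converged after {iteration} iterations, residual = {residual:.2e}")
print("field value at domain centre:", psi[ny // 2, nx // 2])
```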

Discretization methods

The stability of the selected discretization is generally established numerically rather than analytically, as with simple linear problems. Special care must also be taken to ensure that the discretization handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.

Some of the discretization methods being used are:

Finite volume method

The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion).

In the finite volume method, the governing partial differential equations (typically the Navier–Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume equation yields governing equations in the form

$$\frac{\partial}{\partial t}\iiint Q\,dV + \oiint F\,d\mathbf{A} = 0,$$

where $Q$ is the vector of conserved variables, $F$ is the vector of fluxes (see Euler equations or Navier–Stokes equations), $V$ is the volume of the control volume element, and $\mathbf{A}$ is the surface area of the control volume element.
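For illustration, the sketch below applies the flux-balance idea to the 1-D linear advection equation with a first-order upwind flux; it is a minimal, assumed example rather than the formulation of any particular finite volume code.

```python
# Illustrative 1-D finite volume scheme for the linear advection equation
#   dq/dt + d(a q)/dx = 0,   a > 0
# using first-order upwind fluxes at the cell faces (toy sketch only).
import numpy as np

a = 1.0                       # constant advection speed
nx, L = 100, 1.0              # number of cells, domain length
dx = L / nx
dt = 0.5 * dx / a             # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx

q = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # initial square pulse

for _ in range(100):
    # flux through each cell's left face; upwind: take the value from the cell to the left
    flux = a * np.roll(q, 1)          # periodic domain for simplicity
    # conservative update: each cell average changes by its net flux imbalance
    q = q - dt / dx * (np.roll(flux, -1) - flux)

print("total 'mass' is conserved:", q.sum() * dx)
```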

Finite element method

The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. However, FEM can require more memory and has slower solution times than the FVM.

In this method, a weighted residual equation is formed:

$$R_i = \iiint W_i Q \, dV^e,$$

where $R_i$ is the equation residual at an element vertex $i$, $Q$ is the conservation equation expressed on an element basis, $W_i$ is the weight factor, and $V^e$ is the volume of the element.
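A minimal sketch of the weighted-residual (Galerkin) idea is shown below, applied here to the 1-D Poisson problem -u'' = f with linear elements; the problem, source term and quadrature choices are illustrative assumptions only.

```python
# Minimal 1-D Galerkin finite element sketch for -u'' = f on (0, 1) with
# u(0) = u(1) = 0, using linear ("hat") weight/interpolation functions.
import numpy as np

n_el = 10                     # number of elements
n_nodes = n_el + 1
h = 1.0 / n_el
x = np.linspace(0.0, 1.0, n_nodes)

def f(x):                     # source term chosen so the exact solution is known
    return np.pi**2 * np.sin(np.pi * x)   # exact: u(x) = sin(pi x)

K = np.zeros((n_nodes, n_nodes))          # global stiffness matrix
b = np.zeros(n_nodes)                     # global load vector

for e in range(n_el):                     # assemble element contributions
    nodes = [e, e + 1]
    k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    x_mid = 0.5 * (x[e] + x[e + 1])
    b_local = 0.5 * h * f(x_mid) * np.array([1.0, 1.0])   # midpoint quadrature
    for i_loc, i in enumerate(nodes):
        b[i] += b_local[i_loc]
        for j_loc, j in enumerate(nodes):
            K[i, j] += k_local[i_loc, j_loc]

# apply homogeneous Dirichlet boundary conditions and solve the interior system
u = np.zeros(n_nodes)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])

print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))
```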

Finite difference method

The finite difference method (FDM) has historical importance and is simple to program. It is currently only used in a few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid). In conservative form, the governing equations are written as

$$\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} = 0,$$

where $Q$ is the vector of conserved variables, and $F$, $G$, and $H$ are the fluxes in the $x$, $y$, and $z$ directions respectively.
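As a small illustration of the finite difference approach, the sketch below integrates the 1-D heat equation with the standard explicit central-difference scheme; the problem setup is an assumption chosen for brevity.

```python
# Toy finite difference example: explicit (forward Euler in time, central in
# space) discretization of the 1-D heat equation du/dt = alpha * d2u/dx2.
import numpy as np

alpha = 1.0                   # diffusivity
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha      # respects the explicit stability limit dt <= 0.5 dx^2 / alpha
x = np.linspace(0.0, 1.0, nx)

u = np.sin(np.pi * x)         # initial condition; boundaries stay at zero

for _ in range(200):
    # second derivative approximated by the standard 3-point central stencil
    u[1:-1] += dt * alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2

# exact solution decays as exp(-pi^2 * alpha * t)
t = 200 * dt
print("numerical peak:", u[nx // 2], " exact peak:", np.exp(-np.pi**2 * alpha * t))
```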

Spectral element method

The spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary - they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh; this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low-order FEM in 2D, for quadrilateral elements the most typical choice is a bilinear test or interpolating function of the form $v(x,y) = ax + by + cxy + d$. In a spectral element method, however, the interpolating and test functions are chosen to be polynomials of a very high order (typically, e.g., of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is large. Thus, high-order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations to be carried out. At present there are some academic CFD codes based on the spectral element method, and more are under development as new time-stepping schemes arise in the scientific world.

Lattice Boltzmann method

The lattice Boltzmann method (LBM), with its simplified kinetic picture on a lattice, provides a computationally efficient description of hydrodynamics. Unlike traditional CFD methods, which numerically solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy), LBM models the fluid as consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice mesh. In this method, one works with the discrete-in-space-and-time version of the kinetic evolution equation in the Boltzmann Bhatnagar–Gross–Krook (BGK) form.
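The sketch below shows the collision-and-streaming cycle of a basic D2Q9 BGK lattice Boltzmann scheme on a periodic domain; it is a minimal illustration with assumed parameters, omitting obstacles, forcing and unit conversion.

```python
# Compact sketch of a D2Q9 lattice Boltzmann (BGK) update on a periodic domain,
# illustrating the propagation ("streaming") and collision steps mentioned above.
import numpy as np

nx, ny, tau = 64, 64, 0.8                        # lattice size and relaxation time

# D2Q9 discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def equilibrium(rho, ux, uy):
    """Standard second-order BGK equilibrium distribution."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# initial state: uniform density, small sinusoidal shear in the x-velocity
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(500):
    # collision: relax each population toward the local equilibrium (BGK)
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau
    # streaming: each population moves one lattice site along its velocity
    for k in range(9):
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)

print("max |ux| after 500 steps (shear wave decays viscously):", np.abs(ux).max())
```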

Boundary element method

In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.

High-resolution discretization schemes

High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
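As a small illustration, the sketch below computes minmod-limited slopes for a 1-D array of cell averages; near the steep jump the slope is limited to the smaller neighbouring difference, which is what suppresses spurious oscillations. The data and the choice of limiter are assumptions for illustration only.

```python
# Small illustration of a flux limiter: a second-order (MUSCL-type) slope is
# limited with the minmod function so that no new extrema are introduced.
import numpy as np

def minmod(a, b):
    """Return the argument of smaller magnitude when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(q):
    """Limited cell slopes for a periodic 1-D array of cell averages q."""
    dq_left = q - np.roll(q, 1)        # backward difference
    dq_right = np.roll(q, -1) - q      # forward difference
    return minmod(dq_left, dq_right)   # zero at extrema -> TVD behaviour

q = np.array([0.0, 0.1, 0.2, 1.2, 1.3, 1.4])   # smooth ramp with one steep jump
print(limited_slopes(q))   # slope at the jump is limited to the smaller neighbour difference
```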

Turbulence models

In computational modeling of turbulent flows, one common objective is to obtain a model that can predict quantities of interest, such as fluid velocity, for use in engineering designs of the system being modeled. For turbulent flows, the range of length scales and complexity of phenomena involved in turbulence make most modeling approaches prohibitively expensive; the resolution required to resolve all scales involved in turbulence is beyond what is computationally possible. The primary approach in such cases is to create numerical models to approximate unresolved phenomena. This section lists some commonly used computational models for turbulent flows.

Turbulence models can be classified based on computational expense, which corresponds to the range of scales that are modeled versus resolved (the more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost). If a majority or all of the turbulent scales are modeled rather than resolved, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.

In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.

Reynolds-averaged Navier–Stokes

External aerodynamics of the DrivAer model, computed using URANS (top) and DDES (bottom)
 
A simulation of aerodynamic package of a Porsche Cayman (987.2).

Reynolds-averaged Navier–Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.

RANS models can be divided into two broad approaches:

Boussinesq hypothesis
This method involves using an algebraic equation for the Reynolds stresses which includes determining the turbulent viscosity and, depending on the level of sophistication of the model, solving transport equations for determining the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), Mixing Length Model (Prandtl), and Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the k-ε model is a "Two Equation" model because two transport equations (one for k and one for ε) are solved. A sketch of the eddy-viscosity relation is given after this list.
Reynolds stress model (RSM)
This approach attempts to actually solve transport equations for the Reynolds stresses. This means introduction of several transport equations for all the Reynolds stresses and hence this approach is much more costly in CPU effort.
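For reference, the Boussinesq (eddy-viscosity) hypothesis named above is commonly written in the following form, with $\mu_t$ the turbulent viscosity, $k$ the turbulent kinetic energy, and overbars denoting Reynolds-averaged quantities (notation varies between texts):

$$-\rho\,\overline{u_i' u_j'} = \mu_t\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \frac{2}{3}\,\rho\,k\,\delta_{ij}, \qquad k = \tfrac{1}{2}\,\overline{u_k' u_k'}.$$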

Large eddy simulation

Volume rendering of a non-premixed swirl flame as simulated by LES.

Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
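Schematically, the filtering operation and the unclosed subgrid-scale stress it produces can be written as follows, with $G$ the filter kernel of width $\Delta$ (notation chosen here for illustration):

$$\bar{u}_i(\mathbf{x},t) = \int G(\mathbf{x}-\mathbf{x}';\Delta)\,u_i(\mathbf{x}',t)\,d\mathbf{x}', \qquad \tau_{ij}^{\mathrm{sgs}} = \overline{u_i u_j} - \bar{u}_i\bar{u}_j,$$

and it is $\tau_{ij}^{\mathrm{sgs}}$ that the subgrid scale model must approximate.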

Detached eddy simulation

Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001) by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart–Allmaras model based DES acts as LES with a wall model, DES based on other models (like two-equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solutions.

Direct numerical simulation

Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to $Re^{3}$. DNS is intractable for flows with complex geometries or flow configurations.
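As a rough back-of-the-envelope illustration of that scaling (using the common order-of-magnitude estimates of roughly $Re^{9/4}$ grid points and total work growing like $Re^{3}$), the snippet below tabulates how quickly DNS becomes intractable:

```python
# Back-of-the-envelope DNS cost scaling: grid points ~ Re^(9/4), total
# floating-point work ~ Re^3 (standard order-of-magnitude estimates only).
for Re in (1e3, 1e4, 1e5):
    grid_points = Re ** (9 / 4)
    relative_cost = Re ** 3
    print(f"Re = {Re:.0e}:  ~{grid_points:.1e} grid points, "
          f"relative cost ~{relative_cost:.1e}")
```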

Coherent vortex simulation

The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and an incoherent part, which is the random background flow. This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but it differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the same energy spectrum as the total flow and corresponded to coherent structures (vortex tubes), while the incoherent part of the flow was composed of homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter completely eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the coherent portion of the SFS flow field.

PDF methods

Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity, $f_V(\mathbf{v};\mathbf{x},t)$, where $f_V(\mathbf{v};\mathbf{x},t)\,d\mathbf{v}$ gives the probability of the velocity at point $\mathbf{x}$ being between $\mathbf{v}$ and $\mathbf{v}+d\mathbf{v}$. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.

Vortex method

The vortex method is a grid-free technique for the simulation of turbulent flows. It uses vortices as the computational elements, mimicking the physical structures in turbulence. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). A breakthrough came in the late 1980s with the development of the fast multipole method (FMM), an algorithm by V. Rokhlin (Yale) and L. Greengard (Courant Institute). This breakthrough paved the way to practical computation of the velocities from the vortex elements and is the basis of successful algorithms.

Software based on the vortex method offers a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of the problem geometry and setting of boundary and initial conditions. Among the significant advantages of this modern technology:

  • It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES.
  • All problems are treated identically. No modeling or calibration inputs are required.
  • Time-series simulations, which are crucial for correct analysis of acoustics, are possible.
  • The small scale and large scale are accurately simulated at the same time.

Vorticity confinement method

The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.

Linear eddy model

The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.

Two-phase flow

Simulation of a bubble swarm using the volume of fluid method

The modeling of two-phase flow is still under development. Different methods have been proposed, including the Volume of fluid method, the level-set method and front tracking. These methods often involve a tradeoff between maintaining a sharp interface or conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface. Lagrangian multiphase models, which are used for dispersed media, are based on solving the Lagrangian equation of motion for the dispersed phase.

Solution algorithms

Discretization in space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
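The sketch below illustrates this pattern on a stand-in problem: a sparse 2-D Laplacian (playing the role of a linearized flow operator) is preconditioned with an incomplete LU factorization and solved with GMRES from SciPy. The matrix and right-hand side are assumptions for illustration, not a real CFD system.

```python
# Hedged sketch of the iterative-solution step: build a small sparse system,
# precondition it with an incomplete LU factorization, and solve with GMRES.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                            # grid points per direction
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()       # standard 2-D Laplacian stencil
b = np.ones(A.shape[0])                           # arbitrary right-hand side

ilu = spla.spilu(A)                               # incomplete LU preconditioner
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)                   # preconditioned Krylov solve
print("converged:", info == 0,
      " residual norm:", np.linalg.norm(b - A @ x))
```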

Multigrid has the advantage of asymptotically optimal performance on many problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require many iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.

For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.

Unsteady aerodynamics

CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code to model oscillating airfoils based on transonic small perturbation theory by Ballhaus and associates. It uses a Murman-Cole switch algorithm for modeling the moving shock waves. Later it was extended to 3-D with the use of a rotated difference scheme by AFWAL/Boeing, resulting in LTRAN3.

Biomedical engineering

Simulation of blood flow in a human aorta

CFD investigations are used to clarify the characteristics of aortic flow in detail beyond the capabilities of experimental measurements. To analyze these conditions, CAD models of the human vascular system are extracted employing modern imaging techniques such as MRI or computed tomography. A 3D model is reconstructed from this data and the fluid flow can be computed. Blood properties such as density and viscosity, and realistic boundary conditions (e.g. systemic pressure), have to be taken into consideration. This makes it possible to analyze and optimize the flow in the cardiovascular system for different applications.

CPU versus GPU

Traditionally, CFD simulations are performed on CPUs. In a more recent trend, simulations are also performed on GPUs. These typically contain slower but far more numerous processors. For CFD algorithms that feature good parallel performance (i.e. good speed-up from adding more cores), this can greatly reduce simulation times. Fluid-implicit particle and lattice-Boltzmann methods are typical examples of codes that scale well on GPUs.

UniverseMachine


The UniverseMachine (also known as the Universe Machine) is a project consisting of an ongoing series of astrophysical supercomputer simulations of various models of possible universes, created by astronomer Peter Behroozi and his research team at the Steward Observatory and the University of Arizona. In this project, numerous universes with different physical characteristics may be simulated in order to develop insights into the possible beginning, and later evolution, of our current universe. One of the major objectives of the project is to better understand the role of dark matter in the development of the universe. According to Behroozi, "On the computer, we can create many different universes and compare them to the actual one, and that lets us infer which rules lead to the one we see."

Besides lead investigator Behroozi, research team members include astronomer Charlie Conroy of Harvard University, physicist Andrew Hearin of the Argonne National Laboratory and physicist Risa Wechsler of Stanford University. Support funding for the project is provided by NASA, the National Science Foundation and the Munich Institute for Astro- and Particle Physics.

Description

Besides using computers and related resources at the NASA Ames Research Center and the Leibniz-Rechenzentrum in Garching, Germany, the research team used the High-Performance Computing cluster at the University of Arizona. Two thousand processors processed the data simultaneously over three weeks. In this way, the research team generated over 8 million universes and at least 9.6×10¹³ galaxies. The UniverseMachine program continuously produced millions of universes, each containing 12 million galaxies and each permitted to develop from 400 million years after the Big Bang up to the present day.

According to team member Wechsler, "The really cool thing about this study is that we can use all the data we have about galaxy evolution — the numbers of galaxies, how many stars they have and how they form those stars — and put that together into a comprehensive picture of the last 13 billion years of the universe." Wechsler further commented, "For me, the most exciting thing is that we now have a model where we can start to ask all of these questions in a framework that works ... We have a model that is inexpensive enough computationally, that we can essentially calculate an entire universe in about a second. Then we can afford to do that millions of times and explore all of the parameter space."

Results

One of the results of the study suggests that denser dark matter in the early universe didn't seem to negatively impact star formation rates as thought initially. According to the studies, galaxies of a given size were more likely to form stars much longer and at a high rate. The researchers expect to extend their studies with the project to include how often stars expire in supernovae, how dark matter may affect the shape of galaxies and eventually, by at least providing a better understanding of the workings of the universe, how life originated.

 

Computer simulation


A 48-hour computer simulation of Typhoon Mawar using the Weather Research and Forecasting model
 
Process of building a computer model, and the interplay between experiment, simulation, and theory.

Computer simulation is the process of mathematical modelling, performed on a computer, which is designed to predict the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.

Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program. Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005; a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.

Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.

Simulation versus model

A computer model is the algorithms and equations used to capture the behavior of the system being modeled. By contrast, computer simulation is the actual running of the program that contains these equations or algorithms. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model (or a simulator)", and then either "run the model" or equivalently "run a simulation".

History

Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.

Data preparation

The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).

Input sources also vary widely:

  • Sensors and other physical devices connected to the model;
  • Control surfaces used to direct the progress of the simulation in some way;
  • Current or historical data entered by hand;
  • Values extracted as a by-product from other processes;
  • Values output for the purpose by other simulations, models, or processes.

Lastly, the time at which data is available varies:

  • "invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
  • data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
  • data can be provided during the simulation run, for example by a sensor network.

Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others.

Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, what is much harder is knowing what the accuracy (compared to measurement resolution and precision) of the values is. Often the values are expressed as "error bars", a minimum and maximum deviation from the value range within which the true value is expected to lie. Because digital computer mathematics is not perfect, rounding and truncation errors multiply this error, so it is useful to perform an "error analysis" to confirm that values output by the simulation will still be usefully accurate.

Types

Computer models can be classified according to several independent pairs of attributes, including:

  • Stochastic or deterministic (and as a special case of deterministic, chaotic) – see external links below for examples of stochastic vs. deterministic simulations
  • Steady-state or dynamic
  • Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
  • Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs), or dynamics simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
  • Local or distributed.

Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:

  • Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
  • If the underlying graph is not a regular grid, the model may belong to the meshfree method class.

Steady-state simulations use equations that define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.

  • Dynamic simulations model changes in a system in response to (usually changing) input signals.
  • Stochastic models use random number generators to model chance or random events;
  • A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events. A minimal sketch of such an event queue is given after this list.
  • A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
  • A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next.
  • Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
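As a minimal illustration of the discrete event simulation described above, the sketch below processes a single-server queue from an event list ordered by simulated time; the arrival and service times are invented for the example.

```python
# Minimal discrete event simulation sketch: a single-server queue driven by an
# event list ordered by simulated time (heapq used as the priority queue).
import heapq

events = []                                   # (time, kind) pairs
for t in (1.0, 2.5, 2.7, 6.0):                # schedule some arrivals
    heapq.heappush(events, (t, "arrival"))

server_busy = False
waiting = 0
service_time = 2.0

while events:
    time, kind = heapq.heappop(events)        # always process the earliest event
    if kind == "arrival":
        if server_busy:
            waiting += 1
        else:
            server_busy = True
            heapq.heappush(events, (time + service_time, "departure"))
    else:                                     # departure
        if waiting:
            waiting -= 1
            heapq.heappush(events, (time + service_time, "departure"))
        else:
            server_busy = False
    print(f"t={time:4.1f}  {kind:9s}  queue={waiting}  busy={server_busy}")
```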

Visualization

Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.

Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.

Computer simulation in science

Computer simulation of the process of osmosis

Computer simulations in science are generally derived from an underlying mathematical description of the system being studied.

Specific examples of computer simulations follow:

  • statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
  • agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
  • time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
  • computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
  • computer simulation using molecular modeling for drug discovery.
  • computer simulation to model viral infection in mammalian cells.
  • computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.
  • Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
  • An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.

Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.

In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation. Of course, similar to any other scientific method, replication is an important part of computational modeling.

Computer simulation in practical contexts

Computer simulations are used in a wide variety of practical contexts, a few of which are described below.

The reliability and the trust people put in computer simulations depends on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is the reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, this is a special point of attention in stochastic simulations, where the random numbers should actually be pseudo-random numbers generated from a fixed seed. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.

Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.

Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.

In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.
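
The toy Python sketch below illustrates the idea of simulating execution rather than running natively: an interpreter for an invented miniature instruction set counts executed instructions, records a trace, and flags an out-of-bounds store that real hardware might accept silently. The machine and its instruction set are hypothetical, chosen only for illustration.

# Toy sketch of simulating program execution for debugging: a tiny "machine"
# with two registers and a bounded memory, whose interpreter counts executed
# instructions and raises on out-of-bounds stores.

def simulate(program, memory_size=16):
    memory = [0] * memory_size
    registers = {"a": 0, "b": 0}
    executed = 0
    trace = []
    for op, *args in program:
        executed += 1
        trace.append((op, args))                  # instruction trace
        if op == "set":                           # set <reg> <value>
            registers[args[0]] = args[1]
        elif op == "add":                         # add <reg> <value>
            registers[args[0]] += args[1]
        elif op == "store":                       # store <reg> <address>
            addr = args[1]
            if not 0 <= addr < memory_size:       # simulated bounds check
                raise IndexError(f"out-of-bounds store to address {addr}")
            memory[addr] = registers[args[0]]
    return executed, trace, memory

count, trace, mem = simulate([("set", "a", 7), ("add", "a", 3), ("store", "a", 2)])
print(count, "instructions executed; memory:", mem)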

Pitfalls

Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
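
A minimal Monte Carlo sketch of this point, with entirely hypothetical parameter ranges, is shown below: when one input (here a net ratio) is known only roughly, the spread of the simulated output makes clear that quoting the result to four significant figures would be misleading.

import random

# Monte Carlo sketch of how input precision limits output precision.
# The volumetric formula and all parameter ranges are hypothetical.

rng = random.Random(0)
estimates = []
for _ in range(100_000):
    area      = rng.uniform(9.0, 11.0)      # km^2, fairly well known
    thickness = rng.uniform(45.0, 55.0)     # m, fairly well known
    net_ratio = rng.uniform(0.25, 0.35)     # known to roughly one significant figure
    estimates.append(area * thickness * net_ratio)

estimates.sort()
p10, p50, p90 = (estimates[int(len(estimates) * q)] for q in (0.10, 0.50, 0.90))
# The 10th-90th percentile spread is tens of percent wide, so reporting the
# median to four significant figures would overstate the real precision.
print(f"P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f}")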

Model calibration techniques

The following three steps should be used to produce accurate simulation models: calibration, verification, and validation. Computer simulations are good at portraying and comparing theoretical scenarios, but to accurately model an actual case study the model has to match what is actually happening in the system being studied. A base model should be created and calibrated so that it matches the area being studied. The calibrated model should then be verified to ensure that the model is operating as expected based on the inputs. Once the model has been verified, the final step is to validate the model by comparing its outputs to historical data from the study area. This can be done by using statistical techniques and ensuring an adequate R-squared value. Unless these techniques are employed, the simulation model will produce inaccurate results and will not be a useful prediction tool.

Model calibration is achieved by adjusting any available parameters in order to adjust how the model operates and simulates the process. For example, in traffic simulation, typical parameters include look-ahead distance, car-following sensitivity, discharge headway, and start-up lost time. These parameters influence driver behavior such as when and how long it takes a driver to change lanes, how much distance a driver leaves between his car and the car in front of it, and how quickly a driver starts to accelerate through an intersection. Adjusting these parameters has a direct effect on the amount of traffic volume that can traverse through the modeled roadway network by making the drivers more or less aggressive. These are examples of calibration parameters that can be fine-tuned to match characteristics observed in the field at the study location. Most traffic models have typical default values but they may need to be adjusted to better match the driver behavior at the specific location being studied.
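
As a rough illustration of calibration, the Python sketch below adjusts a single hypothetical parameter (a discharge headway, in seconds) so that a deliberately simple stand-in for a traffic model best matches an observed field count. A real calibration would run the actual simulator once per candidate value and usually tune several parameters at once.

# Calibration sketch: grid search over one hypothetical parameter so that a
# toy throughput model matches a field count. The "model" is a deliberately
# simple stand-in for a real traffic simulator.

observed_throughput = 1650          # vehicles per hour counted in the field

def toy_model(discharge_headway_s: float) -> float:
    # crude saturation-flow relationship: shorter headways -> more vehicles
    return 3600.0 / discharge_headway_s

candidates = [round(1.8 + 0.05 * i, 2) for i in range(13)]   # 1.80 .. 2.40 s
best = min(candidates, key=lambda h: abs(toy_model(h) - observed_throughput))
print(f"calibrated headway = {best} s, modelled throughput = {toy_model(best):.0f} veh/h")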

Model verification is achieved by obtaining output data from the model and comparing them to what is expected from the input data. For example, in traffic simulation, traffic volume can be verified to ensure that actual volume throughput in the model is reasonably close to traffic volumes input into the model. Ten percent is a typical threshold used in traffic simulation to determine if output volumes are reasonably close to input volumes. Simulation models handle model inputs in different ways so traffic that enters the network, for example, may or may not reach its desired destination. Additionally, traffic that wants to enter the network may not be able to, if congestion exists. This is why model verification is a very important part of the modeling process.
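
The sketch below applies the ten-percent criterion described above to a few invented link volumes; it only illustrates the bookkeeping, not the output of any particular traffic package.

# Verification sketch: compare volumes fed into a model with the volumes it
# actually served, using a 10% threshold. All numbers are illustrative.

input_volumes  = {"link_A": 1200, "link_B": 800, "link_C": 450}
output_volumes = {"link_A": 1125, "link_B": 860, "link_C": 430}

for link, vin in input_volumes.items():
    vout = output_volumes[link]
    deviation = abs(vout - vin) / vin
    status = "OK" if deviation <= 0.10 else "CHECK"
    print(f"{link}: input {vin}, output {vout}, deviation {deviation:.1%} -> {status}")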

The final step is to validate the model by comparing the results with what is expected based on historical data from the study area. Ideally, the model should produce similar results to what has happened historically. This is typically verified by nothing more than quoting the R-squared statistic from the fit. This statistic measures the fraction of variability that is accounted for by the model. A high R-squared value does not necessarily mean the model fits the data well. Another tool used to validate models is graphical residual analysis. If model output values drastically differ from historical values, it probably means there is an error in the model. Before using the model as a base to produce additional models, it is important to verify it for different scenarios to ensure that each one is accurate. If the outputs do not reasonably match historic values during the validation process, the model should be reviewed and updated to produce results more in line with expectations. It is an iterative process that helps to produce more realistic models.
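
A small Python sketch of this validation step, using invented paired values, computes R-squared and then prints the residuals so that patterns can be inspected rather than relying on the single statistic.

# Validation sketch: compare model output against historical observations,
# report R-squared, and inspect residuals. The paired values are illustrative.

historical = [980, 1040, 1110, 1205, 1290, 1360]
modelled   = [1010, 1020, 1150, 1180, 1320, 1330]

mean_obs = sum(historical) / len(historical)
ss_res = sum((o - m) ** 2 for o, m in zip(historical, modelled))
ss_tot = sum((o - mean_obs) ** 2 for o in historical)
r_squared = 1 - ss_res / ss_tot

residuals = [o - m for o, m in zip(historical, modelled)]
print(f"R^2 = {r_squared:.3f}")
print("residuals:", residuals)   # look for patterns, not just a high R^2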

Validating traffic simulation models requires comparing traffic estimated by the model to observed traffic on the roadway and transit systems. Initial comparisons are for trip interchanges between quadrants, sectors, or other large areas of interest. The next step is to compare traffic estimated by the models to traffic counts, including transit ridership, crossing contrived barriers in the study area. These are typically called screenlines, cutlines, and cordon lines and may be imaginary or actual physical barriers. Cordon lines surround particular areas such as a city's central business district or other major activity centers. Transit ridership estimates are commonly validated by comparing them to actual patronage crossing cordon lines around the central business district.

Three sources of error can cause weak correlation during calibration: input error, model error, and parameter error. In general, input error and parameter error can be adjusted easily by the user. Model error, however, is caused by the methodology used in the model and may not be as easy to fix. Simulation models are typically built using several different modeling theories that can produce conflicting results. Some models are more generalized while others are more detailed. If model error occurs as a result, it may be necessary to adjust the model methodology to make the results more consistent.

These steps are necessary to ensure that simulation models function properly and produce realistic results. Simulation models can be used as a tool to verify engineering theories, but they are only valid if calibrated properly. Once satisfactory estimates of the parameters for all models have been obtained, the models must be checked to ensure that they adequately perform the intended functions. The validation process establishes the credibility of the model by demonstrating its ability to replicate reality. The importance of model validation underscores the need for careful planning, thoroughness, and accuracy in the input data collection program undertaken for this purpose. Efforts should be made to ensure collected data is consistent with expected values. For example, in traffic analysis it is typical for a traffic engineer to perform a site visit to verify traffic counts and become familiar with traffic patterns in the area. The resulting models and forecasts will be no better than the data used for model estimation and validation.

 

The Blind Watchmaker

From Wikipedia, the free encyclopedia

First edition cover

Author: Richard Dawkins
Country: United Kingdom
Language: English
Subject: Evolutionary biology
Publisher: Norton & Company, Inc.
Publication date: 1986
Media type: Print
ISBN: 0-393-31570-3
OCLC: 35648431
Dewey Decimal: 576.8/2 21
LC Class: QH366.2 .D37 1996
Preceded by: The Extended Phenotype
Followed by: River Out of Eden

The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design is a 1986 book by Richard Dawkins, in which the author presents an explanation of, and argument for, the theory of evolution by means of natural selection. He also presents arguments to refute certain criticisms made of his first book, The Selfish Gene. (Both books espouse the gene-centric view of evolution.) An unabridged audiobook edition was released in 2011, narrated by Richard Dawkins and Lalla Ward.

Overview

In his choice of the title for this book, Dawkins refers to the watchmaker analogy made famous by William Paley in his 1802 book Natural Theology. Paley, writing long before Charles Darwin published On the Origin of Species in 1859, held that the complexity of living organisms was evidence of the existence of a divine creator, drawing a parallel with the way in which the existence of a watch compels belief in an intelligent watchmaker. Dawkins, contrasting the planned, purposeful nature of human design with the workings of natural selection, therefore described evolutionary processes as analogous to a blind watchmaker.

To dispel the idea that complexity cannot arise without the intervention of a "creator", Dawkins uses the example of the eye. Beginning with a simple organism capable only of distinguishing light from dark in the crudest fashion, he takes the reader through a series of minor modifications, which build in sophistication until we arrive at the elegant and complex mammalian eye. In making this journey, he points to several creatures whose various seeing apparatuses are, whilst still useful, living examples of intermediate levels of complexity.

In developing his argument that natural selection can explain the complex adaptations of organisms, Dawkins' first concern is to illustrate the difference between the potential for the development of complexity as a result of pure randomness, as opposed to that of randomness coupled with cumulative selection. He demonstrates this by the example of the weasel program. Dawkins then describes his experiences with a more sophisticated computer model of artificial selection implemented in a program also called The Blind Watchmaker, which was sold separately as a teaching aid.
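
The weasel program is short enough to sketch in Python; the version below keeps only the idea of cumulative selection (mutate many copies, keep the best, repeat), with a population size and mutation rate chosen arbitrarily rather than taken from Dawkins.

import random, string

# Sketch of cumulative selection: random mutation alone would almost never
# hit the target phrase, but keeping the best candidate of each generation
# converges quickly.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
rng = random.Random(0)

def mutate(parent: str, rate: float = 0.05) -> str:
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in parent)

def score(candidate: str) -> int:
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(offspring, key=score)          # cumulative selection step
print(f"reached the target in {generation} generations")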

The program displayed a two-dimensional shape (a "biomorph") made up of straight black lines, the length, position, and angle of which were defined by a simple set of rules and instructions (analogous to a genome). Adding new lines (or removing them) based on these rules offered a discrete set of possible new shapes (mutations), which were displayed on screen so that the user could choose between them. The chosen mutation would then be the basis for another generation of biomorph mutants to be chosen from, and so on. Thus, the user, by selection, could steer the evolution of biomorphs. This process often produced images which were reminiscent of real organisms, for instance beetles, bats, or trees. Dawkins speculated that the unnatural selection role played by the user in this program could be replaced by a more natural agent if, for example, colourful biomorphs could be selected by butterflies or other insects, via a touch-sensitive display set up in a garden.
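
Stripped of its graphics, the selection loop can be sketched in Python as below; the number of genes, the mutation step, and the automatic stand-in for the user's aesthetic choice are all invented for illustration.

import random

# Sketch of the biomorph-style selection loop, without drawing anything: a
# "genome" is a short list of numbers, each generation offers several
# single-gene mutants, and an external chooser picks the parent of the next
# generation. A human would click on a shape; here a simple rule stands in.

rng = random.Random(1)

def offspring(genome):
    mutants = []
    for _ in range(8):                      # eight mutants offered per generation
        child = list(genome)
        i = rng.randrange(len(child))
        child[i] += rng.choice((-1, +1))    # mutate one gene by one step
        mutants.append(child)
    return mutants

genome = [0] * 9                            # nine numeric "genes"
for generation in range(20):
    candidates = offspring(genome)
    # stand-in for the user's aesthetic choice: prefer the mutant whose genes
    # sum to the largest value
    genome = max(candidates, key=sum)

print("genome after 20 generations of selection:", genome)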

"Biomorph" that randomly evolves following changes of several numeric "genes", determining its shape. The gene values are given as bars on the top.

In an appendix to a later edition of the book (1996), Dawkins explains how his experiences with computer models led him to a greater appreciation of the role of embryological constraints on natural selection. In particular, he recognised that certain patterns of embryological development could lead to the success of a related group of species in filling varied ecological niches, though he emphasised that this should not be confused with group selection. He dubbed this insight the evolution of evolvability.

After arguing that evolution is capable of explaining the origin of complexity, near the end of the book Dawkins uses this to argue against the existence of God: "a deity capable of engineering all the organized complexity in the world, either instantaneously or by guiding evolution ... must already have been vastly complex in the first place ..." He calls this "postulating organized complexity without offering an explanation."

In the preface, Dawkins states that he wrote the book "to persuade the reader, not just that the Darwinian world-view happens to be true, but that it is the only known theory that could, in principle, solve the mystery of our existence."

Reception

Tim Radford, writing in The Guardian, noted that despite Dawkins's "combative secular humanism", he had written "a patient, often beautiful book from 1986 that begins in a generous mood and sustains its generosity to the end." 30 years on, people still read the book, Radford argues, because it is "one of the best books ever to address, patiently and persuasively, the question that has baffled bishops and disconcerted dissenters alike: how did nature achieve its astonishing complexity and variety?"[1]

The philosopher and historian of biology, Michael T. Ghiselin, writing in The New York Times, comments that Dawkins "succeeds admirably in showing how natural selection allows biologists to dispense with such notions as purpose and design". He notes that analogies with computer programs have their limitations, but are still useful. Ghiselin observes that Dawkins is "not content with rebutting creationists" but goes on to press home his arguments against alternative theories to neo-Darwinism. He thinks the book fills the need to know more about evolution "that others [creationists] would conceal from them." He concludes that "Readers who are not outraged will be delighted."

The American philosopher of religion Dallas Willard, reflecting on the book, denies the connection of evolution to the validity of arguments from design to God: whereas, he asserts, Dawkins seems to consider the arguments to rest entirely on that basis. Willard argues that Chapter 6, "Origins and Miracles", attempts the "hard task" of making not just a blind watchmaker but "a blind watchmaker watchmaker", which he comments would have made an "honest" title for the book. He notes that Dawkins demolishes several "weak" arguments, such as the argument from personal incredulity. He denies that Dawkins's computer "exercises" and arguments from gradual change show that complex forms of life could have evolved. Willard concludes by arguing that in writing this book, Dawkins is not functioning as a scientist "in the line of Darwin", but as "just a naturalist metaphysician".

Influence

The engineer Theo Jansen read the book in 1986 and became fascinated by evolution and natural selection. Since 1990 he has been building kinetic sculptures, the Strandbeest, capable of walking when impelled by the wind.

The journalist Dick Pountain described Sean B. Carroll's 2005 account of evolutionary developmental biology, Endless Forms Most Beautiful, as the most important popular science book since The Blind Watchmaker, "and in effect a sequel [to it]."

 

Classical radicalism

From Wikipedia, the free encyclopedia