
Wednesday, May 11, 2022

Recurrent neural network

From Wikipedia, the free encyclopedia

A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed or undirected graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.

The term "recurrent neural network" is used to refer to the class of networks with an infinite impulse response, whereas "convolutional neural network" refers to the class of networks with a finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.

Both finite impulse and infinite impulse recurrent networks can have additional stored states, and the storage can be under direct control of the neural network. The storage can also be replaced by another network or graph if it incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory, and are part of long short-term memory networks (LSTMs) and gated recurrent units. This architecture is also called a feedback neural network (FNN).

History

Recurrent neural networks were based on David Rumelhart's work in 1986. Hopfield networks – a special kind of RNN – were (re-)discovered by John Hopfield in 1982. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.

LSTM

Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1997 and set accuracy records in multiple application domains.

Around 2007, LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications. In 2009, a Connectionist Temporal Classification (CTC)-trained LSTM network was the first RNN to win pattern recognition contests when it won several competitions in connected handwriting recognition. In 2014, the Chinese company Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition benchmark without using any traditional speech processing methods.

LSTM also improved large-vocabulary speech recognition and text-to-speech synthesis and was used in Google Android. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM.

LSTM broke records for improved machine translation, language modeling and multilingual language processing. LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.

Architectures

RNNs come in many variants.

Fully recurrent

Compressed (left) and unfolded (right) basic recurrent neural network.

Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. This is the most general neural network topology because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons. The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to be layers are, in fact, different steps in time of the same fully recurrent neural network. The left-most item in the illustration shows the recurrent connections as the arc labeled 'v'. It is "unfolded" in time to produce the appearance of layers.
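
For illustration, a minimal NumPy sketch of the unfolding described above might look as follows; the sizes, the tanh nonlinearity, and names such as W_in and W_rec are illustrative assumptions, not part of the article.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 3, 5                                 # illustrative sizes
W_in = rng.normal(size=(n_hidden, n_in)) * 0.1        # input-to-hidden weights
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1   # recurrent weights (the arc 'v')
b = np.zeros(n_hidden)

def step(h_prev, x_t):
    """One time step of the fully recurrent network."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

# "Unfolding" in time: the same weights are reused at every step,
# so the apparent "layers" are really successive time steps.
xs = rng.normal(size=(7, n_in))        # a length-7 input sequence
h = np.zeros(n_hidden)
states = []
for x_t in xs:
    h = step(h, x_t)
    states.append(h)
print(np.array(states).shape)          # (7, 5): one hidden state per time step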

Elman networks and Jordan networks

The Elman network

An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units with a fixed weight of one. At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform such tasks as sequence-prediction that are beyond the power of a standard multilayer perceptron.

Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also referred to as the state layer. They have a recurrent connection to themselves.

Elman and Jordan networks are also known as "Simple recurrent networks" (SRN).

Elman network

$$h_t = \sigma_h(W_h x_t + U_h h_{t-1} + b_h)$$
$$y_t = \sigma_y(W_y h_t + b_y)$$

Jordan network

$$h_t = \sigma_h(W_h x_t + U_h y_{t-1} + b_h)$$
$$y_t = \sigma_y(W_y h_t + b_y)$$

Variables and functions

  • $x_t$: input vector
  • $h_t$: hidden layer vector
  • $y_t$: output vector
  • $W$, $U$ and $b$: parameter matrices and vector
  • $\sigma_h$ and $\sigma_y$: activation functions
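
A minimal sketch of the two update rules above, assuming tanh and identity activations and illustrative sizes; only the source of the context units differs between the two networks.

import numpy as np

rng = np.random.default_rng(1)
n_x, n_h, n_y = 4, 6, 2                          # illustrative layer sizes
W_h = rng.normal(size=(n_h, n_x)) * 0.1          # input -> hidden
U_h_elman = rng.normal(size=(n_h, n_h)) * 0.1    # context (previous hidden) -> hidden
U_h_jordan = rng.normal(size=(n_h, n_y)) * 0.1   # context (previous output) -> hidden
W_y = rng.normal(size=(n_y, n_h)) * 0.1          # hidden -> output
b_h, b_y = np.zeros(n_h), np.zeros(n_y)

def elman_step(x_t, h_prev):
    h_t = np.tanh(W_h @ x_t + U_h_elman @ h_prev + b_h)   # context units hold h_{t-1}
    y_t = W_y @ h_t + b_y
    return h_t, y_t

def jordan_step(x_t, y_prev):
    h_t = np.tanh(W_h @ x_t + U_h_jordan @ y_prev + b_h)  # context units hold y_{t-1}
    y_t = W_y @ h_t + b_y
    return h_t, y_t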

Hopfield

The Hopfield network is an RNN in which all connections are symmetric. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as a robust content-addressable memory, resistant to connection alteration.
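
A rough sketch of Hebbian (outer-product) storage and content-addressable recall, assuming bipolar patterns and asynchronous sign updates; pattern values and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(10)

def hopfield_train(patterns):
    """Hebbian (outer-product) storage of bipolar (+1/-1) patterns."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections; weights are symmetric
    return W / len(patterns)

def hopfield_recall(W, probe, n_iters=10):
    """Asynchronous updates; converges because W is symmetric with zero diagonal."""
    s = probe.copy()
    for _ in range(n_iters):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = hopfield_train(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                       # corrupt one bit
print(hopfield_recall(W, noisy))     # should recover the first stored pattern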

Bidirectional associative memory

Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bi-directionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications.

A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.

Echo state

The echo state network (ESN) has a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series. A variant for spiking neurons is known as a liquid state machine.
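
A compact sketch of the echo-state idea, assuming a tanh reservoir, a spectral radius below one, and a ridge-regression readout; only W_out is trained, and all names and constants are illustrative.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 1, 200

# Fixed random input and reservoir weights (never trained).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res[rng.uniform(size=W_res.shape) > 0.1] = 0.0          # sparse connectivity
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))          # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x)
    return np.array(states)

# Teach the readout to predict the next value of a sine wave.
t = np.arange(0, 40, 0.1)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # only trained part
print(float(np.mean((X @ W_out - y) ** 2)))   # small training error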

Independently RNN (IndRNN)

The Independently recurrent neural network (IndRNN) addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with the non-saturated nonlinear functions such as ReLU. Using skip connections, deep networks can be trained.
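
A minimal sketch of the IndRNN update, assuming a ReLU nonlinearity: the recurrent weight is a per-neuron vector rather than a full matrix, so each neuron sees only its own past state. Sizes and names are illustrative.

import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 4, 8
W = rng.normal(size=(n_hidden, n_in)) * 0.1   # input weights, as in a normal RNN
u = rng.uniform(0, 1, size=n_hidden)          # per-neuron recurrent weight: each neuron
                                              # only receives its own previous state
b = np.zeros(n_hidden)

def indrnn_step(x_t, h_prev):
    # Element-wise (Hadamard) recurrence instead of a full recurrent matrix;
    # ReLU is usable because bounding |u| keeps the recurrence from exploding.
    return np.maximum(0.0, W @ x_t + u * h_prev + b)

h = np.zeros(n_hidden)
for x_t in rng.normal(size=(10, n_in)):
    h = indrnn_step(x_t, h)
print(h.shape)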

Recursive

A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing. The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree.

Neural history compressor

The neural history compressor is an unsupervised stack of RNNs. At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.

The system effectively minimises the description length or the negative logarithm of the probability of the data. Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level). Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.

A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.

Second order RNNs

Second-order RNNs use higher-order weights $w_{ijk}$ instead of the standard $w_{ij}$ weights, and states can be a product. This allows a direct mapping to a finite-state machine in training, stability, and representation. Long short-term memory is an example of this but has no such formal mappings or proof of stability.

Long short-term memory

Long short-term memory unit

Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backwards through unlimited numbers of virtual layers unfolded in time. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved. LSTM works even given long delays between significant events and can handle signals that mix low and high frequency components.
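
For concreteness, a minimal NumPy sketch of one LSTM step with forget, input and output gates; the gate weight names, concatenated-input formulation, and sizes are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
n_in, n_hidden = 4, 8

# One weight matrix per gate; [x_t, h_{t-1}] is the concatenated input.
def make_w():
    return rng.normal(size=(n_hidden, n_in + n_hidden)) * 0.1, np.zeros(n_hidden)

(W_f, b_f), (W_i, b_i), (W_o, b_o), (W_c, b_c) = make_w(), make_w(), make_w(), make_w()

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])
    f = sigmoid(W_f @ z + b_f)          # forget gate: what to keep of the old cell state
    i = sigmoid(W_i @ z + b_i)          # input gate: what to write
    o = sigmoid(W_o @ z + b_o)          # output gate: what to expose
    c_tilde = np.tanh(W_c @ z + b_c)    # candidate cell update
    c_t = f * c_prev + i * c_tilde      # additive cell update keeps gradients alive
    h_t = o * np.tanh(c_t)
    return h_t, c_t

h = c = np.zeros(n_hidden)
for x_t in rng.normal(size=(20, n_in)):
    h, c = lstm_step(x_t, h, c)
print(h.shape, c.shape)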

Many applications use stacks of LSTM RNNs and train them by Connectionist Temporal Classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.

LSTM can learn to recognize context-sensitive languages unlike previous models based on hidden Markov models (HMM) and similar concepts.

Gated recurrent unit

Gated recurrent unit

Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks introduced in 2014. They are used in the full form and several simplified variants. Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory. They have fewer parameters than LSTM, as they lack an output gate.
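
A matching sketch of one GRU step, assuming the common formulation with update and reset gates and no separate cell state or output gate; names and sizes are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(5)
n_in, n_hidden = 4, 8

def make_w():
    return rng.normal(size=(n_hidden, n_in + n_hidden)) * 0.1, np.zeros(n_hidden)

(W_z, b_z), (W_r, b_r), (W_h, b_h) = make_w(), make_w(), make_w()

def gru_step(x_t, h_prev):
    z_in = np.concatenate([x_t, h_prev])
    z = sigmoid(W_z @ z_in + b_z)                       # update gate
    r = sigmoid(W_r @ z_in + b_r)                       # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([x_t, r * h_prev]) + b_h)
    return (1 - z) * h_prev + z * h_tilde               # interpolate old and new state

h = np.zeros(n_hidden)
for x_t in rng.normal(size=(20, n_in)):
    h = gru_step(x_t, h)
print(h.shape)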

Bi-directional

Bi-directional RNNs use a finite sequence to predict or label each element of the sequence based on the element's past and future contexts. This is done by concatenating the outputs of two RNNs, one processing the sequence from left to right, the other one from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique has been proven to be especially useful when combined with LSTM RNNs.

Continuous-time

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs.

For a neuron $i$ in the network with activation $y_i$, the rate of change of activation is given by:

$$\tau_i \dot{y}_i = -y_i + \sum_{j=1}^{n} w_{ji}\, \sigma(y_j - \Theta_j) + I_i(t)$$

Where:

  • $\tau_i$ : Time constant of postsynaptic node
  • $y_i$ : Activation of postsynaptic node
  • $\dot{y}_i$ : Rate of change of activation of postsynaptic node
  • $w_{ji}$ : Weight of connection from pre- to postsynaptic node
  • $\sigma(x)$ : Sigmoid of x, e.g. $\sigma(x) = 1/(1 + e^{-x})$
  • $y_j$ : Activation of presynaptic node
  • $\Theta_j$ : Bias of presynaptic node
  • $I_i(t)$ : Input (if any) to node
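
A small sketch that integrates this ODE system with the forward Euler method; the step size, weights, and the input pulse are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(6)
n = 5
tau = rng.uniform(0.5, 2.0, size=n)        # time constants
w = rng.normal(size=(n, n))                # w[j, i]: weight from presynaptic j to postsynaptic i
theta = rng.normal(size=n)                 # presynaptic biases
sigma = lambda x: 1.0 / (1.0 + np.exp(-x))

def dydt(y, I):
    # tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i
    return (-y + w.T @ sigma(y - theta) + I) / tau

# Forward-Euler integration of the ODE system.
dt, y = 0.01, np.zeros(n)
for step in range(1000):
    I = np.zeros(n)
    I[0] = 1.0 if step < 500 else 0.0      # a simple external input pulse to node 0
    y = y + dt * dydt(y, I)
print(y)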

CTRNNs have been applied to evolutionary robotics where they have been used to address vision, co-operation, and minimal cognitive behaviour.

Note that, by the Shannon sampling theorem, discrete time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalent difference equations. This transformation can be thought of as occurring after the post-synaptic node activation functions have been low-pass filtered but prior to sampling.

Hierarchical

Hierarchical RNNs connect their neurons in various ways to decompose hierarchical behavior into useful subprograms. Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models.

Recurrent multilayer perceptron network

Generally, a recurrent multilayer perceptron (RMLP) network consists of cascaded subnetworks, each of which contains multiple layers of nodes. Each of these subnetworks is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.

Multiple timescales model

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization that depends on spatial connections between neurons and on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of such a hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence. Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.

Neural Turing machines

Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources which they can interact with by attentional processes. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.

Differentiable neural computer

Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.

Neural network pushdown automata

Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analogue stacks that are differentiable and that are trained. In this way, they are similar in complexity to recognizers of context free grammars (CFGs).

Memristive Networks

Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices. The memristors (memory resistors) are implemented by thin-film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures which may be based on memristive systems. Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity, and they naturally relax via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor-capacitor network, of a more interesting non-linear behavior. From this point of view, engineering analog memristive networks amounts to a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring, or topology.

Training

Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable. Various methods for doing so were developed in the 1980s and early 1990s by Werbos, Williams, Robinson, Schmidhuber, Hochreiter, Pearlmutter and others.

The standard method is called "backpropagation through time" or BPTT, and is a generalization of back-propagation for feed-forward networks. Like that method, it is an instance of automatic differentiation in the reverse accumulation mode of Pontryagin's minimum principle. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL, which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.

In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.

For recursively computing the partial derivatives, RTRL has a time-complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon. An online hybrid between BPTT and RTRL with intermediate complexity exists, along with variants for continuous time.
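
To illustrate the forward/backward structure of BPTT and its memory cost, here is a sketch for a small vanilla tanh RNN with a linear readout and squared error; all sizes, names, and the toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)
n_in, n_h, n_out, T = 3, 5, 2, 8
W_x = rng.normal(size=(n_h, n_in)) * 0.1
W_h = rng.normal(size=(n_h, n_h)) * 0.1
W_y = rng.normal(size=(n_out, n_h)) * 0.1
b, c = np.zeros(n_h), np.zeros(n_out)

xs = rng.normal(size=(T, n_in))
targets = rng.normal(size=(T, n_out))

def bptt(xs, targets):
    # Forward pass: store activations for the whole horizon (the memory cost noted above).
    hs = [np.zeros(n_h)]
    ys = []
    for x_t in xs:
        hs.append(np.tanh(W_x @ x_t + W_h @ hs[-1] + b))
        ys.append(W_y @ hs[-1] + c)
    loss = 0.5 * sum(float((y - t) @ (y - t)) for y, t in zip(ys, targets))

    # Backward pass: gradients flow backwards through the unrolled time steps.
    dW_x, dW_h, dW_y = np.zeros_like(W_x), np.zeros_like(W_h), np.zeros_like(W_y)
    db, dc = np.zeros_like(b), np.zeros_like(c)
    dh_next = np.zeros(n_h)
    for t in reversed(range(len(xs))):
        dy = ys[t] - targets[t]
        dW_y += np.outer(dy, hs[t + 1]); dc += dy
        dh = W_y.T @ dy + dh_next
        da = dh * (1.0 - hs[t + 1] ** 2)          # tanh derivative
        dW_x += np.outer(da, xs[t]); dW_h += np.outer(da, hs[t]); db += da
        dh_next = W_h.T @ da                      # gradient carried to the previous step
    return loss, (dW_x, dW_h, dW_y, db, dc)

loss, grads = bptt(xs, targets)
print(loss, [g.shape for g in grads])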

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems. This problem is also solved in the independently recurrent neural network (IndRNN) by reducing the context of a neuron to its own past state and the cross-neuron information can then be explored in the following layers. Memories of different range including long-term memory can be learned without the gradient vanishing and exploding problem.

The on-line algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks. It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view on gradient calculation techniques for recurrent networks with local feedback.

One approach to the computation of gradient information in RNNs with arbitrary architectures is based on the diagrammatic derivation of signal-flow graphs. It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations. It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.

Global optimization methods

Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared-difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.

The most common global optimization method for training RNNs is the use of genetic algorithms, especially in unstructured networks.

Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:

  • Each weight encoded in the chromosome is assigned to the respective weight link of the network.
  • The training set is presented to the network which propagates the input signals forward.
  • The mean-squared-error is returned to the fitness function.
  • This function drives the genetic selection process.

Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is:

  • When the neural network has learnt a certain percentage of the training data or
  • When the minimum value of the mean-squared-error is satisfied or
  • When the maximum number of training generations has been reached.

The stopping criterion is evaluated by the fitness function as it gets the reciprocal of the mean-squared-error from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared-error.
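
As a toy sketch of the decoding-and-evaluation loop described above: one gene per weight link, forward propagation only, and the reciprocal of the mean-squared-error as fitness. The network shape, toy target, and population size are illustrative assumptions, and no crossover or mutation step is shown.

import numpy as np

rng = np.random.default_rng(8)
n_in, n_h, n_out, T = 2, 4, 1, 10
shapes = [(n_h, n_in), (n_h, n_h), (n_out, n_h)]     # W_x, W_h, W_y
n_genes = sum(r * c for r, c in shapes)

xs = rng.normal(size=(T, n_in))                      # toy training sequence
targets = np.sum(xs, axis=1, keepdims=True)          # toy target: sum of current inputs

def decode(chromosome):
    """Assign each gene to its weight link, in a fixed predefined order."""
    mats, i = [], 0
    for r, cdim in shapes:
        mats.append(chromosome[i:i + r * cdim].reshape(r, cdim))
        i += r * cdim
    return mats

def fitness(chromosome):
    W_x, W_h, W_y = decode(chromosome)
    h, err = np.zeros(n_h), 0.0
    for x_t, y_t in zip(xs, targets):                # forward propagation only, no gradients
        h = np.tanh(W_x @ x_t + W_h @ h)
        err += float(np.mean((W_y @ h - y_t) ** 2))
    mse = err / T
    return 1.0 / (mse + 1e-12)                       # reciprocal of MSE drives selection

population = [rng.normal(size=n_genes) for _ in range(20)]
print(max(fitness(ch) for ch in population))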

Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.

Related fields and models

RNNs may behave chaotically. In such cases, dynamical systems theory may be used for analysis.

They are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

In particular, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX).

Libraries

  • Apache Singa
  • Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
  • Chainer: The first stable deep learning library that supports dynamic, define-by-run neural networks. Fully in Python, production support for CPU, GPU, distributed training.
  • Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka.
  • Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia.
  • Keras: High-level, easy to use API, providing a wrapper to many other deep learning libraries.
  • Microsoft Cognitive Toolkit
  • MXNet: a modern open-source deep learning framework used to train and deploy deep neural networks.
  • PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration.
  • TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary TPU, and mobile devices.
  • Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation.
  • Torch (www.torch.ch): A scientific computing framework with wide support for machine learning algorithms, written in C and Lua. The main author is Ronan Collobert, and it is now used at Facebook AI Research and Twitter.

Applications

Applications of recurrent neural networks include:

Lattice Boltzmann methods

From Wikipedia, the free encyclopedia

Lattice Boltzmann methods (LBM), which originated from the lattice gas automata (LGA) method (Hardy-Pomeau-Pazzis and Frisch-Hasslacher-Pomeau models), are a class of computational fluid dynamics (CFD) methods for fluid simulation. Instead of solving the Navier–Stokes equations directly, a fluid density on a lattice is simulated with streaming and collision (relaxation) processes. The method is versatile as the model fluid can straightforwardly be made to mimic common fluid behaviour like vapour/liquid coexistence, and so fluid systems such as liquid droplets can be simulated. Also, fluids in complex environments such as porous media can be straightforwardly simulated, whereas with complex boundaries other CFD methods can be hard to work with.

Computer simulation in two dimensions, using Lattice Boltzmann method, of a droplet that starts stretched and relaxes to its equilibrium circular shape

Algorithm

Unlike CFD methods that solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice. Due to its particulate nature and local dynamics, LBM has several advantages over other conventional CFD methods, especially in dealing with complex boundaries, incorporating microscopic interactions, and parallelization of the algorithm. A different interpretation of the lattice Boltzmann equation is that of a discrete-velocity Boltzmann equation. The numerical methods of solution of the system of partial differential equations then give rise to a discrete map, which can be interpreted as the propagation and collision of fictitious particles.

Schematic of D2Q9 lattice vectors for 2D Lattice Boltzmann

In an algorithm, there are collision and streaming steps. These evolve the density of the fluid $\rho(\vec{x}, t)$, for $\vec{x}$ the position and $t$ the time. As the fluid is on a lattice, the density has a number of components equal to the number of lattice vectors connected to each lattice point. As an example, the lattice vectors for a simple lattice used in simulations in two dimensions are shown here. This lattice is usually denoted D2Q9, for two dimensions and nine vectors: four vectors along north, east, south and west, plus four vectors to the corners of a unit square, plus a vector with both components zero. Then, for example, vector $\vec{e}_i = (0, -1)$, i.e., it points due south and so has no $x$ component but a $y$ component of $-1$. So one of the nine components of the total density at the central lattice point, $f_i(\vec{x}, t)$, is that part of the fluid at point $\vec{x}$ moving due south, at a speed in lattice units of one.
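
For reference, the D2Q9 lattice vectors and their standard weights can be written down directly; the ordering of the directions below is a convention chosen for this sketch, not mandated by the article.

import numpy as np

# D2Q9: nine lattice velocities (rest vector, 4 axis directions, 4 diagonals)
# with the standard lattice weights.
e = np.array([[ 0,  0],
              [ 1,  0], [ 0,  1], [-1,  0], [ 0, -1],    # east, north, west, south
              [ 1,  1], [-1,  1], [-1, -1], [ 1, -1]])   # corner directions
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                 # weights sum to 1

assert np.isclose(w.sum(), 1.0)
assert np.allclose((w[:, None] * e).sum(axis=0), 0.0)    # zero net momentum of the stencil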

Then the steps that evolve the fluid in time are:

The collision step

$$f_i(\vec{x}, t + \delta_t) = f_i(\vec{x}, t) + \frac{1}{\tau_f}\left(f_i^{\text{eq}}(\vec{x}, t) - f_i(\vec{x}, t)\right)$$

which is the Bhatnagar–Gross–Krook (BGK) model for relaxation to equilibrium via collisions between the molecules of a fluid. $f_i^{\text{eq}}(\vec{x}, t)$ is the equilibrium density along direction $i$ at the current density there. The model assumes that the fluid locally relaxes to equilibrium over a characteristic timescale $\tau_f$. This timescale determines the kinematic viscosity: the larger it is, the larger the kinematic viscosity.
 
The streaming step

$$f_i(\vec{x} + \vec{e}_i\, \delta_t, t + \delta_t) = f_i(\vec{x}, t + \delta_t)$$

As $f_i(\vec{x}, t)$ is, by definition, the fluid density at point $\vec{x}$ at time $t$ that is moving at a velocity of $\vec{e}_i$ per time step, at the next time step it will have flowed to point $\vec{x} + \vec{e}_i\, \delta_t$.
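
Putting the two steps together, a minimal NumPy sketch of a BGK collision-and-streaming loop on a periodic D2Q9 lattice; the grid size, relaxation time, and the commonly used second-order equilibrium are illustrative assumptions.

import numpy as np

nx, ny, tau = 64, 64, 0.8
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order (small Mach number) equilibrium distribution."""
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i, (ex, ey) in enumerate(e):
        eu = ex*ux + ey*uy
        feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

# Start from a uniform fluid at rest with a small density bump.
rho = np.ones((nx, ny)); rho[nx//2, ny//2] += 0.1
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(100):
    rho = f.sum(axis=0)                                  # macroscopic moments
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # collision (BGK relaxation)
    for i, (ex, ey) in enumerate(e):                     # streaming (periodic boundaries)
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)

print(rho.min(), rho.max())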

Advantages

  • The LBM was designed from scratch to run efficiently on massively parallel architectures, ranging from inexpensive embedded FPGAs and DSPs up to GPUs and heterogeneous clusters and supercomputers (even with a slow interconnection network). It enables complex physics and sophisticated algorithms. Efficiency leads to a qualitatively new level of understanding since it allows solving problems that previously could not be approached, or could only be approached with insufficient accuracy.
  • The method originates from a molecular description of a fluid and can directly incorporate physical terms stemming from a knowledge of the interaction between molecules. Hence it is an indispensable instrument in fundamental research, as it keeps the cycle between the elaboration of a theory and the formulation of a corresponding numerical model short.
  • Automated data pre-processing and lattice generation in a time that accounts for a small fraction of the total simulation.
  • Parallel data analysis, post-processing and evaluation.
  • Fully resolved multi-phase flow with small droplets and bubbles.
  • Fully resolved flow through complex geometries and porous media.
  • Complex, coupled flow with heat transfer and chemical reactions.

Limitations

Despite the increasing popularity of LBM in simulating complex fluid systems, this novel approach has some limitations. At present, high-Mach number flows in aerodynamics are still difficult for LBM, and a consistent thermo-hydrodynamic scheme is absent. However, as with Navier–Stokes based CFD, LBM methods have been successfully coupled with thermal-specific solutions to enable heat transfer (solids-based conduction, convection and radiation) simulation capability. For multiphase/multicomponent models, the interface thickness is usually large and the density ratio across the interface is small when compared with real fluids. Recently this problem has been resolved by Yuan and Schaefer who improved on models by Shan and Chen, Swift, and He, Chen, and Zhang. They were able to reach density ratios of 1000:1 by simply changing the equation of state. It has been proposed to apply Galilean Transformation to overcome the limitation of modelling high-speed fluid flows. Nevertheless, the wide applications and fast advancements of this method during the past twenty years have proven its potential in computational physics, including microfluidics: LBM demonstrates promising results in the area of high Knudsen number flows.

Development from the LGA method

LBM originated from the lattice gas automata (LGA) method, which can be considered as a simplified fictitious molecular dynamics model in which space, time, and particle velocities are all discrete. For example, in the 2-dimensional FHP Model each lattice node is connected to its neighbors by 6 lattice velocities on a triangular lattice; there can be either 0 or 1 particles at a lattice node moving with a given lattice velocity. After a time interval, each particle will move to the neighboring node in its direction; this process is called the propagation or streaming step. When more than one particle arrives at the same node from different directions, they collide and change their velocities according to a set of collision rules. Streaming steps and collision steps alternate. Suitable collision rules should conserve the particle number (mass), momentum, and energy before and after the collision. LGA suffer from several innate defects for use in hydrodynamic simulations: lack of Galilean invariance for fast flows, statistical noise and poor Reynolds number scaling with lattice size. LGA are, however, well suited to simplify and extend the reach of reaction diffusion and molecular dynamics models.

The main motivation for the transition from LGA to LBM was the desire to remove the statistical noise by replacing the Boolean particle number in a lattice direction with its ensemble average, the so-called density distribution function. Accompanying this replacement, the discrete collision rule is also replaced by a continuous function known as the collision operator. In the LBM development, an important simplification is to approximate the collision operator with the Bhatnagar-Gross-Krook (BGK) relaxation term. This lattice BGK (LBGK) model makes simulations more efficient and allows flexibility of the transport coefficients. On the other hand, it has been shown that the LBM scheme can also be considered as a special discretized form of the continuous Boltzmann equation. From Chapman-Enskog theory, one can recover the governing continuity and Navier–Stokes equations from the LBM algorithm.

Lattices and the DnQm classification

Lattice Boltzmann models can be operated on a number of different lattices, both cubic and triangular, and with or without rest particles in the discrete distribution function.

A popular way of classifying the different methods by lattice is the DnQm scheme. Here "Dn" stands for "n dimensions", while "Qm" stands for "m speeds". For example, D3Q15 is a 3-dimensional lattice Boltzmann model on a cubic grid, with rest particles present. Each node has a crystal shape and can deliver particles to 15 nodes: each of the 6 neighboring nodes that share a surface, the 8 neighboring nodes sharing a corner, and itself. (The D3Q15 model does not contain particles moving to the 12 neighboring nodes that share an edge; adding those would create a "D3Q27" model.)

Real quantities as space and time need to be converted to lattice units prior to simulation. Nondimensional quantities, like the Reynolds number, remain the same.

Lattice units conversion

In most Lattice Boltzmann simulations $\delta_x$ is the basic unit for lattice spacing, so if the domain of length $L$ has $N$ lattice units along its entire length, the space unit is simply defined as $\delta_x = L/N$. Speeds in lattice Boltzmann simulations are typically given in terms of the speed of sound. The discrete time unit can therefore be given as $\delta_t = \delta_x / C_s$, where the denominator $C_s$ is the physical speed of sound.

For small-scale flows (such as those seen in porous media mechanics), operating with the true speed of sound can lead to unacceptably short time steps. It is therefore common to raise the lattice Mach number to something much larger than the real Mach number, and to compensate for this by raising the viscosity as well, in order to preserve the Reynolds number.
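
A worked example of the conversion, using illustrative physical values for water; the relaxation-time relation used at the end is the standard BGK one (an assumption beyond this text) and shows why operating at the true speed of sound pushes the relaxation time towards its stability limit.

# A small worked conversion, with illustrative physical values.
L_phys   = 0.01      # domain length in metres
N        = 100       # lattice nodes along the domain
c_s_phys = 1480.0    # physical speed of sound in water, m/s
nu_phys  = 1e-6      # kinematic viscosity of water, m^2/s

dx = L_phys / N              # space step: delta_x = L / N
dt = dx / c_s_phys           # time step from the physical speed of sound

# Lattice viscosity and BGK relaxation time (c_s^2 = 1/3 in lattice units).
nu_lat = nu_phys * dt / dx**2
tau = 3.0 * nu_lat + 0.5
print(dx, dt, tau)           # tau this close to 0.5 is unstable, which is why the
                             # lattice Mach number (and the viscosity) are often raised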

Simulation of mixtures

Simulating multiphase/multicomponent flows has always been a challenge to conventional CFD because of the moving and deformable interfaces. More fundamentally, the interfaces between different phases (liquid and vapor) or components (e.g., oil and water) originate from the specific interactions among fluid molecules. Therefore, it is difficult to implement such microscopic interactions into the macroscopic Navier–Stokes equation. However, in LBM, the particulate kinetics provides a relatively easy and consistent way to incorporate the underlying microscopic interactions by modifying the collision operator. Several LBM multiphase/multicomponent models have been developed. Here phase separations are generated automatically from the particle dynamics and no special treatment is needed to manipulate the interfaces as in traditional CFD methods. Successful applications of multiphase/multicomponent LBM models can be found in various complex fluid systems, including interface instability, bubble/droplet dynamics, wetting on solid surfaces, interfacial slip, and droplet electrohydrodynamic deformations.

A lattice Boltzmann model for the simulation of gas mixture combustion, capable of accommodating significant density variations in the low-Mach number regime, has recently been proposed. In this respect, it is worth noting that, since LBM deals with a larger set of fields (as compared to conventional CFD), the simulation of reactive gas mixtures presents some additional challenges in terms of memory demand as far as large detailed combustion mechanisms are concerned. Those issues may be addressed, though, by resorting to systematic model reduction techniques.

Thermal lattice-Boltzmann method

Currently (2009), a thermal lattice-Boltzmann method (TLBM) falls into one of three categories: the multi-speed approach, the passive scalar approach, and the thermal energy distribution.

Derivation of Navier–Stokes equation from discrete LBE

We start with the discrete lattice Boltzmann equation (also referred to as the LBGK equation due to the collision operator used) and first perform a second-order Taylor series expansion about the left side of the LBE. This is chosen over a simpler first-order Taylor expansion because the discrete LBE cannot be recovered from it. When doing the second-order Taylor series expansion, the zero-derivative term and the first term on the right will cancel, leaving only the first and second derivative terms of the Taylor expansion and the collision operator:

For simplicity, write $f_i(\vec{x}, t)$ as $f_i$. The slightly simplified Taylor series expansion is then as follows, where ":" is the colon product between dyads:

By expanding the particle distribution function into equilibrium and non-equilibrium components and using the Chapman-Enskog expansion, where $\epsilon$ is the Knudsen number, the Taylor-expanded LBE can be decomposed into different magnitudes of order for the Knudsen number in order to obtain the proper continuum equations:

The equilibrium and non-equilibrium distributions satisfy the following relations to their macroscopic variables (these will be used later, once the particle distributions are in the "correct form" in order to scale from the particle to macroscopic level):

The Chapman-Enskog expansion is then:

By substituting the expanded equilibrium and non-equilibrium distributions into the Taylor expansion and separating into different orders of $\epsilon$, the continuum equations are nearly derived.

For the first order of $\epsilon$:

For the second order of $\epsilon$:

Then, using the first equation, the second can be simplified with some algebra into the following:

Applying the relations between the particle distribution functions and the macroscopic properties from above, the mass and momentum equations are achieved:

The momentum flux tensor has the following form then:

where $u^2$ is shorthand for the sum of the squares of all the components of $\vec{u}$ (i.e. $\vec{u} \cdot \vec{u}$), and the equilibrium particle distribution with second order to be comparable to the Navier–Stokes equation is:

The equilibrium distribution is only valid for small velocities or small Mach numbers. Inserting the equilibrium distribution back into the flux tensor leads to:

Finally, the Navier–Stokes equation is recovered under the assumption that density variation is small:

This derivation follows the work of Chen and Doolen.

Mathematical equations for simulations

The continuous Boltzmann equation is an evolution equation for a single-particle probability distribution function $f$, and the internal energy density distribution function $g$ (He et al.) obeys an analogous evolution equation; the two are respectively:

where $g$ is related to $f$ by

$F$ is an external force, $\Omega$ is a collision integral, and $\xi$ (also labeled $\vec{e}$ in the literature) is the microscopic velocity. The external force is related to the temperature external force by the relation below. A typical test for one's model is Rayleigh–Bénard convection.

Macroscopic variables such as the density $\rho$, velocity $\vec{u}$, and temperature $T$ can be calculated as the moments of the density distribution function:

The lattice Boltzmann method discretizes this equation by limiting space to a lattice and the velocity space to a discrete set of microscopic velocities $\vec{e}_i$. The microscopic velocities in D2Q9, D3Q15, and D3Q19, for example, are given as:

The single-phase discretized Boltzmann equations for the mass density and the internal energy density are:

The collision operator is often approximated by a BGK collision operator under the condition it also satisfies the conservation laws:

In the collision operator, $f_i^{\text{eq}}$ is the discrete equilibrium particle probability distribution function. In D2Q9 and D3Q19, it is shown below for an incompressible flow in continuous and discrete form, where D, R, and T are the dimension, universal gas constant, and absolute temperature respectively. The partial derivation from the continuous to the discrete form is provided through a simple derivation to second-order accuracy.

Letting $c = \sqrt{3RT}$ yields the final result:

As much work has already been done on single-component flow, the multicomponent/multiphase TLBM will be discussed, since it is more intriguing and useful than a single component alone. To be in line with current research, define the set of all components of the system (i.e., walls of porous media, multiple fluids/gases, etc.) with elements $\sigma$.

The relaxation parameter, $\tau$, is related to the kinematic viscosity, $\nu$, by the following relationship:

The moments of the $f_i$ give the local conserved quantities. The density is given by

and the weighted average velocity, $\vec{u}$, and the local momentum are given by

In the above equation for the equilibrium velocity $\vec{u}^{\text{eq}}$, the term $\vec{F}$ is the interaction force between a component and the other components. It is still the subject of much discussion as it is typically a tuning parameter that determines how fluid-fluid, fluid-gas, etc. interact. Frank et al. list current models for this force term. The commonly used derivations are the Gunstensen chromodynamic model, Swift's free energy-based approach for both liquid/vapor systems and binary fluids, He's intermolecular interaction-based model, the Inamuro approach, and the Lee and Lin approach.

The following is the general description of $\vec{F}$ as given by several authors.

$\psi$ is the effective mass and $G(\vec{x}, \vec{x}')$ is a Green's function representing the interparticle interaction, with $\vec{x}'$ as the neighboring site. $G$ satisfies $G(\vec{x}, \vec{x}') = G(\vec{x}', \vec{x})$, and a positive $G$ represents repulsive forces. For D2Q9 and D3Q19, this leads to

The effective mass proposed by Shan and Chen for a single-component, multiphase system takes the following form. The equation of state is also given under the condition of a single component and multiphase.
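
A hedged sketch of the widely used Shan–Chen pseudopotential force on a D2Q9 lattice, assuming the single-component effective mass $\psi(\rho) = \psi_0 \exp(-\rho_0/\rho)$ and a single interaction strength G; all constants and the periodic-boundary treatment are illustrative assumptions.

import numpy as np

nx, ny = 64, 64
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
psi0, rho0, G = 4.0, 200.0, -120.0          # illustrative constants; G < 0 gives attraction

def effective_mass(rho):
    # Shan-Chen single-component pseudopotential.
    return psi0 * np.exp(-rho0 / rho)

def shan_chen_force(rho):
    """F(x) = -G psi(x) sum_i w_i psi(x + e_i) e_i, with periodic boundaries."""
    psi = effective_mass(rho)
    Fx = np.zeros_like(rho)
    Fy = np.zeros_like(rho)
    for (ex, ey), wi in zip(e, w):
        psi_neigh = np.roll(np.roll(psi, -ex, axis=0), -ey, axis=1)   # psi at x + e_i
        Fx += wi * psi_neigh * ex
        Fy += wi * psi_neigh * ey
    return -G * psi * Fx, -G * psi * Fy

rho = 200.0 + 5.0 * np.random.default_rng(9).normal(size=(nx, ny))
Fx, Fy = shan_chen_force(rho)
print(Fx.shape, float(np.abs(Fx).max()))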

So far, it appears that $\psi_0$ and $\rho_0$ are free constants to tune, but once plugged into the system's equation of state (EOS), they must satisfy the thermodynamic relationships at the critical point, such that $(\partial p / \partial \rho) = 0$ and $(\partial^2 p / \partial \rho^2) = 0$. For the EOS, the lattice constant is 3.0 for D2Q9 and D3Q19 while it equals 10.0 for D3Q15.

It was later shown by Yuan and Schaefer[19] that the effective mass density needs to be changed to simulate multiphase flow more accurately. They compared the Shan and Chen (SC), Carnahan-Starling (C–S), van der Waals (vdW), Redlich–Kwong (R–K), Redlich–Kwong Soave (RKS), and Peng–Robinson (P–R) EOS. Their results revealed that the SC EOS was insufficient and that C–S, P–R, R–K, and RKS EOS are all more accurate in modeling multiphase flow of a single component.

For the popular isothermal Lattice Boltzmann methods these are the only conserved quantities. Thermal models also conserve energy and therefore have an additional conserved quantity:

Applications

In recent years, the LBM has proven to be a powerful tool for solving problems at different length and time scales. Some of the applications of LBM include:

  • Porous media flows
  • Biomedical flows
  • Earth sciences (soil filtration)
  • Energy sciences (fuel cells)

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...