Sunday, December 3, 2023

Structural bioinformatics

From Wikipedia, the free encyclopedia
Three-dimensional structure of a protein

Structural bioinformatics is the branch of bioinformatics that is related to the analysis and prediction of the three-dimensional structure of biological macromolecules such as proteins, RNA, and DNA. It deals with generalizations about macromolecular 3D structures such as comparisons of overall folds and local motifs, principles of molecular folding, evolution, binding interactions, and structure/function relationships, working both from experimentally solved structures and from computational models. The term structural has the same meaning as in structural biology, and structural bioinformatics can be seen as a part of computational structural biology. The main objective of structural bioinformatics is the creation of new methods of analysing and manipulating biological macromolecular data in order to solve problems in biology and generate new knowledge.

Introduction

Protein structure

The structure of a protein is directly related to its function. The presence of certain chemical groups in specific locations allows proteins to act as enzymes, catalyzing several chemical reactions. In general, protein structures are classified into four levels: primary (sequence), secondary (local conformation of the polypeptide chain), tertiary (three-dimensional structure of the protein fold), and quaternary (association of multiple polypeptide chains). Structural bioinformatics mainly addresses interactions among structures, taking their spatial coordinates into consideration; the primary structure is thus better analyzed in traditional branches of bioinformatics. However, the sequence implies restrictions that allow the formation of conserved local conformations of the polypeptide chain, such as alpha-helices, beta-sheets, and loops (secondary structure). In addition, weak interactions (such as hydrogen bonds) stabilize the protein fold. Interactions can be intrachain, i.e., occurring between parts of the same protein monomer (tertiary structure), or interchain, i.e., occurring between different chains (quaternary structure). Finally, the topological arrangement of interactions, whether strong or weak, and of entanglements is studied in the field of structural bioinformatics, utilizing frameworks such as circuit topology.

Structure visualization

Structural visualization of bacteriophage T4 lysozyme (PDB ID: 2LZM). (A) Cartoon; (B) Lines; (C) Surface; (D) Sticks.

Protein structure visualization is an important issue for structural bioinformatics. It allows users to observe static or dynamic representations of the molecules, also allowing the detection of interactions that may be used to make inferences about molecular mechanisms. The most common types of visualization are:

  • Cartoon: this type of protein visualization highlights the secondary structure differences. In general, α-helix is represented as a type of screw, β-strands as arrows, and loops as lines.
  • Lines: each amino acid residue is represented by thin lines, which allows a low cost for graphic rendering.
  • Surface: in this visualization, the external shape of the molecule is shown.
  • Sticks: each covalent bond between amino acid atoms is represented as a stick. This type of visualization is most commonly used to examine interactions between amino acids.
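These representations are available in most molecular viewers. As a minimal sketch, the snippet below renders the lysozyme structure shown above in the four styles using the open-source py3Dmol viewer (one option among many; PyMOL or VMD offer equivalent commands); it assumes py3Dmol is installed and a Jupyter-style notebook environment.

    # Minimal sketch: rendering 2LZM in the styles described above with py3Dmol.
    import py3Dmol

    view = py3Dmol.view(query='pdb:2LZM')               # fetch bacteriophage T4 lysozyme
    view.setStyle({'cartoon': {'color': 'spectrum'}})   # (A) cartoon: helices, strands, loops
    # view.setStyle({'line': {}})                       # (B) lines: cheap wireframe rendering
    # view.addSurface(py3Dmol.VDW, {'opacity': 0.7})    # (C) surface: external shape
    # view.setStyle({'stick': {}})                      # (D) sticks: one stick per covalent bond
    view.show()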

DNA structure

The classic DNA duplex structure was initially described by Watson and Crick (with contributions from Rosalind Franklin). The DNA molecule is composed of three components: a phosphate group, a pentose, and a nitrogenous base (adenine, thymine, cytosine, or guanine). The DNA double helix is stabilized by hydrogen bonds formed between base pairs: adenine with thymine (A-T) and cytosine with guanine (C-G). Many structural bioinformatics studies have focused on understanding interactions between DNA and small molecules, which have been the target of several drug design studies.

Interactions

Interactions are contacts established between parts of molecules at different levels. They are responsible for stabilizing protein structures and for performing a varied range of activities. In biochemistry, interactions are characterized by the proximity of atom groups or molecular regions that exert an effect upon one another, through forces such as electrostatic interactions, hydrogen bonding, and the hydrophobic effect. Proteins can take part in several types of interactions, such as protein-protein interactions (PPI), protein-peptide interactions, protein-ligand interactions (PLI), and protein-DNA interactions.

Contacts between two amino acid residues: Q196-R200 (PDB ID: 2X1C)

Calculating contacts

Calculating contacts is an important task in structural bioinformatics: it underpins the correct prediction of protein structure and folding, thermodynamic stability, protein-protein and protein-ligand interactions, and docking and molecular dynamics analyses, among others.

Traditionally, computational methods have used a threshold distance between atoms (also called a cutoff) to detect possible interactions. Detection is performed based on the Euclidean distance and the angles between atoms of particular types. However, most methods based on simple Euclidean distance cannot detect occluded contacts. Hence, cutoff-free methods, such as Delaunay triangulation, have gained prominence in recent years. In addition, combining a set of criteria, for example physicochemical properties, distance, geometry, and angles, has been used to improve contact determination.
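A minimal sketch of the cutoff approach, assuming Biopython is installed and a local copy of the 2LZM coordinates (file name illustrative): all heavy-atom pairs from different residues closer than a chosen Euclidean cutoff are reported.

    # Cutoff-based contact detection with Biopython (one possible toolkit).
    from Bio.PDB import PDBParser, NeighborSearch

    structure = PDBParser(QUIET=True).get_structure('2LZM', '2lzm.pdb')
    atoms = [a for a in structure.get_atoms() if a.element != 'H']   # heavy atoms only

    CUTOFF = 5.0  # Angstroms; typical values depend on the interaction type (see Tools section)
    ns = NeighborSearch(atoms)
    contacts = [
        (a, b) for a, b in ns.search_all(CUTOFF)
        if a.get_parent() != b.get_parent()   # skip pairs within the same residue
    ]
    print(f'{len(contacts)} inter-residue atom contacts within {CUTOFF} A')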

Protein Data Bank (PDB)

The number of structures in the PDB. (A) Overall growth of released structures in the Protein Data Bank per year. (B) Growth of structures deposited in the PDB from X-ray crystallography, NMR spectroscopy, and 3D electron microscopy experiments per year. Source: https://www.rcsb.org/stats/growth

The Protein Data Bank (PDB) is a database of 3D structure data for large biological molecules, such as proteins, DNA, and RNA. The PDB is managed by an international organization called the Worldwide Protein Data Bank (wwPDB), which is composed of several local organizations: PDBe, PDBj, RCSB PDB, and BMRB. They are responsible for keeping copies of PDB data available on the internet at no charge. The number of structures available in the PDB increases each year; they are typically obtained by X-ray crystallography, NMR spectroscopy, or cryo-electron microscopy.

Data format

The PDB format (.pdb) is the legacy textual file format used by the Protein Data Bank to store information about the three-dimensional structures of macromolecules. Due to restrictions in the original design of the format, the PDB format does not allow large structures containing more than 62 chains or 99,999 atom records.

PDBx/mmCIF (macromolecular Crystallographic Information File) is a standard text file format for representing crystallographic information. In 2014, the PDBx/mmCIF file format (.cif) replaced the PDB format as the standard PDB archive distribution. While the PDB format contains a set of records identified by a keyword of up to six characters, the PDBx/mmCIF format uses a key-value structure, where the key is a name that identifies a feature and the value is the variable information.
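Both formats can be read with standard toolkits; the sketch below uses Biopython as one possible choice (gemmi or similar libraries work as well) and assumes local copies of a structure in both formats, with illustrative file names. It also shows how individual key-value items can be pulled from an mmCIF file.

    # Reading the same structure from legacy PDB and PDBx/mmCIF files with Biopython.
    from Bio.PDB import PDBParser, MMCIFParser
    from Bio.PDB.MMCIF2Dict import MMCIF2Dict

    structure_pdb = PDBParser(QUIET=True).get_structure('2LZM', '2lzm.pdb')
    structure_cif = MMCIFParser(QUIET=True).get_structure('2LZM', '2lzm.cif')

    # mmCIF is key-value based, so individual fields can also be read directly.
    cif = MMCIF2Dict('2lzm.cif')
    print(cif['_entry.id'], cif['_exptl.method'])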

Other structural databases

In addition to the Protein Data Bank (PDB), there are several databases of protein structures and other macromolecules. Examples include:

  • MMDB: Experimentally determined three-dimensional structures of biomolecules derived from Protein Data Bank (PDB).
  • Nucleic Acid Database (NDB): Experimentally determined information about nucleic acids (DNA, RNA).
  • Structural Classification of Proteins (SCOP): Comprehensive description of the structural and evolutionary relationships between structurally known proteins.
  • TOPOFIT-DB: Protein structural alignments based on the TOPOFIT method.
  • Electron Density Server (EDS): Electron-density maps and statistics about the fit of crystal structures and their maps.
  • CASP: Critical Assessment of protein Structure Prediction, a community-wide, worldwide experiment for protein structure prediction hosted by the Prediction Center.
  • PISCES server for creating non-redundant lists of proteins: Generates PDB list by sequence identity and structural quality criteria.
  • The Structural Biology Knowledgebase: Tools to aid in protein research design.
  • ProtCID: The Protein Common Interface Database, a database of similar protein-protein interfaces in crystal structures of homologous proteins.
  • AlphaFold: The AlphaFold Protein Structure Database.

Structure comparison

Structural alignment

Structural alignment is a method for comparing 3D structures based on their shape and conformation. It can be used to infer evolutionary relationships among a set of proteins even with low sequence similarity. Structural alignment involves superimposing one 3D structure over a second one, rotating and translating atoms into corresponding positions (in general, using the Cα atoms or even the backbone heavy atoms C, N, O, and Cα). Usually, the alignment quality is evaluated based on the root-mean-square deviation (RMSD) of atomic positions, i.e., the average distance between atoms after superimposition:

RMSD = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \delta_i^2},

where δ_i is the distance between atom i and either the corresponding reference atom in the other structure or the mean coordinate of the N equivalent atoms. In general, the RMSD is measured in Ångström (Å) units, where 1 Å = 10^-10 m. The nearer to zero the RMSD value, the more similar the structures are.
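For instance, a superposition-and-RMSD calculation can be sketched with Biopython's Superimposer (one possible toolkit; a hand-written Kabsch algorithm in NumPy gives the same result). The file names, and the assumption that both models share chain A with equivalent Cα atoms, are illustrative only.

    # Superimpose two models on their C-alpha atoms and report the RMSD.
    from Bio.PDB import PDBParser, Superimposer

    parser = PDBParser(QUIET=True)
    fixed = [r['CA'] for r in parser.get_structure('a', 'model_a.pdb')[0]['A'] if 'CA' in r]
    moving = [r['CA'] for r in parser.get_structure('b', 'model_b.pdb')[0]['A'] if 'CA' in r]

    sup = Superimposer()
    sup.set_atoms(fixed, moving)      # finds the optimal rotation/translation
    sup.apply(moving)                 # moves the second set of atoms onto the first
    print(f'RMSD = {sup.rms:.2f} A')  # root-mean-square deviation of the C-alpha atoms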

Graph-based structural signatures

Structural signatures, also called fingerprints, are macromolecule pattern representations that can be used to infer similarities and differences. Comparisons across a large set of proteins using RMSD remain a challenge due to the high computational cost of structural alignments. Structural signatures based on graph distance patterns among atom pairs have been used to derive protein-identifying vectors and to detect non-trivial information. Furthermore, linear algebra and machine learning can be used for clustering protein signatures, detecting protein-ligand interactions, predicting ΔΔG, and proposing mutations based on Euclidean distance.
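As an illustration of the idea (not a reimplementation of any specific published method), the toy sketch below turns the Cα-Cα distance pattern of a structure into a fixed-length histogram vector that can be compared between proteins by Euclidean distance; file names are placeholders.

    # Toy distance-pattern signature: count C-alpha pairs per distance bin.
    import numpy as np
    from Bio.PDB import PDBParser

    def signature(pdb_file, bins=np.arange(0.0, 30.5, 0.5)):
        structure = PDBParser(QUIET=True).get_structure('s', pdb_file)
        coords = np.array([a.coord for a in structure.get_atoms() if a.get_id() == 'CA'])
        # pairwise C-alpha distances (upper triangle only, to avoid double counting)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        upper = d[np.triu_indices(len(coords), k=1)]
        hist, _ = np.histogram(upper, bins=bins)
        return hist / hist.sum()      # normalize so proteins of different size are comparable

    dist = np.linalg.norm(signature('protein1.pdb') - signature('protein2.pdb'))
    print(f'signature distance: {dist:.4f}')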

Structure prediction

A Ramachandran plot generated from human PCNA (PDB ID 1AXC). The red, brown, and yellow regions represent the favored, allowed, and "generously allowed" regions as defined by ProCheck. This plot can be used to identify incorrectly modeled amino acids.

The atomic structures of molecules can be obtained by several methods, such as X-ray crystallography (XRC), NMR spectroscopy, and 3D electron microscopy; however, these processes can be costly, and some structures, such as membrane proteins, can be hard to determine experimentally. Hence, computational approaches are needed for determining the 3D structures of macromolecules. Structure prediction methods are classified into comparative modeling and de novo modeling.

Comparative modeling

Comparative modeling, also known as homology modeling, is a methodology for constructing three-dimensional structures from the amino acid sequence of a target protein and a template with known structure. The literature describes that evolutionarily related proteins tend to present a conserved three-dimensional structure. However, sequences of distantly related proteins with identity lower than 20% can present different folds.

De novo modeling

In structural bioinformatics, de novo modeling, also known as ab initio modeling, refers to approaches for obtaining three-dimensional structures from sequences without the need for a homologous known 3D structure. Despite the new algorithms and methods proposed in recent years, de novo protein structure prediction is still considered one of the outstanding open problems in modern science.

Structure validation

After structure modeling, an additional step of structure validation is necessary, since many comparative and de novo modeling algorithms and tools use heuristics to try to assemble the 3D structure, which can introduce many errors. Some validation strategies consist of calculating energy scores and comparing them with experimentally determined structures. For example, the DOPE score is an energy score used by the MODELLER tool for selecting the best model.

Another validation strategy is calculating the φ and ψ backbone dihedral angles of all residues and constructing a Ramachandran plot. The side chains of amino acids and the nature of interactions in the backbone restrict these two angles, so the allowed conformations can be visualized with a Ramachandran plot. A high number of amino acids in disallowed regions of the plot is an indication of low-quality modeling.
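A minimal sketch of this check, assuming Biopython and matplotlib are available: extract the φ/ψ dihedrals and scatter them so outliers in disallowed regions become visible.

    # Extract phi/psi backbone dihedrals and plot a simple Ramachandran scatter.
    import math
    import matplotlib.pyplot as plt
    from Bio.PDB import PDBParser, PPBuilder

    structure = PDBParser(QUIET=True).get_structure('1AXC', '1axc.pdb')
    phi_psi = []
    for pp in PPBuilder().build_peptides(structure):
        for phi, psi in pp.get_phi_psi_list():
            if phi is not None and psi is not None:      # chain termini have undefined angles
                phi_psi.append((math.degrees(phi), math.degrees(psi)))

    plt.scatter(*zip(*phi_psi), s=4)
    plt.xlabel('phi (degrees)'); plt.ylabel('psi (degrees)')
    plt.xlim(-180, 180); plt.ylim(-180, 180)
    plt.show()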

Prediction tools

A list with commonly used software tools for protein structure prediction, including comparative modeling, protein threading, de novo protein structure prediction, and secondary structure prediction is available in the list of protein structure prediction software.

Molecular docking

Representation of docking a ligand (green) to a protein target (black).

Molecular docking (also referred to simply as docking) is a method used to predict the orientation coordinates of a molecule (ligand) when bound to another one (receptor or target). Binding occurs mostly through non-covalent interactions, although covalently linked binding can also be studied. Molecular docking aims to predict possible poses (binding modes) of the ligand when it interacts with specific regions of the receptor. Docking tools use force fields or other scoring functions to estimate a score for ranking poses, favoring those with better interactions between the two molecules.

In general, docking protocols are used to predict the interactions between small molecules and proteins. However, docking also can be used to detect associations and binding modes among proteins, peptides, DNA or RNA molecules, carbohydrates, and other macromolecules.
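The ranking step can be illustrated with a deliberately simplified toy: candidate poses are scored against the receptor with a Lennard-Jones-like term and sorted by energy. Real docking programs use far more elaborate scoring functions and search strategies; the coordinates below are random placeholders.

    # Toy pose ranking: score each candidate pose and keep the lowest-energy ones.
    import numpy as np

    def score_pose(receptor_xyz, ligand_xyz, sigma=3.5, epsilon=0.2):
        # pairwise receptor-ligand distances
        d = np.linalg.norm(receptor_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
        d = np.clip(d, 1.0, None)                 # avoid singularities at very short range
        lj = 4 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)
        return lj.sum()                           # lower (more negative) is better

    rng = np.random.default_rng(0)
    receptor = rng.uniform(0, 20, size=(200, 3))  # placeholder coordinates for illustration
    poses = [rng.uniform(0, 20, size=(15, 3)) for _ in range(50)]

    ranked = sorted(range(len(poses)), key=lambda i: score_pose(receptor, poses[i]))
    print('best pose index:', ranked[0])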

Virtual screening

Virtual screening (VS) is a computational approach used for fast screening of large compound libraries for drug discovery. Usually, virtual screening uses docking algorithms to rank small molecules with the highest affinity to a target receptor.

In recent times, several tools have been developed to evaluate the use of virtual screening in the process of discovering new drugs. However, problems such as missing information, an inaccurate understanding of drug-like molecular properties, weak scoring functions, and insufficient docking strategies hinder the docking process. Hence, the literature describes it as not yet a mature technology.

Molecular dynamics

Example: molecular dynamics of a glucose-tolerant β-Glucosidase

Molecular dynamics (MD) is a computational method for simulating interactions between molecules and their atoms during a given period of time. This method allows the observation of the behavior of molecules and their interactions, considering the system as a whole. To calculate the behavior of the system and thus determine the trajectories, MD integrates Newton's equations of motion, using molecular mechanics methods to estimate the forces acting between particles (force fields).
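The core integration loop can be sketched in a few lines: the velocity Verlet scheme applied to two particles interacting through a Lennard-Jones potential. Production MD packages (GROMACS, AMBER, NAMD, OpenMM, and others) add full force fields, thermostats, periodic boundaries, and constraints on top of this basic idea; the parameters below are arbitrary reduced units.

    # Velocity Verlet integration of two Lennard-Jones particles (reduced units).
    import numpy as np

    def lj_force(r_vec, sigma=1.0, epsilon=1.0):
        r = np.linalg.norm(r_vec)
        # force = -dV/dr along r_vec, with V = 4*eps*((sigma/r)^12 - (sigma/r)^6)
        return 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) * r_vec / r ** 2

    def forces(pos):
        f = lj_force(pos[0] - pos[1])
        return np.array([f, -f])                  # Newton's third law

    dt, mass, n_steps = 0.001, 1.0, 10_000
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    vel = np.zeros_like(pos)

    f = forces(pos)
    for _ in range(n_steps):                      # velocity Verlet: positions, then velocities
        pos += vel * dt + 0.5 * (f / mass) * dt ** 2
        f_new = forces(pos)
        vel += 0.5 * (f + f_new) / mass * dt
        f = f_new

    print('final separation:', np.linalg.norm(pos[0] - pos[1]))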

Applications

Informatics approaches used in structural bioinformatics are:

  • Selection of targets - Potential targets are identified by comparing them with databases of known structures and sequences. The importance of a target can be decided on the basis of published literature. A target can also be selected on the basis of its protein domains. Protein domains are building blocks that can be rearranged to form new proteins; they can initially be studied in isolation.
  • Tracking X-ray crystallography trials - X-ray crystallography can be used to reveal the three-dimensional structure of a protein. But, in order to use X-rays for studying protein crystals, pure protein crystals must be formed, which can take many trials. This leads to a need for tracking the conditions and results of trials. Furthermore, supervised machine learning algorithms can be applied to the stored data to identify conditions that might increase the yield of pure crystals.
  • Analysis of X-ray crystallographic data - The diffraction pattern obtained by bombarding electrons with X-rays is the Fourier transform of the electron density distribution. There is a need for algorithms that can deconvolve the Fourier transform with partial information (the phase information is missing, as detectors can only measure the amplitude of diffracted X-rays, not the phase shifts). Techniques such as multiwavelength anomalous dispersion can be used to generate an electron density map, using the locations of selenium atoms as a reference to determine the rest of the structure. A standard ball-and-stick model is then generated from the electron density map.
  • Analysis of NMR spectroscopy data - Nuclear magnetic resonance spectroscopy experiments produce two- (or higher-) dimensional data, with each peak corresponding to a chemical group within the sample. Optimization methods are used to convert spectra into three-dimensional structures.
  • Correlating structural information with functional information - Structural studies can be used as probes of structure-function relationships.

Tools

Distance criteria for contact definition

  Interaction type          Maximum distance
  Hydrogen bond             3.9 Å
  Hydrophobic interaction   5 Å
  Ionic interaction         6 Å
  Aromatic stacking         6 Å
List of structural bioinformatics tools

  • I-TASSER: Prediction of three-dimensional structure models of protein molecules from amino acid sequences.
  • MOE: Molecular Operating Environment (MOE), an extensive platform including structural modeling for proteins, protein families, and antibodies.
  • SBL: The Structural Bioinformatics Library, end-user applications and advanced algorithms.
  • BALLView: Molecular modeling and visualization.
  • STING: Visualization and analysis.
  • PyMOL: Viewer and modeling.
  • VMD: Viewer, molecular dynamics.
  • KiNG: An open-source Java kinemage viewer.
  • STRIDE: Determination of secondary structure from coordinates.
  • DSSP: Algorithm assigning a secondary structure to the amino acids of a protein.
  • MolProbity: Structure-validation web server.
  • PROCHECK: A structure-validation web service.
  • CheShift: A protein structure-validation online application.
  • 3Dmol.js: A molecular viewer for web applications developed using JavaScript.
  • PROPKA: Rapid prediction of protein pKa values based on empirical structure/function relationships.
  • CARA: Computer Aided Resonance Assignment.
  • Docking Server: A molecular docking web server.
  • StarBiochem: A Java protein viewer featuring direct search of the Protein Data Bank.
  • SPADE: The structural proteomics application development environment.
  • PocketSuite: A web portal for various web servers for binding-site-level analysis. PocketSuite is divided into PocketDepth (binding site prediction), PocketMatch (binding site comparison), PocketAlign (binding site alignment), and PocketAnnotate (binding site annotation).
  • MSL: An open-source C++ molecular modeling software library for the implementation of structural analysis, prediction, and design methods.
  • PSSpred: Protein secondary structure prediction.
  • Proteus: Web tool for suggesting mutation pairs.
  • SDM: A server for predicting the effects of mutations on protein stability.

Information field theory

From Wikipedia, the free encyclopedia

Information field theory (IFT) is a Bayesian statistical field theory relating to signal reconstruction, cosmography, and other related areas. IFT summarizes the information available on a physical field using Bayesian probabilities. It uses computational techniques developed for quantum field theory and statistical field theory to handle the infinite number of degrees of freedom of a field and to derive algorithms for the calculation of field expectation values. For example, the posterior expectation value of a field generated by a known Gaussian process and measured by a linear device with known Gaussian noise statistics is given by a generalized Wiener filter applied to the measured data. IFT extends such known filter formulas to situations with nonlinear physics, nonlinear devices, non-Gaussian field or noise statistics, dependence of the noise statistics on the field values, and partly unknown parameters of measurement. For this it uses Feynman diagrams, renormalisation flow equations, and other methods from mathematical physics.

Motivation

Fields play an important role in science, technology, and economy. They describe the spatial variations of a quantity, like the air temperature, as a function of position. Knowing the configuration of a field can be of large value. Measurements of fields, however, can never provide the precise field configuration with certainty. Physical fields have an infinite number of degrees of freedom, but the data generated by any measurement device is always finite, providing only a finite number of constraints on the field. Thus, an unambiguous deduction of such a field from measurement data alone is impossible and only probabilistic inference remains as a means to make statements about the field. Fortunately, physical fields exhibit correlations and often follow known physical laws. Such information is best fused into the field inference in order to overcome the mismatch of field degrees of freedom to measurement points. To handle this, an information theory for fields is needed, and that is what information field theory is.

Concepts

Bayesian inference

A signal field s(x) is a field value at a location x in a space Ω. The prior knowledge about the unknown signal field s is encoded in the probability distribution P(s). The data d provide additional information on s via the likelihood P(d|s), which gets incorporated into the posterior probability

P(s|d) = \frac{P(d|s)\, P(s)}{P(d)}

according to Bayes' theorem.

Information Hamiltonian

In IFT, Bayes' theorem is usually rewritten in the language of a statistical field theory,

P(s|d) = \frac{e^{-H(d,s)}}{Z(d)},

with the information Hamiltonian defined as

H(d,s) = -\log P(d,s) = -\log\big[P(d|s)\, P(s)\big],

the negative logarithm of the joint probability of data and signal, and with the partition function being

Z(d) = \int \mathcal{D}s\; e^{-H(d,s)} = P(d).

This reformulation of Bayes' theorem permits the usage of methods of mathematical physics developed for the treatment of statistical field theories and quantum field theories.

Fields

As fields have an infinite number of degrees of freedom, the definition of probabilities over spaces of field configurations has subtleties. Identifying physical fields as elements of function spaces poses the problem that no Lebesgue measure is defined over the latter, and therefore probability densities cannot be defined there. However, physical fields have much more regularity than most elements of function spaces, as they are continuous and smooth at most of their locations. Therefore, less general but sufficiently flexible constructions can be used to handle the infinite number of degrees of freedom of a field.

A pragmatic approach is to regard the field to be discretized in terms of pixels. Each pixel carries a single field value that is assumed to be constant within the pixel volume. All statements about the continuous field have then to be cast into its pixel representation. This way, one deals with finite dimensional field spaces, over which probability densities are well definable.

In order for this description to be a proper field theory, it is further required that the pixel resolution can always be refined, while expectation values of the discretized field s_Δ converge to finite values as the pixel size Δ goes to zero:

\lim_{\Delta \to 0} \langle F[s_\Delta] \rangle_{P(s_\Delta)} = \langle F[s] \rangle_{P(s)}

for any well-behaved functional F[s] of the field.

Path integrals

If this limit exists, one can talk about the field configuration space integral or path integral

\int \mathcal{D}s \equiv \lim_{\Delta \to 0} \int \prod_{x} ds_x,

irrespective of the resolution at which it might be evaluated numerically.

Gaussian prior

The simplest prior for a field is that of a zero-mean Gaussian probability distribution,

P(s) = \mathcal{G}(s, S) \equiv \frac{1}{\sqrt{|2\pi S|}} \exp\left(-\frac{1}{2}\, s^\dagger S^{-1} s\right).

The determinant in the denominator might be ill-defined in the continuum limit Δ → 0; however, all that is necessary for IFT to be consistent is that this determinant can be estimated for any finite-resolution field representation with Δ > 0 and that this permits the calculation of convergent expectation values.

A Gaussian probability distribution requires the specification of the two-point correlation function S of the field, with coefficients

S_{xy} = \langle s_x\, s_y \rangle_{(s)} \equiv \int \mathcal{D}s\; s_x\, s_y\, P(s),

and a scalar product for continuous fields,

a^\dagger b \equiv \int dx\; a_x^{*}\, b_x,

with respect to which the inverse signal field covariance S^{-1} is constructed, i.e.

\big(S^{-1} S\big)_{xy} = \delta(x - y).

The corresponding prior information Hamiltonian reads

H(s) = -\log P(s) = \frac{1}{2}\, s^\dagger S^{-1} s + \frac{1}{2} \log |2\pi S|.
Measurement equation

The measurement data d were generated with the likelihood P(d|s). In case the instrument was linear, a measurement equation of the form

d = R\, s + n

can be given, in which R is the instrument response, which describes how the data on average react to the signal, and n is the noise, simply the difference between the data d and the linear signal response R s. It is essential to note that the response translates the infinite-dimensional signal vector into the finite-dimensional data space. In components this reads

d_i = \int dx\; R_{ix}\, s_x + n_i,

where a vector component notation was also introduced for the signal and data vectors.

If the noise follows signal-independent, zero-mean Gaussian statistics with covariance N, then the likelihood is Gaussian as well,

P(d|s) = \mathcal{G}(d - Rs,\, N),

and the likelihood information Hamiltonian is

H(d|s) = -\log P(d|s) = \frac{1}{2}\, (d - Rs)^\dagger N^{-1} (d - Rs) + \frac{1}{2} \log |2\pi N|.

A linear measurement of a Gaussian signal, subject to Gaussian and signal-independent noise, leads to a free IFT.

Free theory

Free Hamiltonian

The joint information Hamiltonian of the Gaussian scenario described above is

H(d,s) = -\log P(d,s) \simeq \frac{1}{2}\, s^\dagger D^{-1} s - j^\dagger s, \quad \text{with} \quad D = \big(S^{-1} + R^\dagger N^{-1} R\big)^{-1} \ \text{and} \ j = R^\dagger N^{-1} d,

where ≃ denotes equality up to irrelevant constants, which, in this case, means expressions that are independent of s. From this it is clear that the posterior must be a Gaussian with mean m = D j and variance D,

P(s|d) = \mathcal{G}(s - m,\, D),

where the equality between the right- and left-hand sides holds as both distributions are normalized, \int \mathcal{D}s\; P(s|d) = \int \mathcal{D}s\; \mathcal{G}(s - m, D) = 1.

Generalized Wiener filter

The posterior mean

m = D\, j = \big(S^{-1} + R^\dagger N^{-1} R\big)^{-1} R^\dagger N^{-1} d

is also known as the generalized Wiener filter solution, and the uncertainty covariance

D = \big(S^{-1} + R^\dagger N^{-1} R\big)^{-1}

as the Wiener variance.

In IFT, j is called the information source, as it acts as a source term that excites the field (knowledge), and D the information propagator, as it propagates information from one location to another:

m_x = \int dy\; D_{xy}\, j_y.
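The formulas above translate directly into a small numerical sketch with dense matrices (illustrative only; practical IFT implementations such as the NIFTy library use implicit operators instead). The grid size, covariances, and response below are arbitrary choices.

    # Generalized Wiener filter m = D j on a small pixelized grid with dense matrices.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_data = 128, 64

    # prior signal covariance S: smooth correlations between neighbouring pixels
    x = np.arange(n_pix)
    S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2) + 1e-6 * np.eye(n_pix)

    R = np.zeros((n_data, n_pix))                 # response: every second pixel is observed
    R[np.arange(n_data), np.arange(0, n_pix, 2)] = 1.0
    N = 0.1 * np.eye(n_data)                      # noise covariance

    s_true = rng.multivariate_normal(np.zeros(n_pix), S)
    d = R @ s_true + rng.multivariate_normal(np.zeros(n_data), N)

    j = R.T @ np.linalg.solve(N, d)                                    # information source
    D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.solve(N, R))  # information propagator
    m = D @ j                                                          # posterior mean (Wiener filter)
    print('reconstruction error: %.3f' % np.sqrt(np.mean((m - s_true) ** 2)))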

Interacting theory

Interacting Hamiltonian

If any of the assumptions that lead to the free theory is violated, IFT becomes an interacting theory, with terms that are of higher than quadratic order in the signal field. This happens when the signal or the noise do not follow Gaussian statistics, when the response is non-linear, when the noise depends on the signal, or when the response or the covariances are uncertain.

In this case, the information Hamiltonian might be expandable in a Taylor-Fréchet series,

H(d,s) \simeq \frac{1}{2}\, s^\dagger D^{-1} s - j^\dagger s + H_{\mathrm{int}}(d,s) = H_{\mathrm{free}}(d,s) + H_{\mathrm{int}}(d,s),

where H_free is the free Hamiltonian, which alone would lead to a Gaussian posterior, and H_int is the interacting Hamiltonian, which encodes the non-Gaussian corrections as a power series in s. The first- and second-order Taylor coefficients are often identified with the (negative) information source -j and the information propagator D, respectively. The higher coefficients are associated with non-linear self-interactions.

Classical field

The classical field s_cl minimizes the information Hamiltonian,

\frac{\delta H(d,s)}{\delta s}\bigg|_{s = s_{\mathrm{cl}}} = 0,

and therefore maximizes the posterior:

\frac{\delta P(s|d)}{\delta s}\bigg|_{s = s_{\mathrm{cl}}} = 0.

The classical field s_cl is therefore the maximum a posteriori estimator of the field inference problem.

Critical filter

The Wiener filter problem requires the two-point correlation S of a field to be known. If it is unknown, it has to be inferred along with the field itself. This requires the specification of a hyperprior P(S). Often, statistical homogeneity (translation invariance) can be assumed, implying that S is diagonal in Fourier space (for Ω = R^u being a u-dimensional Cartesian space). In this case, only the Fourier space power spectrum P_s(k) needs to be inferred. Given a further assumption of statistical isotropy, this spectrum depends only on the length k of the Fourier vector, and only a one-dimensional spectrum has to be determined. The prior field covariance then reads, in Fourier space coordinates, S_{k k'} ∝ δ(k - k') P_s(k).

If the prior on P_s(k) is flat, the joint probability of data and spectrum is obtained by marginalizing over the signal field,

P(d, P_s) = \int \mathcal{D}s\; P(d, s, P_s),

where the notation of the information propagator D and source j of the Wiener filter problem is used again. The corresponding information Hamiltonian is

H(d, P_s) \simeq \frac{1}{2} \operatorname{tr}\big[\ln S\big] - \frac{1}{2} \operatorname{tr}\big[\ln D\big] - \frac{1}{2}\, j^\dagger D\, j,

where ≃ denotes equality up to irrelevant constants (here: constant with respect to P_s). Minimizing this with respect to P_s(k), in order to get its maximum a posteriori power spectrum estimator, yields

\frac{\varrho_k}{P_s(k)} = \frac{1}{P_s(k)^2} \operatorname{tr}\big[(m\, m^\dagger + D)\, \mathbb{P}_k\big],

where the Wiener filter mean m = D j and the spectral band projector \mathbb{P}_k, with \varrho_k = \operatorname{tr}[\mathbb{P}_k] degrees of freedom in band k, were introduced. The latter commutes with S, since S is diagonal in Fourier space. The maximum a posteriori estimator for the power spectrum is therefore

P_s(k) = \frac{1}{\varrho_k} \operatorname{tr}\big[(m\, m^\dagger + D)\, \mathbb{P}_k\big].

It has to be calculated iteratively, as m and D themselves depend on P_s. In an empirical Bayes approach, the estimated P_s(k) would be taken as given. As a consequence, the posterior mean estimate for the signal field is the corresponding m, and its uncertainty the corresponding D, in the empirical Bayes approximation.

The resulting non-linear filter is called the critical filter. A generalization of the power spectrum estimation formula, obtained by introducing spectral hyperprior parameters, exhibits a perception threshold: the data variance in a Fourier band has to exceed the expected noise level by a certain amount before the signal reconstruction becomes non-zero for this band. Whenever the data variance exceeds this threshold only slightly, the signal reconstruction jumps to a finite excitation level, similarly to a first-order phase transition in thermodynamic systems. For filters at or beyond a critical value of these parameters, perception of the signal starts continuously as soon as the data variance exceeds the noise level. The disappearance of the discontinuous perception at this critical value is similar to a thermodynamic system going through a critical point. Hence the name critical filter.

The critical filter, extensions thereof to non-linear measurements, and the inclusion of non-flat spectrum priors, permitted the application of IFT to real world signal inference problems, for which the signal covariance is usually unknown a priori.

IFT application examples

Radio interferometric image of radio galaxies in the galaxy cluster Abell 2219. The images were constructed by data back-projection (top), the CLEAN algorithm (middle), and the RESOLVE algorithm (bottom). Negative, and therefore unphysical, fluxes are displayed in white.

The generalized Wiener filter, that emerges in free IFT, is in broad usage in signal processing. Algorithms explicitly based on IFT were derived for a number of applications. Many of them are implemented using the Numerical Information Field Theory (NIFTy) library.

  • D³PO is a code for Denoising, Deconvolving, and Decomposing Photon Observations. It reconstructs images from individual photon count events taking into account the Poisson statistics of the counts and an instrument response function. It splits the sky emission into an image of diffuse emission and one of point sources, exploiting the different correlation structure and statistics of the two components for their separation. D³PO has been applied to data of the Fermi and the RXTE satellites.
  • RESOLVE is a Bayesian algorithm for aperture synthesis imaging in radio astronomy. RESOLVE is similar to D³PO, but it assumes a Gaussian likelihood and a Fourier space response function. It has been applied to data of the Very Large Array.
  • PySESA is a Python framework for Spatially Explicit Spectral Analysis of point clouds and geospatial data.

Advanced theory

Many techniques from quantum field theory can be used to tackle IFT problems, like Feynman diagrams, effective actions, and the field operator formalism.

Feynman diagrams

First three Feynman diagrams contributing to the posterior mean estimate of a field. A line expresses an information propagator, a dot at the end of a line an information source, and a vertex an interaction term. The first diagram encodes the Wiener filter, the second a non-linear correction, and the third an uncertainty correction to the Wiener filter.

In case the interaction coefficients in a Taylor-Fréchet expansion of the information Hamiltonian,

H(d,s) = H_{\mathrm{free}}(d,s) + H_{\mathrm{int}}(d,s),

are small, the log partition function, or Helmholtz free energy,

F(d) \equiv -\ln Z(d) = -\ln \int \mathcal{D}s\; e^{-H(d,s)},

can be expanded asymptotically in terms of these coefficients. The free Hamiltonian specifies the mean m = D j and variance D of the Gaussian distribution \mathcal{G}(s - m, D) over which the expansion is integrated. This leads to a sum over the set of all connected Feynman diagrams. From the Helmholtz free energy, any connected moment of the field can be calculated via

\langle s_{x_1} \cdots s_{x_n} \rangle^{\mathrm{c}}_{(s|d)} = \frac{\delta^n \ln Z(d)}{\delta j_{x_1} \cdots \delta j_{x_n}}.

Situations where small expansion parameters exist, as needed for such a diagrammatic expansion to converge, are given by nearly Gaussian signal fields, where the non-Gaussianity of the field statistics leads to small interaction coefficients. For example, the statistics of the Cosmic Microwave Background are nearly Gaussian, with small amounts of non-Gaussianities believed to be seeded during the inflationary epoch in the Early Universe.

Effective action

In order to have stable numerics for IFT problems, a field functional that, if minimized, provides the posterior mean field is needed. Such is given by the effective action or Gibbs free energy of a field. The Gibbs free energy can be constructed from the Helmholtz free energy via a Legendre transformation. In IFT, it is given by the difference of the internal information energy

U(m, D) \equiv \langle H(d,s) \rangle_{(s|m,D)} = \int \mathcal{D}s\; \mathcal{G}(s - m, D)\, H(d,s)

and the Shannon entropy

S(m, D) \equiv -\int \mathcal{D}s\; \mathcal{G}(s - m, D)\, \ln \mathcal{G}(s - m, D)

for temperature T = 1, where a Gaussian posterior approximation P'(s|d) = \mathcal{G}(s - m, D) is used, with the approximate data d' = (m, D) containing the mean and the dispersion of the field.

The Gibbs free energy is then

G(m, D\,|\,d) = U(m, D) - T\, S(m, D) = \mathrm{KL}\big(P'(s|d),\, P(s|d)\big) + F(d),

the Kullback-Leibler divergence between approximate and exact posterior plus the Helmholtz free energy. As the latter does not depend on the approximate data d' = (m, D), minimizing the Gibbs free energy is equivalent to minimizing the Kullback-Leibler divergence between approximate and exact posterior. Thus, the effective action approach of IFT is equivalent to variational Bayesian methods, which also minimize the Kullback-Leibler divergence between approximate and exact posteriors.

Minimizing the Gibbs free energy provides approximately the posterior mean field

m = \langle s \rangle_{(s|d)} = \int \mathcal{D}s\; s\, P(s|d),

whereas minimizing the information Hamiltonian provides the maximum a posteriori field. As the latter is known to over-fit noise, the former is usually the better field estimator.

Operator formalism

The calculation of the Gibbs free energy requires the calculation of Gaussian integrals over an information Hamiltonian, since the internal information energy is

U(m, D) = \int \mathcal{D}s\; \mathcal{G}(s - m, D)\, H(d,s).

Such integrals can be calculated via a field operator formalism, in which

O_x = m_x + \int dy\; D_{xy}\, \frac{\delta}{\delta m_y}

is the field operator. This generates the field expression within the integral if applied to the Gaussian distribution function,

O_x\, \mathcal{G}(s - m, D) = s_x\, \mathcal{G}(s - m, D),

and any higher power of the field if applied several times,

O_x\, O_y\, \mathcal{G}(s - m, D) = s_x\, s_y\, \mathcal{G}(s - m, D).

If the information Hamiltonian is analytical, all its terms can be generated via the field operator,

H(d, O)\, \mathcal{G}(s - m, D) = H(d, s)\, \mathcal{G}(s - m, D).

As the field operator does not depend on the field s itself, it can be pulled out of the path integral of the internal information energy construction,

U(m, D) = H(d, O) \int \mathcal{D}s\; \mathcal{G}(s - m, D) = H(d, O)\, 1,

where 1 should be regarded as a functional that always returns the value 1 irrespective of the value of its input m. The resulting expression can be calculated by commuting the mean field annihilators a_x = \int dy\; D_{xy}\, \delta/\delta m_y to the right of the expression, where they vanish, since \int dy\; D_{xy}\, \delta 1/\delta m_y = 0. The mean field annihilator commutes with the mean field as

\big[a_x,\, m_y\big] = D_{xy}.

By using the field operator formalism, the Gibbs free energy can be calculated, which permits the (approximate) inference of the posterior mean field via a numerically robust functional minimization.

History

The book of Norbert Wiener might be regarded as one of the first works on field inference. The usage of path integrals for field inference was proposed by a number of authors, e.g. Edmund Bertschinger or William Bialek and A. Zee. The connection of field theory and Bayesian reasoning was made explicit by Jörg Lemm. The term information field theory was coined by Torsten Enßlin. See the latter reference for more information on the history of IFT.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...