Sunday, March 9, 2025

Evolution of the brain

From Wikipedia, the free encyclopedia
Evolution of the brain from ape to man

The evolution of the brain refers to the progressive development and complexity of neural structures over millions of years, resulting in the diverse range of brain sizes and functions observed across different species today, particularly in vertebrates.

The evolution of the brain has exhibited diverging adaptations within taxonomic classes, such as Mammalia, and even more diverse adaptations across other taxonomic classes. Brain-to-body size scales allometrically: as body size changes, so do the physiological, anatomical, and biochemical relationships between brain and body. Small-bodied mammals tend to have relatively large brains compared to their bodies, while larger mammals (such as whales) have smaller brain-to-body ratios. When brain weight is plotted against body weight for primates, the regression line through the sample points indicates the brain size expected for a primate of a given body size. Lemurs fall below this line, meaning that a primate of their size would be expected to have a larger brain. Humans lie well above this line, indicating that they are more encephalized than lemurs and, in fact, more encephalized than any other primate. This suggests that human brains have undergone a larger evolutionary increase in complexity relative to size. Some of these changes have been linked to multiple genetic factors, including proteins and organelles.
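
One widely used way to quantify position relative to such a regression line is the encephalization quotient (EQ), the ratio of observed brain mass to the brain mass expected for an animal of that body mass. The following minimal sketch uses Jerison's classic mammalian fit (expected brain mass = 0.12 × body mass^(2/3)); the masses and resulting values are illustrative round numbers, not measurements cited in this article.

```python
def encephalization_quotient(brain_g, body_g, coeff=0.12, exponent=2 / 3):
    """EQ = observed brain mass / expected brain mass for the body mass.
    Coefficient and exponent follow Jerison's classic mammalian fit
    (expected = 0.12 * body^(2/3)); other datasets give other values."""
    expected_brain_g = coeff * body_g ** exponent
    return brain_g / expected_brain_g

# Illustrative, rounded masses in grams (not measurements from this article):
print(f"human EQ ~ {encephalization_quotient(1350, 65000):.1f}")  # well above 1
print(f"lemur EQ ~ {encephalization_quotient(25, 2000):.1f}")     # closer to 1
```

A species lying exactly on the regression line has EQ ≈ 1; values above 1 correspond to points above the line (more encephalized than expected), matching the human-versus-lemur comparison in the text.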

Early history

Unsolved problem in biology:  How and why did the brain evolve?

One approach to understanding overall brain evolution is to use a paleoarchaeological timeline to trace the necessity for ever-increasing complexity in structures that allow for chemical and electrical signaling. Because brains and other soft tissues do not fossilize as readily as mineralized tissues, scientists often look to other structures as evidence in the fossil record to get an understanding of brain evolution. This, however, leads to a dilemma: organisms with more complex nervous systems, protected by bone or other tissues that readily fossilize, appear in the fossil record before evidence of chemical and electrical signaling itself. Evidence from 2008 showed that the ability to transmit electrical and chemical signals existed even before more complex multicellular lifeforms.

Fossilization of brain tissue, as well as other soft tissue, is nonetheless possible, and scientists can infer that the first brain structure appeared at least 521 million years ago, with fossil brain tissue present in sites of exceptional preservation.

Another approach to understanding brain evolution is to look at extant organisms that do not possess complex nervous systems, comparing anatomical features that allow for chemical or electrical messaging. For example, choanoflagellates are organisms that possess various membrane channels that are crucial to electrical signaling. The membrane channels of choanoflagellates are homologous to those found in animal cells, which is consistent with the evolutionary connection between early choanoflagellates and the ancestors of animals. Another example of an extant organism with the capacity to transmit electrical signals is the glass sponge, a multicellular organism capable of propagating electrical impulses without the presence of a nervous system.

Before the evolutionary development of the brain, nerve nets, the simplest form of nervous system, developed. These nerve nets were a sort of precursor to the more evolutionarily advanced brains. They were first observed in Cnidaria and consist of a number of spread-apart neurons that allow the organism to respond to physical contact. They can rudimentarily detect food and other chemicals, but these nerve nets do not allow the organism to detect the source of the stimulus.

Ctenophores also demonstrate this crude precursor to a brain or centralized nervous system; however, they phylogenetically diverged before the phyla Porifera (the sponges) and Cnidaria. There are two current theories on the emergence of nerve nets. One theory is that nerve nets developed independently in ctenophores and cnidarians. The other is that a common ancestor developed nerve nets, which were then lost in Porifera.

A study of brain evolution in mice, chickens, monkeys, and apes concluded that more recently evolved species tend to preserve the structures responsible for basic behaviors. A long-term human study comparing the modern human brain to more primitive brains found that the modern human brain retains the primitive hindbrain region, what most neuroscientists call the protoreptilian brain. The purpose of this part of the brain is to sustain fundamental homeostatic functions, the self-regulating processes organisms use to help their bodies adapt; the pons and medulla are major structures found there. A new region of the brain developed in mammals about 250 million years after the appearance of the hindbrain. This region is known as the paleomammalian brain, the major parts of which are the hippocampi and amygdalae, often referred to as the limbic system. The limbic system deals with more complex functions, including emotional, sexual, and fighting behaviors. Of course, animals that are not vertebrates also have brains, and their brains have undergone separate evolutionary histories.

The brainstem and limbic system are largely based on nuclei, which are essentially balled-up clusters of tightly packed neurons and the axon fibers that connect them to each other, as well as to neurons in other locations. The other two major brain areas (the cerebrum and cerebellum) are based on a cortical architecture. At the outer periphery of the cortex, the neurons are arranged into layers (the number of which varies according to species and function) a few millimeters thick. There are axons that travel between the layers, but the majority of axon mass is below the neurons themselves. Since cortical neurons and most of their axon fiber tracts do not have to compete for space, cortical structures can scale more easily than nuclear ones. A key feature of cortex is that because it scales with surface area, more of it can be fit inside a skull by introducing convolutions, in much the same way that a dinner napkin can be stuffed into a glass by wadding it up. The degree of convolution is generally greater in species with more complex behavior, which benefits from the increased surface area.

The cerebellum, or "little brain," is behind the brainstem and below the occipital lobe of the cerebrum in humans. Its purposes include the coordination of fine sensorimotor tasks, and it may be involved in some cognitive functions, such as language and various motor skills involving the hands and feet. The cerebellum also helps maintain equilibrium; damage to it affects virtually all physical activities. The human cerebellar cortex is finely convoluted, much more so than the cerebral cortex. Its interior axon fiber tracts are called the arbor vitae, or Tree of Life.

The area of the brain with the greatest amount of recent evolutionary change is called the neocortex. In reptiles and fish, this area is called the pallium and is smaller and simpler relative to body mass than what is found in mammals. According to research, the cerebrum first developed about 200 million years ago. It is responsible for higher cognitive functions, for example language, thinking, and related forms of information processing. It is also responsible for processing sensory input (together with the thalamus, a part of the limbic system that acts as an information router); the thalamus receives the different sensations before the information is passed on to the cerebral cortex. Most of its function is subconscious, that is, not available for inspection or intervention by the conscious mind. The neocortex is an elaboration, or outgrowth, of structures in the limbic system, with which it is tightly integrated. The neocortex is the main part controlling many brain functions, as it accounts for about half of the brain's volume. The development of these recent evolutionary changes in the neocortex likely occurred as a result of new neural network formations and positive selection of certain genetic components.

Role of embryology

In addition to studying the fossil record, evolutionary history can be investigated via embryology. An embryo is an unborn or unhatched animal, and evolutionary history can be studied by observing how processes in embryonic development are conserved (or not conserved) across species. Similarities between different species may indicate an evolutionary connection. One way anthropologists study evolutionary connections between species is by observing orthologs: homologous genes in two or more species that are related by linear descent from a common ancestral gene. By using embryology, the evolution of the brain can be tracked across various species.

Bone morphogenetic protein (BMP), a growth factor that plays a significant role in embryonic neural development, is highly conserved amongst vertebrates, as is sonic hedgehog (SHH), a morphogen that inhibits BMP to allow neural crest development. Tracking these growth factors with the use of embryology provides a deeper understanding of which areas of the brain diverged in their evolution. Varying levels of these growth factors lead to differing embryonic neural development, which in turn affects the complexity of future neural systems. Studying the brain's development at various embryonic stages across differing species provides additional insight into what evolutionary changes may have historically occurred. This then allows scientists to look into the factors that may have caused such changes, such as links to neural network diversity, growth factor production, protein-coding selections, and other genetic factors.

Randomizing access and increasing size

Some animal phyla have gone through major brain enlargement through evolution (e.g. vertebrates and cephalopods both contain many lineages in which brains have grown), but most animal groups are composed only of species with extremely small brains. Some scientists argue that this difference is due to vertebrate and cephalopod neurons having evolved ways of communicating that overcome the scalability problem of neural networks, while most animal groups have not. They argue that traditional neural networks fail to improve their function when they scale up because filtering based on previously known probabilities causes self-fulfilling-prophecy-like biases that create false statistical evidence, giving a completely false worldview; randomized access can overcome this problem and allow brains to be scaled up to more discriminating conditioned reflexes, leading to new worldview-forming abilities at certain thresholds. On this view, when neurons scale in a non-randomized fashion, their functionality becomes more limited, because their neural networks are unable to process more complex systems without exposure to new formations. Randomization allows the entire brain to eventually get access to all information over the course of many shifts, even though instant privileged access is physically impossible. They cite that vertebrate neurons transmit virus-like capsules containing RNA that are sometimes read in the receiving neuron and sometimes passed on unread, which creates randomized access, and that cephalopod neurons make different proteins from the same gene, which suggests another mechanism for randomization of concentrated information in neurons, both making it evolutionarily worthwhile to scale up brains.

Brain re-organization

With the use of in vivo magnetic resonance imaging (MRI) and tissue sampling, different cortical samples from members of each hominoid species were analyzed. In each species, specific areas were either relatively enlarged or shrunken, which can reveal details of neural organization. Different sizes of cortical areas can show specific adaptations, functional specializations, and evolutionary events that changed how the hominoid brain is organized. Early on, it was predicted that the frontal lobe, a large part of the brain generally devoted to behavior and social interaction, accounted for the differences in behavior between other hominoids and humans. Discrediting this theory was evidence that damage to the frontal lobe in both humans and other hominoids produces atypical social and emotional behavior; this similarity means that the frontal lobe was not very likely to have been selected for reorganization. Instead, it is now believed that evolution occurred in other parts of the brain that are strictly associated with certain behaviors. The reorganization that took place is thought to have been more organizational than volumetric; brain volumes remained relatively similar, but the positions of landmark surface anatomical features, for example the lunate sulcus, suggest that the brains underwent a neurological reorganization. There is also evidence that the early hominin lineage underwent a quiescent period, or period of dormancy, which supports the idea of neural reorganization.

Dental fossil records for early humans and hominins show that immature hominins, including australopithecines and members of Homo, had a quiescent period (Bown et al. 1987). A quiescent period is a period in which there are no eruptions of adult teeth; during this time the child becomes more accustomed to social structure and the development of culture. This period gives the child an extra advantage over other hominoids, as several years can be devoted to developing speech and learning to cooperate within a community. This period is also discussed in relation to encephalization. It was discovered that chimpanzees do not have this quiescent dental period, which suggests that a quiescent period occurred very early in hominin evolution. Using the models for neurological reorganization, it can be suggested that the cause of this period, dubbed middle childhood, was most likely enhanced foraging abilities in varying seasonal environments.

Genetic factors in recent evolution

Genes involved in neurodevelopment and in neuronal physiology are highly conserved between mammalian species (94% of genes are expressed in common between humans and chimpanzees, 75% between humans and mice), compared to genes expressed in other organs. Therefore, only a few genes account for species differences in human brain development and function.

Development of the human cerebral cortex

The main differences lie in the evolution of non-coding genomic regions involved in the regulation of gene expression. This leads to differential expression of genes during the development of the human brain compared to other species, including chimpanzees. Some of these regions evolved fast in the human genome (human accelerated regions). The new genes expressed during human neurogenesis are notably associated with the NOTCH, WNT, and mTOR pathways, and also involve ZEB2, PDGFD, and its receptor PDGFRβ. The human cerebral cortex is also characterized by a steeper gradient of retinoic acid in the prefrontal cortex, leading to higher prefrontal cortex volume. All of these differences in gene expression lead to higher proliferation of neural progenitors, and thereby to more neurons in the human cerebral cortex. Some genes, such as GADD45G and FLRT2/FLRT3, lose their expression during the development of the human cerebral cortex.

Another source of molecular novelty relies on new genes in the human or hominid genomes arising through segmental duplication. Around 30 new genes in hominid genomes are dynamically expressed during human corticogenesis. Some have been linked to higher proliferation of neural progenitors: NOTCH2NLA/B/C, ARHGAP11B, CROCCP2, TBC1D3, and TMEM14B. Patients with deletions of NOTCH2NL genes display microcephaly, showing that such duplicated genes, acquired in the human genome, are necessary for proper corticogenesis.

MCPH1 and ASPM

Bruce Lahn of the Howard Hughes Medical Institute at the University of Chicago and colleagues have suggested that there are specific genes that control the size of the human brain. These genes continue to play a role in brain evolution, implying that the brain is continuing to evolve. The study began with the researchers assessing 214 genes involved in brain development, obtained from humans, macaques, rats, and mice. Lahn and the other researchers noted points in the DNA sequences that caused protein alterations, and then scaled these DNA changes to the evolutionary time it took for them to occur. The data showed that the genes in the human brain evolved much faster than those of the other species. Once this genomic evidence was acquired, Lahn and his team set out to find the specific gene or genes that allowed for, or even controlled, this rapid evolution. Two genes were found to control the size of the human brain as it develops: Microcephalin (MCPH1) and Abnormal Spindle-like Microcephaly-associated (ASPM). The researchers at the University of Chicago determined that under the pressures of selection, both of these genes showed significant DNA sequence changes. Lahn's earlier studies showed that Microcephalin experienced rapid evolution along the primate lineage that eventually led to the emergence of Homo sapiens; after the emergence of humans, Microcephalin seems to have evolved more slowly. In contrast, ASPM showed its most rapid evolution late in human evolution, after the divergence between chimpanzees and humans had already occurred.

Each of the gene sequences went through specific changes that led to the evolution of humans from ancestral relatives. To determine these alterations, Lahn and his colleagues compared DNA sequences from multiple primates with those of humans. They then statistically analyzed the key differences between the primate and human DNA and concluded that the differences were due to natural selection. The changes in the DNA sequences of these genes accumulated to bring about the competitive advantage and higher fitness that humans possess relative to other primates. This comparative advantage is coupled with a larger brain size, which ultimately allows the human mind to have a higher cognitive awareness.

ZEB2 protein

ZEB2 is a protein-coding gene in Homo sapiens. A 2021 study found that a delayed change in the shape of early brain cells causes the distinctly large human forebrain compared to other apes, and identified ZEB2 as a genetic regulator of this delay; manipulating ZEB2 led brain organoids to acquire the cortical architecture of nonhuman apes.

NOVA1

In 2021, researchers reported on brain organoids created from stem cells into which they had reintroduced, via CRISPR-Cas9, the archaic variant of the gene NOVA1 present in Neanderthals and Denisovans. The experiments showed that the variant has a major impact on neurodevelopment and suggested that such genetic mutations during the evolution of the human brain underlie traits that separate modern humans from extinct Homo species. They found that expression of the archaic NOVA1 in cortical organoids leads to "modified synaptic protein interactions, affects glutamatergic signaling, underlies differences in neuronal connectivity, and promotes higher heterogeneity of neurons regarding their electrophysiological profiles". This research suggests positive selection of the modern NOVA1 gene, which may have promoted the randomization of neural scaling. A subsequent study failed to replicate the differences in organoid morphology between the modern human and the archaic NOVA1 variant, consistent with suspected unwanted side effects of CRISPR editing in the original study.

SRGAP2C and neuronal maturation

Less is known about neuronal maturation. Synaptic gene and protein expression are protracted, in line with the protracted synaptic maturation of human cortical neurons, a phenomenon called neoteny. This probably relies on the evolution of non-coding genomic regions. The consequence of this neoteny could be an extension of the period of synaptic plasticity, and therefore of learning. A human-specific duplicated gene, SRGAP2C, accounts for this synaptic neoteny and acts by regulating molecular pathways linked to neurodevelopmental disorders. Other genes, such as osteocrin and cerebellin-2, are differentially expressed in human neurons during their development.

LRRC37B and neuronal electrical properties

Even less is known about the molecular specificities linked to the physiology of human neurons. Human neurons are more divergent in the genes they express compared to chimpanzees than chimpanzees are to gorillas, which suggests an acceleration of non-coding genomic regions associated with genes involved in neuronal physiology, in particular those linked to synapses. A hominid-specific duplicated gene, LRRC37B, codes for a transmembrane receptor that is selectively localized at the axon initial segment of human cortical pyramidal neurons. It inhibits the voltage-gated sodium channels that generate action potentials, leading to lower neuronal excitability. Human cortical pyramidal neurons display a lower excitability than those of other mammalian species (including macaques and marmosets), which could lead to different circuit functions in humans. Therefore, LRRC37B, whose expression was acquired in the human lineage after the separation from chimpanzees, could be a key gene in the function of the human cerebral cortex. LRRC37B binds to secreted FGF13A and SCN1B and indirectly modulates the activity of SCN8A, all of which are involved in neural disorders, such as epilepsy and autism, that involve defects in neuronal excitability. LRRC37B may therefore contribute to human-specific sensitivities to such disorders.

Genome repair

The genomic DNA of postmitotic neurons ordinarily does not replicate. Protection strategies have evolved to ensure the distinctive longevity of the neuronal genome. Human neurons are reliant on DNA repair processes to maintain function during an individual's lifetime. DNA repair tends to occur preferentially at evolutionarily conserved sites that are specifically involved with the regulation of expression of genes essential for neuronal identity and function.

Other factors

Many other genetic factors may also be involved in the recent evolution of the brain.

  • For instance, scientists showed experimentally, with brain organoids grown from stem cells, how differences between humans and chimpanzees are also substantially caused by non-coding DNA (often discarded as relatively meaningless "junk DNA"), in particular via CRE-regulated expression of the ZNF558 gene for a transcription factor that regulates the SPATA18 gene. SPATA18 encodes a protein that can induce lysosome-like organelles within mitochondria that eradicate oxidized mitochondrial proteins. This helps monitor mitochondrial quality, and dysregulation of this quality control has been linked to cancer and degenerative diseases. This example illustrates the complexity and scope of the relatively recent evolution leading to Homo sapiens.
  • A change in the gene TKTL1 could be a key factor in recent brain evolution and in the differences between modern humans and (other) apes and Neanderthals, related to neocortex neurogenesis. However, the "archaic" allele attributed to Neanderthals is present in 0.03% of Homo sapiens, and no resultant phenotypic differences have been reported in these people. Additionally, as Herai et al. contend, more is not always better. In fact, enhanced neuron production "can lead to an abnormally enlarged cortex and layer-specific imbalances in glia/neuron ratios and neuronal subpopulations during neurodevelopment." Even the original study's authors agree that "any attempt to discuss prefrontal cortex and cognitive advantage of modern humans over Neandertals based on TKTL1 alone is problematic".
  • Some of the prior study's authors reported a similar ARHGAP11B mutation in 2016.
  • Epigenetics also plays a major role in the evolution of the brain in and toward humans.

Recently evolved traits

Language

A genome-wide association study meta-analysis reported genetic factors underlying language-related capacities, which are so far unique to humans, in particular factors underlying differences in skill levels across five tested traits. For example, it identified an association with the neuroanatomy of a language-related brain area via neuroimaging correlation. These data contribute to identifying and understanding the biological basis of this recently evolved capability.

Human brain evolution

One of the prominent ways of tracking the evolution of the human brain is through direct evidence in the form of fossils. The evolutionary history of the human brain shows primarily a gradually larger brain relative to body size along the evolutionary path from early primates to hominids and finally to Homo sapiens. Because fossilized brain tissue is rare, a more reliable approach is to observe anatomical characteristics of the skull that offer insight into brain characteristics. One such method is to observe the endocranial cast (also referred to as an endocast). Endocasts form when, during fossilization, the brain deteriorates away, leaving a space that is filled by surrounding sedimentary material over time. These casts give an imprint of the lining of the brain cavity, which allows a visualization of what was there. This approach, however, is limited in regard to what information can be gathered. Information gleaned from endocasts is primarily limited to the size of the brain (cranial capacity or endocranial volume), prominent sulci and gyri, and the size of dominant lobes or regions of the brain. While endocasts are extremely helpful in revealing superficial brain anatomy, they cannot reveal brain structure, particularly of deeper brain areas. By determining scaling metrics relating cranial capacity to the total number of neurons present in primates, it is also possible to estimate the number of neurons from fossil evidence.

Facial reconstruction of a Homo georgicus from over 1.5 Mya

Despite their limitations, endocasts can and do provide a basis for understanding human brain evolution, which shows a gradually larger brain relative to body size along the path from early primates to hominins and finally to Homo sapiens. This trend indicates a two- to threefold increase in size over the past 3 million years, and it can be visualized with current data on hominin evolution, starting with Australopithecus, a group of hominins from which humans are likely descended. Across these observations, the main development during evolution was the increase in brain size.

However, recent research has called into question the hypothesis of a threefold increase in brain size when comparing Homo sapiens with Australopithecus and chimpanzees. For example, an article published in 2022 compiled a large data set of contemporary humans and found that the smallest human brains are less than twice the size of those of large-brained chimpanzees. As the authors write, '...the upper limit of chimpanzee brain size is 500 g/ml yet numerous modern humans have brain size below 900 g/ml.' (Note that in this quote, the unit g/ml is to be understood not in the usual way, as grams per millilitre, but rather as grams or millilitres. This is consistent because brain density is close to 1 g/ml.) Consequently, the authors argue that the notion of an increase in brain size being related to advances in cognition needs to be rethought in light of global variation in brain size, as the brains of many modern humans with normal cognitive capacities are only 400 g/ml larger than those of chimpanzees. Additionally, much of the increase in brain size, which occurs to a much greater degree in specific modern populations, can be explained by increases in correlated body size related to diet and climatic factors.

Australopiths lived from 3.85 to 2.95 million years ago with a general cranial capacity near that of the extant chimpanzee, around 300–500 cm3. Considering that the volume of the modern human brain is around 1,352 cm3 on average, this represents a substantial amount of evolved brain mass. Australopiths are estimated to have had a total neuron count of ~30-35 billion.

Progressing along the human ancestral timeline, brain size continued to increase steadily (see Homininae) moving into the era of Homo. For example, Homo habilis, living 2.4 million to 1.4 million years ago and argued to be the first Homo species based on a host of characteristics, had a cranial capacity of around 600 cm3. Homo habilis is estimated to have had ~40 billion neurons.

A little closer to the present day, Homo heidelbergensis lived from around 700,000 to 200,000 years ago, had a cranial capacity of around 1290 cm3, and had around 76 billion neurons.

Homo neanderthalensis, living 400,000 to 40,000 years ago, had a cranial capacity comparable to that of modern humans, around 1500–1600 cm3 on average, with some specimens having even greater cranial capacity. Neanderthals are estimated to have had around 85 billion neurons. The increase in brain size peaked with Neanderthals, possibly due to their larger visual systems.

It is also important to note that measures of brain mass or volume, such as cranial capacity, or even relative brain size (brain mass expressed as a percentage of body mass), are not measures of intelligence, use, or function of regions of the brain. Total neuron count likewise does not indicate a higher ranking in cognitive abilities: elephants have a higher number of total neurons (257 billion) than humans (100 billion). Relative brain size, overall mass, and total number of neurons are only a few metrics that help scientists follow the evolutionary trend of an increased brain-to-body ratio through the hominin phylogeny.
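
As an illustration of the scaling approach mentioned above, the following sketch fits a power law (neurons ≈ a · volume^b) to the rounded cranial capacities and neuron estimates quoted in this section; the australopith values use midpoints of the quoted ranges, and the fitted relation is illustrative rather than a published scaling law.

```python
import numpy as np

# Cranial capacities (cm^3) and estimated neuron counts (billions),
# rounded from the figures quoted in this section; the australopith
# values are midpoints of the quoted ranges.
volumes = np.array([400.0, 600.0, 1290.0, 1550.0, 1352.0])
neurons = np.array([32.5, 40.0, 76.0, 85.0, 100.0])

# Fit neurons ~ a * volume^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(volumes), np.log(neurons), 1)
print(f"neurons ~ {np.exp(log_a):.2f} * volume^{b:.2f} (billions)")

# Estimate the neuron count for a hypothetical 1000 cm^3 endocast.
v = 1000.0
print(f"{v:.0f} cm^3 -> ~{np.exp(log_a) * v ** b:.0f} billion neurons")
```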

In 2021, scientists suggested that the brains of early Homo from Africa and from Dmanisi, Georgia, in Western Asia "retained a great ape-like structure of the frontal lobe" for far longer than previously thought, until about 1.5 million years ago. Their findings imply that Homo first dispersed out of Africa before human brains evolved to roughly their modern anatomical structure in terms of the location and organization of individual brain regions. They also suggest that this evolution occurred not during, but long after, the Homo lineage evolved ~2.5 million years ago, and after they, Homo erectus in particular, evolved to walk upright. What is least controversial is that the brain expansion started about 2.6 Ma (about the same time as the start of the Pleistocene) and ended around 0.2 Ma.

Evolution of the neocortex

In addition to the size of the brain, scientists have observed changes in the folding of the brain as well as in the thickness of the cortex. The more convoluted the surface of the brain, the greater the surface area of the cortex, which allows the cortex, the most evolutionarily advanced part of the brain, to expand. Greater surface area of the brain is linked to higher intelligence, as is a thicker cortex, but there is an inverse relationship: the thicker the cortex, the more difficult it is for it to fold. In adult humans, thicker cerebral cortex has been linked to higher intelligence.

The neocortex is the most advanced and most evolutionarily young part of the human brain. It is six layers thick and is only present in mammals. It is especially prominent in humans and is the location of most higher-level functioning and cognitive ability. The six-layered neocortex of mammals is evolutionarily derived from a three-layer cortex present in all modern reptiles. This three-layer cortex is still conserved in some parts of the human brain, such as the hippocampus, and is believed to have evolved into the neocortex during the transition between the Triassic and Jurassic periods. Early mammals had relatively little neocortex compared with primates, which have proportionally more cortex. The three layers of the reptilian cortex correlate strongly with the first, fifth, and sixth layers of the mammalian neocortex. Across mammalian species, primates have greater neuronal density than rodents of similar brain mass, which may account for increased intelligence.

Theories of human brain evolution

Explanations of the rapid evolution and exceptional size of the human brain can be classified into five groups: instrumental, social, environmental, dietary, and anatomo-physiological. The instrumental hypotheses are based on the logic that evolutionary selection for larger brains is beneficial for species survival, dominance, and spread, because larger brains facilitate food-finding and mating success. The social hypotheses suggest that social behavior stimulates evolutionary expansion of brain size. Similarly, the environmental hypotheses suppose that encephalization is promoted by environmental factors such as stress, variability, and consistency. The dietary theories maintain that food quality and certain nutritional components directly contributed to the brain growth in the Homo genus. The anatomo-physiologic concepts, such as cranio-cerebral vascular hypertension due to head-down posture of the anthropoid fetus during pregnancy, are primarily focused on anatomic-functional changes that predispose to brain enlargement.

No single theory can completely account for human brain evolution; multiple selective pressures in combination seem to have been involved. Synthetic theories have been proposed but have not clearly explained the reasons for the uniqueness of the human brain. Puzzlingly, brain enlargement has been found to have occurred independently in different primate lineages, but only the human lineage ended up with exceptional brain capacity. Fetal head-down posture may be an explanation of this conundrum, because Homo sapiens is the only primate that is an obligatory biped with upright posture.

Force field (chemistry)

From Wikipedia, the free encyclopedia
Part of force field of ethane for the C-C stretching bond.

In the context of chemistry, molecular physics, physical chemistry, and molecular modelling, a force field is a computational model used to describe the forces between atoms (or collections of atoms) within molecules or between molecules, as well as in crystals. Force fields are a variety of interatomic potentials. More precisely, the force field refers to the functional form and parameter sets used to calculate the potential energy of a system on the atomistic level. Force fields are usually used in molecular dynamics or Monte Carlo simulations. The parameters for a chosen energy function may be derived from classical laboratory experiment data, calculations in quantum mechanics, or both. Force fields utilize the same concept as force fields in classical physics, with the main difference being that the force field parameters in chemistry describe the energy landscape on the atomistic level. From a force field, the force acting on every particle is derived as the negative gradient of the potential energy with respect to the particle coordinates: $\mathbf{F}_i = -\nabla_{\mathbf{r}_i} E$.
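
As a minimal illustration of that last point, the sketch below evaluates a 12-6 Lennard-Jones pair potential and recovers the force along the pair separation as the negative gradient of the energy, approximated here by a central finite difference. The epsilon and sigma values are illustrative (roughly argon-like) and are not parameters of any particular published force field.

```python
def lj_energy(r, epsilon=0.25, sigma=3.4):
    """12-6 Lennard-Jones pair energy; epsilon (kcal/mol) and sigma
    (angstrom) are illustrative, roughly argon-like values."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lj_force(r, h=1e-6):
    """Force along the pair separation, F = -dE/dr, approximated by a
    central finite difference standing in for the analytic gradient."""
    return -(lj_energy(r + h) - lj_energy(r - h)) / (2.0 * h)

r = 3.8  # angstrom
print(f"E({r}) = {lj_energy(r):+.4f} kcal/mol")
print(f"F({r}) = {lj_force(r):+.4f} kcal/mol/angstrom")
```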

A large number of different force field types exist today (e.g. for organic molecules, ions, polymers, minerals, and metals). Depending on the material, different functional forms are usually chosen for the force fields since different types of atomistic interactions dominate the material behavior.

There are various criteria that can be used for categorizing force field parametrization strategies. An important distinction is between 'component-specific' and 'transferable' force fields. For a component-specific parametrization, the considered force field is developed solely for describing a single given substance (e.g. water). For a transferable force field, all or some parameters are designed as building blocks and become transferable/applicable for different substances (e.g. methyl groups in alkane transferable force fields). A different important differentiation addresses the physical structure of the models: all-atom force fields provide parameters for every type of atom in a system, including hydrogen, while united-atom interatomic potentials treat the hydrogen and carbon atoms in methyl groups and methylene bridges as one interaction center. Coarse-grained potentials, which are often used in long-time simulations of macromolecules such as proteins, nucleic acids, and multi-component complexes, sacrifice chemical detail for higher computing efficiency.

Force fields for molecular systems

Molecular mechanics potential energy function with continuum solvent.

The basic functional form of potential energy for modeling molecular systems includes intramolecular interaction terms for interactions of atoms that are linked by covalent bonds, and intermolecular (i.e. nonbonded, also termed noncovalent) terms that describe the long-range electrostatic and van der Waals forces. The specific decomposition of the terms depends on the force field, but a general form for the total energy in an additive force field can be written as

$E_\text{total} = E_\text{bonded} + E_\text{nonbonded}$

where the components of the covalent and noncovalent contributions are given by the following summations:

$E_\text{bonded} = E_\text{bond} + E_\text{angle} + E_\text{dihedral}$

$E_\text{nonbonded} = E_\text{electrostatic} + E_\text{van der Waals}$

The bond and angle terms are usually modeled by quadratic energy functions that do not allow bond breaking. A more realistic description of a covalent bond at higher stretching is provided by the more expensive Morse potential. The functional form for dihedral energy is variable from one force field to another. Additionally, "improper torsional" terms may be added to enforce the planarity of aromatic rings and other conjugated systems, as well as "cross-terms" that describe the coupling of different internal variables, such as angles and bond lengths. Some force fields also include explicit terms for hydrogen bonds.
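
Because the dihedral functional form varies between force fields, the sketch below shows just one widespread choice, a periodic cosine term E = k(1 + cos(n*phi - delta)); the barrier height, multiplicity, and phase are illustrative values, not parameters of any specific force field.

```python
import math

def dihedral_energy(phi_rad, k=1.4, n=3, delta=0.0):
    """One common periodic torsion form, E = k * (1 + cos(n*phi - delta)).
    k (kcal/mol), multiplicity n, and phase delta are illustrative."""
    return k * (1.0 + math.cos(n * phi_rad - delta))

# Energy profile over a rotation; with n=3 the minima fall at the
# staggered geometries (60 and 180 degrees).
for deg in (0, 60, 120, 180):
    e = dihedral_energy(math.radians(deg))
    print(f"phi = {deg:3d} deg -> E = {e:.2f} kcal/mol")
```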

The nonbonded terms are the most computationally intensive. A popular choice is to limit interactions to pairwise energies. The van der Waals term is usually computed with a Lennard-Jones potential or the Mie potential, and the electrostatic term with Coulomb's law. However, both can be buffered or scaled by a constant factor to account for electronic polarizability. A large number of force fields based on this or similar energy expressions have been proposed in the past decades for modeling different types of materials such as molecular substances, metals, and glasses; see below for a comprehensive list of force fields.

Bond stretching

As it is rare for bonds to deviate significantly from their equilibrium values, the most simplistic approaches utilize a Hooke's law formula:

$E_\text{bond} = \frac{k_{ij}}{2} \left(l_{ij} - l_{0,ij}\right)^2$

where $k_{ij}$ is the force constant, $l_{ij}$ is the bond length, and $l_{0,ij}$ is the value for the bond length between atoms $i$ and $j$ when all other terms in the force field are set to 0. The term $l_{0,ij}$ is at times differently defined or taken at different thermodynamic conditions.

The bond stretching constant $k_{ij}$ can be determined from the experimental infrared spectrum, Raman spectrum, or high-level quantum-mechanical calculations. The constant $k_{ij}$ determines vibrational frequencies in molecular dynamics simulations. The stronger the bond between atoms, the higher the value of the force constant and the higher the wavenumber (energy) in the IR/Raman spectrum.

Though the formula of Hooke's law provides a reasonable level of accuracy at bond lengths near the equilibrium distance, it becomes less accurate as one moves away. To model the Morse curve better, one could employ cubic and higher powers. However, for most practical applications these differences are negligible, and inaccuracies in predictions of bond lengths are on the order of a thousandth of an angstrom, which is also the limit of reliability for common force fields. A Morse potential can be employed instead to enable bond breaking and higher accuracy, even though it is less efficient to compute. For reactive force fields, bond breaking and bond orders are additionally considered.
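
The contrast between the harmonic approximation and the Morse potential can be seen numerically: in the sketch below the two agree near the equilibrium length but diverge at larger stretching, where the Morse energy levels off at the dissociation energy. All parameter values are illustrative, chosen so that the curvatures roughly match at the minimum (k ≈ 2Da²).

```python
import math

def harmonic_bond(r, k=620.0, r0=1.53):
    """Hooke's-law bond term, E = (k/2)(r - r0)^2; illustrative values
    roughly resembling a C-C bond (kcal/mol/A^2, angstrom)."""
    return 0.5 * k * (r - r0) ** 2

def morse_bond(r, D=88.0, a=1.88, r0=1.53):
    """Morse potential, E = D(1 - exp(-a(r - r0)))^2, which levels off
    at the dissociation energy D instead of growing without bound."""
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

for r in (1.53, 1.70, 2.20, 4.00):
    print(f"r = {r:.2f} A: harmonic {harmonic_bond(r):8.1f}, "
          f"Morse {morse_bond(r):6.1f} kcal/mol")
```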

Electrostatic interactions

Electrostatic interactions are represented by a Coulomb energy, which utilizes atomic charges to represent chemical bonding ranging from covalent to polar covalent and ionic bonding. The typical formula is the Coulomb law:

$E_\text{Coulomb} = \frac{1}{4\pi\varepsilon_0} \frac{q_i q_j}{r_{ij}}$

where $r_{ij}$ is the distance between two atoms $i$ and $j$. The total Coulomb energy is a sum over all pairwise combinations of atoms and usually excludes 1,2-bonded atoms and 1,3-bonded atoms, as well as 1,4-bonded atoms.

Atomic charges can make dominant contributions to the potential energy, especially for polar molecules and ionic compounds, and are critical to simulate the geometry, interaction energy, and the reactivity. The assignment of charges usually uses some heuristic approach, with different possible solutions.
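
A minimal sketch of the pairwise Coulomb sum with the bonded-pair exclusions described above: in a toy four-atom chain, the 1,2- and 1,3-bonded pairs are skipped, leaving only the 1,4 pair. The charges, geometry, and unit-conversion constant are illustrative assumptions, not values from any published force field.

```python
import itertools
import numpy as np

COULOMB_K = 332.06  # kcal*angstrom/(mol*e^2), illustrative unit conversion

def coulomb_energy(coords, charges, excluded):
    """Coulomb's law summed over all atom pairs, skipping the excluded
    bonded pairs as described in the text."""
    energy = 0.0
    for i, j in itertools.combinations(range(len(charges)), 2):
        if (i, j) in excluded:
            continue
        r = np.linalg.norm(coords[i] - coords[j])
        energy += COULOMB_K * charges[i] * charges[j] / r
    return energy

# Toy linear chain 0-1-2-3 (angstrom); excluding 1,2 and 1,3 pairs
# leaves only the 1,4 pair (atoms 0 and 3).
coords = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [4.5, 0, 0]])
charges = [0.2, -0.2, -0.2, 0.2]
excluded = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
print(f"{coulomb_energy(coords, charges, excluded):.3f} kcal/mol")
```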

Force fields for crystal systems

Atomistic interactions in crystal systems significantly deviate from those in molecular systems, e.g. of organic molecules. In crystal systems, multi-body interactions in particular are important and cannot be neglected if high accuracy of the force field is the aim. For crystal systems with covalent bonding, bond order potentials are usually used, e.g. Tersoff potentials. For metal systems, embedded atom potentials are usually used. Additionally, Drude model potentials have been developed, which describe a form of attachment of electrons to nuclei.

Parameterization

In addition to the functional form of the potentials, a force field consists of the parameters of these functions. Together, they specify the interactions on the atomistic level. The parametrization, i.e. the determination of the parameter values, is crucial for the accuracy and reliability of the force field. Different parametrization procedures have been developed for different substances, e.g. metals, ions, and molecules, and different parametrization strategies are usually used for different material types. In general, two main routes can be distinguished: using data or information from the atomistic level, e.g. from quantum mechanical calculations or spectroscopic data, or using data on macroscopic properties, e.g. the hardness or compressibility of a given material. Often a combination of these routes is used. Hence, one way or the other, the force field parameters are always determined in an empirical way. Nevertheless, the term 'empirical' is often used for force field parameters when macroscopic material property data was used for the fitting. Experimental data (microscopic and macroscopic) included in the fit comprise, for example, the enthalpy of vaporization, the enthalpy of sublimation, dipole moments, and various spectroscopic properties such as vibrational frequencies. Often, for molecular systems, quantum mechanical calculations in the gas phase are used for parametrizing intramolecular interactions, while intermolecular dispersive interactions are parametrized using macroscopic properties such as liquid densities. The assignment of atomic charges often follows quantum mechanical protocols with some heuristics, which can lead to significant deviation in representing specific properties.
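
As a toy example of the atomistic-level route, the following sketch fits a harmonic bond term to a hypothetical potential energy scan of the kind a quantum mechanical calculation might produce; the scan values are invented for illustration, and real parametrizations fit many coupled terms against much richer data.

```python
import numpy as np
from scipy.optimize import curve_fit

def harmonic_bond(r, k, r0):
    """Harmonic bond energy, E = (k/2)(r - r0)^2."""
    return 0.5 * k * (r - r0) ** 2

# Hypothetical reference scan along a C-C-like bond (angstrom, kcal/mol),
# standing in for quantum mechanical data points.
r_ref = np.array([1.43, 1.48, 1.53, 1.58, 1.63])
e_ref = np.array([3.2, 0.8, 0.0, 0.7, 3.0])

# Least-squares fit of the force constant and equilibrium length.
(k_fit, r0_fit), _ = curve_fit(harmonic_bond, r_ref, e_ref, p0=(600.0, 1.5))
print(f"fitted k = {k_fit:.0f} kcal/mol/A^2, r0 = {r0_fit:.3f} A")
```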

A large number of workflows and parametrization procedures have been employed in the past decades using different data and optimization strategies for determining the force field parameters. They differ significantly, which is also due to different focuses of different developments. The parameters for molecular simulations of biological macromolecules such as proteins, DNA, and RNA were often derived/transferred from observations for small organic molecules, which are more accessible for experimental studies and quantum calculations.

Atom types are defined for different elements as well as for the same elements in sufficiently different chemical environments. For example, an oxygen atom in water and an oxygen atom in a carbonyl functional group are classified as different force field types. Typical molecular force field parameter sets include values for atomic mass, atomic charge, and Lennard-Jones parameters for every atom type, as well as equilibrium values of bond lengths, bond angles, and dihedral angles. The bonded terms refer to pairs, triplets, and quadruplets of bonded atoms, and include values for the effective spring constant of each potential.
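
A minimal sketch of how such a parameter set can be organized by atom type follows; the type names and numerical values are invented for illustration and are not taken from any published force field.

```python
# Illustrative force field parameter set keyed by atom type; the same
# element (oxygen) gets different types in different chemical environments.
force_field = {
    "atom_types": {
        "OW": {"mass": 15.999, "charge": -0.83,
               "lj_sigma": 3.15, "lj_eps": 0.15},  # water oxygen
        "OC": {"mass": 15.999, "charge": -0.50,
               "lj_sigma": 2.96, "lj_eps": 0.21},  # carbonyl oxygen
    },
    # Bonded terms refer to pairs, triplets, and quadruplets of types.
    "bonds": {("C", "OC"): {"k": 570.0, "r0": 1.23}},
    "angles": {("HW", "OW", "HW"): {"k": 55.0, "theta0": 104.5}},
    "dihedrals": {("C", "C", "C", "C"): {"k": 1.4, "n": 3, "delta": 0.0}},
}
print(force_field["atom_types"]["OW"])
```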

Heuristic force field parametrization procedures have been very successful for many years, but they have recently been criticized because they are usually not fully automated and therefore subject to some subjectivity of the developers, which also brings problems regarding the reproducibility of the parametrization procedure.

Efforts to provide open-source codes and methods include OpenMM and OpenMD. The use of semi-automation or full automation, without input from chemical knowledge, is likely to increase inconsistencies at the level of atomic charges and in the assignment of remaining parameters, and is likely to dilute the interpretability and performance of parameters.

Force field databases

A large number of force fields have been published in the past decades, mostly in scientific publications. In recent years, some databases have attempted to collect, categorize, and make force fields digitally available. Different databases focus on different types of force fields. For example, the OpenKIM database focuses on interatomic potentials describing the individual interactions between specific elements. The TraPPE database focuses on transferable force fields of organic molecules (developed by the Siepmann group). The MolMod database focuses on molecular and ionic force fields (both component-specific and transferable).

Transferability and mixing function types

Functional forms and parameter sets have been defined by the developers of interatomic potentials and feature variable degrees of self-consistency and transferability. When functional forms of the potential terms vary or are mixed, the parameters from one interatomic potential function can typically not be used together with another interatomic potential function. In some cases, modifications can be made with minor effort, for example from 9-6 Lennard-Jones potentials to 12-6 Lennard-Jones potentials. Transfers from Buckingham potentials to harmonic potentials, or from embedded atom models to harmonic potentials, on the contrary, would require many additional assumptions and may not be possible.
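
The 9-6 versus 12-6 case can be made concrete. In the sketch below, the 9-6 form is written around its minimum position; even when the well depth is shared, the two curves differ in steepness and curvature, which is why parameters cannot simply be copied between forms. All parameter values are illustrative.

```python
def lj_12_6(r, epsilon, sigma):
    """Standard 12-6 Lennard-Jones potential."""
    sr = sigma / r
    return 4.0 * epsilon * (sr ** 12 - sr ** 6)

def lj_9_6(r, epsilon, r_min):
    """A common 9-6 form written around the minimum position r_min;
    its minimum value is -epsilon, like the 12-6 form's."""
    rr = r_min / r
    return epsilon * (2.0 * rr ** 9 - 3.0 * rr ** 6)

epsilon, sigma = 0.25, 3.4          # illustrative values
r_min = sigma * 2 ** (1.0 / 6.0)    # minimum position of the 12-6 form
for r in (3.4, 3.8, 4.5):
    print(f"r = {r}: 12-6 {lj_12_6(r, epsilon, sigma):+.4f}, "
          f"9-6 {lj_9_6(r, epsilon, r_min):+.4f}")
```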

In many cases, force fields can be straightforwardly combined; often, however, additional specifications and assumptions are required.

Limitations

All interatomic potentials are based on approximations and experimental data, and are therefore often termed empirical. Their performance varies from higher accuracy than density functional theory (DFT) calculations, with access to systems and time scales that are millions of times larger, to random guesses, depending on the force field. The use of accurate representations of chemical bonding, combined with reproducible experimental data and validation, can lead to lasting interatomic potentials of high quality with far fewer parameters and assumptions in comparison to DFT-level quantum methods.

Possible limitations concern atomic charges, also called point charges. Most force fields rely on point charges to reproduce the electrostatic potential around molecules, which works less well for anisotropic charge distributions. On the other hand, point charges have a clear interpretation, and virtual electrons can be added to capture essential features of the electronic structure, such as additional polarizability in metallic systems to describe the image potential, internal multipole moments in π-conjugated systems, and lone pairs in water. Electronic polarization of the environment may be better included by using polarizable force fields or by using a macroscopic dielectric constant. However, applying one value of the dielectric constant is a coarse approximation in the highly heterogeneous environments of proteins, biological membranes, minerals, or electrolytes.

All types of van der Waals forces are also strongly environment-dependent, because these forces originate from interactions of induced and "instantaneous" dipoles (see Intermolecular force). The original Fritz London theory of these forces applies only in a vacuum. A more general theory of van der Waals forces in condensed media was developed by A. D. McLachlan in 1963 and included London's original approach as a special case. The McLachlan theory predicts that van der Waals attractions in media are weaker than in vacuum and follow the like-dissolves-like rule, which means that different types of atoms interact more weakly than identical types of atoms. This is in contrast to the combinatorial rules or the Slater-Kirkwood equation applied in the development of the classical force fields. The combinatorial rules state that the interaction energy of two dissimilar atoms (e.g., C...N) is an average of the interaction energies of the corresponding identical atom pairs (i.e., C...C and N...N). According to McLachlan's theory, the interactions of particles in media can even be fully repulsive, as observed for liquid helium; however, the lack of vaporization and the presence of a freezing point contradict a theory of purely repulsive interactions. Measurements of attractive forces between different materials (Hamaker constant) have been explained by Jacob Israelachvili. For example, "the interaction between hydrocarbons across water is about 10% of that across vacuum". Such effects are represented in molecular dynamics through pairwise interactions that are spatially more dense in the condensed phase relative to the gas phase and reproduced once the parameters for all phases are validated to reproduce chemical bonding, density, and cohesive/surface energy.
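
For reference, the combinatorial rules mentioned above are commonly implemented as the Lorentz-Berthelot rules: an arithmetic mean of atomic sizes and a geometric mean of well depths. The sketch below applies them to illustrative C and N parameters; McLachlan's theory implies that the true unlike-pair attraction in condensed media can be weaker than such averages suggest.

```python
import math

def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
    """Lorentz-Berthelot combination rules for unlike atom pairs:
    arithmetic mean of sizes, geometric mean of well depths."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

# Illustrative C and N Lennard-Jones parameters (angstrom, kcal/mol).
sigma_cn, eps_cn = lorentz_berthelot(3.40, 0.086, 3.25, 0.170)
print(f"C...N: sigma = {sigma_cn:.3f} A, epsilon = {eps_cn:.4f} kcal/mol")
```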

Limitations have been strongly felt in protein structure refinement. The major underlying challenge is the huge conformation space of polymeric molecules, which grows beyond current computational feasibility for chains of more than about 20 monomers. Participants in the Critical Assessment of protein Structure Prediction (CASP) did not try to refine their models, to avoid "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Force fields have been applied successfully for protein structure refinement in different X-ray crystallography and NMR spectroscopy applications, especially using the program XPLOR. However, the refinement is driven mainly by a set of experimental constraints, and the interatomic potentials serve mainly to remove interatomic hindrances. The results of calculations were practically the same with the rigid-sphere potentials implemented in the program DYANA (calculations from NMR data) or with programs for crystallographic refinement that use no energy functions at all. These shortcomings are related to the interatomic potentials and to the inability to sample the conformation space of large molecules effectively. Thereby, the development of parameters to tackle such large-scale problems requires new approaches. A specific problem area is homology modeling of proteins. Meanwhile, alternative empirical scoring functions have been developed for ligand docking, protein folding, homology model refinement, computational protein design, and modeling of proteins in membranes.

It has also been argued that some protein force fields operate with energies that are irrelevant to protein folding or ligand binding. The parameters of protein force fields reproduce the enthalpy of sublimation, i.e., the energy of evaporation of molecular crystals. However, protein folding and ligand binding are thermodynamically closer to crystallization, or liquid-solid transitions, as these processes represent the freezing of mobile molecules in condensed media. Thus, free energy changes during protein folding or ligand binding are expected to represent a combination of an energy similar to the heat of fusion (the energy absorbed during melting of molecular crystals), a conformational entropy contribution, and a solvation free energy. The heat of fusion is significantly smaller than the enthalpy of sublimation. Hence, the potentials describing protein folding or ligand binding need more consistent parameterization protocols, e.g., as described for IFF. Indeed, the energies of H-bonds in proteins are ~ -1.5 kcal/mol when estimated from protein engineering or alpha-helix-to-coil transition data, but the same energies estimated from the sublimation enthalpy of molecular crystals are -4 to -6 kcal/mol, which relates to re-forming existing hydrogen bonds rather than forming hydrogen bonds from scratch. The depths of modified Lennard-Jones potentials derived from protein engineering data were also smaller than in typical potential parameters and followed the like-dissolves-like rule, as predicted by McLachlan theory.

Force fields available in literature

Different force fields are designed for different purposes:

Classical

  • AMBER (Assisted Model Building and Energy Refinement) – widely used for proteins and DNA.
  • CFF (Consistent Force Field) – a family of force fields adapted to a broad variety of organic compounds, includes force fields for polymers, metals, etc. CFF was developed by Arieh Warshel, Lifson, and coworkers as a general method for unifying studies of energies, structures, and vibration of general molecules and molecular crystals. The CFF program, developed by Levitt and Warshel, is based on the Cartesian representation of all the atoms, and it served as the basis for many subsequent simulation programs.
  • CHARMM (Chemistry at HARvard Molecular Mechanics) – originally developed at Harvard, widely used for both small molecules and macromolecules
  • COSMOS-NMR – hybrid QM/MM force field adapted to various inorganic compounds, organic compounds, and biological macromolecules, including semi-empirical calculation of atomic charges and NMR properties. COSMOS-NMR is optimized for NMR-based structure elucidation and implemented in the COSMOS molecular modelling package.
  • CVFF – also used broadly for small molecules and macromolecules.
  • ECEPP – the first force field for polypeptide molecules, developed by F.A. Momany, H.A. Scheraga and colleagues. ECEPP was developed specifically for the modeling of peptides and proteins. It uses fixed geometries of amino acid residues to simplify the potential energy surface; thus, energy minimization is conducted in the space of protein torsion angles. Both MM2 and ECEPP include potentials for H-bonds and torsion potentials for describing rotations around single bonds. ECEPP/3 was implemented (with some modifications) in Internal Coordinate Mechanics and FANTOM.
  • GROMOS (GROningen MOlecular Simulation) – a force field that comes as part of the GROMOS software, a general-purpose molecular dynamics computer simulation package for the study of biomolecular systems. GROMOS force field A-version has been developed for application to aqueous or apolar solutions of proteins, nucleotides, and sugars. A B-version to simulate gas phase isolated molecules is also available.
  • IFF (Interface Force Field) – covers metals, minerals, 2D materials, and polymers. It uses 12-6 LJ and 9-6 LJ interactions. IFF was developed for compounds across the periodic table. It assigns consistent charges, utilizes standard conditions as a reference state, reproduces structures, energies, and energy derivatives, and quantifies limitations for all included compounds. The Interface force field (IFF) assumes one single energy expression for all compounds across the periodic table (with 9-6 and 12-6 LJ options). IFF is in most parts non-polarizable, but also comprises polarizable parts, e.g. for some metals (Au, W) and pi-conjugated molecules.
  • MMFF (Merck Molecular Force Field) – developed at Merck for a broad range of molecules.
  • MM2 was developed by Norman Allinger mainly for conformational analysis of hydrocarbons and other small organic molecules. It is designed to reproduce the equilibrium covalent geometry of molecules as precisely as possible. It implements a large set of parameters that is continuously refined and updated for many different classes of organic compounds (MM3 and MM4).
  • OPLS (Optimized Potential for Liquid Simulations) (variants include OPLS-AA, OPLS-UA, OPLS-2001, OPLS-2005, OPLS3e, OPLS4) – developed by William L. Jorgensen at the Yale University Department of Chemistry.
  • QCFF/PI – a general force field for conjugated molecules.
  • UFF (Universal Force Field) – a general force field with parameters for the full periodic table up to and including the actinoids, developed at Colorado State University. Its reliability is known to be poor due to a lack of validation and interpretation of the parameters for nearly all claimed compounds, especially metals and inorganic compounds.
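
Several entries above differ in the functional form of the van der Waals term; IFF, for instance, offers both 12-6 and 9-6 Lennard-Jones options. Below is a minimal sketch of the two forms in the (ε, r0) convention, where r0 is the minimum-energy separation and ε the well depth; the numeric parameters are illustrative, not from any published set:

```python
def lj_12_6(r, epsilon, r0):
    """12-6 Lennard-Jones in the (epsilon, r0) convention:
    E(r) = epsilon * [(r0/r)**12 - 2*(r0/r)**6], minimum of -epsilon at r = r0."""
    x = r0 / r
    return epsilon * (x**12 - 2 * x**6)

def lj_9_6(r, epsilon, r0):
    """One common 9-6 Lennard-Jones convention (class-II style):
    E(r) = epsilon * [2*(r0/r)**9 - 3*(r0/r)**6], minimum of -epsilon at r = r0."""
    x = r0 / r
    return epsilon * (2 * x**9 - 3 * x**6)

# Illustrative parameters only (not from any published parameter set):
eps, r0 = 0.1, 3.5  # kcal/mol, Angstrom
for r in (3.0, 3.5, 4.0, 5.0):
    print(f"r = {r:4.1f} A   12-6: {lj_12_6(r, eps, r0):+7.4f}"
          f"   9-6: {lj_9_6(r, eps, r0):+7.4f} kcal/mol")
```

Both forms share the same well depth and minimum position; the 9-6 form simply has a softer repulsive wall at short range.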

Polarizable

Several force fields explicitly capture polarizability, where a particle's effective charge can be influenced by electrostatic interactions with its neighbors. Core-shell models are common; they consist of a positively charged core particle, representing the polarizable atom, and a negatively charged shell particle attached to the core through a spring-like harmonic oscillator potential. Recent examples include polarizable models with virtual electrons that reproduce image charges in metals and polarizable biomolecular force fields.
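
A minimal sketch of the core-shell (Drude oscillator) idea just described: a shell of charge q_D is tethered to the core by a spring of force constant k_D, so a uniform external field E displaces the shell by d = q_D·E/k_D, inducing a dipole μ = q_D·d and giving an effective polarizability α = q_D²/k_D. All numeric values below are illustrative:

```python
# Minimal core-shell (Drude oscillator) sketch.
# A shell of charge q_d is bound to the core by a spring of constant k_d.
# In a uniform field E the shell relaxes to d = q_d*E/k_d, giving an
# induced dipole mu = q_d*d and an effective polarizability alpha = q_d**2/k_d.

q_d = -1.0    # shell charge (e) -- illustrative value
k_d = 1000.0  # spring constant (kcal/mol/A^2) -- illustrative value
E = 0.05      # uniform external field (kcal/mol/(e*A)) -- illustrative value

d = q_d * E / k_d       # shell displacement at mechanical equilibrium
mu = q_d * d            # induced dipole moment (e*A)
alpha = q_d**2 / k_d    # effective polarizability

# Energy balance: spring penalty 0.5*k*d^2 versus field coupling -q*E*d
u_spring = 0.5 * k_d * d**2
u_field = -q_d * E * d
print(f"displacement d = {d:.6f} A, induced dipole mu = {mu:.6f} e*A")
print(f"alpha = {alpha:.6f}, net energy = {u_spring + u_field:.8f} kcal/mol")
```

At equilibrium the net energy equals -0.5·α·E², the familiar induction energy of a polarizable site.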

  • AMBER – polarizable force field developed by Jim Caldwell and coworkers.
  • AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) – force field developed by Pengyu Ren (University of Texas at Austin) and Jay W. Ponder (Washington University). The AMOEBA force field is gradually moving to the more physics-rich AMOEBA+.
  • CHARMM – polarizable force field developed by S. Patel (University of Delaware) and C. L. Brooks III (University of Michigan). Based on the classical Drude oscillator developed by Alexander MacKerell (University of Maryland, Baltimore) and Benoit Roux (University of Chicago).
  • CFF/ind and ENZYMIX – the first polarizable force fields, which have subsequently been used in many applications to biological systems.
  • COSMOS-NMR (Computer Simulation of Molecular Structure) – developed by Ulrich Sternberg and coworkers. This hybrid QM/MM force field enables explicit quantum-mechanical calculation of electrostatic properties using localized bond orbitals with a fast BPT formalism. Atomic charges can fluctuate at each molecular dynamics step.
  • DRF90 – developed by P. Th. van Duijnen and coworkers.
  • NEMO (Non-Empirical Molecular Orbital) – procedure developed by Gunnar Karlström and coworkers at Lund University (Sweden).
  • PIPF – The polarizable intermolecular potential for fluids is an induced point-dipole force field for organic liquids and biopolymers. The molecular polarization is based on Thole's interacting dipole (TID) model and was developed by Jiali Gao at the University of Minnesota.
  • Polarizable Force Field (PFF) – developed by Richard A. Friesner and coworkers.
  • SP-basis Chemical Potential Equalization (CPE) – approach developed by R. Chelli and P. Procacci.
  • PHAST – polarizable potential developed by Chris Cioce and coworkers.
  • ORIENT – procedure developed by Anthony J. Stone (Cambridge University) and coworkers.
  • Gaussian Electrostatic Model (GEM) – a polarizable force field based on Density Fitting developed by Thomas A. Darden and G. Andrés Cisneros at NIEHS; and Jean-Philip Piquemal at Paris VI University.
  • Atomistic Polarizable Potential for Liquids, Electrolytes, and Polymers (APPLE&P) – developed by Oleg Borodin, Dmitry Bedrov, and coworkers; distributed by Wasatch Molecular Incorporated.
  • Polarizable procedure based on the Kim-Gordon approach, developed by Jürg Hutter and coworkers (University of Zürich).
  • GFN-FF (Geometry, Frequency, and Noncovalent Interaction Force-Field) – a completely automated partially polarizable generic force-field for the accurate description of structures and dynamics of large molecules across the periodic table developed by Stefan Grimme and Sebastian Spicher at the University of Bonn.
  • WASABe v1.0 PFF (for Water, orgAnic Solvents, And Battery electrolytes) – an isotropic atomic-dipole polarizable force field for the accurate description of battery electrolytes, in terms of thermodynamic and dynamic properties, at high lithium salt concentrations in sulfonate solvents; developed by Oleg Starovoytov.
  • XED (eXtended Electron Distribution) - a polarizable force-field created as a modification of an atom-centered charge model, developed by Andy Vinter. Partially charged monopoles are placed surrounding atoms to simulate more geometrically accurate electrostatic potentials at a fraction of the expense of using quantum mechanical methods. Primarily used by software packages supplied by Cresset Biomolecular Discovery.

Reactive

  • EVB (Empirical Valence Bond) – reactive force field introduced by Warshel and coworkers for modeling chemical reactions in different environments. The EVB facilitates calculating activation free energies in condensed phases and in enzymes (a two-state sketch follows this list).
  • ReaxFF – reactive force field (interatomic potential) developed by Adri van Duin, William Goddard, and coworkers. It is slower than classical MD (~50x), needs parameter sets with specific validation, and has no validation for surface and interfacial energies; its parameters are non-interpretable. It can be used for atomistic-scale dynamical simulations of chemical reactions. Parallelized ReaxFF allows reactive simulations on >>1,000,000 atoms on large supercomputers.
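
A minimal two-state EVB sketch, under standard assumptions: the ground-state surface is the lowest eigenvalue of a 2×2 Hamiltonian whose diagonal elements are the reactant and product diabatic (force-field) energies and whose off-diagonal coupling H12 mixes them. All numbers below are illustrative:

```python
import math

def evb_ground_state(e1, e2, h12):
    """Lowest eigenvalue of the 2x2 EVB Hamiltonian [[e1, h12], [h12, e2]]."""
    mean = 0.5 * (e1 + e2)
    gap = 0.5 * (e1 - e2)
    return mean - math.sqrt(gap**2 + h12**2)

# Illustrative diabatic states along a reaction coordinate x (kcal/mol):
h12 = 10.0  # constant coupling, a common simple choice
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    e_reactant = 50.0 * x**2                  # harmonic diabatic state 1
    e_product = 50.0 * (x - 1.0)**2 + 5.0     # shifted diabatic state 2
    eg = evb_ground_state(e_reactant, e_product, h12)
    print(f"x = {x:4.2f}  Eg = {eg:7.2f} kcal/mol")
```

The coupling H12 smooths the crossing of the two diabatic surfaces into a single barrier, which is what allows activation free energies to be extracted.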

Coarse-grained

  • DPD (Dissipative particle dynamics) – a method commonly applied in chemical engineering, typically used for studying the hydrodynamics of simple and complex fluids that require consideration of time and length scales larger than those accessible to classical molecular dynamics. The potential was originally proposed by Hoogerbrugge and Koelman, with later modifications by Español and Warren. The state of the art was well documented in a CECAM workshop in 2008. More recently, work has been undertaken to capture some of the chemical subtleties relevant to solutions, leading to automated parameterisation of the DPD interaction potentials against experimental observables (a sketch of the standard DPD pair forces follows this list).
  • MARTINI – a coarse-grained potential developed by Marrink and coworkers at the University of Groningen, initially developed for molecular dynamics simulations of lipids,[6] later extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parameterized with the aim of reproducing thermodynamic properties.
  • SAFT – A top-down coarse-grained model developed in the Molecular Systems Engineering group at Imperial College London fitted to liquid phase densities and vapor pressures of pure compounds by using the SAFT equation of state.
  • SIRAH – a coarse-grained force field developed by Pantano and coworkers of the Biomolecular Simulations Group, Institut Pasteur de Montevideo, Uruguay; developed for molecular dynamics of water, DNA, and proteins. Freely available for the AMBER and GROMACS packages.
  • VAMM (Virtual atom molecular mechanics) – a coarse-grained force field developed by Korkut and Hendrickson for molecular mechanics calculations, such as large-scale conformational transitions, based on the virtual interactions of C-alpha atoms. It is a knowledge-based force field, formulated to capture features dependent on secondary structure and on residue-specific contact information in proteins.
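
For the DPD entry above, here is a minimal sketch of the standard pairwise forces in reduced units: a soft conservative repulsion, a dissipative drag, and a random force tied together by the fluctuation-dissipation relation σ² = 2γk_BT. The parameter values are common textbook defaults, used here for illustration only:

```python
import math
import random

def dpd_pair_force(r_vec, v_vec, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01):
    """Standard DPD pair force on particle i from particle j (reduced units).
    r_vec = r_i - r_j, v_vec = v_i - v_j. Returns a 3-component tuple."""
    r = math.sqrt(sum(c * c for c in r_vec))
    if r >= rc or r == 0.0:
        return (0.0, 0.0, 0.0)
    e = tuple(c / r for c in r_vec)        # unit vector along r_vec
    w = 1.0 - r / rc                       # weight w_R(r); w_D = w_R**2
    sigma = math.sqrt(2.0 * gamma * kT)    # fluctuation-dissipation relation
    theta = random.gauss(0.0, 1.0)         # unit-variance Gaussian noise
    f_c = a * w                                                      # conservative
    f_d = -gamma * w * w * sum(ei * vi for ei, vi in zip(e, v_vec))  # dissipative
    f_r = sigma * w * theta / math.sqrt(dt)                          # random
    mag = f_c + f_d + f_r
    return tuple(mag * ei for ei in e)

# Example: two particles 0.5*rc apart, approaching head-on:
print(dpd_pair_force((0.5, 0.0, 0.0), (-1.0, 0.0, 0.0)))
```

Because all three forces are soft and pairwise central, momentum is conserved and hydrodynamics emerges correctly at large scales, which is the method's main appeal.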

Machine learning

  • MACE (Multi Atomic Cluster Expansion) is a highly accurate machine learning force field architecture that combines the rigorous many-body expansion of the total potential energy with rotationally equivariant representations of the system.
  • ANI (ANAKIN-ME: Accurate NeurAl networK engINe for Molecular Energies) is a transferable neural network potential, built from atomic environment vectors, able to provide DFT-level accuracy in terms of energies.
  • FFLUX (originally QCTFF) – a set of trained Kriging models which operate together to provide a molecular force field trained on Atoms in Molecules or quantum chemical topology energy terms, including electrostatic, exchange, and electron correlation.
  • TensorMol – a mixed model in which a neural network provides a short-range potential, while more traditional potentials add screened long-range terms.
  • Δ-ML – not a force field method per se, but a model that adds learned corrective energy terms to approximate, relatively cheap quantum chemical methods in order to reach the accuracy of a higher-level, more computationally expensive quantum chemical model (a sketch follows this list).
  • SchNet – a neural network utilising continuous-filter convolutional layers to predict chemical properties and potential energy surfaces.
  • PhysNet – a neural-network-based energy function to predict energies, forces, and (fluctuating) partial charges.
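
A minimal Δ-ML sketch, assuming a cheap baseline method and a handful of reference energies from a higher-level method. The two "methods" below are toy stand-ins (a real application would pair, e.g., a semi-empirical method with coupled cluster), and the correction model is a simple linear fit, just to show the structure of the approach:

```python
# Minimal Delta-ML sketch: learn only the difference between a cheap method
# and an expensive reference, then predict E_pred = E_cheap + Delta_model.
# The "methods" below are toy stand-ins, not real quantum chemistry.

def e_cheap(x):      # toy low-level method
    return x**2

def e_reference(x):  # toy high-level method
    return x**2 + 0.3 * x + 0.1

# Fit a linear Delta model to the residuals of a few reference calculations:
train_x = [0.0, 0.5, 1.0, 1.5]
residuals = [e_reference(x) - e_cheap(x) for x in train_x]
n = len(train_x)
mx = sum(train_x) / n
my = sum(residuals) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, residuals))
         / sum((x - mx)**2 for x in train_x))
intercept = my - slope * mx

def e_delta_ml(x):   # cheap method plus learned correction
    return e_cheap(x) + slope * x + intercept

for x in (0.25, 2.0):
    print(f"x={x}: cheap={e_cheap(x):.3f}  "
          f"Delta-ML={e_delta_ml(x):.3f}  ref={e_reference(x):.3f}")
```

The appeal of the approach is that the residual between two levels of theory is usually far smoother, and thus far easier to learn, than the total energy itself.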

Water

The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW. Other solvents and methods of solvent representation are also applied within computational chemistry and physics; these are termed solvent models.
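
As a concrete example of what a water model specifies, the sketch below lists the standard TIP3P parameters (rigid geometry, three point charges, a single oxygen-centered Lennard-Jones site) and evaluates the interaction energy between two such waters; the dimer coordinates at the end are made up for illustration:

```python
import math

# Standard TIP3P water model parameters (widely tabulated):
Q_O, Q_H = -0.834, 0.417   # partial charges (e)
SIGMA_OO = 3.15061         # O-O Lennard-Jones sigma (Angstrom)
EPS_OO = 0.1521            # O-O Lennard-Jones epsilon (kcal/mol)
R_OH = 0.9572              # rigid O-H bond length (Angstrom)
THETA_HOH = 104.52         # rigid H-O-H angle (degrees)
KE = 332.0636              # Coulomb constant (kcal*A/(mol*e^2))

def tip3p_pair_energy(water1, water2):
    """Interaction energy of two rigid TIP3P waters (kcal/mol).
    Each water is [(x,y,z) of O, H1, H2]; Coulomb runs over all 9 site
    pairs, while the Lennard-Jones term acts only between the oxygens."""
    charges = [Q_O, Q_H, Q_H]
    e = 0.0
    for qi, ri in zip(charges, water1):
        for qj, rj in zip(charges, water2):
            e += KE * qi * qj / math.dist(ri, rj)
    r_oo = math.dist(water1[0], water2[0])
    e += 4 * EPS_OO * ((SIGMA_OO / r_oo)**12 - (SIGMA_OO / r_oo)**6)
    return e

# Hypothetical coordinates (Angstrom), roughly a hydrogen-bonded dimer:
w1 = [(0.0, 0.0, 0.0), (0.9572, 0.0, 0.0), (-0.2400, 0.9266, 0.0)]
w2 = [(2.8, 0.0, 0.0), (3.7572, 0.0, 0.0), (2.56, -0.9266, 0.0)]
print(f"Pair energy: {tip3p_pair_energy(w1, w2):.2f} kcal/mol")
```

Other water models differ mainly in the number and placement of charge sites (e.g., TIP4P's off-oxygen charge site) and in whether the geometry is rigid or flexible.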

Modified amino acids

  • Forcefield_PTM – An AMBER-based forcefield and webtool for modeling common post-translational modifications of amino acids in proteins developed by Chris Floudas and coworkers. It uses the ff03 charge model and has several side-chain torsion corrections parameterized to match the quantum chemical rotational surface.
  • Forcefield_NCAA - An AMBER-based forcefield and webtool for modeling common non-natural amino acids in proteins in condensed-phase simulations using the ff03 charge model. The charges have been reported to be correlated with hydration free energies of corresponding side-chain analogs.

Other

  • LFMM (Ligand Field Molecular Mechanics) – functions for the coordination sphere around transition metals based on the angular overlap model (AOM). Implemented in the Molecular Operating Environment (MOE) as DommiMOE and in Tinker.
  • VALBOND – a function for angle bending that is based on valence bond theory and works for large angular distortions, hypervalent molecules, and transition metal complexes. It can be incorporated into other force fields such as CHARMM and UFF.
