
Wednesday, June 2, 2021

Mathematical and theoretical biology

Yellow chamomile head showing the Fibonacci numbers in spirals consisting of 21 (blue) and 13 (aqua). Such arrangements have been noticed since the Middle Ages and can be used to make mathematical models of a wide variety of plants.

Mathematical and theoretical biology, or biomathematics, is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of biological systems, as opposed to experimental biology, which conducts experiments to test and validate scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology, while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.

Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It can be useful in both theoretical and practical research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models.

Because of the complexity of living systems, theoretical biology employs several fields of mathematics, and has contributed to the development of new techniques.

History

Early history

Mathematics has been used in biology as early as the 13th century, when Fibonacci used the famous Fibonacci series to describe a growing population of rabbits. In the 18th century Daniel Bernoulli applied mathematics to describe the effect of smallpox on the human population. Thomas Malthus' 1798 essay on the growth of the human population was based on the concept of exponential growth. Pierre François Verhulst formulated the logistic growth model in 1836.
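
As a concrete illustration of the two growth laws mentioned above, the short sketch below compares Malthusian (exponential) growth with Verhulst's logistic model. The closed-form solutions are standard; the growth rate, carrying capacity and initial population are arbitrary illustrative values, not figures from the text.

    # Illustrative sketch: Malthusian (exponential) growth vs. Verhulst's logistic model.
    # The growth rate r, carrying capacity K, and initial population N0 are arbitrary choices.
    import numpy as np

    r, K, N0 = 0.5, 1000.0, 10.0      # per-capita growth rate, carrying capacity, initial size
    t = np.linspace(0.0, 20.0, 201)   # time points

    # Malthus: dN/dt = r*N  ->  N(t) = N0 * exp(r*t)   (unbounded growth)
    malthus = N0 * np.exp(r * t)

    # Verhulst: dN/dt = r*N*(1 - N/K)  ->  N(t) = K / (1 + ((K - N0)/N0) * exp(-r*t))
    logistic = K / (1.0 + ((K - N0) / N0) * np.exp(-r * t))

    print(f"t=20: exponential model gives {malthus[-1]:.0f}, logistic model saturates near {logistic[-1]:.0f}")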

Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be, unless one includes Malthus's discussion of the effects of population growth that influenced Charles Darwin: Malthus argued that growth would be exponential (he uses the word "geometric") while resources (the environment's carrying capacity) could only grow arithmetically.

The term "theoretical biology" was first used by Johannes Reinke in 1901. One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson, and other early pioneers include Ronald Fisher, Hans Leo Przibram, Nicolas Rashevsky and Vito Volterra.

Recent growth

Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include:

  • The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools
  • Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology
  • An increase in computing power, which facilitates calculations and simulations not previously possible
  • An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research

Areas of research

Several areas of specialized research in mathematical and theoretical biology are presented concisely in the following subsections. Many of the included examples are characterised by highly complex and nonlinear mechanisms, as it is increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models.

Abstract relational biology

Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.

Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.

Algebraic biology

Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes.

Complex systems biology

An elaboration of systems biology aimed at understanding the more complex life processes has been developed since 1970 in connection with molecular set theory, relational biology and algebraic biology.

Computer models and automata theory

A monograph on this topic summarizes an extensive amount of published research in this area up to 1986, including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics, cancer modelling, neural nets, genetic networks, abstract categories in relational biology, metabolic-replication systems, category theory applications in biology and medicine, automata theory, cellular automata, tessellation models and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories.

Modeling cell and molecular biology

This area has received a boost due to the growing importance of molecular biology.

  • Mechanics of biological tissues
  • Theoretical enzymology and enzyme kinetics
  • Cancer modelling and simulation
  • Modelling the movement of interacting cell populations
  • Mathematical modelling of scar tissue formation
  • Mathematical modelling of intracellular dynamics
  • Mathematical modelling of the cell cycle
  • Mathematical modelling of apoptosis

Modelling physiological systems

Computational neuroscience

Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.

Evolutionary biology

Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology.

Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics. Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic.
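
To make the stochastic character of such models concrete, the sketch below is a minimal Wright–Fisher-style simulation of genetic drift at a single biallelic locus. The population size, initial allele frequency and number of generations are illustrative assumptions, not values from the text.

    # Minimal Wright–Fisher sketch: stochastic change of an allele frequency under pure drift.
    # N (diploid population size), p0 (initial allele frequency) and the generation count are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N, p, generations = 100, 0.5, 200   # 2N gene copies are resampled each generation

    trajectory = [p]
    for _ in range(generations):
        # Each of the 2N copies in the next generation is drawn independently
        # from the current allele frequency (binomial sampling = genetic drift).
        copies = rng.binomial(2 * N, p)
        p = copies / (2 * N)
        trajectory.append(p)

    print(f"final allele frequency after {generations} generations: {trajectory[-1]:.2f}")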

Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798 when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
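
The Lotka–Volterra predator–prey equations mentioned above can be written down and integrated in a few lines. The sketch below uses a simple fixed-step Euler scheme; the coefficients and initial densities are illustrative assumptions.

    # Sketch of the Lotka–Volterra predator–prey equations integrated with a simple Euler step.
    # The coefficients a, b, c, d and the initial densities are illustrative assumptions.
    a, b, c, d = 1.0, 0.1, 1.5, 0.075   # prey growth, predation, predator death, conversion
    x, y = 10.0, 5.0                     # initial prey and predator densities
    dt, steps = 0.001, 50000

    for _ in range(steps):
        dx = a * x - b * x * y           # dx/dt = a*x - b*x*y
        dy = d * x * y - c * y           # dy/dt = d*x*y - c*y
        x += dx * dt
        y += dy * dt

    print(f"prey = {x:.1f}, predators = {y:.1f} after {steps*dt:.0f} time units")

A fixed-step Euler scheme slowly drifts off the closed orbits of this system, so a production model would typically use an adaptive integrator (for example scipy.integrate.solve_ivp), but the qualitative predator–prey oscillation is already visible with this sketch.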

In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.
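
As a small worked example of selection acting directly on phenotypes, the sketch below iterates the replicator equation for the classic Hawk–Dove game. The payoff matrix follows the usual textbook construction; the resource value V and fight cost C are illustrative assumptions.

    # Sketch of replicator dynamics for the Hawk–Dove game from evolutionary game theory.
    # The resource value V and fight cost C are illustrative assumptions; with C > V the
    # classical prediction is a mixed equilibrium with a hawk fraction of V/C.
    V, C = 2.0, 4.0
    payoff = {("H", "H"): (V - C) / 2, ("H", "D"): V,
              ("D", "H"): 0.0,         ("D", "D"): V / 2}

    p = 0.1          # initial fraction of hawks
    dt = 0.01
    for _ in range(10000):
        fit_hawk = p * payoff[("H", "H")] + (1 - p) * payoff[("H", "D")]
        fit_dove = p * payoff[("D", "H")] + (1 - p) * payoff[("D", "D")]
        mean_fit = p * fit_hawk + (1 - p) * fit_dove
        p += dt * p * (fit_hawk - mean_fit)   # replicator equation: dp/dt = p*(f_H - f_mean)

    print(f"hawk fraction converges to {p:.2f} (V/C = {V/C:.2f})")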

Mathematical biophysics

The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.

The following is a list of mathematical descriptions and their assumptions.

Deterministic processes (dynamical systems)

A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space.

Stochastic processes (random dynamical systems)

A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution.
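
The two descriptions above can be contrasted on the same toy reaction, a single molecular species X degrading at rate k. In the sketch below, the deterministic model is the usual exponential-decay ODE for the mean, while the stochastic version is a simple Gillespie-style simulation in which the molecule count is a random variable; the rate constant and initial count are illustrative assumptions.

    # Contrast of the two descriptions above for a single degradation reaction X -> 0.
    # The rate constant k and initial count are illustrative assumptions.
    import math
    import random

    k, n0, t_end = 0.1, 50, 30.0

    # Deterministic: dx/dt = -k*x  ->  x(t) = n0 * exp(-k*t), always the same trajectory.
    x_det = n0 * math.exp(-k * t_end)

    # Stochastic: exponential waiting times between individual degradation events.
    random.seed(1)
    n, t = n0, 0.0
    while n > 0:
        t += random.expovariate(k * n)   # time to the next event (propensity = k*n)
        if t > t_end:
            break
        n -= 1

    print(f"deterministic mean at t={t_end}: {x_det:.1f}; one stochastic realization: {n} molecules")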

Spatial modelling

One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society.
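
In the spirit of Turing's reaction–diffusion mechanism, the sketch below integrates a one-dimensional two-species system. The Gray–Scott kinetics used here are a standard illustrative choice rather than the equations of Turing's paper; the diffusion coefficients, feed and kill rates are commonly used demonstration values, and whether a spatial pattern develops depends on these parameters.

    # Minimal 1-D reaction–diffusion sketch in the spirit of Turing's mechanism.
    # Gray–Scott kinetics with commonly used illustrative parameters (not from Turing's paper):
    #   du/dt = Du * u_xx - u*v^2 + F*(1 - u)
    #   dv/dt = Dv * v_xx + u*v^2 - (F + k)*v
    import numpy as np

    n, steps = 200, 10000
    Du, Dv, F, k = 0.16, 0.08, 0.060, 0.062
    u = np.ones(n)
    v = np.zeros(n)
    u[n//2 - 5:n//2 + 5] = 0.50          # small perturbation in the middle
    v[n//2 - 5:n//2 + 5] = 0.25

    def laplacian(a):
        # second difference with periodic boundaries (time step of 1 absorbed into the update)
        return np.roll(a, 1) + np.roll(a, -1) - 2 * a

    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v

    print("final v profile (sampled):", np.round(v[::20], 2))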

Mathematical methods

A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur.

Molecular set theory

Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine. In a more general sense, MST is the theory of molecular categories, defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and to the mathematical formulation of clinical biochemistry problems, i.e. of pathological biochemical changes of interest to physiology, clinical biochemistry and medicine.

Organizational biology

Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea.

For example, abstract relational biology (ARB), introduced above, studies general relational models of complex biological systems that abstract away specific morphological or anatomical structures; Rosen's (M,R)-systems are among its simplest models of cellular and organismal organization.

Model example: the cell cycle

The eukaryotic cell cycle is very complex and is one of the most studied topics, since its misregulation leads to cancers. It is a good example of a mathematical model, as it deals with relatively simple calculus but gives valid results. Two research groups have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006).

By means of a system of ordinary differential equations these models show the change in time (dynamical system) of the protein inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process).

To obtain these equations, an iterative series of steps must be performed. First, the several models and observations are combined to form a consensus diagram, and the appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis–Menten kinetics for enzyme–substrate reactions and Goldbeter–Koshland kinetics for ultrasensitive transcription factors. Afterwards, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size.

To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments.
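
The sketch below illustrates this stepping procedure on a deliberately simple toy system: a single cyclin-like protein synthesized at a constant rate and degraded by an enzyme whose activity is an ultrasensitive (Goldbeter–Koshland) function of the protein itself. The kinetic laws are the ones named above, but the wiring and all parameter values are illustrative assumptions, not those of the published cell cycle models.

    # Minimal sketch of the stepping procedure described above, applied to a toy protein system.
    # The kinetic laws are the ones named in the text (Michaelis–Menten and the Goldbeter–Koshland
    # function); the wiring and parameter values are illustrative assumptions.
    import math

    def goldbeter_koshland(v_act, v_inact, J_act, J_inact):
        """Steady-state fraction of a substrate covalently modified by opposing enzymes."""
        B = v_inact - v_act + v_inact * J_act + v_act * J_inact
        return 2.0 * v_act * J_inact / (B + math.sqrt(B * B - 4.0 * (v_inact - v_act) * v_act * J_inact))

    # Toy wiring: cyclin-like protein C is synthesized at a constant rate and degraded by an
    # enzyme whose active fraction is an ultrasensitive function of C itself.
    k_syn, k_deg = 0.04, 0.5          # synthesis rate, maximal degradation rate
    Va, Vi, Ja, Ji = 1.0, 0.7, 0.05, 0.05
    Km = 0.1                          # Michaelis constant for degradation

    C, dt, t_end = 0.0, 0.01, 200.0
    t = 0.0
    while t < t_end:
        E_active = goldbeter_koshland(Va * C, Vi, Ja, Ji)        # switch-like activation by C
        dC = k_syn - k_deg * E_active * C / (Km + C)             # Michaelis–Menten degradation
        C += dC * dt                                             # small forward-Euler increment
        t += dt

    print(f"cyclin-like variable settles near C = {C:.3f}")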

[Figure: cell cycle bifurcation diagram]

In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change (in the concentrations of two or more proteins), determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to a certain value); an unstable point, either a source or a saddle point, that repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory towards which several trajectories spiral (making the concentrations oscillate).

A better representation, which can handle the large number of variables and parameters, is a bifurcation diagram using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point, and once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations: the cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (a cell cycle checkpoint), the system cannot go back to the previous levels, since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations called a Hopf bifurcation and an infinite period bifurcation.
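
To illustrate what such an analysis looks like in practice, the sketch below scans a parameter p (playing the role of mass) for a one-variable toy system dx/dt = p + x - x^3 and reports its steady states and their stability. The system is an illustrative normal form that exhibits saddle-node bifurcations, not the cell-cycle model itself.

    # Toy illustration of the "analysis" view described above: track steady states of
    # dx/dt = p + x - x^3 as the parameter p is varied. Illustrative normal form only.
    import numpy as np

    for p in np.linspace(-1.0, 1.0, 9):
        roots = np.roots([-1.0, 0.0, 1.0, p])          # solutions of -x^3 + x + p = 0
        steady = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
        # stability of dx/dt = f(x): stable where f'(x) = 1 - 3x^2 < 0
        labels = ["stable" if 1 - 3 * x * x < 0 else "unstable" for x in steady]
        print(f"p = {p:+.2f}: " + ", ".join(f"x* = {x:+.2f} ({s})" for x, s in zip(steady, labels)))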

Natural resource

From Wikipedia, the free encyclopedia
The rainforest in Fatu-Hiva, in the Marquesas Islands, is an example of an undisturbed natural resource. The forest provides timber for humans, and food, water and shelter for the flora, fauna and tribes living around it. The nutrient cycle between organisms forms food chains and fosters a biodiversity of species.
 
The Carson Fall on Mount Kinabalu, Malaysia, is an example of an undisturbed natural resource. Waterfalls provide spring water for humans, animals and plants, and also habitat for aquatic organisms. The water current can be used to turn turbines for hydroelectric generation.
 
The ocean is an example of a natural resource. Ocean waves can be used to generate wave power, a renewable energy source. Ocean water is important for salt production, desalination, and providing habitat for deep-water fishes. There is a great biodiversity of marine species in the sea, where nutrient cycles are common.
 
A picture of the Udachnaya pipe, an open-pit diamond mine in Siberia, an example of a non-renewable natural resource.

Natural resources are resources that exist without any actions of humankind. This includes the sources of valued characteristics such as commercial and industrial use, aesthetic value, scientific interest and cultural value. On Earth, it includes sunlight, atmosphere, water, land, all minerals along with all vegetation, and animal life. Natural resources can be part of our natural heritage or protected in nature reserves.

Particular areas (such as the rainforest in Fatu-Hiva) often feature biodiversity and geodiversity in their ecosystems. Natural resources may be classified in different ways. Natural resources are materials and components (something that can be used) that can be found within the environment. Every man-made product is composed of natural resources (at its fundamental level). A natural resource may exist as a separate entity such as fresh water, air, as well as any living organism such as a fish, or it may exist in an alternate form that must be processed to obtain the resource such as metal ores, rare-earth elements, petroleum, and most forms of energy.

There is much debate worldwide over natural-resource allocations. This is particularly true during periods of increasing scarcity and shortages (depletion and overconsumption of resources).

Classification

There are various methods of categorizing natural resources. These include the source of origin, stage of development, and renewability.

On the basis of origin, natural resources may be divided into two types:

  • Biotic – resources obtained from the biosphere (living and organic material), such as forests and animals, and the materials that can be obtained from them; fossil fuels such as coal and petroleum are also included in this category because they are formed from decayed organic matter
  • Abiotic – resources that come from non-living, non-organic material, such as land, fresh water, air, rare-earth elements and heavy metals, including ores of gold, iron, copper and silver

Considering their stage of development, natural resources may be referred to in the following ways:

  • Potential resources — Potential resources are those that may be used in the future—for example, petroleum in sedimentary rocks that, until drilled out and put to use, remains a potential resource
  • Actual resources — Those resources that have been surveyed, quantified and qualified, and are currently used in development, such as wood processing, and are typically dependent on technology
  • Reserve resources — The part of an actual resource that can be developed profitably in the future
  • Stock resources — Those that have been surveyed, but cannot be used due to lack of technology—for example, hydrogen

On the basis of recovery rate, natural resources can be categorized as follows:

  • Renewable resources — Renewable resources can be replenished naturally. Some of these resources, like sunlight, air, wind, water, etc., are continuously available and their quantities are not noticeably affected by human consumption. Many other renewable resources, however, do not have such a rapid recovery rate and are susceptible to depletion by over-use. From a human use perspective, resources are classified as renewable so long as the rate of replenishment/recovery exceeds the rate of consumption. They replenish easily compared to non-renewable resources.
  • Non-renewable resources – Non-renewable resources either form slowly or do not naturally form in the environment. Minerals are the most common resource included in this category. From the human perspective, resources are non-renewable when their rate of consumption exceeds the rate of replenishment/recovery; a good example of this is fossil fuels, which are in this category because their rate of formation is extremely slow (potentially millions of years). Some resources naturally deplete in amount without human interference, the most notable of these being radioactive elements such as uranium, which naturally decay into heavy metals. Of these, the metallic minerals can be re-used by recycling them, but coal and petroleum cannot be recycled. Once they are completely used, they take millions of years to replenish.

Extraction

Resource extraction involves any activity that withdraws resources from nature. This can range in scale from the traditional use of preindustrial societies to global industry. Extractive industries are, along with agriculture, the basis of the primary sector of the economy. Extraction produces raw material, which is then processed to add value. Examples of extractive industries are hunting, trapping, mining, oil and gas drilling, and forestry. Natural resources can add substantial amounts to a country's wealth; however, a sudden inflow of money caused by a resource boom can create social problems, including inflation that harms other industries ("Dutch disease") and corruption, leading to inequality and underdevelopment; this is known as the "resource curse".

Extractive industries represent a large and growing activity in many less-developed countries, but the wealth generated does not always lead to sustainable and inclusive growth. People often accuse extractive industry businesses of acting only to maximize short-term value, implying that less-developed countries are vulnerable to powerful corporations. Alternatively, host governments are often assumed to be maximizing only immediate revenue. Researchers argue there are areas of common interest where development goals and business cross. These present opportunities for international governmental agencies to engage with the private sector and host governments through revenue management and expenditure accountability, infrastructure development, employment creation, skills and enterprise development, and impacts on children, especially girls and women. A strong civil society can play an important role in ensuring the effective management of natural resources. Norway can serve as a role model in this regard, as it has good institutions and an open and dynamic public debate with strong civil society actors that provide an effective system of checks and balances for the government's management of extractive industries. One such initiative is the Extractive Industries Transparency Initiative (EITI), a global standard for the good governance of oil, gas and mineral resources; it seeks to address the key governance issues in the extractive sectors.

Depletion of resources

Wind is a natural resource that can be used to generate electricity, as with these 5 MW wind turbines in Thorntonbank Wind Farm 28 km (17 mi) off the coast of Belgium.

In recent years, the depletion of natural resources has become a major focus of governments and organizations such as the United Nations (UN). This is evident in the UN's Agenda 21 Section Two, which outlines the necessary steps for countries to take to sustain their natural resources. The depletion of natural resources is considered a sustainable development issue. The term sustainable development has many interpretations, most notably the Brundtland Commission's 'to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs'; however, in broad terms it is balancing the needs of the planet's people and species now and in the future. With regard to natural resources, depletion is of concern for sustainable development as it has the ability to degrade current environments and the potential to impact the needs of future generations.

"The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others."

Theodore Roosevelt

Depletion of natural resources is associated with social inequity. Considering that most biodiversity is located in developing countries, depletion of this resource could result in losses of ecosystem services for these countries. Some view this depletion as a major source of social unrest and conflicts in developing nations.

At present, there is particular concern for rainforest regions that hold most of the Earth's biodiversity. According to Nelson, deforestation and degradation affect 8.5% of the world's forests, with 30% of the Earth's surface already cropped. If we consider that 80% of people rely on medicines obtained from plants and three-quarters of the world's prescription medicines have ingredients taken from plants, the loss of the world's rainforests could mean the loss of potential life-saving medicines.

The depletion of natural resources is caused by 'direct drivers of change' such as mining, petroleum extraction, fishing and forestry, as well as 'indirect drivers of change' such as demography (e.g. population growth), economy, society, politics and technology. The current practice of agriculture is another factor causing depletion of natural resources; examples include the depletion of nutrients in the soil due to excessive use of nitrogen, and desertification. The depletion of natural resources is a continuing concern for society. This is reflected in the quote above from Theodore Roosevelt, a well-known conservationist and former United States president, who was opposed to unregulated natural resource extraction.

Protection

In 1982, the United Nations developed the World Charter for Nature, which recognized the need to protect nature from further depletion due to human activity. It states that measures must be taken at all societal levels, from international to individual, to protect nature. It outlines the need for sustainable use of natural resources and suggests that the protection of resources should be incorporated into national and international systems of law. To look at the importance of protecting natural resources further, the World Ethic of Sustainability, developed by the IUCN, WWF and the UNEP in 1990, set out eight values for sustainability, including the need to protect natural resources from depletion. Since the development of these documents, many measures have been taken to protect natural resources, including the establishment of the scientific field of conservation biology and the practice of habitat conservation.

Conservation biology is the scientific study of the nature and status of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction. It is an interdisciplinary subject drawing on science, economics and the practice of natural resource management. The term conservation biology was introduced as the title of a conference held at the University of California, San Diego, in La Jolla, California, in 1978, organized by biologists Bruce A. Wilcox and Michael E. Soulé.

Habitat conservation is a land management practice that seeks to conserve, protect and restore habitat areas for wild plants and animals, especially conservation reliant species, and prevent their extinction, fragmentation or reduction in range.

Management

Natural resource management is a discipline in the management of natural resources such as land, water, soil, plants, and animals—with a particular focus on how management affects the quality of life for present and future generations. Hence, sustainable development is pursued through the judicious use of resources to supply both the present generation and future generations. The disciplines of fisheries, forestry, and wildlife are examples of large subdisciplines of natural resource management.

Management of natural resources involves identifying who has the right to use the resources and who does not, thereby defining the boundaries of the resource. The resources may be managed by the users according to rules governing when and how the resource is used, depending on local conditions, or the resources may be managed by a governmental organization or other central authority.

A "...successful management of natural resources depends on freedom of speech, a dynamic and wide-ranging public debate through multiple independent media channels and an active civil society engaged in natural resource issues...", because of the nature of the shared resources the individuals who are affected by the rules can participate in setting or changing them. The users have rights to devise their own management institutions and plans under the recognition by the government. The right to resources includes land, water, fisheries and pastoral rights. The users or parties accountable to the users have to actively monitor and ensure the utilisation of the resource compliance with the rules and to impose penalty on those peoples who violate the rules. These conflicts are resolved in a quick and low cost manner by the local institution according to the seriousness and context of the offence. The global science-based platform to discuss natural resources management is the World Resources Forum, based in Switzerland.

 

Calabi–Yau manifold

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Calabi%E2%80%93Yau_manifold

A 2D slice of a 6D Calabi–Yau quintic manifold.

In algebraic geometry, a Calabi–Yau manifold, also known as a Calabi–Yau space, is a particular type of manifold which has properties, such as Ricci flatness, yielding applications in theoretical physics. Particularly in superstring theory, the extra dimensions of spacetime are sometimes conjectured to take the form of a 6-dimensional Calabi–Yau manifold, which led to the idea of mirror symmetry. Their name was coined by Candelas et al. (1985), after Eugenio Calabi (1954, 1957) who first conjectured that such surfaces might exist, and Shing-Tung Yau (1978) who proved the Calabi conjecture.

Calabi–Yau manifolds are complex manifolds that are generalizations of K3 surfaces in any number of complex dimensions (i.e. any even number of real dimensions). They were originally defined as compact Kähler manifolds with a vanishing first Chern class and a Ricci-flat metric, though many other similar but inequivalent definitions are sometimes used.

Definitions

The motivational definition given by Shing-Tung Yau is of a compact Kähler manifold with a vanishing first Chern class, that is also Ricci flat.

There are many other definitions of a Calabi–Yau manifold used by different authors, some inequivalent. This section summarizes some of the more common definitions and the relations between them.

A Calabi–Yau n-fold or Calabi–Yau manifold of (complex) dimension n is sometimes defined as a compact n-dimensional Kähler manifold M satisfying one of the following equivalent conditions:

  • The canonical bundle of M is trivial.
  • M has a holomorphic n-form that vanishes nowhere.
  • The structure group of the tangent bundle of M can be reduced from U(n) to SU(n).
  • M has a Kähler metric with global holonomy contained in SU(n).

These conditions imply that the first integral Chern class of M vanishes. Nevertheless, the converse is not true. The simplest examples where this happens are hyperelliptic surfaces, finite quotients of a complex torus of complex dimension 2, which have vanishing first integral Chern class but non-trivial canonical bundle.

For a compact n-dimensional Kähler manifold M the following conditions are equivalent to each other, but are weaker than the conditions above, though they are sometimes used as the definition of a Calabi–Yau manifold:

  • M has vanishing first real Chern class.
  • M has a Kähler metric with vanishing Ricci curvature.
  • M has a Kähler metric with local holonomy contained in SU(n).
  • A positive power of the canonical bundle of M is trivial.
  • M has a finite cover that has trivial canonical bundle.
  • M has a finite cover that is a product of a torus and a simply connected manifold with trivial canonical bundle.

If a compact Kähler manifold is simply connected, then the weak definition above is equivalent to the stronger definition. Enriques surfaces give examples of complex manifolds that have Ricci-flat metrics, but their canonical bundles are not trivial, so they are Calabi–Yau manifolds according to the second but not the first definition above. On the other hand, their double covers are Calabi–Yau manifolds for both definitions (in fact, K3 surfaces).

By far the hardest part of proving the equivalences between the various properties above is proving the existence of Ricci-flat metrics. This follows from Yau's proof of the Calabi conjecture, which implies that a compact Kähler manifold with a vanishing first real Chern class has a Kähler metric in the same class with vanishing Ricci curvature. (The class of a Kähler metric is the cohomology class of its associated 2-form.) Calabi showed such a metric is unique.
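
For reference, a standard statement of the result (a textbook formulation supplied here for clarity, not quoted from this article) can be written as follows:

    % Calabi's conjecture, proved by Yau: on a compact Kähler manifold (M, ω),
    % for any real (1,1)-form ρ representing 2π c_1(M) there is a unique Kähler
    % form in the class [ω] whose Ricci form is ρ.
    \[
      \forall\, \rho \in 2\pi c_1(M) \quad \exists!\ \tilde{\omega} \in [\omega]
      \ \text{K\"ahler such that} \quad \operatorname{Ric}(\tilde{\omega}) = \rho .
    \]
    % In particular, when c_1(M) = 0 one may take ρ = 0, which yields a Ricci-flat
    % Kähler metric in every Kähler class, as used in the paragraph above.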

There are many other inequivalent definitions of Calabi–Yau manifolds that are sometimes used, which differ in the following ways (among others):

  • The first Chern class may vanish as an integral class or as a real class.
  • Most definitions assert that Calabi–Yau manifolds are compact, but some allow them to be non-compact. In the generalization to non-compact manifolds, the difference Ω ∧ Ω̄ − ω^n/n! must vanish asymptotically. Here, ω is the Kähler form associated with the Kähler metric g (Gang Tian; Shing-Tung Yau 1990, 1991).
  • Some definitions put restrictions on the fundamental group of a Calabi–Yau manifold, such as demanding that it be finite or trivial. Any Calabi–Yau manifold has a finite cover that is the product of a torus and a simply-connected Calabi–Yau manifold.
  • Some definitions require that the holonomy be exactly equal to SU(n) rather than a subgroup of it, which implies that the Hodge numbers h^(p,0) vanish for 0 < p < n. Abelian surfaces have a Ricci flat metric with holonomy strictly smaller than SU(2) (in fact trivial), so they are not Calabi–Yau manifolds according to such definitions.
  • Most definitions assume that a Calabi–Yau manifold has a Riemannian metric, but some treat them as complex manifolds without a metric.
  • Most definitions assume the manifold is non-singular, but some allow mild singularities. While the Chern class fails to be well-defined for singular Calabi–Yau's, the canonical bundle and canonical class may still be defined if all the singularities are Gorenstein, and so may be used to extend the definition of a smooth Calabi–Yau manifold to a possibly singular Calabi–Yau variety.

Examples

The most important fundamental fact is that any smooth algebraic variety embedded in a projective space is a Kähler manifold, because there is a natural Fubini–Study metric on the projective space which one can restrict to the algebraic variety. By definition, if ω is the Kähler metric on the algebraic variety X and the canonical bundle KX is trivial, then X is Calabi–Yau. Moreover, there is a unique Kähler metric ω on X such that [ω0] = [ω] ∈ H²(X, R), a fact which was conjectured by Eugenio Calabi and proved by Shing-Tung Yau.

Calabi-Yau algebraic curves

In one complex dimension, the only compact examples are tori, which form a one-parameter family. The Ricci-flat metric on a torus is actually a flat metric, so that the holonomy is the trivial group SU(1). A one-dimensional Calabi–Yau manifold is a complex elliptic curve, and in particular, algebraic.

CY algebraic surfaces

In two complex dimensions, the K3 surfaces furnish the only compact simply connected Calabi–Yau manifolds. These can be constructed as quartic surfaces in CP^3, such as the complex algebraic variety defined by the vanishing locus of

x0^4 + x1^4 + x2^4 + x3^4 = 0

for [x0 : x1 : x2 : x3] ∈ CP^3.

Other examples can be constructed as elliptic fibrations, as quotients of abelian surfaces, or as complete intersections.

Non-simply-connected examples are given by abelian surfaces, which are real four-tori equipped with a complex manifold structure. Enriques surfaces and hyperelliptic surfaces have first Chern class that vanishes as an element of the real cohomology group, but not as an element of the integral cohomology group, so Yau's theorem about the existence of a Ricci-flat metric still applies to them, but they are sometimes not considered to be Calabi–Yau manifolds. Abelian surfaces are sometimes excluded from the classification of being Calabi–Yau, as their holonomy (again the trivial group) is a proper subgroup of SU(2), instead of being isomorphic to SU(2). However, the Enriques surfaces do not conform entirely to the SU(2) subgroup in the string theory landscape.

CY threefolds

In three complex dimensions, classification of the possible Calabi–Yau manifolds is an open problem, although Yau suspects that there is a finite number of families (albeit a much bigger number than his estimate from 20 years ago). In turn, it has also been conjectured by Miles Reid that the number of topological types of Calabi–Yau 3-folds is infinite, and that they can all be transformed continuously (through certain mild singularizations such as conifolds) one into another—much as Riemann surfaces can. One example of a three-dimensional Calabi–Yau manifold is a non-singular quintic threefold in CP4, which is the algebraic variety consisting of all of the zeros of a homogeneous quintic polynomial in the homogeneous coordinates of the CP4. Another example is a smooth model of the Barth–Nieto quintic. Some discrete quotients of the quintic by various Z5 actions are also Calabi–Yau and have received a lot of attention in the literature. One of these is related to the original quintic by mirror symmetry.

For every positive integer n, the zero set, in the homogeneous coordinates of the complex projective space CPn+1, of a non-singular homogeneous degree n + 2 polynomial in n + 2 variables is a compact Calabi–Yau n-fold. The case n = 1 describes an elliptic curve, while for n = 2 one obtains a K3 surface.

More generally, Calabi–Yau varieties/orbifolds can be found as weighted complete intersections in a weighted projective space. The main tool for finding such spaces is the adjunction formula.
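
To make the role of the adjunction formula concrete, here is the standard one-line computation (supplied for illustration, not taken from this article) for a smooth hypersurface X of degree d in CP^(n+1):

    % Adjunction: K_X = (K_{CP^{n+1}} ⊗ O(d))|_X, and K_{CP^{n+1}} = O(-(n+2)), so
    \[
      K_X \;\cong\; \mathcal{O}\bigl(d - (n+2)\bigr)\big|_X ,
    \]
    % which is trivial exactly when d = n + 2, recovering the Calabi–Yau
    % hypersurfaces described in the preceding paragraphs.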

All hyper-Kähler manifolds are Calabi–Yau manifolds.

Applications in superstring theory

Calabi–Yau manifolds are important in superstring theory. Essentially, Calabi–Yau manifolds are shapes that satisfy the requirement of space for the six "unseen" spatial dimensions of string theory, which may be smaller than our currently observable lengths as they have not yet been detected. A popular alternative known as large extra dimensions, which often occurs in braneworld models, is that the Calabi–Yau is large but we are confined to a small subset on which it intersects a D-brane. Further extensions into higher dimensions are currently being explored with additional ramifications for general relativity.

In the most conventional superstring models, the ten conjectural dimensions in string theory are supposed to come as the four of which we are aware, carrying some kind of fibration with fiber dimension six. Compactification on Calabi–Yau n-folds is important because it leaves some of the original supersymmetry unbroken. More precisely, in the absence of fluxes, compactification on a Calabi–Yau 3-fold (real dimension 6) leaves one quarter of the original supersymmetry unbroken if the holonomy is the full SU(3).

More generally, a flux-free compactification on an n-manifold with holonomy SU(n) leaves 2^(1−n) of the original supersymmetry unbroken, corresponding to 2^(6−n) supercharges in a compactification of type II supergravity or 2^(5−n) supercharges in a compactification of type I. When fluxes are included the supersymmetry condition instead implies that the compactification manifold be a generalized Calabi–Yau, a notion introduced by Hitchin (2003). These models are known as flux compactifications.

F-theory compactifications on various Calabi–Yau four-folds provide physicists with a method to find a large number of classical solutions in the so-called string theory landscape.

Connected with each hole in the Calabi–Yau space is a group of low-energy string vibrational patterns. Since string theory states that our familiar elementary particles correspond to low-energy string vibrations, the presence of multiple holes causes the string patterns to fall into multiple groups, or families. Although the following statement has been simplified, it conveys the logic of the argument: if the Calabi–Yau has three holes, then three families of vibrational patterns and thus three families of particles will be observed experimentally.

Logically, since strings vibrate through all the dimensions, the shape of the curled-up ones will affect their vibrations and thus the properties of the elementary particles observed. For example, Andrew Strominger and Edward Witten have shown that the masses of particles depend on the manner of the intersection of the various holes in a Calabi–Yau. In other words, the positions of the holes relative to one another and to the substance of the Calabi–Yau space were found by Strominger and Witten to affect the masses of particles in a certain way. This is true of all particle properties.

Humanoid robot

From Wikipedia, the free encyclopedia
 
Honda P series: P1 (1993), P2 (1996), P3 (1997), P4 (2000)

A humanoid robot is a robot with its body shape built to resemble the human body. The design may be for functional purposes, such as interacting with human tools and environments, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots also have heads designed to replicate human facial features such as eyes and mouths. Androids are humanoid robots built to aesthetically resemble humans.

Purpose

iCub robot at the Genoa Science Festival, Italy, in 2009

Humanoid robots are now used as research tools in several scientific areas. Researchers study the human body structure and behavior (biomechanics) to build humanoid robots. On the other hand, the attempt to simulate the human body leads to a better understanding of it. Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior, and it has been improving over time.

It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans.

Although the initial aim of humanoid research was to build better orthoses and prostheses for human beings, knowledge has been transferred between the two disciplines. A few examples are powered leg prostheses for the neuromuscularly impaired, ankle-foot orthoses, biologically realistic leg prostheses and forearm prostheses.

Valkyrie, from NASA

Besides the research, humanoid robots are being developed to perform human tasks like personal assistance, through which they should be able to assist the sick and elderly, and dirty or dangerous jobs. Humanoids are also suitable for some procedurally-based vocations, such as reception-desk administrators and automotive manufacturing line workers. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is immense.

They are also becoming increasingly popular as entertainers. For example, Ursula, a female robot, sings, plays music, dances and speaks to her audiences at Universal Studios. Several Disney theme park shows utilize animatronic robots that look, move and speak much like human beings. Although these robots look realistic, they have no cognition or physical autonomy. Various humanoid robots and their possible applications in daily life are featured in an independent documentary film called Plug & Pray, which was released in 2010.

Humanoid robots, especially those with artificial intelligence algorithms, could be useful for future dangerous and/or distant space exploration missions, without the need to return to Earth once the mission is completed.

Sensors

A sensor is a device that measures some attribute of the world. Being one of the three primitives of robotics (besides planning and control), sensing plays an important role in robotic paradigms.

Sensors can be classified according to the physical process with which they work or according to the type of measurement information that they give as output. Here, the second approach is used.

Proprioceptive

Proprioceptive sensors sense the position, the orientation and the speed of the humanoid's body and joints.

In human beings the otoliths and semi-circular canals (in the inner ear) are used to maintain balance and orientation. In addition, humans use their own proprioceptive sensors (e.g. touch, muscle extension, limb position) to help with their orientation. Humanoid robots use accelerometers to measure acceleration, from which velocity can be calculated by integration; tilt sensors to measure inclination; force sensors placed in the robot's hands and feet to measure contact force with the environment; position sensors, which indicate the actual position of the robot (from which the velocity can be calculated by differentiation); and even speed sensors.
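
The sketch below illustrates the two velocity estimates mentioned above: one obtained by numerically integrating an accelerometer signal, the other by numerically differentiating a position-sensor signal. The 100 Hz sampling rate and the sample values are made-up illustrative data, not readings from a real robot.

    # Velocity from an accelerometer (integration) vs. from a position sensor (differentiation).
    # The sampling rate and the signals below are illustrative assumptions.
    import numpy as np

    dt = 0.01                                   # 100 Hz sampling interval (assumed)
    t = np.arange(0.0, 2.0, dt)

    accel = 0.5 * np.ones_like(t)               # pretend accelerometer reading: constant 0.5 m/s^2
    position = 0.25 * t**2                      # pretend position-sensor reading for the same motion

    vel_from_accel = np.cumsum(accel) * dt      # integrate acceleration   -> velocity
    vel_from_pos = np.gradient(position, dt)    # differentiate position   -> velocity

    print(f"at t = 1 s: {vel_from_accel[100]:.2f} m/s from the accelerometer, "
          f"{vel_from_pos[100]:.2f} m/s from the position sensor")

Note that integrating accelerometer data accumulates drift over time, which is one reason humanoid robots combine several of the sensor types listed above rather than relying on a single one.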

Exteroceptive

An artificial hand holding a lightbulb

Arrays of tactels can be used to provide data on what has been touched. The Shadow Hand uses an array of 34 tactels arranged beneath its polyurethane skin on each finger tip. Tactile sensors also provide information about forces and torques transferred between the robot and other objects.

Vision refers to processing data from any modality which uses the electromagnetic spectrum to produce an image. In humanoid robots it is used to recognize objects and determine their properties. Vision sensors work most similarly to the eyes of human beings. Most humanoid robots use CCD cameras as vision sensors.

Sound sensors allow humanoid robots to hear speech and environmental sounds, and perform as the ears of the human being. Microphones are usually used for this task.

Actuators

Actuators are the motors responsible for motion in the robot.

Humanoid robots are constructed in such a way that they mimic the human body, so they use actuators that perform like muscles and joints, though with a different structure. To achieve the same effect as human motion, humanoid robots use mainly rotary actuators. They can be either electric, pneumatic, hydraulic, piezoelectric or ultrasonic.

Hydraulic and electric actuators have a very rigid behavior and can only be made to act in a compliant manner through the use of relatively complex feedback control strategies. While electric coreless motor actuators are better suited for high speed and low load applications, hydraulic ones operate well at low speed and high load applications.

Piezoelectric actuators generate a small movement with a high force capability when voltage is applied. They can be used for ultra-precise positioning and for generating and handling high forces or pressures in static or dynamic situations.

Ultrasonic actuators are designed to produce movements in a micrometer order at ultrasonic frequencies (over 20 kHz). They are useful for controlling vibration, positioning applications and quick switching.

Pneumatic actuators operate on the basis of gas compressibility. As they are inflated, they expand along the axis, and as they deflate, they contract. If one end is fixed, the other will move in a linear trajectory. These actuators are intended for low speed and low/medium load applications. Among pneumatic actuators are: cylinders, bellows, pneumatic engines, pneumatic stepper motors and pneumatic artificial muscles.

Planning and control

Rashmi, an Indian realistic lip-syncing multilingual humanoid robot

In planning and control, the essential difference between humanoids and other kinds of robots (like industrial ones) is that the movement of the robot must be human-like, using legged locomotion, especially biped gait. The ideal planning for humanoid movements during normal walking should result in minimum energy consumption, as it does in the human body. For this reason, studies on the dynamics and control of these kinds of structures have become increasingly important.

The stabilization of walking biped robots on the ground is of great importance. Keeping the robot's center of gravity over the center of the bearing area, so as to provide a stable position, can be chosen as a goal of control.

To maintain dynamic balance during the walk, a robot needs information about contact force and its current and desired motion. The solution to this problem relies on a major concept, the Zero Moment Point (ZMP).
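
As a rough illustration, the sketch below computes a zero moment point using the common simplified point-mass form of the ZMP equation, which neglects the angular momentum of individual links. The masses, positions and accelerations are made-up numbers, not data from a real robot.

    # Sketch of a simplified zero moment point (ZMP) calculation for a set of point masses.
    # x_ZMP = sum(m_i * ((zdd_i + g) * x_i - xdd_i * z_i)) / sum(m_i * (zdd_i + g))
    # All link data below are illustrative assumptions.
    g = 9.81  # gravitational acceleration, m/s^2

    # Each link: (mass [kg], x [m], z [m], x_accel [m/s^2], z_accel [m/s^2])
    links = [
        (15.0, 0.02, 0.80, 0.3, 0.0),   # torso
        ( 5.0, 0.05, 0.45, 0.5, 0.1),   # swing leg
        ( 5.0, 0.00, 0.45, 0.0, 0.0),   # stance leg
    ]

    num = sum(m * ((zdd + g) * x - xdd * z) for m, x, z, xdd, zdd in links)
    den = sum(m * (zdd + g) for m, x, z, xdd, zdd in links)
    x_zmp = num / den
    print(f"x_ZMP = {x_zmp:.3f} m  (must stay inside the foot's support polygon for balance)")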

Another characteristic of humanoid robots is that they move, gather information (using sensors) on the "real world" and interact with it. They don't stay still like factory manipulators and other robots that work in highly structured environments. To allow humanoids to move in complex environments, planning and control must focus on self-collision detection, path planning and obstacle avoidance.

Humanoid robots do not yet have some features of the human body. These include structures with variable flexibility, which provide safety (to the robot itself and to people), and redundancy of movements, i.e. more degrees of freedom and therefore wider task availability. Although these characteristics are desirable for humanoid robots, they bring more complexity and new problems to planning and control. The field of whole-body control deals with these issues and addresses the proper coordination of numerous degrees of freedom, e.g. to realize several control tasks simultaneously while following a given order of priority.

Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

A delayed-cho...