https://en.wikipedia.org/wiki/Numerical_relativity
Numerical relativity is one of the branches of general relativity that uses numerical methods and algorithms to solve and analyze problems. To this end, supercomputers are often employed to study black holes, gravitational waves, neutron stars and many other phenomena governed by Einstein's theory of general relativity. A currently active field of research in numerical relativity is the simulation of relativistic binaries and their associated gravitational waves. Other branches are also active.
Overview
A primary goal of numerical relativity is to study spacetimes whose exact form is not known. The spacetimes so found computationally can be fully dynamical, stationary, or static, and may contain matter fields or vacuum. In the case of stationary and
static solutions, numerical methods may also be used to study the
stability of the equilibrium spacetimes. In the case of dynamical
spacetimes, the problem may be divided into the initial value problem
and the evolution, each requiring different methods.
Numerical relativity is applied to many areas, such as cosmological models, critical phenomena, perturbed black holes and neutron stars, and the coalescence of black holes and neutron stars. In any of these cases, Einstein's equations can be formulated in several ways that allow the dynamics to be evolved. While Cauchy methods have received the majority of the attention, characteristic and Regge calculus based methods have also been used. All of these methods begin with a snapshot of the gravitational fields on some hypersurface, the initial data, and evolve these data to neighboring hypersurfaces.
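Schematically, this "evolve the data to neighboring hypersurfaces" step is a method-of-lines computation: the fields are discretized on a spatial grid and marched forward in time with an ODE integrator. The following Python sketch is purely illustrative; it evolves a toy wave equation, standing in for any actual formulation of the Einstein equations, with a classical fourth-order Runge–Kutta step.

    import numpy as np

    def rhs(fields, dx):
        # Placeholder right-hand side: a real code would encode a particular
        # formulation of the Einstein evolution equations here. This toy
        # version is the 1D wave equation in first-order-in-time form.
        phi, pi = fields
        d2phi = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
        return np.array([pi, d2phi])

    def evolve(fields, dx, dt, n_steps):
        # March the "initial data" forward slice by slice (classical RK4).
        for _ in range(n_steps):
            k1 = rhs(fields, dx)
            k2 = rhs(fields + 0.5 * dt * k1, dx)
            k3 = rhs(fields + 0.5 * dt * k2, dx)
            k4 = rhs(fields + dt * k3, dx)
            fields = fields + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return fields

    # Gaussian initial data on a periodic grid; dt is chosen well inside
    # the CFL stability limit for unit wave speed.
    x = np.linspace(0.0, 1.0, 256, endpoint=False)
    dx = x[1] - x[0]
    data = np.array([np.exp(-100.0 * (x - 0.5) ** 2), np.zeros_like(x)])
    data = evolve(data, dx, dt=0.25 * dx, n_steps=400)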
Like all problems in numerical analysis, careful attention is paid to the stability and convergence of the numerical solutions. To that end, much study is devoted to gauge conditions, coordinates, and the various formulations of the Einstein equations, and to the effect they have on the ability to produce accurate numerical solutions.
Numerical relativity research is distinct from work on classical field theories
as many techniques implemented in these areas are inapplicable in
relativity. Many facets are, however, shared with large-scale problems in
other computational sciences like computational fluid dynamics, electromagnetics, and solid mechanics. Numerical relativists often work with applied mathematicians and draw insight from numerical analysis, scientific computation, partial differential equations, and geometry among other mathematical areas of specialization.
History
Foundations in theory
Albert Einstein published his theory of general relativity in 1915. Like his earlier theory of special relativity, it described space and time as a unified spacetime subject to what are now known as the Einstein field equations. These form a set of coupled nonlinear partial differential equations (PDEs). More than 100 years after the first publication of the theory, relatively few closed-form solutions are known for the field equations, and, of those, most are cosmological solutions that assume special symmetry to reduce the complexity of the equations.
The field of numerical relativity emerged from the desire to construct and study more general solutions to the field equations by approximately solving the Einstein equations numerically. A necessary precursor to such attempts was a decomposition of spacetime back into separate space and time. This was first published by Richard Arnowitt, Stanley Deser, and Charles W. Misner in the late 1950s in what has become known as the ADM formalism.[3] Although for technical reasons the precise equations formulated in the original ADM paper are rarely used in numerical simulations, most practical approaches to numerical relativity use a "3+1 decomposition" of spacetime into three-dimensional space and one-dimensional time that is closely related to the ADM formulation. The ADM procedure reformulates the Einstein field equations into a constrained initial value problem that can be addressed using computational methodologies.
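Concretely, the 3+1 split writes the spacetime metric in terms of a lapse function α, a shift vector β^i, and a spatial metric γ_ij on each slice:

\[
ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\left(dx^i + \beta^i\,dt\right)\left(dx^j + \beta^j\,dt\right).
\]

In this form the Einstein field equations separate into evolution equations for γ_ij and the extrinsic curvature K_ij, together with the Hamiltonian and momentum constraints (in geometric units G = c = 1),

\[
{}^{(3)}R + K^2 - K_{ij}K^{ij} = 16\pi\rho, \qquad
D_j\left(K^{ij} - \gamma^{ij}K\right) = 8\pi S^i,
\]

which contain no time derivatives and must be satisfied by the initial data. Here {}^{(3)}R is the Ricci scalar of γ_ij, K = γ^{ij}K_{ij}, D_j is the spatial covariant derivative, and ρ and S^i are the energy and momentum densities seen by observers moving normal to the slices.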
At the time that ADM published their original paper, computer technology would not have supported numerical solution of their equations on any problem of substantial size. The first documented attempt to solve the Einstein field equations numerically appears to be Hahn and Lindquist in 1964, followed soon thereafter by Smarr and by Eppley. These early attempts were focused on evolving Misner data in axisymmetry (also known as "2+1 dimensions"). At around the same time Tsvi Piran wrote the first code that evolved a system with gravitational radiation using cylindrical symmetry.
In this calculation Piran set the foundation for many of the concepts used today in evolving the ADM equations, such as "free evolution" versus "constrained evolution", which deal with the fundamental problem of treating the constraint equations that arise in the ADM formalism. Applying symmetry reduced the computational and memory requirements associated with the problem, allowing the researchers to obtain results on the supercomputers available at the time.
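The distinction between the two strategies can be summarized schematically: in free evolution the constraint equations are solved only on the initial slice and thereafter merely monitored as an error indicator, while in constrained evolution some or all of them are re-solved on every slice. The Python sketch below uses placeholder functions (solve_constraints, evolve_step), standing in for the elliptic and hyperbolic solvers of a real code, and illustrates the control flow only.

    def solve_constraints(data):
        # Placeholder: a real code would solve the elliptic constraint
        # equations (Hamiltonian and momentum constraints) here.
        return data

    def evolve_step(data):
        # Placeholder: a real code would apply one hyperbolic update of
        # the ADM (or a related) set of evolution equations here.
        return data

    def free_evolution(data, n_steps):
        # Constraints imposed on the initial slice only; the evolution
        # equations alone carry the data forward.
        data = solve_constraints(data)
        for _ in range(n_steps):
            data = evolve_step(data)
        return data

    def constrained_evolution(data, n_steps):
        # Constraints re-imposed on every slice, at the cost of an
        # extra elliptic solve per step.
        data = solve_constraints(data)
        for _ in range(n_steps):
            data = evolve_step(data)
            data = solve_constraints(data)
        return data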
Early results
The first realistic calculations of rotating collapse were carried out in the early 1980s by Richard Stark and Tsvi Piran, in which the gravitational wave forms resulting from the formation of a rotating black hole were calculated for the first time. For nearly 20 years following these initial results, there were fairly few other published results in numerical relativity, probably due to the lack of sufficiently powerful computers to address the problem. In the late 1990s, the Binary Black Hole Grand Challenge Alliance successfully simulated a head-on binary black hole collision. As a post-processing step the group computed the event horizon for the spacetime. This result still required imposing and exploiting axisymmetry in the calculations.
Some of the first documented attempts to solve the Einstein equations in three dimensions were focused on a single Schwarzschild black hole,
which is described by a static and spherically symmetric solution to
the Einstein field equations. This provides an excellent test case in numerical relativity because it has a closed-form solution, so that numerical results can be compared to an exact solution, because it is static, and because it contains one of the most numerically challenging features of relativity theory, a physical singularity. One of the earliest groups to attempt to simulate this solution was Anninos et al. in 1995. In their paper they point out that
- "Progress in three dimensional numerical relativity has been impeded in part by lack of computers with sufficient memory and computational power to perform well resolved calculations of 3D spacetimes."
Maturation of the field
In the years that followed, not only did computers become more powerful, but also various research groups developed alternate techniques to improve the efficiency of the calculations. With respect to black hole simulations specifically, two techniques were devised to avoid problems associated with the existence of physical singularities in the solutions to the equations: (1) excision, and (2) the "puncture" method. In addition, the Lazarus group developed techniques for using early results from a short-lived simulation solving the nonlinear ADM equations in order to provide initial data for a more stable code based on linearized equations derived from perturbation theory. More generally, adaptive mesh refinement techniques, already used in computational fluid dynamics, were introduced to the field of numerical relativity.
Excision
In the excision technique, which was first proposed in the late 1990s, a portion of the spacetime inside the event horizon surrounding the singularity of a black hole is simply not evolved. In theory this should not affect the solution to the equations outside of the event horizon because of the principle of causality and the properties of the event horizon (i.e. nothing physical inside the black hole can influence any of the physics outside the horizon). Thus if one simply does not solve the equations inside the horizon one should still be able to obtain valid solutions outside. One "excises" the interior by imposing ingoing boundary conditions on a boundary surrounding the singularity but inside the horizon.
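A minimal sketch of the idea, assuming a simple spherical excision surface on a Cartesian grid (real codes work in 3D and use one-sided stencils at the excision boundary; the update function here is a placeholder):

    import numpy as np

    n = 128
    x = np.linspace(-5.0, 5.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X, Y)

    r_horizon = 1.0              # assumed horizon radius (illustrative)
    r_excise = 0.6 * r_horizon   # excision surface safely inside the horizon

    evolved = r > r_excise       # points that are updated every step
    # Points with r <= r_excise are simply never evolved: since all
    # characteristics point inward there, no boundary data are required.

    def update(field, dt):
        new = field.copy()
        new[evolved] = field[evolved] + dt * 0.0   # placeholder RHS
        return new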
While the implementation of excision has been very successful, the technique has two minor problems. The first is that one has to be careful about the coordinate conditions. While physical effects cannot propagate from inside to outside, coordinate effects could. For example, if the coordinate conditions were elliptic, coordinate changes inside could instantly propagate out through the horizon. This means that one needs hyperbolic-type coordinate conditions with characteristic velocities no greater than that of light for the propagation of coordinate effects (e.g., using harmonic coordinate conditions). The second problem is that as the black holes move, one must continually adjust the location of the excision region to move with the black hole.
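Harmonic coordinates, for example, demand that each coordinate function satisfy the curved-space wave equation,

\[
\Box x^{\mu} \;=\; \frac{1}{\sqrt{-g}}\,\partial_{\alpha}\!\left(\sqrt{-g}\,g^{\alpha\mu}\right) \;=\; -\,g^{\alpha\beta}\,\Gamma^{\mu}_{\alpha\beta} \;=\; 0,
\]

so that gauge information propagates along the light cone rather than instantaneously, which is what makes such conditions compatible with excision.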
The excision technique was developed over several years, including the development of new gauge conditions that increased stability and work that demonstrated the ability of the excision regions to move through the computational grid. The first stable, long-term evolution of the orbit and merger of two black holes using this technique was published in 2005.
Punctures
In the puncture method the solution is factored into an analytical part, which contains the singularity of the black hole, and a numerically constructed part, which is then singularity-free. This is a generalization of the Brill–Lindquist prescription for initial data of black holes at rest and can be generalized to the Bowen–York prescription for spinning and moving black hole initial data; a sketch of the construction is given below. Until 2005, all published usage of the puncture method required that the coordinate position of all punctures remain fixed during the course of the simulation. Of course black holes in proximity to each other will tend to move under the force of gravity, so the fact that the coordinate position of the puncture remained fixed meant that the coordinate systems themselves became "stretched" or "twisted", and this typically led to numerical instabilities at some stage of the simulation.
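The analytical part mentioned above can be made concrete: for black holes momentarily at rest, the Brill–Lindquist data take the spatial metric to be conformally flat, γ_ij = ψ⁴ δ_ij, with the conformal factor given in closed form by

\[
\psi = 1 + \sum_{i=1}^{N} \frac{m_i}{2\,\lvert \mathbf{r} - \mathbf{r}_i \rvert},
\]

where m_i and \mathbf{r}_i are the mass parameters and coordinate locations of the N holes. In the puncture approach one writes ψ = ψ_{BL} + u and solves or evolves only the regular correction u on the numerical grid, keeping the singular 1/r "puncture" terms analytic.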
Breakthrough
In 2005 researchers demonstrated for the first time the ability to allow punctures to move through the coordinate system, thus eliminating some of the earlier problems with the method. This allowed accurate long-term evolutions of black holes. By choosing appropriate coordinate conditions and making crude analytic assumptions about the fields near the singularity (since no physical effects can propagate out of the black hole, the crudeness of the approximations does not matter), numerical solutions could be obtained to the problem of two black holes orbiting each other, as well as accurate computation of the gravitational radiation (ripples in spacetime) emitted by them.
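The "appropriate coordinate conditions" in question are commonly the so-called "1+log" slicing and "Gamma-driver" shift (named here for concreteness; the paragraph above does not specify them), which take the schematic form

\[
\partial_t \alpha = \beta^i \partial_i \alpha - 2\alpha K, \qquad
\partial_t \beta^i = \tfrac{3}{4} B^i + \beta^j \partial_j \beta^i, \qquad
\partial_t B^i = \partial_t \tilde{\Gamma}^i - \eta B^i,
\]

where K is the trace of the extrinsic curvature, \tilde{\Gamma}^i are the conformal connection functions of the BSSN formulation, and η is a damping parameter. Conditions of this type drive the punctures smoothly through the grid instead of pinning them in place.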
Lazarus project
The Lazarus project (1998–2005) was developed as a post-Grand Challenge technique to extract astrophysical results from short-lived full numerical simulations of binary black holes. It combined approximation techniques applied before the merger (post-Newtonian trajectories) and after it (perturbations of single black holes) with full numerical simulations attempting to solve the general relativity field equations.
All previous attempts to numerically integrate on supercomputers the Einstein field equations describing the gravitational field around binary black holes had led to software failure before a single orbit was completed.
At the time, the Lazarus approach gave the best insight into the binary black hole problem and produced numerous and relatively accurate results, such as the radiated energy and angular momentum emitted in the final stage of merger, the linear momentum radiated by unequal-mass holes, and the final mass and spin of the remnant black hole. The method also computed detailed gravitational waves emitted by the merger process and predicted that the collision of black holes is the most energetic single event in the Universe, releasing more energy in a fraction of a second in the form of gravitational radiation than an entire galaxy does in its lifetime.
Adaptive mesh refinement
Adaptive mesh refinement
(AMR) as a numerical method has roots that go well beyond its first
application in the field of numerical relativity. Mesh refinement first
appears in the numerical relativity literature in the 1980s, through
the work of Choptuik in his studies of critical collapse of scalar fields. The original work was in one dimension, but it was subsequently extended to two dimensions. In two dimensions, AMR has also been applied to the study of inhomogeneous cosmologies, and to the study of Schwarzschild black holes.
The technique has since become a standard tool in numerical relativity and has been used to study the merger of black holes and other compact objects, in addition to the propagation of gravitational radiation generated by such astronomical events.
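As a schematic illustration of the refinement step (the criterion and names here are illustrative, not taken from any particular code), one common approach is to flag for refinement the cells where a local error indicator, such as the gradient of a field, exceeds a threshold:

    import numpy as np

    def flag_for_refinement(field, dx, threshold):
        # Mark cells whose local gradient exceeds the threshold; real
        # numerical-relativity codes use richer criteria, e.g. truncation
        # error estimates or nested boxes tracking each black hole.
        return np.abs(np.gradient(field, dx)) > threshold

    # A sharp pulse triggers refinement only near its steep flanks.
    x = np.linspace(-10.0, 10.0, 401)
    dx = x[1] - x[0]
    pulse = np.exp(-4.0 * x**2)
    flags = flag_for_refinement(pulse, dx, threshold=0.2)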
Recent developments
In the past few years, hundreds of research papers have been published, leading to a wide spectrum of mathematical relativity, gravitational wave, and astrophysical results for the orbiting black hole problem. The technique has been extended to astrophysical binary systems involving neutron stars and black holes, and to multiple black holes. One of the most surprising predictions is that the merger of two black holes can give the remnant hole a recoil speed of up to 4000 km/s, which can allow it to escape from any known galaxy. The simulations also predict an enormous release of gravitational energy in the merger process, amounting to as much as 8% of the system's total rest mass.