
Tuesday, September 15, 2020

Computer simulation

From Wikipedia, the free encyclopedia

Process of building a computer model, and the interplay between experiment, simulation, and theory.

Computer simulation is the process of mathematical modelling, performed on a computer, that is designed to predict the behaviour or the outcome of a real-world or physical system. Because they make it possible to check the reliability of chosen mathematical models, computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.

Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program. Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005; a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.

Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.

Simulation versus model

A computer model is the algorithms and equations used to capture the behavior of the system being modeled. By contrast, computer simulation is the actual running of the program that contains these equations or algorithms. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model", and then either "run the model" or equivalently "run a simulation".
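
As a minimal illustration of this distinction, the following Python sketch (hypothetical, not taken from any particular package) separates the "model" (the equations) from the "simulation" (one run of them):

    # Hypothetical sketch: the model is the equations/parameters; a simulation is one run of them.

    def exponential_decay_model(state, decay_rate):
        """The model: dN/dt = -decay_rate * N, to be advanced in small time steps."""
        return -decay_rate * state

    def run_simulation(model, initial_state, decay_rate, dt=0.01, steps=1000):
        """The simulation: actually stepping the model forward in time."""
        state = initial_state
        for _ in range(steps):
            state += dt * model(state, decay_rate)
        return state

    # One "builds the model" (the functions above) and then "runs a simulation":
    final_amount = run_simulation(exponential_decay_model, initial_state=1.0, decay_rate=0.5)
    print(final_amount)  # close to exp(-0.5 * 10) ≈ 0.0067 after 10 simulated time units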

History

Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.
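
A minimal Python sketch of that common feature (an illustrative example with made-up dice, not any historical code): estimate a quantity by sampling representative scenarios rather than enumerating every possible state of the model.

    # Hypothetical sketch: Monte Carlo sampling instead of complete enumeration.
    import random

    def estimate_probability(num_dice=10, threshold=40, samples=100_000):
        hits = 0
        for _ in range(samples):
            total = sum(random.randint(1, 6) for _ in range(num_dice))  # one sampled scenario
            if total > threshold:
                hits += 1
        return hits / samples

    print(estimate_probability())  # approximates a probability that enumerating all 6**10 states could also give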

Data preparation

The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).

Input sources also vary widely:

  • Sensors and other physical devices connected to the model;
  • Control surfaces used to direct the progress of the simulation in some way;
  • Current or historical data entered by hand;
  • Values extracted as a by-product from other processes;
  • Values output for the purpose by other simulations, models, or processes.

Lastly, the time at which data is available varies (a minimal sketch of the three cases follows this list):

  • "invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
  • data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
  • data can be provided during the simulation run, for example by a sensor network.
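
The Python sketch below illustrates those three timings; the file name "params.json" and the sensor function are hypothetical stand-ins.

    # Hypothetical sketch of the three timings described above.
    import json
    import math
    import random

    PI = math.pi  # "invariant" data built into the model code

    def read_startup_parameters(path="params.json"):
        """Data entered when the simulation starts up, e.g. read from a file (hypothetical name)."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {"scale": 1.0}   # fall back to defaults so the sketch runs without the file

    def poll_sensor():
        """Stand-in for data provided during the run, e.g. by a sensor network."""
        return random.gauss(20.0, 0.5)  # pretend temperature reading

    def run(steps=3):
        params = read_startup_parameters()   # start-up data
        scale = params.get("scale", 1.0)
        for step in range(steps):
            reading = poll_sensor()          # run-time data
            print(step, scale * reading * PI)

    run()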

Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula (sometimes called Simula-67, after the year 1967 when it was proposed). There are now many others.

Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, it is much harder to know the accuracy (compared to measurement resolution and precision) of those values. Often the values are expressed as "error bars", a minimum and maximum deviation from the stated value within which the true value is expected to lie. Because digital computer arithmetic is not exact, rounding and truncation errors compound this error, so it is useful to perform an "error analysis" to confirm that values output by the simulation will still be usefully accurate.
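
A minimal Python sketch of such an error analysis (the measured values and tolerances are made up): propagate the input error bars through the calculation and check that the output is still usefully accurate.

    # Hypothetical sketch: propagate "error bars" (min/max deviations) through a calculation.

    def with_error(value, err):
        """Represent a measured value as an interval (value - err, value + err)."""
        return (value - err, value + err)

    def multiply(a, b):
        """Interval multiplication for positive quantities: the error bars widen."""
        return (a[0] * b[0], a[1] * b[1])

    length = with_error(2.00, 0.05)   # metres, measured to +/- 0.05
    width  = with_error(3.00, 0.05)
    area = multiply(length, width)
    print(area)  # (5.7525, 6.2525): nominally 6.00 m², but only known to roughly +/- 0.25 m²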

Types

Computer models can be classified according to several independent pairs of attributes, including:

  • Stochastic or deterministic (and as a special case of deterministic, chaotic)
  • Steady-state or dynamic
  • Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
  • Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs), or dynamics simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
  • Local or distributed.

Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes (a minimal stencil-update sketch follows this list):

  • Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
  • If the underlying graph is not a regular grid, the model may belong to the meshfree method class.
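
A minimal stencil-code sketch in Python (a 1-D heat-diffusion update chosen purely for illustration): the data live on a regular grid and each update needs only next-neighbour values.

    # Hypothetical stencil code: each grid cell is updated from its immediate neighbours only.

    def diffusion_step(u, alpha=0.1):
        """One explicit time step of 1-D heat diffusion on a regular grid (fixed ends)."""
        new_u = u[:]
        for i in range(1, len(u) - 1):
            new_u[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])  # next-neighbour access
        return new_u

    grid = [0.0] * 50
    grid[25] = 100.0          # a hot spot in the middle
    for _ in range(200):      # time-stepped simulation
        grid = diffusion_step(grid)
    print(max(grid))          # the peak spreads out and decays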

Steady-state (equilibrium) models use equations that define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.

  • Dynamic simulations model changes in a system in response to (usually changing) input signals.
  • Stochastic models use random number generators to model chance or random events;
  • A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events. (A minimal event-queue sketch appears after this list.)
  • A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer. (A minimal numerical-integration sketch appears after this list.)
  • A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next. (A minimal agent-update sketch appears after this list.)
  • Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
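
A minimal discrete-event sketch in Python (the event names are made up): the simulator keeps a queue of events ordered by simulated time and processes them in order, possibly scheduling new events as it goes.

    # Hypothetical discrete-event simulation: a queue of (time, event) pairs sorted by simulated time.
    import heapq

    def discrete_event_simulation(end_time=10.0):
        queue = []                                   # priority queue keyed on simulated time
        heapq.heappush(queue, (0.0, "start"))
        while queue:
            time, event = heapq.heappop(queue)       # next event in simulated-time order
            if time > end_time:
                break
            print(f"t={time:.1f}: {event}")
            if event == "start":                     # processing an event may schedule new ones
                heapq.heappush(queue, (time + 2.0, "arrival"))
            elif event == "arrival":
                heapq.heappush(queue, (time + 2.0, "arrival"))
                heapq.heappush(queue, (time + 1.0, "departure"))

    discrete_event_simulation()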
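
A minimal continuous-dynamic-simulation sketch in Python (an RC circuit chosen only for illustration): the differential equation is solved numerically and the state is updated at each step.

    # Hypothetical continuous simulation: numerically integrate dV/dt = (V_in - V) / (R*C)
    # for a simple RC circuit using explicit Euler steps.

    def simulate_rc(v_in=5.0, r=1000.0, c=1e-6, dt=1e-5, steps=500):
        v = 0.0                           # capacitor voltage (the simulation state)
        for _ in range(steps):
            dv_dt = (v_in - v) / (r * c)  # the differential equation
            v += dv_dt * dt               # advance the state by one small time step
        return v

    print(simulate_rc())  # approaches 5 V as the capacitor charges (time constant R*C = 1 ms)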
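
A minimal agent-based sketch in Python (the infection rule and parameters are made up): each agent is represented directly and carries its own state and update rule.

    # Hypothetical agent-based simulation: each agent has its own state and update rule.
    import random

    class Agent:
        def __init__(self, infected=False):
            self.infected = infected

        def update(self, neighbours, infection_prob=0.2):
            """Rule: a healthy agent may catch the infection from an infected neighbour."""
            if not self.infected and any(n.infected for n in neighbours):
                if random.random() < infection_prob:
                    self.infected = True

    agents = [Agent(infected=(i == 0)) for i in range(100)]   # agent 0 starts infected
    for step in range(50):                                    # time-stepped updates
        for i, agent in enumerate(agents):
            neighbours = [agents[(i - 1) % 100], agents[(i + 1) % 100]]  # ring of neighbours
            agent.update(neighbours)
    print(sum(a.infected for a in agents), "of", len(agents), "agents infected")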

Visualization

Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.

Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.

Computer simulation in science

Computer simulation of the process of osmosis

Specific examples of computer simulations in science, derived from an underlying mathematical description, include:

  • statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
  • agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
  • time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
  • computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
  • computer simulation using molecular modeling for drug discovery.
  • computer simulation to model viral infection in mammalian cells.
  • computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.
  • Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
  • An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.

Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.

In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation. Of course, similar to any other scientific method, replication is an important part of computational modeling.

Simulation environments for physics and engineering

Graphical environments to design simulations have been developed. Special care was taken to handle events (situations in which the simulation equations are not valid and have to be changed). The open project Open Source Physics was started to develop reusable libraries for simulations in Java, together with Easy Java Simulations, a complete graphical environment that generates code based on these libraries.

Simulation environments for linguistics

Taiwanese Tone Group Parser is a simulator of Taiwanese tone sandhi acquisition. In practice, using linguistic theory to implement the Taiwanese tone group parser is a way of applying knowledge-engineering techniques to build an experimental environment for the computer simulation of language acquisition. A work-in-progress version of the artificial tone group parser, which includes a knowledge base and an executable program file for Microsoft Windows (XP/Win7), can be downloaded for evaluation.

Computer simulation in practical contexts

Computer simulations are used in a wide variety of practical contexts; several examples are discussed below.

The reliability of, and the trust people put in, computer simulations depend on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is the reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a special point of attention in stochastic simulations, where the random numbers should in fact be pseudo-random numbers. An exception to reproducibility is human-in-the-loop simulation, such as flight simulations and computer games: here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
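
A minimal Python sketch of how stochastic simulations are kept reproducible (illustrative only): seeding the pseudo-random number generator makes every run return the same answer.

    # Hypothetical sketch: a seeded pseudo-random number generator makes a stochastic run reproducible.
    import random

    def stochastic_run(seed=42, samples=1000):
        rng = random.Random(seed)                      # fixed seed => identical sequence every run
        return sum(rng.random() for _ in range(samples)) / samples

    print(stochastic_run() == stochastic_run())        # True: same seed, same result
    print(stochastic_run(seed=1) == stochastic_run())  # almost certainly False: different seed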

Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.

Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.

In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.
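
As a loose illustration of this kind of instrumented execution (not how hardware or instruction-set simulators are actually built), Python's own tracing hook can log every executed line of a function under test and count them.

    # Illustrative only: use Python's tracing hook to log executed lines and count them,
    # loosely analogous to the instruction traces and counts a simulated execution can record.
    import sys

    executed_lines = []

    def tracer(frame, event, arg):
        if event == "line":
            executed_lines.append((frame.f_code.co_name, frame.f_lineno))
        return tracer

    def program_under_test(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    sys.settrace(tracer)
    result = program_under_test(5)
    sys.settrace(None)

    print("result:", result)
    print("lines executed:", len(executed_lines))   # a crude "instruction count"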

Pitfalls

Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
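
A minimal Python sketch of that point about precision (the numbers and units are made up): if a key input is known to only one significant figure, the spread of Monte Carlo outputs shows that quoting four figures for the result would be misleading.

    # Hypothetical sketch: the output of a Monte Carlo combination is no more precise than its inputs.
    import random

    def simulate_recoverable_oil(samples=100_000):
        results = []
        for _ in range(samples):
            volume = random.uniform(1e6, 2e6)       # reasonably constrained input (made-up units)
            net_ratio = random.uniform(0.05, 0.15)  # known only to ~1 significant figure ("0.1")
            results.append(volume * net_ratio)
        results.sort()
        return results[len(results) // 2], results[int(0.05 * samples)], results[int(0.95 * samples)]

    median, low, high = simulate_recoverable_oil()
    print(f"median {median:.3g}, 5th-95th percentile {low:.3g} to {high:.3g}")
    # The 5th-95th percentile range spans roughly a factor of three, so reporting the
    # median to four significant figures would overstate the precision.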

Model calibration techniques

The following three steps should be used to produce accurate simulation models: calibration, verification, and validation. Computer simulations are good at portraying and comparing theoretical scenarios, but in order to accurately model actual case studies they have to match what is actually happening today. A base model should be created and calibrated so that it matches the area being studied. The calibrated model should then be verified to ensure that the model is operating as expected based on the inputs. Once the model has been verified, the final step is to validate the model by comparing the outputs to historical data from the study area. This can be done by using statistical techniques and ensuring an adequate R-squared value. Unless these techniques are employed, the simulation model created will produce inaccurate results and not be a useful prediction tool.

Model calibration is achieved by adjusting any available parameters in order to adjust how the model operates and simulates the process. For example, in traffic simulation, typical parameters include look-ahead distance, car-following sensitivity, discharge headway, and start-up lost time. These parameters influence driver behavior such as when and how long it takes a driver to change lanes, how much distance a driver leaves between his car and the car in front of it, and how quickly a driver starts to accelerate through an intersection. Adjusting these parameters has a direct effect on the amount of traffic volume that can traverse through the modeled roadway network by making the drivers more or less aggressive. These are examples of calibration parameters that can be fine-tuned to match characteristics observed in the field at the study location. Most traffic models have typical default values but they may need to be adjusted to better match the driver behavior at the specific location being studied.

Model verification is achieved by obtaining output data from the model and comparing them to what is expected from the input data. For example, in traffic simulation, traffic volume can be verified to ensure that actual volume throughput in the model is reasonably close to traffic volumes input into the model. Ten percent is a typical threshold used in traffic simulation to determine if output volumes are reasonably close to input volumes. Simulation models handle model inputs in different ways so traffic that enters the network, for example, may or may not reach its desired destination. Additionally, traffic that wants to enter the network may not be able to, if congestion exists. This is why model verification is a very important part of the modeling process.

The final step is to validate the model by comparing the results with what is expected based on historical data from the study area. Ideally, the model should produce similar results to what has happened historically. This is typically verified by nothing more than quoting the R-squared statistic from the fit. This statistic measures the fraction of variability that is accounted for by the model. A high R-squared value does not necessarily mean the model fits the data well. Another tool used to validate models is graphical residual analysis. If model output values drastically differ from historical values, it probably means there is an error in the model. Before using the model as a base to produce additional models, it is important to verify it for different scenarios to ensure that each one is accurate. If the outputs do not reasonably match historic values during the validation process, the model should be reviewed and updated to produce results more in line with expectations. It is an iterative process that helps to produce more realistic models.
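
A minimal Python sketch of that validation check (with made-up counts): compute R-squared between model outputs and historical observations and inspect the residuals rather than relying on R-squared alone.

    # Hypothetical sketch: R-squared and residuals for model outputs vs. historical observations.

    def r_squared(observed, modelled):
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - m) ** 2 for o, m in zip(observed, modelled))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1 - ss_res / ss_tot

    observed = [950, 1010, 1190, 1302, 1450]    # e.g. historical traffic counts (made up)
    modelled = [900, 1050, 1150, 1350, 1400]    # simulation outputs for the same periods

    print("R^2 =", round(r_squared(observed, modelled), 3))
    residuals = [o - m for o, m in zip(observed, modelled)]
    print("residuals:", residuals)              # look for systematic bias, not just a high R^2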

Validating traffic simulation models requires comparing traffic estimated by the model to observed traffic on the roadway and transit systems. Initial comparisons are for trip interchanges between quadrants, sectors, or other large areas of interest. The next step is to compare traffic estimated by the models to traffic counts, including transit ridership, crossing contrived barriers in the study area. These are typically called screenlines, cutlines, and cordon lines and may be imaginary or actual physical barriers. Cordon lines surround particular areas such as a city's central business district or other major activity centers. Transit ridership estimates are commonly validated by comparing them to actual patronage crossing cordon lines around the central business district.

Three sources of error can cause weak correlation during calibration: input error, model error, and parameter error. In general, input error and parameter error can be adjusted easily by the user. Model error, however, is caused by the methodology used in the model and may not be as easy to fix. Simulation models are typically built using several different modeling theories that can produce conflicting results. Some models are more generalized while others are more detailed. If model error occurs as a result, it may be necessary to adjust the model methodology to make results more consistent.

These steps are necessary to ensure that simulation models function properly and produce realistic results. Simulation models can be used as a tool to verify engineering theories, but they are valid only if calibrated properly. Once satisfactory estimates of the parameters for all models have been obtained, the models must be checked to ensure that they adequately perform their intended functions. The validation process establishes the credibility of the model by demonstrating its ability to replicate reality. The importance of model validation underscores the need for careful planning, thoroughness, and accuracy in the input data collection program undertaken for this purpose. Efforts should be made to ensure that collected data are consistent with expected values. For example, in traffic analysis it is typical for a traffic engineer to perform a site visit to verify traffic counts and become familiar with traffic patterns in the area. The resulting models and forecasts will be no better than the data used for model estimation and validation.


    Primordial nuclide

    From Wikipedia, the free encyclopedia
     
    Relative abundance of the chemical elements in the Earth's upper continental crust, on a per-atom basis

    In geochemistry, geophysics and nuclear physics, primordial nuclides, also known as primordial isotopes, are nuclides found on Earth that have existed in their current form since before Earth was formed. Primordial nuclides were present in the interstellar medium from which the solar system was formed, and were formed in, or after, the Big Bang, by nucleosynthesis in stars and supernovae followed by mass ejection, by cosmic ray spallation, and potentially from other processes. They are the stable nuclides plus the long-lived fraction of radionuclides surviving in the primordial solar nebula through planet accretion until the present. Only 286 such nuclides are known.

    Stability

    All of the known 252 stable nuclides, plus another 34 nuclides that have half-lives long enough to have survived from the formation of the Earth, occur as primordial nuclides. These 34 primordial radionuclides represent isotopes of 28 separate elements. Cadmium, tellurium, xenon, neodymium, samarium and uranium each have two primordial radioisotopes (113Cd, 116Cd; 128Te, 130Te; 124Xe, 136Xe; 144Nd, 150Nd; 147Sm, 148Sm; and 235U, 238U).

    Because the age of the Earth is 4.58×10⁹ years (4.6 billion years), the half-life of the given nuclides must be greater than about 10⁸ years (100 million years) for practical considerations. For example, for a nuclide with half-life 6×10⁷ years (60 million years), this means 77 half-lives have elapsed, meaning that for each mole (6.02×10²³ atoms) of that nuclide being present at the formation of Earth, only 4 atoms remain today.
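
    As a quick check of that arithmetic, here is a minimal Python sketch using the figures quoted in the paragraph above:

        # Check of the worked example above: a 60-million-year nuclide over the age of the Earth.
        import math

        age_of_earth = 4.58e9                                     # years
        half_life = 6e7                                           # years (60 million)
        half_lives_elapsed = math.ceil(age_of_earth / half_life)  # 77 full half-lives
        atoms_per_mole = 6.02e23
        remaining = atoms_per_mole / 2 ** half_lives_elapsed
        print(half_lives_elapsed, round(remaining))               # 77 and roughly 4 atoms per original mole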

    The four shortest-lived primordial nuclides (i.e. nuclides with the shortest half-lives) to have been indisputably experimentally verified are 232Th (1.4×10¹⁰ years), 238U (4.5×10⁹ years), 40K (1.25×10⁹ years), and 235U (7.0×10⁸ years).

    These are the 4 nuclides with half-lives comparable to, or somewhat less than, the estimated age of the universe. (232Th has a half-life slightly longer than the age of the universe.) For a complete list of the 34 known primordial radionuclides, including the next 30 with half-lives much longer than the age of the universe, see the complete list below. For practical purposes, nuclides with half-lives much longer than the age of the universe may be treated as if they were stable. 232Th and 238U have half-lives long enough that their decay is limited over geological time scales; 40K and 235U have shorter half-lives and are hence severely depleted, but are still long-lived enough to persist significantly in nature.

    The next longest-living nuclide after the end of the list given in the table is 244Pu, with a half-life of 8.08×10⁷ years. It has been reported to exist in nature as a primordial nuclide, although a later study did not detect it. Likewise, the second-longest-lived isotope not empirically identified as primordial is 146Sm, which has a half-life of 6.8×10⁷ years, about double that of the third-longest-lived such isotope 92Nb (3.5×10⁷ years). Taking into account that all these nuclides must exist for at least 4.6×10⁹ years, 244Pu must survive 57 half-lives (and hence be reduced by a factor of 2⁵⁷ ≈ 1.4×10¹⁷), 146Sm must survive 67 (and be reduced by 2⁶⁷ ≈ 1.5×10²⁰), and 92Nb must survive 130 (and be reduced by 2¹³⁰ ≈ 1.4×10³⁹). Mathematically, considering the likely initial abundances of these nuclides, 244Pu and 146Sm should persist somewhere within the Earth today, even if they are not identifiable in the relatively minor portion of the Earth's crust available to human assays, while 92Nb and all shorter-lived nuclides should not. Nuclides such as 92Nb that were present in the primordial solar nebula but have long since decayed away completely are termed extinct radionuclides if they have no other means of being regenerated.

    Because primordial chemical elements often consist of more than one primordial isotope, there are only 83 distinct primordial chemical elements. Of these, 80 have at least one observationally stable isotope and three additional primordial elements have only radioactive isotopes (bismuth, thorium, and uranium).

    Naturally occurring nuclides that are not primordial

    Some unstable isotopes which occur naturally (such as 14C, 3H, and 239Pu) are not primordial, as they must be constantly regenerated. This occurs by cosmic radiation (in the case of cosmogenic nuclides such as 14C and 3H), or (rarely) by such processes as geonuclear transmutation (neutron capture of uranium in the case of 237Np and 239Pu). Other examples of common naturally occurring but non-primordial nuclides are isotopes of radon, polonium, and radium, which are all radiogenic nuclide daughters of uranium decay and are found in uranium ores. A similar radiogenic series is derived from the long-lived radioactive primordial nuclide 232Th. These nuclides are described as geogenic, meaning that they are decay or fission products of uranium or other actinides in subsurface rocks. All such nuclides have shorter half-lives than their parent radioactive primordial nuclides. Some other geogenic nuclides do not occur in the decay chains of 232Th, 235U, or 238U but can still fleetingly occur naturally as products of the spontaneous fission of one of these three long-lived nuclides, such as 126Sn, which makes up about 10⁻¹⁴ of all natural tin.

    Primordial elements

    There are 252 stable primordial nuclides and 34 radioactive primordial nuclides, but only 80 primordial stable elements (1 through 82, i.e. hydrogen through lead, exclusive of 43 and 61, technetium and promethium respectively) and three radioactive primordial elements (bismuth, thorium, and uranium).

    Bismuth's half-life is so long that it is often classed with the 80 primordial stable elements instead, since its radioactivity is not a cause for serious concern. The number of elements is fewer than the number of nuclides, because many of the primordial elements are represented by multiple isotopes.

    Naturally occurring stable nuclides

    As noted, these number about 252.  For a complete list noting which of the "stable" 252 nuclides may be in some respect unstable, see list of nuclides and stable nuclide. These questions do not impact the question of whether a nuclide is primordial, since all "nearly stable" nuclides, with half-lives longer than the age of the universe, are also primordial.

    Radioactive primordial nuclides

    Although it is estimated that about 34 primordial nuclides are radioactive (list below), it becomes very difficult to determine the exact total number of radioactive primordials, because the total number of stable nuclides is uncertain. There exist many extremely long-lived nuclides whose half-lives are still unknown. For example, it is predicted theoretically that all isotopes of tungsten, including those indicated by even the most modern empirical methods to be stable, must be radioactive and can decay by alpha emission, but as of 2013 this could only be measured experimentally for 180W. Similarly, all four primordial isotopes of lead are expected to decay to mercury, but the predicted half-lives are so long (some exceeding 10¹⁰⁰ years) that this can hardly be observed in the near future. Nevertheless, the number of nuclides with half-lives so long that they cannot be measured with present instruments (and are considered from this viewpoint to be stable nuclides) is limited. Even when a "stable" nuclide is found to be radioactive, it merely moves from the stable to the unstable list of primordial nuclides, and the total number of primordial nuclides remains unchanged.

    List of 34 radioactive primordial nuclides and measured half-lives

    These 34 primordial nuclides represent radioisotopes of 28 distinct chemical elements (cadmium, neodymium, samarium, tellurium, uranium, and xenon each have two primordial radioisotopes). The radionuclides are listed in order of stability, with the longest half-life beginning the list. These radionuclides in many cases are so nearly stable that they compete for abundance with stable isotopes of their respective elements. For three chemical elements, indium, tellurium, and rhenium, a very long-lived radioactive primordial nuclide is found in greater abundance than a stable nuclide.

    The longest-lived radionuclide has a half-life of 2.2×10²⁴ years, which is 160 trillion times the age of the Universe. Only four of these 34 nuclides have half-lives shorter than, or equal to, the age of the universe. Most of the remaining 30 have half-lives much longer. The shortest-lived primordial isotope, 235U, has a half-life of 704 million years, about one sixth of the age of the Earth and the Solar System.

    No. | Nuclide | Energy (MeV/c²) | Half-life (years) | Decay mode | Decay energy (MeV) | Approx. ratio of half-life to age of universe
    253 | 128Te | 8.743261 | 2.2×10²⁴ | 2β | 2.530 | 160 trillion
    254 | 124Xe | 8.778264 | 1.8×10²² | KK | 2.864 | 1 trillion
    255 | 78Kr | 9.022349 | 9.2×10²¹ | KK | 2.846 | 670 billion
    256 | 136Xe | 8.706805 | 2.165×10²¹ | 2β | 2.462 | 150 billion
    257 | 76Ge | 9.034656 | 1.8×10²¹ | 2β | 2.039 | 130 billion
    258 | 130Ba | 8.742574 | 1.2×10²¹ | KK | 2.620 | 90 billion
    259 | 82Se | 9.017596 | 1.1×10²⁰ | 2β | 2.995 | 8 billion
    260 | 116Cd | 8.836146 | 3.102×10¹⁹ | 2β | 2.809 | 2 billion
    261 | 48Ca | 8.992452 | 2.301×10¹⁹ | 2β | 4.274, .0058 | 2 billion
    262 | 96Zr | 8.961359 | 2.0×10¹⁹ | 2β | 3.4 | 1 billion
    263 | 209Bi | 8.158689 | 1.9×10¹⁹ | α | 3.137 | 1 billion
    264 | 130Te | 8.766578 | 8.806×10¹⁸ | 2β | .868 | 600 million
    265 | 150Nd | 8.562594 | 7.905×10¹⁸ | 2β | 3.367 | 600 million
    266 | 100Mo | 8.933167 | 7.804×10¹⁸ | 2β | 3.035 | 600 million
    267 | 151Eu | 8.565759 | 5.004×10¹⁸ | α | 1.9644 | 300 million
    268 | 180W | 8.347127 | 1.801×10¹⁸ | α | 2.509 | 100 million
    269 | 50V | 9.055759 | 1.4×10¹⁷ | β+ or β | 2.205, 1.038 | 10 million
    270 | 113Cd | 8.859372 | 7.7×10¹⁵ | β | .321 | 600,000
    271 | 148Sm | 8.607423 | 7.005×10¹⁵ | α | 1.986 | 500,000
    272 | 144Nd | 8.652947 | 2.292×10¹⁵ | α | 1.905 | 200,000
    273 | 186Os | 8.302508 | 2.002×10¹⁵ | α | 2.823 | 100,000
    274 | 174Hf | 8.392287 | 2.002×10¹⁵ | α | 2.497 | 100,000
    275 | 115In | 8.849910 | 4.4×10¹⁴ | β | .499 | 30,000
    276 | 152Gd | 8.562868 | 1.1×10¹⁴ | α | 2.203 | 8000
    277 | 190Pt | 8.267764 | 6.5×10¹¹ | α | 3.252 | 47
    278 | 147Sm | 8.610593 | 1.061×10¹¹ | α | 2.310 | 7.7
    279 | 138La | 8.698320 | 1.021×10¹¹ | K or β | 1.737, 1.044 | 7.4
    280 | 87Rb | 9.043718 | 4.972×10¹⁰ | β | .283 | 3.6
    281 | 187Re | 8.291732 | 4.122×10¹⁰ | β | .0026 | 3
    282 | 176Lu | 8.374665 | 3.764×10¹⁰ | β | 1.193 | 2.7
    283 | 232Th | 7.918533 | 1.406×10¹⁰ | α or SF | 4.083 | 1
    284 | 238U | 7.872551 | 4.471×10⁹ | α or SF or 2β | 4.270 | 0.3
    285 | 40K | 8.909707 | 1.25×10⁹ | β or K or β+ | 1.311, 1.505, 1.505 | 0.09
    286 | 235U | 7.897198 | 7.04×10⁸ | α or SF | 4.679 | 0.05

    List legends

    No. (number) – A running positive integer for reference. These numbers may change slightly in the future, since there are 162 nuclides now classified as stable but theoretically predicted to be unstable, so that future experiments may show that some are in fact unstable. The numbering starts at 253, to follow the 252 (observationally) stable nuclides.
    Nuclide – Nuclide identifiers are given by their mass number A and the symbol for the corresponding chemical element (which implies a unique proton number).
    Energy – Mass of the average nucleon of this nuclide relative to the mass of a neutron (so all nuclides get a positive value), in MeV/c²; formally: m_n − m_nuclide/A.
    Half-life – All times are given in years.
    Decay mode – α: alpha decay; β: beta decay; K: electron capture; KK: double electron capture; β+: positron (β+) decay; SF: spontaneous fission; 2β: double beta decay; 2β+: double β+ decay; I: isomeric transition; p: proton emission; n: neutron emission.
    Decay energy – Multiple values for (maximal) decay energy in MeV are mapped to decay modes in their order.

    Abundance of the chemical elements

    From Wikipedia, the free encyclopedia

    The abundance of the chemical elements is a measure of the occurrence of the chemical elements relative to all other elements in a given environment. Abundance is measured in one of three ways: by the mass-fraction (the same as weight fraction); by the mole-fraction (fraction of atoms by numerical count, or sometimes fraction of molecules in gases); or by the volume-fraction. Volume-fraction is a common abundance measure in mixed gases such as planetary atmospheres, and is similar in value to molecular mole-fraction for gas mixtures at relatively low densities and pressures, and ideal gas mixtures. Most abundance values in this article are given as mass-fractions.

    For example, the abundance of oxygen in pure water can be measured in two ways: the mass fraction is about 89%, because that is the fraction of water's mass which is oxygen. However, the mole-fraction is about 33% because only 1 atom of 3 in water, H2O, is oxygen. As another example, looking at the mass-fraction abundance of hydrogen and helium in both the Universe as a whole and in the atmospheres of gas-giant planets such as Jupiter, it is 74% for hydrogen and 23–25% for helium; while the (atomic) mole-fraction for hydrogen is 92%, and for helium is 8%, in these environments. Changing the given environment to Jupiter's outer atmosphere, where hydrogen is diatomic while helium is not, changes the molecular mole-fraction (fraction of total gas molecules), as well as the fraction of atmosphere by volume, of hydrogen to about 86%, and of helium to 13%.
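
    As a quick check of the water example, here is a minimal Python sketch using standard atomic masses:

        # Oxygen's mass fraction vs. mole (atom) fraction in H2O.
        m_H, m_O = 1.008, 15.999                  # standard atomic masses
        mass_fraction_O = m_O / (2 * m_H + m_O)   # about 0.89, i.e. roughly 89% by mass
        mole_fraction_O = 1 / 3                   # 1 oxygen atom out of 3 atoms, about 33%
        print(round(mass_fraction_O, 3), round(mole_fraction_O, 3))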

    The abundance of chemical elements in the universe is dominated by the large amounts of hydrogen and helium which were produced in the Big Bang. Remaining elements, making up only about 2% of the universe, were largely produced by supernovae and certain red giant stars. Lithium, beryllium and boron are rare because although they are produced by nuclear fusion, they are then destroyed by other reactions in the stars. The elements from carbon to iron are relatively more abundant in the universe because of the ease of making them in supernova nucleosynthesis. Elements of higher atomic number than iron (element 26) become progressively rarer in the universe, because they increasingly absorb stellar energy in their production. Also, elements with even atomic numbers are generally more common than their neighbors in the periodic table, due to favorable energetics of formation.

    The abundance of elements in the Sun and outer planets is similar to that in the universe. Due to solar heating, the elements of Earth and the inner rocky planets of the Solar System have undergone an additional depletion of volatile hydrogen, helium, neon, nitrogen, and carbon (which volatilizes as methane). The crust, mantle, and core of the Earth show evidence of chemical segregation plus some sequestration by density. Lighter silicates of aluminum are found in the crust, with more magnesium silicate in the mantle, while metallic iron and nickel compose the core. The abundance of elements in specialized environments, such as atmospheres, or oceans, or the human body, are primarily a product of chemical interactions with the medium in which they reside.

    Universe

    The elements – that is, ordinary (baryonic) matter made of protons, neutrons, and electrons – are only a small part of the content of the Universe. Cosmological observations suggest that only 4.6% of the universe's energy (including the mass contributed by energy, E = mc² ↔ m = E / c²) comprises the visible baryonic matter that constitutes stars, planets, and living beings. The rest is thought to be made up of dark energy (68%) and dark matter (27%). These are forms of matter and energy believed to exist on the basis of scientific theory and inductive reasoning based on observations, but they have not been directly observed and their nature is not well understood.

    Most standard (baryonic) matter is found in intergalactic gas, stars, and interstellar clouds, in the form of atoms or ions (plasma), although it can be found in degenerate forms in extreme astrophysical settings, such as the high densities inside white dwarfs and neutron stars.

    Hydrogen is the most abundant element in the Universe; helium is second. However, after this, the rank of abundance does not continue to correspond to the atomic number; oxygen has abundance rank 3, but atomic number 8. All others are substantially less common.

    The abundance of the lightest elements is well predicted by the standard cosmological model, since they were mostly produced shortly (i.e., within a few hundred seconds) after the Big Bang, in a process known as Big Bang nucleosynthesis. Heavier elements were mostly produced much later, inside of stars.

    Hydrogen and helium are estimated to make up roughly 74% and 24% of all baryonic matter in the universe respectively. Despite comprising only a very small fraction of the universe, the remaining "heavy elements" can greatly influence astronomical phenomena. Only about 2% (by mass) of the Milky Way galaxy's disk is composed of heavy elements.

    These other elements are generated by stellar processes. In astronomy, a "metal" is any element other than hydrogen or helium. This distinction is significant because hydrogen and helium are the only elements that were produced in significant quantities in the Big Bang. Thus, the metallicity of a galaxy or other object is an indication of stellar activity after the Big Bang.

    In general, elements up to iron are made in large stars in the process of becoming supernovae. Iron-56 is particularly common, since it is the most stable nuclide (in that it has the highest nuclear binding energy per nucleon) and can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with increasing atomic number.

    Periodic table showing the cosmological origin of each element

    Solar system

    Most abundant nuclides in the Solar System
    Nuclide | A | Mass fraction (ppm) | Atom fraction (ppm)
    Hydrogen-1 | 1 | 705,700 | 909,964
    Helium-4 | 4 | 275,200 | 88,714
    Oxygen-16 | 16 | 9,592 | 477
    Carbon-12 | 12 | 3,032 | 326
    Nitrogen-14 | 14 | 1,105 | 102
    Neon-20 | 20 | 1,548 | 100
    Other nuclides | – | 3,879 | 149
    Silicon-28 | 28 | 653 | 30
    Magnesium-24 | 24 | 513 | 28
    Iron-56 | 56 | 1,169 | 27
    Sulfur-32 | 32 | 396 | 16
    Helium-3 | 3 | 35 | 15
    Hydrogen-2 | 2 | 23 | 15
    Neon-22 | 22 | 208 | 12
    Magnesium-26 | 26 | 79 | 4
    Carbon-13 | 13 | 37 | 4
    Magnesium-25 | 25 | 69 | 4
    Aluminium-27 | 27 | 58 | 3
    Argon-36 | 36 | 77 | 3
    Calcium-40 | 40 | 60 | 2
    Sodium-23 | 23 | 33 | 2
    Iron-54 | 54 | 72 | 2
    Silicon-29 | 29 | 34 | 2
    Nickel-58 | 58 | 49 | 1
    Silicon-30 | 30 | 23 | 1
    Iron-57 | 57 | 28 | 1
    The following graph (note log scale) shows abundance of elements in the Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass.[3] Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. Since physical laws and processes are uniform throughout the universe, however, it is expected that these galaxies will likewise have evolved similar abundances of elements.

    The abundance of elements is in keeping with their origin from the Big Bang and nucleosynthesis in a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, while the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays).

    Beginning with carbon, elements have been produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). The effect of odd-numbered chemical elements generally being more rare in the universe was empirically noticed in 1914, and is known as the Oddo-Harkins rule.

    Estimated abundances of the chemical elements in the Solar System (logarithmic scale)

    Relation to nuclear binding energy

    Loose correlations have been observed between estimated elemental abundances in the universe and the nuclear binding energy curve. Roughly speaking, the relative stability of various atomic nuclides has exerted a strong influence on the relative abundance of elements formed in the Big Bang, and during the development of the universe thereafter. See the article about nucleosynthesis for an explanation of how certain nuclear fusion processes in stars (such as carbon burning, etc.) create the elements heavier than hydrogen and helium.

    A further observed peculiarity is the jagged alternation between relative abundance and scarcity of adjacent atomic numbers in the elemental abundance curve, and a similar pattern of energy levels in the nuclear binding energy curve. This alternation is caused by the higher relative binding energy (corresponding to relative stability) of even atomic numbers compared with odd atomic numbers and is explained by the Pauli Exclusion Principle. The semi-empirical mass formula (SEMF), also called Weizsäcker's formula or the Bethe-Weizsäcker mass formula, gives a theoretical explanation of the overall shape of the curve of nuclear binding energy.
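
    For reference, the usual textbook form of that formula (quoted here from standard presentations, not from this article) is

        E_B(A, Z) ≈ a_V·A − a_S·A^(2/3) − a_C·Z(Z−1)/A^(1/3) − a_A·(A−2Z)²/A + δ(A, Z)

    where the successive terms are the volume, surface, Coulomb, and asymmetry contributions, and the pairing term δ is positive for even-even nuclei, zero for odd-A nuclei, and negative for odd-odd nuclei.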

    Earth

    The Earth formed from the same cloud of matter that formed the Sun, but the planets acquired different compositions during the formation and evolution of the solar system. In turn, the natural history of the Earth caused parts of this planet to have differing concentrations of the elements.

    The mass of the Earth is approximately 5.98×1024 kg. In bulk, by mass, it is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%); with the remaining 1.2% consisting of trace amounts of other elements.

    The bulk composition of the Earth by elemental-mass is roughly similar to the gross composition of the solar system, with the major differences being that Earth is missing a great deal of the volatile elements hydrogen, helium, neon, and nitrogen, as well as carbon which has been lost as volatile hydrocarbons. The remaining elemental composition is roughly typical of the "rocky" inner planets, which formed in the thermal zone where solar heat drove volatile compounds into space. The Earth retains oxygen as the second-largest component of its mass (and largest atomic-fraction), mainly from this element being retained in silicate minerals which have a very high melting point and low vapor pressure.

    Crust

    Abundance (atom fraction) of the chemical elements in Earth's upper continental crust as a function of atomic number. The rarest elements in the crust (shown in yellow) are rare due to a combination of factors: all but one are the densest siderophiles (iron-loving) elements in the Goldschmidt classification, meaning they have a tendency to mix well with metallic iron, depleting them by being relocated deeper into the Earth's core. Their abundance in meteoroids is higher. Additionally, tellurium has been depleted by preaccretional sorting in the nebula via formation of volatile hydrogen telluride.

    The mass-abundance of the nine most abundant elements in the Earth's crust is approximately: oxygen 46%, silicon 28%, aluminum 8.3%, iron 5.6%, calcium 4.2%, sodium 2.5%, magnesium 2.4%, potassium 2.0%, and titanium 0.61%. Other elements occur at less than 0.15%. For a complete list, see abundance of elements in Earth's crust.

    The graph at right illustrates the relative atomic-abundance of the chemical elements in Earth's upper continental crust—the part that is relatively accessible for measurements and estimation.

    Many of the elements shown in the graph are classified into (partially overlapping) categories:

    1. rock-forming elements (major elements in green field, and minor elements in light green field);
    2. rare earth elements (lanthanides, La-Lu, Sc and Y; labeled in blue);
    3. major industrial metals (global production >~3×10⁷ kg/year; labeled in red);
    4. precious metals (labeled in purple);
    5. the nine rarest "metals" – the six platinum group elements plus Au, Re, and Te (a metalloid) – in the yellow field. These are rare in the crust from being soluble in iron and thus concentrated in the Earth's core. Tellurium is the single most depleted element in the silicate Earth relative to cosmic abundance, because in addition to being concentrated as dense chalcogenides in the core it was severely depleted by preaccretional sorting in the nebula as volatile hydrogen telluride.

    Note that there are two breaks where the unstable (radioactive) elements technetium (atomic number 43) and promethium (atomic number 61) would be. These elements are surrounded by stable elements, yet both have relatively short half lives (~4 million years and ~18 years respectively). These are thus extremely rare, since any primordial initial fractions of these in pre-Solar System materials have long since decayed. These two elements are now only produced naturally through the spontaneous fission of very heavy radioactive elements (for example, uranium, thorium, or the trace amounts of plutonium that exist in uranium ores), or by the interaction of certain other elements with cosmic rays. Both technetium and promethium have been identified spectroscopically in the atmospheres of stars, where they are produced by ongoing nucleosynthetic processes.

    There are also breaks in the abundance graph where the six noble gases would be, since they are not chemically bound in the Earth's crust, and they are only generated by decay chains from radioactive elements in the crust, and are therefore extremely rare there.

    The eight naturally occurring very rare, highly radioactive elements (polonium, astatine, francium, radium, actinium, protactinium, neptunium, and plutonium) are not included, since any of these elements that were present at the formation of the Earth decayed away eons ago; the negligible quantities present today are produced only by the radioactive decay of uranium and thorium.

    Oxygen and silicon are notably the most common elements in the crust. On Earth and in rocky planets in general, silicon and oxygen are far more common than their cosmic abundance. The reason is that they combine with each other to form silicate minerals. In this way, they are the lightest of all of the two-percent "astronomical metals" (i.e., non-hydrogen and helium elements) to form a solid that is refractory to the Sun's heat, and thus cannot boil away into space. All elements lighter than oxygen have been removed from the crust in this way, as have the heavier chalcogens sulfur, selenium and tellurium.

    Rare-earth elements

    "Rare" earth elements is a historical misnomer. The persistence of the term reflects unfamiliarity rather than true rarity. The more abundant rare earth elements are similarly concentrated in the crust compared to commonplace industrial metals such as chromium, nickel, copper, zinc, molybdenum, tin, tungsten, or lead. The two least abundant rare earth elements (thulium and lutetium) are nearly 200 times more common than gold. However, in contrast to the ordinary base and precious metals, rare earth elements have very little tendency to become concentrated in exploitable ore deposits. Consequently, most of the world's supply of rare earth elements comes from only a handful of sources. Furthermore, the rare earth metals are all quite chemically similar to each other, and they are thus quite difficult to separate into quantities of the pure elements.

    Differences in abundances of individual rare earth elements in the upper continental crust of the Earth represent the superposition of two effects, one nuclear and one geochemical. First, the rare earth elements with even atomic numbers (58Ce, 60Nd, ...) have greater cosmic and terrestrial abundances than the adjacent rare earth elements with odd atomic numbers (57La, 59Pr, ...). Second, the lighter rare earth elements are more incompatible (because they have larger ionic radii) and therefore more strongly concentrated in the continental crust than the heavier rare earth elements. In most rare earth ore deposits, the first four rare earth elements – lanthanum, cerium, praseodymium, and neodymium – constitute 80% to 99% of the total amount of rare earth metal that can be found in the ore.

    Mantle

    The mass-abundance of the eight most abundant elements in the Earth's mantle (see main article above) is approximately: oxygen 45%, magnesium 23%, silicon 22%, iron 5.8%, calcium 2.3%, aluminum 2.2%, sodium 0.3%, potassium 0.3%.

    Core

    Due to mass segregation, the core of the Earth is believed to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements.

    Ocean

    The most abundant elements in the ocean by proportion of mass in percent are oxygen (85.84), hydrogen (10.82), chlorine (1.94), sodium (1.08), magnesium (0.1292), sulfur (0.091), calcium (0.04), potassium (0.04), bromine (0.0067), carbon (0.0028), and boron (0.00043).

    Atmosphere

    The order of elements by volume-fraction (which is approximately molecular mole-fraction) in the atmosphere is nitrogen (78.1%), oxygen (20.9%), argon (0.96%), followed by (in uncertain order) carbon and hydrogen because water vapor and carbon dioxide, which represent most of these two elements in the air, are variable components. Sulfur, phosphorus, and all other elements are present in significantly lower proportions.

    According to the abundance curve graph (above right), argon, a significant if not major component of the atmosphere, does not appear in the crust at all. This is because the atmosphere has a far smaller mass than the crust, so argon remaining in the crust contributes little to mass-fraction there, while at the same time buildup of argon in the atmosphere has become large enough to be significant.

    Human body

    By mass, human cells consist of 65–90% water (H2O), and a significant portion of the remainder is composed of carbon-containing organic molecules. Oxygen therefore contributes a majority of a human body's mass, followed by carbon. Almost 99% of the mass of the human body is made up of six elements: hydrogen (H), carbon (C), nitrogen (N), oxygen (O), calcium (Ca), and phosphorus (P) (CHNOPS for short). The next 0.75% is made up of the next five elements: potassium (K), sulfur (S), chlorine (Cl), sodium (Na), and magnesium (Mg). Only 17 elements are known for certain to be necessary to human life, with one additional element (fluorine) thought to be helpful for tooth enamel strength. A few more trace elements may play some role in the health of mammals. Boron and silicon are notably necessary for plants but have uncertain roles in animals. The elements aluminium and silicon, although very common in the earth's crust, are conspicuously rare in the human body.

    Below is a periodic table highlighting nutritional elements.

    Nutritional elements in the periodic table (figure). The legend distinguishes: the four basic organic elements; quantity elements; essential trace elements; elements deemed essential trace elements by the U.S. but not by the European Union; elements with a suggested function from deprivation effects or active metabolic handling, but no clearly identified biochemical function in humans; elements with limited circumstantial evidence for trace benefits or biological action in mammals; and elements with no evidence of biological action in mammals but essential in some lower organisms.
