
Friday, November 15, 2024

Zero-sum game

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Zero-sum_game

Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other. In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero.

If the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a larger piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally. Other examples of zero-sum games in daily life include games like poker, chess, sports and bridge, where one person gains and another person loses, resulting in a net benefit of zero for the players as a whole. In financial markets, futures contracts and options are zero-sum games as well.

In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. A zero-sum game is also called a strictly competitive game, while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with the minimax theorem, which is closely related to linear programming duality, or with the Nash equilibrium. The Prisoner's Dilemma is a classic non-zero-sum game.

Definition


           Choice 1   Choice 2
Choice 1   −A, A      B, −B
Choice 2   C, −C      −D, D

Generic zero-sum game

           Option 1   Option 2
Option 1   2, −2      −2, 2
Option 2   −2, 2      2, −2

Another example of the classic zero-sum game

The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal. Generally, any game where all strategies are Pareto optimal is called a conflict game.

Zero-sum games are a specific example of constant sum games where the sum of each outcome is always zero. Such games are distributive, not integrative; the pie cannot be enlarged by good negotiation.

In situations where one decision maker's gain (or loss) does not necessarily result in the other decision makers' loss (or gain), the game is referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for its excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

The idea of the Pareto optimal payoff in a zero-sum game gives rise to a generalized relative selfish rationality standard, the punishing-the-opponent standard, where both players always seek to minimize the opponent's payoff at a favourable cost to themselves, rather than simply prefer more over less. The punishing-the-opponent standard can be used in both zero-sum games (e.g. warfare games, chess) and non-zero-sum games (e.g. pooling selection games). Each player simply wishes to maximize his own profit, while the opponent wishes to minimize it.

Solution

For two-player finite zero-sum games, if the players are allowed to play a mixed strategy, the game always has an equilibrium solution. The different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. Note that this is not true for pure strategies.

Example

A two-person zero-sum game. Each cell lists Blue's payoff first, then Red's payoff.

          Blue A     Blue B     Blue C
Red 1     −30, 30    10, −10    −20, 20
Red 2     10, −10    −20, 20    20, −20

A game's payoff matrix is a convenient representation. As an example, consider the two-player zero-sum game shown in the table above.

The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.

Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points.

In this example game, both players know the payoff matrix and attempt to maximize the number of their points. Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, and with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. If Blue anticipates Red's reasoning and choice of action 1, Blue may choose action B, so as to win 10 points. If Red, in turn, anticipates this trick and goes for action 2, this wins Red 20 points.

Émile Borel and John von Neumann had the fundamental insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem whose solution gives the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, and Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will then win 20/7 points on average per game.
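As a quick arithmetic check of this solution against the payoff table above: with Red mixing 4/7 on action 1 and 3/7 on action 2, Red's expected gain is (4/7)(30) + (3/7)(−10) = 90/7 if Blue plays A, (4/7)(−10) + (3/7)(20) = 20/7 if Blue plays B, and (4/7)(20) + (3/7)(−20) = 20/7 if Blue plays C. Blue therefore puts no weight on A, and against B or C Red wins 20/7 points on average, so neither player can improve by deviating unilaterally.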

Solving

The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element Mi,j is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (Raghavan 1994, p. 740) by solving the following linear program to find a vector u:

Minimize:

1ᵀ u    (that is, the sum of the elements of u)

Subject to the constraints:

u ≥ 0
M u ≥ 1.

The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each possible pure strategy.

If the game matrix does not have all positive elements, add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant and will not affect the equilibrium mixed strategies.

The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Alternatively, it can be found by using the above procedure to solve a modified payoff matrix which is the transpose and negation of M (adding a constant so it is positive), then solving the resulting game.
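As an illustration, here is a minimal sketch (not from the article) of this linear-programming recipe in Python, using scipy.optimize.linprog and the Red/Blue game from the Example section; the matrix orientation (rows for the minimizing player, Blue; columns for the maximizing player, Red) follows the convention above.

```python
# Minimal sketch of the LP solution described above (assumes SciPy is available).
import numpy as np
from scipy.optimize import linprog

# M[i, j]: payoff to the maximizing player (Red) when the minimizing player
# (Blue) picks row i (A, B, C) and Red picks column j (1, 2).
M = np.array([[ 30, -10],
              [-10,  20],
              [ 20, -20]], dtype=float)

# Shift so every element is positive; this raises the game's value by the
# same constant but leaves the equilibrium mixed strategies unchanged.
shift = 1 - M.min()
Mp = M + shift

# Solve: minimize sum(u) subject to Mp @ u >= 1, u >= 0.
res = linprog(c=np.ones(Mp.shape[1]),
              A_ub=-Mp, b_ub=-np.ones(Mp.shape[0]),
              bounds=[(0, None)] * Mp.shape[1])

u = res.x
value = 1.0 / u.sum() - shift   # value of the original game
strategy = u / u.sum()          # maximizing player's (Red's) mixed strategy

print(value)     # ≈ 2.857 = 20/7, Red's average win per game
print(strategy)  # ≈ [0.571, 0.429] = [4/7, 3/7]
```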

If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations and thus such games are equivalent to linear programs, in general.

Universal solution

If avoiding a zero-sum game is an action available to the players with some probability, then avoiding is always an equilibrium strategy for at least one player. For any two-player zero-sum game in which a zero-zero draw is impossible or non-credible once play has started, such as poker, there is no Nash equilibrium strategy other than avoiding the play. Even if a credible zero-zero draw is possible after a zero-sum game has started, it is no better than the avoiding strategy. In this sense, it is interesting to find that reward-as-you-go in optimal choice computation prevails over all two-player zero-sum games with respect to whether or not to start the game.

The most common or simple example from the subfield of social psychology is the concept of "social traps". In some cases pursuing individual personal interest can enhance the collective well-being of the group, but in other situations, all parties pursuing personal interest results in mutually destructive behaviour.

Copeland's review notes that an n-player non-zero-sum game can be converted into an (n + 1)-player zero-sum game, where the (n + 1)th player, denoted the fictitious player, receives the negative of the sum of the gains of the other n players (the global gain or loss).

Zero-sum three-person games

Zero-sum three-person game

There are manifold relationships between players in a zero-sum three-person game. In a zero-sum two-person game, anything one player wins is necessarily lost by the other, and vice versa; there is always an absolute antagonism of interests, and the same is true in the three-person game. A particular move of a player in a zero-sum three-person game may be clearly beneficial to him while harming both other players, or it may benefit one opponent and harm the other. In particular, parallelism of interests between two players makes cooperation desirable; it may happen that a player has a choice among various policies: to enter into a parallelism of interests with another player by adjusting his conduct, or the opposite; he may also be able to choose with which of the other two players he prefers to build such a parallelism, and to what extent. The figure above shows a typical example of a zero-sum three-person game. If Player 1 chooses defence but Players 2 and 3 choose offence, both of the latter will gain one point, while Player 1 will lose two points because points are taken away by the other players; it is evident that Players 2 and 3 have a parallelism of interests.

Real life example

Economic benefits of low-cost airlines in saturated markets – net benefits or a zero-sum game

Studies show that the entry of low-cost airlines into the Hong Kong market brought in $671 million in revenue and resulted in an outflow of $294 million.

Therefore, the replacement effect should be considered when introducing a new model, since it leads to both economic leakage and injection; introducing new models thus requires caution. For example, if the number of new airlines departing from and arriving at the airport is the same, the economic contribution to the host city may be a zero-sum game, because for Hong Kong, the spending of overseas tourists in Hong Kong is income, while the spending of Hong Kong residents in the counterpart cities is an outflow. In addition, the introduction of new airlines can also have a negative impact on existing airlines.

Consequently, when a new aviation model is introduced, feasibility tests need to be carried out in all aspects, taking into account the economic inflow and outflow and displacement effects caused by the model.

Zero-sum games in financial markets

Derivatives trading may be considered a zero-sum game, as each dollar gained by one party in a transaction must be lost by the other, hence yielding a net transfer of wealth of zero.

An options contract – whereby a buyer purchases a derivative contract which provides them with the right to buy an underlying asset from a seller at a specified strike price before a specified expiration date – is an example of a zero-sum game. A futures contract – whereby a buyer purchases a derivative contract to buy an underlying asset from the seller for a specified price on a specified date – is also an example of a zero-sum game. This is because the fundamental principle of these contracts is that they are agreements between two parties, and any gain made by one party must be matched by a loss sustained by the other.

If the price of the underlying asset increases before the expiration date, the buyer may exercise or close the options or futures contract. The buyer's gain and the corresponding seller's loss will be the difference between the strike price and the value of the underlying asset at that time. Hence, the net transfer of wealth is zero.
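A hypothetical worked example (the numbers are illustrative, not from the article): if a call option has a strike price of $100 and the underlying asset trades at $120 when the buyer exercises, the buyer gains $120 − $100 = $20 per share and the seller loses the same $20; together with the premium, which is itself a fixed transfer from buyer to seller at the outset, every dollar gained on the position is a dollar lost by the counterparty.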

Swaps, which involve the exchange of cash flows from two different financial instruments, are also considered a zero-sum game. Consider a standard interest rate swap whereby Firm A pays a fixed rate and receives a floating rate; correspondingly Firm B pays a floating rate and receives a fixed rate. If rates increase, then Firm A will gain, and Firm B will lose by the rate differential (floating rate – fixed rate). If rates decrease, then Firm A will lose, and Firm B will gain by the rate differential (fixed rate – floating rate).
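As a hypothetical illustration of the swap arithmetic (figures not from the article): on a swap with a $100 million notional, a 3% fixed rate, and a floating rate that settles at 4%, Firm A receives (4% − 3%) × $100 million = $1 million for the period, and Firm B pays exactly that $1 million, so the two legs sum to zero.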

Whilst derivatives trading may be considered a zero-sum game, it is important to remember that this is not an absolute truth. The financial markets are complex and multifaceted, with a range of participants engaging in a variety of activities. While some trades may result in a simple transfer of wealth from one party to another, the market as a whole is not purely competitive, and many transactions serve important economic functions.

The stock market is an excellent example of a positive-sum game, often erroneously labelled as a zero-sum game. This is a zero-sum fallacy: the perception that one trader in the stock market may only increase the value of their holdings if another trader decreases their holdings.

The primary goal of the stock market is to match buyers and sellers, but the prevailing price is the one which equilibrates supply and demand. Stock prices generally move according to changes in future expectations, such as acquisition announcements, upside earnings surprises, or improved guidance.

For instance, if Company C announces a deal to acquire Company D, and investors believe that the acquisition will result in synergies and hence increased profitability for Company C, there will be an increased demand for Company C stock. In this scenario, all existing holders of Company C stock will enjoy gains without incurring any corresponding measurable losses to other players.

Furthermore, in the long run, the stock market is a positive-sum game. As economic growth occurs, demand increases, output increases, companies grow, and company valuations increase, leading to value creation and wealth addition in the market.

Complexity

It has been theorized by Robert Wright in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent.

Extensions

In 1944, John von Neumann and Oskar Morgenstern proved that any non-zero-sum game for n players is equivalent to a zero-sum game with n + 1 players; the (n + 1)th player representing the global profit or loss.
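In symbols: if the original n players receive payoffs g1, …, gn in some outcome, the fictitious (n + 1)th player is assigned the payoff g(n+1) = −(g1 + g2 + ⋯ + gn), so that the payoffs of the enlarged game always sum to zero.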

Misunderstandings

Zero-sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games.

Politics is sometimes called zero-sum because in common usage the idea of a stalemate is perceived to be "zero-sum"; politics and macroeconomics are not zero-sum games, however, because they do not constitute conserved systems.

Zero-sum thinking

In psychology, zero-sum thinking refers to the perception that a given situation is like a zero-sum game, where one person's gain is equal to another person's loss.

Homeokinetics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Homeokinetics

Homeokinetics is the study of self-organizing, complex systems. Standard physics studies systems at separate levels, such as atomic physics, nuclear physics, biophysics, social physics, and galactic physics. Homeokinetic physics studies the up-down processes that bind these levels. Tools such as mechanics, quantum field theory, and the laws of thermodynamics provide the key relationships. The subject, described as the physics and thermodynamics associated with the up-down movement between levels of systems, originated in the late 1970s work of American physicists Harry Soodak and Arthur Iberall. Complex systems are universes, galaxies, social systems, people, or even systems that seem as simple as gases. The basic premise is that the entire universe consists of atomistic-like units bound in interactive ensembles to form systems, level by level, in a nested hierarchy. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity in studying how they work is reduced by the emergence of common languages in all complex systems.

History

Arthur Iberall, Warren McCulloch and Harry Soodak developed the concept of homeokinetics as a new branch of physics. It began through Iberall's biophysical research for the NASA exobiology program into the dynamics of mammalian physiological processes. They were observing an area that physics has neglected: that of complex systems, with their very long internal factory-day delays. They were observing systems associated with nested hierarchy and with an extensive range of time-scale processes. It was such connections, referred to as both up-down or in-out connections (as nested hierarchy) and side-side or flatland physics among atomistic-like components (as heterarchy), that became the hallmark of homeokinetic problems. By 1975, they began to put a formal catch-phrase name on those complex problems, associating them with nature, life, humans, mind, and society. The major method of exposition that they began using was a combination of engineering physics and a more academic pure physics. In 1981, Iberall was invited to the Crump Institute for Medical Engineering at UCLA, where he further refined the key concepts of homeokinetics, developing a physical scientific foundation for complex systems.

Self-organizing complex systems

A system is a collective of interacting 'atomistic'-like entities. The word 'atomism' is used to stand both for the entity and the doctrine. As is known from 'kinetic' theory, in mobile or simple systems the atomisms share their 'energy' in interactive collisions. That so-called 'equipartitioning' process takes place within a few collisions. Physically, if there is little or no interaction, the process is considered to be very weak. Physics deals basically with the forces of interaction, few in number, that influence the interactions. They all tend to emerge with considerable force at high 'density' of atomistic interaction. In complex systems, there is also a result of internal processes in the atomisms: in addition to the pair-by-pair interactions, they exhibit internal actions such as vibrations, rotations, and association. If the energy and time involved internally create a cycle of performance of their actions that is very long compared to their pair interactions, the collective system is complex. If you eat a cookie and you do not see the resulting action for hours, that is complex; if boy meets girl and they become 'engaged' for a protracted period, that is complex. What emerges from that physics is a broad host of changes in state and stability transitions in state. Viewing Aristotle as having defined a general basis for systems in their static-logical states and as trying to identify a logic-metalogic for physics (e.g., metaphysics), homeokinetics can be viewed as an attempt to define the dynamics of all those systems in the universe.

Flatland physics vs. homeokinetic physics

Ordinary physics is a flatland physics, a physics at some particular level. Examples include nuclear and atomic physics, biophysics, social physics, and stellar physics. Homeokinetic physics combines flatland physics with the study of the up-down processes that bind the levels. Tools such as mechanics, quantum field theory, and the laws of thermodynamics provide key relationships for the binding of the levels: how they connect and how the energy flows up and down. Whether the atomisms are atoms, molecules, cells, people, stars, galaxies, or universes, the same tools can be used to understand them. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity in studying how they work is reduced by the emergence of common languages in all complex systems.

Applications

A homeokinetic approach to complex systems has been applied to understanding life, ecological psychology, mind, anthropology, geology, law, motor control, bioenergetics, healing modalities, and political science.

It has also been applied to social physics, where a homeokinetic analysis shows that one must account for flow variables such as the flow of energy, of materials, of action, reproduction rate, and value-in-exchange. Iberall's conjectures on life and mind have been used as a springboard to develop theories of mental activity and action.

Geophysics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Geophysics
Age of the sea floor. Much of the dating information comes from magnetic anomalies.
Computer simulation of the Earth's magnetic field in a period of normal polarity between reversals

Geophysics (/ˌdʒiːoʊˈfɪzɪks/) is a subject of natural science concerned with the physical processes and physical properties of the Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists, who usually study geophysics, physics, or one of the Earth sciences at the graduate level, complete investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic, and electromagnetic fields; its internal structure and composition; and its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics; and analogous problems associated with the Moon and other planets.

Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinoxes, and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics.

Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. In exploration geophysics, geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, locate groundwater, find archaeological relics, determine the thickness of glaciers and soils, and assess sites for environmental remediation.

Physical phenomena

Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences, while some geophysicists conduct research in the planetary sciences. To provide a clearer idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth and its surroundings. Geophysicists also investigate the physical processes and properties of the Earth, its fluid layers, and its magnetic field, along with the near-Earth environment in the Solar System, which includes other planetary bodies.

Gravity

A map of deviations in gravity from a perfectly smooth, idealized Earth

The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, successive high tides (and successive low tides) are separated by half a lunar day, a gap of 12 hours and 25 minutes.

Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see gravity anomaly and gravimetry). The surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals).

Heat flow

A model of thermal convection in the Earth's mantle. The thin red columns are mantle plumes.

The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are primordial heat, due to Earth's cooling, and radioactivity in the planet's upper crust. There are also some contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers – the core–mantle boundary and the lithosphere – in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy.

Vibrations

Illustration of the deformations of a block by body waves and surface waves (see seismic wave)

Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection.

Recording of seismic waves from controlled sources provides information on the region that the waves travel through. If the density or composition of the rock changes, waves are reflected. Reflections recorded using reflection seismology can provide a wealth of information on the structure of the Earth up to several kilometers deep and are used to increase our understanding of the geology as well as to explore for oil and gas. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth.

Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering.

Electricity

Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 volts per meter. The ionization of the planet's atmosphere by penetrating galactic cosmic rays leaves it with a net positive charge relative to the solid Earth. A current of about 1800 amperes flows in the global circuit. It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above.

A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of human-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field. The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography).

Electromagnetic waves

Electromagnetic waves occur in the ionosphere and magnetosphere as well as in Earth's outer core. Dawn chorus is believed to be caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics).

In the highly conductive liquid iron of the outer core, magnetic fields are generated by electric currents through electromagnetic induction. Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the Earth's magnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation.

Electromagnetic methods that are used for geophysical survey include transient electromagnetics, magnetotellurics, surface nuclear magnetic resonance and electromagnetic seabed logging.

Magnetism

The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the outer core. The magnetic field in the upper atmosphere gives rise to the auroras.

Earth's dipole axis (pink line) is tilted away from the rotational axis (blue line).

The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging 440,000 to a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals, analyzed within a geomagnetic polarity time scale, comprise 184 polarity intervals in the last 83 million years, with the frequency of reversals changing over time; the most recent brief complete reversal, the Laschamp event, occurred 41,000 years ago during the last glacial period. Geologists observe geomagnetic reversals recorded in volcanic rocks through magnetostratigraphic correlation (see natural remanent magnetization), and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. They are the basis of magnetostratigraphy, which correlates magnetic reversals with other stratigraphies to construct geologic time scales. In addition, the magnetization in rocks can be used to measure the motion of continents.

Radioactivity

Example of a radioactive decay chain (see radiometric dating)

Radioactive decay accounts for about 80% of the Earth's internal heat, powering the geodynamo and plate tectonics. The main heat-producing isotopes are potassium-40, uranium-238, uranium-235, and thorium-232. Radioactive elements are used for radiometric dating, the primary method for establishing an absolute time scale in geochronology.

Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras. Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration.
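As a sketch of the underlying decay law (standard physics, not spelled out in the article): a parent isotope with decay constant λ = ln 2 / t½ obeys N(t) = N₀ e^(−λt), so measuring the surviving fraction N/N₀ in a sample gives its age as t = ln(N₀/N) / λ.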

Fluid dynamics

Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals. This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo.

Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect. In the atmosphere, it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean, it drives large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns.

Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics.

Mineral physics

The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals; their high-pressure phase diagrams, melting points and equations of state at high pressure; and the rheological properties of rocks, or their ability to flow. Deformation of rocks by creep makes flow possible, although over short times the rocks are brittle. The viscosity of rocks is affected by temperature and pressure, and in turn determines the rates at which tectonic plates move.

Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans.

The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost).

Regions of the Earth

Size and form of the Earth

Contrary to popular belief, the Earth is not entirely spherical but instead generally exhibits an ellipsoid shape, which is a result of the centrifugal forces the planet generates due to its rotation. These forces cause the planet's diameter to bulge towards the Equator, producing the ellipsoid shape. Earth's shape is constantly changing, and different factors, including glacial isostatic rebound (large ice sheets melting, causing the Earth's crust to rebound due to the release of pressure), geological features such as mountains or ocean trenches, tectonic plate dynamics, and natural disasters, can further distort the planet's shape.

Structure of the interior

Seismic velocities and boundaries in the interior of the Earth sampled by seismic waves

Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior – its composition, density, temperature, and pressure. For example, the Earth's mean specific gravity (5.515) is far higher than the typical specific gravity of rocks at the surface (2.7–3.3), implying that the deeper material is denser. This is also implied by its low moment of inertia (0.33 MR², compared to 0.4 MR² for a sphere of constant density). However, some of the density increase is compression under the enormous pressures inside the Earth. The effect of pressure can be calculated using the Adams–Williamson equation. The conclusion is that pressure alone cannot account for the increase in density. Instead, we know that the Earth's core is composed of an alloy of iron and other elements.
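A small numeric check of the mean-density argument (a sketch using standard textbook values for Earth's mass and radius, which are not given in the article):

```python
# Check that Earth's mean density far exceeds that of surface rocks.
import math

M = 5.972e24   # Earth's mass in kg (standard value, assumed here)
R = 6.371e6    # Earth's mean radius in m (standard value, assumed here)

volume = (4.0 / 3.0) * math.pi * R**3   # volume of a sphere, m^3
mean_density = M / volume               # kg/m^3

# Specific gravity = density relative to water (1000 kg/m^3).
print(mean_density / 1000.0)  # ≈ 5.5, versus 2.7–3.3 for surface rocks

# A uniform sphere has moment of inertia 0.4*M*R^2; Earth's measured
# 0.33*M*R^2 is lower, so mass must be concentrated toward the center.
```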

Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The motion of this highly conductive liquid generates the Earth's magnetic field. Earth's inner core, however, is solid because of the enormous pressure.

Reconstruction of seismic reflections in the deep interior indicates some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity.

The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the preliminary reference Earth model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are consistent with phase transitions.

The mantle acts as a solid for seismic waves, but under high pressures and temperatures, it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible.

Magnetosphere

Schematic of Earth's magnetosphere. The solar wind flows from left to right.

If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere. Early space probes mapped out the gross dimensions of the Earth's magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles called the Van Allen radiation belts.

Methods

Geodesy

Geophysical measurements are generally at a particular time and place. Accurate measurements of position, along with earth deformation and gravity, are the province of geodesy. While geodesy and geophysics are separate fields, the two are so closely connected that many scientific organizations such as the American Geophysical Union, the Canadian Geophysical Union and the International Union of Geodesy and Geophysics encompass both.

Absolute positions are most frequently determined using the global positioning system (GPS). A three-dimensional position is calculated using messages from four or more visible satellites and referred to the 1980 Geodetic Reference System. An alternative, optical astronomy, combines astronomical coordinates and the local gravity vector to get geodetic coordinates. This method only provides the position in two coordinates and is more difficult to use than GPS. However, it is useful for measuring motions of the Earth such as nutation and Chandler wobble. Relative positions of two or more points can be determined using very-long-baseline interferometry.

Gravity measurements became part of geodesy because they were needed to relate measurements at the surface of the Earth to the reference coordinate system. Gravity measurements on land can be made using gravimeters deployed either on the surface or in helicopter flyovers. Since the 1960s, the Earth's gravity field has been measured by analyzing the motion of satellites. Sea level can also be measured by satellites using radar altimetry, contributing to a more accurate geoid. In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), in which twin satellites map variations in Earth's gravity field by making measurements of the distance between the two satellites using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents, runoff and groundwater depletion, and melting ice sheets and glaciers.

Satellites and space probes

Satellites in space have made it possible to collect data not only from the visible light region but also from other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics.

Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.

Global positioning systems (GPS) and geographical information systems (GIS)

Since geophysics is concerned with the shape of the Earth, and by extension the mapping of features around and in the planet, geophysical measurements include high-accuracy GPS measurements. These measurements are processed to increase their accuracy through differential GPS processing. Once the geophysical measurements have been processed and inverted, the interpreted results are plotted using GIS. Programs such as ArcGIS and Geosoft were built to meet these needs and include many built-in geophysical functions, such as upward continuation and the calculation of measurement derivatives, such as the first vertical derivative. Many geophysics companies have designed in-house geophysics programs that pre-date ArcGIS and Geosoft in order to meet the visualization requirements of a geophysical dataset.

Remote sensing

Exploration geophysics is a branch of applied geophysics that involves the development and utilization of different seismic or electromagnetic methods with the aim of investigating different energy, mineral and water resources. This is done through the use of various remote sensing platforms such as satellites, aircraft, boats, drones, borehole sensing equipment and seismic receivers. This equipment is often used in conjunction with different geophysical methods, such as magnetic, gravimetric, electromagnetic, radiometric and barometric methods, in order to gather the data. The remote sensing platforms used in exploration geophysics are not perfect and need adjustments in order to accurately account for the effects that the platform itself may have on the collected data. For example, when gathering aeromagnetic data (aircraft-gathered magnetic data) using a conventional fixed-wing aircraft, the platform has to be adjusted to account for the electromagnetic currents that it may generate as it passes through Earth's magnetic field. There are also corrections related to changes in measured potential field intensity as the Earth rotates, as the Earth orbits the Sun, and as the Moon orbits the Earth.

Signal processing

Geophysical measurements are often recorded as time series with GPS location. Signal processing involves the correction of time-series data for unwanted noise or errors introduced by the measurement platform, such as aircraft vibrations in gravity data. It also involves the reduction of sources of noise, such as diurnal corrections in magnetic data. In seismic, electromagnetic, and gravity data, processing continues after error corrections to include computational geophysics, which results in the final interpretation of the geophysical data into a geological interpretation of the measurements.

History

Geophysics emerged as a separate discipline only in the 19th century, from the intersection of physical geography, geology, astronomy, meteorology, and physics. The first known use of the word geophysics was in German ("Geophysik") by Julius Fröbel in 1834. However, many geophysical phenomena – such as the Earth's magnetic field and earthquakes – have been investigated since the ancient era.

Ancient and classical eras

Replica of Zhang Heng's seismoscope, possibly the first contribution to seismology

The magnetic compass existed in China back as far as the fourth century BC. It was used as much for feng shui as for navigation on land. It was not until good steel needles could be forged that compasses were used for navigation at sea; before that, they could not retain their magnetism long enough to be useful. The first mention of a compass in Europe was in 1190 AD.

Around 240 BC, Eratosthenes of Cyrene deduced that the Earth was round and measured its circumference with great precision. He developed a system of latitude and longitude.

Perhaps the earliest contribution to seismology was the invention of a seismoscope by the prolific inventor Zhang Heng in 132 AD. This instrument was designed to drop a bronze ball from the mouth of a dragon into the mouth of a toad. By looking at which of eight toads had the ball, one could determine the direction of the earthquake. It was 1571 years before the first design for a seismoscope was published in Europe, by Jean de la Hautefeuille. It was never built.

Beginnings of modern science

The 17th century saw major milestones that marked the beginning of modern science. In 1600, William Gilbert released a publication titled De Magnete, in which he described a series of experiments on both natural magnets (called 'loadstones') and artificially magnetized iron. His experiments led to observations with a small compass needle (versorium) that replicated magnetic behaviours when subjected to a spherical magnet, including 'magnetic dips' when it was pivoted on a horizontal axis. His findings led to the deduction that compasses point north because the Earth itself is a giant magnet.

In 1687, Isaac Newton published his work titled Principia, which was pivotal in the development of modern scientific fields such as astronomy and physics. In it, Newton both laid the foundations for classical mechanics and gravitation and explained different geophysical phenomena such as the precession of the equinoxes (the slow drift of the whole pattern of stars along the ecliptic). Newton's theory of gravity was so successful that it changed the main objective of physics in that era to unravelling nature's fundamental forces and their characterization in laws.

The first seismometer, an instrument capable of keeping a continuous record of seismic activity, was built by James Forbes in 1844.

Gulf Stream

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Gulf_Stre...