
Saturday, November 16, 2024

Rogue wave

From Wikipedia, the free encyclopedia
A merchant ship in heavy seas as a large wave looms ahead, Bay of Biscay, c. 1940

Rogue waves (also known as freak waves or killer waves) are large and unpredictable surface waves that can be extremely dangerous to ships and isolated structures such as lighthouses. They are distinct from tsunamis, which are long-wavelength waves that are often almost unnoticeable in deep water and are caused by the displacement of water by other phenomena (such as earthquakes). A rogue wave at the shore is sometimes called a sneaker wave.

In oceanography, rogue waves are more precisely defined as waves whose height is more than twice the significant wave height (Hs or SWH), which is itself defined as the mean of the largest third of waves in a wave record. Rogue waves do not appear to have a single distinct cause but occur where physical factors such as high winds and strong currents cause waves to merge to create a single large wave. Recent research suggests sea state crest-trough correlation leading to linear superposition may be a dominant factor in predicting the frequency of rogue waves.
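As a concrete illustration of this definition, the following minimal Python sketch computes the significant wave height as the mean of the largest third of waves in a record and flags any individual wave exceeding twice that value. The wave record here is synthetic and the whole routine is an illustrative assumption, not a standard oceanographic analysis tool.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical wave record: ~300 background waves with Rayleigh-distributed
    # crest-to-trough heights (metres), plus one unusually large 10 m wave.
    heights = np.append(rng.rayleigh(scale=2.0, size=300), 10.0)

    # Significant wave height (Hs): mean of the largest third of waves in the record.
    top_third = np.sort(heights)[-heights.size // 3:]
    hs = top_third.mean()

    # Oceanographic rogue-wave criterion: individual height greater than 2 * Hs.
    rogues = heights[heights > 2.0 * hs]

    print(f"Hs = {hs:.2f} m, rogue threshold = {2 * hs:.2f} m, rogue waves found: {rogues}")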

Among other causes, studies of nonlinear waves such as the Peregrine soliton, and waves modeled by the nonlinear Schrödinger equation (NLS), suggest that modulational instability can create an unusual sea state where a "normal" wave begins to draw energy from other nearby waves, and briefly becomes very large. Such phenomena are not limited to water and are also studied in liquid helium, nonlinear optics, and microwave cavities. A 2012 study reported that in addition to the Peregrine soliton reaching up to about three times the height of the surrounding sea, a hierarchy of higher order wave solutions could also exist having progressively larger sizes and demonstrated the creation of a "super rogue wave" (a breather around five times higher than surrounding waves) in a water-wave tank.
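For reference, in one common non-dimensionalized form (a sketch of the standard textbook normalization, not necessarily the conventions used in the studies cited above), the focusing nonlinear Schrödinger equation and the Peregrine soliton that solves it can be written as:

    i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\frac{\partial^{2}\psi}{\partial x^{2}} + |\psi|^{2}\psi = 0,
    \qquad
    \psi_{\mathrm{P}}(x,t) = \left[\,1 - \frac{4\,(1 + 2it)}{1 + 4x^{2} + 4t^{2}}\,\right] e^{it}.

At the focusing point x = 0, t = 0 the envelope reaches three times the background amplitude, which is the factor of about three mentioned above; the higher-order solutions referred to in the 2012 study reach correspondingly larger amplification factors.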

A 2012 study supported the existence of oceanic rogue holes, the inverse of rogue waves, where the depth of the hole can reach more than twice the significant wave height. Although it is often claimed that rogue holes have never been observed in nature despite replication in wave-tank experiments, a rogue hole was recorded at an oil platform in the North Sea, as reported by Kharif et al. The same source also includes a recording of the phenomenon known as the 'Three Sisters'.

Background

Although commonly described as a tsunami, the titular wave in The Great Wave off Kanagawa by Hokusai is more likely an example of a large rogue wave.

Rogue waves are waves in open water that are much larger than surrounding waves. More precisely, rogue waves have a height which is more than twice the significant wave height (Hs or SWH). They can be caused when currents or winds cause waves to travel at different speeds, and the waves merge to create a single large wave; or when nonlinear effects cause energy to move between waves to create a single extremely large wave.

Once considered mythical and lacking hard evidence, rogue waves are now proven to exist and are known to be natural ocean phenomena. Eyewitness accounts from mariners and damage inflicted on ships have long suggested they occur. Still, the first scientific evidence of their existence came with the recording of a rogue wave by the Gorm platform in the central North Sea in 1984. A stand-out wave was detected with a wave height of 11 m (36 ft) in a relatively low sea state. However, what caught the attention of the scientific community was the digital measurement of a rogue wave at the Draupner platform in the North Sea on January 1, 1995; called the "Draupner wave", it had a recorded maximum wave height of 25.6 m (84 ft) and peak elevation of 18.5 m (61 ft). During that event, minor damage was inflicted on the platform far above sea level, confirming the accuracy of the wave-height reading made by a downwards pointing laser sensor.

The existence of rogue waves has since been confirmed by video and photographs, satellite imagery, radar of the ocean surface, stereo wave imaging systems, pressure transducers on the sea-floor, and oceanographic research vessels. In February 2000, a British oceanographic research vessel, the RRS Discovery, sailing in the Rockall Trough west of Scotland, encountered the largest waves ever recorded by any scientific instruments in the open ocean, with a SWH of 18.5 metres (61 ft) and individual waves up to 29.1 metres (95 ft).[12] In 2004, scientists using three weeks of radar images from European Space Agency satellites found ten rogue waves, each 25 metres (82 ft) or higher.

A rogue wave is a natural ocean phenomenon that is not caused by land movement, only lasts briefly, occurs in a limited location, and most often happens far out at sea. Rogue waves are considered rare, but potentially very dangerous, since they can involve the spontaneous formation of massive waves far beyond the usual expectations of ship designers, and can overwhelm the usual capabilities of ocean-going vessels which are not designed for such encounters. Rogue waves are, therefore, distinct from tsunamis. Tsunamis are caused by a massive displacement of water, often resulting from sudden movements of the ocean floor, after which they propagate at high speed over a wide area. They are nearly unnoticeable in deep water and only become dangerous as they approach the shoreline and the ocean floor becomes shallower; therefore, tsunamis do not present a threat to shipping at sea (e.g., the only ships lost in the 2004 Asian tsunami were in port). Rogue waves are also different from the wave known as a "hundred-year wave", which is a purely statistical description of a particularly high wave with a 1% chance of occurring in any given year in a particular body of water.

Rogue waves have now been proven to cause the sudden loss of some ocean-going vessels. Well-documented instances include the freighter MS München, lost in 1978. Rogue waves have been implicated in the loss of other vessels, including the Ocean Ranger, a semisubmersible mobile offshore drilling unit that sank in Canadian waters on 15 February 1982. In 2007, the United States' National Oceanic and Atmospheric Administration (NOAA) compiled a catalogue of more than 50 historical incidents probably associated with rogue waves.

History of rogue wave knowledge

Early reports

In 1826, French scientist and naval officer Jules Dumont d'Urville reported waves as high as 33 m (108 ft) in the Indian Ocean with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago. In that era, the thought was widely held that no wave could exceed 9 m (30 ft). Author Susan Casey wrote that much of that disbelief came because there were very few people who had seen a rogue wave and survived; until the advent of steel double-hulled ships of the 20th century, "people who encountered 100-foot [30 m] rogue waves generally weren't coming back to tell people about it."

Pre-1995 research

Unusual waves have been studied scientifically for many years (for example, John Scott Russell's wave of translation, an 1834 study of a soliton wave). Still, these were not linked conceptually to sailors' stories of encounters with giant rogue ocean waves, as the latter were believed to be scientifically implausible.

Since the 19th century, oceanographers, meteorologists, engineers, and ship designers have used a statistical model known as the Gaussian function (or Gaussian Sea or standard linear model) to predict wave height, on the assumption that wave heights in any given sea are tightly grouped around a central value equal to the average of the largest third, known as the significant wave height (SWH). In a storm sea with an SWH of 12 m (39 ft), the model suggests that a wave higher than 15 m (49 ft) would hardly ever occur. It suggests one of 30 m (98 ft) could indeed happen, but only once in 10,000 years. This basic assumption was well accepted, though acknowledged to be an approximation. Using a Gaussian form to model waves has been the sole basis of virtually every text on that topic for the past 100 years.
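Under this idealized narrow-band linear (Gaussian sea) model, individual wave heights follow a Rayleigh distribution, and the textbook exceedance probability is P(H > h) = exp(-2 h^2 / Hs^2). The short Python sketch below evaluates this for the rogue-wave threshold of twice the significant wave height; note that this crude per-wave figure says nothing by itself about return periods, which depend additionally on wave period, sea-state duration, and spectral bandwidth, which is why quoted return-period figures such as the one above involve further assumptions.

    import math

    def rayleigh_exceedance(h, hs):
        """Probability that an individual wave height exceeds h, given significant
        wave height hs, under the narrow-band linear (Rayleigh) model."""
        return math.exp(-2.0 * (h / hs) ** 2)

    hs = 12.0  # significant wave height of the storm sea discussed above, in metres
    p = rayleigh_exceedance(2 * hs, hs)
    print(f"P(H > 2*Hs) = exp(-8) ≈ {p:.2e}, i.e. about one wave in {1 / p:,.0f}")

How far real-ocean statistics deviate from this simple relation is part of the debate discussed under Causes below.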

The first known scientific article on "freak waves" was written by Professor Laurence Draper in 1964. In that paper, he documented the efforts of the National Institute of Oceanography in the early 1960s to record wave height, and the highest wave recorded at that time, which was about 20 metres (67 ft). Draper also described freak wave holes.

Research on cross-swell waves and their contribution to rogue wave studies

Before the Draupner wave was recorded in 1995, early research had already made significant strides in understanding extreme wave interactions. In 1979, Dik Ludikhuize and Henk Jan Verhagen at TU Delft successfully generated cross-swell waves in a wave basin. Although only monochromatic waves could be produced at the time, their findings, reported in 1981, showed that individual wave heights could be added together even when exceeding breaker criteria. This phenomenon provided early evidence that waves could grow significantly larger than anticipated by conventional theories of wave breaking.

This work highlighted that in cases of crossing waves, wave steepness could increase beyond usual limits. Although the waves studied were not as extreme as rogue waves, the research provided an understanding of how multidirectional wave interactions could lead to extreme wave heights - a key concept in the formation of rogue waves. The crossing wave phenomenon studied in the Delft Laboratory therefore had direct relevance to the unpredictable rogue waves encountered at sea.

Research published in 2024 by TU Delft and other institutions has subsequently demonstrated that waves coming from multiple directions can grow up to four times steeper than previously imagined.

The 1995 Draupner wave

Measured amplitude graph showing the Draupner wave (spike in the middle)

The Draupner wave (or New Year's wave) was the first rogue wave to be detected by a measuring instrument. The wave was recorded in 1995 at Unit E of the Draupner platform, a gas pipeline support complex located in the North Sea about 160 km (100 miles) southwest from the southern tip of Norway.

The rig was built to withstand a calculated 1-in-10,000-years wave with a predicted height of 20 m (64 ft) and was fitted with state-of-the-art sensors, including a laser rangefinder wave recorder on the platform's underside. At 3 pm on 1 January 1995, the device recorded a rogue wave with a maximum wave height of 25.6 m (84 ft). Peak elevation above still water level was 18.5 m (61 ft). The reading was confirmed by the other sensors. The platform sustained minor damage in the event.

In the area, the SWH at the time was about 12 m (39 ft), so the Draupner wave was more than twice as tall and steep as its neighbors, with characteristics that fell outside any known wave model. The wave caused enormous interest in the scientific community.

Subsequent research

Following the evidence of the Draupner wave, research in the area became widespread.

The first scientific study to comprehensively prove that freak waves exist, which are clearly outside the range of Gaussian waves, was published in 1997. Some research confirms that the observed wave-height distribution generally follows the Rayleigh distribution well, but in shallow waters during high-energy events, extremely high waves are rarer than this particular model predicts. From about 1997, most leading authors acknowledged the existence of rogue waves, with the caveat that wave models could not replicate them.

Statoil researchers presented a paper in 2000, collating evidence that freak waves were not rare realizations of a typical or slightly non-Gaussian sea-surface population (classical extreme waves) but were typical realizations of a rare and strongly non-Gaussian sea-surface population of waves (freak extreme waves). Leading researchers from around the world attended the first Rogue Waves workshop, Rogue Waves 2000, held in Brest in November 2000.

In 2000, British oceanographic vessel RRS Discovery recorded a 29 m (95 ft) wave off the coast of Scotland near Rockall. This was a scientific research vessel fitted with high-quality instruments. Subsequent analysis determined that under severe gale-force conditions with wind speeds averaging 21 metres per second (41 kn), a ship-borne wave recorder measured individual waves up to 29.1 m (95.5 ft) from crest to trough, and a maximum SWH of 18.5 m (60.7 ft). These were some of the largest waves recorded by scientific instruments up to that time. The authors noted that modern wave prediction models are known to significantly under-predict extreme sea states for waves with a significant height (Hs) above 12 m (39.4 ft). The analysis of this event took a number of years and noted that "none of the state-of-the-art weather forecasts and wave models— the information upon which all ships, oil rigs, fisheries, and passenger boats rely— had predicted these behemoths." In simple terms, a scientific model (and also ship design method) to describe the waves encountered did not exist. This finding was widely reported in the press, which reported that "according to all of the theoretical models at the time under this particular set of weather conditions, waves of this size should not have existed".

In 2004, the ESA MaxWave project identified more than 10 individual giant waves above 25 m (82 ft) in height during a short survey period of three weeks in a limited area of the South Atlantic. By 2007, it was further proven via satellite radar studies that waves with crest-to-trough heights of 20 to 30 m (66 to 98 ft) occur far more frequently than previously thought. Rogue waves are now known to occur in all of the world's oceans many times each day.

Rogue waves are now accepted as a common phenomenon. Professor Akhmediev of the Australian National University has stated that 10 rogue waves exist in the world's oceans at any moment. Some researchers have speculated that roughly three of every 10,000 waves on the oceans achieve rogue status, yet in certain spots— such as coastal inlets and river mouths— these extreme waves can make up three of every 1,000 waves, because wave energy can be focused.

Rogue waves may also occur in lakes. A phenomenon known as the "Three Sisters" is said to occur in Lake Superior when a series of three large waves forms. The second wave hits the ship's deck before the first wave clears. The third incoming wave adds to the two accumulated backwashes and suddenly overloads the ship deck with large amounts of water. The phenomenon is one of various theorized causes of the sinking of the SS Edmund Fitzgerald on Lake Superior in November 1975.

A 2012 study reported that in addition to the Peregrine soliton reaching up to about 3 times the height of the surrounding sea, a hierarchy of higher order wave solutions could also exist having progressively larger sizes, and demonstrated the creation of a "super rogue wave"— a breather around 5 times higher than surrounding waves— in a water tank. Also in 2012, researchers at the Australian National University proved the existence of "rogue wave holes", an inverted profile of a rogue wave. Their research created rogue wave holes on the water surface in a water-wave tank. In maritime folklore, stories of rogue holes are as common as stories of rogue waves. Rogue wave holes had followed from theoretical analysis but had never before been proven experimentally.

"Rogue wave" has become a near-universal term used by scientists to describe isolated, large-amplitude waves that occur more frequently than expected for normal, Gaussian-distributed, statistical events. Rogue waves appear ubiquitous and are not limited to the oceans. They appear in other contexts and have recently been reported in liquid helium, nonlinear optics, and microwave cavities. Marine researchers universally now accept that these waves belong to a specific kind of sea wave, not considered by conventional models for sea wind waves. A 2015 paper studied the wave behavior around a rogue wave, including optical and the Draupner wave, and concluded, "rogue events do not necessarily appear without warning but are often preceded by a short phase of relative order".

In 2019, researchers succeeded in producing a wave with similar characteristics to the Draupner wave (steepness and breaking), and proportionately greater height, using multiple wavetrains meeting at an angle of 120°. Previous research had strongly suggested that the wave resulted from an interaction between waves from different directions ("crossing seas"). Their research also highlighted that wave-breaking behavior was not necessarily as expected. If waves met at an angle less than about 60°, then the top of the wave "broke" sideways and downwards (a "plunging breaker"). Still, from about 60° and greater, the wave began to break vertically upwards, creating a peak that did not reduce the wave height as usual but instead increased it (a "vertical jet"). They also showed that the steepness of rogue waves could be reproduced in this manner. Lastly, they observed that optical instruments such as the laser used for the Draupner wave might be somewhat confused by the spray at the top of the wave if it broke, and this could lead to uncertainties of around 1.0 to 1.5 m (3 to 5 ft) in the wave height. They concluded, "... the onset and type of wave breaking play a significant role and differ significantly for crossing and noncrossing waves. Crucially, breaking becomes less crest-amplitude limiting for sufficiently large crossing angles and involves the formation of near-vertical jets".

Images from the 2019 simulation of the Draupner wave show how the steepness of the wave forms, and how the crest of a rogue wave breaks when waves cross at different angles.
  • In the first row (0°), the crest breaks horizontally and plunges, limiting the wave size.
  • In the middle row (60°), somewhat upward-lifted breaking behavior occurs.
  • In the third row (120°), described as the most accurate simulation achieved of the Draupner wave, the wave breaks upward, as a vertical jet, and the wave crest height is not limited by breaking.

Extreme rogue wave events

On 17 November 2020, a buoy moored in 45 metres (148 ft) of water on Amphitrite Bank in the Pacific Ocean 7 kilometres (4.3 mi; 3.8 nmi) off Ucluelet, Vancouver Island, British Columbia, Canada, at 48.9°N 125.6°W recorded a lone 17.6-metre (58 ft) tall wave among surrounding waves about 6 metres (20 ft) in height. The wave exceeded the surrounding significant wave height by a factor of 2.93. When the wave's detection was revealed to the public in February 2022, one scientific paper and many news outlets described the event as "the most extreme rogue wave event ever recorded" and a "once-in-a-millennium" event. They claimed that, at about three times the height of the waves around it, the Ucluelet wave set a record as the most extreme rogue wave recorded at the time in terms of its height in proportion to surrounding waves, and that a wave three times the height of those around it was estimated to occur on average only once every 1,300 years worldwide.

The Ucluelet event generated controversy. Analysis of scientific papers dealing with rogue wave events since 2005 revealed the claims for the record-setting nature and rarity of the wave to be incorrect. The paper Oceanic rogue waves by Dysthe, Krogstad and Muller reports on an event in the Black Sea in 2004 which was far more extreme than the Ucluelet wave: a Datawell Waverider buoy recorded a wave 10.32 metres (33.86 ft) high, 3.91 times the significant wave height. Thorough inspection of the buoy after the recording revealed no malfunction. The authors of the paper that reported the Black Sea event assessed the wave as "anomalous" and suggested several theories on how such an extreme wave may have arisen. The Black Sea event also differs from the Ucluelet wave in that it was recorded with a high-precision instrument. The Oceanic rogue waves paper also reports even more extreme waves from a different source, but these were possibly overestimated, as assessed by the data's own authors. The Black Sea wave occurred in relatively calm weather.

Furthermore, a paper by I. Nikolkina and I. Didenkulova also reveals waves more extreme than the Ucluelet wave. In the paper, they infer that in 2006 a 21-metre (69 ft) wave appeared in the Pacific Ocean off the Port of Coos Bay, Oregon, with a significant wave height of 3.9 metres (13 ft). The ratio is 5.38, almost twice that of the Ucluelet wave. The paper also reveals the MV Pont-Aven incident as marginally more extreme than the Ucluelet event. The paper also assesses a report of an 11-metre (36 ft) wave in a significant wave height of 1.9 metres (6 ft 3 in), but the authors cast doubt on that claim. A paper written by Craig B. Smith in 2007 reported on an incident in the North Atlantic, in which the submarine 'Grouper' was hit by a 30-meter wave in calm seas.
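The comparisons in this section all reduce to the same ratio, sometimes called the abnormality index: the maximum recorded wave height divided by the significant wave height at the time. The small Python sketch below simply reproduces the ratios quoted above from the figures given in the text; nothing beyond the division is involved.

    # Abnormality index: maximum wave height divided by significant wave height (Hs).
    events = {
        "Draupner 1995 (North Sea)":       (25.6, 12.0),
        "Ucluelet 2020 (Amphitrite Bank)": (17.6, 6.0),
        "Coos Bay 2006 (Pacific Ocean)":   (21.0, 3.9),
    }

    for name, (h_max, hs) in events.items():
        print(f"{name}: {h_max:5.1f} m / {hs:4.1f} m = {h_max / hs:.2f} x Hs")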

Causes

Because the phenomenon of rogue waves is still a matter of active research, clearly stating what the most common causes are or whether they vary from place to place is premature. The areas of highest predictable risk appear to be where a strong current runs counter to the primary direction of travel of the waves; the area near Cape Agulhas off the southern tip of Africa is one such area. The warm Agulhas Current runs to the southwest, while the dominant winds are westerlies, but since this thesis does not explain the existence of all waves that have been detected, several different mechanisms are likely, with localized variation. Suggested mechanisms for freak waves include:

Diffractive focusing
According to this hypothesis, coast shape or seabed shape directs several small waves to meet in phase. Their crest heights combine to create a freak wave.
Focusing by currents
Waves from one current are driven into an opposing current. This results in shortening of wavelength, causing shoaling (i.e., increase in wave height), and oncoming wave trains to compress together into a rogue wave. This happens off the South African coast, where the Agulhas Current is countered by westerlies.
Nonlinear effects (modulational instability)
A rogue wave may occur by natural, nonlinear processes from a random background of smaller waves. In such a case, it is hypothesized, an unusual, unstable wave type may form which "sucks" energy from other waves, growing to a near-vertical monster itself, before becoming too unstable and collapsing shortly thereafter. One simple model for this is a wave equation known as the nonlinear Schrödinger equation (NLS), in which a normal and perfectly accountable (by the standard linear model) wave begins to "soak" energy from the waves immediately fore and aft, reducing them to minor ripples compared to other waves; a minimal numerical sketch of this instability is given after this list. The NLS can be used in deep-water conditions. In shallow water, waves are instead described by the Korteweg–de Vries equation or the Boussinesq equation. These equations also have nonlinear contributions and show solitary-wave solutions. The terms soliton (a type of self-reinforcing wave) and breather (a wave in which energy concentrates in a localized and oscillatory fashion) are used for some of these waves, including the well-studied Peregrine soliton. Studies show that such nonlinear effects could arise in real bodies of water; a small-scale rogue wave consistent with the NLS equation (the Peregrine soliton) was produced in a laboratory water-wave tank in 2011.
Normal part of the wave spectrum
Some studies argue that many waves classified as rogue waves (with the sole condition that they exceed twice the SWH) are not freaks but just rare, random samples of the wave height distribution, and are, as such, statistically expected to occur at a rate of about one rogue wave every 28 hours. This is commonly discussed as the question "Freak Waves: Rare Realizations of a Typical Population Or Typical Realizations of a Rare Population?" According to this hypothesis, most real-world encounters with huge waves can be explained by linear wave theory (or weakly nonlinear modifications thereof), without the need for special mechanisms like the modulational instability. Recent studies analyzing billions of wave measurements by wave buoys demonstrate that rogue wave occurrence rates in the ocean can be explained with linear theory when the finite spectral bandwidth of the wave spectrum is taken into account. However, whether weakly nonlinear dynamics can explain even the largest rogue waves (such as those exceeding three times the significant wave height, which would be exceedingly rare in linear theory) is not yet known. This has also led to criticism questioning whether defining rogue waves using only their relative height is meaningful in practice.
Constructive interference of elementary waves
Rogue waves can result from the constructive interference (dispersive and directional focusing) of elementary three-dimensional waves enhanced by nonlinear effects.
Wind wave interactions
While wind alone is unlikely to generate a rogue wave, its effect combined with other mechanisms may provide a fuller explanation of freak wave phenomena. As the wind blows over the ocean, energy is transferred to the sea surface. When strong winds from a storm blow in the ocean current's opposing direction, the forces might be strong enough to generate rogue waves randomly. Theories of instability mechanisms for the generation and growth of wind waves – although not on the causes of rogue waves – are provided by Phillips and Miles.
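To make the modulational-instability mechanism described in the list above concrete, the sketch below integrates the one-dimensional focusing nonlinear Schrödinger equation with a standard split-step Fourier scheme, starting from a nearly uniform wave train carrying a one-percent periodic modulation. This is a minimal toy model in normalized units; the grid, time step, and perturbation are arbitrary illustrative choices, not parameters taken from any of the studies cited.

    import numpy as np

    # Split-step Fourier integration of the normalized focusing NLS
    #   i*psi_t + 0.5*psi_xx + |psi|^2 * psi = 0
    # starting from a uniform wave train with a 1% periodic modulation.
    N, L = 256, 2 * np.pi
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers

    psi = (1.0 + 0.01 * np.cos(x)).astype(complex)
    dt, steps = 0.005, 2000                      # integrate to t = 10
    peak = 0.0

    for _ in range(steps):
        # Half step of the linear (dispersive) part, done exactly in Fourier space.
        psi = np.fft.ifft(np.exp(-0.25j * k**2 * dt) * np.fft.fft(psi))
        # Full step of the nonlinear part (|psi|^2 is constant during this step).
        psi *= np.exp(1j * np.abs(psi) ** 2 * dt)
        # Second half step of the linear part.
        psi = np.fft.ifft(np.exp(-0.25j * k**2 * dt) * np.fft.fft(psi))
        peak = max(peak, np.abs(psi).max())

    # The tiny modulation grows and briefly focuses into a peak roughly
    # 2.5-3 times the background amplitude before the energy spreads out again.
    print(f"background amplitude = 1.0, largest envelope amplitude reached = {peak:.2f}")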

The spatiotemporal focusing seen in the NLS equation can also occur when the non-linearity is removed. In this case, focusing is primarily due to different waves coming into phase rather than any energy-transfer processes. Further analysis of rogue waves using a fully nonlinear model by R. H. Gibbs (2005) brings this mode into question, as it is shown that a typical wave group focuses in such a way as to produce a significant wall of water at the cost of a reduced height.

A rogue wave, and the deep trough commonly seen before and after it, may last only for some minutes before either breaking or reducing in size again. A rogue wave may also appear not as a single wave but as part of a wave packet consisting of a few rogue waves. Such rogue wave groups have been observed in nature.

Research efforts

A number of research programmes focused on rogue waves are currently underway or have concluded, including:

  • In the course of Project MaxWave, researchers from the GKSS Research Centre, using data collected by ESA satellites, identified a large number of radar signatures that have been portrayed as evidence for rogue waves. Further research is underway to develop better methods of translating the radar echoes into sea surface elevation, but at present this technique is not proven.
  • The Australian National University, working in collaboration with Hamburg University of Technology and the University of Turin, has been conducting experiments in nonlinear dynamics to try to explain rogue or killer waves. The "Lego Pirate" video has been widely used and quoted to describe what they call "super rogue waves", which their research suggests can be up to five times bigger than the other waves around them.
  • The European Space Agency continues to do research into rogue waves by radar satellite.
  • The United States Naval Research Laboratory, the science arm of the Navy and Marine Corps, published the results of its modelling work in 2015.
  • Massachusetts Institute of Technology (MIT)'s research in this field is ongoing. Two researchers there, partially supported by the Naval Engineering Education Consortium (NEEC), considered the problem of short-term prediction of rare, extreme water waves and developed and published a predictive tool with a forecast horizon of about 25 wave periods. This tool can give ships and their crews a two- to three-minute warning of a potentially catastrophic impact, allowing the crew some time to shut down essential operations on a ship (or offshore platform). The authors cite landing on an aircraft carrier as a prime example.
  • The University of Colorado and the University of Stellenbosch
  • Kyoto University
  • Swinburne University of Technology in Australia recently published work on the probabilities of rogue waves.
  • The University of Oxford Department of Engineering Science published a comprehensive review of the science of rogue waves in 2014. In 2019, a team from the Universities of Oxford and Edinburgh recreated the Draupner wave in a lab.
  • University of Western Australia
  • Tallinn University of Technology in Estonia
  • Extreme Seas Project funded by the EU.
  • At Umeå University in Sweden, a research group in August 2006 showed that normal stochastic wind-driven waves can suddenly give rise to monster waves. The nonlinear evolution of the instabilities was investigated by means of direct simulations of the time-dependent system of nonlinear equations.
  • The Great Lakes Environmental Research Laboratory did research in 2002, which dispelled the long-held contentions that rogue waves were of rare occurrence.
  • The University of Oslo has conducted research into crossing sea state and rogue wave probability during the Prestige accident; nonlinear wind-waves, their modification by tidal currents, and application to Norwegian coastal waters; general analysis of realistic ocean waves; modelling of currents and waves for sea structures and extreme wave events; rapid computations of steep surface waves in three dimensions, and comparison with experiments; and very large internal waves in the ocean.
  • The National Oceanography Centre in the United Kingdom
  • Scripps Institute of Oceanography in the United States
  • Ritmare project in Italy.
  • University of Copenhagen and University of Victoria

Other media

Researchers at UCLA observed rogue-wave phenomena in microstructured optical fibers near the threshold of soliton supercontinuum generation and characterized the initial conditions for generating rogue waves in any medium. Research in optics has pointed out the role played by a Peregrine soliton that may explain those waves that appear and disappear without leaving a trace.

Rogue waves in other media appear to be ubiquitous and have also been reported in liquid helium, in quantum mechanics, in nonlinear optics, in microwave cavities, in Bose–Einstein condensate, in heat and diffusion, and in finance.

Reported encounters

Many of these encounters are reported only in the media, and are not examples of open-ocean rogue waves. Often, in popular culture, any endangering huge wave is loosely denoted a "rogue wave", even when it has not been established that the reported event is a rogue wave in the scientific sense – i.e., a wave of a very different nature from the surrounding waves in that sea state, and with a very low probability of occurrence.

This section lists a limited selection of notable incidents.

19th century

  • Eagle Island lighthouse (1861) – Water broke the glass of the structure's east tower and flooded it, implying a wave that surmounted the 40 m (130 ft) cliff and overwhelmed the 26 m (85 ft) tower.
  • Flannan Isles Lighthouse (1900) – Three lighthouse keepers vanished after a storm that resulted in wave-damaged equipment being found 34 m (112 ft) above sea level.

20th century

  • SS Kronprinz Wilhelm, September 18, 1901 – The most modern German ocean liner of its time (winner of the Blue Riband) was damaged on its maiden voyage from Cherbourg to New York by a huge wave. The wave struck the ship head-on.
  • RMS Lusitania (1910) – On the night of 10 January 1910, a 23 m (75 ft) wave struck the ship over the bow, damaging the forecastle deck and smashing the bridge windows.
  • Voyage of the James Caird (1916) – Sir Ernest Shackleton encountered a wave he termed "gigantic" while piloting a lifeboat from Elephant Island to South Georgia.
  • USS Memphis, August 29, 1916 – An armored cruiser, formerly known as the USS Tennessee, wrecked while stationed in the harbor of Santo Domingo, with 43 men killed or lost, by a succession of three waves, the largest estimated at 70 feet (21 m).
  • RMS Homeric (1924) – Hit by a 24 m (80 ft) wave while sailing through a hurricane off the East Coast of the United States, injuring seven people, smashing numerous windows and portholes, carrying away one of the lifeboats, and snapping chairs and other fittings from their fastenings.
  • USS Ramapo (1933) – Triangulated at 34 m (112 ft).
  • RMS Queen Mary (1942) – Broadsided by a 28 m (92 ft) wave and listed briefly about 52° before slowly righting.
  • SS Michelangelo (1966) – A hole was torn in the superstructure, heavy glass was smashed by the wave 24 m (80 ft) above the waterline, and three people were killed.
  • SS Edmund Fitzgerald (1975) – Lost on Lake Superior. A Coast Guard report blamed water entry through the hatches, which gradually filled the hold, or errors in navigation or charting causing damage from running onto shoals. However, another nearby ship, the SS Arthur M. Anderson, was hit at a similar time by two rogue waves and possibly a third, and this appeared to coincide with the sinking about 10 minutes later.
  • MS München (1978) – Lost at sea, leaving only scattered wreckage and signs of sudden damage including extreme forces 20 m (66 ft) above the water line. Although more than one wave was probably involved, this remains the most likely sinking due to a freak wave.
  • Esso Languedoc (1980) – A 25-to-30 m (80-to-100 ft) wave washed across the deck from the stern of the French supertanker near Durban, South Africa.
  • Fastnet Lighthouse – Struck by a 48-metre (157 ft) wave in 1985
  • Draupner wave (North Sea, 1995) – The first rogue wave confirmed with scientific evidence, it had a maximum height of 26 metres (85 ft)
  • Queen Elizabeth 2 (1995) – Encountered a 29 m (95 ft) wave in the North Atlantic, during Hurricane Luis. The master said it "came out of the darkness" and "looked like the White Cliffs of Dover." Newspaper reports at the time described the cruise liner as attempting to "surf" the near-vertical wave in order not to be sunk.

21st century

  • U.S. Naval Research Laboratory ocean-floor pressure sensors detected a freak wave caused by Hurricane Ivan in the Gulf of Mexico, 2004. The wave was around 27.7 m (91 ft) high from peak to trough, and around 200 m (660 ft) long. Their computer models also indicated that waves may have exceeded 40 metres (130 ft) in the eyewall.
  • Aleutian Ballad (Bering Sea, 2005) – Footage of what is identified as an 18 m (60 ft) wave appears in an episode of Deadliest Catch. The wave strikes the ship at night and cripples the vessel, causing the boat to tip onto its side for a short period. This is one of the few video recordings of what might be a rogue wave.
  • In 2006, researchers from the U.S. Naval Institute theorized that rogue waves may be responsible for the unexplained loss of low-flying aircraft, such as U.S. Coast Guard helicopters on search-and-rescue missions.
  • MS Louis Majesty (Mediterranean Sea, March 2010) was struck by three successive 8 m (26 ft) waves while crossing the Gulf of Lion on a Mediterranean cruise between Cartagena and Marseille. Two passengers were killed by flying glass when the second and third waves shattered a lounge window. The waves, which struck without warning, were all abnormally high in respect to the sea swell at the time of the incident.
  • On 28 December 2011, the Sea Shepherd vessel MV Brigitte Bardot was damaged by an 11 m (36.1 ft) rogue wave while pursuing the Japanese whaling fleet off the western coast of Australia. The MV Brigitte Bardot was escorted back to Fremantle by the SSCS flagship, MV Steve Irwin. The main hull was cracked, and the port-side pontoon was held together by straps. The vessel arrived at Fremantle Harbor on 5 January 2012. Both ships were followed by the ICR security vessel MV Shōnan Maru 2 at a distance of 5 nautical miles (9 km).
  • In 2019, Hurricane Dorian's extratropical remnant generated a 30 m (100 ft) rogue wave off the coast of Newfoundland.
  • In 2022, the Viking cruise ship Viking Polaris was hit by a rogue wave on its way to Ushuaia, Argentina. One person died, four more were injured, and the ship's scheduled route to Antarctica was canceled.

Quantifying the impact of rogue waves on ships

The loss of the MS München in 1978 provided some of the first physical evidence of the existence of rogue waves. München was a state-of-the-art cargo ship with multiple water-tight compartments and an expert crew. She was lost with all crew, and the wreck has never been found. The only evidence found was the starboard lifeboat, recovered from floating wreckage some time later. The lifeboats hung from forward and aft blocks 20 m (66 ft) above the waterline. The pins had been bent back from forward to aft, indicating that the lifeboat hanging from them had been struck by a wave that had run from fore to aft of the ship and torn the lifeboat away. To exert such force, the wave must have been considerably higher than 20 m (66 ft). At the time of the inquiry, the existence of rogue waves was considered so statistically unlikely as to be near impossible. Consequently, the Maritime Court investigation concluded that the severe weather had somehow created an "unusual event" that had led to the sinking of the München.

In 1980, the MV Derbyshire was lost during Typhoon Orchid south of Japan, along with all of her crew. The Derbyshire was an ore-bulk-oil combination carrier built in 1976. At 91,655 gross register tons, she was, and remains, the largest British ship ever lost at sea. The wreck was found in June 1994. The survey team deployed a remotely operated vehicle to photograph the wreck. A private report published in 1998 prompted the British government to reopen a formal investigation into the sinking. The investigation included a comprehensive survey by the Woods Hole Oceanographic Institution, which took 135,774 pictures of the wreck during two surveys. The formal forensic investigation concluded that the ship sank because of structural failure and absolved the crew of any responsibility. Most notably, the report determined the detailed sequence of events that led to the structural failure of the vessel. A third comprehensive analysis was subsequently done by Douglas Faulkner, professor of marine architecture and ocean engineering at the University of Glasgow. His 2001 report linked the loss of the Derbyshire with the emerging science on freak waves, concluding that the Derbyshire was almost certainly destroyed by a rogue wave.

Work by sailor and author Craig B. Smith in 2007 confirmed prior forensic work by Faulkner in 1998 and determined that the Derbyshire was exposed to a hydrostatic pressure of a "static head" of water of about 20 m (66 ft) with a resultant static pressure of 201 kilopascals (2.01 bar; 29.2 psi). This is in effect 20 m (66 ft) of seawater (possibly a super rogue wave) flowing over the vessel. The deck cargo hatches on the Derbyshire were determined to be the key point of failure when the rogue wave washed over the ship. The design of the hatches only allowed for a static pressure less than 2 m (6.6 ft) of water or 17.1 kPa (0.171 bar; 2.48 psi), meaning that the typhoon load on the hatches was more than 10 times the design load. The forensic structural analysis of the wreck of the Derbyshire is now widely regarded as irrefutable.
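The "static head" figure above is straightforward to reproduce from the hydrostatic relation p = rho * g * h. The quick sketch below uses a typical seawater density of about 1,025 kg/m3 (a rounded assumed value) and recovers the pressure quoted above.

    # Hydrostatic pressure from a static head of seawater: p = rho * g * h
    rho = 1025.0   # seawater density, kg/m^3 (typical assumed value)
    g = 9.81       # gravitational acceleration, m/s^2
    h = 20.0       # static head of water over the deck, m

    p_pa = rho * g * h
    print(f"p = {p_pa / 1000:.0f} kPa = {p_pa / 1e5:.2f} bar = {p_pa / 6894.76:.1f} psi")
    # -> roughly 201 kPa (2.01 bar, 29.2 psi), matching the figure quoted above,
    #    compared with the hatch design load of about 17 kPa mentioned in the text.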

In addition, fast-moving waves are now known to also exert extremely high dynamic pressure. Plunging or breaking waves are known to cause short-lived impulse pressure spikes called Gifle peaks. These can reach pressures of 200 kPa (2.0 bar; 29 psi) (or more) for milliseconds, which is sufficient pressure to lead to brittle fracture of mild steel. Evidence of failure by this mechanism was also found on the Derbyshire. Smith documented scenarios where hydrodynamic pressure up to 5,650 kPa (56.5 bar; 819 psi) or over 500 metric tonnes/m2 could occur.

In 2004, an extreme wave was recorded impacting the Alderney Breakwater, Alderney, in the Channel Islands. This breakwater is exposed to the Atlantic Ocean. The peak pressure recorded by a shore-mounted transducer was 745 kPa (7.45 bar; 108.1 psi). This pressure far exceeds almost any design criteria for modern ships, and this wave would have destroyed almost any merchant vessel.

Design standards

In November 1997, the International Maritime Organization (IMO) adopted new rules covering survivability and structural requirements for bulk carriers of 150 m (490 ft) and upwards. The bulkhead and double bottom must be strong enough to allow the ship to survive flooding in hold one unless loading is restricted.

Rogue waves present considerable danger for several reasons: they are rare, unpredictable, may appear suddenly or without warning, and can impact with tremendous force. A 12 m (39 ft) wave in the usual "linear" model would have a breaking force of 6 metric tons per square metre [t/m2] (8.5 psi). Although modern ships are typically designed to tolerate a breaking wave of 15 t/m2, a rogue wave can dwarf both of these figures with a breaking force far exceeding 100 t/m2. Smith presented calculations using the International Association of Classification Societies (IACS) Common Structural Rules for a typical bulk carrier.
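The breaking-force figures above are quoted in metric tons per square metre; converting them to pressure units makes the comparison with the hatch-strength numbers of the previous section easier. The small sketch below just performs the unit conversion (1 t/m2 is the weight of 1,000 kg resting on one square metre, taking g as 9.81 m/s2).

    # Convert metric tons per square metre to kPa and psi: 1 t/m^2 = 1000 kg * g per m^2
    G = 9.81  # m/s^2

    def t_per_m2_to_kpa(tons):
        return tons * 1000.0 * G / 1000.0   # kPa

    for label, tons in [("12 m wave, linear model", 6),
                        ("typical modern design tolerance", 15),
                        ("possible rogue wave", 100)]:
        kpa = t_per_m2_to_kpa(tons)
        print(f"{label}: {tons} t/m^2 ≈ {kpa:.0f} kPa ≈ {kpa / 6.89476:.1f} psi")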

Peter Challenor, a scientist from the National Oceanography Centre in the United Kingdom, was quoted in Casey's book in 2010 as saying: "We don’t have that random messy theory for nonlinear waves. At all." He added, "People have been working actively on this for the past 50 years at least. We don’t even have the start of a theory."

In 2006, Smith proposed that the IACS recommendation 34 pertaining to standard wave data be modified so that the minimum design wave height be increased to 19.8 m (65 ft). He presented analysis showing that sufficient evidence exists to conclude that 20.1 m (66 ft) high waves can be experienced in the 25-year lifetime of oceangoing vessels, and that 29.9 m (98 ft) high waves are less likely, but not out of the question. Therefore, a design criterion based on 11.0 m (36 ft) high waves seems inadequate when the risk of losing crew and cargo is considered. Smith also proposed that the dynamic force of wave impacts should be included in the structural analysis. The Norwegian offshore standards now consider extreme severe wave conditions and require that a 10,000-year wave does not endanger the ships' integrity. W. Rosenthal noted that, as of 2005, rogue waves were not explicitly accounted for in classification societies' rules for ship design. As an example, DNV GL, one of the world's largest international certification bodies and classification societies, with main expertise in technical assessment, advisory, and risk management, publishes its Structure Design Load Principles, which remain largely based on the significant wave height and which, as of January 2016, still did not include any allowance for rogue waves.

The U.S. Navy historically took the design position that the largest wave likely to be encountered was 21.4 m (70 ft). Smith observed in 2007 that the navy now believes that larger waves can occur and the possibility of extreme waves that are steeper (i.e. do not have longer wavelengths) is now recognized. The navy has not had to make any fundamental changes in ship design due to new knowledge of waves greater than 21.4 m because the ships are built to higher standards than required.

The more than 50 classification societies worldwide each have their own rules. However, most new ships are built to the standards of the 12 members of the International Association of Classification Societies, which implemented two sets of common structural rules in 2006: one for oil tankers and one for bulk carriers. These were later harmonised into a single set of rules.

Complex system

From Wikipedia, the free encyclopedia

A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe.

Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of an independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and the links represent their interactions.

The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.

As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.

Key concepts

Gosper's Glider Gun creating "gliders" in the cellular automaton Conway's Game of Life

Adaptation

Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.

Features

Complex systems may have the following features:

Complex systems may be open
Complex systems are usually open systems — that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium; but despite this flux, there may be pattern stability (see synergetics).
Complex systems may exhibit critical transitions
Graphical representation of alternative stable states and the direction of critical slowing down prior to a critical transition (taken from Lever et al. 2020). Top panels (a) indicate stability landscapes at different conditions. Middle panels (b) indicate the rates of change akin to the slope of the stability landscapes, and bottom panels (c) indicate a recovery from a perturbation towards the system's future state (c.I) and in another direction (c.II).
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial and economic systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells – all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks, which have many local interactions and a smaller number of inter-area connections, are often employed (see the sketch after this list). Natural complex systems often exhibit such topologies. In the human cortex, for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, empirical food webs display regular, scale-invariant features across aquatic and terrestrial ecosystems when studied at the level of clustered 'trophic' species. Another example is offered by the termites in a mound which have physiology, biochemistry and biological development at one level of analysis, whereas their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to the cause. See nonlinearity.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.
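This is the sketch referenced under "Dynamic network of multiplicity" above: it shows what a small-world topology looks like in practice using the networkx library, with node count and rewiring probability chosen as arbitrary illustrative values. A Watts-Strogatz graph keeps the dense local clustering of a ring lattice while a few random rewirings create long-range shortcuts, analogous to the long axon projections mentioned above.

    import networkx as nx

    # Watts-Strogatz model: n nodes on a ring, each linked to its k nearest
    # neighbours; each edge is rewired to a random node with probability p.
    ring = nx.watts_strogatz_graph(n=200, k=6, p=0.0, seed=1)            # pure ring lattice
    small_world = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

    for name, g in [("ring lattice (p=0.0)", ring), ("small world (p=0.1)", small_world)]:
        print(f"{name}: average clustering = {nx.average_clustering(g):.2f}, "
              f"average shortest path = {nx.average_shortest_path_length(g):.1f}")

    # A few long-range rewirings barely reduce the local clustering but sharply
    # shorten the average path length, which is the "small-world" signature.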

History

In 1948, Dr. Warren Weaver published an essay on "Science and Complexity", exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described problems of organized complexity as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole."

While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.

Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research applying solutions that originated in the epistemology of physics has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches of economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, "econophysics", broadly defined as a cross-discipline that applies statistical physics methodologies, mostly based on complex systems theory and chaos theory, to economic analysis.

The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate.

Applications

Complexity in practice

The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.

Complexity of cities

Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr. Weaver's 1948 essay. As an example, she explains how an abundance of factors interplay to determine how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity, by replacing organized complexity with simple and predictable spaces such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities.

Complexity economics

Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann.

Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al. developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated its application to business time series. The index has been shown to detect hidden changes in time series. Further, Orlando et al., using an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e., regular) to turbulent (i.e., chaotic) phases, such as those in US GDP in 1949, 1953, and so on. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics.
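The recurrence-based indicators mentioned above all start from the same underlying object: a recurrence matrix marking which pairs of points in a time series are close to each other. The sketch below shows that basic construction only; the threshold and the toy two-phase signal are simplified illustrative choices, not the RQCI of Orlando et al.

    import numpy as np

    def recurrence_matrix(series, threshold):
        """Binary recurrence matrix: R[i, j] = 1 where |x_i - x_j| < threshold."""
        x = np.asarray(series, dtype=float)
        return (np.abs(x[:, None] - x[None, :]) < threshold).astype(int)

    # Toy signal: a regular (laminar) phase followed by an erratic one.
    rng = np.random.default_rng(0)
    t = np.arange(200)
    signal = np.where(t < 100, np.sin(0.2 * t), rng.normal(size=200))

    R = recurrence_matrix(signal, threshold=0.1)
    print(f"recurrence rate = {R.mean():.3f}")
    # RQA statistics such as determinism and laminarity are computed from the
    # diagonal and vertical line structures of matrices like R.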

Complexity and education

Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".

Complexity in healthcare research and practice

Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops. Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice.

Complexity and biology

Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses.

Complexity and chaos theory

Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy.
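The point about deterministic yet practically unpredictable behavior can be illustrated with the logistic map, a standard toy example not mentioned above: two trajectories started from nearly identical initial conditions eventually diverge completely, even though the update rule is exactly known.

```python
# Two trajectories of the logistic map x_{n+1} = r * x_n * (1 - x_n) started from
# nearly identical points: the rule is fully deterministic, yet the tiny initial
# difference grows until the trajectories are effectively uncorrelated.
r = 4.0
x, y = 0.400000, 0.400001          # initial conditions differing by 1e-6
for n in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 10 == 0:
        print(f"n={n:2d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
```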

The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".

A plot of the Lorenz attractor

When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, where it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions. For recent examples in economics and business, see Stoop et al., who discussed Android's market position; Orlando, who explained corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells; and Orlando et al., who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model.

Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.

Complexity and network science

A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies, airline networks, and biological networks.
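As a minimal sketch, assuming the networkx library is available, such a representation can be built and analyzed as follows; the node names are purely illustrative.

```python
import networkx as nx

# Nodes are components, links are interactions, e.g. computers and their
# direct connections in a small part of the Internet.
G = nx.Graph()
G.add_edges_from([
    ("router", "server"),
    ("router", "laptop"),
    ("router", "phone"),
    ("server", "backup"),
])

# Basic structural measures often used to characterize complex networks.
print(nx.degree_centrality(G))            # relative importance of each node
print(nx.average_shortest_path_length(G)) # typical distance between components
```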

Friday, November 15, 2024

Zero-sum game

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Zero-sum_game

Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other. In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero.

If the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a more significant piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally. Other examples of zero-sum games in daily life include games like poker, chess, sport and bridge where one person gains and another person loses, which results in a zero-net benefit for every player. In the markets and financial instruments, futures contracts and options are zero-sum games as well.

In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. A zero-sum game is also called a strictly competitive game, while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with the minimax theorem which is closely related to linear programming duality, or with Nash equilibrium. Prisoner's Dilemma is a classic non-zero-sum game.

Definition


Generic zero-sum game:

                Choice 1    Choice 2
    Choice 1    −A, A       B, −B
    Choice 2    C, −C       −D, D

Another example of the classic zero-sum game:

                Option 1    Option 2
    Option 1    2, −2       −2, 2
    Option 2    −2, 2       2, −2

The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal. Generally, any game where all strategies are Pareto optimal is called a conflict game.

Zero-sum games are a specific example of constant sum games where the sum of each outcome is always zero. Such games are distributive, not integrative; the pie cannot be enlarged by good negotiation.

In situations where one decision maker's gain (or loss) does not necessarily result in the other decision makers' loss (or gain), the situation is referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for its excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

The idea of Pareto optimal payoff in a zero-sum game gives rise to a generalized relative selfish rationality standard, the punishing-the-opponent standard, where both players always seek to minimize the opponent's payoff at a favourable cost to themselves rather than prefer more over less. The punishing-the-opponent standard can be used in both zero-sum games (e.g. warfare games, chess) and non-zero-sum games (e.g. pooling selection games). A player in the game simply wishes to maximise their own profit, which the opponent wishes to minimise.

Solution

For two-player finite zero-sum games, if the players are allowed to play a mixed strategy, the game always has an equilibrium solution. The different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. Note that this is not true for pure strategies.

Example

A zero-sum game (two-person); in each cell, the first number is Blue's payoff and the second is Red's payoff:

                          Blue
    Red          A            B            C
     1        −30, 30      10, −10      −20, 20
     2        10, −10      −20, 20      20, −20
A game's payoff matrix is a convenient representation. Consider, for example, the two-player zero-sum game shown above.

The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.

Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points.

In this example game, both players know the payoff matrix and attempt to maximize the number of their points. Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, and with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. If Blue anticipates Red's reasoning and choice of action 1, Blue may choose action B, so as to win 10 points. If Red, in turn, anticipates this trick and goes for action 2, this wins Red 20 points.

Émile Borel and John von Neumann had the fundamental insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem with the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, and Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will then win 20/7 points on average per game.
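The stated strategies and game value can be checked numerically. The sketch below simply evaluates the expected payoffs with NumPy, using Red's payoffs from the example matrix above.

```python
import numpy as np

# Red's payoffs: rows are Red's actions 1 and 2, columns are Blue's actions A, B, C.
R = np.array([[ 30, -10,  20],
              [-10,  20, -20]])

red  = np.array([4/7, 3/7])        # Red's mixed strategy from the text
blue = np.array([0, 4/7, 3/7])     # Blue's mixed strategy from the text

print(red @ R)          # Red's expected payoff against each Blue pure strategy
print(R @ blue)         # Red's expected payoff for each Red pure strategy vs Blue's mix
print(red @ R @ blue)   # value of the game: 20/7 ≈ 2.857
```

Against Blue's equilibrium mix, both of Red's actions yield exactly 20/7, and against Red's mix Blue cannot push Red below 20/7 with any of its three actions, which is what makes these strategies an equilibrium.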

Solving

The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element Mi,j is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (Raghavan 1994, p. 740) by solving the following linear program to find a vector u:

Minimize:

u_1 + u_2 + ... + u_n   (the sum of the elements of u)

Subject to the constraints:

u ≥ 0
M u ≥ 1.

The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each possible pure strategy.

If the game matrix does not have all positive elements, add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant, and will not affect the equilibrium mixed strategies.
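A minimal sketch of this procedure, assuming SciPy's linprog is available, applied to the example game above; here M is the transpose of Red's payoff matrix, so that rows correspond to the minimizing player (Blue) and columns to the maximizing player (Red), as in the formulation above.

```python
import numpy as np
from scipy.optimize import linprog

# M[i, j]: payoff to the maximizing player (Red) when the minimizing player (Blue)
# plays row i and Red plays column j.
M = np.array([[ 30, -10],
              [-10,  20],
              [ 20, -20]], dtype=float)

shift = 1 - M.min()          # make every element positive, as described above
Mp = M + shift

# Minimize sum(u) subject to Mp @ u >= 1, u >= 0  (linprog takes <= constraints).
n = Mp.shape[1]
res = linprog(c=np.ones(n), A_ub=-Mp, b_ub=-np.ones(Mp.shape[0]),
              bounds=[(0, None)] * n)

u = res.x
value = 1 / u.sum() - shift      # undo the shift to recover the original game value
strategy = u / u.sum()           # maximizing player's equilibrium mixed strategy

print(value)      # ≈ 2.857 = 20/7
print(strategy)   # ≈ [0.571, 0.429] = [4/7, 3/7]
```

The minimizing player's strategy could be obtained analogously from the dual program, as described in the next paragraph.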

The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Alternatively, it can be found by using the above procedure to solve a modified payoff matrix which is the transpose and negation of M (adding a constant so it is positive), then solving the resulting game.

If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations and thus such games are equivalent to linear programs, in general.

Universal solution

If avoiding a zero-sum game is an action choice available with some probability to the players, avoiding is always an equilibrium strategy for at least one player in a zero-sum game. For any two-player zero-sum game where a zero-zero draw is impossible or non-credible once play has started, such as poker, there is no Nash equilibrium strategy other than avoiding the play. Even if there is a credible zero-zero draw after a zero-sum game has started, it is not better than the avoiding strategy. In this sense, it is interesting to find that reward-as-you-go in optimal choice computation prevails over all two-player zero-sum games with respect to whether or not to start the game.

The most common or simple example from the subfield of social psychology is the concept of "social traps". In some cases pursuing individual personal interest can enhance the collective well-being of the group, but in other situations, all parties pursuing personal interest results in mutually destructive behaviour.

Copeland's review notes that an n-player non-zero-sum game can be converted into an (n + 1)-player zero-sum game, where the (n + 1)th player, denoted the fictitious player, receives the negative of the sum of the gains of the other n players (the global gain or loss).

Zero-sum three-person games

Zero-sum three-person game

There are manifold relationships between players in a zero-sum three-person game. In a zero-sum two-person game, anything one player wins is necessarily lost by the other and vice versa, so there is always an absolute antagonism of interests; something similar holds in the three-person game. A particular move of a player in a zero-sum three-person game may clearly benefit him while harming both other players, or it may benefit one opponent and harm the other. In particular, parallelism of interests between two players makes cooperation desirable. It may happen that a player has a choice among various policies, such as entering into a parallelism of interests with another player by adjusting his conduct, or doing the opposite, and that he can choose with which of the other two players he prefers to build such parallelism, and to what extent. The figure shows a typical example of a zero-sum three-person game: if Player 1 chooses to defend but Players 2 and 3 choose to attack, each of them gains one point, while Player 1 loses two points because those points are taken away by the other players; it is evident that Players 2 and 3 have a parallelism of interests.

Real life example

Economic benefits of low-cost airlines in saturated markets - net benefits or a zero-sum game 

Studies show that the entry of low-cost airlines into the Hong Kong market brought in $671 million in revenue and resulted in an outflow of $294 million.

Therefore, the replacement effect should be considered when introducing a new model, as it leads to economic leakage and injection; introducing new models therefore requires caution. For example, if the number of new airlines departing from and arriving at the airport is the same, the economic contribution to the host city may be a zero-sum game, because for Hong Kong the consumption of overseas tourists in Hong Kong is income, while the consumption of Hong Kong residents in the destination cities is an outflow. In addition, the introduction of new airlines can also have a negative impact on existing airlines.

Consequently, when a new aviation model is introduced, feasibility tests need to be carried out in all aspects, taking into account the economic inflow and outflow and displacement effects caused by the model.

Zero-sum games in financial markets

Derivatives trading may be considered a zero-sum game, as each dollar gained by one party in a transaction must be lost by the other, hence yielding a net transfer of wealth of zero.

An options contract – whereby a buyer purchases a derivative contract which provides them with the right to buy an underlying asset from a seller at a specified strike price before a specified expiration date – is an example of a zero-sum game. A futures contract – whereby a buyer purchases a derivative contract to buy an underlying asset from the seller for a specified price on a specified date – is also an example of a zero-sum game. This is because the fundamental principle of these contracts is that they are agreements between two parties, and any gain made by one party must be matched by a loss sustained by the other.

If the price of the underlying asset increases before the expiration date, the buyer may exercise or close the options/futures contract. The buyer's gain and the corresponding seller's loss will be the difference between the strike price and the value of the underlying asset at that time. Hence, the net transfer of wealth is zero.

Swaps, which involve the exchange of cash flows from two different financial instruments, are also considered a zero-sum game. Consider a standard interest rate swap whereby Firm A pays a fixed rate and receives a floating rate; correspondingly Firm B pays a floating rate and receives a fixed rate. If rates increase, then Firm A will gain, and Firm B will lose by the rate differential (floating rate – fixed rate). If rates decrease, then Firm A will lose, and Firm B will gain by the rate differential (fixed rate – floating rate).
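A small numerical sketch with hypothetical figures (a $1,000,000 notional and a 3% fixed rate) shows that the two firms' net payments always sum to zero, whichever way rates move.

```python
# Hypothetical interest rate swap: Firm A pays 3% fixed and receives the floating
# rate; Firm B pays the floating rate and receives 3% fixed, on a $1,000,000 notional.
notional = 1_000_000
fixed_rate = 0.03

for floating_rate in (0.04, 0.02):                      # rates rise, then fall
    firm_a = (floating_rate - fixed_rate) * notional    # A's net receipt
    firm_b = (fixed_rate - floating_rate) * notional    # B's net receipt
    print(f"floating={floating_rate:.0%}  "
          f"Firm A: {firm_a:+,.0f}  Firm B: {firm_b:+,.0f}  "
          f"sum: {firm_a + firm_b:+,.0f}")
```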

Whilst derivatives trading may be considered a zero-sum game, it is important to remember that this is not an absolute truth. The financial markets are complex and multifaceted, with a range of participants engaging in a variety of activities. While some trades may result in a simple transfer of wealth from one party to another, the market as a whole is not purely competitive, and many transactions serve important economic functions.

The stock market is an excellent example of a positive-sum game, often erroneously labelled as a zero-sum game. This is a zero-sum fallacy: the perception that one trader in the stock market may only increase the value of their holdings if another trader decreases their holdings.

The primary goal of the stock market is to match buyers and sellers, but the prevailing price is the one which equilibrates supply and demand. Stock prices generally move according to changes in future expectations, such as acquisition announcements, upside earnings surprises, or improved guidance.

For instance, if Company C announces a deal to acquire Company D, and investors believe that the acquisition will result in synergies and hence increased profitability for Company C, there will be an increased demand for Company C stock. In this scenario, all existing holders of Company C stock will enjoy gains without incurring any corresponding measurable losses to other players.

Furthermore, in the long run, the stock market is a positive-sum game. As economic growth occurs, demand increases, output increases, companies grow, and company valuations increase, leading to value creation and wealth addition in the market.

Complexity

It has been theorized by Robert Wright in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent.

Extensions

In 1944, John von Neumann and Oskar Morgenstern proved that any non-zero-sum game for n players is equivalent to a zero-sum game with n + 1 players; the (n + 1)th player representing the global profit or loss.
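A minimal sketch of this construction, using a hypothetical helper name: the fictitious (n + 1)th player simply receives the negative of the sum of the other players' payoffs, so every extended payoff profile sums to zero.

```python
# Convert an n-player payoff profile into an (n + 1)-player zero-sum profile by
# adding a fictitious player who absorbs the global gain or loss.
def add_fictitious_player(payoffs):
    return payoffs + [-sum(payoffs)]

# Example: a non-zero-sum outcome with payoffs (1, 1) sums to 2, so the fictitious
# third player receives -2 and the extended profile sums to zero.
print(add_fictitious_player([1, 1]))   # [1, 1, -2]
```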

Misunderstandings

Zero-sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games.

Politics is sometimes called zero sum because in common usage the idea of a stalemate is perceived to be "zero sum"; politics and macroeconomics are not zero sum games, however, because they do not constitute conserved systems.

Zero-sum thinking

In psychology, zero-sum thinking refers to the perception that a given situation is like a zero-sum game, where one person's gain is equal to another person's loss.

Homeokinetics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Homeokinetics

Homeokinetics is the study of self-organizing, complex systems. Standard physics studies systems at separate levels, such as atomic physics, nuclear physics, biophysics, social physics, and galactic physics. Homeokinetic physics studies the up-down processes that bind these levels. Tools such as mechanics, quantum field theory, and the laws of thermodynamics provide the key relationships. The subject, described as the physics and thermodynamics associated with the up-down movement between levels of systems, originated in the late 1970s work of American physicists Harry Soodak and Arthur Iberall. Complex systems are universes, galaxies, social systems, people, or even those that seem as simple as gases. The basic premise is that the entire universe consists of atomistic-like units bound in interactive ensembles to form systems, level by level, in a nested hierarchy. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity in studying how they work is reduced by the emergence of common languages in all complex systems.

History

Arthur Iberall, Warren McCulloch and Harry Soodak developed the concept of homeokinetics as a new branch of physics. It began through Iberall's biophysical research for the NASA exobiology program into the dynamics of mammalian physiological processes. They were observing an area that physics had neglected, that of complex systems with their very long internal factory-day delays. They were observing systems associated with nested hierarchy and with an extensive range of time-scale processes. It was such connections, referred to as both up-down or in-out connections (as nested hierarchy) and side-side or flatland physics among atomistic-like components (as heterarchy), that became the hallmark of homeokinetic problems. By 1975, they began to put a formal catch-phrase name on those complex problems, associating them with nature, life, human, mind, and society. The major method of exposition that they began using was a combination of engineering physics and a more academic pure physics. In 1981, Iberall was invited to the Crump Institute for Medical Engineering of UCLA, where he further refined the key concepts of homeokinetics, developing a physical scientific foundation for complex systems.

Self-organizing complex systems

A system is a collective of interacting ‘atomistic’-like entities. The word ‘atomism’ is used to stand both for the entity and the doctrine. As is known from ‘kinetic’ theory, in mobile or simple systems, the atomisms share their ‘energy’ in interactive collisions. That so-called ‘equipartitioning’ process takes place within a few collisions. Physically, if there is little or no interaction, the process is considered to be very weak. Physics deals basically with the forces of interaction, few in number, that influence the interactions. They all tend to emerge with considerable force at high ‘density’ of atomistic interaction. In complex systems, behavior is also a result of internal processes within the atomisms: in addition to the pair-by-pair interactions, they exhibit internal actions such as vibrations, rotations, and association. If the energy and time involved internally create a cycle of performance of these actions that is very long compared to the pair interactions, the collective system is complex. If you eat a cookie and you do not see the resulting action for hours, that is complex; if boy meets girl and they become ‘engaged’ for a protracted period, that is complex. What emerges from that physics is a broad host of changes in state and stability transitions in state. If Aristotle is viewed as having defined a general basis for systems in their static-logical states, and as having tried to identify a logic-metalogic for physics (i.e., metaphysics), then homeokinetics can be viewed as an attempt to define the dynamics of all those systems in the universe.

Flatland physics vs. homeokinetic physics

Ordinary physics is a flatland physics, a physics at some particular level. Examples include nuclear and atomic physics, biophysics, social physics, and stellar physics. Homeokinetic physics combines flatland physics with the study of the up-down processes that bind the levels. Tools such as mechanics, quantum field theory, and the laws of thermodynamics provide key relationships for the binding of the levels: how they connect and how energy flows up and down. Whether the atomisms are atoms, molecules, cells, people, stars, galaxies, or universes, the same tools can be used to understand them. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity in studying how they work is reduced by the emergence of common languages in all complex systems.

Applications

A homeokinetic approach to complex systems has been applied to understanding life, ecological psychology, mind, anthropology, geology, law, motor control, bioenergetics, healing modalities, and political science.

It has also been applied to social physics where a homeokinetics analysis shows that one must account for flow variables such as the flow of energy, of materials, of action, reproduction rate, and value-in-exchange. Iberall's conjectures on life and mind have been used as a springboard to develop theories of mental activity and action.

Inequality (mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Inequality...