Wednesday, February 23, 2022

Wheeler–Feynman absorber theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorber_theory

The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is an interpretation of electrodynamics derived from the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal transformation, as are the field equations themselves. Indeed, there is no apparent reason for the time-reversal symmetry breaking, which singles out a preferential time direction and thus makes a distinction between past and future. A time-reversal invariant theory is more logical and elegant. Another key principle, resulting from this interpretation and reminiscent of Mach's principle due to Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of self-energies.

T-symmetry and causality

The requirement of time-reversal symmetry is, in general, difficult to reconcile with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time $t = 0$ and point $x = 0$, which will arrive at point $x$ at the instant $t = x/c$ (here $c$ is the speed of light) after the emission (retarded solution), and other waves, which will arrive at the same place at the instant $t = -x/c$, before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves. In the absorber theory, charged particles are instead considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.
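The two families of solutions can be made concrete with the one-dimensional wave equation (a standard illustration, not specific to this article):

```latex
% General solution of the 1-D wave equation for a disturbance created at x = 0:
\frac{\partial^2 E}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} = 0
\quad\Longrightarrow\quad
E(x,t) = f\!\left(t - \frac{x}{c}\right) + g\!\left(t + \frac{x}{c}\right),
```

where $f$ describes the retarded wave, reaching the point $x$ a time $x/c$ after emission, and $g$ the advanced wave, reaching it a time $x/c$ before.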

Feynman and Wheeler obtained this result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves. The resulting field is

$$E_{\text{tot}}(\mathbf{x}, t) = \sum_n \frac{E_n^{\text{ret}}(\mathbf{x}, t) + E_n^{\text{adv}}(\mathbf{x}, t)}{2}.$$

Then they observed that if the relation

$$E_{\text{free}}(\mathbf{x}, t) = \sum_n \frac{E_n^{\text{ret}}(\mathbf{x}, t) - E_n^{\text{adv}}(\mathbf{x}, t)}{2} = 0$$

holds, then $E_{\text{free}}$, being a solution of the homogeneous Maxwell equations, can be used to obtain the total field

$$E_{\text{tot}}(\mathbf{x}, t) = \sum_n \frac{E_n^{\text{ret}}(\mathbf{x}, t) + E_n^{\text{adv}}(\mathbf{x}, t)}{2} + \sum_n \frac{E_n^{\text{ret}}(\mathbf{x}, t) - E_n^{\text{adv}}(\mathbf{x}, t)}{2} = \sum_n E_n^{\text{ret}}(\mathbf{x}, t).$$

The total field is retarded, and causality is not violated.

The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated from the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in the presence of both retarded and advanced waves.

The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling.

Alternatively, Wheeler and Feynman arrived at the primary equation as follows: they assumed that their Lagrangian contained interactions only when and where the fields of the individual particles were separated by a proper time of zero. Since only massless particles propagate from emission to detection with zero proper-time separation, this Lagrangian automatically demands an electromagnetic-like interaction.

T-symmetry and self-interaction

One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle ($m\mathbf{a} = \mathbf{F}$) must contain a dissipative force (damping term), which takes this energy loss into account. In the causal interpretation of electromagnetism, Lorentz and Abraham proposed that such a force, later called the Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field. This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of the particle's charge distribution. Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position:

$$E_{\text{damping}}(\mathbf{x}_j, t) = \frac{E_j^{\text{ret}}(\mathbf{x}_j, t) - E_j^{\text{adv}}(\mathbf{x}_j, t)}{2}.$$

However, Dirac did not propose any physical explanation of this interpretation.

A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on particle $j$ at its own position (the point $\mathbf{x}_j$) is then

$$E_j(\mathbf{x}_j, t) = \sum_{n \ne j} \frac{E_n^{\text{ret}}(\mathbf{x}_j, t) + E_n^{\text{adv}}(\mathbf{x}_j, t)}{2}.$$

If we add the free-field term (which vanishes by the absorber condition) to this expression, we obtain

$$E_j(\mathbf{x}_j, t) = \sum_{n \ne j} E_n^{\text{ret}}(\mathbf{x}_j, t) + \frac{E_j^{\text{ret}}(\mathbf{x}_j, t) - E_j^{\text{adv}}(\mathbf{x}_j, t)}{2}$$

and, thanks to Dirac's result,

$$E_j(\mathbf{x}_j, t) = \sum_{n \ne j} E_n^{\text{ret}}(\mathbf{x}_j, t) + E_{\text{damping}}(\mathbf{x}_j, t).$$

Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, and the expression derived by Dirac is given a physical justification.

Criticism

The Abraham–Lorentz force is, however, not free of problems. Written in the non-relativistic limit (in Gaussian units), it gives

$$\mathbf{F}_{\text{damping}} = \frac{2}{3} \frac{q^2}{c^3} \dddot{\mathbf{x}}.$$

Since the third derivative with respect to the time (also called the "jerk" or "jolt") enters in the equation of motion, to derive a solution one needs not only the initial position and velocity of the particle, but also its initial acceleration. This apparent problem, however, can be solved in the absorber theory by observing that the equation of motion for the particle has to be solved together with the Maxwell equations for the field. In this case, instead of the initial acceleration, one only needs to specify the initial field and the boundary condition. This interpretation restores the coherence of the physical interpretation of the theory.
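The role of the initial acceleration can be seen numerically. The sketch below (a toy Euler integration in arbitrary units; the function and parameter names are mine, not from the article) integrates the non-relativistic equation $m a = F_{\text{ext}} + m \tau \dot a$: the acceleration stays at zero only for the special initial condition $a(0) = 0$, while any other choice runs away exponentially.

```python
# Toy integration of the Abraham–Lorentz "jerk" term: the equation
# m*a = F_ext + m*tau*(da/dt) rearranges to da/dt = (a - F_ext/m)/tau.
# A generic initial acceleration produces a runaway ~ exp(t/tau).

def final_acceleration(a0, tau=1.0, f_ext=0.0, m=1.0, dt=1e-3, t_end=5.0):
    """Euler-integrate da/dt = (a - f_ext/m)/tau from a(0) = a0."""
    a = a0
    steps = int(t_end / dt)
    for _ in range(steps):
        a += dt * (a - f_ext / m) / tau
    return a

print(final_acceleration(0.0))    # 0.0: the special, physical solution
print(final_acceleration(1e-6))   # ~1.5e-4: a tiny seed grows like exp(t/tau)
```

This is the classical "runaway solution" pathology; specifying the field and boundary condition instead of the initial acceleration, as described above, is what selects the non-runaway behavior.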

Other difficulties may arise trying to solve the equation of motion for a charged particle in the presence of this damping force. It is commonly stated that the Maxwell equations are classical and cannot correctly account for microscopic phenomena, such as the behavior of a point-like particle, where quantum-mechanical effects should appear. Nevertheless, with absorber theory, Wheeler and Feynman were able to create a coherent classical approach to the problem (see also the "paradoxes" section in the Abraham–Lorentz force).

Also, the time-symmetric interpretation of the electromagnetic waves appears to be in contrast with the experimental evidence that time flows in a given direction and, thus, that the T-symmetry is broken in our world. It is commonly believed, however, that this symmetry breaking appears only in the thermodynamical limit (see, for example, the arrow of time). Wheeler himself accepted that the expansion of the universe is not time-symmetric in the thermodynamic limit. This, however, does not imply that the T-symmetry must be broken also at the microscopic level.

Finally, the main drawback of the theory turned out to be the result that particles are not self-interacting. Indeed, as demonstrated by Hans Bethe, the Lamb shift necessitated a self-energy term to be explained. Feynman and Bethe had an intense discussion over that issue, and eventually Feynman himself stated that self-interaction is needed to correctly account for this effect.

Developments since original formulation

Gravity theory

Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle–Narlikar theory, believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding.

Transactional interpretation of quantum mechanics

Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM), first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and that it resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality.

Attempted resolution of causality

T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea. The Lagrangian describing a particle ($p_1$) under the influence of the time-symmetric potential generated by another particle ($p_2$) is

$$L_1 = T_1 - \frac{1}{2}\left[(V_R)_2^1 + (V_A)_2^1\right]$$

where $T_i$ is the relativistic kinetic energy functional of particle $p_i$, and $(V_R)_j^i$ and $(V_A)_j^i$ are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle $p_i$ and generated by particle $p_j$. The corresponding Lagrangian for particle $p_2$ is

$$L_2 = T_2 - \frac{1}{2}\left[(V_R)_1^2 + (V_A)_1^2\right].$$

It was originally demonstrated with computer algebra and then proven analytically that

$$(V_R)_j^i - (V_A)_i^j$$

is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore

$$L_N = \sum_{i=1}^{N} T_i - \frac{1}{2} \sum_{\substack{i,j = 1 \\ i \ne j}}^{N} (V_R)_j^i.$$

The resulting Lagrangian is symmetric under the exchange of $p_i$ with $p_j$. For $N = 2$ this Lagrangian generates exactly the same equations of motion as $L_1$ and $L_2$. Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle–particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle.[13] Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is of infinite order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Hibbs is amenable to the methods of higher-than-first-order Lagrangians and exhibits chaotic-like solutions. Moore and Scott showed that the radiation reaction can alternatively be derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory.
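The elimination of the advanced potential can be sketched in the two-body case. Here $\Lambda$ is an illustrative name (introduced for this sketch, not from the original papers) for a function whose total time derivative equals $(V_R)_1^2 - (V_A)_2^1$:

```latex
% Two-body sketch: trading the advanced potential for a retarded one,
% using (V_A)_2^1 = (V_R)_1^2 - d\Lambda/dt.
\begin{aligned}
L_1 &= T_1 - \tfrac{1}{2}\left[(V_R)_2^1 + (V_A)_2^1\right] \\
    &= T_1 - \tfrac{1}{2}\left[(V_R)_2^1 + (V_R)_1^2\right] + \tfrac{1}{2}\frac{d\Lambda}{dt}.
\end{aligned}
```

The total derivative contributes nothing to the Euler–Lagrange equations, and adding $T_2$ (which does not involve particle 1's coordinates) leaves particle 1's equations of motion unchanged; what remains is built from retarded potentials only, in line with the N-body Lagrangian for the case N = 2.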

The acausality is thus merely apparent, and the entire problem goes away. An opposing view was held by Einstein.

Alternative Lamb shift calculation

As mentioned previously, a serious criticism of the absorber theory is that its Machian assumption that point particles do not act on themselves does not allow for (infinite) self-energies, and consequently precludes the explanation of the Lamb shift offered by quantum electrodynamics (QED). Ed Jaynes proposed an alternative model in which the Lamb-like shift is instead due to interaction with other particles, very much along the lines of the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes showed that it is easy to get both spontaneous-emission and Lamb-shift behavior in classical mechanics. Furthermore, Jaynes' alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization.
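The coupled-oscillator picture can be illustrated numerically. The sketch below is my own toy model with arbitrary parameters, not Jaynes' actual calculation: one classical oscillator is coupled bilinearly to a dense bath of oscillators, and its energy drains irreversibly into the bath, mimicking spontaneous emission without any quantized field.

```python
# Toy model: a "system" oscillator weakly coupled to many bath oscillators.
# Hamiltonian: sum of free oscillator energies + g * x0 * sum(xb).
# All parameter values are arbitrary, chosen only for the demonstration.
import numpy as np

rng = np.random.default_rng(0)

N = 300                        # number of bath oscillators
w0 = 1.0                       # system oscillator frequency
wb = rng.uniform(0.5, 1.5, N)  # bath frequencies spread around w0
g = 0.02                       # weak bilinear coupling

# initial conditions: system displaced, bath at rest
x0, v0 = 1.0, 0.0
xb = np.zeros(N)
vb = np.zeros(N)

def system_energy(x, v):
    return 0.5 * (v**2 + w0**2 * x**2)

e_start = system_energy(x0, v0)

dt, steps = 0.005, 20000       # integrate to t = 100 (symplectic Euler)
for _ in range(steps):
    a0 = -w0**2 * x0 - g * xb.sum()   # force on the system from the bath
    ab = -wb**2 * xb - g * x0         # force on each bath oscillator
    v0 += dt * a0
    vb += dt * ab
    x0 += dt * v0
    xb += dt * vb

e_end = system_energy(x0, v0)
print(e_start, e_end)   # the system's energy has largely leaked into the bath
```

Before the (very long) recurrence time of the finite bath, the decay is approximately exponential, which is the classical analogue of spontaneous emission that Jaynes' argument relies on.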

This model leads to the same type of Bethe logarithm (an essential part of the Lamb shift calculation), vindicating Jaynes' claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality.

Conclusions

This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. Both retarded and advanced fields appear respectively as retarded and advanced propagators and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising in view of the fact that, in field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions.

Tuesday, February 22, 2022

Hugh Everett III

From Wikipedia, the free encyclopedia
 
Hugh Everett in 1964
Born: November 11, 1930
Died: July 19, 1982 (aged 51)
Citizenship: United States
Alma mater: Catholic University of America; Princeton University (Ph.D.)
Known for: Many-worlds interpretation; Everett's theorem
Children: Elizabeth Everett, Mark Oliver Everett
Fields: Physics; operations research; optimization; game theory
Institutions: Institute for Defense Analyses; American Management Systems; Monowave Corporation
Doctoral advisor: John Archibald Wheeler

Hugh Everett III (/ˈɛvərɪt/; November 11, 1930 – July 19, 1982) was an American physicist who first proposed the many-worlds interpretation (MWI) of quantum physics, which he termed his "relative state" formulation. In contrast to the then-dominant Copenhagen interpretation, the MWI posits that the wave function never collapses and that all possibilities of a quantum superposition are objectively real.

Discouraged by the scorn of other physicists for MWI, Everett ended his physics career after completing his PhD. Afterwards, he developed the use of generalized Lagrange multipliers for operations research and applied this commercially as a defense analyst and a consultant. In poor health later in life, he died at the age of 51 in 1982. He is the father of musician Mark Oliver Everett.

Although largely disregarded until near the end of Everett's lifetime, the MWI received more credibility with the discovery of quantum decoherence in the 1970s and has received increased attention in recent decades, becoming one of the mainstream interpretations of quantum mechanics alongside Copenhagen, pilot wave theories, and consistent histories.

Early life and education

Hugh Everett III was born in 1930 and raised in the Washington, D.C. area. Everett's parents separated when he was young. Initially raised by his mother (Katherine Lucille Everett née Kennedy), he was raised by his father (Hugh Everett Jr) and stepmother (Sarah Everett née Thrift) from the age of seven.

At the age of twelve he wrote a letter to Albert Einstein asking him whether that which maintained the universe was something random or unifying. Einstein responded as follows:

Dear Hugh: There is no such thing like an irresistible force and immovable body. But there seems to be a very stubborn boy who has forced his way victoriously through strange difficulties created by himself for this purpose. Sincerely yours, A. Einstein

Everett won a half scholarship to St. John's College High School in Washington, D.C. From there, he moved to the nearby Catholic University of America to study chemical engineering as an undergraduate. While there, he read about Dianetics in Astounding Science Fiction. Although he never exhibited any interest in Scientology (as Dianetics became), he did retain a distrust of conventional medicine throughout his life.

During World War II his father was away fighting in Europe as a lieutenant colonel on the general staff. After World War II, Everett's father was stationed in West Germany, and Hugh joined him, during 1949, taking a year out from his undergraduate studies. Father and son were both keen photographers and took hundreds of pictures of West Germany being rebuilt. Reflecting their technical interests, the pictures were "almost devoid of people".

Princeton

Everett graduated from the Catholic University of America in 1953 with a degree in chemical engineering, although he had completed sufficient courses for a mathematics degree as well. He received a National Science Foundation fellowship that allowed him to attend Princeton University for graduate studies. He started his studies at Princeton in the mathematics department, where he worked on the then-new field of game theory under Albert W. Tucker, but slowly drifted into physics. In 1953 he started taking his first physics courses, notably Introductory Quantum Mechanics with Robert Dicke.

During 1954, he attended Methods of Mathematical Physics with Eugene Wigner, although he remained active with mathematics and presented a paper on military game theory in December. He passed his general examinations in the spring of 1955, thereby gaining his master's degree, and then started work on his dissertation that would (much) later make him famous. He switched thesis advisor to John Archibald Wheeler some time in 1955, wrote a couple of short papers on quantum theory and completed his long paper, Wave Mechanics Without Probability in April 1956.

In his third year at Princeton, Everett moved into an apartment which he shared with three friends he had made during his first year, Hale Trotter, Harvey Arnold and Charles Misner. Arnold later described Everett as follows:

He was smart in a very broad way. I mean, to go from chemical engineering to mathematics to physics and spending most of the time buried in a science fiction book, I mean, this is talent.

It was during this time that he met Nancy Gore, who typed up his Wave Mechanics Without Probability paper. Everett married Nancy Gore the next year. The long paper was later retitled as The Theory of the Universal Wave Function.

Wheeler himself had traveled to Copenhagen in May 1956 with the goal of getting a favorable reception for at least part of Everett's work, but in vain. In June 1956 Everett started defense work in the Pentagon's Weapons Systems Evaluation Group, returning briefly to Princeton to defend his thesis after some delay in the spring of 1957. A short article, which was a compromise between Everett and Wheeler about how to present the concept and almost identical to the final version of his thesis, was published in Reviews of Modern Physics, Vol. 29, No. 3, pp. 454–462 (July 1957), accompanied by an approving review by Wheeler. Everett was not happy with the final form of the article. Everett received his Ph.D. in physics from Princeton in 1957 after completing his doctoral dissertation titled "On the foundations of quantum mechanics."

After Princeton

Everett's move to the Pentagon marked his transition from academia to commercial work.

Upon graduation in September 1956, Everett was invited to join the Pentagon's newly forming Weapons Systems Evaluation Group (WSEG), managed by the Institute for Defense Analyses. Between 23 and 26 October 1956 he attended a weapons orientation course managed by Sandia National Laboratories in Albuquerque, New Mexico to learn about nuclear weapons, and became a fan of computer modeling while there. In 1957, he became director of the WSEG's Department of Physical and Mathematical Sciences. After a brief intermission to defend his thesis on quantum theory at Princeton, Everett returned to WSEG and recommenced his research, much of which, but by no means all, remains classified. He worked on various studies of the Minuteman missile project, which was then starting, as well as the influential study The Distribution and Effects of Fallout in Large Nuclear Weapon Campaigns.

During March and April 1959, at Wheeler's request, Everett visited Copenhagen, on vacation with his wife and baby daughter, in order to meet with Niels Bohr, the "father of the Copenhagen interpretation of quantum mechanics". The visit was a complete disaster; Everett was unable to communicate the main idea that the universe is describable, in theory, by an objectively existing universal wave function (which does not "collapse"); this was simply heresy to Bohr and the others at Copenhagen. The conceptual gulf between their positions was too wide to allow any meeting of minds; Léon Rosenfeld, one of Bohr's devotees, talking about Everett's visit, described Everett as being "undescribably [sic] stupid and could not understand the simplest things in quantum mechanics". Everett later described this experience as "hell...doomed from the beginning".

However, while in Copenhagen, in his hotel, he started work on a new idea to use generalized Lagrange multipliers for mathematical optimization. Everett's theorem, published in 1963, relates the Lagrangian bidual to the primal problem.
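The flavor of Everett's generalized-Lagrange-multiplier method can be sketched with a small resource-allocation example. The payoff table and function names below are invented for illustration: for a trial multiplier lam, each independent "cell" maximizes payoff − lam·cost on its own, and lam is adjusted until the induced total cost fits the budget.

```python
# Illustrative sketch of the generalized Lagrange multiplier idea
# (hypothetical data, not from Everett's 1963 paper).

# options per cell: (cost, payoff); values invented for the demo
cells = [
    [(0, 0.0), (1, 5.0), (3, 9.0)],
    [(0, 0.0), (2, 4.0), (4, 11.0)],
    [(0, 0.0), (1, 3.0), (2, 7.0)],
]

def best_for_multiplier(lam):
    """Each cell independently maximizes payoff - lam * cost."""
    choice = [max(opts, key=lambda cp: cp[1] - lam * cp[0]) for opts in cells]
    cost = sum(c for c, _ in choice)
    payoff = sum(p for _, p in choice)
    return cost, payoff

def allocate(budget, iters=60):
    """Bisect on lam until the induced total cost fits the budget."""
    lo, hi = 0.0, 100.0          # assume the relevant lam lies in this bracket
    for _ in range(iters):
        mid = (lo + hi) / 2
        cost, _ = best_for_multiplier(mid)
        if cost > budget:
            lo = mid             # infeasible: raise the "price" of cost
        else:
            hi = mid
    return best_for_multiplier(hi)

cost, payoff = allocate(budget=7)
print(cost, payoff)
```

Everett's theorem guarantees that the returned allocation is optimal for the budget it actually consumes (which may be less than the full budget when there is a duality gap), even though each cell was optimized independently.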

In 1962 Everett accepted an invitation to present the relative-state formulation (as it was still called) at a conference on the foundations of quantum mechanics held at Xavier University in Cincinnati. In his exposition Everett presented his derivation of probability and also stated explicitly that observers in all branches of the wavefunction were equally "real." He also agreed with an observation from the floor that the number of branches of the universal wavefunction was an uncountable infinity.

In August 1964, Everett and several WSEG colleagues started Lambda Corp. to apply military modeling solutions to various civilian problems. During the early 1970s, defense budgets were curtailed and most money went to operational duties in the Vietnam War, resulting in Lambda eventually being absorbed by the General Research Corp.

In 1973, Everett and Donald Reisler (a Lambda colleague and fellow physicist) left the firm to establish DBS Corporation in Arlington, Virginia. Although the firm conducted defense research (including work on United States Navy ship maintenance optimization and weapons applications), it primarily specialized in "analyzing the socioeconomic effects of government affirmative action programs" as a contractor under the auspices of the Department of Justice and the Department of Health, Education and Welfare. For a period of time, the company was partially supported by American Management Systems, a business consulting firm that drew upon algorithms developed by Everett. He concurrently held a non-administrative vice presidency at AMS and was frequently consulted by the firm's founders.

Everett cultivated an early aptitude for computer programming at IDA and favored the TRS-80 at DBS, where he primarily worked for the rest of his life.

Later recognition

In 1970 Bryce DeWitt wrote an article for Physics Today on Everett's relative-state theory, which evoked a number of letters from physicists. These letters, and DeWitt's responses to the technical objections raised, were also published. Meanwhile DeWitt, who had corresponded with Everett on the many-worlds / relative state interpretation when originally published in 1957, started editing an anthology on the many-worlds interpretation of quantum mechanics. In addition to the original articles by Everett and Wheeler, the anthology was dominated by the inclusion of Everett's 1956 paper The Theory of the Universal Wavefunction, which had never been published before. The book was published late in 1973, sold out completely, and it was not long before an article on Everett's work appeared in the science fiction magazine, Analog.

In 1977, Everett was invited to give a talk at a conference Wheeler had organised at Wheeler's new location at the University of Texas at Austin. As with the Copenhagen visit, Everett vacationed from his defense work and traveled with his family. Everett met DeWitt there for the first and only time. Everett's talk was quite well received and influenced a number of physicists in the audience, including Wheeler’s graduate student, David Deutsch, who later promoted the many-worlds interpretation to a wider audience. Everett, who "never wavered in his belief in his many-worlds theory", enjoyed the presentation; it was the first time for years he had talked about his quantum work in public. Wheeler started the process of returning Everett to a physics career by establishing a new research institute in California, but nothing came of this proposal. Wheeler, although happy to introduce Everett's ideas to a wider audience, was not happy to have his own name associated with Everett's ideas. Eventually, after Everett's death, he formally renounced the theory.

Death and legacy

At the age of 51, Everett, who believed in quantum immortality, died suddenly of a heart attack at home in his bed on the night of July 18–19, 1982. Everett's obesity, frequent chain-smoking and alcohol drinking almost certainly contributed to this, although he seemed healthy at the time. A committed atheist, he had asked that his remains be disposed of in the trash after his death. His wife kept his ashes in an urn for a few years, before complying with his wishes. About Hugh's death his son, Mark Oliver Everett, later said:

I think about how angry I was that my dad didn't take better care of himself. How he never went to a doctor, let himself become grossly overweight, smoked three packs a day, drank like a fish and never exercised. But then I think about how his colleague mentioned that, days before dying, my dad had said he lived a good life and that he was satisfied. I realize that there is a certain value in my father's way of life. He ate, smoked and drank as he pleased, and one day he just suddenly and quickly died. Given some of the other choices I'd witnessed, it turns out that enjoying yourself and then dying quickly is not such a hard way to go.

Of the companies Everett initiated, only Monowave Corporation still exists (in Seattle as of March 2015). It is managed by co-founder Elaine Tsiang, who received a Ph.D. in physics under Bryce DeWitt at the University of North Carolina at Chapel Hill before working for DBS as a programmer.

Everett's daughter, Elizabeth, died by suicide in 1996 (saying in her suicide note that she wished her ashes to be thrown out with the garbage so that she might "end up in the correct parallel universe to meet up w[ith] Daddy"), and in 1998, his wife, Nancy, died of cancer. Everett's son, Mark Oliver Everett, who found Everett dead, is also known as "E" and is the main singer and songwriter for the band Eels. The Eels album Electro-Shock Blues, which was written during this time period, is representative of these deaths.

Mark Everett explored his father's work in the hour-long BBC television documentary Parallel Worlds, Parallel Lives. The program was edited and shown on the Public Broadcasting Service's Nova series in the USA during October 2008. In the program, Mark mentions how he wasn't aware of his father's status as a brilliant and influential physicist until his death in 1982.

Quantum suicide and immortality

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can falsify any interpretation of quantum mechanics other than the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. This concept is sometimes conjectured to be applicable to real-world causes of death as well.

Most experts hold that neither the experiment nor the related idea of immortality would work in the real world. As a thought experiment, quantum suicide is an intellectual exercise in which an abstract setup is followed through to its logical consequences merely to prove a theoretical point. Virtually all physicists and philosophers of science who have described it, especially in popularized treatments, underscore that it relies on contrived, idealized circumstances that may be impossible or exceedingly difficult to realize in real life, and that its theoretical premises are controversial even among supporters of the many-worlds interpretation. Thus, as cosmologist Anthony Aguirre warns, "[...] it would be foolish (and selfish) in the extreme to let this possibility guide one's actions in any life-and-death question."

History

Hugh Everett did not mention quantum suicide or quantum immortality in writing; his work was intended as a solution to the paradoxes of quantum mechanics. Eugene Shikhovtsev's biography of Everett states that "Everett firmly believed that his many-worlds theory guaranteed him immortality: his consciousness, he argued, is bound at each branching to follow whatever path does not lead to death". Peter Byrne, author of a published biography of Everett, reports that Everett also privately discussed quantum suicide (such as to play high-stakes Russian roulette and survive in the winning branch), but adds that "[i]t is unlikely, however, that Everett subscribed to this [quantum immortality] view, as the only sure thing it guarantees is that the majority of your copies will die, hardly a rational goal."

Among scientists, the thought experiment was introduced by Euan Squires in 1986. Afterwards, it was published independently by Hans Moravec in 1987 and Bruno Marchal in 1988; it was also described by Huw Price in 1997, who credited it to Dieter Zeh, and independently presented formally by Max Tegmark in 1998. It was later discussed by philosophers Peter J. Lewis in 2000 and David Lewis in 2001.

Thought experiment

The quantum suicide thought experiment involves a similar apparatus to Schrödinger's cat – a box which kills the occupant in a given time frame with probability one-half due to quantum uncertainty. The only difference is to have the experimenter recording observations be the one inside the box. The significance of this is that someone whose life or death depends on a qubit could possibly distinguish between interpretations of quantum mechanics. By definition, fixed observers cannot.

At the start of the first iteration, under both interpretations, the probability of surviving the experiment is 50%, as given by the squared norm of the wave function. At the start of the second iteration, assuming a single-world interpretation of quantum mechanics (like the widely-held Copenhagen interpretation) is true, the wave function has already collapsed; thus, if the experimenter is already dead, there is a 0% chance of survival for any further iterations. However, if the many-worlds interpretation is true, a superposition of the live experimenter necessarily exists (as also does the one who dies). Now, barring the possibility of life after death, after every iteration only one of the two experimenter superpositions – the live one – is capable of having any sort of conscious experience. Putting aside the philosophical problems associated with individual identity and its persistence, under the many-worlds interpretation, the experimenter, or at least a version of them, continues to exist through all of their superpositions where the outcome of the experiment is that they live. In other words, a version of the experimenter survives all iterations of the experiment. Since the superpositions where a version of the experimenter lives occur by quantum necessity (under the many-worlds interpretation), it follows that their survival, after any realizable number of iterations, is physically necessary; hence, the notion of quantum immortality.

A version of the experimenter surviving stands in stark contrast to the implications of the Copenhagen interpretation, according to which, although the survival outcome is possible in every iteration, its probability tends towards zero as the number of iterations increases. According to the many-worlds interpretation, the above scenario has the opposite property: the probability of a version of the experimenter living is necessarily one for any number of iterations.

In the book Our Mathematical Universe, Max Tegmark lays out three criteria that, in abstract, a quantum suicide experiment must fulfill:

  • The random number generator must be quantum, not deterministic, so that the experimenter enters a state of superposition of being dead and alive.
  • The experimenter must be rendered dead (or at least unconscious) on a time scale shorter than that on which they can become aware of the outcome of the quantum measurement.
  • The experiment must be virtually certain to kill the experimenter, and not merely injure him or her.

Analysis of real-world feasibility

In response to questions about "subjective immortality" from normal causes of death, Tegmark suggested that the flaw in that reasoning is that dying is not a binary event as in the thought experiment; it is a progressive process, with a continuum of states of decreasing consciousness. He states that in most real causes of death, one experiences such a gradual loss of self-awareness. It is only within the confines of an abstract scenario that an observer finds they defy all odds. Referring to the above criteria, he elaborates as follows: "[m]ost accidents and common causes of death clearly don't satisfy all three criteria, suggesting you won't feel immortal after all. In particular, regarding criterion 2, under normal circumstances dying isn't a binary thing where you're either alive or dead [...] What makes the quantum suicide work is that it forces an abrupt transition."

David Lewis' commentary and subsequent criticism

The philosopher David Lewis explored the possibility of quantum immortality in a 2001 lecture titled "How Many Lives Has Schrödinger's Cat?", his first - and last, due to his death less than four months afterwards - academic foray into the field of the interpretation of quantum mechanics. In the lecture, published posthumously in 2004, Lewis rejected the many-worlds interpretation, allowing that it offers initial theoretical attractions, but also arguing that it suffers from irremediable flaws, mainly regarding probabilities, and came to tentatively endorse the Ghirardi-Rimini-Weber theory instead. Lewis concluded the lecture by stating that the quantum suicide thought experiment, if applied to real-world causes of death, would entail what he deemed a "terrifying corollary": as all causes of death are ultimately quantum-mechanical in nature, if the many-worlds interpretation were true, in Lewis' view an observer should subjectively "expect with certainty to go on forever surviving whatever dangers [he or she] may encounter", as there will always be possibilities of survival, no matter how unlikely; faced with branching events of survival and death, an observer should not "equally expect to experience life and death", as there is no such thing as experiencing death, and should thus divide his or her expectations only among branches where he or she survives. If survival is guaranteed, however, this is not the case for good health or integrity. This would lead to a cumulative deterioration that indefinitely stops just short of death.

Interviewed for the 2004 book Schrödinger's Rabbits, Tegmark rejected this scenario for the reason that "the fading of consciousness is a continuous process. Although I cannot experience a world line in which I am altogether absent, I can enter one in which my speed of thought is diminishing, my memories and other faculties fading [...] [Tegmark] is confident that even if he cannot die all at once, he can gently fade away." In the same book, philosopher of science and many-worlds proponent David Wallace undermines the case for real-world quantum immortality on the basis that death can be understood as a continuum of decreasing states of consciousness not only in time, as argued by Tegmark, but also in space: "our consciousness is not located at one unique point in the brain, but is presumably a kind of emergent or holistic property of a sufficiently large group of neurons [...] our consciousness might not be able to go out like a light, but it can dwindle exponentially until it is, for all practical purposes, gone."

Directly responding to Lewis' lecture, British philosopher and many-worlds proponent David Papineau, while finding Lewis' other objections to the many-worlds interpretation lacking, strongly denies that any modification to the usual probability rules is warranted in death situations. Assured subjective survival can follow from the quantum suicide idea only if an agent reasons in terms of "what will be experienced next" instead of the more obvious "what will happen next, whether it will be experienced or not". He writes: "[...] it is by no means obvious why Everettians should modify their intensity rule in this way. For it seems perfectly open for them to apply the unmodified intensity rule in life-or-death situations, just as elsewhere. If they do this, then they can expect all futures in proportion to their intensities, whether or not those futures contain any of their live successors. For example, even when you know you are about to be the subject in a fifty-fifty Schrödinger’s experiment, you should expect a future branch where you perish, to just the same degree as you expect a future branch where you survive."

On a similar note, quoting Lewis' position that death should not be expected as an experience, philosopher of science Charles Sebens concedes that, in a quantum suicide experiment, "[i]t is tempting to think you should expect survival with certainty." However, he remarks that expectation of survival could follow only if the quantum branching and death were absolutely simultaneous, otherwise normal chances of death apply: "[i]f death is indeed immediate on all branches but one, the thought has some plausibility. But if there is any delay it should be rejected. In such a case, there is a short period of time when there are multiple copies of you, each (effectively) causally isolated from the others and able to assign a credence to being the one who will live. Only one will survive. Surely rationality does not compel you to be maximally optimistic in such a scenario." Sebens also explores the possibility that death might not be simultaneous to branching, but still faster than a human can mentally realize the outcome of the experiment. Again, an agent should expect to die with normal probabilities: "[d]o the copies need to last long enough to have thoughts to cause trouble? I think not. If you survive, you can consider what credences you should have assigned during the short period after splitting when you coexisted with the other copies."

Writing in the journal Ratio, philosopher István Aranyosi, while noting that "[the] tension between the idea of states being both actual and probable is taken as the chief weakness of the many-worlds interpretation of quantum mechanics," summarizes that most of the critical commentary of Lewis' immortality argument has revolved around its premises. But even if, for the sake of argument, one were willing to entirely accept Lewis' assumptions, Aranyosi strongly denies that the "terrifying corollary" would be the correct implication of said premises. Instead, the two scenarios that would most likely follow would be what Aranyosi describes as the "comforting corollary", in which an observer should never expect to get very sick in the first place, or the "momentary life" picture, in which an observer should expect "eternal life, spent almost entirely in an unconscious state", punctuated by extremely brief, amnesiac moments of consciousness. Thus, Aranyosi concludes that while "[w]e can't assess whether one or the other [of the two alternative scenarios] gets the lion's share of the total intensity associated with branches compatible with self-awareness, [...] we can be sure that they together (i.e. their disjunction) do indeed get the lion's share, which is much reassuring."

Analysis by other proponents of the many-worlds interpretation

Physicist David Deutsch, though a proponent of the many-worlds interpretation, states regarding quantum suicide that "that way of applying probabilities does not follow directly from quantum theory, as the usual one does. It requires an additional assumption, namely that when making decisions one should ignore the histories in which the decision-maker is absent....[M]y guess is that the assumption is false."

Tegmark now believes experimenters should only expect a normal probability of survival, not immortality. The experimenter's probability amplitude in the wavefunction decreases significantly, meaning they exist with a much lower measure than they had before. Per the anthropic principle, a person is less likely to find themselves in a world where they are less likely to exist, that is, a world with a lower measure has a lower probability of being observed by them. Therefore, the experimenter will have a lower probability of observing the world in which they survive than the earlier world in which they set up the experiment. This same problem of reduced measure was pointed out by Lev Vaidman in the Stanford Encyclopedia of Philosophy. In the 2001 paper, "Probability and the many-worlds interpretation of quantum theory", Vaidman writes that an agent should not agree to undergo a quantum suicide experiment: "The large 'measures' of the worlds with dead successors is a good reason not to play." Vaidman argues that it is the instantaneity of death that may seem to imply subjective survival of the experimenter, but that normal probabilities nevertheless must apply even in this special case: "[i]ndeed, the instantaneity makes it difficult to establish the probability postulate, but after it has been justified in the wide range of other situations it is natural to apply the postulate for all cases."

In his 2013 book The Emergent Multiverse, Wallace opines that the reasons for expecting subjective survival in the thought experiment "do not really withstand close inspection", although he concedes that it would be "probably fair to say [...] that precisely because death is philosophically complicated, my objections fall short of being a knock-down refutation". Besides re-stating that there appears to be no motive to reason in terms of expectations of experience instead of expectations of what will happen, he suggests that a decision-theoretic analysis shows that "an agent who prefers certain life to certain death is rationally compelled to prefer life in high-weight branches and death in low-weight branches to the opposite."

Physicist Sean M. Carroll, another proponent of the many-worlds interpretation, states regarding quantum suicide that neither experiences nor rewards should be thought of as being shared between future versions of oneself, as they become distinct persons when the world splits. He further states that one cannot pick out some future versions of oneself as "really you" over others, and that quantum suicide still cuts off the existence of some of these future selves, which would be worth objecting to just as if there were a single world.

Analysis by skeptics of the many-worlds interpretation

Cosmologist Anthony Aguirre, while personally skeptical of most accounts of the many-worlds interpretation, in his book Cosmological Koans writes that "[p]erhaps reality actually is this bizarre, and we really do subjectively 'survive' any form of death that is both instantaneous and binary." Aguirre notes, however, that most causes of death do not fulfill these two requirements: "If there are degrees of survival, things are quite different." If loss of consciousness was binary like in the thought experiment, the quantum suicide effect would prevent an observer from subjectively falling asleep or undergoing anesthesia, conditions in which mental activities are greatly diminished but not altogether abolished. Consequently, upon most causes of death, even outwardly sudden, if the quantum suicide effect holds true an observer is more likely to progressively slip into an attenuated state of consciousness, rather than remain fully awake by some very improbable means. Aguirre further states that quantum suicide as a whole might be characterized as a sort of reductio ad absurdum against the current understanding of both the many-worlds interpretation and theory of mind. He finally hypothesizes that a different understanding of the relationship between the mind and time should remove the bizarre implications of necessary subjective survival.

Physicist and writer Philip Ball, a critic of the many-worlds interpretation, in his book Beyond Weird, describes the quantum suicide experiment as "cognitively unstable" and exemplificatory of the difficulties of the many-worlds theory with probabilities. While he acknowledges Lev Vaidman's argument that an experimenter should subjectively expect outcomes in proportion to the "measure of existence" of the worlds in which they happen, Ball ultimately rejects this explanation. "What this boils down to is the interpretation of probabilities in the MWI. If all outcomes occur with 100% probability, where does that leave the probabilistic character of quantum mechanics?" Furthermore, Ball explains that such arguments highlight what he recognizes as another major problem of the many-worlds interpretation, connected but independent from the issue of probability: the incompatibility with the notion of selfhood. Ball ascribes most attempts at justifying probabilities in the many-worlds interpretation to "saying that quantum probabilities are just what quantum mechanics look like when consciousness is restricted to only one world" but that "there is in fact no meaningful way to explain or justify such a restriction." Before performing a quantum measurement, an "Alice before" experimenter "can't use quantum mechanics to predict what will happen to her in a way that can be articulated - because there is no logical way to talk about 'her' at any moment except the conscious present (which, in a frantically splitting universe, doesn't exist). Because it is logically impossible to connect the perceptions of Alice Before to Alice After [the experiment], "Alice" has disappeared. [...] [The MWI] eliminates any coherent notion of what we can experience, or have experienced, or are experiencing right now."

Philosopher of science Peter J. Lewis, a critic of the many-worlds interpretation, considers the whole thought experiment an example of the difficulty of accommodating probability within the many-worlds framework: "[s]tandard quantum mechanics yields probabilities for various future occurrences, and these probabilities can be fed into an appropriate decision theory. But if every physically possible consequence of the current state of affairs is certain to occur, on what basis should I decide what to do? For example, if I point a gun at my head and pull the trigger, it looks like Everett’s theory entails that I am certain to survive—and that I am certain to die. This is at least worrying, and perhaps rationally disabling." In his book Quantum Ontology, Lewis explains that for the subjective immortality argument to be drawn out of the many-worlds theory, one has to adopt an understanding of probability - the so-called "branch-counting" approach, in which an observer can meaningfully ask "which post-measurement branch will I end up on?" - that is ruled out by experimental, empirical evidence as it would yield probabilities that do not match with the well-confirmed Born rule. Lewis identifies instead in the Deutsch-Wallace decision-theoretic analysis the most promising (although still, to his judgement, incomplete) way of addressing probabilities in the many-worlds interpretation, in which it is not possible to count branches (and, similarly, the persons that "end up" on each branch). Lewis concludes that "[t]he immortality argument is perhaps best viewed as a dramatic demonstration of the fundamental conflict between branch-counting (or person-counting) intuitions about probability and the decision theoretic approach. The many-worlds theory, to the extent that it is viable, does not entail that you should expect to live forever."

Information cascade

From Wikipedia, the free encyclopedia

An information cascade (or informational cascade) is a phenomenon described in behavioral economics and network theory in which a number of people make the same decision in a sequential fashion. It is similar to, but distinct from, herd behavior.

An information cascade is generally described as a two-step process. First, an individual must encounter a scenario with a decision, typically a binary one. Second, outside factors can influence this decision, typically through the observation of the actions of other individuals in similar scenarios and the outcomes of those actions.

The two-step process of an informational cascade can be broken down into five basic components:

  1. There is a decision to be made – for example, whether to adopt a new technology, wear a new style of clothing, eat in a new restaurant, or support a particular political position
  2. A limited action space exists (e.g. an adopt/reject decision)
  3. People make the decision sequentially, and each person can observe the choices made by those who acted earlier
  4. Each person has some private information of their own that helps guide their decision
  5. A person can't directly observe the private information that other people hold, but can make inferences about this information from their observable actions

Social perspectives of cascades, which suggest that agents may act irrationally (e.g., against what they think is optimal) when social pressures are great, exist as complements to the concept of information cascades. More often the problem is that the concept of an information cascade is confused with ideas that do not match the two key conditions of the process, such as social proof, information diffusion, and social influence. Indeed, the term information cascade has even been used to refer to such processes.

Basic model

This section provides some basic examples of information cascades, as originally described by Bikhchandani et al. (1992). The basic model has since been developed in a variety of directions to examine its robustness and better understand its implications.

Qualitative example

Information cascades occur when external information obtained from previous participants in an event overrides one's own private signal, irrespective of the correctness of the former over the latter. The experiment conducted by Anderson is a useful example of this process. The experiment used two urns labeled A and B. Urn A contains two balls labeled "a" and one labeled "b". Urn B contains one ball labeled "a" and two labeled "b". The urn from which a ball must be drawn during each run is determined randomly and with equal probabilities (by the roll of a die). The contents of the chosen urn are emptied into a neutral container. The participants are then asked in random order to draw a ball from this container. This entire process may be termed a "run", and a number of such runs are performed.

Each time a participant draws a ball, he is to decide which urn it belongs to. His decision is then announced for the benefit of the remaining participants in the room. Thus, the (n+1)th participant has information about the decisions made by all the n participants preceding him, as well as his private signal, namely the label on the ball that he draws during his turn. The experimenters observed an information cascade in 41 of 56 such runs. This means that, in the runs where a cascade occurred, at least one participant gave precedence to earlier decisions over his own private signal. It is possible for such an occurrence to produce the wrong result, a phenomenon known as a "reverse cascade".
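A run of this kind can be sketched in code. The simulation below is our own illustration, not a reconstruction of Anderson's protocol: it assumes the 2/3 signal accuracy implied by the two-vs-one ball composition, and it uses the standard simplification that announcements reveal private signals exactly until the net count of revealed signals reaches ±2, after which every participant imitates the majority.

```python
import random

def run_urn_experiment(q=2/3, n=10, seed=None):
    """Simulate one run of the urn experiment with rational participants.

    q: probability that a private draw matches the true urn (2/3 for the
       two-vs-one ball composition described above).
    Returns the true urn, the list of announcements, and the index at
    which a cascade (if any) began.
    """
    rng = random.Random(seed)
    true_urn = rng.choice("AB")
    diff = 0                  # net count of revealed "A" signals so far
    announcements = []
    cascade_at = None
    for i in range(n):
        p_a = q if true_urn == "A" else 1 - q
        signal = "A" if rng.random() < p_a else "B"
        if abs(diff) >= 2:    # cascade: imitate the majority, ignore the draw
            choice = "A" if diff > 0 else "B"
            if cascade_at is None:
                cascade_at = i
        else:                 # no cascade yet: the announcement reveals the signal
            choice = signal
            diff += 1 if signal == "A" else -1
        announcements.append(choice)
    return true_urn, announcements, cascade_at

true_urn, announcements, cascade_at = run_urn_experiment(seed=3)
print(true_urn, announcements, cascade_at)
```

A run in which `cascade_at` is set but the cascading announcements disagree with `true_urn` is exactly the "reverse cascade" described above.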

Quantitative description

A person's signal telling them to accept is denoted as H (a high signal, where high signifies he should accept), and a signal telling them not to accept is L (a low signal). The model assumes that when the correct decision is to accept, individuals will be more likely to see an H, and conversely, when the correct decision is to reject, individuals are more likely to see an L signal. This is essentially a conditional probability: P(H | accept), the probability of an H signal when the correct action is to accept. Similarly, P(L | reject) is the probability that an agent gets an L signal when the correct action is to reject. If both of these likelihoods are equal to q, then q > 0.5. This is summarized in the table below.

Agent signal   True state: Reject   True state: Accept
L              q                    1-q
H              1-q                  q

The first agent determines whether or not to accept solely based on his own signal. As the model assumes that all agents act rationally, the action (accept or reject) the agent feels is more likely is the action he will choose to take. This decision can be explained using Bayes' rule: if the agent receives an H signal, the posterior probability that accepting is correct is

P(accept | H) = pq / (pq + (1 - p)(1 - q)),

where p is the prior probability that accepting is the correct action. Because q > 0.5, the first agent, acting only on his private signal, will always increase his estimate of p with an H signal. Similarly, it can be shown that an agent will always decrease his expectation of p when he receives a low signal. If the value, V, of accepting is equal to the value of rejecting, an agent will accept if he believes p > 0.5, and reject otherwise. Because this agent started out with the assumption that both accepting and rejecting are equally viable options (p = 0.5), the observation of an H signal allows him to conclude that accepting is the rational choice.
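The first agent's update can be checked numerically. A minimal sketch, where the prior p = 0.5 and signal accuracy q = 0.7 are illustrative values of our choosing:

```python
def posterior_after_H(p, q):
    """Posterior that 'accept' is correct after one high (H) signal:
    P(accept | H) = p*q / (p*q + (1 - p)*(1 - q))."""
    return p * q / (p * q + (1 - p) * (1 - q))

# With a neutral prior p = 0.5 and signal accuracy q = 0.7, one H signal
# lifts the estimate from 0.5 to about 0.7, so the agent accepts.
print(posterior_after_H(0.5, 0.7))   # ≈ 0.7
```

For any q > 0.5 the posterior exceeds the prior, matching the claim that an H signal always raises the agent's estimate.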

The second agent then considers both the first agent's decision and his own signal, again in a rational fashion. In general, the nth agent considers the decisions of the previous n-1 agents along with his own signal, and makes a decision based on Bayesian reasoning to determine the most rational choice:

P(accept | a, b) = p q^a (1-q)^b / (p q^a (1-q)^b + (1-p)(1-q)^a q^b),

where a is the number of accept signals in the previous set plus the agent's own signal, and b is the number of reject signals. Thus, a + b = n. The decision is based on how the value on the right-hand side of the equation compares with p.
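The nth agent's update can be sketched the same way; the posterior here is the model's standard one, while the numbers p = 0.5 and q = 0.7 are again illustrative:

```python
def posterior(p, q, a, b):
    """Posterior that 'accept' is correct given a high signals and b low
    signals (the agent's own signal included), with prior p and accuracy q."""
    num = p * q**a * (1 - q)**b
    return num / (num + (1 - p) * (1 - q)**a * q**b)

p, q = 0.5, 0.7
# Two inferred accept signals plus the agent's own contrary L signal:
# a = 2, b = 1. The posterior stays above 0.5, so the agent accepts anyway:
# the hallmark of a cascade, where the private signal no longer matters.
print(posterior(p, q, a=2, b=1))   # ≈ 0.7, still above 0.5
```

Once the lead of accepts over rejects reaches two, no single contrary signal can pull the posterior back below the prior, which is what allows a cascade to lock in.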

Explicit model assumptions

The original model makes several assumptions about human behavior and the world in which humans act, some of which are relaxed in later versions or in alternate definitions of similar problems, such as the diffusion of innovations.

  1. Boundedly Rational Agents: The original information cascade model assumes humans are boundedly rational – that is, they will always make rational decisions based on the information they can observe, but the information they observe may not be complete or correct. In other words, agents do not have complete knowledge of the world around them (which would allow them to make the correct decision in any and all situations). In this way, there is a point at which, even if a person has correct knowledge of the idea or action cascading, they can be convinced via social pressures to adopt some alternate, incorrect view of the world.
  2. Incomplete Knowledge of Others: The original information cascade model assumes that agents have incomplete knowledge of the agents which precede them in the specified order. As opposed to definitions where agents have some knowledge of the "private information" held by previous agents, the current agent makes a decision based only on the observable action (whether or not to imitate) of those preceding him. It is important to note that the original creators argue this is a reason why information cascades can be caused by small shocks.
  3. Behavior of all previous agents is known

Resulting conditions

  1. Cascades will always occur – as discussed, in the simple model, the likelihood of a cascade occurring increases towards 1 as the number of people making decisions increases towards infinity.
  2. Cascades can be incorrect – because agents make decisions with both bounded rationality and probabilistic knowledge of the initial truth (e.g. whether accepting or rejecting is the correct decision), the incorrect behavior may cascade through the system.
  3. Cascades can be based on little information – mathematically, a cascade of an infinite length can occur based only on the decision of two people. More generally, a small set of people who strongly promote an idea as being rational can rapidly influence a much larger subset of the general population
  4. Cascades are fragile – because agents receive no extra information after the difference between a and b increases beyond 2, and because such differences can occur at small numbers of agents, agents considering the opinions of those who are making decisions based on actual information can be dissuaded from a choice rather easily. This suggests that cascades are susceptible to the release of public information. Bikhchandani et al. also discuss this result in the context of the underlying value p changing over time, in which case a cascade can rapidly change course.
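Conditions 1–3 can be illustrated with a small Monte Carlo simulation. This is our own sketch: the 2/3 signal accuracy and the run length are arbitrary illustrative parameters, and a cascade is taken to begin as soon as revealed signals for one action lead by two.

```python
import random

def cascade_outcome(q=2/3, n=30, rng=random):
    """Simulate one decision sequence; report whether a cascade formed
    and, if so, whether it settled on the correct action."""
    diff = 0                   # net count of revealed correct signals
    for _ in range(n):
        if abs(diff) >= 2:     # a two-decision lead: all later agents imitate
            return True, diff > 0
        diff += 1 if rng.random() < q else -1
    return False, None

rng = random.Random(42)
runs = [cascade_outcome(rng=rng) for _ in range(10_000)]
formed = [correct for started, correct in runs if started]
print(f"cascade formed: {len(formed) / len(runs):.1%}")
print(f"cascade wrong:  {1 - sum(formed) / len(formed):.1%}")
```

With q = 2/3 a cascade forms in essentially every run of 30 agents, yet roughly a fifth of those cascades lock in the wrong action (the gambler's-ruin probability of the signal count hitting -2 before +2), illustrating both that cascades almost always occur and that they can be incorrect despite starting from just two decisions.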

Responding

A literature exists that examines how individuals or firms might respond to the existence of informational cascades when they have products to sell but where buyers are unsure of the quality of those products. Curtis Taylor (1999) shows that when selling a house the seller might wish to start with high prices, as failure to sell with low prices is indicative of low quality and might start a cascade on not buying, while failure to sell with high prices could be construed as meaning the house is just over-priced, and prices can then be reduced to get a sale. Daniel Sgroi (2002) shows that firms might use "guinea pigs" who are given the opportunity to buy early to kick-start an informational cascade through their early and public purchasing decisions, and work by David Gill and Daniel Sgroi (2008) shows that early public tests might have a similar effect (and in particular that passing a "tough test" which is biased against the seller can instigate a cascade all by itself). Bose et al. have examined how prices set by a monopolist might evolve in the presence of potential cascade behavior where the monopolist and consumers are unsure of a product's quality.

Examples and fields of application

Information cascades occur in situations where seeing many people make the same choice provides evidence that outweighs one's own judgment. That is, one thinks: "It's more likely that I'm wrong than that all those other people are wrong. Therefore, I will do as they do."

In what has been termed a reputational cascade, late responders sometimes go along with the decisions of early responders, not just because the late responders think the early responders are right, but also because they perceive their reputation will be damaged if they dissent from the early responders.

Market cascades

Information cascades have become one of the topics of behavioral economics, as they are often seen in financial markets where they can feed speculation and create cumulative and excessive price moves, either for the whole market (market bubble) or a specific asset, like a stock that becomes overly popular among investors.

Marketers also use the idea of cascades to attempt to get a buying cascade started for a new product. If they can induce an initial set of people to adopt the new product, then those who make purchasing decisions later on may also adopt the product even if it is no better than, or perhaps even worse than, competing products. This is most effective if these later consumers are able to observe the adoption decisions, but not how satisfied the early customers actually were with the choice. This is consistent with the idea that cascades arise naturally when people can see what others do but not what they know.

An example is Hollywood movies. If test screenings suggest a big-budget movie might be a flop, studios often decide to spend more on initial marketing rather than less, with the aim of making as much money as possible on the opening weekend, before word gets around that it's a turkey.

Information cascades are usually considered by economists:

  • as products of rational expectations at their start,
  • as irrational herd behavior if they persist for too long, which signals that collective emotions also come into play to feed the cascade.

Social network analysis

Dotey et al. state that information flows in the form of cascades on social networks. According to the authors, analysis of the virality of information cascades on a social network may lead to many useful applications, such as determining the most influential individuals within a network. This information can be used for maximizing market effectiveness or influencing public opinion. Various structural and temporal features of a network affect cascade virality. Additionally, these models are widely exploited in the problem of rumor spread in social networks, to investigate it and reduce its influence in online social networks.

In contrast to work on information cascades in social networks, the social influence model of belief spread argues that people have some notion of the private beliefs of those in their network. The social influence model, then, relaxes the assumption of information cascades that people are acting only on observable actions taken by others. In addition, the social influence model focuses on embedding people within a social network, as opposed to a queue. Finally, the social influence model relaxes the assumption of the information cascade model that people will either complete an action or not by allowing for a continuous scale of the "strength" of an agent's belief that an action should be completed.

Historical examples

  • Small protests began in Leipzig, Germany in 1989 with just a handful of activists challenging the German Democratic Republic. For almost a year, protesters met every Monday, growing by a few people each time. By the time the government attempted to address it in September 1989, it was too big to quash. In October, the number of protesters reached 100,000, and by the first Monday in November, over 400,000 people marched through the streets of Leipzig. Two days later the Berlin Wall was dismantled.
  • The adoption rate of drought-resistant hybrid seed corn during the Great Depression and Dust Bowl was slow despite its significant improvement over the previously available seed corn. Researchers at Iowa State University were interested in understanding the public's hesitation to adopt this significantly improved technology. After conducting 259 interviews with farmers, they observed that the slow rate of adoption was due to the farmers valuing the opinions of their friends and neighbors over the word of a salesman.

Empirical studies

In addition to the examples above, information cascades have been shown to exist in several empirical studies. Perhaps the best example is the urn experiment by Anderson described above. Participants stood in a line behind an urn which had balls of different colors. Sequentially, participants would pick a ball out of the urn, look at it, and then place it back into the urn. Each agent then voiced their opinion of which color of ball (red or blue) was in the majority in the urn, for the rest of the participants to hear. Participants received a monetary reward for guessing correctly, giving them an incentive to decide rationally.

Other examples include

  • De Vany and Walls create a statistical model of information cascades where an action is required. They apply this model to the decisions people make about going to see a movie that has come out at the theatre. De Vany and Walls validate their model against movie revenue data, finding a similar Pareto distribution of revenue for different movies.
  • Walden and Browne also adapt the original information-cascade model, turning it into an operational model more practical for real-world studies that allows for analysis based on observed variables. They test their model on data about the adoption of new technologies by businesses, finding support for their hypothesis that information cascades play a role in this adoption.

Legal aspects

The negative effects of informational cascades sometimes become a legal concern, and laws have been enacted to neutralize them. Ward Farnsworth, a law professor, analyzed the legal aspects of informational cascades and gave several examples in his book The Legal Analyst. In many military courts, the officers deciding a case vote in reverse rank order (the lowest-ranked officer votes first); he suggested this is done so that lower-ranked officers are not tempted by a cascade to vote with the more senior officers, who are believed to have more accurate judgement. Another example is that countries such as Israel and France have laws prohibiting the publication of election polls in the days or weeks before an election, to prevent informational cascades from influencing the results.

Globalization

As previously stated, informational cascades are logical processes describing how an individual's decision-making changes in response to outside information. Although long treated as a largely theoretical construct, cascades have gained prominence over the past few decades across a range of fields. For example, they have been used to compare the decision-making of Greek and German organic farmers; that study suggests discrepancies between the two groups rooted in their cultural and socioeconomic differences. Cascades have also been extended to topics such as financial volatility and monetary policy. In 2004, Helmut Wagner and Wolfram Berger proposed cascades as an analytical vehicle for examining changes to financial markets as they became more globalized; they observed that globalization altered the structural framework of financial markets, increasing the volatility of capital flows and creating uncertainty that affected central banks. Information cascades are also useful for understanding the origins of terrorist tactics: after the attack by Black September in 1972, the similarities between its tactics and those of the Baader-Meinhof group (also known as the Red Army Faction, or RAF) were hard to miss. These examples illustrate how cascade models have been put to use. Understanding the framework of cascades is important in an increasingly globalized society, as it establishes a foundation for understanding how information passes through transnational and multinational organizations. In sum, cascades encompass a spectrum of related concepts, and information cascades have been an underlying thread in how information is transferred, overwritten, and understood across cultures spanning a multitude of different countries.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...