
Tuesday, February 22, 2022

Quantum suicide and immortality

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can falsify any interpretation of quantum mechanics other than the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. This concept is sometimes conjectured to be applicable to real-world causes of death as well.

Most experts hold that neither the experiment nor the related idea of immortality would work in the real world. As a thought experiment, quantum suicide is an intellectual exercise in which an abstract setup is followed through to its logical consequences merely to prove a theoretical point. Virtually all physicists and philosophers of science who have described it, especially in popularized treatments, underscore that it relies on contrived, idealized circumstances that may be impossible or exceedingly difficult to realize in real life, and that its theoretical premises are controversial even among supporters of the many-worlds interpretation. Thus, as cosmologist Anthony Aguirre warns, "[...] it would be foolish (and selfish) in the extreme to let this possibility guide one's actions in any life-and-death question."

History

Hugh Everett did not mention quantum suicide or quantum immortality in writing; his work was intended as a solution to the paradoxes of quantum mechanics. Eugene Shikhovtsev's biography of Everett states that "Everett firmly believed that his many-worlds theory guaranteed him immortality: his consciousness, he argued, is bound at each branching to follow whatever path does not lead to death". Peter Byrne, author of a published biography of Everett, reports that Everett also privately discussed quantum suicide (such as to play high-stakes Russian roulette and survive in the winning branch), but adds that "[i]t is unlikely, however, that Everett subscribed to this [quantum immortality] view, as the only sure thing it guarantees is that the majority of your copies will die, hardly a rational goal."

Among scientists, the thought experiment was introduced by Euan Squires in 1986. Afterwards, it was published independently by Hans Moravec in 1987 and Bruno Marchal in 1988; it was also described by Huw Price in 1997, who credited it to Dieter Zeh, and independently presented formally by Max Tegmark in 1998. It was later discussed by philosophers Peter J. Lewis in 2000 and David Lewis in 2001.

Thought experiment

The quantum suicide thought experiment involves a similar apparatus to Schrödinger's cat – a box which kills the occupant in a given time frame with probability one-half due to quantum uncertainty. The only difference is that the experimenter recording the observations is the one inside the box. The significance of this is that someone whose life or death depends on a qubit could possibly distinguish between interpretations of quantum mechanics; by definition, fixed observers cannot.

At the start of the first iteration, under both interpretations, the probability of surviving the experiment is 50%, as given by the squared norm of the wave function. At the start of the second iteration, assuming a single-world interpretation of quantum mechanics (like the widely held Copenhagen interpretation) is true, the wave function has already collapsed; thus, if the experimenter is already dead, there is a 0% chance of survival for any further iterations. However, if the many-worlds interpretation is true, a superposition of the live experimenter necessarily exists (as does the one who dies). Now, barring the possibility of life after death, after every iteration only one of the two experimenter superpositions – the live one – is capable of having any sort of conscious experience. Putting aside the philosophical problems associated with individual identity and its persistence, under the many-worlds interpretation, the experimenter, or at least a version of them, continues to exist through all of their superpositions where the outcome of the experiment is that they live. In other words, a version of the experimenter survives all iterations of the experiment. Since the superpositions where a version of the experimenter lives occur by quantum necessity (under the many-worlds interpretation), it follows that their survival, after any realizable number of iterations, is physically necessary; hence, the notion of quantum immortality.

A version of the experimenter surviving stands in stark contrast to the implications of the Copenhagen interpretation, according to which, although the survival outcome is possible in every iteration, its probability tends towards zero as the number of iterations increases. According to the many-worlds interpretation, the above scenario has the opposite property: the probability of a version of the experimenter living is necessarily one for any number of iterations.
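The arithmetic behind this contrast is simple enough to sketch. The following Python snippet (a toy ledger with hypothetical helper names, not real quantum dynamics) contrasts the two tallies: under a collapse view the survival probability after n trials is (1/2)^n, while under branch counting one live lineage always remains, though its Born weight shrinks by exactly the same factor.

    from fractions import Fraction

    def survival_probability(n):
        """Collapse (single-world) view: surviving n independent 50/50
        trials has probability (1/2)**n, which tends to zero."""
        return Fraction(1, 2) ** n

    def branch_ledger(n):
        """Many-worlds view as a toy ledger: each trial splits the single
        live branch into one live and one dead branch, so one live lineage
        always remains, but its Born weight is still (1/2)**n."""
        live_branches = 1                   # exactly one surviving lineage
        dead_branches = n                   # one new dead branch per trial
        live_weight = Fraction(1, 2) ** n   # quantum measure of the live branch
        return live_branches, dead_branches, live_weight

    for n in (1, 10, 50):
        print(n, float(survival_probability(n)), branch_ledger(n))

The shrinking live_weight in this ledger is the "reduced measure" that Tegmark and Vaidman invoke later in this article.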

In the book Our Mathematical Universe, Max Tegmark lays out three criteria that, in abstract, a quantum suicide experiment must fulfill:

  • The random number generator must be quantum, not deterministic, so that the experimenter enters a state of superposition of being dead and alive.
  • The experimenter must be rendered dead (or at least unconscious) on a time scale shorter than that on which they can become aware of the outcome of the quantum measurement.
  • The experiment must be virtually certain to kill the experimenter, and not merely injure him or her.

Analysis of real-world feasibility

In response to questions about "subjective immortality" from normal causes of death, Tegmark suggested that the flaw in that reasoning is that dying is not a binary event as in the thought experiment; it is a progressive process, with a continuum of states of decreasing consciousness. He states that in most real causes of death, one experiences such a gradual loss of self-awareness. It is only within the confines of an abstract scenario that an observer finds they defy all odds. Referring to the above criteria, he elaborates as follows: "[m]ost accidents and common causes of death clearly don't satisfy all three criteria, suggesting you won't feel immortal after all. In particular, regarding criterion 2, under normal circumstances dying isn't a binary thing where you're either alive or dead [...] What makes the quantum suicide work is that it forces an abrupt transition."

David Lewis' commentary and subsequent criticism

The philosopher David Lewis explored the possibility of quantum immortality in a 2001 lecture titled "How Many Lives Has Schrödinger's Cat?", his first - and last, due to his death less than four months afterwards - academic foray into the field of the interpretation of quantum mechanics. In the lecture, published posthumously in 2004, Lewis rejected the many-worlds interpretation, allowing that it offers initial theoretical attractions, but also arguing that it suffers from irremediable flaws, mainly regarding probabilities, and came to tentatively endorse the Ghirardi-Rimini-Weber theory instead. Lewis concluded the lecture by stating that the quantum suicide thought experiment, if applied to real-world causes of death, would entail what he deemed a "terrifying corollary": as all causes of death are ultimately quantum-mechanical in nature, if the many-worlds interpretation were true, in Lewis' view an observer should subjectively "expect with certainty to go on forever surviving whatever dangers [he or she] may encounter", as there will always be possibilities of survival, no matter how unlikely; faced with branching events of survival and death, an observer should not "equally expect to experience life and death", as there is no such thing as experiencing death, and should thus divide his or her expectations only among branches where he or she survives. If survival is guaranteed, however, this is not the case for good health or integrity. This would lead to a cumulative deterioration that indefinitely stops just short of death.

Interviewed for the 2004 book Schrödinger's Rabbits, Tegmark rejected this scenario for the reason that "the fading of consciousness is a continuous process. Although I cannot experience a world line in which I am altogether absent, I can enter one in which my speed of thought is diminishing, my memories and other faculties fading [...] [Tegmark] is confident that even if he cannot die all at once, he can gently fade away." In the same book, philosopher of science and many-worlds proponent David Wallace undermines the case for real-world quantum immortality on the basis that death can be understood as a continuum of decreasing states of consciousness not only in time, as argued by Tegmark, but also in space: "our consciousness is not located at one unique point in the brain, but is presumably a kind of emergent or holistic property of a sufficiently large group of neurons [...] our consciousness might not be able to go out like a light, but it can dwindle exponentially until it is, for all practical purposes, gone."

Directly responding to Lewis' lecture, British philosopher and many-worlds proponent David Papineau, while finding Lewis' other objections to the many-worlds interpretation lacking, strongly denies that any modification to the usual probability rules is warranted in death situations. Assured subjective survival can follow from the quantum suicide idea only if an agent reasons in terms of "what will be experienced next" instead of the more obvious "what will happen next, whether it will be experienced or not". He writes: "[...] it is by no means obvious why Everettians should modify their intensity rule in this way. For it seems perfectly open for them to apply the unmodified intensity rule in life-or-death situations, just as elsewhere. If they do this, then they can expect all futures in proportion to their intensities, whether or not those futures contain any of their live successors. For example, even when you know you are about to be the subject in a fifty-fifty Schrödinger’s experiment, you should expect a future branch where you perish, to just the same degree as you expect a future branch where you survive."

On a similar note, quoting Lewis' position that death should not be expected as an experience, philosopher of science Charles Sebens concedes that, in a quantum suicide experiment, "[i]t is tempting to think you should expect survival with certainty." However, he remarks that expectation of survival could follow only if the quantum branching and death were absolutely simultaneous, otherwise normal chances of death apply: "[i]f death is indeed immediate on all branches but one, the thought has some plausibility. But if there is any delay it should be rejected. In such a case, there is a short period of time when there are multiple copies of you, each (effectively) causally isolated from the others and able to assign a credence to being the one who will live. Only one will survive. Surely rationality does not compel you to be maximally optimistic in such a scenario." Sebens also explores the possibility that death might not be simultaneous to branching, but still faster than a human can mentally realize the outcome of the experiment. Again, an agent should expect to die with normal probabilities: "[d]o the copies need to last long enough to have thoughts to cause trouble? I think not. If you survive, you can consider what credences you should have assigned during the short period after splitting when you coexisted with the other copies."

Writing in the journal Ratio, philosopher István Aranyosi, while noting that "[the] tension between the idea of states being both actual and probable is taken as the chief weakness of the many-worlds interpretation of quantum mechanics," summarizes that most of the critical commentary of Lewis' immortality argument has revolved around its premises. But even if, for the sake of argument, one were willing to entirely accept Lewis' assumptions, Aranyosi strongly denies that the "terrifying corollary" would be the correct implication of said premises. Instead, the two scenarios that would most likely follow would be what Aranyosi describes as the "comforting corollary", in which an observer should never expect to get very sick in the first place, or the "momentary life" picture, in which an observer should expect "eternal life, spent almost entirely in an unconscious state", punctuated by extremely brief, amnesiac moments of consciousness. Thus, Aranyosi concludes that while "[w]e can't assess whether one or the other [of the two alternative scenarios] gets the lion's share of the total intensity associated with branches compatible with self-awareness, [...] we can be sure that they together (i.e. their disjunction) do indeed get the lion's share, which is much reassuring."

Analysis by other proponents of the many-worlds interpretation

Physicist David Deutsch, though a proponent of the many-worlds interpretation, states regarding quantum suicide that "that way of applying probabilities does not follow directly from quantum theory, as the usual one does. It requires an additional assumption, namely that when making decisions one should ignore the histories in which the decision-maker is absent....[M]y guess is that the assumption is false."

Tegmark now believes experimenters should only expect a normal probability of survival, not immortality. The experimenter's probability amplitude in the wavefunction decreases significantly, meaning they exist with a much lower measure than they had before. Per the anthropic principle, a person is less likely to find themselves in a world where they are less likely to exist, that is, a world with a lower measure has a lower probability of being observed by them. Therefore, the experimenter will have a lower probability of observing the world in which they survive than the earlier world in which they set up the experiment. This same problem of reduced measure was pointed out by Lev Vaidman in the Stanford Encyclopedia of Philosophy. In the 2001 paper, "Probability and the many-worlds interpretation of quantum theory", Vaidman writes that an agent should not agree to undergo a quantum suicide experiment: "The large 'measures' of the worlds with dead successors is a good reason not to play." Vaidman argues that it is the instantaneity of death that may seem to imply subjective survival of the experimenter, but that normal probabilities nevertheless must apply even in this special case: "[i]ndeed, the instantaneity makes it difficult to establish the probability postulate, but after it has been justified in the wide range of other situations it is natural to apply the postulate for all cases."

In his 2013 book The Emergent Multiverse, Wallace opines that the reasons for expecting subjective survival in the thought experiment "do not really withstand close inspection", although he concedes that it would be "probably fair to say [...] that precisely because death is philosophically complicated, my objections fall short of being a knock-down refutation". Besides re-stating that there appears to be no motive to reason in terms of expectations of experience instead of expectations of what will happen, he suggests that a decision-theoretic analysis shows that "an agent who prefers certain life to certain death is rationally compelled to prefer life in high-weight branches and death in low-weight branches to the opposite."

Physicist Sean M. Carroll, another proponent of the many-worlds interpretation, states regarding quantum suicide that neither experiences nor rewards should be thought of as being shared between future versions of oneself, as they become distinct persons when the world splits. He further states that one cannot pick out some future versions of oneself as "really you" over others, and that quantum suicide still cuts off the existence of some of these future selves, which would be worth objecting to just as if there were a single world.

Analysis by skeptics of the many-worlds interpretation

Cosmologist Anthony Aguirre, while personally skeptical of most accounts of the many-worlds interpretation, in his book Cosmological Koans writes that "[p]erhaps reality actually is this bizarre, and we really do subjectively 'survive' any form of death that is both instantaneous and binary." Aguirre notes, however, that most causes of death do not fulfill these two requirements: "If there are degrees of survival, things are quite different." If loss of consciousness were binary, as in the thought experiment, the quantum suicide effect would prevent an observer from subjectively falling asleep or undergoing anesthesia, conditions in which mental activity is greatly diminished but not altogether abolished. Consequently, upon most causes of death, even outwardly sudden ones, if the quantum suicide effect holds true an observer is more likely to progressively slip into an attenuated state of consciousness than to remain fully awake by some very improbable means. Aguirre further states that quantum suicide as a whole might be characterized as a sort of reductio ad absurdum against the current understanding of both the many-worlds interpretation and the theory of mind. He finally hypothesizes that a different understanding of the relationship between the mind and time should remove the bizarre implications of necessary subjective survival.

Physicist and writer Philip Ball, a critic of the many-worlds interpretation, in his book Beyond Weird, describes the quantum suicide experiment as "cognitively unstable" and as an example of the difficulties of the many-worlds theory with probabilities. While he acknowledges Lev Vaidman's argument that an experimenter should subjectively expect outcomes in proportion to the "measure of existence" of the worlds in which they happen, Ball ultimately rejects this explanation. "What this boils down to is the interpretation of probabilities in the MWI. If all outcomes occur with 100% probability, where does that leave the probabilistic character of quantum mechanics?" Furthermore, Ball explains that such arguments highlight what he recognizes as another major problem of the many-worlds interpretation, connected to but independent of the issue of probability: the incompatibility with the notion of selfhood. Ball ascribes most attempts at justifying probabilities in the many-worlds interpretation to "saying that quantum probabilities are just what quantum mechanics look like when consciousness is restricted to only one world" but notes that "there is in fact no meaningful way to explain or justify such a restriction." Before performing a quantum measurement, an "Alice Before" experimenter "can't use quantum mechanics to predict what will happen to her in a way that can be articulated - because there is no logical way to talk about 'her' at any moment except the conscious present (which, in a frantically splitting universe, doesn't exist). Because it is logically impossible to connect the perceptions of Alice Before to Alice After [the experiment], "Alice" has disappeared. [...] [The MWI] eliminates any coherent notion of what we can experience, or have experienced, or are experiencing right now."

Philosopher of science Peter J. Lewis, a critic of the many-worlds interpretation, considers the whole thought experiment an example of the difficulty of accommodating probability within the many-worlds framework: "[s]tandard quantum mechanics yields probabilities for various future occurrences, and these probabilities can be fed into an appropriate decision theory. But if every physically possible consequence of the current state of affairs is certain to occur, on what basis should I decide what to do? For example, if I point a gun at my head and pull the trigger, it looks like Everett’s theory entails that I am certain to survive—and that I am certain to die. This is at least worrying, and perhaps rationally disabling." In his book Quantum Ontology, Lewis explains that for the subjective immortality argument to be drawn out of the many-worlds theory, one has to adopt an understanding of probability - the so-called "branch-counting" approach, in which an observer can meaningfully ask "which post-measurement branch will I end up on?" - that is ruled out by experimental, empirical evidence as it would yield probabilities that do not match with the well-confirmed Born rule. Lewis identifies instead in the Deutsch-Wallace decision-theoretic analysis the most promising (although still, to his judgement, incomplete) way of addressing probabilities in the many-worlds interpretation, in which it is not possible to count branches (and, similarly, the persons that "end up" on each branch). Lewis concludes that "[t]he immortality argument is perhaps best viewed as a dramatic demonstration of the fundamental conflict between branch-counting (or person-counting) intuitions about probability and the decision theoretic approach. The many-worlds theory, to the extent that it is viable, does not entail that you should expect to live forever."

Information cascade

From Wikipedia, the free encyclopedia

An information cascade or informational cascade is a phenomenon described in behavioral economics and network theory in which a number of people make the same decision in a sequential fashion. It is similar to, but distinct from, herd behavior.

An information cascade is generally accepted as a two-step process. First, for a cascade to begin, an individual must encounter a scenario with a decision, typically a binary one. Second, outside factors can influence this decision (typically through observing the actions, and their outcomes, of other individuals in similar scenarios).

The two-step process of an informational cascade can be broken down into five basic components:

  1. There is a decision to be made – for example, whether to adopt a new technology, wear a new style of clothing, eat in a new restaurant, or support a particular political position
  2. A limited action space exists (e.g. an adopt/reject decision)
  3. People make the decision sequentially, and each person can observe the choices made by those who acted earlier
  4. Each person has some private information that helps guide their decision
  5. A person can't directly observe the private information of other people, but can make inferences about this information from what those people do

Social perspectives of cascades, which suggest that agents may act irrationally (e.g., against what they think is optimal) when social pressures are great, exist as complements to the concept of information cascades. More often the problem is that the concept of an information cascade is confused with ideas that do not match the two key conditions of the process, such as social proof, information diffusion, and social influence. Indeed, the term information cascade has even been used to refer to such processes.

Basic model

This section provides some basic examples of information cascades, as originally described by Bikhchandani et al. (1992). The basic model has since been developed in a variety of directions to examine its robustness and better understand its implications.

Qualitative example

Information cascades occur when external information obtained from previous participants in an event overrides one's own private signal, irrespective of the correctness of the former over the latter. The experiment conducted by Anderson is a useful example of this process. The experiment consisted of two urns labeled A and B. Urn A contains two balls labeled "a" and one labeled "b". Urn B contains one ball labeled "a" and two labeled "b". The urn from which a ball must be drawn during each run is determined randomly and with equal probability (by the throw of a die). The contents of the chosen urn are emptied into a neutral container. The participants are then asked in random order to draw a ball from this container. This entire process may be termed a "run", and a number of such runs are performed.

Each time a participant draws a ball, he must decide which urn it belongs to. His decision is then announced for the benefit of the remaining participants in the room. Thus, the (n+1)th participant has information about the decisions made by all the n participants preceding him, as well as his private signal, which is the label on the ball that he draws during his turn. The experimenters observed that an information cascade occurred in 41 of 56 such runs. That is, in the runs where a cascade occurred, at least one participant gave precedence to earlier decisions over his own private signal. It is possible for such an occurrence to produce the wrong result, a phenomenon known as a "reverse cascade".
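The urn setup can be simulated directly. The following is a minimal sketch of the standard Bayesian analysis of this experiment (not Anderson's actual protocol or data): signals match the true urn with probability 2/3, agents break ties by following their own draw, and announcements stop revealing private signals once the public count of inferred signals reaches two in either direction. The printed rates are simulation estimates under these stylized assumptions.

    import random

    Q = 2 / 3  # chance a draw matches the majority color of the true urn

    def run(n_agents=6, rng=random):
        """One run of an Anderson-style urn sequence.
        Returns (cascade_happened, cascade_was_wrong)."""
        urn_a = rng.random() < 0.5          # true urn chosen by a fair coin
        count = 0                           # public net count of inferred 'a' signals
        cascaded = wrong = False
        for _ in range(n_agents):
            signal = 1 if rng.random() < (Q if urn_a else 1 - Q) else -1
            if abs(count) >= 2:             # announcements no longer reveal draws
                guess_a = count > 0
                cascaded = True
                if guess_a != urn_a:
                    wrong = True            # a "reverse cascade" onto the wrong urn
            else:
                net = count + signal
                guess_a = net > 0 or (net == 0 and signal > 0)  # ties: follow own draw
                count += signal             # this announcement reveals the signal
        return cascaded, wrong

    runs = [run() for _ in range(10_000)]
    print("cascade rate:", sum(c for c, _ in runs) / len(runs))
    print("reverse-cascade rate:", sum(w for _, w in runs) / len(runs))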

Quantitative description

A person's signal telling them to accept is denoted as H (a high signal, where high signifies he should accept), and a signal telling them not to accept is L (a low signal). The model assumes that when the correct decision is to accept, individuals will be more likely to see an H, and conversely, when the correct decision is to reject, individuals are more likely to see an L signal. This is essentially a conditional probability: the probability of H when the correct action is to accept, or P(H | accept). Similarly, P(L | reject) is the probability that an agent gets an L signal when the correct action is to reject. If both of these likelihoods are represented by q, then q > 0.5. This is summarized in the table below.

Agent signal | True state: Reject | True state: Accept
L            | q                  | 1-q
H            | 1-q                | q

The first agent determines whether or not to accept solely based on his own signal. As the model assumes that all agents act rationally, the action (accept or reject) the agent feels is more likely is the action he will choose to take. This decision can be explained using Bayes' rule:

If the agent receives an H signal, then the likelihood of accepting is obtained by calculating P(accept | H). By Bayes' rule,

P(accept | H) = P(H | accept) P(accept) / P(H) = pq / (pq + (1-p)(1-q)),

where p is the prior probability that accepting is correct. The equation says that, by virtue of the fact that q > 0.5, the first agent, acting only on his private signal, will always increase his estimate of p with an H signal. Similarly, it can be shown that an agent will always decrease his expectation of p when he receives a low signal. Recalling that, if the value, V, of accepting is equal to the value of rejecting, then an agent will accept if he believes p > 0.5, and reject otherwise. Because this agent started out with the assumption that both accepting and rejecting are equally viable options (p = 0.5), the observation of an H signal will allow him to conclude that accepting is the rational choice.

The second agent then considers both the first agent's decision and his own signal, again in a rational fashion. In general, the nth agent considers the decisions of the previous n-1 agents, and his own signal. He makes a decision based on Bayesian reasoning to determine the most rational choice.

The posterior probability that accepting is correct, given a high signals and b low signals, is

P(accept | a, b) = p q^a (1-q)^b / [p q^a (1-q)^b + (1-p)(1-q)^a q^b],

where a is the number of accepts in the previous set plus the agent's own signal (each treated as an H signal), and b is the number of rejects (each treated as an L signal). Thus, a + b = n. The decision is based on how the value on the right-hand side of the equation compares with p.
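As a minimal sketch of this update rule (the helper name posterior_accept is hypothetical, and q = 0.7 is just an illustrative value), the following snippet reproduces the two facts the text relies on: a single H signal moves a uniform prior up to q, and once accepts outnumber rejects by two, one contrary private signal can no longer pull the posterior below 0.5, which is the point at which a cascade begins.

    def posterior_accept(p, q, a, b):
        """Posterior probability that 'accept' is correct after a high
        signals and b low signals, from the Bayes' rule formula above."""
        num = p * q**a * (1 - q)**b
        den = num + (1 - p) * (1 - q)**a * q**b
        return num / den

    # First agent, uniform prior p = 0.5, sees one H signal:
    print(posterior_accept(0.5, 0.7, 1, 0))   # == q = 0.7, so accept

    # Two public accepts plus one contrary private L signal (a=2, b=1):
    # the posterior is still above 0.5, so the agent accepts regardless
    # of his own draw - the cascade has begun.
    print(posterior_accept(0.5, 0.7, 2, 1))   # 0.7 > 0.5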

Explicit model assumptions

The original model makes several assumptions about human behavior and the world in which humans act, some of which are relaxed in later versions or in alternate definitions of similar problems, such as the diffusion of innovations.

  1. Boundedly Rational Agents: The original Independent Cascade model assumes humans are boundedly rational – that is, they will always make rational decisions based on the information they can observe, but the information they observe may not be complete or correct. In other words, agents do not have complete knowledge of the world around them (which would allow them to make the correct decision in any and all situations). In this way, there is a point at which, even if a person has correct knowledge of the idea or action cascading, they can be convinced via social pressures to adopt some alternate, incorrect view of the world.
  2. Incomplete Knowledge of Others: The original information cascade model assumes that agents have incomplete knowledge of the agents which precede them in the specified order. As opposed to definitions where agents have some knowledge of the "private information" held by previous agents, the current agent makes a decision based only on the observable action (whether or not to imitate) of those preceding him. It is important to note that the original creators argue this is a reason why information cascades can be caused by small shocks.
  3. Behavior of all previous agents is known

Resulting conditions

  1. Cascades will always occur – as discussed, in the simple model, the likelihood of a cascade occurring increases towards 1 as the number of people making decisions increases towards infinity.
  2. Cascades can be incorrect – because agents make decisions with both bounded rationality and probabilistic knowledge of the initial truth (e.g. whether accepting or rejecting is the correct decision), the incorrect behavior may cascade through the system.
  3. Cascades can be based on little information – mathematically, a cascade of an infinite length can occur based only on the decision of two people. More generally, a small set of people who strongly promote an idea as being rational can rapidly influence a much larger subset of the general population
  4. Cascades are fragile – because agents receive no extra information after the difference between a and b increases beyond 2, and because such differences can occur at small numbers of agents, agents considering opinions from those agents who are making decisions based on actual information can be dissuaded from a choice rather easily. This suggests that cascades are susceptible to the release of public information. Bikhchandani et al. also discuss this result in the context of the underlying value p changing over time, in which case a cascade can rapidly change course.

Responding

A literature exists that examines how individuals or firms might respond to the existence of informational cascades when they have products to sell but where buyers are unsure of the quality of those products. Curtis Taylor (1999) shows that when selling a house the seller might wish to start with high prices, as failure to sell at low prices is indicative of low quality and might start a cascade of not buying, while failure to sell at high prices could be construed as meaning the house is merely over-priced, and prices can then be reduced to get a sale. Daniel Sgroi (2002) shows that firms might use "guinea pigs" who are given the opportunity to buy early to kick-start an informational cascade through their early and public purchasing decisions, and work by David Gill and Daniel Sgroi (2008) shows that early public tests might have a similar effect (and in particular that passing a "tough test" which is biased against the seller can instigate a cascade all by itself). Bose et al. have examined how prices set by a monopolist might evolve in the presence of potential cascade behavior where the monopolist and consumers are unsure of a product's quality.

Examples and fields of application

Information cascades occur in situations where seeing many people make the same choice provides evidence that outweighs one's own judgment. That is, one thinks: "It's more likely that I'm wrong than that all those other people are wrong. Therefore, I will do as they do."

In what has been termed a reputational cascade, late responders sometimes go along with the decisions of early responders, not just because the late responders think the early responders are right, but also because they perceive their reputation will be damaged if they dissent from the early responders.

Market cascades

Information cascades have become one of the topics of behavioral economics, as they are often seen in financial markets where they can feed speculation and create cumulative and excessive price moves, either for the whole market (market bubble) or a specific asset, like a stock that becomes overly popular among investors.

Marketers also use the idea of cascades to attempt to get a buying cascade started for a new product. If they can induce an initial set of people to adopt the new product, then those who make purchasing decisions later on may also adopt the product even if it is no better than, or perhaps even worse than, competing products. This is most effective if these later consumers are able to observe the adoption decisions, but not how satisfied the early customers actually were with the choice. This is consistent with the idea that cascades arise naturally when people can see what others do but not what they know.

An example is Hollywood movies. If test screenings suggest a big-budget movie might be a flop, studios often decide to spend more on initial marketing rather than less, with the aim of making as much money as possible on the opening weekend, before word gets around that it's a turkey.

Information cascades are usually considered by economists:

  • as products of rational expectations at their start,
  • as irrational herd behavior if they persist for too long, which signals that collective emotions also come into play to feed the cascade.

Social network analysis

Dotey et al. state that information flows in the form of cascades on social networks. According to the authors, analysis of the virality of information cascades on a social network may lead to many useful applications, such as determining the most influential individuals within a network. This information can be used for maximizing market effectiveness or influencing public opinion. Various structural and temporal features of a network affect cascade virality. Additionally, these models are widely exploited in the problem of rumor spread in social networks, to investigate it and to reduce its influence in online social networks.

In contrast to work on information cascades in social networks, the social influence model of belief spread argues that people have some notion of the private beliefs of those in their network. The social influence model, then, relaxes the assumption of information cascades that people are acting only on observable actions taken by others. In addition, the social influence model focuses on embedding people within a social network, as opposed to a queue. Finally, the social influence model relaxes the assumption of the information cascade model that people will either complete an action or not by allowing for a continuous scale of the "strength" of an agent's belief that an action should be completed.

Historical examples

  • Small protests began in Leipzig, Germany in 1989 with just a handful of activists challenging the German Democratic Republic. For almost a year, protesters met every Monday, growing by a few people each time. By the time the government attempted to address it in September 1989, it was too big to quash. In October, the number of protesters reached 100,000, and by the first Monday in November, over 400,000 people marched in the streets of Leipzig. Two days later the Berlin Wall was dismantled.
  • The adoption rate of drought-resistant hybrid seed corn during the Great Depression and Dust Bowl was slow despite its significant improvement over the previously available seed corn. Researchers at Iowa State University were interested in understanding the public's hesitation to adopt this significantly improved technology. After conducting 259 interviews with farmers, they observed that the slow rate of adoption was due to how much the farmers valued the opinions of their friends and neighbors over the word of a salesman.

Empirical Studies

In addition to the examples above, information cascades have been shown to exist in several empirical studies. Perhaps the best example is the urn experiment described above. Participants stood in a line behind an urn which had balls of different colors. Sequentially, participants would pick a ball out of the urn, look at it, and then place it back into the urn. Each agent then voiced their opinion of which color of balls (red or blue) formed the majority in the urn for the rest of the participants to hear. Participants received a monetary reward if they guessed correctly, giving them an incentive to act rationally.

Other examples include

  • De Vany and Walls create a statistical model of information cascades where an action is required. They apply this model to the actions people take to go see a movie that has come out at the theatre. De Vany and Walls validate their model on this data, finding a similar Pareto distribution of revenue for different movies.
  • Walden and Browne also adopt the original Information Cascade model, here into an operational model more practical for real world studies, which allows for analysis based on observed variables. Walden and Browne test their model on data about adoption of new technologies by businesses, finding support for their hypothesis that information cascades play a role in this adoption.

Legal aspects

The negative effects of informational cascades sometimes become a legal concern, and laws have been enacted to neutralize them. Ward Farnsworth, a law professor, analyzed the legal aspects of informational cascades and gave several examples in his book The Legal Analyst: in many military courts, the officers voting to decide a case vote in reverse rank order (the officer of the lowest rank votes first), and he suggested this may be done so that lower-ranked officers are not tempted by the cascade to vote with the more senior officers, who are believed to have more accurate judgement. Another example is that countries such as Israel and France have laws prohibiting the publication of election polls in the days or weeks before elections, to prevent informational cascades from influencing the election results.

Globalization

As previously stated, informational cascades are logical processes describing how an individual's decision process changes based upon outside information. Cascades have never been a household term, but over the past few decades they have grown in popularity across various fields of study. For example, they have been useful in comparing thought processes between Greek and German organic farmers: the study in question suggests discrepancies between Greek and German thought processes based upon their cultural and socioeconomic differences. Cascades have also been extrapolated to ideas such as financial volatility and monetary policy. In 2004, Helmut Wagner and Wolfram Berger suggested cascades as an analytical vehicle to examine changes to the financial market as it became more globalized. Wagner and Berger noticed structural changes to the framework of understanding financial markets due to globalization, giving rise to volatility in capital flow and spawning uncertainty which affected central banks. Additionally, information cascades are useful in understanding the origins of terrorist tactics: when the attack by Black September occurred in 1972, it was hard not to see the similarities between its tactics and those of the Baader-Meinhof group (also known as the Red Army Faction [RAF]). All of these examples illustrate how the process of cascades has been put to use. Moreover, understanding the framework of cascades is important for moving forward in a more globalized society, and establishing a foundation for understanding the passage of information through transnational and multinational organizations is critical to the emerging modern society. In sum, cascades, as a general term, encompass a spectrum of different concepts, and information cascades have been an underlying thread in how information is transferred, overwritten, and understood across cultures spanning a multitude of countries.

Late Cenozoic Ice Age

From Wikipedia, the free encyclopedia

The Late Cenozoic Ice Age, or Antarctic Glaciation, began 33.9 million years ago at the Eocene-Oligocene Boundary and is ongoing. It is Earth's current ice age, or icehouse period. Its beginning is marked by the formation of the Antarctic ice sheets. The Late Cenozoic Ice Age gets its name because it covers roughly the last half of the Cenozoic Era so far.

Six million years after the start of the Late Cenozoic Ice Age, the East Antarctic Ice Sheet had formed, and by 14 million years ago it had reached its current extent. It has persisted to the current time.

In the last three million years, glaciations have spread to the Northern Hemisphere. This commenced with Greenland becoming increasingly covered by an ice sheet in the late Pliocene (2.9-2.58 Ma ago). During the Pleistocene Epoch (starting 2.58 Ma ago), the Quaternary glaciation developed, with decreasing mean temperatures and increasing amplitudes between glacials and interglacials. During the glacial periods of the Pleistocene, large areas of northern North America and northern Eurasia were covered by ice sheets.


History of discovery and naming

German naturalist Karl Friedrich Schimper coined the term Eiszeit, meaning ice age, in 1837. For a long time, the term referred only to glacial periods. Over time, this developed into the concept that they were all part of a much longer ice age.

The concept that the earth is currently in an ice age that began around 30 million years ago can be dated back to at least 1966.

As a geologic time period, the Late Cenozoic Ice Age was used at least as early as 1973.

The climate before the polar ice caps

This type of vegetation grew in Antarctica during the Eocene Epoch - Photo taken at Palm Canyon, California, US in 2005

The last greenhouse period began 260 million years ago during the late Permian Period at the end of the Karoo Ice Age. It lasted all through the time of the non-avian dinosaurs during the Mesozoic Era, and ended 33.9 million years ago in the middle of the Cenozoic Era (the current Era). This greenhouse period lasted 226.1 million years.

The hottest part of the last greenhouse earth was the Late Paleocene - Early Eocene. This was a hothouse period that lasted from 65 to 55 million years ago. The hottest part of this torrid age was the Paleocene-Eocene Thermal Maximum, 55.5 million years ago. Average global temperatures were around 30 °C (86 °F). This was only the second time that Earth reached this level of warmth since the Precambrian. The other time was during the Cambrian Period, which ran from 541 million years ago to 485.4 million years ago.

During the early Eocene, Australia and South America were connected to Antarctica.

53 million years ago during the Eocene Epoch, summer high temperatures in Antarctica were around 25 °C (77 °F). Temperatures during winter were around 10 °C (50 °F). It did not frost during the winter. The climate was so warm that trees grew in Antarctica. Arecaceae (palm trees) grew on the coastal lowlands, and Fagus (beech trees) and Pinophyta (conifers) grew on the hills just inland from the coast.

As the global climate became cooler, the planet saw a decrease in forests and an increase in savannas, and animals were evolving to have larger body sizes.

Glaciation of the southern hemisphere

Antarctica from space on 21 September 2005

Australia drifted away from Antarctica forming the Tasmanian Passage, and South America drifted away from Antarctica forming the Drake Passage. This caused the formation of the Antarctic Circumpolar Current, a current of cold water surrounding Antarctica. This current still exists today, and is a major reason for why Antarctica has such an exceptionally cold climate.

The Eocene-Oligocene Boundary 33.9 million years ago marked the transition from the last greenhouse period to the present icehouse climate. At this point CO2 levels had dropped to 750 ppm. This was the beginning of the Late Cenozoic Ice Age. The defining point was when the ice sheets reached the ocean.

33 million years ago was the evolution of the thylacinid marsupial (Badjcinus).

The first balanids, cats, eucalypts, and pigs came about 30 million years ago. The brontothere and embrithopod mammals went extinct at this time.

At 29.2 million years ago, there were three ice caps in the high elevations of Antarctica. One ice cap formed in the Dronning Maud Land. Another ice cap formed in the Gamburtsev Mountain Range. Another ice cap formed in the Transantarctic Mountains. At this point, the ice caps weren't very big yet. Most of Antarctica wasn't covered by ice.

By 28.7 million years ago, the Gamburtsev ice cap was now much larger due to the colder climate.

CO2 continued to fall and the climate continued to get colder. At 28.1 million years ago, the Gamburtsev and Transantarctic ice caps merged into a main central ice cap. At this point, ice was now covering a majority of the continent.

28 million years ago was the time period in which the largest land mammal existed, the Paraceratherium.

The Dronning Maud ice cap merged with the main ice cap 27.9 million years ago. This was the formation of the East Antarctic Ice Sheet.

25 million years ago brought about the first deer. It also was the time period in which the largest flying bird existed, the Pelagornis sandersi.

Global refrigeration set in 22 million years ago.

20 million years ago brought about the first bears, giraffes, giant anteaters, and hyenas. There was also an increase in the diversity of birds.

The first bovids, kangaroos, and mastodons came about 15 million years ago. This was the warmest part of the Late Cenozoic Ice Age, with average global temperatures around 18.4 °C (65.1 °F). Atmospheric CO2 levels were around 700 ppm. This time period was called the Mid-Miocene Climatic Optimum (MMCO).

By 14 million years ago, the Antarctic ice sheets were similar in size and volume to present times. Glaciers were starting to form in the mountains of the Northern Hemisphere.

The Great American Interchange began 9.5 million years ago (with the highest rate of species crossing occurring around 2.7 million years ago). This was the migration of different land and freshwater animals between North and South America. During this time, armadillos, glyptodonts, ground sloths, hummingbirds, meridiungulates, opossums, and phorusrhacids migrated from South America to North America. Also, bears, deer, coatis, ferrets, horses, jaguars, otters, saber-toothed cats, skunks, and tapirs migrated from North America to South America.

Around 7 million years ago, the first potential hominin, Sahelanthropus is estimated to have lived.

The australopithecines first appear in the fossil record around 4 million years ago, and diversified vastly over the next 2 million years. The Mediterranean Sea was dry between 6 and 5 million years ago.

Five million years ago brought about the first hippopotami and tree sloths. Elephants, zebras, and other grazing herbivores became more diverse. Lions, members of the genus Canis, and other large carnivores became more diverse. Burrowing rodents, birds, kangaroos, small carnivores, and vultures increased in size. There was a decrease in the number of perissodactyl mammals, and the nimravid carnivores went extinct.

The first mammoths came about 4.8 million years ago.

The evolution of Australopithecus occurred four million years ago. This was also the time of the largest freshwater turtle, Stupendemys. The first modern elephants, gazelles, giraffes, lions, rhinoceroses, and zebras came about at this time.

Between 3.6 and 3.4 million years ago, there was a sudden but brief warming period.

Glaciation of the northern hemisphere

Arctic sea ice from space on 6 March 2010

The glaciation of the Arctic in the Northern Hemisphere commenced with Greenland becoming increasingly covered by an ice sheet in late Pliocene (2.9-2.58 Ma ago).

The evolution of the Paranthropus occurred 2.7 million years ago.

2.58 million years ago was the official start of the Quaternary glaciation, the current phase of the Late Cenozoic Ice Age. Throughout the Pleistocene, there have been glacial periods (cold periods with extended glaciation) and interglacial periods (warm periods with less glaciation).

The terms stadial and interstadial refer, respectively, to shorter cold and warm phases within a glacial period; they are not exact synonyms for glacial and interglacial periods.

The oscillation between glacial and interglacial periods is due to the Milankovitch cycles. These are cycles that have to do with Earth's axial tilt and orbital eccentricity.

Earth is currently tilted at 23.5 degrees. Over a 41,000 year cycle, the tilt oscillates between 22.1 and 24.5 degrees. When the tilt is greater (high obliquity), the seasons are more extreme. During times when the tilt is less (low obliquity), the seasons are less extreme. Less tilt also means that the polar regions receive less light from the sun. This causes a colder global climate as ice sheets start to build up.

The shape of Earth's orbit around the sun also affects the Earth's climate. Over a 100,000-year cycle, Earth oscillates between having a nearly circular orbit and having a more elliptical orbit.

From 2.58 million years ago to about 1.73 million ± 50,000 years ago, the degree of axial tilt was the main cause of glacial and interglacial periods.
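The periods and ranges quoted above can be turned into a toy illustration. The following sketch simply plugs them into sinusoids; the eccentricity bounds (roughly 0.005 to 0.058) and the phases are illustrative assumptions not given in the text, and real orbital solutions are not simple sine waves.

    import math

    def obliquity_deg(years_from_now):
        """Toy sinusoid: axial tilt oscillating between 22.1 and 24.5
        degrees over a 41,000-year period (phase chosen arbitrarily)."""
        mid, amp = (24.5 + 22.1) / 2, (24.5 - 22.1) / 2
        return mid + amp * math.sin(2 * math.pi * years_from_now / 41_000)

    def eccentricity(years_from_now):
        """Toy sinusoid: orbit varying between nearly circular and more
        elliptical over a 100,000-year period (assumed illustrative bounds)."""
        lo, hi = 0.005, 0.058
        return (lo + hi) / 2 + (hi - lo) / 2 * math.cos(2 * math.pi * years_from_now / 100_000)

    for t in range(0, 200_001, 50_000):
        print(f"{t:>7} yr: tilt {obliquity_deg(t):.2f} deg, e {eccentricity(t):.3f}")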

2.5 million years ago brought about the evolution of the earliest Smilodon species.

Homo habilis appeared about two million years ago. This is the first named species of the genus Homo, although this classification is increasingly controversial. Conifer trees became more diverse in the high latitudes. The ancestor of cattle, Bos primigenius (the aurochs), evolved in India.

Australopithecines are estimated to have become extinct around 1.7 million years ago.

The evolution of Homo antecessor occurred 1.2 million years ago. Paranthropus also became extinct.

Around 850,000 ± 50,000 years ago, the degree of orbital eccentricity became the main driver of glacial and interglacial periods rather than the degree of tilt, and this pattern continues to present-day.

800,000 years ago, the short-faced bear (Arctodus simus) became abundant in North America.

The evolution of Homo heidelbergensis happened 600,000 years ago.

The evolution of Neanderthals occurred 350,000 years ago.

300,000 years ago, Gigantopithecus went extinct.

The first anatomically modern humans appeared in Africa 250,000 years ago.

Last Glacial Period

Neanderthals during the last glacial period.
 
Map of the Northern Hemisphere ice during the last glacial maximum.

The last glacial period began 115,000 years ago and ended 11,700 years ago. This time period saw the great advancement of polar ice sheets into the middle latitudes of the Northern Hemisphere.

The Toba eruption 75,000 years ago in present-day Sumatra, Indonesia has been linked to a bottleneck in human DNA. The six to ten years of cold weather during the volcanic winter destroyed many food sources and greatly reduced the human population.

50,000 years ago, Homo sapiens migrated out of Africa. They began replacing other hominins in Asia. They also began replacing Neanderthals in Europe. However, some Homo sapiens and Neanderthals interbred. Currently, persons of European descent are two to four percent Neanderthal. With the exception of this small amount of Neanderthal DNA that persists today, Neanderthals became extinct 30,000 years ago.

The last glacial maximum ran from 26,500 years ago to 20,000 years ago. Although different ice sheets reached maximum extent at somewhat different times, this was the time when ice sheets overall were at maximum extent.

According to Blue Marble 3000 (a video by the Zurich University of Applied Sciences), the average global temperature around 19,000 BCE (about 21,000 years ago) was 9.0 °C (48.2 °F). This is about 4.8 °C (8.6 °F) colder than the 1850-1929 average, and 6.0 °C (10.8 °F) colder than the 2011-2020 average.

The figures given by the Intergovernmental Panel on Climate Change (IPCC) estimate a slightly lower global temperature than the figures given by the Zurich University of Applied Sciences. However, these figures are not exact and are more open to interpretation. According to the IPCC, average global temperatures increased by 5.5 ± 1.5 °C (9.9 ± 2.7 °F) since the last glacial maximum, and the rate of warming was about 10 times slower than that of the 20th century. It appears that they are defining the present as the early period of instrumental records, when temperatures were less affected by human activity, but they do not specify exact years or give a temperature for the present.

Berkeley Earth publishes a list of average global temperatures by year. It shows that temperatures were stable from the beginning of records in 1850 until 1929. The average temperature during these years was 13.8 °C (56.8 °F). Subtracting 5.5 ± 1.5 °C (9.9 ± 2.7 °F) from the 1850-1929 average gives an average temperature for the last glacial maximum of 8.3 ± 1.5 °C (46.9 ± 2.7 °F). This is about 6.7 ± 1.5 °C (12.0 ± 2.7 °F) colder than the 2011-2020 average. This figure is open to interpretation because the IPCC does not specify 1850-1929 as being the present, or give any exact set of years as being the present. It also does not state whether or not it agrees with the figures given by Berkeley Earth.
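The arithmetic in the last two paragraphs is easy to check directly. A short sketch using the figures quoted above (the 15.0 °C value for the 2011-2020 average is implied by the stated 6.7 °C gap rather than given directly in the text):

    baseline_1850_1929 = 13.8            # deg C, Berkeley Earth average cited above
    ipcc_warming, spread = 5.5, 1.5      # deg C of warming since the LGM, per IPCC

    lgm = baseline_1850_1929 - ipcc_warming
    print(f"LGM estimate: {lgm:.1f} +/- {spread:.1f} C")        # 8.3 +/- 1.5 C

    avg_2011_2020 = 15.0                 # deg C, implied by the 6.7 C gap cited
    print(f"Gap to 2011-2020: {avg_2011_2020 - lgm:.1f} +/- {spread:.1f} C")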

According to the United States Geological Survey (USGS), permanent summer ice covered about 8% of Earth's surface and 25% of the land area during the last glacial maximum. The USGS also states that sea level was about 125 m (410 ft) lower than in present times (2012). The volume of ice on Earth was around 17,000,000 mi3 (71,000,000 km3), which is about 2.1 times Earth's current volume of ice.

The extinction of the woolly rhinoceros (Coelodonta antiquitatis) occurred 15,000 years ago, after the last glacial maximum.

Current Interglacial Period

Agriculture and the rise of civilization came about during the current interglacial period.

The Earth is currently in an interglacial period which began 11,700 years ago. This is traditionally referred to as the Holocene epoch and is currently (as of 2018) recognized as such by the International Commission on Stratigraphy. However, there is debate as to whether it is actually a separate epoch or merely an interglacial period within the Pleistocene epoch. This period can also be referred to as the Flandrian interglacial or Flandrian stage.

According to Blue Marble 3000, the average global temperature at the beginning of the current interglacial period was around 12.9 °C (55.2 °F).

Agriculture began 11,500 years ago.

Equidae, giant ground sloths, and short-faced bears became extinct 11,000 years ago.

The Smilodon became extinct 10,000 years ago, as well as the mainland species of the woolly mammoth.

Between 9,000 and 5,000 years ago was the Holocene Climatic Optimum. According to Blue Marble 3000, in 6,000 BCE the average global temperature was 14.2 °C (57.6 °F). This was warmer than when instrumental records began in the 19th century, but cooler than the present. The giant lemur went extinct around this time.

Around 3,200 BCE (5,200 years ago) the first writing system was invented. It was the cuneiform script used in Mesopotamia (present day Iraq).

The last mammoths at Wrangel Island off the coast of Siberia went extinct around 3,700 years ago.

Being in an interglacial, there is less ice than there was during the last glacial period. However, the last glacial period was just one part of the ice age that continues today. Even though Earth is in an interglacial, there is still more ice than in times outside of ice ages. There are also currently ice sheets in the Northern Hemisphere, which means that there is more ice on Earth than there was during the first 31 million years of the Late Cenozoic Ice Age, when only the Antarctic ice sheets existed. Currently (as of 2012), about 3.1% of Earth's surface and 10.7% of the land area is covered in year-round ice, according to the USGS. The total volume of ice presently on Earth is about 33,000,000 km3 (8,000,000 mi3) (as of 2004). The current sea level (as of 2009) is 70 m (230 ft) lower than it would be without the ice sheets of Antarctica and Greenland.

Based on the Milankovitch cycles, the current interglacial period is predicted to be unusually long, continuing for another 25,000 to 50,000 years beyond the present. There are also high concentrations of greenhouse gases in the atmosphere from human activity, and these concentrations are almost certain to rise in the coming decades, leading to higher temperatures. In 25,000 to 50,000 years, the climate will begin to cool due to the Milankovitch cycles. However, the high levels of greenhouse gases are predicted to keep the climate from getting cold enough to build up enough ice to meet the criteria of a glacial period. This would effectively extend the current interglacial period by an additional 100,000 years, placing the next glacial period 125,000 to 150,000 years in the future.

Thermal energy storage

From Wikipedia, the free encyclopedia
 
District heating accumulation tower from Theiss near Krems an der Donau in Lower Austria with a thermal capacity of 2 GWh
 
Thermal energy storage tower inaugurated in 2017 in Bozen-Bolzano, South Tyrol, Italy.
 
Construction of the salt tanks at the 280 MW Solana Generating Station, which provide thermal energy storage so that output can be provided after the sun goes down and scheduled to meet demand. The plant is designed to provide six hours of energy storage, which allows it to generate about 38 percent of its rated capacity over the course of a year.

Thermal energy storage (TES) is achieved with widely different technologies. Depending on the specific technology, it allows excess thermal energy to be stored and used hours, days, or months later, at scales ranging from the individual process to buildings, multiuser buildings, districts, towns, or regions. Usage examples are the balancing of energy demand between daytime and nighttime, storing summer heat for winter heating, or winter cold for summer air conditioning (seasonal thermal energy storage). Storage media include water or ice-slush tanks; masses of native earth or bedrock accessed with heat exchangers by means of boreholes; deep aquifers contained between impermeable strata; shallow, lined pits filled with gravel and water and insulated at the top; as well as eutectic solutions and phase-change materials.

Other sources of thermal energy for storage include heat or cold produced with heat pumps from off-peak, lower-cost electric power, a practice called peak shaving; heat from combined heat and power (CHP) plants; heat produced by renewable electrical energy that exceeds grid demand; and waste heat from industrial processes. Heat storage, both seasonal and short-term, is considered an important means for cheaply balancing high shares of variable renewable electricity production and integrating the electricity and heating sectors in energy systems almost or completely fed by renewable energy.

Categories

The different kinds of thermal energy storage can be divided into three separate categories: sensible heat, latent heat, and thermo-chemical heat storage. Each of these has different advantages and disadvantages that determine their applications.

Sensible heat storage

Sensible heat storage (SHS) is the most straightforward method. It simply means the temperature of some medium is either increased or decreased. This type of storage is the most commercially available out of the three, as the others are still being researched and developed.

The materials are generally inexpensive and safe. One of the cheapest, most commonly used options is a water tank, but materials such as molten salts or metals can be heated to higher temperatures and therefore offer a higher storage capacity. Energy can also be stored underground (UTES), either in an underground tank or in some kind of heat-transfer fluid (HTF) flowing through a system of pipes, either placed vertically in U-shapes (boreholes) or horizontally in trenches. Yet another system is known as a packed-bed (or pebble-bed) storage unit, in which some fluid, usually air, flows through a bed of loosely packed material (usually rock, pebbles or ceramic brick) to add or extract heat.

A disadvantage of SHS is its dependence on the properties of the storage medium. Storage capacity is limited by the medium's specific heat, and the system must be properly designed to ensure energy extraction at a constant temperature.
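
The energy held in sensible heat storage follows directly from the heat equation Q = m · c · ΔT. A minimal sketch, assuming an illustrative 1 m3 water tank heated through 50 K (both values are assumptions, not figures from the text):

    # Sensible heat stored: Q = m * c * dT (illustrative assumed values).
    c_water = 4186                 # specific heat of water, J/(kg*K)
    volume_m3 = 1.0                # assumed: a 1 m3 hot-water tank
    mass_kg = 1000 * volume_m3     # water density ~1000 kg/m3
    dT = 50                        # assumed: heated from 30 C to 80 C
    Q = mass_kg * c_water * dT
    print(Q / 3.6e6, "kWh")        # ~58 kWh of stored heat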

Molten-salt technology

The sensible heat of molten salt is also used for storing solar energy at a high temperature. It is termed molten-salt technology or molten-salt energy storage (MSES). Molten salts can be employed as a thermal energy storage method to retain thermal energy. Presently, this is a commercially used technology to store the heat collected by concentrated solar power (e.g., from a solar tower or solar trough). The heat can later be converted into superheated steam to power conventional steam turbines and generate electricity in bad weather or at night. It was demonstrated in the Solar Two project from 1995 to 1999. Estimates in 2006 predicted an annual efficiency of 99%, referring to the energy retained by storing heat before turning it into electricity, as opposed to converting heat directly into electricity. Various eutectic mixtures of different salts are used (e.g., sodium nitrate, potassium nitrate and calcium nitrate). Experience with such systems exists in non-solar applications in the chemical and metals industries, which use molten salt as a heat-transport fluid.

The salt melts at 131 °C (268 °F). It is kept liquid at 288 °C (550 °F) in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector, where the focused sun heats it to 566 °C (1,051 °F). It is then sent to a hot storage tank. With proper insulation, the thermal energy can be usefully stored for up to a week. When electricity is needed, the hot molten salt is pumped to a conventional steam generator to produce superheated steam for driving a conventional turbine/generator set, as used in any coal, oil, or nuclear power plant. A 100-megawatt turbine would need a tank about 9.1 metres (30 ft) tall and 24 metres (79 ft) in diameter to drive it for four hours by this design.
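
The quoted tank size can be roughly checked against the four-hour, 100 MW requirement. A sketch under assumed values: the salt properties and the ~40% steam-cycle efficiency are typical figures, not from the text.

    # Rough check of the quoted tank sizing (salt properties and the ~40%
    # steam-cycle efficiency are assumptions, not figures from the text).
    import math
    height, diameter = 9.1, 24.0                      # metres, from the text
    volume = math.pi * (diameter / 2) ** 2 * height   # ~4,100 m3
    rho, cp = 1800, 1500                  # kg/m3, J/(kg*K): typical nitrate salt
    dT = 566 - 288                        # hot minus cold tank temperatures, K
    E_thermal = volume * rho * cp * dT    # ~3e12 J of stored heat
    E_electric_MWh = E_thermal * 0.40 / 3.6e9   # assumed 40% conversion efficiency
    print(E_electric_MWh / 100, "hours at 100 MW")   # roughly 3-4 hours, as quoted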

A single tank with a divider plate to hold both cold and hot molten salt is under development. It is more economical, achieving 100% more heat storage per unit volume than the dual-tank system, as the molten-salt storage tank is costly due to its complicated construction. Phase-change materials (PCMs) are also used in molten-salt energy storage, and research on obtaining shape-stabilized PCMs using high-porosity matrices is ongoing.

Most solar thermal power plants use this thermal energy storage concept. The Solana Generating Station in the U.S. can store six hours' worth of generating capacity in molten salt. During the summer of 2013, the Gemasolar Thermosolar solar power-tower/molten-salt plant in Spain achieved a first by continuously producing electricity 24 hours per day for 36 days. The Cerro Dominador Solar Thermal Plant, inaugurated in June 2021, has 17.5 hours of heat storage.

Heat storage in tanks or rock caverns

A steam accumulator consists of an insulated steel pressure tank containing hot water and steam under pressure. As a heat storage device, it is used to decouple heat production by a variable or steady source from variable heat demand. Steam accumulators may take on significance for energy storage in solar thermal energy projects.

Large stores are widely used in Nordic countries to store heat for several days, to decouple heat and power production and to help meet peak demands. Interseasonal storage in caverns has been investigated, appears to be economical, and plays a significant role in heating in Finland. Helen Oy estimates an 11.6 GWh capacity and 120 MW thermal output for its 260,000 m3 water cistern under Mustikkamaa (fully charged or discharged in 4 days at capacity), operating from 2021 to offset days of peak production/demand; the 300,000 m3 rock caverns 50 m under sea level in Kruunuvuorenranta (near Laajasalo) were designated in 2018 to store heat in summer from warm seawater and release it in winter for district heating.
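
As a rough check, the Mustikkamaa figures imply a plausible temperature swing for a hot-water store (a sketch assuming plain water and no losses):

    # Implied temperature swing of the 11.6 GWh / 260,000 m3 water store.
    E = 11.6e9 * 3600              # 11.6 GWh in joules
    mass_kg = 260_000 * 1000       # water at ~1000 kg/m3
    c_water = 4186                 # J/(kg*K)
    print(E / (mass_kg * c_water)) # ~38 K, e.g. cycling between ~40 C and ~80 C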

Hot silicon technology

Solid or molten silicon offers much higher storage temperatures than salts, with consequently greater capacity and efficiency. It is being researched as a possibly more energy-efficient storage technology. Silicon is able to store more than 1 MWh of energy per cubic metre at 1,400 °C. An additional advantage is the relative abundance of silicon compared to the salts used for the same purpose.

Molten silicon thermal energy storage is currently being developed by the Australian company 1414 Degrees as a more energy efficient storage technology, with a combined heat and power (cogeneration) output.

Molten aluminum

Another medium that can store thermal energy is molten (recycled) aluminum. This technology was developed by the Swedish company Azelio. The material is heated to 600 °C. When needed, the energy is transported to a Stirling engine using a heat-transfer fluid.

Heat storage in hot rocks or concrete

Water has one of the highest thermal capacities at 4.2 kJ/(kg⋅K), whereas concrete has about one third of that. On the other hand, concrete can be heated to much higher temperatures (1,200 °C) by, for example, electrical heating, and therefore has a much higher overall volumetric capacity. Thus, in the example below, an insulated cube of about 2.8 m3 would appear to provide sufficient storage for a single house to meet 50% of heating demand. This could, in principle, be used to store surplus wind or solar heat, owing to the ability of electrical heating to reach high temperatures. At the neighborhood level, the Wiggenhausen-Süd solar development at Friedrichshafen in southern Germany has received international attention. It features a 12,000 m3 (420,000 cu ft) reinforced concrete thermal store linked to 4,300 m2 (46,000 sq ft) of solar collectors, which will supply the 570 houses with around 50% of their heating and hot water. Siemens-Gamesa built a 130 MWh thermal storage facility near Hamburg, storing heat at 750 °C in basalt with 1.5 MW of electric output. A similar system is scheduled for Sorø, Denmark, with 41–58% of the stored 18 MWh of heat returned for the town's district heating and 30–41% returned as electricity.
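
The concrete-cube example can be sketched as follows. The material properties and the usable temperature swing are typical assumed values, not figures from the text:

    # The insulated concrete cube example (assumed, typical material properties).
    volume = 2.8                   # m3, from the text
    rho, cp = 2400, 880            # kg/m3 and J/(kg*K), typical concrete (assumed)
    dT = 1000                      # assumed: charged to ~1200 C, discharged to ~200 C
    E = volume * rho * cp * dT
    print(E / 3.6e9, "MWh")        # ~1.6 MWh, i.e. ~5.9 GJ

For comparison, the salt-hydrate section below quotes about 6.7 GJ per winter for a low-energy household, so covering roughly half the demand of a less frugal house is plausible.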

Latent heat storage

Because latent heat storage (LHS) is associated with a phase transition, the general term for the associated media is phase-change material (PCM). During these transitions, heat can be added or extracted without affecting the material's temperature, giving LHS an advantage over SHS technologies. Storage capacities are often higher as well.

There is a multitude of PCMs available, including but not limited to salts, polymers, gels, paraffin waxes, and metal alloys, each with different properties. This allows for a more target-oriented system design. Because the process is isothermal at the PCM's melting point, the material can be picked for the desired temperature range. Desirable qualities include high latent heat and thermal conductivity. Furthermore, the storage unit can be more compact if volume changes during the phase transition are small.
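
To see why the isothermal phase change is an advantage, consider a minimal sketch with assumed, paraffin-like properties (illustrative values only):

    # Latent vs. sensible storage for an assumed paraffin-like PCM.
    latent_heat = 200e3            # J/kg heat of fusion, typical paraffin (assumed)
    c_pcm = 2000                   # J/(kg*K) specific heat (assumed)
    # Melting 1 kg stores as much heat as a 100 K sensible temperature swing,
    # but absorbs and releases it at a single, constant temperature:
    print(latent_heat / c_pcm, "K equivalent")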

PCMs are further subdivided into organic, inorganic and eutectic materials. Compared to organic PCMs, inorganic materials are less flammable, cheaper and more widely available. They also have higher storage capacity and thermal conductivity. Organic PCMs, on the other hand, are less corrosive and not as prone to phase-separation. Eutectic materials, as they are mixtures, are more easily adjusted to obtain specific properties, but have low latent and specific heat capacities.

Another important factor in LHS is the encapsulation of the PCM. Some materials are more prone to erosion and leakage than others, so the system must be carefully designed to avoid unnecessary loss of heat.

Miscibility gap alloy technology

Miscibility gap alloys rely on the phase change of a metallic material (see: latent heat) to store thermal energy.

Rather than pumping the liquid metal between tanks as in a molten-salt system, the metal is encapsulated in another metallic material with which it cannot alloy (with which it is immiscible). Depending on the two materials selected (the phase-changing material and the encapsulating material), storage densities can be between 0.2 and 2 MJ/L.

A working fluid, typically water or steam, is used to transfer the heat into and out of the system. The thermal conductivity of miscibility gap alloys is often higher (up to 400 W/(m⋅K)) than that of competing technologies, which means quicker "charge" and "discharge" of the thermal storage is possible. The technology has not yet been implemented on a large scale.
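
The quoted density range is easier to compare with other storage figures in this article when converted to kWh per cubic metre:

    # The quoted storage densities in more familiar units.
    for mj_per_litre in (0.2, 2.0):
        gj_per_m3 = mj_per_litre               # 1 MJ/L equals 1 GJ/m3
        kwh_per_m3 = gj_per_m3 * 1e9 / 3.6e6
        print(mj_per_litre, "MJ/L =", round(kwh_per_m3), "kWh/m3")  # ~56 to ~556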

Ice-based technology

Several applications are being developed where ice is produced during off-peak periods and used for cooling at a later time. For example, air conditioning can be provided more economically by using low-cost electricity at night to freeze water into ice, then using the cooling capacity of ice in the afternoon to reduce the electricity needed to handle air conditioning demands. Thermal energy storage using ice makes use of the large heat of fusion of water. Historically, ice was transported from mountains to cities for use as a coolant. One metric ton of water (= one cubic meter) can store 334 million joules (MJ), or 317,000 BTU (93 kWh). A relatively small storage facility can hold enough ice to cool a large building for a day or a week.
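
The per-tonne figure follows directly from water's heat of fusion of about 334 kJ/kg:

    # The 334 MJ (93 kWh) per tonne figure follows from water's heat of fusion.
    heat_of_fusion = 334e3         # J/kg
    mass = 1000                    # kg, one metric ton (one cubic metre) of water
    E = heat_of_fusion * mass      # 334 MJ
    print(E / 3.6e6, "kWh")        # ~93 kWh of cooling per tonne of ice melted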

In addition to using ice in direct cooling applications, it is also being used in heat-pump-based heating systems. In these applications, the phase-change energy provides a very significant layer of thermal capacity near the bottom of the temperature range in which water-source heat pumps can operate. This allows the system to ride out the heaviest heating load conditions and extends the timeframe over which the source energy elements can contribute heat back into the system.

Cryogenic energy storage

Cryogenic energy storage uses liquefaction of air or nitrogen as an energy store.

A pilot cryogenic energy system that uses liquid air as the energy store, and low-grade waste heat to drive the thermal re-expansion of the air, operated at a power station in Slough, UK in 2010.

Thermo-chemical heat storage

Thermo-chemical heat storage (TCS) involves some kind of reversible exothermic/endothermic chemical reaction with thermo-chemical materials (TCM). Depending on the reactants, this method can allow for an even higher storage capacity than LHS.

In one type of TCS, heat is applied to decompose certain molecules. The reaction products are then separated, and mixed again when required, resulting in a release of energy. Some examples are the decomposition of potassium oxide (over a range of 300–800 °C, with a heat of decomposition of 2.1 MJ/kg), lead oxide (300–350 °C, 0.26 MJ/kg) and calcium hydroxide (above 450 °C, where the reaction rates can be increased by adding zinc or aluminum). The photochemical decomposition of nitrosyl chloride can also be used and, since it needs photons to occur, works especially well when paired with solar energy.
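
To put the quoted heats of decomposition in context, they can be compared against a familiar latent heat (water's heat of fusion, used here purely for scale):

    # Comparing the quoted heats of decomposition with a common latent heat.
    tcm_mj_per_kg = {"potassium oxide": 2.1, "lead oxide": 0.26}  # from the text
    ice_fusion = 0.334             # MJ/kg, water's heat of fusion, for scale
    for name, h in tcm_mj_per_kg.items():
        print(name, round(h / ice_fusion, 1), "x water's heat of fusion")

This illustrates the claim above that TCS can exceed the storage capacity of latent heat systems, depending on the reactants.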

Adsorption (or Sorption) solar heating and storage

Adsorption (or sorption) processes also fall into this category. They can be used not only to store thermal energy but also to control air humidity. Zeolites (microporous crystalline aluminosilicates) and silica gels are well suited for this purpose. In hot, humid environments, this technology is often used in combination with lithium chloride to cool water.

The low cost ($200/ton) and high cycle rate (2,000X) of synthetic zeolites such as Linde 13X with water adsorbate has garnered much academic and commercial interest recently for use in thermal energy storage (TES), specifically of low-grade solar and waste heat. Several pilot projects have been funded in the EU from 2000 to the present (2020). The basic concept is to store solar thermal energy as chemical latent energy in the zeolite. Typically, hot dry air from flat-plate solar collectors is made to flow through a bed of zeolite such that any water adsorbate present is driven off. Storage can be diurnal, weekly, monthly, or even seasonal depending on the volume of the zeolite and the area of the solar thermal panels. When heat is called for during the night, sunless hours, or winter, humidified air flows through the zeolite. As the humidity is adsorbed by the zeolite, heat is released to the air and subsequently to the building space. This form of TES, with specific use of zeolites, was first taught by Guerra in 1978. Advantages over molten salts and other high-temperature TES include that (1) the temperature required is only the stagnation temperature typical of a solar flat-plate thermal collector, and (2) as long as the zeolite is kept dry, the energy is stored indefinitely. Because of the low temperature, and because the energy is stored as latent heat of adsorption, eliminating the insulation requirements of a molten-salt storage system, costs are significantly lower.

Salt hydrate technology

One example of an experimental storage system based on chemical reaction energy is the salt hydrate technology. The system uses the reaction energy created when salts are hydrated or dehydrated. It works by storing heat in a container holding a 50% sodium hydroxide (NaOH) solution. Heat (e.g., from a solar collector) is stored by evaporating the water in an endothermic reaction. When water is added again, heat is released in an exothermic reaction at 50 °C (122 °F). Current systems operate at 60% efficiency. The system is especially advantageous for seasonal thermal energy storage, because the dried salt can be stored at room temperature for prolonged times without energy loss. The containers with the dehydrated salt can even be transported to a different location. The system has a higher energy density than heat stored in water, and its capacity can be designed to store energy for a few months to years.

In 2013, the Dutch technology developer TNO presented the results of the MERITS project to store heat in a salt container. The heat, which can be derived from a solar collector on a rooftop, expels the water contained in the salt. When the water is added again, the heat is released with almost no energy losses. A container with a few cubic meters of salt could store enough of this thermochemical energy to heat a house through the winter. In a temperate climate like that of the Netherlands, an average low-energy household requires about 6.7 GJ per winter. To store this energy in water (at a temperature difference of 70 °C), 23 m3 of insulated water storage would be needed, exceeding the storage abilities of most households. Using salt hydrate technology with a storage density of about 1 GJ/m3, 4–8 m3 could be sufficient.
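
The volume comparison can be reproduced with straightforward arithmetic (a sketch assuming plain water and the storage density quoted above):

    # Reproducing the storage-volume comparison in the paragraph above.
    demand = 6.7e9                 # J, heat demand per winter (low-energy household)
    # Water storage at a 70 K usable temperature difference:
    water_j_per_m3 = 1000 * 4186 * 70        # ~0.29 GJ/m3
    print(demand / water_j_per_m3, "m3 of water")     # ~23 m3, as quoted
    # Salt hydrate at ~1 GJ/m3:
    print(demand / 1e9, "m3 of salt hydrate")         # ~6.7 m3, within the 4-8 m3 range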

As of 2016, researchers in several countries are conducting experiments to determine the best type of salt, or salt mixture. Low pressure within the container seems favourable for the energy transport. Especially promising are organic salts, so-called ionic liquids. Compared to lithium-halide-based sorbents, they are less problematic in terms of limited global resources, and compared to most other halides and sodium hydroxide (NaOH), they are less corrosive and not negatively affected by CO2 contamination.

Molecular bonds

Storing energy in molecular bonds is being investigated. Energy densities equivalent to lithium-ion batteries have been achieved with a DSPEC (dye-sensitized photoelectrosynthesis cell), a cell that can store energy acquired by solar panels during the day for night-time (or even later) use. Its design takes its cue from well-known natural photosynthesis.

The DSPEC generates hydrogen fuel by using the acquired solar energy to split water molecules into their elements. In this split, the hydrogen is isolated and the oxygen is released into the air. This is harder than it sounds: four electrons of the water molecules need to be separated and transported elsewhere, and a further difficult step is combining the separated hydrogen into hydrogen molecules.

The DSPEC consists of two components: a molecule and a nanoparticle. The molecule, called a chromophore-catalyst assembly, absorbs sunlight and kick-starts the catalyst. This catalyst separates the electrons from the water molecules. The nanoparticles are assembled into a thin layer, and a single nanoparticle carries many chromophore-catalyst assemblies. The function of this thin layer of nanoparticles is to transfer away the electrons that are separated from the water. The layer of nanoparticles is coated with titanium dioxide, which lets the freed electrons be transferred more quickly so that hydrogen can be made. This coating is, in turn, covered with a protective coating that strengthens the connection between the chromophore-catalyst assembly and the nanoparticle.

Using this method, the solar energy acquired from the solar panels is converted into fuel (hydrogen) without releasing greenhouse gases. This fuel can be stored and, at a later time, used in a fuel cell to generate electricity.

MOST

Another promising way to store solar energy for electricity and heat production is a so-called molecular solar thermal system (MOST). In this approach, a molecule is converted by photoisomerization into a higher-energy isomer. Photoisomerization is a process in which one (cis-trans) isomer is converted into another by light (solar energy). This isomer is capable of storing the solar energy until the energy is released by a heat trigger or catalyst (whereupon the isomer converts back to its original form). Promising candidates for such a MOST are norbornadienes (NBD), because of the high energy difference between NBD and its quadricyclane (QC) photoisomer, approximately 96 kJ/mol. It is also known that for such systems, donor-acceptor substitutions provide an effective means of redshifting the longest-wavelength absorption, improving the match with the solar spectrum.

A crucial challenge for a useful MOST system is to achieve a satisfactorily high energy storage density (if possible, higher than 300 kJ/kg). Another challenge is that the light must be harvested in the visible region. Functionalizing the NBD with donor and acceptor units is used to adjust this absorption maximum. However, the positive effect on solar absorption is offset by a higher molecular weight, which implies a lower energy density. There is a further downside: the energy storage time shortens as the absorption is redshifted. A possible way to overcome this anti-correlation between energy density and redshifting is to couple one chromophore unit to several photoswitches, forming so-called dimers or trimers in which the NBD units share a common donor and/or acceptor.
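
The molecular-weight trade-off can be made concrete by converting the quoted 96 kJ/mol to a per-kilogram density. The substituted molar mass below is a hypothetical illustration, not a measured compound:

    # Why molecular weight matters: converting the NBD->QC enthalpy to kJ/kg.
    dH = 96e3                      # J/mol, from the text
    M_bare = 0.09214               # kg/mol, unsubstituted norbornadiene (C7H8)
    print(dH / M_bare / 1e3, "kJ/kg")        # ~1040 kJ/kg for the bare molecule
    M_substituted = 0.30           # kg/mol, hypothetical donor/acceptor-substituted NBD
    print(dH / M_substituted / 1e3, "kJ/kg") # ~320 kJ/kg: heavy groups dilute the density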

In a recently published article in Nature Communications, Kasper Moth-Poulsen and his team tried to engineer the stability of the high-energy photoisomer by having two electronically coupled photoswitches with separate barriers for thermal conversion. In this way, a blue-shift occurred after the first isomerisation (NBD-NBD to QC-NBD), leading to a higher energy of isomerisation for the second switching event (QC-NBD to QC-QC). A further advantage of this system, through sharing a donor, is that the molecular weight per norbornadiene unit is reduced, which increases the energy density.

Eventually, this system reached a quantum yield of photoconversion of up to 94% per NBD unit, where the quantum yield measures the efficiency of the photochemical conversion per absorbed photon. With this system, the measured energy densities reached up to 559 kJ/kg, exceeding the target of 300 kJ/kg. The potential of molecular photoswitches is therefore considerable, not only for solar thermal energy storage but for other applications as well.

Electric thermal storage heaters

Storage heaters are commonplace in European homes with time-of-use metering (traditionally using cheaper electricity at night). They consist of high-density ceramic bricks or feolite blocks heated to a high temperature with electricity, and may or may not have good insulation and controls to release heat over a number of hours.

Solar energy storage

Solar energy is one example of an application of thermal energy storage. Most practical active solar heating systems provide storage from a few hours to a day's worth of energy collected. However, there are a growing number of facilities that use seasonal thermal energy storage (STES), enabling solar energy to be stored in summer for space heating use during winter. The Drake Landing Solar Community in Alberta, Canada, has now achieved a year-round 97% solar heating fraction, a world record made possible only by incorporating STES.

The use of both latent heat and sensible heat is also possible with high-temperature solar thermal input. Various eutectic mixtures of metals, such as aluminium and silicon (AlSi12), offer a high melting point suited to efficient steam generation, while high-alumina cement-based materials offer good thermal storage capabilities.

Pumped-heat electricity storage

In pumped-heat electricity storage (PHES), a reversible heat-pump system is used to store energy as a temperature difference between two heat stores.

Isentropic

One system, which was being developed by the now-bankrupt UK company Isentropic, operates as follows. It involves two insulated containers filled with crushed rock or gravel: a hot vessel storing thermal energy at high temperature and high pressure, and a cold vessel storing thermal energy at low temperature and low pressure. The vessels are connected at top and bottom by pipes, and the whole system is filled with the inert gas argon.

During the charging cycle, the system uses off-peak electricity to work as a heat pump. Argon at ambient temperature and pressure from the top of the cold store is compressed adiabatically to a pressure of 12 bar, heating it to around 500 °C (900 °F). The compressed gas is transferred to the top of the hot vessel where it percolates down through the gravel, transferring its heat to the rock and cooling to ambient temperature. The cooled, but still pressurized, gas emerging at the bottom of the vessel is then expanded (again adiabatically) back down to 1 bar, which lowers its temperature to −150 °C. The cold gas is then passed up through the cold vessel where it cools the rock while being warmed back to its initial condition.
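
The quoted temperatures follow from the adiabatic relation T2 = T1 · (p2/p1)^((γ−1)/γ) for a monatomic gas such as argon. A sketch, with the ambient temperature an assumption:

    # The quoted temperatures follow from adiabatic compression/expansion of
    # argon, a monatomic ideal gas (gamma = 5/3).
    T1 = 293.0                     # K, assumed ambient temperature (~20 C)
    r = 12.0                       # pressure ratio, 1 bar -> 12 bar
    gamma = 5.0 / 3.0
    T_hot = T1 * r ** ((gamma - 1) / gamma)
    print(T_hot - 273.15)          # ~520 C, close to the quoted ~500 C
    T_cold = T1 / r ** ((gamma - 1) / gamma)  # expansion from ambient back to 1 bar
    print(T_cold - 273.15)         # ~-165 C, near the quoted -150 C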

The energy is recovered as electricity by reversing the cycle. The hot gas from the hot vessel is expanded to drive a generator and then supplied to the cold store. The cooled gas retrieved from the bottom of the cold store is compressed, which heats it back to ambient temperature. The gas is then transferred to the bottom of the hot vessel to be reheated.

The compression and expansion processes are provided by a specially designed reciprocating machine using sliding valves. Surplus heat generated by inefficiencies in the process is shed to the environment through heat exchangers during the discharging cycle.

The developer claimed that a round trip efficiency of 72–80% was achievable. This compares to >80% achievable with pumped hydro energy storage.

Another proposed system uses turbomachinery and is capable of operating at much higher power levels. Using a phase-change material as the heat storage medium would enhance performance further.

Right to property

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Right_to_property

The right to property, or the right to own property ...