
Saturday, August 2, 2014

A Yellowstone Super Eruption: Another Doomsday Scenario put to Rest

August 2, 2014 Science
From Link:  http://www.fromquarkstoquasars.com/a-yellowstone-super-eruption-another-doomsday-scenario-put-to-rest/      
Image Credit: Nina B via Shutterstock

If you’ve heard of Yellowstone National Park then, chances are, you’ve heard doomsday scenarios about Yellowstone National Park. The 2005 movie “Supervolcano” highlights how these scenarios generally play out: Yellowstone erupts; people are drowned beneath mountains of lava; a looming cloud of sulfur dioxide gets carried over the globe; the Earth plunges into a volcanic winter; we all die.

Fun times…

In truth, Yellowstone is quite massive…and so is its underground magma reservoir. At 3,472 square miles (8,987 square km), the park is larger than Rhode Island and Delaware combined. And as we all know, a portion of the park sits on top of a giant volcanic caldera (a vast depression left by earlier eruptions that caps a huge reservoir of superhot liquid rock and gases). The underground magma chamber is about 37 miles long (60 km), 18 miles wide (30 km), and 3 to 7 miles deep (5 to 12 km). That may sound rather terrifying; however, fortunately for us, all that magma is tucked safely beneath the surface of the Earth.


But what if it wasn’t? What if Yellowstone erupted? Would the Earth be plunged into a volcanic winter, as some sources indicate?

Geologist Jake Lowenstern (scientist-in-charge of the Yellowstone Volcano Observatory) has the answers that we seek. According to Lowenstern, although the Yellowstone magma source is enormous, walls of lava won’t come pouring across the continent if there’s a super eruption. Instead, the lava flows would be limited to a 30-40 mile radius. Of course, this is still widespread enough to cause significant devastation. There would be no hope for any life forms living within this radius, and the surrounding areas would be engulfed in flames—forest fires would likely rage out of control…but a majority of the immediate damage would be contained within the surrounding area.

A bit dramatic, but you get the idea. Photograph by Carlos Gutierrez/UPI/Landov via National Geographic

Most of the long-range damage would come from “cold ash” and pumice borne on the wind. Four or more inches (10 cm) would cover the ground in a radius of about 500 miles. This would prevent photosynthesis and destroy much of the plant life in the region. Lighter dustings would traverse the United States, polluting farms in the Midwest, covering cars in New York, and contaminating the Mississippi River. Ash would clog waterways and agricultural areas with toxic sludge. Thus, the worst outcome of this event would be the destruction of our food supplies and waterways.

It’s likely that we’d see a global effect on temperatures from all the extra particles in the Earth’s atmosphere. However, these effects would only last a few years as Yellowstone isn’t nearly big enough to cause the long-term catastrophes that we see play out in doomsday scenarios (so no need to worry about a new ice age).

Moreover, contrary to what Hollywood would have you believe, the eruption won’t come without warning.

A super eruption, like all volcanic eruptions, would begin with earthquakes. And if Yellowstone were to have a super eruption, we’d have some big ones. These earthquakes would begin weeks or months before the final eruption, so the eruption wouldn’t come out of nowhere. In fact, most scientists agree that such an eruption won’t come at all, as the caldera has gone through many regular eruptions that release pressure.

So it seems that you can add “A Yellowstone Super Eruption” to your list of ways that the world will not end (Yay!).

Deep Oceans Are Cooling Amidst A Sea of Modeling Uncertainty: New Research on Ocean Heat Content

Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University and author of Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

Two of the world’s premier ocean scientists, from Harvard and MIT, have addressed the data limitations that currently prevent the oceanographic community from resolving the differences among various estimates of changing ocean heat content (in print but available here) [3]. They point out where future data are most needed so these ambiguities do not persist into the next several decades of change.
As a by-product of that analysis they 1) determined the deepest oceans are cooling, 2) estimated a much slower rate of ocean warming, 3) highlighted where the greatest uncertainties existed due to the ever-changing locations of heating and cooling, and 4) specified concerns with previous methods used to construct changes in ocean heat content, such as Balmaseda and Trenberth’s re-analysis (see below) [13]. They concluded, “Direct determination of changes in oceanic heat content over the last 20 years are not in conflict with estimates of the radiative forcing, but the uncertainties remain too large to rationalize e.g., the apparent “pause” in warming.”

Wunsch and Heimbach (2014) humbly admit that their “results differ in detail and in numerical values from other estimates, but determining whether any are “correct” is probably not possible with the existing data sets.”

They estimate the changing states of the ocean by synthesizing diverse data sets using models developed by the consortium for Estimating the Circulation and Climate of the Ocean, ECCO. The ECCO “state estimates” have eliminated deficiencies of previous models and they claim, “unlike most “data assimilation” products, [ECCO] satisfies the model equations without any artificial sources or sinks or forces. The state estimate is from the free running, but adjusted, model and hence satisfies all of the governing model equations, including those for basic conservation of mass, heat, momentum, vorticity, etc. up to numerical accuracy.”

Their results (Figure 18, below) suggest a flattening or slight cooling in the upper 100 meters since 2004, in agreement with the -0.04 Watts/m2 cooling reported by Lyman (2014) [6]. The consensus of previous researchers has been that temperatures in the upper 300 meters have flattened or cooled since 2003 [4], while Wunsch and Heimbach (2014) found the upper 700 meters still warmed up to 2009.

The deep layers contain twice as much heat as the upper 100 meters, and overall exhibit a clear cooling trend for the past two decades. Unlike the upper layers, which are dominated by the annual cycle of heating and cooling, they argue that deep ocean trends must be viewed as part of the ocean’s long-term memory, which is still responding to “meteorological forcing of decades to thousands of years ago”. If Balmaseda and Trenberth’s model of deep ocean warming were correct, any increase in ocean heat content must have occurred between 700 and 2000 meters, but the mechanisms that would warm that “middle layer” remain elusive.
The detected cooling of the deepest oceans is quite remarkable given geothermal warming from the ocean floor. Wunsch and Heimbach (2014) note, “As with other extant estimates, the present state estimate does not yet account for the geothermal flux at the sea floor whose mean values (Pollack et al., 1993) are of order 0.1 W/m2,” which is small but “not negligible compared to any vertical heat transfer into the abyss” [3]. (A note of interest: an increase in heat from the ocean floor has recently been associated with increased basal melt of Antarctica’s Thwaites glacier.) Since heated waters rise, I find it reasonable to assume that, at least in part, any heating of the “middle layers” likely comes from heat that was stored in the deepest ocean decades to thousands of years ago.

Wunsch and Heimbach (2014) emphasize the many uncertainties involved in attributing the cause of changes in the overall heat content concluding, “As with many climate-related records, the unanswerable question here is whether these changes are truly secular, and/or a response to anthropogenic forcing, or whether they are instead fragments of a general red noise behavior seen over durations much too short to depict the long time-scales of Fig. 6, 7, or the result of sampling and measurement biases, or changes in the temporal data density.”

Given those uncertainties, they concluded that much less heat is being added to the oceans compared to claims in previous studies (seen in the table below). It is interesting to note that, compared to Hansen’s study, which ended in 2003 before the observed warming pause, subsequent studies also suggest less heat is entering the oceans. Whether those declining trends are a result of improved methodologies, or due to a cooler sun, or both, requires more observations.


Study                       Years Examined    Watts/m2
Hansen 2005 [9]             1993-2003         0.86 +/- 0.12
Lyman 2010 [5]              1993-2008         0.64 +/- 0.11
von Schuckmann 2011 [10]    2005-2010         0.54 +/- 0.1
Wunsch 2014 [3]             1992-2011         0.2 +/- 0.1
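
To get a rough feel for what these fluxes mean in total energy terms, here is a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, that each flux is quoted per unit of Earth's total surface area (studies differ in the area basis they use) and it ignores the stated uncertainty ranges.

    # Rough conversion of an ocean heat uptake flux (W/m2) into total joules.
    # Assumption (illustrative only): the flux is quoted per unit of Earth's
    # total surface area; some studies instead use the ocean surface area.

    EARTH_SURFACE_M2 = 5.1e14    # approximate total surface area of Earth
    SECONDS_PER_YEAR = 3.156e7   # approximate seconds in one year

    def total_heat_zettajoules(flux_w_per_m2, years):
        """Heat accumulated over the period, in zettajoules (1 ZJ = 1e21 J)."""
        joules = flux_w_per_m2 * EARTH_SURFACE_M2 * years * SECONDS_PER_YEAR
        return joules / 1e21

    studies = [("Hansen 2005", 0.86, 2003 - 1993),
               ("Lyman 2010", 0.64, 2008 - 1993),
               ("von Schuckmann 2011", 0.54, 2010 - 2005),
               ("Wunsch 2014", 0.20, 2011 - 1992)]

    for label, flux, years in studies:
        zj = total_heat_zettajoules(flux, years)
        print(f"{label:20s} ~{zj:6.1f} ZJ over {years} years")

On that assumed area basis, the Wunsch (2014) flux accumulates less than half the heat of the Hansen or Lyman estimates despite covering the longest period, which is the sense in which much less heat is being added.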

No climate model had predicted the dramatically rising temperatures in the deep oceans calculated by the Balmaseda/Trenberth re-analysis [13], and oceanographers suggest such a sharp rise is more likely an artifact of shifting measuring systems. Indeed, the unusual warming correlates with the switch to the Argo observing system. Wunsch and Heimbach (2013) [2] wrote, “clear warnings have appeared in the literature—that spurious trends and values are artifacts of changing observation systems (see, e.g., Elliott and Gaffen, 1991; Marshall et al., 2002; Thompson et al., 2008)—the reanalyses are rarely used appropriately, meaning with the recognition that they are subject to large errors” [3].
More specifically, Wunsch and Heimbach (2014) warned, “Data assimilation schemes running over decades are usually labeled “reanalyses.” Unfortunately, these cannot be used for heat or other budgeting purposes because of their violation of the fundamental conservation laws; see Wunsch and Heimbach (2013) for discussion of this important point. The problem necessitates close examination of claimed abyssal warming accuracies of 0.01 W/m2 based on such methods (e.g., Balmaseda et al., 2013).” [3]

So who to believe?

Because ocean heat is stored asymmetrically and that heat is shifting 24/7, any limited sampling scheme will be riddled with large biases and uncertainties. In their Figure 12, Wunsch and Heimbach (2014) map the uneven densities of regionally stored heat. Apparently associated with its greater salinity, most of the central North Atlantic stores twice as much heat as any part of the Pacific and Indian Oceans. Regions where there are steep heat gradients require a greater sampling effort to avoid misleading results. They warned, “The relatively large heat content of the Atlantic Ocean could, if redistributed, produce large changes elsewhere in the system and which, if not uniformly observed, show artificial changes in the global average.” [3]


Furthermore, due to the constant time-varying heat transport, regions of warming are usually compensated by regions of cooling, as illustrated in their Figure 15. It offers a wonderful visualization of the current state of those natural ocean oscillations by comparing changes in heat content between 1992 and 2011. Those patterns of heat redistribution involve enormous amounts of heat, making the detection of changes in heat content that are many magnitudes smaller extremely difficult. Again, any uneven sampling regime in time or space would result in “artificial changes in the global average”.

Figure 15 shows the most recent effects of La Nina and the negative Pacific Decadal Oscillation. The eastern Pacific has cooled, while simultaneously the intensifying trade winds have swept more warm water into the western Pacific, causing it to warm. Likewise, heat stored in the mid-Atlantic has likely been transported northward, as that region has cooled while the sub-polar seas have warmed. This northward shift in heat content is in agreement with earlier discussions about cycles of warm water intrusions that affect Arctic sea ice, confound climate models of the Arctic, and control the distribution of marine organisms.

Most interesting is the observed cooling throughout the upper 700 meters of the Arctic. There have been two competing explanations for the unusually warm Arctic air temperatures that heavily weight the global average. CO2-driven hypotheses argue that global warming has reduced polar sea ice that previously reflected sunlight, and that the newly exposed dark waters are absorbing more heat and raising water and air temperatures. But clearly a cooling upper Arctic Ocean suggests any absorbed heat is insignificant. Despite greater inflows of warm Atlantic water, declining heat content of the upper 700 meters supports the competing hypothesis that warmer Arctic air temperatures are, at least in part, the result of increased ventilation of heat that was previously trapped by a thick insulating ice cover [7].
That second hypothesis is also in agreement with extensive observations that Arctic air temperatures had been cooling in the 1980s and 1990s. Warming occurred after subfreezing winds, re-directed by the Arctic Oscillation, drove thick multi-year ice out of the Arctic [11].

Regional cooling is also detected along the storm track from the Caribbean and along the eastern USA. This evidence contradicts speculation that Atlantic hurricanes have become, or will become, more severe due to increasing ocean temperatures. It also confirms earlier analyses by blogger Bob Tisdale and others that Superstorm Sandy was not caused by warmer oceans.

In order to support their contention that the deep ocean has been dramatically absorbing heat, Balmaseda/Trenberth must provide a mechanism and the regional observations showing where heat has been carried from the surface to those depths. But few are to be found. Warming at great depths and simultaneous cooling of the surface is antithetical to climate model predictions. Models had predicted global warming would store heat first in the upper layer and stratify that layer. Diffusion would require hundreds to thousands of years, so it is not the mechanism. Trenberth, Rahmstorf, and others have argued that the winds could drive heat below the surface. Indeed, winds can drive heat downward in a layer that oceanographers call the “mixed layer,” but the depth where wind mixing occurs is restricted to a layer roughly 10-200 meters thick over most of the tropical and mid-latitude belts. And those depths have been cooling slightly.

The only other possible mechanism that could reasonably explain heat transfer to the deep ocean is that the winds could tilt the thermocline. The thermocline delineates a rapid transition between the ocean’s warm upper layer and cold lower layer. As illustrated above in Figure 15, during a La Nina warm waters pile up in the western Pacific and deepen the thermocline. But the tilting Pacific thermocline typically does not dip below 700 meters, if ever [8].

Unfortunately, the analysis by Wunsch and Heimbach (2014) does not report on changes in the layer between 700 meters and 2000 meters. However, based on changes in heat content below 2000 meters (their Figure 16), the deeper layers of the Pacific are practically devoid of any deep warming.
The one region transporting the greatest amount of heat into the deep oceans is the ice-forming regions around Antarctica, especially the eastern Weddell Sea, where sea ice has been expanding annually [12]. Unlike the Arctic, the Antarctic is relatively insulated from intruding subtropical waters (discussed here), so any deep warming is mostly from heat descending from above, with a small contribution from geothermal heat.

Counter-intuitively, greater sea ice production can deliver relatively warmer subsurface water to the ocean abyss. When oceans freeze, the salt is ejected to form a dense brine with a temperature that always hovers at the freezing point. Typically this unmodified water is called shelf water. Dense shelf water readily sinks to the bottom of the polar seas. However, in transit to the bottom, shelf water must pass through layers of variously modified Warm Deep Water or Antarctic Circumpolar Water.
Turbulent mixing also entrains some of the warmer water down to the abyss. Warm Deep Water typically comprises 62% of the mixed water that finally reaches the bottom. Any altered dynamic (such as increasing sea ice production, or circulation effects that entrain a greater proportion of Warm Deep Water) can redistribute more heat to the abyss [14]. Due to the Antarctic Oscillation, the warmer waters carried by the Antarctic Circumpolar Current have been observed to undulate southward, bringing those waters closer to ice-forming regions. Shelf waters have generally cooled, and there has been no detectable warming of the Warm Deep Water core, so this region’s deep ocean warming is likely just re-distributing heat, not adding to the ocean heat content.
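
As a rough illustration of how entrainment redistributes heat downward, the sketch below mixes cold shelf water with entrained Warm Deep Water and shows how a larger entrained fraction delivers a warmer mixture to the abyss. The two temperatures are illustrative assumptions only (shelf water near the surface freezing point of seawater, and a nominal value for Warm Deep Water); only the 62% entrainment figure comes from the text above.

    # Toy two-component mixing of Antarctic bottom water.
    # Temperatures are illustrative assumptions, not measured values.
    T_SHELF = -1.9   # deg C, shelf water near the surface freezing point
    T_WDW = 0.5      # deg C, nominal assumed value for Warm Deep Water

    def bottom_water_temp(wdw_fraction):
        """Temperature of the mixture for a given entrained WDW fraction."""
        return wdw_fraction * T_WDW + (1.0 - wdw_fraction) * T_SHELF

    for frac in (0.50, 0.62, 0.70):
        print(f"WDW fraction {frac:.2f} -> bottom water ~{bottom_water_temp(frac):+.2f} deg C")

The point is simply that shifting the entrained fraction moves existing heat downward; it does not add any heat to the ocean as a whole.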

So it remains unclear if and how Trenberth’s “missing heat” has sunk to the deep ocean. The depiction of a dramatic rise in deep ocean heat is highly questionable, even though alarmists have flaunted it as proof of CO2’s power. As Dr. Wunsch had warned earlier, “Convenient assumptions should not be turned prematurely into ‘facts,’ nor uncertainties and ambiguities suppressed.” … “Anyone can write a model: the challenge is to demonstrate its accuracy and precision… Otherwise, the scientific debate is controlled by the most articulate, colorful, or adamant players.” [1]

To reiterate, “the uncertainties remain too large to rationalize e.g., the apparent “pause” in warming.”

==================================

Literature Cited

1. C. Wunsch, 2007. The Past and Future Ocean Circulation from a Contemporary Perspective, in AGU Monograph, 173, A. Schmittner, J. Chiang and S. Hemming, Eds., 53-74
2. Wunsch, C. and P. Heimbach (2013) Dynamically and Kinematically Consistent Global Ocean Circulation and Ice State Estimates. In Ocean Circulation and Climate, Vol. 103. http://dx.doi.org/10.1016/B978-0-12-391851-2.00021-0
3. Wunsch, C., and P. Heimbach, (2014) Bidecadal Thermal Changes in the Abyssal Ocean, J. Phys. Oceanogr., http://dx.doi.org/10.1175/JPO-D-13-096.1
4. Xue,Y., et al., (2012) A Comparative Analysis of Upper-Ocean Heat Content Variability from an Ensemble of Operational Ocean Reanalyses. Journal of Climate, vol 25, 6905-6929.
5. Lyman, J. et al., (2010) Robust warming of the global upper ocean. Nature, vol. 465, 334-337.
6. Lyman, J. and G. Johnson (2014) Estimating Global Ocean Heat Content Changes in the Upper 1800m since 1950 and the Influence of Climatology Choice. Journal of Climate, vol 27.
7. Rigor, I.G., J.M. Wallace, and R.L. Colony (2002), Response of Sea Ice to the Arctic Oscillation, J. Climate, v. 15, no. 18, pp. 2648 – 2668.
8. Zhang, R. et al. (2007) Decadal change in the relationship between the oceanic entrainment temperature and thermocline depth in the far western tropical Pacific. Geophysical Research Letters, Vol. 34.
9. Hansen, J., and others, 2005: Earth’s energy imbalance: confirmation and implications. Science, vol. 308, 1431-1435.
10. von Schuckmann, K., and P.-Y. Le Traon, 2011: How well can we derive Global Ocean Indicators from Argo data?, Ocean Sci., 7, 783-791, doi:10.5194/os-7-783-2011.
11. Kahl, J., et al., (1993) Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years. Nature, vol. 361, p. 335‑337, doi:10.1038/361335a0
12. Parkinson, C. and D. Cavalieri (2012) Antarctic sea ice variability and trends, 1979–2010. The Cryosphere, vol. 6, 871–880.
13. Balmaseda, M. A., K. E. Trenberth, and E. Kallen, 2013: Distinctive climate signals in reanalysis of global ocean heat content. Geophysical Research Letters, 40, 1754-1759.
14. Azaneau, M. et al. (2013) Trends in the deep Southern Ocean (1958–2010): Implications for Antarctic Bottom Water properties and volume export. Journal of Geophysical Research: Oceans, Vol. 118.

How the Ebola Outbreak Became Deadliest in History

Original Link:  https://richarddawkins.net/2014/08/how-the-ebola-outbreak-became-deadliest-in-history/
 
By Bahar Gholipour

The reasons why the Ebola outbreak in West Africa has grown so large, and why it is happening now, may have to do with the travel patterns of bats across Africa and recent weather patterns in the region, as well as other factors, according to a researcher who worked in the region.

The outbreak began with Ebola cases that surfaced in Guinea, and subsequently spread to the neighboring countries of Liberia and Sierra Leone. Until now, none of these three West African countries had ever experienced an Ebola outbreak, let alone cases involving a type of Ebola virus that had been found only in faraway Central Africa.

But despite the image of Ebola as a virus that mysteriously and randomly emerges from the forest, the sites of the cases are far from random, said Daniel Bausch, a tropical medicine researcher at Tulane University who just returned from Guinea and Sierra Leone, where he had worked as part of the outbreak response team.

“A very dangerous virus got into a place in the world that is the least prepared to deal with it,” Bausch told Live Science.

In a new article published today (July 31) in the journal PLOS Neglected Tropical Diseases, Bausch and a colleague reviewed the factors that potentially turned the current outbreak into the largest and deadliest Ebola outbreak in history. Although the focus is now on getting the outbreak under control, for long-term prevention, underlying factors need to be addressed, they said.

Here are five potential reasons why this outbreak is so severe:

The virus causing this outbreak is the deadliest type of Ebola virus.

There are five species of Ebola virus, and each species has caused outbreaks in different regions. Experts were surprised to see that instead of the Taï Forest Ebola virus, which is found near Guinea, it was the Zaire Ebola virus that is the culprit in the current outbreak. This virus was previously found only in three countries in Central Africa: the Democratic Republic of the Congo, the Republic of the Congo and Gabon.

Zaire Ebola virus is the deadliest type of Ebola virus — in previous outbreaks it has killed up to 90 percent of those it infected.

But how did the Zaire Ebola virus get to Guinea? Few people travel between those two regions, and Guéckédou, the remote epicenter of the first cases of the disease, is far off the beaten path, Bausch said. “If Ebola virus was introduced into Guinea from afar, the more likely traveler was a bat,” he said.

It is also possible that the virus was actually in West Africa before the current outbreak, circulating in bats — and perhaps even infected people, but so sporadically that it was never recognized, Bausch said. Some preliminary analysis of blood samples collected from patients with other diseases before the outbreak suggests people in this region were exposed to Ebola previously, but more research is needed to know for sure.

Pondering The Second Law

I glanced through a few posts and comments the other day about creationism and evolution, in which the famous Second Law of Thermodynamics was mentioned several times.  I also know, or it is alleged, that the US Patent Office will not even consider any application if it defies the 2nd Law in any way -- but maybe that is a legend, I really don't know.  In either case, it got me thinking about the law, and the idea of laws of physics in general.  What is a law?

I always find a good starting place to be Wikipedia, that famous repository of seemingly all the knowledge of learned minds, yet notorious at the same time because seemingly anyone can change its contents (I've never even tried).  I do know that when it comes to subjects I know something about, I've always found it both agreeable and further educational.  So I looked up the Second Law on it, and found this:  http://en.wikipedia.org/wiki/Second_law_of_thermodynamics

________________________________________

Second law of thermodynamics

From Wikipedia, the free encyclopedia
   
The second law of thermodynamics states that the entropy of an isolated system never decreases, because isolated systems always evolve toward thermodynamic equilibrium, a state with maximum entropy.

The second law is an empirically validated postulate of thermodynamics. In classical thermodynamics, the second law is a basic postulate defining the concept of thermodynamic entropy, applicable to any system involving measurable heat transfer. In statistical thermodynamics, the second law is a consequence of unitarity in quantum mechanics. In statistical mechanics information entropy is defined from information theory, known as the Shannon entropy. In the language of statistical mechanics, entropy is a measure of the number of alternative microscopic configurations corresponding to a single macroscopic state.

The second law refers to increases in entropy that can be analyzed into two varieties, due to dissipation of energy and due to dispersion of matter. One may consider a compound thermodynamic system that initially has interior walls that restrict transfers within it. The second law refers to events over time after a thermodynamic operation on the system, that allows internal heat transfers, removes or weakens the constraints imposed by its interior walls, and isolates it from the surroundings. As for dissipation of energy, the temperature becomes spatially homogeneous, regardless of the presence or absence of an externally imposed unchanging external force field. As for dispersion of matter, in the absence of an externally imposed force field, the chemical concentrations also become as spatially homogeneous as is allowed by the permeabilities of the interior walls. Such homogeneity is one of the characteristics of the state of internal thermodynamic equilibrium of a thermodynamic system.

________________________________________

There is more, much more, and please read it all, for it is good.  To begin, it immediately takes us to the concept of entropy, which is a measure of disorder in an (isolated, closed) system.

Yet, this has always struck me as bizarre and counter-intuitive.  Why don't we speak of the order in a system, in positive terms?  In science, as in everyday life, we are accustomed to measuring how much of something a thing has, not how much non-something it possesses.  So why isn't entropy the same, a measurement of what's there, not what's lacking?  It's as if we defined matter in terms of the space surrounding it.

The entropy of the cosmos is always increasing, we are also told, as another invocation of the Second Law.  Information content is always decreasing.  Efficiencies are always less than 100%.  We're always losing.  Growing older, and dying.  Death and decay -- what could be a better metaphor for a process that also describes a Carnot Engine?  How do all these ends tie together anyway?

________________________________________

Yet they do, in a very mathematical, and, yes, intuitive, way.  The mathematics I speak of here is that branch called Probability and Statistics.

Stop.  Don't run and hide for cover.  I'm not a mathematician, or even a physicist.  I'm just a plain old chemist, without even his PhD.  I'm not even going to look anything up, or present any strange-looking equations or even charts to use.  I'm going to try to talk about it in the same down-to-earth language that I used in convincing myself of the validity of the Second Law years ago.

Think of a deck of cards.  Better yet, if you have one handy, go grab it.  Riffle through it, in your mind or in your hands (or in your mental hands, if you've got nimble ones).  What do you notice?  First, that all the cards are different -- ah, if this isn't the case, you aren't holding a proper deck.  If it is, do like me and count them.  Fifty-two of them, all spread before you, number and face cards, black and red, tops and bottoms.

Now shuffle them as randomly as you can (if you find this difficult, let your dog or a small child do it for you).  Drop them on the floor, kick them around for a while, then walk about, picking them here and there, at whim, until they're all in your hands again.  The only thing I ask you to do while doing this is to keep the faces (all different) down and the tops (all the same, I think) up.  Pick them all up.  Nudge every corner, every side, every edge, into place, so that the deck is neatly piled.

Now guess the first card.  A protest?  "I've only a one in fifty-two chance of being right,"  you exclaim in dismay.  If you did, that's good, for we're already making progress.  You have some sense of what a probability means.  One in fifty-two is not a very good chance.  You certainly wouldn't bet any money on it (unless you're a compulsive gambler, in which case Chance help you).

Another way of stating your predicament is that you haven't sufficient information to make a good guess.  If you could only know whether it was a red card or a black card, a face card or a number card, or something, anything like this, you could start thinking about your pocketbook before making your guess.  But you know nothing, nothing! Well, except that it has to be one of the fifty-two cards comprising the deck.

Hold on, because I'm going to make things worse.  What if I asked you to guess, not just the first card, but to guess every card in the deck?  If punching me in the mouth isn't your answer, you might just hunker down and wonder how to determine what your chances were of accomplishing such an amazing feat.  How could you do this?

This is where a little mathematics comes in.  Create a mental deck of cards in your head.  Choose the first card at random -- say, seven of spades.  That could have been any one of fifty-two cards, but you placed it first.  Then the second card -- what is it?  How many remaining cards could you have chosen?  Why, fifty-two minus one, equaling fifty-one.  Now the third card.  Fifty-one minus one, equaling fifty.  And so on, and on, and on, etc., until we come to the last card.

So in the placement of the cards you have 52 X 51 X 50 X ... all the way down to the last card, or X 1.  Mathematicians have a nice way of expressing a product like this:  it's called a factorial, and it's represented by the "!" symbol.  In this case, it would be fifty-two factorial, or 52!.

It's one thing to state it.  Actually carrying out the calculation, even with a calculator or on your computer, isn't very easy.  Fortunately, all those years ago I already did it, so I will present you with the approximate answer I recall (approximate because my calculator couldn't handle that many digits).  That answer is "8 X 10 E67".  This is another mathematical shorthand, meaning in this case "eight times ten, where ten is multiplied by itself 67 times over first".  Or, if you prefer, because this is a very, very large number, actually an eight followed by sixty-seven zeroes, we can take its base-10 logarithm, around 67.9.  Well, round that off to 68, and you're there as good as not.
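
If you'd rather check that figure than take my old calculator's word for it, a few lines of Python (just a sketch, nothing fancy) reproduce both the factorial and its base-10 logarithm:

    import math

    # Number of distinct orderings of a standard 52-card deck.
    arrangements = math.factorial(52)

    print(arrangements)              # the full 68-digit integer
    print(f"{arrangements:.2e}")     # about 8.07e+67, i.e. roughly 8 X 10 E67
    print(math.log10(arrangements))  # about 67.9, which rounds to 68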

A number like that is so large (though it's only about a trillionth of the number of atoms in the observable universe), that you wouldn't bet the tiniest tip of one hair leg of the louse living off the tip of one of your hair legs on it.  It might as well be infinite, as far as you're concerned.  But it's not infinite, not even the tiniest bit close, all of which brings me back to the subject of thermodynamics.
________________________________________
 
Heat, as we all now know thanks to Ludwig Boltzmann, is merely the random motion of atoms.  That motion, thanks to Newton's equations, represents the kinetic energy of the atoms, and with this in mind I am going to attempt a magical transformation:  imagine, instead of those cards in a deck, the kinetic energies of all the atoms in a given body of matter.  As gas is the simplest state of matter, we'll work with that.  Imagine a hollow magic glass globe, if you like, filled with atoms (or molecules; in this analysis we can treat them the same) of a gas.  And instead of fifty-two cards, consider uncountable trillions upon trillions of different states of kinetic energy among the (otherwise alike) gas atoms.

I want to make this crystal clear.  I am not relating the cards to the individual atoms, but to the quantity of kinetic energy each particular atom has.  We can even quantize the energy in integer units, from one unit all the way up to -- well, as high as you wish.  If there are a trillion atoms of gas in this globe, then let's say that there are anywhere from one to a trillion energy levels available to each atom.  The precise number doesn't really matter -- I am using trillions here, but any number large enough to be unimaginable will do; and there isn't actually any relationship between the number of atoms and the number of energy levels.  All of this is just simplification for the purpose of explanation.

Very well.  Consider this glass globe full of gas atoms.  There are two ways we can go about measuring properties of it.  The easiest way is to measure its macroscopic properties.  These are properties such as volume, pressure, temperature, the number of atoms (in conveniently large units like a mole, or almost a trillion times a trillion), the mass, and so on.  They're convenient because we have devices like thermometers, scales, barometers, etc., that we can use to do this.

But there is another way to measure the properties of the gas:  the microscopic way.  In this, we take into account each atom and its quantity of kinetic energy, or some measure of its motion, one by one, and sum the whole thing up.  I'm sure you'll agree that this would be a very tedious, and in practice, absurd and impossible way to make the measurements -- for one thing, even if we could do it at all (a very dubious if, to say the least) it would take nearly forever to get anywhere with any measurement at all.  Fortunately, however, there is a correspondence between these macroscopic and microscopic properties, or states as I shall now call them.  That correspondence is via entropy, or the heart of the Second Law.

Recall the statement from Wiki:  "Entropy is a measure of the number of alternative microscopic configurations corresponding to a single macroscopic state."  That card deck of energy units assigned to each gas atom, like the cards in an actual deck, can be arranged in many, many ways:  recall, for the 52 cards, a number whose base-ten logarithm is about 67.9.  For the trillions of possible energy states among a trillion times a trillion atoms, the corresponding number would be astronomically larger; so large that it could never be written out digit by digit in any format available in the universe.  From a microscopic view, the probability of any one particular arrangement might as well be zero, for all our ability to calculate it.  Like a well-shuffled deck of cards, there's just no useful information in it.  Another way of saying this, returning to our randomly moving globe of gas atoms, is that there is no way of doing any useful work with it.
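
If you'd like a feel for how fast that count blows up, here is a small sketch using a standard stand-in problem: the number of ways to share q indistinguishable units of energy among N distinguishable atoms, which is the binomial coefficient C(q+N-1, q).  It isn't exactly the bookkeeping I described above, just a simple toy that shows the same runaway growth.

    import math

    def multiplicity(n_atoms, n_quanta):
        """Ways to distribute n_quanta indistinguishable energy units
        among n_atoms distinguishable atoms ("stars and bars" counting)."""
        return math.comb(n_quanta + n_atoms - 1, n_quanta)

    # Give each toy system as many energy units as it has atoms.
    for n in (10, 100, 1000, 10_000):
        w = multiplicity(n, n)
        print(f"{n:6d} atoms: about 10^{math.log10(w):.0f} arrangements")

Even ten thousand atoms, a speck far too small to see, already has more arrangements than you could ever catalogue; scale that up to a trillion times a trillion atoms and you see why I gave up on writing the number down.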

That's for a well-shuffled deck of cards / highly randomized energy distribution among atoms.  What about a highly ordered deck or distribution?  First, we have to specify what we mean by "ordered."  For a deck, this might mean ordered by suit (spades, hearts, diamonds, and clubs, say), and by value (ace, king, queen, jack, ten ... two), while for our gas it could mean that one atom owns all the units of energy, and all the others none.  I hope you can see that, defined this way, there is only one such distribution; and once we start, either by shuffling the deck or by allowing the gas atoms to bump into each other and thereby release or gain units of energy, the distributions become progressively less and less ordered, eventually (though this may take an enormous amount of time) becoming highly randomized.  The overall macroscopic properties, such as temperature or pressure, don't change, but the number of ways those properties can be achieved increases dramatically.  This is why we talk about the "number of alternative microscopic configurations corresponding to a single macroscopic state", or entropy, and why we say that, in the cosmos as a whole, entropy is always increasing.  It is why the ordered state contains a great deal of information and can do a great deal of work, while increasing disorder, or entropy, means less of both.
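
Here is a little toy simulation in the same spirit (a sketch of my glass globe, not real molecular dynamics): start with one atom owning every unit of energy, let randomly chosen pairs of atoms "collide" by pooling and randomly re-splitting their energy, and watch the perfectly ordered start dissolve while the macroscopic total stays exactly the same.

    import random
    from collections import Counter

    random.seed(1)

    N_ATOMS = 1000
    AVG_UNITS = 5
    energy = [0] * N_ATOMS
    energy[0] = AVG_UNITS * N_ATOMS   # perfectly "ordered": one atom owns it all

    def collide(energy):
        """Toy collision: two random atoms pool their energy and split it at random."""
        i, j = random.randrange(N_ATOMS), random.randrange(N_ATOMS)
        if i == j:
            return
        pooled = energy[i] + energy[j]
        energy[i] = random.randint(0, pooled)
        energy[j] = pooled - energy[i]

    for _ in range(200_000):
        collide(energy)

    print("total energy (the macroscopic quantity, unchanged):", sum(energy))
    print("most common per-atom energies:", Counter(energy).most_common(5))
    # After many collisions the counts fall off roughly geometrically with energy;
    # the ordered starting arrangement has been thoroughly forgotten.

Run it and the low energies dominate, tailing off toward the high ones, while the total never budges: the same macroscopic state, now reached through an astronomically larger family of microscopic arrangements.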

Now if you've ever worked with decks of cards, you've noticed something quite obvious in retrospect:  you rarely go from perfect order to complete disorder in one shuffle.  There are many, many (also almost innumerable) in-between states of the deck that still have some order in some places and disorder in others.  In fact, even a completely randomized deck will have, by pure chance, some small pockets of order, which can still be exploited as information or for work.  The same is true in nature, of course, which is why the Second Law is really a statement of probabilities, not absolutes.  Or, to quote the late Jacob Bronowski in his famous book The Ascent of Man:  "It is not true that orderly states constantly run down to disorder.  It is a statistical law, which says that order will tend to vanish.  But statistics do not say 'always'.  Statistics allow order to be built up in some islands of the universe (here on earth, in you, in me, in the stars, in all kinds of places) while disorder takes over in others."
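
That claim about pockets of order is easy to test for yourself.  The short sketch below shuffles a deck ten thousand times and records the longest run of consecutive cards of the same suit in each shuffle; modest runs turn up constantly, by pure chance.

    import random

    def longest_same_suit_run(deck):
        """Length of the longest run of consecutive cards sharing a suit."""
        best = run = 1
        for prev, cur in zip(deck, deck[1:]):
            run = run + 1 if cur[1] == prev[1] else 1
            best = max(best, run)
        return best

    deck = [(rank, suit) for suit in "SHDC" for rank in range(1, 14)]
    random.seed(7)

    runs = []
    for _ in range(10_000):
        random.shuffle(deck)
        runs.append(longest_same_suit_run(deck))

    print("average longest same-suit run:", sum(runs) / len(runs))
    print("shuffles containing a run of 5 or more:", sum(r >= 5 for r in runs))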

Information, order, the ability to do work:  these are things that a universe always has to some degree, however incompletely.  Indeed, by our current understanding of cosmic evolution, our universe started off in a very high state of order, a perhaps highly improbable state of affairs, but quite permissible by the deeply understood laws of thermodynamics.  This initial high degree of order has allowed all the galaxies and stars, and atoms, and of course you and me and all other living things in this universe, to come into existence; and in the same way, will see all these things we regard as precious to us now pass out of existence.  But do not despair.  Order can never run down to absolutely zero; or, from the opposite perspective, disorder, or entropy, can never increase to infinity, however great it becomes; because of this simultaneously subtle but obvious observation, life in some quantity -- organized consciousness in some form is maybe the better word -- doesn't have to completely vanish.  In fact, if reality really is infinite, as I suspect it is -- consisting of an infinite number of universes in an infinite space-time, all subtly different but all obeying the same fundamental laws of logic -- then we never have to worry about the light of mind being utterly snuffed out everywhere, for all times and places, at any point in any future.  That light certainly shone long before life on Earth began to organize, and will continue to shine, somehow, long after our solar system and even our entire universe have burnt out into a heatless cinder.
________________________________________

What I've put forth here is a necessarily very limited explanation of the Second Law of Thermodynamics, of order, disorder, entropy, information, and work.  There are many more explanations, and whatever you have gained from mine -- I presume more questions than answers -- you should seek deeper comprehension in others' explanations, and in your own mental work on the subject.  There are many topics I've overlooked, or hit upon only in sketchy form, concepts you may need to fully explore and gain clarity in first before grasping the Great Second Law.  If it is any consolation -- assuming you need any -- I am no doubt in much the same situation, perhaps even more so.  If so, I wish you prosperity in your quest for full comprehension, as I have had in my own.  Thank you for attending to these words.

Tidal forces gave moon its shape, according to new analysis

July 30, 2014
NASA's Lunar Reconnaissance Orbiter Camera acquired this image of the nearside of the moon in 2010. (Credit: NASA/GSFC/Arizona State University)
 
The shape of the moon deviates from a simple sphere in ways that scientists have struggled to explain. A new study by researchers at UC Santa Cruz shows that most of the moon's overall shape can be explained by taking into account tidal effects acting early in the moon's history.

The results, published July 30 in Nature, provide insights into the moon's early history, its orbital evolution, and its current orientation in the sky, according to lead author Ian Garrick-Bethell, assistant professor of Earth and planetary sciences at UC Santa Cruz.

As the moon cooled and solidified more than 4 billion years ago, the sculpting effects of tidal and rotational forces became frozen in place. The idea of a frozen tidal-rotational bulge, known as the "fossil bulge" hypothesis, was first described in 1898. "If you imagine spinning a water balloon, it will start to flatten at the poles and bulge at the equator," Garrick-Bethell explained. "On top of that you have tides due to the gravitational pull of the Earth, and that creates sort of a lemon shape with the long axis of the lemon pointing at the Earth."

But this fossil bulge process cannot fully account for the current shape of the moon. In the new paper, Garrick-Bethell and his coauthors incorporated other tidal effects into their analysis. They also took into account the large impact basins that have shaped the moon's topography, and they considered the moon's gravity field together with its topography.

Impact craters

Efforts to analyze the moon's overall shape are complicated by the large basins and craters created by powerful impacts that deformed the lunar crust and ejected large amounts of material. "When we try to analyze the global shape of the moon using spherical harmonics, the craters are like gaps in the data," Garrick-Bethell said. "We did a lot of work to estimate the uncertainties in the analysis that result from those gaps."

Their results indicate that variations in the thickness of the moon's crust caused by tidal heating during its formation can account for most of the moon's large-scale topography, while the remainder is consistent with a frozen tidal-rotational bulge that formed later.

A previous paper by Garrick-Bethell and some of the same coauthors described the effects of tidal stretching and heating of the moon's crust at a time 4.4 billion years ago when the solid outer crust still floated on an ocean of molten rock. Tidal heating would have caused the crust to be thinner at the poles, while the thickest crust would have formed in the regions in line with the Earth. Published in Science in 2010, the earlier study found that the shape of one area of unusual topography on the moon, the lunar farside highlands, was consistent with the effects of tidal heating during the formation of the crust.

"In 2010, we found one area that fits the tidal heating effect, but that study left open the rest of the moon and didn't include the tidal-rotational deformation. In this paper we tried to bring all those considerations together," Garrick-Bethell said.

Tidal heating and tidal-rotational deformation had similar effects on the moon's overall shape, giving it a slight lemon shape with a bulge on the side facing the Earth and another bulge on the opposite side. The two processes left distinct signatures, however, in the moon's gravity field. Because the crust is lighter than the underlying mantle, gravity signals reveal variations in the thickness of the crust that were caused by tidal heating.

Gravity field

Interestingly, the researchers found that the moon's overall gravity field is no longer aligned with the topography, as it would have been when the tidal bulges were frozen into the moon's shape. The principal axis of the moon's overall shape (the long axis of the lemon) is now separated from the gravity principal axis by about 34 degrees. (Excluding the large basins from the data, the difference is still about 30 degrees.)

"The moon that faced us a long time ago has shifted, so we're no longer looking at the primordial face of the moon," Garrick-Bethell said. "Changes in the mass distribution shifted the orientation of the moon. The craters removed some mass, and there were also internal changes, probably related to when the moon became volcanically active."

The details and timing of these processes are still uncertain. But Garrick-Bethell said the new analysis should help efforts to work out the details of the moon's early history. While the new study shows that tidal effects can account for the overall shape of the moon, tidal processes don't explain the topographical differences between the near side and the far side.


In addition to Garrick-Bethell, the coauthors of the paper include Viranga Perera, who worked on the study as a UCSC graduate student and is now at Arizona State University; Francis Nimmo, professor of Earth and planetary sciences at UCSC; and Maria Zuber, a planetary scientist at the Massachusetts Institute of Technology. This work was funded by the Ministry of Education of Korea through the National Research Foundation.

Black Sea deluge hypothesis

From Wikipedia, the free encyclopedia https://en.wikipedi...