Saturday, August 2, 2014

A Yellowstone Super Eruption: Another Doomsday Scenario put to Rest


If you’ve heard of Yellowstone National Park, then chances are you’ve heard doomsday scenarios about Yellowstone National Park. The 2005 movie “Supervolcano” highlights how these scenarios generally play out: Yellowstone erupts; people are drowned beneath mountains of lava; a looming cloud of sulfur dioxide gets carried over the globe; the Earth plunges into a volcanic winter; we all die.

Fun times…

In truth, Yellowstone is quite massive…and so is its underground magma reservoir. At 3,472 square miles (8,987 square km), the park is larger than Rhode Island and Delaware combined. And as we all know, a portion of the park sits on top of a giant volcanic caldera (a vast crater-like depression capping a huge reservoir of superhot liquid rock and gases). The underground magma chamber is about 37 miles long (60 km), 18 miles wide (30 km), and 3 to 7 miles deep (5 to 12 km). That may sound rather terrifying; however, fortunately for us, all that magma is tucked safely beneath the surface of the Earth.


But what if it wasn’t? What if Yellowstone erupted? Would the Earth be plunged into a volcanic winter, as some sources indicate?

Geologist Jake Lowenstern (scientist-in-charge of the Yellowstone Volcano Observatory) has the answers that we seek. According to Lowenstern, although the Yellowstone magma source is enormous, walls of lava won’t come pouring across the continent if there’s a super eruption. Instead, the lava flows would be limited to a 30-40 mile radius. Of course, this is still widespread enough to cause significant devastation. There would be no hope for any life forms living within this radius, and the surrounding areas would be engulfed in flames—forest fires would likely rage out of control…but a majority of the immediate damage would be contained within the surrounding area.

A bit dramatic, but you get the idea. Photograph by Carlos Gutierrez/UPI/Landov via National Geographic

Most of the long-range damage would come from “cold ash” and pumice borne on the wind. Four or more inches (10 cm) of ash would cover the ground within a radius of about 500 miles. This would prevent photosynthesis and destroy much of the plant life in the region. Lighter dustings would traverse the United States, polluting farms in the Midwest, covering cars in New York, and contaminating the Mississippi River. Ash would clog waterways and agricultural areas with toxic sludge. Thus, the worst outcome of this event would be the destruction of our food supplies and waterways.
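As a sanity check on those numbers, here is a back-of-envelope calculation (my own arithmetic, not from the article) of how much ash a 4-inch blanket over a 500-mile radius represents:

```python
import math

# Back-of-envelope estimate: volume of ash implied by a 4-inch (0.1 m)
# blanket covering a circle of 500-mile radius.  Assumes uniform depth,
# which is a simplification -- real ashfall thins with distance.
radius_m = 500 * 1609.34              # 500 miles in meters
depth_m = 0.1                         # 4 inches is about 10 cm
area_m2 = math.pi * radius_m ** 2     # area of the covered circle
volume_km3 = area_m2 * depth_m / 1e9  # cubic meters -> cubic kilometers

print(f"ash blanket volume ~ {volume_km3:.0f} km^3")
```

That works out to roughly 200 cubic kilometers of ash, an enormous but, for a super eruption, not implausible figure.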

It’s likely that we’d see a global effect on temperatures from all the extra particles in the Earth’s atmosphere. However, these effects would only last a few years as Yellowstone isn’t nearly big enough to cause the long-term catastrophes that we see play out in doomsday scenarios (so no need to worry about a new ice age).

Moreover, contrary to what Hollywood would have you believe, the eruption won’t come without warning.

A super eruption, like other volcanic eruptions, would begin with earthquakes, and if Yellowstone were headed toward a super eruption, we’d have some big ones. These earthquakes would begin weeks or months before the final eruption, so the eruption wouldn’t come out of nowhere. In fact, most scientists agree that such an eruption won’t come at all, as the caldera has gone through many regular eruptions that release pressure.

So it seems that you can add “A Yellowstone Super Eruption” to your list of ways that the world will not end (Yay!).

Deep Oceans Are Cooling Amidst A Sea of Modeling Uncertainty: New Research on Ocean Heat Content


Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University and author of Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

Two of the world’s premier ocean scientists, from Harvard and MIT, have addressed the data limitations that currently prevent the oceanographic community from resolving the differences among various estimates of changing ocean heat content (in print but available here).3 They point out where future data are most needed so these ambiguities do not persist into the next several decades of change.
As a by-product of that analysis they 1) determined the deepest oceans are cooling, 2) estimated a much slower rate of ocean warming, 3) highlighted where the greatest uncertainties existed due to the ever changing locations of heating and cooling, and 4) specified concerns with previous methods used to construct changes in ocean heat content, such as Balmaseda and Trenberth’s re-analysis (see below).13 They concluded, “Direct determination of changes in oceanic heat content over the last 20 years are not in conflict with estimates of the radiative forcing, but the uncertainties remain too large to rationalize e.g., the apparent “pause” in warming.”

Wunsch and Heimbach (2014) humbly admit that their “results differ in detail and in numerical values from other estimates, but determining whether any are ‘correct’ is probably not possible with the existing data sets.”

They estimate the changing states of the ocean by synthesizing diverse data sets using models developed by the consortium for Estimating the Circulation and Climate of the Ocean, ECCO. The ECCO “state estimates” have eliminated deficiencies of previous models and they claim, “unlike most “data assimilation” products, [ECCO] satisfies the model equations without any artificial sources or sinks or forces. The state estimate is from the free running, but adjusted, model and hence satisfies all of the governing model equations, including those for basic conservation of mass, heat, momentum, vorticity, etc. up to numerical accuracy.”

Their results (their Figure 18, below) suggest a flattening or slight cooling in the upper 100 meters since 2004, in agreement with the -0.04 Watts/m2 cooling reported by Lyman (2014).6 The consensus of previous researchers has been that temperatures in the upper 300 meters have flattened or cooled since 2003,4 while Wunsch and Heimbach (2014) found the upper 700 meters still warmed up to 2009.

The deep layers contain twice as much heat as the upper 100 meters, and overall exhibit a clear cooling trend for the past 2 decades. Unlike the upper layers, which are dominated by the annual cycle of heating and cooling, they argue that deep ocean trends must be viewed as part of the ocean’s long term memory, which is still responding to “meteorological forcing of decades to thousands of years ago”. If Balmaseda and Trenberth’s model of deep ocean warming were correct, any increase in ocean heat content must have occurred between 700 and 2000 meters, but the mechanisms that would warm that “middle layer” remain elusive.
The detected cooling of the deepest oceans is quite remarkable given geothermal warming from the ocean floor. Wunsch and Heimbach (2014) note, “As with other extant estimates, the present state estimate does not yet account for the geothermal flux at the sea floor whose mean values (Pollack et al., 1993) are of order 0.1 W/m2,” which is small but “not negligible compared to any vertical heat transfer into the abyss.”3 (A note of interest: an increase in heat from the ocean floor has recently been associated with increased basal melt of Antarctica’s Thwaites glacier.) Since heated waters rise, I find it reasonable to assume that, at least in part, any heating of the “middle layers” likely comes from heat that was stored in the deepest ocean decades to thousands of years ago.

Wunsch and Heimbach (2014) emphasize the many uncertainties involved in attributing the cause of changes in the overall heat content concluding, “As with many climate-related records, the unanswerable question here is whether these changes are truly secular, and/or a response to anthropogenic forcing, or whether they are instead fragments of a general red noise behavior seen over durations much too short to depict the long time-scales of Fig. 6, 7, or the result of sampling and measurement biases, or changes in the temporal data density.”

Given those uncertainties, they concluded that much less heat is being added to the oceans than claimed in previous studies (see the table below). It is interesting that, compared to Hansen’s study (which ended in 2003, before the observed warming pause), subsequent studies suggest less and less heat is entering the oceans. Whether those declining trends are a result of improved methodologies, or of a cooler sun, or both, requires more observations.

Study                          Years Examined   Watts/m2
Hansen 2005 (ref. 9)           1993-2003        0.86 +/- 0.12
Lyman 2010 (ref. 5)            1993-2008        0.64 +/- 0.11
von Schuckmann 2011 (ref. 10)  2005-2010        0.54 +/- 0.1
Wunsch 2014 (ref. 3)           1992-2011        0.2 +/- 0.1
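To put Wunsch and Heimbach's 0.2 W/m2 in perspective, here is a rough illustration (my own arithmetic, using assumed typical seawater properties) of the mean warming such a flux would produce in a 700-meter water column over the 1992-2011 study period:

```python
# Convert a sustained heat flux into an implied mean temperature change
# of a water layer: delta_T = F * t / (rho * c_p * h).
rho = 1025.0                      # seawater density, kg/m^3 (assumed typical value)
c_p = 3990.0                      # seawater specific heat, J/(kg K) (assumed typical value)
depth = 700.0                     # layer thickness, m
flux = 0.2                        # W/m^2, the Wunsch 2014 central estimate
seconds = 19 * 365.25 * 86400     # 19 years, 1992-2011

delta_T = flux * seconds / (rho * c_p * depth)
print(f"implied mean warming of upper 700 m: {delta_T:.3f} K")
```

The answer is only a few hundredths of a degree over two decades, which helps explain why sampling biases and measurement uncertainty so easily swamp the signal.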

No climate model had predicted the dramatically rising temperatures in the deep oceans calculated by the Balmaseda/Trenberth re-analysis,13 and oceanographers suggest such a sharp rise is more likely an artifact of shifting measuring systems. Indeed the unusual warming correlates with the switch to the Argo observing system. Wunsch and Heimbach (2013)2 wrote, “clear warnings have appeared in the literature—that spurious trends and values are artifacts of changing observation systems (see, e.g., Elliott and Gaffen, 1991; Marshall et al., 2002; Thompson et al., 2008)—the reanalyses are rarely used appropriately, meaning with the recognition that they are subject to large errors.”3
More specifically Wunsch and Heimbach (2014) warned, “Data assimilation schemes running over decades are usually labeled “reanalyses.” Unfortunately, these cannot be used for heat or other budgeting purposes because of their violation of the fundamental conservation laws; see Wunsch and Heimbach (2013) for discussion of this important point. The problem necessitates close examination of claimed abyssal warming accuracies of 0.01 W/m2 based on such methods (e.g., Balmaseda et al., 2013).” 3

So who to believe?

Because ocean heat is stored asymmetrically and that heat is shifting 24/7, any limited sampling scheme will be riddled with large biases and uncertainties. In Figure 12 below Wunsch and Heimbach (2014) map the uneven densities of regionally stored heat. Apparently associated with its greater salinity, most of the central North Atlantic stores twice as much heat as any part of the Pacific and Indian Oceans. Regions where there are steep heat gradients require a greater sampling effort to avoid misleading results. They warned, “The relatively large heat content of the Atlantic Ocean could, if redistributed, produce large changes elsewhere in the system and which, if not uniformly observed, show artificial changes in the global average.” 3


Furthermore, due to the constant time-varying heat transport, regions of warming are usually compensated by regions of cooling, as illustrated in their Figure 15. It offers a wonderful visualization of the current state of those natural ocean oscillations by comparing changes in heat content between 1992 and 2011. Those patterns of heat re-distribution involve enormous amounts of heat, making the detection of changes in heat content that are many magnitudes smaller extremely difficult. Again, any uneven sampling regime in time or space would result in “artificial changes in the global average”.

Figure 15 shows the most recent effects of La Nina and the negative Pacific Decadal Oscillation. The eastern Pacific has cooled, while simultaneously the intensifying trade winds have swept more warm water into the western Pacific, causing it to warm. Likewise, heat stored in the mid‑Atlantic has likely been transported northward, as that region has cooled while simultaneously the sub‑polar seas have warmed. This northward change in heat content is in agreement with earlier discussions about cycles of warm water intrusions that affect Arctic sea ice, confound climate models of the Arctic, and control the distribution of marine organisms.

Most interesting is the observed cooling throughout the upper 700 meters of the Arctic. There have been two competing explanations for the unusually warm Arctic air temperatures that weigh heavily in the global average. CO2-driven hypotheses argue that global warming has reduced polar sea ice that previously reflected sunlight, and that the exposed dark waters are now absorbing more heat and raising water and air temperatures. But clearly a cooling upper Arctic Ocean suggests any absorbed heat is insignificant. Despite greater inflows of warm Atlantic water, declining heat content of the upper 700 meters supports the competing hypothesis that warmer Arctic air temperatures are, at least in part, the result of increased ventilation of heat that was previously trapped by a thick insulating ice cover.7
That second hypothesis is also in agreement with extensive observations that Arctic air temperatures had been cooling in the 80s and 90s. Warming occurred after subfreezing winds, re‑directed by the Arctic Oscillation, drove thick multi-year ice out from the Arctic.11

Regional cooling is also detected along the storm track from the Caribbean and along eastern USA. This evidence contradicts speculation that hurricanes in the Atlantic will or have become more severe due to increasing ocean temperatures. This also confirms earlier analyses of blogger Bob Tisdale and others that Superstorm Sandy was not caused by warmer oceans.

In order to support their contention that the deep ocean has been dramatically absorbing heat, Balmaseda/Trenberth must provide a mechanism and the regional observations showing where heat has been carried from the surface to those depths. But few are to be found. Warming at great depths and simultaneous cooling of the surface is antithetical to climate model predictions. Models had predicted global warming would store heat first in the upper layer and stratify that layer. Diffusion would require hundreds to thousands of years, so it is not the mechanism. Trenberth, Rahmstorf, and others have argued that winds could drive heat below the surface. Indeed winds can drive heat downward in a layer that oceanographers call the “mixed layer,” but the depth where wind mixing occurs is restricted to a layer roughly 10-200 meters thick over most of the tropical and mid-latitude belts. And those depths have been cooling slightly.

The only other mechanism that could reasonably explain heat transfer to the deep ocean is that winds could tilt the thermocline. The thermocline delineates a rapid transition between the ocean’s warm upper layer and cold lower layer. As illustrated above in Figure 15, during a La Nina warm waters pile up in the western Pacific and deepen the thermocline. But the tilting Pacific thermocline rarely, if ever, dips below 700 meters.8

Unfortunately the analysis by Wunsch and Heimbach (2014) does not report on changes in the layer between 700 meters and 2000 meters. However based on changes in heat content below 2000 meters (their Figure 16 below), deeper layers of the Pacific are practically devoid of any deep warming.
The one region transporting the greatest amount of heat into the deep oceans is the ice forming regions around Antarctica, especially the eastern Weddell Sea where annually sea ice has been expanding.12 Unlike the Arctic, the Antarctic is relatively insulated from intruding subtropical waters (discussed here) so any deep warming is mostly from heat descending from above with a small contribution from geothermal.

Counter‑intuitively, greater sea ice production can deliver relatively warmer subsurface water to the ocean abyss. When oceans freeze, the salt is ejected to form a dense brine with a temperature that always hovers at the freezing point. Typically this unmodified water is called shelf water. Dense shelf water readily sinks to the bottom of the polar seas. However, in transit to the bottom, shelf water must pass through layers of variously modified Warm Deep Water or Antarctic Circumpolar Water.
Turbulent mixing also entrains some of the warmer water down to the abyss. Warm Deep Water typically comprises 62% of the mixed water that finally reaches the bottom. Any altered dynamic (such as increasing sea ice production, or circulation effects that entrain a greater proportion of Warm Deep Water) can redistribute more heat to the abyss.14 Due to the Antarctic Oscillation, the warmer waters carried by the Antarctic Circumpolar Current have been observed to undulate southward, bringing those waters closer to ice forming regions. Shelf waters have generally cooled, and there has been no detectable warming of the Warm Deep Water core, so this region’s deep ocean warming is likely just re-distributing heat, not adding to the ocean heat content.

So it remains unclear if and how Trenberth’s “missing heat” has sunk to the deep ocean. The depiction of a dramatic rise in deep ocean heat is highly questionable, even though alarmists have flaunted it as proof of CO2’s power. As Dr. Wunsch had warned earlier, “Convenient assumptions should not be turned prematurely into ‘facts,’ nor uncertainties and ambiguities suppressed.” … “Anyone can write a model: the challenge is to demonstrate its accuracy and precision… Otherwise, the scientific debate is controlled by the most articulate, colorful, or adamant players.” 1

To reiterate, “the uncertainties remain too large to rationalize e.g., the apparent “pause” in warming.”


Literature Cited

1. C. Wunsch, 2007. The Past and Future Ocean Circulation from a Contemporary Perspective, in AGU Monograph, 173, A. Schmittner, J. Chiang and S. Hemming, Eds., 53-74
2. Wunsch, C. and P. Heimbach (2013) Dynamically and Kinematically Consistent Global Ocean Circulation and Ice State Estimates. In Ocean Circulation and Climate, Vol. 103.
3. Wunsch, C., and P. Heimbach, (2014) Bidecadal Thermal Changes in the Abyssal Ocean, J. Phys. Oceanogr.,
4. Xue,Y., et al., (2012) A Comparative Analysis of Upper-Ocean Heat Content Variability from an Ensemble of Operational Ocean Reanalyses. Journal of Climate, vol 25, 6905-6929.
5. Lyman, J., et al. (2010) Robust warming of the global upper ocean. Nature, vol. 465, 334-
6. Lyman, J. and G. Johnson (2014) Estimating Global Ocean Heat Content Changes in the Upper 1800m since 1950 and the Influence of Climatology Choice*. Journal of Climate, vol 27.
7. Rigor, I.G., J.M. Wallace, and R.L. Colony (2002), Response of Sea Ice to the Arctic Oscillation, J. Climate, v. 15, no. 18, pp. 2648 – 2668.
8. Zhang, R. et al. (2007) Decadal change in the relationship between the oceanic entrainment temperature and thermocline depth in the far western tropical Pacific. Geophysical Research Letters, Vol. 34.
9. Hansen, J., and others, 2005: Earth’s energy imbalance: confirmation and implications. Science, vol. 308, 1431-1435.
10. von Schuckmann, K., and P.-Y. Le Traon, 2011: How well can we derive Global Ocean Indicators
from Argo data?, Ocean Sci., 7, 783-791, doi:10.5194/os-7-783-2011.
11. Kahl, J., et al., (1993) Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years. Nature, vol. 361, p. 335‑337, doi:10.1038/361335a0
12. Parkinson, C. and D. Cavalieri (2012) Antarctic sea ice variability and trends, 1979–2010. The Cryosphere, vol. 6, 871–880.
13. Balmaseda, M. A., K. E. Trenberth, and E. Kallen, 2013: Distinctive climate signals in reanalysis of global ocean heat content. Geophysical Research Letters, 40, 1754-1759.
14. Azaneau, M. et al. (2013) Trends in the deep Southern Ocean (1958–2010): Implications for Antarctic Bottom Water properties and volume export. Journal Of Geophysical Research: Oceans, Vol. 118

How the Ebola Outbreak Became Deadliest in History


By Bahar Gholipour

The reasons why the Ebola outbreak in West Africa has grown so large, and why it is happening now, may have to do with the travel patterns of bats across Africa and recent weather patterns in the region, as well as other factors, according to a researcher who worked in the region.

The outbreak began with Ebola cases that surfaced in Guinea, and subsequently spread to the neighboring countries of Liberia and Sierra Leone. Until now, none of these three West African countries had ever experienced an Ebola outbreak, let alone cases involving a type of Ebola virus that had been found only in faraway Central Africa.

But despite the image of Ebola as a virus that mysteriously and randomly emerges from the forest, the sites of the cases are far from random, said Daniel Bausch, a tropical medicine researcher at Tulane University who just returned from Guinea and Sierra Leone, where he had worked as part of the outbreak response team.

“A very dangerous virus got into a place in the world that is the least prepared to deal with it,” Bausch told Live Science.

In a new article published today (July 31) in the journal PLOS Neglected Tropical Diseases, Bausch and a colleague reviewed the factors that potentially turned the current outbreak into the largest and deadliest Ebola outbreak in history. Although the focus is now on getting the outbreak under control, for long-term prevention, underlying factors need to be addressed, they said.

Here are five potential reasons why this outbreak is so severe:

The virus causing this outbreak is the deadliest type of Ebola virus.

The Ebola virus has five species, and each species has caused outbreaks in different regions. Experts were surprised to see that instead of the Taï Forest Ebola virus, which is found near Guinea, it was the Zaire Ebola virus that is the culprit in the current outbreak. This virus was previously found only in three countries in Central Africa: the Democratic Republic of the Congo, the Republic of the Congo and Gabon.

Zaire Ebola virus is the deadliest type of Ebola virus — in previous outbreaks it has killed up to 90 percent of those it infected.

But how did the Zaire Ebola virus get to Guinea? Few people travel between those two regions, and Guéckédou, the remote epicenter of the first cases of disease, is far off the beaten path, Bausch said. “If Ebola virus was introduced into Guinea from afar, the more likely traveler was a bat,” he said.
It is also possible that the virus was actually in West Africa before the current outbreak, circulating in bats — and perhaps even infected people but so sporadically that it was never recognized, Bausch said. Some preliminary analysis of blood samples collected from patients with other diseases before the outbreak suggests people in this region were exposed to Ebola previously, but more research is needed to know for sure.

Pondering The Second Law

I glanced through a few posts and comments the other day about creationism and evolution, in which the famous Second Law of Thermodynamics was mentioned several times.  I have also heard it alleged that the US Patent Office will not even consider an application if it defies the Second Law in any way -- but maybe that is a legend, I really don't know.  In either case, it got me thinking about the law, and about the idea of laws of physics in general.  What is a law?

I always find a good starting place to be Wikipedia, that famous repository of seemingly all knowledge of learned minds, yet notorious at the same time because seemingly anyone can change its contents (I've never even tried).  I do know that when it comes to subjects I know something about, I've always found it both agreeable and further educational.  So I looked up the Second Law on it, and found this:


Second law of thermodynamics

From Wikipedia, the free encyclopedia
The second law of thermodynamics states that the entropy of an isolated system never decreases, because isolated systems always evolve toward thermodynamic equilibrium, a state with maximum entropy.

The second law is an empirically validated postulate of thermodynamics. In classical thermodynamics, the second law is a basic postulate defining the concept of thermodynamic entropy, applicable to any system involving measurable heat transfer. In statistical thermodynamics, the second law is a consequence of unitarity in quantum mechanics. In statistical mechanics information entropy is defined from information theory, known as the Shannon entropy. In the language of statistical mechanics, entropy is a measure of the number of alternative microscopic configurations corresponding to a single macroscopic state.

The second law refers to increases in entropy that can be analyzed into two varieties, due to dissipation of energy and due to dispersion of matter. One may consider a compound thermodynamic system that initially has interior walls that restrict transfers within it. The second law refers to events over time after a thermodynamic operation on the system, that allows internal heat transfers, removes or weakens the constraints imposed by its interior walls, and isolates it from the surroundings. As for dissipation of energy, the temperature becomes spatially homogeneous, regardless of the presence or absence of an externally imposed unchanging external force field. As for dispersion of matter, in the absence of an externally imposed force field, the chemical concentrations also become as spatially homogeneous as is allowed by the permeabilities of the interior walls. Such homogeneity is one of the characteristics of the state of internal thermodynamic equilibrium of a thermodynamic system.


There is more, much more, and please read it all, for it is good.  To begin, it immediately takes us to the concept of entropy, which is a measure of disorder in an (isolated, closed) system.
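The Wikipedia passage's "number of alternative microscopic configurations" has a famously compact mathematical form, Boltzmann's relation (a standard formula, not part of the excerpt quoted above):

```latex
% Boltzmann's entropy formula: the entropy S of a macrostate grows with
% the logarithm of the number of microstates \Omega consistent with it.
S = k_B \ln \Omega
```

Here k_B is Boltzmann's constant.  A macrostate that can be realized in more microscopic ways carries more entropy, which is exactly the intuition the card-deck argument develops.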

Yet, this has always struck me as bizarre and counter-intuitive.  Why don't we speak of the order in a system, in positive terms?  In science, as in everyday life, we are accustomed to measuring how much of something a thing has, not how much non-something it possesses.  So why isn't entropy the same, a measurement of what's there, not what's lacking?  It's as if we defined matter in terms of the space surrounding it.

The entropy of the cosmos is always increasing, we are also told, as another invocation of the Second Law.  Information content is always decreasing.  Efficiencies are always less than 100%.  We're always losing.  Growing older, and dying.  Death and decay -- what could be a better metaphor for a process that also describes a Carnot Engine?  How do all these ends tie together anyway?


Yet they do, in a very mathematical, and, yes, intuitive, way.  The mathematics I speak of here is that branch called Probability and Statistics.

Stop.  Don't run and hide for cover.  I'm not a mathematician, or even a physicist.  I'm just a plain old chemist, without even his PhD.  I'm not even going to look anything up, or present any strange looking equations or charts.  I'm going to try to talk about it in the same down-to-earth language that I used in convincing myself of the validity of the Second Law years ago.

Think of a deck of cards.  Better yet, if you have one handy, go grab it.  Riffle through it, in your mind or in your hands (or in your mental hands, if you've got nimble ones).  What do you notice?  First, that all the cards are different -- ah, if this isn't the case, you aren't holding a proper deck.  If it is, do like me and count them.  Fifty-two of them, all spread before you, name and face cards, black and red, tops and bottoms.

Now shuffle them as randomly as you can (if you find this difficult, let your dog or a small child do it for you).  Drop them on the floor, kick them around for a while, then walk about, picking them here and there, at whim, until they're all in your hands again.  The only thing I ask you to do while doing this is to keep the faces (all different) down and the tops (all the same, I think) up.  Pick them all up.  Nudge every corner, every side, every edge, into place, so that the deck is neatly piled.

Now guess the first card.  A protest?  "I've only a one in fifty-two chance of being right,"  you exclaim in dismay.  If you did, that's good, for we're already making progress.  You have some sense of what a probability means.  One in fifty-two is not a very good chance.  You certainly wouldn't bet any money on it (unless you're a compulsive gambler, in which case Chance help you).

Another way of stating your predicament is that you haven't sufficient information to make a good guess.  If you could only know whether it was a red card or a black card, a face card or a number card, or something, anything like this, you could start thinking about your pocketbook before making your guess.  But you know nothing, nothing! Well, except that it has to be one of the fifty-two cards comprising the deck.
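That predicament can be made quantitative.  In information-theoretic terms (my own aside, using Shannon's measure rather than anything from the original post), pinning down one card out of fifty-two requires about 5.7 bits, and learning the card's color supplies exactly one of them:

```python
import math

# Shannon information for the card-guessing game: with N equally likely
# possibilities, the uncertainty is log2(N) bits.
cards = 52
uncertainty = math.log2(cards)        # bits needed to identify one card
after_color = math.log2(cards // 2)   # 26 cards remain once color is known

print(f"full uncertainty : {uncertainty:.2f} bits")
print(f"after color known: {after_color:.2f} bits")
```

Each yes/no fact you learn (red or black, face or number) shaves off bits, which is why partial information makes the bet less hopeless.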

Hold on, because I'm going to make things worse.  What if I asked you to guess, not just the first card, but to guess every card in the deck?  If punching me in the mouth isn't your answer, you might just hunker down and wonder how to determine what your chances were of accomplishing such an amazing feat.  How could you do this?

This is where a little mathematics comes in.  Create a mental deck of cards in your head.  Choose the first card at random -- say, seven of spades.  That could have been any one of fifty-two cards, but you placed it first.  Then the second card -- what is it?  How many remaining cards could you have chosen?  Why, fifty-two minus one, equaling fifty-one.  Now the third card.  Fifty-one minus one, equaling fifty.  And so on, and on, and on, etc., until we come to the last card.

So in the placement of the cards you have 52 X 51 X 50 X ... all the way down to the last card, or X 1.  Mathematicians have a nice way of expressing a product like this:  it's called a factorial, and it's represented by the "!" symbol.  In this case, it would be fifty-two factorial, or 52!.

It's one thing to state it.  Actually carrying out the calculation, even with a calculator or on your computer, isn't very easy.  Fortunately, all those years ago I already did it, so I will present you with the approximate answer I recall (approximate because my calculator couldn't handle that many digits).  That answer is 8 X 10^67.  This is mathematical shorthand, meaning in this case "eight times ten multiplied by itself 67 times over" -- that is, eight followed by sixty-seven zeroes.  Or, if you prefer, because this is a very, very large number, we can take its base-10 logarithm, around 67.9.  Well, round that off to 68, and you're there as good as not.
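Modern interpreters handle big integers natively, so the author's remembered figure is easy to verify; a quick sketch:

```python
import math

# Verify the article's recollection: 52! and its base-10 logarithm.
perms = math.factorial(52)                        # exact count of deck orderings
print(f"52! ~ {perms:.3e}")                       # about 8.066e+67
print(f"log10(52!) ~ {math.log10(perms):.1f}")    # about 67.9
```

Both remembered figures -- roughly eight followed by sixty-seven zeroes, with a base-10 logarithm of about 67.9 -- check out.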

A number like that is so large (though it's still dwarfed by the number of atoms in the universe) that you wouldn't bet the tiniest tip of one hair leg of the louse living off the tip of one of your hair legs on it.  It might as well be infinite, as far as you're concerned.  But it's not infinite, not even the tiniest bit close, all of which brings me back to the subject of thermodynamics.
Heat, as we all now know thanks to Ludwig Boltzmann, is merely the random motion of atoms.  That motion, thanks to Newton's equations, represents the kinetic energy of the atoms, and with this in mind I am going to attempt a magical transformation:  Imagine, instead of those cards in a deck, the kinetic energies of all the atoms in a given body of matter.  As gas is the simplest state of matter, we'll work with that.  Imagine a hollow magic glass globe, if you like, filled with atoms (or molecules; in this analysis we can treat them the same) of a gas.  And instead of fifty-two cards, consider uncountable trillions upon trillions of different states of kinetic energy among the (otherwise alike) gas atoms.

I want to make this crystal clear.  I am not relating the cards to the individual atoms, but to the quantity of kinetic energy each particular atom has.  We can even quantize the energy in integer units, from one unit all the way up to -- well, as high as you wish.  If there are a trillion atoms of gas in this globe, then let's say there are anywhere from one to a trillion energy levels available to each atom.  The precise number doesn't really matter -- I am using trillions here, but any number large enough to be unimaginable will do; and there isn't actually any relationship between the number of atoms and the number of energy levels.  All of this is just simplification for the purpose of explanation.

Very well.  Consider this glass globe full of gas atoms.  There are two ways we can go about measuring its properties.  The easiest way is to measure its macroscopic properties.  These are properties such as volume, pressure, temperature, the number of atoms (in conveniently large units like the mole, almost a trillion times a trillion), the mass, and so on.  They're convenient because we have devices like thermometers, scales, barometers, etc., that we can use to do this.

But there is another way to measure the properties of the gas:  the microscopic way.  Here we take into account each atom and its quantity of kinetic energy, or some measure of its motion, one by one, and sum the whole thing up.  I'm sure you'll agree that this would be a tedious and, in practice, impossible way to make the measurements -- for one thing, even if we could do it at all (a very dubious if, to say the least), it would take nearly forever to get anywhere with any measurement.  Fortunately, however, there is a correspondence between these macroscopic and microscopic properties, or states as I shall now call them.  That correspondence is entropy, the heart of the Second Law.
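That correspondence has a famous quantitative form, Boltzmann's S = k ln W, where W is the number of microscopic configurations compatible with the macroscopic state.  As a toy illustration (my own sketch, not from the article), we can count W exactly for a tiny system using the "stars and bars" formula and convert it to an entropy:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, in joules per kelvin

def microstates(energy_units, atoms):
    """Number of ways to distribute indistinguishable energy units
    among distinguishable atoms (the 'stars and bars' count)."""
    return math.comb(energy_units + atoms - 1, atoms - 1)

W = microstates(10, 5)   # toy system: 10 energy units over 5 atoms
S = K_B * math.log(W)    # Boltzmann's formula: S = k ln W

print(W)  # 1001 configurations
print(S)  # a minuscule entropy, since the system is tiny
```

For a real mole of gas, W is so vast that only its logarithm is usable, which is exactly why entropy is defined through ln W.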

Recall the statement from Wikipedia:  "Entropy is a measure of the number of alternative microscopic configurations corresponding to a single macroscopic state."  That card deck of energy units assigned to each gas atom, like the cards in an actual deck, can be arranged in many, many ways.  For the 52 cards in a deck, recall, that number's base-ten logarithm is about 67.9; for the trillions of possible energy states among a trillion times a trillion atoms, the number of arrangements is so astronomically large that even its logarithm could hardly be written out in a format like this, possibly not in any format available in the universe.  From a microscopic view, the probability of any one particular arrangement might as well be zero, for all our ability to calculate it.  Like a well-shuffled deck of cards, there's just no useful information in it.  Another way of saying this, returning to our randomly moving globe of gas atoms, is that there is no way of doing any useful work with it.

That's for a well-shuffled deck of cards, or a highly randomized energy distribution among atoms.  What about a highly ordered deck or distribution?  First, we have to specify what we mean by "ordered."  For a deck, this might mean ordered by suit (spades, hearts, diamonds, and clubs, say) and by value (ace, king, queen, jack, ten ... two), while for our gas it could mean that one atom owns all the units of energy and all the others none.  I hope you can see that, defined this way, there is only one such distribution; and once we disturb it -- by shuffling the deck, or by letting the gas atoms bump into each other and thereby release or gain units of energy -- the distributions become progressively less and less ordered, eventually (though this may take an enormous amount of time) becoming highly randomized.  The overall macroscopic properties, such as temperature or pressure, don't change, but the number of ways those properties can be achieved increases dramatically.  This is why we talk about the "number of alternative microscopic configurations corresponding to a single macroscopic state", or entropy, and why we say that, in the cosmos as a whole, entropy is always increasing.  It is why the ordered state contains a great deal of information and can do a great deal of work, while increasing disorder, or entropy, means less of both.
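This drift from a single ordered distribution toward randomness is easy to simulate.  Below is a small sketch (my own toy model, not from the original text): all the energy starts on one atom, random pairwise "collisions" pass single units around, and the Shannon entropy of the distribution climbs while the total energy -- the macroscopic quantity -- never changes:

```python
import math
import random

random.seed(1)

N = 1000           # toy "atoms"
energy = [0] * N
energy[0] = N      # perfectly ordered: one atom owns every unit

def entropy_bits(levels):
    """Shannon entropy (in bits) of the energy distribution over atoms."""
    total = sum(levels)
    return -sum((e / total) * math.log2(e / total) for e in levels if e > 0)

before = entropy_bits(energy)   # 0.0 -- complete order

# Random pairwise collisions: one unit hops from atom a to atom b.
for _ in range(100_000):
    a, b = random.randrange(N), random.randrange(N)
    if energy[a] > 0:
        energy[a] -= 1
        energy[b] += 1

after = entropy_bits(energy)

print(sum(energy))    # still 1000: the macroscopic total is conserved
print(before, after)  # entropy rises from 0 toward its randomized maximum
```

Run it longer and the entropy creeps toward the fully randomized plateau, but chance alone never pins it exactly at the maximum -- the statistical character of the Second Law in miniature.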

Now if you've ever worked with decks of cards, you've noticed something quite obvious in retrospect:  you rarely go from perfect order to complete disorder in one shuffle.  There are many, many (also almost innumerable) in-between states of the deck that still have some order in some places and disorder in others.  In fact, even a completely randomized deck will have, by pure chance, some small pockets of order, which can still be exploited as information or for work.  The same is true in nature, of course, which is why the Second Law is really a statement of probabilities, not absolutes.  Or, to quote the late Jacob Bronowski in his famous book The Ascent of Man:  "It is not true that orderly states constantly run down to disorder.  It is a statistical law, which says that order will tend to vanish.  But statistics do not say 'always'.  Statistics allow order to be built up in some islands of the universe (here on earth, in you, in me, in the stars, in all kinds of places) while disorder takes over in others."
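Those residual pockets of order are easy to see in a quick experiment (again a sketch of mine, not from the text): shuffle a deck and count how often two adjacent cards happen to share a suit:

```python
import random

random.seed(0)  # any seed will do; the effect is statistical

# Build and shuffle a 52-card deck as (suit, rank) pairs.
deck = [(suit, rank) for suit in range(4) for rank in range(13)]
random.shuffle(deck)

# "Pockets of order": adjacent cards that happen to share a suit.
pockets = sum(1 for a, b in zip(deck, deck[1:]) if a[0] == b[0])

# Each of the 51 adjacent pairs matches suit with probability 12/51,
# so on average a fully random deck still shows about 12 such pairs.
print(pockets)
```

Complete randomization, in other words, does not mean the absence of every local coincidence of order -- only that no global pattern survives.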

Information, order, the capacity for work:  these are things that a universe always has to some degree, however incompletely.  Indeed, by our current understanding of cosmic evolution, our universe started off in a very high state of order, a perhaps highly improbable state of affairs, but one quite permissible under the laws of thermodynamics.  This initial high degree of order has allowed all the galaxies and stars, and atoms, and of course you and me and all other living things in this universe, to come into existence; and in the same way, it will see all these things we now regard as precious pass out of existence.  But do not despair.  Order can never run down to absolutely zero; or, from the opposite perspective, disorder, or entropy, can never increase to infinity, however great it becomes.  Because of this simultaneously subtle but obvious observation, life in some quantity -- organized consciousness in some form is maybe the better phrase -- doesn't have to completely vanish.  In fact, if reality really is infinite, as I suspect it is -- consisting of an infinite number of universes in an infinite space-time, all subtly different but all obeying the same fundamental laws of logic -- then we never have to worry about the light of mind being utterly snuffed out everywhere, for all times and places, at any point in any future.  That light certainly shone long before life on Earth began to organize, and will continue to shine, somehow, long after our solar system and even our entire universe has burnt out into a heatless cinder.

What I've put forth here is a necessarily very limited explanation of the Second Law of Thermodynamics, of order, disorder, entropy, information, and work.  There are many more explanations, and whatever you have gained from mine -- I presume more questions than answers -- you should seek deeper comprehension in others' explanations, and in your own mental work on the subject.  There are many topics I've overlooked, or hit upon only in sketchy form, concepts you may need to fully explore and gain clarity in first before grasping the Great Second Law.  If it is any consolation -- assuming you need any -- I am no doubt in much the same situation, perhaps even more so.  If so, I wish you prosperity in your quest for full comprehension, as I have had in my own.  Thank you for attending to these words.

Tidal forces gave moon its shape, according to new analysis

July 30, 2014
NASA's Lunar Reconnaissance Orbiter Camera acquired this image of the nearside of the moon in 2010. (Credit: NASA/GSFC/Arizona State University)
The shape of the moon deviates from a simple sphere in ways that scientists have struggled to explain. A new study by researchers at UC Santa Cruz shows that most of the moon's overall shape can be explained by taking into account tidal effects acting early in the moon's history.

The results, published July 30 in Nature, provide insights into the moon's early history, its orbital evolution, and its current orientation in the sky, according to lead author Ian Garrick-Bethell, assistant professor of Earth and planetary sciences at UC Santa Cruz.

As the moon cooled and solidified more than 4 billion years ago, the sculpting effects of tidal and rotational forces became frozen in place. The idea of a frozen tidal-rotational bulge, known as the "fossil bulge" hypothesis, was first described in 1898. "If you imagine spinning a water balloon, it will start to flatten at the poles and bulge at the equator," Garrick-Bethell explained. "On top of that you have tides due to the gravitational pull of the Earth, and that creates sort of a lemon shape with the long axis of the lemon pointing at the Earth."

But this fossil bulge process cannot fully account for the current shape of the moon. In the new paper, Garrick-Bethell and his coauthors incorporated other tidal effects into their analysis. They also took into account the large impact basins that have shaped the moon's topography, and they considered the moon's gravity field together with its topography.

Impact craters

Efforts to analyze the moon's overall shape are complicated by the large basins and craters created by powerful impacts that deformed the lunar crust and ejected large amounts of material. "When we try to analyze the global shape of the moon using spherical harmonics, the craters are like gaps in the data," Garrick-Bethell said. "We did a lot of work to estimate the uncertainties in the analysis that result from those gaps."

Their results indicate that variations in the thickness of the moon's crust caused by tidal heating during its formation can account for most of the moon's large-scale topography, while the remainder is consistent with a frozen tidal-rotational bulge that formed later.

A previous paper by Garrick-Bethell and some of the same coauthors described the effects of tidal stretching and heating of the moon's crust at a time 4.4 billion years ago when the solid outer crust still floated on an ocean of molten rock. Tidal heating would have caused the crust to be thinner at the poles, while the thickest crust would have formed in the regions in line with the Earth. Published in Science in 2010, the earlier study found that the shape of one area of unusual topography on the moon, the lunar farside highlands, was consistent with the effects of tidal heating during the formation of the crust.

"In 2010, we found one area that fits the tidal heating effect, but that study left open the rest of the moon and didn't include the tidal-rotational deformation. In this paper we tried to bring all those considerations together," Garrick-Bethell said.

Tidal heating and tidal-rotational deformation had similar effects on the moon's overall shape, giving it a slight lemon shape with a bulge on the side facing the Earth and another bulge on the opposite side. The two processes left distinct signatures, however, in the moon's gravity field. Because the crust is lighter than the underlying mantle, gravity signals reveal variations in the thickness of the crust that were caused by tidal heating.

Gravity field

Interestingly, the researchers found that the moon's overall gravity field is no longer aligned with the topography, as it would have been when the tidal bulges were frozen into the moon's shape. The principal axis of the moon's overall shape (the long axis of the lemon) is now separated from the gravity principal axis by about 34 degrees. (Excluding the large basins from the data, the difference is still about 30 degrees.)

"The moon that faced us a long time ago has shifted, so we're no longer looking at the primordial face of the moon," Garrick-Bethell said. "Changes in the mass distribution shifted the orientation of the moon. The craters removed some mass, and there were also internal changes, probably related to when the moon became volcanically active."

The details and timing of these processes are still uncertain. But Garrick-Bethell said the new analysis should help efforts to work out the details of the moon's early history. While the new study shows that tidal effects can account for the overall shape of the moon, tidal processes don't explain the topographical differences between the near side and the far side.

In addition to Garrick-Bethell, the coauthors of the paper include Viranga Perera, who worked on the study as a UCSC graduate student and is now at Arizona State University; Francis Nimmo, professor of Earth and planetary sciences at UCSC; and Maria Zuber, a planetary scientist at the Massachusetts Institute of Technology. This work was funded by the Ministry of Education of Korea through the National Research Foundation.

Friday, August 1, 2014

Today's parallels with 1914 are very worrying

Armed conflict is worsening in Gaza, Syria, Ukraine and Iraq, while financial problems in emerging markets are growing

The global financial crisis of 1914 was in some respects even bigger and more internationally all-embracing than its early 21st-century version. Photo: GETTY
When events escalate, it’s time to worry. Almost everyone will know that the assassination of Archduke Franz Ferdinand in Sarajevo lit the fuse on the First World War – or if they don’t, they’ve not been reading the newspapers, filled as they have been of late with retold accounts to mark the 100th anniversary of the war to end all wars.

Less well known is that the shooting was also the trigger for the first truly global financial crisis of the 20th century, one that in some respects was even bigger and internationally all-embracing than its early 21st-century version. As the clouds of war gathered, financial markets were gripped by panic, closing stock exchanges around the world and forcing governments to bail out and support banks in the same manner as today. In the City, restaurants and shops began refusing coinage and notes. Only gold would do as payment.

When borders closed, many foreign assets became worthless, causing a chain reaction of defaults, banking runs and insolvencies. It scarcely needs saying that the potential for meltdown had been almost wholly unanticipated by money markets and the central banks that oversaw them.

Only a few years previously, the British journalist Norman Angell had argued in his book The Great Illusion that countries had become so economically interdependent and integrated that it made war not just futile but virtually unthinkable.

Poor Mr Angell has been much misrepresented since as one who was blind to the geopolitical tensions of his age, and their ability to override the assumptions of rational, economic self-interest. In fact, he never actually said that war was impossible, only that no one had anything to gain from it.
None the less, he came to epitomise the misplaced complacency of his age. This was a time of unprecedented international travel and trade, of exchange of ideas and technology. It was entirely reasonable to assume that tribal, national and regional conflict was a thing of the past.

By now, you will have guessed where I am going with this. In some respects, the world as it was just before the Great War bore a remarkable resemblance to our own. Gaza, Ukraine, Iraq and Syria – with the S&P 500 reaching new highs on an almost daily basis, all these crises have been met with a quite astonishing degree of indifference by financial markets.

Even the latest Argentinian default has failed to have any significant effect on this blissful insouciance, though this showed ominous signs of cracking last night amid a serious sell-off in US equities.

With the benefit of hindsight, trigger events for wider geopolitical and economic upheaval are always obvious. It’s easy to see them looking back, not so easy looking forward. Shocking though it was, the assassination of the heir to the Austro-Hungarian throne initially had very little impact. It was not until nations started declaring war, a month after the event, that markets became seriously rattled. Right up until the last moment, investors managed to convince themselves that things would turn out fine in the end.

Much the same point might be made about financial events. The collapse of Lehman’s, a comparatively minor investment bank, prompted the worst financial and economic crisis since the Great Depression. Few if any anticipated the scale of its impact. Similarly, the cascading series of banking collapses that marked the start of the Great Depression began with the failure of Creditanstalt, an Austrian bank that scarcely anyone had heard of at the time.

Looking at today’s events, a similar complacency afflicts investors and commentators as they weigh the carnage of the Middle East and the disgusting expansionism of Vladimir Putin’s Russia. I’ve lost count of the number of City reports I’ve read explaining why today’s geopolitical events don’t matter for financial markets.

These are considered small wars in faraway places, of no relevance – beyond the constant pounding of the 24-hour news agenda – for the economic powerhouses of the West. No major player in the global financial system, it is reasonably postulated, would be quite so stupid as to go to war over them. Well perhaps, but just consider the way events have already escalated. The murder of three Israeli teenagers – a shocking but tiny atrocity by the standards of the region – has led to the invasion of Gaza. Few could doubt, post this response, that Israel would also strike at Iran if Tehran gets any closer to arming itself with nuclear weapons.

Consider also the escalation of events in Ukraine, and the economically perilous ratcheting up of sanctions in retaliation. For a Europe still struggling to extract itself from the ravages of the financial crisis, these developments could hardly have come at a worse time.

All this might not matter so much if it were against the backdrop of a generally stable world economy. But very few would describe it as such. Pregnant with record amounts of debt – emerging markets are now piling it on with the same reckless abandon as the West – and highly reliant on the steroids of artificial monetary support, financial markets have rarely looked more vulnerable to unexpected shocks. I don’t want to over-egg the point, but the parallels with the calm before the storm of 100 years ago are impossible to ignore.

Are there emotional no-go areas where logic dare not show its face?

by Richard Dawkins

Are there kingdoms of emotion where logic is taboo, dare not show its face, zones where reason is too intimidated to speak?

Moral philosophers make full use of the technique of thought experiment. In a hospital there are four dying men. Each could be saved by a transplant of a different organ, but no donors are available. In the hospital waiting room is a healthy man who, if we killed him, could provide the requisite organ to each dying patient, thereby saving four lives for the price of one. Is it morally right to kill the healthy man and harvest his organs?

Everyone says no, but the moral philosopher wants to discuss the question further. Why is it wrong? Is it because of Kant’s Principle: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” How do we justify Kant’s principle? Are there ever exceptions? Could we imagine a hypothetical scenario in which . . .

What if the dying men were Beethoven, Shakespeare, Einstein and Martin Luther King? Would it be then right to sacrifice a man who is homeless and friendless, dragged in from a ditch? And so on.
Two miners are trapped underground by an explosion. They could be saved, but it would cost a million dollars. That million could be spent on saving the lives of thousands of starving people.
Could it ever be morally right to abandon the miners to their fate and spend the money on saving the thousands? Most of us would say no. Would you? Or do you think it is wrong even to raise such questions?

These dilemmas are uncomfortable. It is the business of moral philosophers to face up to the discomfort and teach their students to do the same. A friend, a professor of moral philosophy, told me he received hate-mail when he raised the hypothetical case of the miners. He also told me there are certain thought experiments that divide his students down the middle. Some students are capable of temporarily accepting a noxious hypothetical, to explore where it might lead. Others are so blinded by emotion that they cannot even contemplate the hypothetical. They simply stop up their ears and refuse to join the discussion.

“We all agree it isn’t true that some human races are genetically superior to others in intelligence. But let’s for a moment suspend disbelief and consider the consequences if it were true. Would it ever be right to discriminate in job hiring? Etcetera.” My friend sometimes poses this very question, and he tells me that about half the students are willing to entertain the hypothetical counterfactual and rationally discuss the consequences. The other half respond emotionally to the hypothetical, are too revolted to proceed and simply opt out of the conversation.

Could eugenics ever be justified? Could torture? A clock triggering a gigantic nuclear weapon hidden in a suitcase is ticking. A spy has been captured who knows where it is and how to disable it, but he refuses to speak. Is it morally right to torture him, or even his innocent children, to make him reveal the secret? What if the weapon were a doomsday machine that would blow up the whole world?

There are those whose love of reason allows them to enter such disagreeable hypothetical worlds and see where the discussion might lead. And there are those whose emotions prevent them from going anywhere near the conversation. Some of these will vilify and hurl vicious insults at anybody who is prepared to discuss such matters. Some will pursue active witch-hunts against moral philosophers for daring to consider obnoxious hypothetical thought experiments.

“A woman has an absolute right to do what she wants with her own body and that includes any foetus that it might contain. I don’t care if the foetus is fully conscious and writing poetry in the womb, the woman still has the right to abort it because it is her body and her choice.” Do we discuss the hypothetical intra-uterine poet, or does emotion simply close down the discussion, in either direction?
Do we think the woman’s right is absolute, absolute, absolute – end of? Or do we think abortion is wrong, wrong, wrong; abortion is murder, no further discussion?

“We agree that cannibalism is wrong. But if we don’t need to kill someone in order to eat them, can we discuss why it would be wrong? Why don’t we eat human road-kills? Yes, it would be horrible for the friends and relatives of the dead person, but suppose we hypothetically know that this person has no friends or relatives of any kind, why wouldn’t we eat him? Or is there a slippery slope that we should consider?” Do we proceed to discuss such questions rationally and logically with the professor of moral philosophy? Or do we throw an emotional fit and run screaming from the room?

I believe that, as non-religious rationalists, we should be prepared to discuss such questions using logic and reason. We shouldn’t compel people to enter into painful hypothetical discussions, but nor should we conduct witch-hunts against people who are prepared to do so. I fear that some of us may be erecting taboo zones, where emotion is king and where reason is not admitted; where reason, in some cases, is actively intimidated and dare not show its face. And I regret this. We get enough of that from the religious faithful. Wouldn’t it be a pity if we became seduced by a different sort of sacred, the sacred of the emotional taboo zone?

Moving from the hypothetical to the real, if you raise the question of female genital mutilation, you can guarantee that about half the responses you get will be of the form “What about male circumcision?” and this often seems calculated to derail the campaign against FGM and take the steam out of it. If you try and say “Yes yes, male infant circumcision may be bad but FGM is worse”, you will be stopped in your tracks. Both are violations of a defenceless child, you cannot discuss whether one is worse than the other. How dare you even think about ranking them?

When a show-business personality is convicted of pedophilia, is it right that you actually need courage to say something like this: “Did he penetratively rape children or did he just touch them with his hands? The latter is bad but I think the former is worse”? How dare you rank different kinds of pedophilia? They are all equally bad, equally terrible. What are you, some kind of closet pedophile yourself?

I have met the following reaction when discussing the vexed and terrible question of Israel/Palestine. Israeli friends have said to me things like, “We needed a Jewish state because, after the Holocaust, we realised that nobody else was going to look after us, we’d have to look after ourselves. Jews have been downtrodden for too long. From now on, we Jews are going to stand tall and take care of ourselves.” To which, on one occasion, I replied, “Yes, of course I sympathise with that, but can you explain why Palestinian Arabs should be the ones to pay for Hitler’s crimes? Why Palestine? You surely aren’t going to stoop to some kind of biblical justification for picking on that land rather than, say, Bavaria or Madagascar?” My friend earnestly said, “Richard, I think we had better just terminate this conversation.” I had blundered into another taboo zone, a sacred emotional sanctuary where discussion is forbidden. The emotions aroused by the Holocaust are so painful that we are not allowed even to discuss such questions. A friend will terminate the conversation rather than allow entry to the sanctuary of hurt emotion.

On Twitter during the current horrible events in Gaza, I wrote the following:
“The extent of the destruction in Gaza is obscene. Poor people. Poor people who have lost their homes, their relatives, everything.” I was immediately bitterly attacked by friends of Israel. But then I quoted Sam Harris to the effect that “Hamas publicly says they’d like to kill every Jew in the world” and I went on to raise Sam’s hypothetical question: What does that say about Hamas’s probable actions if positions were reversed and they had Israel’s military strength? Sam’s suggestion that this contrast might actually be demonstrating restraint on Israel’s part, unleashed a storm of furious accusations that he, and I, relished the bombing of Gaza’s children.

I also quoted Sam as saying “I don’t think Israel should exist as a Jewish state.” So of course I, and Sam, got vituperative brickbats from Israel and from American Jewish interests. I summed up my position on the fence (linking to an interview with Christopher Hitchens) as follows: “It is reasonable to deplore both the original founding of the Jewish State of Israel & aspirations now to destroy it.”
But I swiftly learned that emotion can be so powerful that reasonable discussion – looking at both sides of the question dispassionately – becomes impossible.

Apparently I didn’t learn swiftly enough – and I now turn to the other Twitter controversy in which I have been involved this week.

‘“Being raped by a stranger is bad. Being raped by a formerly trusted friend is worse.” If you think that hypothetical quotation is an endorsement of rape by strangers, go away and learn how to think.’

That was one way I put the hypothetical. It seemed to me entirely reasonable that the loss of trust, the disillusionment that a woman might feel if raped by a man whom she had thought to be a friend, might be even more horrible than violation by a stranger. I had previously put the opposite hypothetical, but drew an equivalent logical conclusion:

“Date rape is bad. Stranger rape at knifepoint is worse. If you think that’s an endorsement of date rape, go away and learn how to think.”

These two opposite hypothetical statements were both versions of the general case, which I also tweeted:

“X is bad. Y is worse. If you think that’s an endorsement of X, go away and don’t come back until you’ve learned how to think properly.”

The point was a purely logical one: to judge something bad and something else very bad is not an endorsement of the lesser of two evils. Both are bad. I wasn’t making a point about which of the two was worse. I was merely asserting that to express an opinion one way or the other is not tantamount to approving the lesser evil.

Some people angrily failed to understand that it was a point of logic using a hypothetical quotation about rape. They thought it was an active judgment about which kind of rape was worse than which.
Other people got the point of logic but attacked me, equally furiously, for choosing the emotionally loaded example of rape to illustrate it. To quote one blogger, prominent in the atheist movement, ‘What would have been wrong with, “Slapping someone’s face is bad, breaking their nose is worse”? Why need to use rape?’

Yes, I could have used the broken nose example. I accept that I must explain why I chose to use the particular example of rape. I was emphatically not trying to hurt rape victims or trivialise their awful experience. They get enough of that already from the “She was wearing a short skirt, I bet she was really begging for it Hur Hur Hur” brigade. So why did I choose rape as my unpleasant hypothetical (in both directions) rather than the “breaking someone’s nose” example? Here’s why.

I hope I have said enough above to justify my belief that rationalists like us should be free to follow moral philosophic questions without emotion swooping in to cut off all discussion, however hypothetical. I’ve listed cannibalism, trapped miners, transplant donors, aborted poets, circumcision, Israel and Palestine, all examples of no-go zones, taboo areas where reason may fear to tread because emotion is king. Broken noses are not in that taboo zone. Rape is. So is pedophilia. They should not be, in my opinion. Nor should anything else.

I didn’t know quite how deeply those two sensitive issues had infiltrated the taboo zone. I know now, with a vengeance. I really do care passionately about reason and logic. I think dispassionate logic and reason should not be banned from entering into discussion of cannibalism or trapped miners. And I was distressed to see that rape and pedophilia were also becoming taboo zones; no-go areas, off limits to reason and logic.

“Rape is rape is rape.”  You cannot discuss whether one kind of rape (say by a “friend”) is worse than another kind of rape (say by a stranger).  Rape is rape and you are not allowed even to contemplate the question of whether some rape is bad but other rape is worse.  I don’t want to listen to this horrible discussion.  The very idea of classifying some rapes as worse than others, whether it’s date rape or stranger rape, is unconscionable, unbearable, intolerable, beyond the pale, taboo.  There is no allowable distinction between one kind of rape and another.

If that were really right, judges shouldn’t be allowed to impose harsher sentences for some rapes than for others. Do we really want our courts to impose a single mandatory sentence – a life sentence, perhaps – for all rapes regardless? To all rapes, from getting a woman drunk and taking advantage at one end of the spectrum, to holding a knife to her throat in a dark alley at the other? Do we really want our judges to ignore such distinctions when they pass sentence? I don’t, and I don’t think any reasonable person would if they thought it through. And yet that would seem to be the message of the agonisingly passionate tweets that I have been reading. The message seems to be, no, there is no spectrum, you are wicked, evil, a monster, to even ask whether there might be a spectrum.

I don’t think rationalists and sceptics should have taboo zones into which our reason, our logic, must not trespass. Hypothetical cannibalism of human road kills should be up for discussion (and rejection in my opinion – but let’s discuss it). Same for eugenics. Same for circumcision and FGM. And the question of whether there is a spectrum of rapes, from bad to worse to very very much worse, should also be up for discussion, no less than the spectrum from a slap in the face to a broken nose.

There would have been no point in my using the broken nose example to illustrate my logic, because nobody would ever accuse us of endorsing face-slapping when we say, “Broken nose is worse than slap in face”. The point is trivially obvious, as it is with the symbolic case of “X is worse than Y”. But I knew that not everybody would think it obvious in the special cases of rape and pedophilia, and that was precisely why I raised them for discussion. I didn’t care whether we chose to say date rape was worse than dark alley stranger rape, or vice versa. Nor was I unaware that it is a sensitive issue, as is pedophilia. I deliberately wanted to challenge the taboo against rational discussion of sensitive issues.

That, then, is why I chose rape and pedophilia for my hypothetical examples. I think rationalists should be free to discuss spectrums of nastiness, even if only to reject them. I had noticed indications that rape and pedophilia had moved out of the discussion zone into a no-go taboo area. I wanted to challenge the taboo, just as I want to challenge all taboos against free discussion.

Nothing should be off limits to discussion. No, let me amend that. If you think some things should be off limits, let’s sit down together and discuss that proposition itself. Let’s not just insult each other and cut off all discussion because we rationalists have somehow wandered into a land where emotion is king.

It is utterly deplorable that there are people, including in our atheist community, who suffer rape threats because of things they have said. And it is also deplorable that there are many people in the same atheist community who are literally afraid to think and speak freely, afraid to raise even hypothetical questions such as those I have mentioned in this article. They are afraid – and I promise you I am not exaggerating – of witch-hunts: hunts for latter day blasphemers by latter day Inquisitions and latter day incarnations of Orwell’s Thought Police.

AAAS Board of Directors: Legally Mandating GM Food Labels Could “Mislead and Falsely Alarm Consumers”

Foods containing ingredients from genetically modified (GM) crops pose no greater risk than the same foods made from crops modified by conventional plant breeding techniques, the AAAS Board of Directors has concluded. Legally mandating labels on GM foods could therefore “mislead and falsely alarm consumers,” the Board said in a statement approved 20 October.

In releasing the Board’s statement, AAAS noted that it is important to distinguish between labeling intended to protect public health—about the presence of allergens, for example—and optional labeling that aids consumer decision-making, such as “kosher” or “USDA organic,” which reflects verifiable and certifiable standards about production and handling.

Several current efforts to require labeling of GM foods are not being driven by any credible scientific evidence that these foods are dangerous, AAAS said. Rather, GM labeling initiatives are being advanced by “the persistent perception that such foods are somehow ‘unnatural,’” as well as efforts to gain competitive advantages within the marketplace, and the false belief that GM crops are untested.
In the United States, in fact, each new GM crop must be subjected to rigorous analysis and testing in order to receive regulatory approval, AAAS noted. It must be shown to be the same as the parent crop from which it was derived, and if a new protein trait has been added, the protein must be shown to be neither toxic nor allergenic. “As a result and contrary to popular misconceptions,” AAAS reported, “GM crops are the most extensively tested crops ever.”

Moreover, the AAAS Board said, the World Health Organization, the American Medical Association, the U.S. National Academy of Sciences, the British Royal Society, and “every other respected organization that has examined the evidence has come to the same conclusion: consuming foods containing ingredients derived from GM crops is no riskier than consuming the same foods containing ingredients from crop plants modified by conventional plant improvement techniques.”

The European Commission (EC) recently concluded, based on more than 130 studies covering 25 years of research involving at least 500 independent research groups, that genetic modification technologies “are not per se more risky than…conventional plant breeding technologies.” Occasional claims that feeding GM foods to animals can cause health problems have not stood up to rigorous scientific scrutiny, AAAS said.

“Civilization rests on people’s ability to modify plants to make them more suitable as food, feed and fiber plants and all of these modifications are genetic,” the AAAS Board concluded. “Modern molecular genetics and the invention of large-scale DNA sequencing methods have fueled rapid advances in our knowledge of how genes work and what they do, permitting the development of new methods that allow the very precise addition of useful traits to crops, such as the ability to resist an insect pest or a viral disease, much as immunizations protect people from disease.”

Read the full statement by the AAAS Board of Directors on labeling of genetically modified foods.