Sunday, August 3, 2014

Corporate Profits Grow and Wages Slide

CORPORATE profits are at their highest level in at least 85 years. Employee compensation is at the lowest level in 65 years.
 
The Commerce Department last week estimated that corporations earned $2.1 trillion during 2013, and paid $419 billion in corporate taxes. The after-tax profit of $1.7 trillion amounted to 10 percent of gross domestic product during the year, the first full year it has been that high. In 2012, it was 9.7 percent, itself a record.
 
Until 2010, the highest level of after-tax profits ever recorded was 9.1 percent, in 1929, the first year that the government began calculating the number.
Before taxes, corporate profits accounted for 12.5 percent of the total economy, tying the previous record that was set in 1942, when World War II pushed up profits for many companies. But in 1942, most of those profits were taxed away. The effective corporate tax rate was nearly 55 percent, in sharp contrast to last year’s figure of under 20 percent.
 
The trend of higher profits and lower effective taxes has been gaining strength for years, but really picked up after the Great Recession temporarily depressed profits in 2009. The effective rate has been below 20 percent in three of the last five years. Before 2009, the rate had not been that low since 1931.
 
The statutory top corporate tax rate in the United States is 35 percent, and corporations have been vigorously lobbying to reduce that, saying it puts them at a competitive disadvantage against companies based in other countries, where rates are lower. But there are myriad tax credits, deductions and preferences available, particularly to multinational companies, and the result is that effective tax rates have fallen for many companies.
 
The Commerce Department also said total wages and salaries last year amounted to $7.1 trillion, or 42.5 percent of the entire economy. That was down from 42.6 percent in 2012 and was lower than in any year previously measured.
 
Including the cost of employer-paid benefits, like health insurance and pensions, as well as the employer’s share of Social Security and Medicare contributions, the total cost of compensation was $8.9 trillion, or 52.7 percent of G.D.P., down from 53 percent in 2012 and the lowest level since 1948.

Profits High, Wages Low

After-tax corporate profits in 2013 rose to a record of 10 percent of gross domestic product, while total compensation of employees slipped to a 65-year low. Corporate tax rates — under 20 percent of pretax corporate income in three of the last five years — have not been that low since Herbert Hoover was president. During the Obama administration, profits have taken a higher share of national income than during any administration since 1929.
[Chart: three panels showing after-tax corporate profits, employee compensation, and the effective corporate tax rate, each as a percentage of G.D.P., from 1929 through 2013, with recession years shaded. A companion table, "By presidential term," lists average after-tax corporate profits, the effective corporate tax rate, and employee compensation (all as a pct. of G.D.P.), plus the total and annualized change in the S.&P. 500, for every administration from Hoover through Obama, with the highest figure in each category highlighted.]
Benefits were a steadily rising cost for employers for many decades, but that trend seems to have ended. In 2013, benefits amounted to 10.2 percent of G.D.P., the lowest share since 2000.
 
One way to look at the current situation is to compare 2013 with 2006, the last full year before the recession began. Adjusted for inflation, corporate profits were 28 percent higher, before taxes, last year. But taxes were down by 21 percent, so after-tax profits were up by 36 percent. At the same time, total employee compensation was up by 5 percent, or less than the 7 percent increase in the working-age population over the same period.
 
Several reasons have been offered as explanations for the declining share of national income going to workers, including the effects of globalization, which has shifted some jobs to lower-paid overseas workers, and the declining bargaining power of unions.
 
The accompanying charts compare President Obama’s administration with each of his predecessors, going back to Herbert Hoover. After-tax corporate profits in President Obama’s five years in office have averaged 9.3 percent of G.D.P. That is a full two percentage points higher than the 7.2 percent averages under Lyndon B. Johnson and George W. Bush, previously the presidents with the highest ratios of corporate profits.
 
The stock market has reflected that strong performance. Through the end of March, the Standard & Poor’s 500-stock index was up 133 percent since Mr. Obama’s inauguration in 2009. Of the 13 presidents since 1929, only Bill Clinton and Franklin D. Roosevelt saw a larger total increase. On an annualized basis, the Obama administration gains come to 17.7 percent a year, higher than any of the previous presidents. The figures reflect price changes, and are not adjusted for dividends or inflation.

The Incredible Shrinking Dinosaurs

 
Saturday, August 2, 2014 18:22
 

For decades, paleontologists have been uncovering the remarkable evolutionary relationships between fearsome, two-legged, meat-eating dinosaurs and birds.

A new study suggests that the pace of the transition from one to the other was quick by dinosaur standards. In the 50 million years preceding the appearance of the first birds some 163 million years ago, the size and weight of theropods along the direct line of descent to birds shrank one group after another – slowly at first, but going into free fall during the final 10 to 15 million years once Maniraptors took the evolutionary baton from their direct ancestors, the Coelurosaurs.

The skeletal changes taking place during the 50-million-year dinosaur-to-bird transition were occurring four times faster than for dinosaurs as a whole, according to the analysis, conducted by an international team of researchers led by Michael Lee, with the South Australia Museum in Adelaide.

Rapid rates of change in body size have appeared before in the fossil record. Following a mass extinction at the end of the Cretaceous period some 65 million years ago, an event that drove non-avian dinosaurs extinct, the size and diversity of mammals exploded over a 15-million-year period, researchers say. This came as mammals began to fill ecological niches vacated by the late, great dinosaurs.

The interplay between evolution and ecological niches was likely at work for the ancestors to birds as well – in this case, the Great Escape.

Some researchers surmise that the changes in body size and skeletal structures that led to the first birds, particularly during the phase of accelerated change, could have occurred as the now-smaller theropods moved into trees to escape becoming another animal’s meal or to take advantage of new sources of food.

The continuing reduction in size needed to succeed as tree dwellers would have triggered a cascade of evolutionary changes, suggests University of Bristol paleontologist Mike Benton. These changes would have improved vision, improved the aerodynamics of forelimbs to allow for increasingly ambitious leaps from tree to tree, or encouraged the evolution of feathers to insulate the new tree dwellers.

“Being smaller and lighter in the land of giants, with rapidly evolving anatomical adaptations, provided these bird ancestors with new ecological opportunities,” Dr. Lee said in a prepared statement.

Past studies of animal sizes in the run-up to birds had looked at individual branches of the avian ancestral tree or used trees built from physical traits to establish relationships, but with no dates.

Lee and colleagues were able to take advantage of the explosion of small feathered theropod fossils coming out of China since the mid-1990s, known collectively as Paraves. These animals were trying to exploit various ways of getting from tree to tree – jumping, gliding, or parachuting, notes Dr. Benton in an article in the current issue of the journal Science. The article accompanies the analysis Lee and his colleagues performed.

The researchers gathered data on 1,549 skeletal traits from 120 species of theropods, including the length of the thigh bones and the ages of the specimens. The team used the femur as a marker of body mass. They then applied sophisticated statistical techniques to reconstruct the relationships among the species, establish their chronology, and track their evolutionary changes.

Some 200 million years ago, direct ancestors of the first birds tipped the scales at about 360 pounds. By about 175 million years ago, the typical weight of a new generation of direct ancestor had fallen to 100 pounds. Over the next 10 million to 15 million years, body weights would plummet, winding up at about a pound for the first birds.

The study is significant on two levels, suggests Daniel Field, a PhD candidate in paleontology at Yale University and a predoctoral fellow at the university’s Peabody Museum of Natural History.

Researchers have a good idea of what the pattern of evolutionary relationships is along that lineage, he says, "but we don't have quite as good an idea of how those evolutionary transitions actually played out." The analysis Lee and his colleagues have performed helps fill in that information.

But the study has broader implications, Mr. Field adds. The team amassed a remarkable set of data that will be valuable in its own right and raises additional, intriguing questions.

For instance, the Great Jurassic Shrink Off was apparent at each of 12 or more points along the main line of evolution between theropods and birds. Those points represent branches in the family tree where other theropods went off in their own evolutionary directions – directions in which body size either remained stable or often increased significantly, in one case giving the world Tyrannosaurus rex. It's a pattern that repeats along each branch. Explaining that repetition, even as the avian lineage was yielding ever smaller animals over the same time span, is a fresh mystery the data present, according to Field.







Source: http://www.ascensionearth2012.org/2014/08/the-incredible-shrinking-dinosaurs-video.html

Saturday, August 2, 2014

A Yellowstone Super Eruption: Another Doomsday Scenario put to Rest

August 2, 2014 Science
From Link:  http://www.fromquarkstoquasars.com/a-yellowstone-super-eruption-another-doomsday-scenario-put-to-rest/      

If you’ve heard of Yellowstone National Park then, chances are, you’ve heard doomsday scenarios about Yellowstone National Park. The 2005 movie “Supervolcano” highlights how these scenarios generally play out: Yellowstone erupts; people are drowned beneath mountains of lava; a looming cloud of sulfur dioxide gets carried over the globe; the Earth plunges into a volcanic winter; we all die.

Fun times…

In truth, Yellowstone is quite massive…and so is its underground magma reservoir. At 3,472 square miles (8,987 square km), the park is larger than Rhode Island and Delaware combined. And as we all know, a portion of the park sits on top of a giant volcanic caldera (a broad depression left by past eruptions that caps a huge reservoir of superhot liquid rock and gases). The underground magma chamber is about 37 miles long (60 km), 18 miles wide (30 km), and 3 to 7 miles deep (5 to 12 km). That may sound rather terrifying; however, fortunately for us, all that magma is tucked safely beneath the surface of the Earth.


But what if it wasn’t? What if Yellowstone erupted? Would the Earth be plunged into a volcanic winter, as some sources indicate?

Geologist Jake Lowenstern (scientist-in-charge of the Yellowstone Volcano Observatory) has the answers that we seek. According to Lowenstern, although the Yellowstone magma source is enormous, walls of lava won’t come pouring across the continent if there’s a super eruption. Instead, the lava flows would be limited to a 30-40 mile radius. Of course, this is still widespread enough to cause significant devastation. There would be no hope for any life forms living within this radius, and the surrounding areas would be engulfed in flames—forest fires would likely rage out of control…but a majority of the immediate damage would be contained within the surrounding area.

A bit dramatic, but you get the idea.
Photograph by Carlos Gutierrez/UPI/Landov via National Geographic

Most of the long-range damage would come from “cold ash” and pumice borne on the wind. Four or more inches (10 cm) would cover the ground within a radius of about 500 miles. This would prevent photosynthesis and destroy much of the plant life in the region. Lighter dustings would traverse the United States – polluting farms in the Midwest, covering cars in New York, and contaminating the Mississippi River. It would clog waterways and agricultural areas with toxic sludge. Thus, the worst outcome of this event would be the destruction of our food supplies and waterways.

It’s likely that we’d see a global effect on temperatures from all the extra particles in the Earth’s atmosphere. However, these effects would only last a few years as Yellowstone isn’t nearly big enough to cause the long-term catastrophes that we see play out in doomsday scenarios (so no need to worry about a new ice age).

Moreover, contrary to what Hollywood would have you believe, the eruption won’t come without warning.

A super eruption, like all volcanic eruptions, begins with an earthquake. And if Yellowstone were to have a super eruption, we’d have some big ones. These earthquakes would begin weeks or months before the final eruption. So this eruption wouldn’t come out of nowhere. In fact, most scientists agree that such an eruption won’t come at all as the caldera has gone through many regular eruptions that release pressure.

So it seems that you can add “A Yellowstone Super Eruption” to your list of ways that the world will not end (Yay!).

Deep Oceans Are Cooling Amidst A Sea of Modeling Uncertainty: New Research on Ocean Heat Content


Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University and author of Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

Two of the world’s premiere ocean scientists from Harvard and MIT have addressed the data limitations that currently prevent the oceanographic community from resolving the differences among various estimates of changing ocean heat content (in print but available here) [3]. They point out where future data is most needed so these ambiguities do not persist into the next several decades of change.
As a by-product of that analysis they 1) determined the deepest oceans are cooling, 2) estimated a much slower rate of ocean warming, 3) highlighted where the greatest uncertainties existed due to the ever changing locations of heating and cooling, and 4) specified concerns with previous methods used to construct changes in ocean heat content, such as Balmaseda and Trenberth’s re-analysis (see below) [13]. They concluded, “Direct determination of changes in oceanic heat content over the last 20 years are not in conflict with estimates of the radiative forcing, but the uncertainties remain too large to rationalize e.g., the apparent “pause” in warming.”

Wunsch and Heimbach (2014) humbly admit that their “results differ in detail and in numerical values from other estimates, but determining whether any are “correct” is probably not possible with the existing data sets.”

They estimate the changing states of the ocean by synthesizing diverse data sets using models developed by the consortium for Estimating the Circulation and Climate of the Ocean, ECCO. The ECCO “state estimates” have eliminated deficiencies of previous models and they claim, “unlike most “data assimilation” products, [ECCO] satisfies the model equations without any artificial sources or sinks or forces. The state estimate is from the free running, but adjusted, model and hence satisfies all of the governing model equations, including those for basic conservation of mass, heat, momentum, vorticity, etc. up to numerical accuracy.”

Their results (Figure 18, below) suggest a flattening or slight cooling in the upper 100 meters since 2004, in agreement with the -0.04 Watts/m2 cooling reported by Lyman (2014) [6]. The consensus of previous researchers has been that temperatures in the upper 300 meters have flattened or cooled since 2003 [4], while Wunsch and Heimbach (2014) found the upper 700 meters still warmed up to 2009.

The deep layers contain twice as much heat as the upper 100 meters, and overall exhibit a clear cooling trend for the past two decades. Unlike the upper layers, which are dominated by the annual cycle of heating and cooling, they argue that deep ocean trends must be viewed as part of the ocean’s long term memory, which is still responding to “meteorological forcing of decades to thousands of years ago”. If Balmaseda and Trenberth’s model of deep ocean warming were correct, any increase in ocean heat content must have occurred between 700 and 2000 meters, but the mechanisms that would warm that “middle layer” remain elusive.
The detected cooling of the deepest oceans is quite remarkable given geothermal warming from the ocean floor. Wunsch and Heimbach (2014) note, “As with other extant estimates, the present state estimate does not yet account for the geothermal flux at the sea floor whose mean values (Pollack et al., 1993) are of order 0.1 W/m2,” which is small but “not negligible compared to any vertical heat transfer into the abyss” [3]. (A note of interest: an increase in heat from the ocean floor has recently been associated with increased basal melt of Antarctica’s Thwaites glacier.) Since heated waters rise, I find it reasonable to assume that, at least in part, any heating of the “middle layers” likely comes from heat that was stored in the deepest ocean decades to thousands of years ago.

Wunsch and Heimbach (2014) emphasize the many uncertainties involved in attributing the cause of changes in the overall heat content concluding, “As with many climate-related records, the unanswerable question here is whether these changes are truly secular, and/or a response to anthropogenic forcing, or whether they are instead fragments of a general red noise behavior seen over durations much too short to depict the long time-scales of Fig. 6, 7, or the result of sampling and measurement biases, or changes in the temporal data density.”

Given those uncertainties, they concluded that much less heat is being added to the oceans compared to claims in previous studies (seen in the table below). It is interesting to note that compared to Hansen’s study that ended in 2003 before the observed warming pause, subsequent studies also suggest less heat is entering the oceans. Whether those declining trends are a result of improved methodologies, or due to a cooler sun, or both requires more observations.


Study                      Years Examined    Watts/m2
Hansen 2005 [9]            1993-2003         0.86 +/- 0.12
Lyman 2010 [5]             1993-2008         0.64 +/- 0.11
von Schuckmann 2011 [10]   2005-2010         0.54 +/- 0.1
Wunsch 2014 [3]            1992-2011         0.2 +/- 0.1
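To give a feel for what those fluxes mean in absolute terms, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not from any of the papers). It assumes each W/m2 figure applies uniformly over the ocean surface and uses a round 3.6 x 10^14 m2 for that area; the studies actually differ in whether they normalize by ocean or whole-Earth surface area, so treat the output only as order-of-magnitude context.

OCEAN_AREA_M2 = 3.6e14        # rough global ocean surface area (assumed convention)
SECONDS_PER_YEAR = 3.156e7

estimates_w_per_m2 = {
    "Hansen 2005": 0.86,
    "Lyman 2010": 0.64,
    "von Schuckmann 2011": 0.54,
    "Wunsch 2014": 0.20,
}

for study, flux in estimates_w_per_m2.items():
    # watts x area x seconds = joules; divide by 1e21 to express in zettajoules
    zettajoules_per_year = flux * OCEAN_AREA_M2 * SECONDS_PER_YEAR / 1e21
    print(f"{study:>20}: about {zettajoules_per_year:.1f} ZJ added per year")

Under those assumptions the Wunsch estimate works out to roughly 2 ZJ per year, versus roughly 10 ZJ per year for the Hansen figure, which is the size of the disagreement the table is describing.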

No climate model had predicted the dramatically rising temperatures in the deep oceans calculated by the Balmaseda/Trenberth re-analysis [13], and oceanographers suggest such a sharp rise is more likely an artifact of shifting measuring systems. Indeed the unusual warming correlates with the switch to the Argo observing system. Wunsch and Heimbach (2013) [2] wrote, “clear warnings have appeared in the literature—that spurious trends and values are artifacts of changing observation systems (see, e.g., Elliott and Gaffen, 1991; Marshall et al., 2002; Thompson et al., 2008)—the reanalyses are rarely used appropriately, meaning with the recognition that they are subject to large errors” [3].
More specifically Wunsch and Heimbach (2014) warned, “Data assimilation schemes running over decades are usually labeled “reanalyses.” Unfortunately, these cannot be used for heat or other budgeting purposes because of their violation of the fundamental conservation laws; see Wunsch and Heimbach (2013) for discussion of this important point. The problem necessitates close examination of claimed abyssal warming accuracies of 0.01 W/m2 based on such methods (e.g., Balmaseda et al., 2013).” [3]

So who to believe?

Because ocean heat is stored asymmetrically and that heat is shifting 24/7, any limited sampling scheme will be riddled with large biases and uncertainties. In Figure 12 below, Wunsch and Heimbach (2014) map the uneven densities of regionally stored heat. Apparently associated with its greater salinity, most of the central North Atlantic stores twice as much heat as any part of the Pacific and Indian Oceans. Regions where there are steep heat gradients require a greater sampling effort to avoid misleading results. They warned, “The relatively large heat content of the Atlantic Ocean could, if redistributed, produce large changes elsewhere in the system and which, if not uniformly observed, show artificial changes in the global average.” [3]


Furthermore, due to the constant time-varying heat transport, regions of warming are usually compensated by regions of cooling, as illustrated in their Figure 15. It offers a wonderful visualization of the current state of those natural ocean oscillations by comparing changes in heat content between 1992 and 2011. Those patterns of heat re-distribution involve enormous amounts of heat, and that makes detecting changes in heat content many magnitudes smaller extremely difficult. Again, any uneven sampling regime in time or space would result in “artificial changes in the global average”.

Figure 15 shows the most recent effects of La Nina and the negative Pacific Decadal Oscillation. The eastern Pacific has cooled, while simultaneously the intensifying trade winds have swept more warm water into the western Pacific, causing it to warm. Likewise, heat stored in the mid‑Atlantic has likely been transported northward, as that region has cooled while simultaneously the sub‑polar seas have warmed. This northward change in heat content is in agreement with earlier discussions about cycles of warm water intrusions that affect Arctic sea ice, confound climate models of the Arctic, and control the distribution of marine organisms.

Most interesting is the observed cooling throughout the upper 700 meters of the Arctic. There have been two competing explanations for the unusually warm Arctic air temperatures that weigh heavily in the global average. CO2-driven hypotheses argue global warming has reduced polar sea ice that previously reflected sunlight, and now the exposed dark waters are absorbing more heat and raising water and air temperatures. But clearly a cooling upper Arctic Ocean suggests any absorbed heat is insignificant. Despite greater inflows of warm Atlantic water, declining heat content of the upper 700 meters supports the competing hypothesis that warmer Arctic air temperatures are, at least in part, the result of increased ventilation of heat that was previously trapped by a thick insulating ice cover [7].
That second hypothesis is also in agreement with extensive observations that Arctic air temperatures had been cooling in the 80s and 90s. Warming occurred after subfreezing winds, re‑directed by the Arctic Oscillation, drove thick multi-year ice out from the Arctic [11].

Regional cooling is also detected along the storm track from the Caribbean and along eastern USA. This evidence contradicts speculation that hurricanes in the Atlantic will or have become more severe due to increasing ocean temperatures. This also confirms earlier analyses of blogger Bob Tisdale and others that Superstorm Sandy was not caused by warmer oceans.

In order to support their contention that the deep ocean has been dramatically absorbing heat, Balmaseda/Trenberth must provide a mechanism and the regional observations where heat has been carried from the surface to those depths. But few are to be found. Warming at great depths and simultaneous cooling of the surface is antithetical to climate model predictions. Models had predicted global warming would store heat first in the upper layer and stratify that layer. Diffusion would require hundreds to thousands of years, so it is not the mechanism. Trenberth, Rahmstorf, and others have argued the winds could drive heat below the surface. Indeed winds can drive heat downward in a layer that oceanographers call the “mixed layer,” but the depth where wind mixing occurs is restricted to a layer roughly 10-200 meters thick over most of the tropical and mid-latitude belts. And those depths have been cooling slightly.

The only other possible mechanism that could reasonably explain heat transfer to the deep ocean was that the winds could tilt the thermocline. The thermocline delineates a rapid transition between the ocean’s warm upper layer and cold lower layer. As illustrated above in Figure 15, during a La Nina warm waters pile up in the western Pacific and deepen the thermocline. But the tilting Pacific thermocline typically does not dip below 700 meters, if ever [8].

Unfortunately the analysis by Wunsch and Heimbach (2014) does not report on changes in the layer between 700 meters and 2000 meters. However based on changes in heat content below 2000 meters (their Figure 16 below), deeper layers of the Pacific are practically devoid of any deep warming.
The one region transporting the greatest amount of heat into the deep oceans is the ice forming regions around Antarctica, especially the eastern Weddell Sea, where annual sea ice has been expanding [12]. Unlike the Arctic, the Antarctic is relatively insulated from intruding subtropical waters (discussed here), so any deep warming is mostly from heat descending from above, with a small contribution from geothermal heating.

Counter‑intuitively, greater sea ice production can deliver relatively warmer subsurface water to the ocean abyss. When oceans freeze, the salt is ejected to form a dense brine with a temperature that always hovers at the freezing point. Typically this unmodified water is called shelf water. Dense shelf water readily sinks to the bottom of the polar seas. However, in transit to the bottom, shelf water must pass through layers of variously modified Warm Deep Water or Antarctic Circumpolar Water.
Turbulent mixing also entrains some of the warmer water down to the abyss. Warm Deep Water typically comprises 62% of the mixed water that finally reaches the bottom. Any altered dynamic (such as increasing sea ice production, or circulation effects that entrain a greater proportion of Warm Deep Water) can redistribute more heat to the abyss [14]. Due to the Antarctic Oscillation, the warmer waters carried by the Antarctic Circumpolar Current have been observed to undulate southward, bringing those waters closer to ice forming regions. Shelf waters have generally cooled and there has been no detectable warming of the Warm Deep Water core, so this region’s deep ocean warming is likely just re-distributing heat and not adding to the ocean heat content.

So it remains unclear if and how Trenberth’s “missing heat” has sunk to the deep ocean. The depiction of a dramatic rise in deep ocean heat is highly questionable, even though alarmists have flaunted it as proof of CO2’s power. As Dr. Wunsch had warned earlier, “Convenient assumptions should not be turned prematurely into ‘facts,’ nor uncertainties and ambiguities suppressed.” … “Anyone can write a model: the challenge is to demonstrate its accuracy and precision… Otherwise, the scientific debate is controlled by the most articulate, colorful, or adamant players.” [1]

To reiterate, “the uncertainties remain too large to rationalize e.g., the apparent “pause” in warming.”

==================================

Literature Cited

1. C. Wunsch, 2007. The Past and Future Ocean Circulation from a Contemporary Perspective, in AGU Monograph, 173, A. Schmittner, J. Chiang and S. Hemming, Eds., 53-74
2. Wunsch, C. and P. Heimbach (2013) Dynamically and Kinematically Consistent Global Ocean Circulation and Ice State Estimates. In Ocean Circulation and Climate, Vol. 103. http://dx.doi.org/10.1016/B978-0-12-391851-2.00021-0
3. Wunsch, C., and P. Heimbach, (2014) Bidecadal Thermal Changes in the Abyssal Ocean, J. Phys. Oceanogr., http://dx.doi.org/10.1175/JPO-D-13-096.1
4. Xue,Y., et al., (2012) A Comparative Analysis of Upper-Ocean Heat Content Variability from an Ensemble of Operational Ocean Reanalyses. Journal of Climate, vol 25, 6905-6929.
5. Lyman, J. et al., (2010) Robust warming of the global upper ocean. Nature, vol. 465, 334-337.
6. Lyman, J. and G. Johnson (2014) Estimating Global Ocean Heat Content Changes in the Upper 1800m since 1950 and the Influence of Climatology Choice*. Journal of Climate, vol 27.
7. Rigor, I.G., J.M. Wallace, and R.L. Colony (2002), Response of Sea Ice to the Arctic Oscillation, J. Climate, v. 15, no. 18, pp. 2648 – 2668.
8. Zhang, R. et al. (2007) Decadal change in the relationship between the oceanic entrainment temperature and thermocline depth in the far western tropical Pacific. Geophysical Research Letters, Vol. 34.
9. Hansen, J., and others, 2005: Earth’s energy imbalance: confirmation and implications. Science, vol. 308, 1431-1435.
10. von Schuckmann, K., and P.-Y. Le Traon, 2011: How well can we derive Global Ocean Indicators from Argo data?, Ocean Sci., 7, 783-791, doi:10.5194/os-7-783-2011.
11. Kahl, J., et al., (1993) Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years. Nature, vol. 361, p. 335‑337, doi:10.1038/361335a0
12. Parkinson, C. and D. Cavalieri (2012) Antarctic sea ice variability and trends, 1979–2010. The Cryosphere, vol. 6, 871–880.
13. Balmaseda, M. A., K. E. Trenberth, and E. Kallen, 2013: Distinctive climate signals in reanalysis of global ocean heat content. Geophysical Research Letters, 40, 1754-1759.
14. Azaneau, M. et al. (2013) Trends in the deep Southern Ocean (1958–2010): Implications for Antarctic Bottom Water properties and volume export. Journal Of Geophysical Research: Oceans, Vol. 118

How the Ebola Outbreak Became Deadliest in History

Original Link:  https://richarddawkins.net/2014/08/how-the-ebola-outbreak-became-deadliest-in-history/
 
By Bahar Gholipour

The reasons why the Ebola outbreak in West Africa has grown so large, and why it is happening now, may have to do with the travel patterns of bats across Africa and recent weather patterns in the region, as well as other factors, according to a researcher who worked in the region.

The outbreak began with Ebola cases that surfaced in Guinea, and subsequently spread to the neighboring countries of Liberia and Sierra Leone. Until now, none of these three West African countries had ever experienced an Ebola outbreak, let alone cases involving a type of Ebola virus that had been found only in faraway Central Africa.

But despite the image of Ebola as a virus that mysteriously and randomly emerges from the forest, the sites of the cases are far from random, said Daniel Bausch, a tropical medicine researcher at Tulane University who just returned from Guinea and Sierra Leone, where he had worked as part of the outbreak response team.

“A very dangerous virus got into a place in the world that is the least prepared to deal with it,” Bausch told Live Science.

In a new article published today (July 31) in the journal PLOS Neglected Tropical Diseases, Bausch and a colleague reviewed the factors that potentially turned the current outbreak into the largest and deadliest Ebola outbreak in history. Although the focus is now on getting the outbreak under control, for long-term prevention, underlying factors need to be addressed, they said.

Here are five potential reasons why this outbreak is so severe:

The virus causing this outbreak is the deadliest type of Ebola virus.

The Ebola virus has five species, and each species has caused outbreaks in different regions. Experts were surprised to see that instead of the Taï Forest Ebola virus, which is found near Guinea, it was the Zaire Ebola virus that turned out to be the culprit in the current outbreak. This virus was previously found only in three countries in Central Africa: the Democratic Republic of the Congo, the Republic of the Congo and Gabon.

Zaire Ebola virus is the deadliest type of Ebola virus — in previous outbreaks it has killed up to 90 percent of those it infected.

But how did the Zaire Ebola virus get to Guinea? Few people travel between those two regions, and Guéckédou, the remote epicenter of the first cases of disease, is far off the beaten path, Bausch said. “If Ebola virus was introduced into Guinea from afar, the more likely traveler was a bat,” he said.
It is also possible that the virus was actually in West Africa before the current outbreak, circulating in bats — and perhaps even infected people but so sporadically that it was never recognized, Bausch said. Some preliminary analysis of blood samples collected from patients with other diseases before the outbreak suggests people in this region were exposed to Ebola previously, but more research is needed to know for sure.

Pondering The Second Law

I glanced through a few posts and comments the other day about creationism and evolution, in which the famous Second Law of Thermodynamics was mentioned several times.  I also know, or it is alleged, that the US Patent Office will not even consider any application if it defies the 2nd Law in any way -- but maybe that is a legend, I really don't know.  In either case, it got me thinking about the law, and the idea of laws of physics in general.  What is a law?

I always find Wikipedia a good starting place: that famous repository of seemingly all knowledge of learned minds, yet notorious at the same time because seemingly anyone can change its contents (I've never even tried).  I do know that when it comes to subjects I know something about, I've always found it both agreeable and further educational.  So I looked up the Second Law on it, and found this:  http://en.wikipedia.org/wiki/Second_law_of_thermodynamics

________________________________________

Second law of thermodynamics

From Wikipedia, the free encyclopedia
   
The second law of thermodynamics states that the entropy of an isolated system never decreases, because isolated systems always evolve toward thermodynamic equilibrium, a state with maximum entropy.

The second law is an empirically validated postulate of thermodynamics. In classical thermodynamics, the second law is a basic postulate defining the concept of thermodynamic entropy, applicable to any system involving measurable heat transfer. In statistical thermodynamics, the second law is a consequence of unitarity in quantum mechanics. In statistical mechanics information entropy is defined from information theory, known as the Shannon entropy. In the language of statistical mechanics, entropy is a measure of the number of alternative microscopic configurations corresponding to a single macroscopic state.

The second law refers to increases in entropy that can be analyzed into two varieties, due to dissipation of energy and due to dispersion of matter. One may consider a compound thermodynamic system that initially has interior walls that restrict transfers within it. The second law refers to events over time after a thermodynamic operation on the system, that allows internal heat transfers, removes or weakens the constraints imposed by its interior walls, and isolates it from the surroundings. As for dissipation of energy, the temperature becomes spatially homogeneous, regardless of the presence or absence of an externally imposed unchanging external force field. As for dispersion of matter, in the absence of an externally imposed force field, the chemical concentrations also become as spatially homogeneous as is allowed by the permeabilities of the interior walls. Such homogeneity is one of the characteristics of the state of internal thermodynamic equilibrium of a thermodynamic system.

________________________________________

There is more, much more, and please read it all, for it is good.  To begin, it immediately takes us to the concept of entropy, which is a measure of disorder in an (isolated, closed) system.

Yet, this has always struck me as bizarre and counter-intuitive.  Why don't we speak of the order in a system, in positive terms?  In science, as in everyday life, we are accustomed to measuring how much of something a thing has, not how much non-something it possesses.  So why isn't entropy the same, a measurement of what's there, not what's lacking?  It's as if we defined matter in terms of the space surrounding it.

The entropy of the cosmos is always increasing, we are also told, as another invocation of the Second Law.  Information content is always decreasing.  Efficiencies are always less than 100%.  We're always losing.  Growing older, and dying.  Death and decay -- what could be a better metaphor for a process that also describes a Carnot Engine?  How do all these ends tie together anyway?

________________________________________

Yet they do, in a very mathematical, and, yes, intuitive, way.  The mathematics I speak of here is that branch called Probability and Statistics.

Stop.  Don't run and hide for cover.  I'm not a mathematician, or even a physicist.  I'm just a plain old chemist, without even his PhD.  I'm not even going to look anything up, or present any strange looking equations or even charts to use.  I'm going to try to talk about it in the same down-to-earth language that I used in convincing myself of the validity of the Second Law years ago.

Think of a deck of cards.  Better yet, if you have one handy, go grab it.  Riffle through it, in your mind or in your hands (or in your mental hands, if you've got nimble ones).  What do you notice?  First, that all the cards are different -- ah, if this isn't the case, you aren't holding a proper deck.  If it is, do like me and count them.  Fifty-two of them, all spread before you, name and face cards, black and red, tops and bottoms.

Now shuffle them as randomly as you can (if you find this difficult, let your dog or a small child do it for you).  Drop them on the floor, kick them around for a while, then walk about, picking them here and there, at whim, until they're all in your hands again.  The only thing I ask you to do while doing this is to keep the faces (all different) down and the tops (all the same, I think) up.  Pick them all up.  Nudge every corner, every side, every edge, into place, so that the deck is neatly piled.

Now guess the first card.  A protest?  "I've only a one in fifty-two chance of being right,"  you exclaim in dismay.  If you did, that's good, for we're already making progress.  You have some sense of what a probability means.  One in fifty-two is not a very good chance.  You certainly wouldn't bet any money on it (unless you're a compulsive gambler, in which case Chance help you).

Another way of stating your predicament is that you haven't sufficient information to make a good guess.  If you could only know whether it was a red card or a black card, a face card or a number card, or something, anything like this, you could start thinking about your pocketbook before making your guess.  But you know nothing, nothing! Well, except that it has to be one of the fifty-two cards comprising the deck.

Hold on, because I'm going to make things worse.  What if I asked you to guess, not just the first card, but to guess every card in the deck?  If punching me in the mouth isn't your answer, you might just hunker down and wonder how to determine what your chances were of accomplishing such an amazing feat.  How could you do this?

This is where a little mathematics comes in.  Create a mental deck of cards in your head.  Choose the first card at random -- say, seven of spades.  That could have been any one of fifty-two cards, but you placed it first.  Then the second card -- what is it?  How many remaining cards could you have chosen?  Why, fifty-two minus one, equaling fifty-one.  Now the third card.  Fifty-one minus one, equaling fifty.  And so on, and on, and on, etc., until we come to the last card.

So in the placement of the cards you have 52 X 51 X 50 X ... all the way down to the last card, or X 1.  Mathematicians have a nice way of expressing a product like this:  it's called a factorial, and it's represented by the "!" symbol.  In this case, it would be fifty-two factorial, or 52!.

It's one thing to state it.  Actually carrying out the calculation, even with a calculator or on your computer, isn't very easy.  Fortunately, all those years ago I already did it, so I will present you with the approximate answer I recall (approximate because my calculator couldn't handle that many digits).  That answer is "8 X 10 E67".  This is another mathematical shorthand, meaning in this case "eight times ten multiplied by itself 67 times over".  Or, if you prefer, because this is a very, very large number, actually eight followed by sixty-seven zeroes, we can take its base-10 logarithm, around 67.9.  Well, round that off to 68, and you're there as good as not.
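For readers who want to check that arithmetic themselves, a couple of lines of Python (my addition for illustration, not part of the original essay) reproduce both numbers:

import math

orderings = math.factorial(52)                    # 52! -- distinct orderings of a standard deck
print(f"52! is about {orderings:.3e}")            # prints roughly 8.066e+67
print(f"log10(52!) is about {math.log10(orderings):.1f}")   # prints roughly 67.9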

A number like that is so large (though it's only about a trillionth of the number of atoms in the universe), that you wouldn't bet the tiniest tip of one hair leg of the louse living off the tip of one of your hair legs on it.  It might as well be infinite, as far as you're concerned.  But it's not infinite, not even the tiniest bit close, all of which brings me back to the subject of thermodynamics.
________________________________________
 
Heat, as we all now know thanks to Ludwig Boltzmann, is merely the random motion of atoms.  That motion, thanks to Newton's equations, represents the kinetic energy of the atoms, and with this in mind I am going to attempt a magical transformation:  Imagine, instead of those cards in a deck, the kinetic energies of all the atoms in a given body of matter.  As gas is the simplest state of matter, we'll work with that.  Imagine a hollow magic glass globe, if you like, filled with atoms (or molecules; in this analysis we can treat them the same) of a gas.  And instead of fifty-two cards, consider uncountable trillions upon trillions of different states of kinetic energy among the (otherwise alike) gas atoms.

I want to make this crystal clear.  I am not relating the cards to the individual atoms, but to the quantity of kinetic energy each particular atom has.  We can even quantize the energy in integer units, from one unit all the way up to -- well, as high as you wish.  If there are a trillion atoms of gas in this globe, then let's say that there are anywhere from one to a trillion energy levels available to each atom.  The precise number doesn't really matter -- I am using trillions here but any number large enough to be unimaginable will do; and there isn't actually any relationship between the number of atoms and the number of energy levels.  All of this is just simplification for the purpose of explanation.

Very well.  Consider this glass globe full of gas atoms.  There are two ways we can go about measuring its properties.  The easiest way is to measure its macroscopic properties.  These are properties such as volume, pressure, temperature, the number of atoms (in conveniently large units like a mole, or almost a trillion times a trillion), the mass, and so on.  They're convenient because we have devices like thermometers, scales, barometers, etc., that we can use to do this.

But there is another way to measure the properties of the gas:  the microscopic way.  In this, we take into account each atom and its quantity of kinetic energy, or some measure of its motion, one by one, and sum the whole thing up.  I'm sure you'll agree that this would be a very tedious, and in practice, absurd and impossible way to make the measurements -- for one thing, even if we could do it at all (a very dubious if, to say the least) it would take nearly forever to get anywhere with any measurement at all.  Fortunately, however, there is a correspondence between these macroscopic and microscopic properties, or states as I shall now call them.  That correspondence is via entropy, or the heart of the Second Law.

Recall the statement from Wiki:  "Entropy is a measure of the number of alternative microscopic configurations corresponding to a single macroscopic state."  That card deck of energy units assigned to each gas atom, like the cards in an actual deck, can be arranged in many, many ways:  about a number whose base ten logarithm is 67.9, recall.  Excuse me, that's for the 52 cards in a deck; for the trillions of possible energy states in a trillion X trillion atoms, that number would be astronomically large; so large that even its logarithm could not be expressed in a format like this, possibly not in any format available in the universe (if anyone can calculate that).  From a microscopic view, the probability of that particular state might as well be zero, for all our ability to calculate it.  Like a well-shuffled deck of cards, there's just no useful information in it.  Another way of saying this, returning to our randomly moving globe of gas atoms, is that there is no way of doing any useful work with it.

That's for a well-shuffled deck of cards / highly randomized energy distribution among atoms.  What about a highly ordered deck or distribution?  First, we have to specify what we mean by "ordered."  For a deck, this might mean ordered by suit (spades, hearts, diamonds, and clubs, say), and by value (ace, king, queen, jack, ten ... two), while for our gas it could mean that one atom owns all the units of energy, and all the others none.  I hope you can see that, defined this way, there is only one particular distribution; and once we, either by shuffling the deck or by allowing the gas atoms to bump into each other and thereby release or gain units of energy, disturb that arrangement, the distributions become progressively less and less ordered, eventually (though this may take an enormous amount of time) becoming highly randomized.  The overall macroscopic properties, such as temperature or pressure, don't change, but the ways those properties can be achieved increase dramatically.  This is why we talk about the "number of alternative microscopic configurations corresponding to a single macroscopic state", or entropy, and why we say that, in the cosmos as a whole, entropy is always increasing.  It is why the ordered state contains a great deal of information and can do a great deal of work, while increasing disorder, or entropy, means less of both.
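To make that counting of configurations concrete, here is a small Python sketch of my own, using toy numbers of atoms and energy quanta that I have made up purely for illustration.  It compares an ordered macrostate (one named atom holding every quantum, exactly one arrangement) with a spread-out macrostate (quanta free to sit on any atom), and converts each count into a Boltzmann entropy via S = k ln W:

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def arrangements(quanta, atoms):
    # Ways to distribute indistinguishable energy quanta among
    # distinguishable atoms (a stars-and-bars count).
    return math.comb(quanta + atoms - 1, atoms - 1)

n_atoms, n_quanta = 50, 100                  # toy numbers, chosen for illustration

w_ordered = 1                                # one named atom holds every quantum
w_spread = arrangements(n_quanta, n_atoms)   # quanta may sit on any atoms

for label, w in (("ordered", w_ordered), ("spread out", w_spread)):
    entropy = K_B * math.log(w)              # Boltzmann's relation: S = k ln W
    print(f"{label:>10}: W = {w:.3e}, S = {entropy:.2e} J/K")

Even with these toy numbers the spread-out macrostate has an astronomically larger count of microstates than the ordered one; scale the atoms and quanta up toward a trillion times a trillion and the ratio becomes the unimaginably large figure described above, which is why the shuffled, spread-out state is overwhelmingly the one we observe.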

Now if you've ever worked with decks of cards, you've noticed something quite obvious in retrospect:  you rarely go from perfect order to complete disorder in one shuffle.  There are many, many (also almost innumerable) in-between states of the deck that still have some order in some places with disorder in others.  In fact, even a completely randomized deck will have, by pure chance, some small pockets of order, which can still be exploited as information or for work.  The same is true in nature of course, which is why the Second Law is really a statement of probabilities, not absolutes.  Or, to quote the late Jacob Bronowski in his famous book Ascent of Man:  "It is not true that orderly states constantly run down to disorder.  It is a statistical law, which says that order will tend to vanish.  But statistics do not say 'always'.  Statistics allow order to be built up in some islands of the universe (here on earth, in you, in me, in the stars, in all kinds of places) while disorder takes over in others."

Information, order, the ability for work:  these are always things that a universe has to some degree, however incompletely.  Indeed, by our current understanding of cosmic evolution, our universe started off in a very high state of order, a perhaps highly improbable state of affairs, but quite permissible by the deeply understood laws of thermodynamics.  This initial high degree of order has allowed all the galaxies and stars, and atoms, and of course you and me and all other living things in this universe, to come into existence; and in the same way, will see all these things we regard as precious to us now pass out of existence.  But do not despair.  Order can never run down to absolutely zero; or, from the opposite perspective, disorder, or entropy, can never increase to infinity, however great it becomes; because of this simultaneously subtle but obvious observation, life in some quantity -- organized consciousness in some form is maybe the better word -- doesn't have to completely vanish.  In fact, if reality really is infinite, as I suspect it is -- consisting of an infinite number of universes in an infinite space-time, all subtly different but all obeying the same fundamental laws of logic -- then we never have to worry about the light of mind being utterly snuffed out everywhere, for all times and places, at any point in any future.  That light has certainly shone long before life on Earth began to organize, and will continue to, somehow, long after our solar system and even our entire universe has long burnt out into a heatless cinder.
________________________________________

What I've put forth here is a necessarily very limited explanation of the Second Law of Thermodynamics, of order, disorder, entropy, information, and work.  There are many more explanations, and whatever you have gained from mine -- I presume more questions than answers -- you should seek deeper comprehension in others' explanations, and in your own mental work on the subject.  There are many topics I've overlooked, or hit upon only in sketchy form, concepts you may need to fully explore and gain clarity in first before grasping the Great Second Law.  If it is any consolation -- assuming you need any -- I am no doubt in much the same situation, perhaps even more so.  If so, I wish you prosperity in your quest for full comprehension, as I have had in my own.  Thank you for attending to these words.

Tidal forces gave moon its shape, according to new analysis

July 30, 2014
NASA's Lunar Reconnaissance Orbiter Camera acquired this image of the nearside of the moon in 2010. (Credit: NASA/GSFC/Arizona State University)
 
The shape of the moon deviates from a simple sphere in ways that scientists have struggled to explain. A new study by researchers at UC Santa Cruz shows that most of the moon's overall shape can be explained by taking into account tidal effects acting early in the moon's history.

The results, published July 30 in Nature, provide insights into the moon's early history, its orbital evolution, and its current orientation in the sky, according to lead author Ian Garrick-Bethell, assistant professor of Earth and planetary sciences at UC Santa Cruz.

As the moon cooled and solidified more than 4 billion years ago, the sculpting effects of tidal and rotational forces became frozen in place. The idea of a frozen tidal-rotational bulge, known as the "fossil bulge" hypothesis, was first described in 1898. "If you imagine spinning a water balloon, it will start to flatten at the poles and bulge at the equator," Garrick-Bethell explained. "On top of that you have tides due to the gravitational pull of the Earth, and that creates sort of a lemon shape with the long axis of the lemon pointing at the Earth."

But this fossil bulge process cannot fully account for the current shape of the moon. In the new paper, Garrick-Bethell and his coauthors incorporated other tidal effects into their analysis. They also took into account the large impact basins that have shaped the moon's topography, and they considered the moon's gravity field together with its topography.

Impact craters

Efforts to analyze the moon's overall shape are complicated by the large basins and craters created by powerful impacts that deformed the lunar crust and ejected large amounts of material. "When we try to analyze the global shape of the moon using spherical harmonics, the craters are like gaps in the data," Garrick-Bethell said. "We did a lot of work to estimate the uncertainties in the analysis that result from those gaps."

Their results indicate that variations in the thickness of the moon's crust caused by tidal heating during its formation can account for most of the moon's large-scale topography, while the remainder is consistent with a frozen tidal-rotational bulge that formed later.

A previous paper by Garrick-Bethell and some of the same coauthors described the effects of tidal stretching and heating of the moon's crust at a time 4.4 billion years ago when the solid outer crust still floated on an ocean of molten rock. Tidal heating would have caused the crust to be thinner at the poles, while the thickest crust would have formed in the regions in line with the Earth. Published in Science in 2010, the earlier study found that the shape of one area of unusual topography on the moon, the lunar farside highlands, was consistent with the effects of tidal heating during the formation of the crust.

"In 2010, we found one area that fits the tidal heating effect, but that study left open the rest of the moon and didn't include the tidal-rotational deformation. In this paper we tried to bring all those considerations together," Garrick-Bethell said.

Tidal heating and tidal-rotational deformation had similar effects on the moon's overall shape, giving it a slight lemon shape with a bulge on the side facing the Earth and another bulge on the opposite side. The two processes left distinct signatures, however, in the moon's gravity field. Because the crust is lighter than the underlying mantle, gravity signals reveal variations in the thickness of the crust that were caused by tidal heating.

Gravity field

Interestingly, the researchers found that the moon's overall gravity field is no longer aligned with the topography, as it would have been when the tidal bulges were frozen into the moon's shape. The principal axis of the moon's overall shape (the long axis of the lemon) is now separated from the gravity principal axis by about 34 degrees. (Excluding the large basins from the data, the difference is still about 30 degrees.)

"The moon that faced us a long time ago has shifted, so we're no longer looking at the primordial face of the moon," Garrick-Bethell said. "Changes in the mass distribution shifted the orientation of the moon. The craters removed some mass, and there were also internal changes, probably related to when the moon became volcanically active."

The details and timing of these processes are still uncertain. But Garrick-Bethell said the new analysis should help efforts to work out the details of the moon's early history. While the new study shows that tidal effects can account for the overall shape of the moon, tidal processes don't explain the topographical differences between the near side and the far side.


In addition to Garrick-Bethell, the coauthors of the paper include Viranga Perera, who worked on the study as a UCSC graduate student and is now at Arizona State University; Francis Nimmo, professor of Earth and planetary sciences at UCSC; and Maria Zuber, a planetary scientist at the Massachusetts Institute of Technology. This work was funded by the Ministry of Education of Korea through the National Research Foundation.

Inquiry-based learning

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inquiry-based_learning ...