
Friday, May 29, 2015

Olbers' paradox


From Wikipedia, the free encyclopedia


Olbers' paradox in action

In astrophysics and physical cosmology, Olbers' paradox, named after the German astronomer Heinrich Wilhelm Olbers (1758–1840) and also called the "dark night sky paradox", is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. The darkness of the night sky is one of the pieces of evidence for a non-static universe such as the Big Bang model. If the universe is static, homogeneous at a large scale, and populated by an infinite number of stars, any sight line from Earth must end at the (very bright) surface of a star, so the night sky should be completely bright. This contradicts the observed darkness of the night.

History

Edward Robert Harrison's Darkness at Night: A Riddle of the Universe (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges, who was also the first to expound the Copernican system in English and also postulated an infinite universe with infinitely many stars.[1] Kepler also posed the problem in 1610, and the paradox took its mature form in the 18th century work of Halley and Cheseaux.[2] The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers, who described it in 1823, but Harrison shows convincingly that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable. Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little known 1901 paper,[3] and that Edgar Allan Poe's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument:
Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.[4]

The paradox


What if every line of sight ended in a star? (Infinite universe assumption #2)

The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark.

To show this, we divide the universe into a series of concentric shells, each 1 light year thick. A certain number of stars will lie in the shell 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell between 2,000,000,000 and 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear four times dimmer than a comparable star in the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell.

Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light. And with infinitely many shells there would be a bright night sky.
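
The shell argument can be made concrete with a short numerical sketch (my own illustration, not from the article); the star density and luminosity values are arbitrary placeholders, and the point is only that the r^2 growth in the number of stars per shell exactly cancels the 1/r^2 dimming, so the summed flux grows without bound as more shells are added.

import math

def flux_from_shell(r, dr=1.0, star_density=1.0, luminosity=1.0):
    """Flux received at Earth from all stars in a thin shell of radius r and thickness dr."""
    n_stars = star_density * 4 * math.pi * r**2 * dr      # star count grows as r^2
    flux_per_star = luminosity / (4 * math.pi * r**2)     # apparent brightness falls as 1/r^2
    return n_stars * flux_per_star                        # the r^2 factors cancel

total = sum(flux_from_shell(r) for r in range(1, 100_001))
print(total)   # grows linearly with the number of shells included: no finite limit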

Dark clouds could obstruct the light. But in that case the clouds would heat up, until they were as hot as stars, and then radiate the same amount of light.

Kepler saw this as an argument for a finite observable universe, or at least for a finite number of stars. In general relativity theory, it is still possible for the paradox to hold in a finite universe:[5] though the sky would not be infinitely bright, every point in the sky would still be like the surface of a star.

In a universe of three dimensions with stars distributed evenly, the number of stars within a given distance is proportional to the volume enclosed. If the surfaces of concentric spherical shells are considered, the number of stars on each shell is proportional to the square of the shell's radius. In the picture above, the shells are reduced to rings in two dimensions, with all of their stars placed on them.

The mainstream explanation

Poet Edgar Allan Poe suggested that the finite size of the observable universe resolves the apparent paradox.[6] More specifically, because the universe is finitely old and the speed of light is finite, only finitely many stars can be observed within a given volume of space visible from Earth (although the whole universe can be infinite in space).[7] The density of stars within this finite volume is sufficiently low that any line of sight from Earth is unlikely to reach a star.
However, the Big Bang theory introduces a new paradox: it states that the sky was much brighter in the past, especially at the end of the recombination era, when the universe first became transparent. All points of the local sky in that era were comparable in brightness to the surface of the Sun, owing to the high temperature of the universe then; and most light rays terminate not in a star but in the relic of the Big Bang.

This paradox is explained by the fact that the Big Bang theory also involves the expansion of space, which can cause the energy of emitted light to be reduced via redshift. More specifically, the extremely energetic radiation from the Big Bang has been redshifted to microwave wavelengths (1100 times longer than the original wavelengths) as a result of the cosmic expansion, and thus forms the cosmic microwave background radiation. This explains the relatively low light densities present in most of our sky despite the assumed bright nature of the Big Bang. The redshift also affects light from distant stars and quasars, but the diminution there is minor, since the most distant galaxies and quasars have redshifts of only around 5 to 8.6.
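
As a rough check on the figures above (my own sketch, assuming the standard recombination-era temperature of about 3000 K, a value also quoted in the Brightness section below), stretching wavelengths by a factor of roughly 1100 lowers the blackbody temperature of the relic radiation by the same factor, placing its peak at microwave wavelengths.

T_recombination = 3000.0   # K, approximate temperature when the universe became transparent
stretch = 1100.0           # wavelength stretch factor quoted above (roughly 1 + z)

T_observed = T_recombination / stretch    # blackbody temperature scales as 1/(1 + z)
wien_b = 2.898e-3                         # Wien displacement constant, m*K
peak_wavelength = wien_b / T_observed     # metres
print(round(T_observed, 1), "K, peak near", round(peak_wavelength * 1e3, 2), "mm")
# about 2.7 K, peaking near 1 mm, i.e. microwave wavelengths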

Alternative explanations

Steady state

The redshift hypothesised in the Big Bang model would by itself explain the darkness of the night sky, even if the universe were infinitely old. The steady state cosmological model assumed that the universe is infinitely old and uniform in time as well as space. There is no Big Bang in this model, but there are stars and quasars at arbitrarily great distances. The expansion of the universe will cause the light from these distant stars and quasars to be redshifted (by the Doppler effect), so that the total light flux from the sky remains finite. However, observations of the reduction in [radio] light-flux with distance in the 1950s and 1960s showed that it did not drop as rapidly as the Steady State model predicted. Moreover, the Steady State model predicts that stars should (collectively) be visible at all redshifts (provided that their light is not drowned out by nearer stars, of course). Thus, it does not predict a distinct background at fixed temperature as the Big Bang does. And the steady-state model cannot be modified to predict the temperature distribution of the microwave background accurately.[8]

Finite age of stars

Stars have a finite age and a finite power, thereby implying that each star has a finite impact on a sky's light field density. Edgar Allan Poe suggested that this idea could provide a resolution to Olbers' paradox; a related theory was also proposed by Jean-Philippe de Chéseaux. However, stars are continually being born as well as dying. As long as the density of stars throughout the universe remains constant, regardless of whether the universe itself has a finite or infinite age, there would be infinitely many other stars in the same angular direction, with an infinite total impact. So the finite age of the stars does not explain the paradox.[9]

Brightness

Suppose that the universe were not expanding, and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. Eventually, it would reach 3000 K (corresponding to a typical photon energy of 0.3 eV and so a frequency of 7.5×10^13 Hz), and the photons would begin to be absorbed by the hydrogen plasma filling most of the universe, rendering outer space opaque. This maximal radiation density corresponds to about 1.2×10^17 eV/m^3 = 2.1×10^−19 kg/m^3, which is much greater than the observed value of 4.7×10^−31 kg/m^3.[2] So the sky is about fifty billion times darker than it would be if the universe were neither expanding nor too young to have reached equilibrium yet.
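
The equivalence of the two density figures just quoted is a direct application of E = mc^2; the following quick check (my own, not from reference [2]) reproduces the quoted mass-equivalent value.

eV_to_J = 1.602176634e-19             # joules per electronvolt
c = 2.99792458e8                      # speed of light, m/s

energy_density = 1.2e17 * eV_to_J     # maximal radiation density, J/m^3
mass_density = energy_density / c**2  # divide by c^2 to get kg/m^3
print(f"{mass_density:.1e} kg/m^3")   # about 2.1e-19 kg/m^3, as quoted above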

Fractal star distribution

A different resolution, which does not rely on the Big Bang theory, was first proposed by Carl Charlier in 1908 and later rediscovered by Benoît Mandelbrot in 1974. They both postulated that if the stars in the universe were distributed in a hierarchical fractal cosmology (e.g., similar to Cantor dust)—the average density of any region diminishes as the region considered increases—it would not be necessary to rely on the Big Bang theory to explain Olbers' paradox. This model would not rule out a Big Bang but would allow for a dark sky even if the Big Bang had not occurred.

Mathematically, the light received from stars as a function of star distance in a hypothetical fractal cosmos is:
\text{light}=\int_{r_0}^\infty L(r) N(r)\,dr
where:
r0 = the distance of the nearest star. r0 > 0;
r = the variable measuring distance from the Earth;
L(r) = average luminosity per star at distance r;
N(r) = number of stars at distance r.

The function of luminosity from a given distance, L(r)N(r), determines whether the light received is finite or infinite. For any luminosity from a given distance L(r)N(r) proportional to r^a, \text{light} is infinite for a ≥ −1 but finite for a < −1. So if L(r) is proportional to r^−2, then for \text{light} to be finite, N(r) must be proportional to r^b, where b < 1. For b = 1, the number of stars at a given radius is proportional to that radius. When integrated over the radius, this implies that for b = 1 the total number of stars within radius r is proportional to r^2. This would correspond to a fractal dimension of 2. Thus the fractal dimension of the universe would need to be less than 2 for this explanation to work.
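
The convergence condition can be checked symbolically. The sketch below (my own illustration, not part of the article) integrates r^a from r_0 to infinity for a few illustrative exponents and confirms that the result is finite only when a < −1.

import sympy as sp

r, r0 = sp.symbols('r r0', positive=True)
for a in (-sp.Rational(1, 2), -1, -sp.Rational(3, 2), -3):
    total_light = sp.integrate(r**a, (r, r0, sp.oo))   # integral of L(r)N(r) ~ r**a
    print(a, total_light)   # oo for a >= -1; a finite expression in r0 for a < -1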

This explanation is not widely accepted among cosmologists, since the evidence suggests that the fractal dimension of the universe is at least 2.[10][11][12] Moreover, the majority of cosmologists accept the cosmological principle, which assumes that matter at the scale of billions of light years is distributed isotropically. In contrast, fractal cosmology requires an anisotropic matter distribution at the largest scales.

Companies rush to build ‘biofactories’ for medicines, flavorings and fuels

For scientist Jack Newman, creating a new life-form has become as simple as this: He types out a DNA sequence on his laptop. Clicks “send.” And a few yards away in the laboratory, robotic arms mix together some compounds to produce the desired cells.

Newman’s biotech company is creating new organisms, most of them forms of genetically modified yeast, at the dizzying rate of more than 1,500 a day. Some convert sugar into medicines. Others create moisturizers that can be used in cosmetics. And still others make biofuel, a renewable energy source usually made from corn.

“You can now build a cell the same way you might build an app for your iPhone,” said Newman, chief science officer of Amyris.

Some believe this kind of work marks the beginning of a third industrial revolution — one based on using living systems as “bio-factories” for creating substances that are either too tricky or too expensive to grow in nature or to make with petrochemicals.

The rush to biological means of production promises to revolutionize the chemical industry and transform the economy, but it also raises questions about environmental safety and biosecurity and revives ethical debates about “playing God.” Hundreds of products are in the pipeline.

Laboratory-grown artemisinin, a key anti-malarial drug, went on sale in April with the potential to help stabilize supply issues. A vanilla flavoring that promises to be significantly cheaper than the costly extract made from beans grown in rain forests is scheduled to hit the markets in 2014.

On Wednesday, Amyris announced another milestone — a memorandum of understanding with Brazil’s largest low-cost airline, GOL Linhas Aereas, to begin using a jet fuel produced by yeast starting in 2014.

Proponents characterize bio-factories as examples of “green technology” that are sustainable and immune to fickle weather and disease. Backers say they will reshape how we use land globally, reduce the cultivation of cash crops in places where that practice hurts the environment, break our dependence on pesticides and result in the closure of countless industrial factories that pollute the air and water.

But some environmental groups are skeptical.

They compare the spread of bio-factories to the large-scale burning of coal at the turn of the 20th century — a development with implications for carbon dioxide emissions and global warming that weren’t understood until decades later.

Much of the early hype surrounding this technology was about biofuels — the dream of engineering colonies of yeast that could produce enough fuel to power whole cities. It turned out that the technical hurdles were easier to overcome than the economic ones. Companies haven’t been able to find a way to produce enough of it to make the price affordable, and so far the biofuels have been used only in smaller projects, such as local buses and Amyris’s experiment with GOL’s planes.

But dozens of other products are close to market, including synthetic versions of fragrances extracted from grass, coconut oil and saffron powder, as well as a gas used to make car tires. Other applications are being studied in the laboratory: biosensors that light up when a parasite is detected in water; goats with spider genes that produce super-strength silk in their milk; and synthetic bacteria that decompose trash and break down oil spills and other contaminated waste at a rapid pace.

Revenue from industrial chemicals made through synthetic biology is already as high as $1.5 billion, and it will increase at an annual rate of 15 to 25 percent for the next few years, according to an estimate by Mark Bünger, an analyst for Lux Research, a Boston-based advisory firm that focuses on emerging technologies.
 
Reengineering yeast

Since it was founded a decade ago, Amyris has become a legend in the field that sits at the intersection of biology and engineering, creating more than 3 million organisms. Unlike traditional genetic engineering, which typically involves swapping a few genes, the scientists are building entire genomes from scratch.

Keeping bar-code-stamped vials in giant refrigerators at minus-80 degrees, the company’s repository in Emeryville, Calif., is one of the world’s largest collections of living organisms that do not exist in nature.

Ten years ago, when Newman was a postdoctoral student at the University of California at Berkeley, the idea of being able to program cells on a computer was fanciful.

Newman was working in a chemical engineering lab run by biotech pioneer Jay Keasling and helping conduct research on how to rewrite the metabolic pathways of microorganisms to produce useful substances.

Their first target was yeast.

The product of millions of years of evolution, the single-celled organism was capable of a miraculous feat: When fed sugar, it produced energy and excreted alcohol and carbon dioxide. Humans have harnessed this power for centuries to make wine, beer, cheese and other products. Could they tinker with some genes in the yeast to create a biological machine capable of producing medicine?

Excited about the idea of trying to apply the technology to a commercial product, Keasling, Newman and two other young post-docs — Keith Kinkead Reiling and Neil Renninger — started Amyris in 2003 and set their sights on artemisinin, an ancient herbal remedy found to be more than 90 percent effective at curing those infected with malaria.

It is harvested from the leaves of the sweet wormwood plant, but the supply of the plant had sometimes fluctuated in the past, causing shortages.

The new company lined up high-profile investors: the Bill & Melinda Gates Foundation, which gave $42.6 million to a nonprofit organization to help finance the research, and Silicon Valley luminaries John Doerr and Vinod Khosla, who as part of a group invested $20 million.

As of this month, Amyris said its partner, pharmaceutical giant Sanofi, has manufactured 35 tons of artemisinin — roughly equivalent to 70 million courses of treatment. The World Health Organization gave its stamp of approval to the drug in May, and the pills are being used widely.
 
Concerns about risks

The early scientific breakthroughs by the Amyris founders paved the way for dozens of other companies to do similar work. The next major product to be released is likely to be a vanilla flavoring by Evolva, a Swiss company that has laboratories in the San Francisco Bay area.

Cultivated in the remote forests of Madagascar, Mexico and the West Indies, natural vanilla is one of the world’s most revered spices. But companies that depend on the ingredient to flavor their products have long struggled with its scarcity and the volatility of its price.

Its chemically synthesized cousins, which are made from petrochemicals and paper pulp waste and are three to five times cheaper, have 99 percent of the vanilla market but have failed to match the natural version’s complexity.

Now scientists in a lab in Denmark believe they’ve created a type of vanilla flavoring produced by yeast that they say will be more satisfying to the palate and cheaper at the same time.

In Evolva’s case, much of the controversy has focused on whether the flavoring can be considered “natural.” Evolva boasts that it is, because only the substance used to produce the flavoring was genetically modified — not what people actually consume.

“From my point of view it’s fundamentally as natural as beer or bread,” said Evolva chief executive Neil Goldsmith, who is a co-founder of the company. “Neither brewer’s or baker’s yeast is identical to yeast in the wild. I’m comfortable that if beer is natural, then this is natural.”

That justification has caused an uproar among some consumer protection and environmental groups. They say that representing Evolva’s laboratory-grown flavoring as something similar to vanilla extract from an orchid plant is deceptive, and they have mounted a global campaign urging food companies to boycott the “vanilla grown in a petri dish.”

“Any ice-cream company that calls this all-natural vanilla would be committing fraud,” argues Jaydee Hanson, a senior policy analyst at the Center for Food Safety, a nonprofit public interest group based in Washington.

Jim Thomas, a researcher for the ETC Group, said there is a larger issue that applies to all organisms produced by synthetic biology techniques: What if they are accidentally released and evolve to have harmful characteristics?

“There is no regulatory structure or even protocols for assessing the safety of synthetic organisms in the environment,” Thomas said.

Then there’s the potential economic impact. What about the hundreds of thousands of small farmers who produce these crops now?

Artemisinin is farmed by an estimated 100,000 people in Kenya, Tanzania, Vietnam and China and the vanilla plant by 200,000 in Madagascar, Mexico and beyond.

Evolva officials say they believe there will still be a strong market for artisan ingredients like vanilla from real beans and that history has shown that these products typically attract an even higher premium when new products hit the market.

Other biotech executives say they are sympathetic, but that it is the price of progress. Amyris’s Newman says he is confused by environmental groups’ criticism and points to the final chapter of Rachel Carson’s “Silent Spring” — the seminal book that is credited with launching the environmental movement. In it, Carson mentions ways that science can solve the environmental hazards we have endured through years of use of fossil fuels and petrochemicals.

“The question you have to ask yourself is, ‘Is the status quo enough?’ ” Newman said. “We live in a world where things can be improved upon.”

Paradox of a charge in a gravitational field


From Wikipedia, the free encyclopedia

The special theory of relativity is known for its paradoxes: the twin paradox and the ladder-in-barn paradox, for example. Neither is a true paradox; they merely expose flaws in our understanding, and point the way toward a deeper understanding of nature. The ladder paradox exposes the breakdown of simultaneity, while the twin paradox highlights the distinctive role of accelerated frames of reference.

So it is with the paradox of a charged particle at rest in a gravitational field: an apparent conflict between the theories of electrodynamics and general relativity.

Recap of Key Points of Gravitation and Electrodynamics

It is a standard result from the Maxwell equations of classical electrodynamics that an accelerated charge radiates. That is, it produces an electric field that falls off as 1/r in addition to its rest-frame 1/r^2 Coulomb field. This radiation electric field has an accompanying magnetic field, and the whole oscillating electromagnetic radiation field propagates independently of the accelerated charge, carrying away momentum and energy. The energy in the radiation is provided by the work that accelerates the charge. We understand a photon to be the quantum of the electromagnetic radiation field, but the radiation field is a classical concept.

The theory of general relativity is built on the principle of the equivalence of gravitation and inertia. This means that it is impossible to distinguish through any local measurement whether one is in a gravitational field or being accelerated. An elevator out in deep space, far from any planet, could mimic a gravitational field to its occupants if it could be accelerated continuously "upward". Whether the acceleration is from motion or from gravity makes no difference in the laws of physics. This can also be understood in terms of the equivalence of so-called gravitational mass and inertial mass. The mass in Newton's law of gravity (gravitational mass) is the same as the mass in Newton's second law of motion (inertial mass). They cancel out when equated, with the result discovered by Galileo that all bodies fall at the same rate in a gravitational field, independent of their mass. This was famously demonstrated on the Moon during the Apollo 15 mission, when a hammer and a feather were dropped at the same time and, of course, struck the surface at the same time.

Closely tied in with this equivalence is the fact that gravity vanishes in free fall. For objects falling in an elevator whose cable is cut, all gravitational forces vanish, and things begin to look like the free-floating absence of forces one sees in videos from the International Space Station. One can find the weightlessness of outer space right here on earth: just jump out of an airplane. It is a lynchpin of general relativity that everything must fall together in free fall. Just as with acceleration versus gravity, no experiment should be able to distinguish the effects of free fall in a gravitational field from those of being out in deep space far from any forces.

Statement of the Paradox

Putting together these two basic facts of general relativity and electrodynamics, we seem to encounter a paradox. For if we dropped a neutral particle and a charged particle together in a gravitational field, the charged particle should begin to radiate as it is accelerated under gravity, thereby losing energy, and slowing relative to the neutral particle. Then a free-falling observer could distinguish free fall from true absence of forces, because a charged particle in a free-falling laboratory would begin to be pulled relative to the neutral parts of the laboratory, even though no obvious electric fields were present.

Equivalently, we can think about a charged particle at rest in a laboratory on the surface of the earth. Since we know the earth's gravitational field of 1 g is equivalent to being accelerated constantly upward at 1 g, and we know a charged particle accelerated upward at 1 g would radiate, why don't we see radiation from charged particles at rest in the laboratory? It would seem that we could distinguish between a gravitational field and acceleration, because an electric charge apparently only radiates when it is being accelerated through motion, but not through gravitation.

Resolution of the Paradox

The resolution of this paradox, like the twin paradox and ladder paradox, comes through appropriate care in distinguishing frames of reference. We follow the excellent development of Rohrlich (1965),[1] section 8-3, who shows that a charged particle and a neutral particle fall equally fast in a gravitational field, despite the fact that the charged one loses energy by radiation. Likewise, a charged particle at rest in a gravitational field does not radiate in its rest frame. The equivalence principle is preserved for charged particles.

The key is to realize that the laws of electrodynamics, the Maxwell equations, hold only in an inertial frame. That is, in a frame in which no forces act locally. This could be free fall under gravity, or far in space away from any forces. The surface of the earth is not an inertial frame. It is being constantly accelerated. We know the surface of the earth is not an inertial frame because an object at rest there may not remain at rest—objects at rest fall to the ground when released. So we cannot naively formulate expectations based on the Maxwell equations in this frame. It is remarkable that we now understand the special-relativistic Maxwell equations do not hold, strictly speaking, on the surface of the earth—even though they were of course discovered in electrical and magnetic experiments conducted in laboratories on the surface of the earth. Nevertheless, in this case we cannot apply the Maxwell equations to the description of a falling charge relative to a "supported", non-inertial observer.

The Maxwell equations can be applied relative to an observer in free fall, because free fall is an inertial frame. So the starting point of considerations is to work in the free-fall frame in a gravitational field—a "falling" observer. In the free-fall frame the Maxwell equations have their usual, flat-spacetime form for the falling observer. In this frame, the electric and magnetic fields of the charge are simple: the electric field is just the Coulomb field of a charge at rest, and the magnetic field is zero. As an aside, note that we are building in the equivalence principle from the start, including the assumption that a charged particle falls equally as fast as a neutral particle. Let us see if any contradictions arise.

Now we are in a position to establish what an observer at rest in a gravitational field, the supported observer, will see. Given the electric and magnetic fields in the falling frame, we merely have to transform those fields into the frame of the supported observer. This is not a Lorentz transformation, because the two frames have a relative acceleration. Instead we must bring to bear the machinery of general relativity.

In this case our gravitational field is fictitious because it can be transformed away in an accelerating frame. Unlike the total gravitational field of the earth, here we are assuming that spacetime is locally flat, so that the curvature tensor vanishes. Equivalently, the lines of gravitational acceleration are everywhere parallel, with no convergences measurable in the laboratory. Then the most general static, flat-space, cylindrical metric and line element can be written:

c^2 d\tau^2 = u^2(z)c^2dt^2 - \left ( {c^2\over g} {du\over dz}  \right )^2 dz^2 - dx^2 - dy^2
where c is the speed of light, \tau is proper time, x,y,z,t are the usual coordinates of space and time, g is the acceleration of the gravitational field, and u(z) is an arbitrary function of the coordinate but must approach the observed Newtonian value of 1+gz/c^2. This is the metric for the gravitational field measured by the supported observer.

Meanwhile, the metric in the frame of the falling observer is simply the Minkowski metric:

c^2 d\tau^2 = c^2 dt'^2 - dx'^2 - dy'^2 - dz'^2
From these two metrics Rohrlich constructs the coordinate transformation between them:
\begin{align}
x'=x &\qquad y'=y  \\
{g\over c^2} (z'-z_0') &= u(z) \cosh\left({g(t-t_0)\over c}\right) -1\\
{g\over c} (t'-t_0') &= u(z) \sinh\left({g(t-t_0)\over c}\right)
\end{align}
When this coordinate transformation is applied to the rest-frame electric and magnetic fields of the charge, the charge is found to be radiating—as expected for a charge falling away from a supported observer. Rohrlich emphasizes that this charge remains at rest in its free-fall frame, just as a neutral particle would. Furthermore, the radiation rate for this situation is Lorentz invariant, but it is not invariant under the coordinate transformation above, because that transformation is not a Lorentz transformation.
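
As a consistency check (my own sketch, not from Rohrlich or the article), one can verify symbolically that this coordinate transformation carries the flat falling-frame line element into the supported-frame metric quoted earlier; the symbol names below are chosen only for this illustration.

import sympy as sp

c, g, t, z, t0 = sp.symbols('c g t z t_0', positive=True)
z0p, t0p, dt, dz = sp.symbols('z0p t0p dt dz')
u = sp.Function('u')(z)
theta = g*(t - t0)/c

# falling-frame (primed) coordinates in terms of the supported-frame ones
zp = z0p + (c**2/g)*(u*sp.cosh(theta) - 1)
tp = t0p + (c/g)*u*sp.sinh(theta)

# differentials of the primed coordinates
dzp = sp.diff(zp, t)*dt + sp.diff(zp, z)*dz
dtp = sp.diff(tp, t)*dt + sp.diff(tp, z)*dz

# t-z part of the Minkowski line element; x and y are untouched by the transformation
ds2 = sp.expand((c**2*dtp**2 - dzp**2).rewrite(sp.exp))
print(sp.simplify(ds2))
# expected: c**2*u(z)**2*dt**2 - c**4*Derivative(u(z), z)**2*dz**2/g**2,
# i.e. the t and z terms of the supported-frame metric given above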

So a falling charge will appear to radiate to a supported observer, as expected. What about a supported charge, then? Does it not radiate due to the equivalence principle? To answer this question, start again in the falling frame.
In the falling frame, the supported charge appears to be accelerated uniformly upward. The case of constant acceleration of a charge is treated by Rohrlich [1] in section 5-3. He finds a charge e uniformly accelerated at rate g has a radiation rate given by the Lorentz invariant:

R={2\over 3}{e^2\over c^3} g^2
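
To get a sense of scale, here is a small numerical evaluation of this formula (my own sketch, not from Rohrlich), written in the Gaussian units the formula assumes, for an electron accelerated at the Earth's surface gravity; the radiated power turns out to be extraordinarily small.

# Gaussian (cgs) units, matching the form of the formula above
e = 4.803e-10        # electron charge in esu
c = 2.998e10         # speed of light in cm/s
g = 980.0            # gravitational acceleration at the Earth's surface, cm/s^2

R = (2.0 / 3.0) * e**2 * g**2 / c**3         # radiated power in erg/s
print(f"{R:.1e} erg/s = {R * 1e-7:.1e} W")   # of order 1e-52 W: utterly negligible
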
The corresponding electric and magnetic fields of an accelerated charge are also given in Rohrlich section 5-3. To find the fields of the charge in the supported frame, the fields of the uniformly accelerated charge are transformed according to the coordinate transformation given previously. When that is done, one finds no radiation in the supported frame from a supported charge, because the magnetic field is zero in this frame. Rohrlich does note that the gravitational field slightly distorts the Coulomb field of the supported charge, but the distortion is too small to be observable. So although the Coulomb law was of course discovered in a supported frame, relativity tells us the field of such a charge is not precisely 1/r^2.

The radiation from the supported charge is something of a curiosity: where does it go? Boulware (1980) [2] finds that the radiation goes into a region of spacetime inaccessible to the co-accelerating, supported observer. In effect, a uniformly accelerated observer has an event horizon, and there are regions of spacetime inaccessible to this observer. de Almeida and Saa (2006) [3] have a more-accessible treatment of the event horizon of the accelerated observer.

Global climate on verge of multi-decadal change

Date:  May 27, 2015
 
Source:  University of Southampton
 
Original link:  http://www.sciencedaily.com/releases/2015/05/150527133932.htm
 
Summary
 
The global climate is on the verge of broad-scale change that could last for a number of decades, a new study implies. The change to the new set of climatic conditions is associated with a cooling of the Atlantic, and is likely to bring drier summers in Britain and Ireland, accelerated sea-level rise along the northeast coast of the United States, and drought in the developing countries of the Sahel region.
The RAPID moorings being deployed.
Credit: National Oceanography Centre

A new study, by scientists from the University of Southampton and National Oceanography Centre (NOC), implies that the global climate is on the verge of broad-scale change that could last for a number of decades.

The change to the new set of climatic conditions is associated with a cooling of the Atlantic, and is likely to bring drier summers in Britain and Ireland, accelerated sea-level rise along the northeast coast of the United States, and drought in the developing countries of the Sahel region. Since this new climatic phase could be half a degree cooler, it may well offer a brief respite from the rise of global temperatures, as well as resulting in fewer hurricanes hitting the United States.

The study, published in Nature, proves that ocean circulation is the link between weather and decadal scale climatic change. It is based on observational evidence of the link between ocean circulation and the decadal variability of sea surface temperatures in the Atlantic Ocean.

Lead author Dr Gerard McCarthy, from the NOC, said: "Sea-surface temperatures in the Atlantic vary between warm and cold over time-scales of many decades. These variations have been shown to influence temperature, rainfall, drought and even the frequency of hurricanes in many regions of the world. This decadal variability, called the Atlantic Multi-decadal Oscillation (AMO), is a notable feature of the Atlantic Ocean and the climate of the regions it influences."

These climatic phases, referred to as positive or negative AMOs, are the result of the movement of heat northwards by a system of ocean currents. This movement of heat changes the temperature of the sea surface, which has a profound impact on climate on timescales of 20-30 years. The strength of these currents is determined by the same atmospheric conditions that control the position of the jet stream. Negative AMOs occur when the currents are weaker and so less heat is carried northwards towards Europe from the tropics.

The strength of ocean currents has been measured by a network of sensors, called the RAPID array, which have been collecting data on the flow rate of the Atlantic meridional overturning circulation (AMOC) for a decade.

Dr David Smeed, from the NOC and lead scientist of the RAPID project, adds: "The observations of AMOC from the RAPID array, over the past ten years, show that it is declining. As a result, we expect the AMO is moving to a negative phase, which will result in cooler surface waters. This is consistent with observations of temperature in the North Atlantic."

Since the RAPID array has only been collecting data for the last ten years, a longer data set was needed to prove the link between ocean circulation and slow climate variations. Therefore this study instead used 100 years of sea level data, maintained by the National Oceanography Centre's Permanent Service for Mean Sea Level. Models of ocean currents based on this data were used to predict how much heat would be transported around the ocean, and the impact this would have on the sea surface temperature in key locations.

Co-author Dr Ivan Haigh, lecturer in coastal oceanography at the University of Southampton, said: "By reconstructing ocean circulation over the last 100 years from tide gauges that measure sea level at the coast, we have been able to show, for the first time, observational evidence of the link between ocean circulation and the AMO."

Story Source:

The above story is based on materials provided by University of Southampton. Note: Materials may be edited for content and length.

Journal Reference:
  1. Gerard D. McCarthy, Ivan D. Haigh, Joël J.-M. Hirschi, Jeremy P. Grist, David A. Smeed. Ocean impact on decadal Atlantic climate variability revealed by sea-level observations. Nature, 2015; 521 (7553): 508 DOI: 10.1038/nature14491

Retracted Scientific Studies: A Growing List

Haruko Obokata, the lead scientist of a retracted stem cell study, at a news conference last year. Credit Kimimasa Mayama/European Pressphoto Agency
The retraction by Science of a study of changing attitudes about gay marriage is the latest prominent withdrawal of research results from scientific literature. And it very likely won't be the last. A 2011 study in Nature found a 10-fold increase in retraction notices during the preceding decade.

Many retractions barely register outside of the scientific field. But in some instances, the studies that were clawed back made major waves in societal discussions of the issues they dealt with. This list recounts some prominent retractions that have occurred since 1980.
  1. Vaccines and Autism

    In 1998, The Lancet, a British medical journal, published a study by Dr. Andrew Wakefield that suggested that autism in children was caused by the combined vaccine for measles, mumps and rubella. In 2010, The Lancet retracted the study following a review of Dr. Wakefield's scientific methods and financial conflicts.
    Despite challenges to the study, Dr. Wakefield's research had a strong effect on many parents. Vaccination rates tumbled in Britain, and measles cases grew. American antivaccine groups also seized on the research. The United States had more cases of measles in the first month of 2015 than the number that is typically diagnosed in a full year.

  2.  Stem Cell Production

    Papers published by Japanese researchers in Nature in 2014 claimed to provide an easy method to create multipurpose stem cells, with eventual implications for the treatment of diseases and injuries. Months later, the authors, including Haruko Obokata, issued a retraction. An investigation by one of Japan's most prestigious scientific institutes, where much of the research occurred, found that the lead author had manipulated some of the images published in the study.

    Approximately one month after the retraction, one of Ms. Obokata's co-authors, Yoshiki Sasai, was found hanging in a stairwell of his office. He had taken his own life.
  3. Cloning and Human Stem Cells

    Papers in 2004 and 2005 in the journal Science pointed to major progress in human cloning and the extraction of stem cells. When it became clear that much of the data was fabricated, Science eventually retracted both papers.

    Hwang Woo Suk, the lead author of the papers, was later convicted of embezzlement and bioethical violations in South Korea. In the years since his conviction, Dr. Hwang has continued working in the field, and was awarded an American patent in 2014 for the fraudulent work.

  4. John Darsee's Heart Research

    In 1983, Dr. John Darsee, a heart researcher at both Harvard Medical School and Emory University, was caught faking data in most of his 100 published pieces. That Dr. Darsee managed to slip through the system undetected for 14 years revealed, said Dr. Eugene Braunwald, his former superior at Harvard, ''the extraordinary difficulty of detecting fabrication by a clever individual.'' Dr. Darsee was barred from receiving federal funds for 10 years.

  5. Physics Discoveries at Bell Labs

    Between 1998 and 2001, Bell Labs announced a series of major breakthroughs in physics, including the creation of molecular-scale transistors. A panel found that 17 papers relied on fraudulent data, and blamed one scientist, J. Hendrik Schön. It did not fault Mr. Schön's co-authors. In 2004, the University of Konstanz in Germany stripped Mr. Schön of his Ph.D.

  6. Cancer in Rats and Herbicides

    The journal Food and Chemical Toxicology published a paper in 2012 that seemed to show that genetically modified corn and the herbicide Roundup caused premature death from cancer in rats. In 2013, the journal retracted the study, finding that the number of rats studied had been too small, and that the strain of rats had been prone to cancer. The lead author, Gilles-Eric Séralini, was not accused of fraud, and the paper was republished in 2014 in another journal.

  7. Pesticides and Estrogen

    A 1996 report in Science said mixtures of some pesticides might be endocrine disruptors and lead to a rise in estrogen hormones, causing cancer and birth defects in humans and animals. In 1997, the paper was withdrawn after its senior author, John A. McLachlan, admitted the results could not be reproduced. The paper's publication affected federal legislation and set off a frantic round of research.

  8. Diederik Stapel's Psychology Research 
     
    Several dozen published papers by Diederik Stapel, a psychology researcher at Tilburg University in the Netherlands, were based on falsified data and faked experiments. Dr. Stapel's studies, like one that found eating meat made people selfish, generated considerable media attention. Dr. Stapel admitted in an interview that his frauds were driven by "a quest for aesthetics, for beauty — instead of the truth."

  9. Marc Hauser's Cognition Research

    In 2010, Harvard University found that Marc Hauser, a researcher in animal and human cognition, had committed eight instances of scientific misconduct. Dr. Hauser retracted a 2002 paper in the journal Cognition about rule learning in monkeys, and corrected his research in other papers. A federal investigation also found that Dr. Hauser had committed fraud. Dr. Hauser left his post at Harvard in 2011.

Anti-GMO stance by Greenpeace, other environmental activists worsens climate change

| May 28, 2015 |
 
Original link:  http://www.geneticliteracyproject.org/2015/05/28/anti-gmo-stance-by-greenpeace-other-environmental-activists-worsens-climate-change/ 
 

Pretend for a moment that you lead an environmental group, dedicated to eliminating the causes of global climate change. As such an environmental leader, you’d be excited about a technology change that reduces the greenhouse gases behind climate change, right? Especially if this technology already has reduced greenhouse gas by the equivalent of nearly 12 million automobiles?

Not if that technology is genetically modified food. Non-government organizations that have taken a strident anti-GMO stance, like Greenpeace, resist any connection between climate change and genetically modified agriculture. Biotechnology isn’t even a significant concept in climate change efforts, particularly in Europe. Worse, this resistance isn’t as scientific as it is political.

A recent study by Philipp Aerni at the University of Zurich evaluated 55 representatives of 44 organizations that included business, US, European and Asian government agencies, academic institutions, organizations like the Intergovernmental Panel on Climate Change (IPCC), The World Bank/International Monetary Fund, Greenpeace, the World Wildlife Fund, and the Bill and Melinda Gates Foundation. Most of the participants looked favorably upon biotechnology and genetic modifications of crops, with the exception of a small number of advocacy groups, the most well-known of which was Greenpeace. But while participants were more willing to privately favor biotechnology as a source of solutions to climate change, few were willing to express that sentiment publicly. Aerni wrote:
Since core stakeholders in both debates (Greenpeace and the World Wildlife Fund are involved with both climate change and GMO issues) radically oppose the use of modern biotechnology, and can count on widespread public support, especially in affluent countries,…other stakeholders may also side with the popular view, even if that view may not be in line with their more pragmatic personal view.
Biotechnology may not be ‘clean tech’ as long as powerful environmental groups say it is not.
Science versus “because I said so”

Greenpeace, for its part, repeatedly points to three areas to show that “GM crops fail in climate change conditions:”
  • It cites a 2005 study that allegedly shows that “temperature fluctuations caused crop losses in GM cotton in China.” However the study only shows a drop in Bt Cry protein production in very hot conditions; no mention is made of crop loss. In fact, other studies show similar effects of heat on any kind of cotton, regardless of breeding.
  • The organization claims that GM soybeans “suffered unexpected losses in the US during hot weather,” citing a news report on a study conducted in 1999. However, scientists who read the study discovered that it did not even mention yield loss. Instead, they read about a loss of total crop that was marginally worse than for conventionally grown soybeans.
  • Finally, it blames monoculture and loss of biodiversity on genetically modified crops, which we have seen have nothing directly to do with genetic modifications of plants. Monoculture is an over-simplistic designation, and is an issue with any kind of large-scale agriculture.
The GM Contribution

Genetic modification has made significant inroads into curbing climate change. Drought- and salt-tolerant GM plants have been produced, and researchers are looking into developing more. Thriving under extreme heat also is an area of keen interest.

As for helping to prevent climate change in the first place, GM has not been much of a slouch. Adopting GM technology has reduced pesticide spraying by nearly 9 percent. This arises from replacing broad-range pesticides with glyphosate and in some cases 2,4-D, reducing the volume of pesticides used and the fossil fuel needed to spray them.

In addition, tillage, necessary for organic and some conventional farming, is usually not necessary for genetically modified crops. A Purdue University study found that no-till fields released 57 percent less nitrous oxide (another greenhouse gas) than fields that required tillage. Thus, less tillage sequesters more carbon and nitrogen in soil.

Finally, genetic modifications produce crops that can get higher yields using less space. This means that less land needs to be disrupted—fewer trees are removed, more plants are preserved and less carbon is released (not to mention the carbon dioxide taken up by plants). The USDA recently found that organic agriculture would require almost twice as much land as is necessary using conventional methods (measured against GM crops, that number is even more dramatic). Organic agriculture would require an extra 121.7 million acres to grow all US-produced food—that's an area the size of Spain.

Together, reducing spraying and tillage saved 2.1 billion kilograms of carbon dioxide in 2012 alone. This means tons of carbon that isn’t released into the atmosphere, carbon that isn’t burned by crop dusters and other sprayers, and reduction in land use needed for agricultural production.

As for Greenpeace and other anti-GMO groups, organic remains a more viable alternative than GM, even in the face of organic’s threats to climate change.

Ramez Naam, author of The Infinite Resource: The Power of Ideas on a Finite Planet, recently wrote in the Genetic Literacy Project that “Organic farming is environmentally kinder to every acre of land. But it requires more acres.  The trade-off is a harsh one. Would we rather have pesticides on farmland and nitrogen runoffs from them? Or would we rather chop down more forest? How much more forest would we have to chop down?”

What we’re facing

As the world population increases, it's estimated that demand for food will increase 70 percent over the next 40 years. This means we need higher yields, and more efficient ways of maintaining a secure, safe food supply. At the same time, efforts to reduce (or even reverse) climate change call for conservation of green plants and activities that do not release more greenhouse gases into the atmosphere. The participants in Aerni's Swiss study knew that the planet can't leave any technology unturned. Unfortunately, louder public voices are shouting them down.

Andrew Porterfield is a writer, editor and communications consultant for academic institutions, companies and non-profits in the life sciences. He is based in Camarillo, California. Follow @AMPorterfield on Twitter.

New paper finds a large warming bias in Northern Hemisphere temperatures from 'non-valid' station data

Original link:  http://hockeyschtick.blogspot.ca/2015/05/new-paper-finds-large-warming-bias-in.html
 
A new paper published in the Journal of Atmospheric and Solar-Terrestrial Physics finds that the quality of Northern Hemisphere temperature data has significantly & monotonically decreased since the year 1969, and that the continued use of 'non-valid' weather stations in calculating Northern Hemisphere average temperatures has created a 'positive bias' and "overestimation of temperatures after including non-valid stations." 

The paper appears to affirm a number of criticisms made by skeptics: that station losses, fabricated/infilled data, and positively-biased 'adjustments' to temperature data have created a positive skew in the data and an overestimation of warming during the 20th and 21st centuries.

Graphs from the paper below show that use of both valid and 'non-valid' station data results in a mean annual Northern Hemisphere temperature over 1C warmer at the end of the record in 2013 as compared to use of 'valid' weather station data exclusively. 

In addition, the paper shows that use of the sharply decreasing number of stations with valid data produces a huge spike in Northern Hemisphere temperatures around ~2004, which is in sharp contrast to much more comprehensive satellite data showing a 'pause' or even cooling over the same period, further calling into question the quality of even the 'valid' land-based stations (urban heat island effects perhaps?).

"The number of valid weather stations is monotonically decreasing after 1969" is shown by the dashed line, and has resulted in an "overestimation of temperature after including non-valid stations" shown by the solid line, especially a spike in temperature in the early 21st century that is not found in satellite temperature records. 
Using temperature data from "valid" stations only, and a base period of 1961-1990, the warmest temperatures were in the first half of the 20th century.
Using a base period of 1800-2013 (including 'non-valid' stations) shows a temperature spike beginning in the early 21st century, but this is not found in the much more accurate and spatially comprehensive satellite records. 
"The computed average by using all stations [including invalid stations, dashed line] is always greater than from using only valid [stations, solid line at bottom of chart]. Percentage of valid stations has steadily declined since 1969 [shown in grey shaded area]. 

Highlights

Introduce the concept of a valid station and use for computations.
Define indices for data quality and seasonal bias and use for data evaluation.
Compute averages for mean and five point summary plus standard deviations.
Indicate a monotonically decreasing data quality after the year 1969.
Observe an overestimation of temperature after including non-valid stations.

Abstract

Starting from a set of 6190 meteorological stations we are choosing 6130 of them and only for Northern Hemisphere we are computing average values for absolute annual Mean, Minimum, Q1, Median, Q3, and Maximum temperature plus their standard deviations for years 1800–2013, while we use 4887 stations and 389,467 rows of complete yearly data. The data quality and the seasonal bias indices are defined and used in order to evaluate our dataset. After the year 1969 the data quality is monotonically decreasing while the seasonal bias is positive in most of the cases. An Extreme Value Distribution estimation is performed for minimum and maximum values, giving some upper bounds for both of them and indicating a big magnitude for temperature changes. Finally, suggestions for improving the quality of meteorological data are presented.
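
The valid-versus-all-stations comparison and the five-point summaries described in the abstract can be illustrated with a toy calculation (my own sketch; the station temperatures, and the warm offset assigned to the non-valid subset, are invented purely to show the mechanics of the comparison).

import random
import statistics

random.seed(1)
valid = [random.gauss(8.0, 4.0) for _ in range(150)]      # stations with complete records
non_valid = [random.gauss(9.5, 4.0) for _ in range(50)]   # incomplete records, assumed warmer here

def summary(temps):
    q1, median, q3 = statistics.quantiles(temps, n=4)     # quartiles of the five-point summary
    return (min(temps), q1, median, q3, max(temps),
            statistics.mean(temps), statistics.stdev(temps))

print("valid only:  ", summary(valid))
print("all stations:", summary(valid + non_valid))        # the mean shifts upward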
