
Saturday, August 30, 2014

Superabsorption of light via quantum engineering


Original link:  http://www.nature.com/ncomms/2014/140822/ncomms5705/full/ncomms5705.html


Almost 60 years ago Dicke introduced the term superradiance to describe a signature quantum effect: N atoms can collectively emit light at a rate proportional to N². Structures that superradiate must also have enhanced absorption, but the former always dominates in natural systems. Here we show that this restriction can be overcome by combining several well-established quantum control techniques. Our analytical and numerical calculations show that superabsorption can then be achieved and sustained in certain simple nanostructures, by trapping the system in a highly excited state through transition rate engineering. This opens the prospect of a new class of quantum nanotechnology with potential applications including photon detection and light-based power transmission. An array of quantum dots or a molecular ring structure could provide a suitable platform for an experimental demonstration.

Introduction

Superradiance can occur when N individual atoms interact with the surrounding electromagnetic field1. Here we use the term ‘atom’ broadly to refer to entities with a discrete dipole-allowed transition, including semiconductor quantum dots2, crystal defects and molecules3. Following an initial excitation of all atoms, dipole-allowed decay down a series of symmetrical ‘Dicke ladder’ states leads to an enhanced light–matter coupling that, when the system reaches the state halfway down the ladder, depends on the square of the atomic transition dipole1, 4, 5. Thus when N dipoles add coherently, light can be emitted at an enhanced rate proportional to N². Even for moderate N this represents a significant increase over the prediction of classical physics, and the effect has found applications ranging from probing exciton delocalization in biological systems6, to developing a new class of laser7 and may even lead to observable effects in astrophysics8.

Time-reversal symmetry of quantum mechanics implies that systems with enhanced emission rates will also have enhanced absorption rates. Naturally emission dominates if an excited state of the collective emits into a vacuum, since there are no photons to absorb. Even in an intense light field where absorption and emission are closely balanced, a given transition remains more likely to emit than to absorb. Thus it might seem that the inverse of superradiance is intrinsically ephemeral.

However, here we show that certain interactions between the atoms allow us to control a quantum system such that a sustained superabsorbing state can exist. For atoms in close proximity and with a suitable geometrical arrangement, ever-present atomic dipolar interactions are sufficient for our purposes. An appropriate realization involves a ring structure that is strikingly reminiscent of the photosynthetic light harvesting complex LH1 (refs 9, 10; see Fig. 1). Although the potential for enhanced absorption inherently exists in all superradiating systems, natural systems are not designed to utilize it. Rather, such systems will always perform an (often strongly) biased random walk down the ladder of accessible states, being attracted by the bottommost rung. Strongly enhanced absorption near the middle of the Dicke ladder is thus an improbable process and can only last for a vanishingly short time.

Figure 1: One potential realization of superabsorption.
Photons absorbed by the ring give rise to delocalized excitons; ideally the ring maintains a specific exciton population to achieve enhanced absorption. Combined with a suitable charge sensor (for example, a quantum point contact) this enables photon sensing. We also model an application for photon harvesting, where newly created excitons are transferred from the ring to a central core absorber, followed by an irreversible process (for example, one-way transfer down a strongly coupled chain) to a centre converting the exciton into stored energy.
By contrast, in this Communication, we will show how to harness environmental quantum control techniques to break the dominance of emission over absorption and extend the time during which a collective system maintains the capability for quantum-enhanced absorption. By interfacing the well-established physical phenomena of superradiance, light filtering, photonic band gaps and quantum feedback control, we show that sustained superlinear scaling of the light absorption rate with the number of atoms is possible. Since this represents the reciprocal process to superradiance, we shall refer to it as ‘superabsorption’. Note that this effect is quite distinct from other recent studies investigating collective light–matter interactions in the context of ‘cloaking’11 and time-reversed lasing12. In the following we present the Dicke model of superradiance before describing the requirements for unlocking engineered superabsorption. Our discussion explores its potential for practical technologies through the examples of photon sensing and light-based energy transmission.

Results

Superradiance

The Hamiltonian of an ensemble of N identical atoms is (ħ=1):

\hat{H}_S = \frac{\omega_A}{2}\sum_{i=1}^{N}\hat{\sigma}_z^{(i)}, \qquad (1)

where ωA is the bare atomic transition frequency; σ̂z(i), σ̂+(i) and σ̂−(i) are the usual Pauli operators defined with respect to the ith atom's ground state, |g⟩i, and optically excited state, |e⟩i. When the wavelength λ of light is much larger than all interatomic distances rij (λ>>rij), the atoms become indistinguishable and light interacts with the system collectively. The dynamics are then best described by collective operators:

\hat{J}_z = \frac{1}{2}\sum_{i=1}^{N}\hat{\sigma}_z^{(i)}, \qquad \hat{J}_{\pm} = \sum_{i=1}^{N}\hat{\sigma}_{\pm}^{(i)}, \qquad (2)
which generate transitions between the eigenstates of the Hamiltonian (1) and obey SU(2) commutation relations. We can succinctly express the light–matter interaction Hamiltonian as

\hat{H}_L = d\,\hat{E}\,\big(\hat{J}_+ + \hat{J}_-\big), \qquad (3)
where Ê is the light field operator and d is the atomic dipole matrix element. The Hamiltonian (3) causes the system to move along a ladder of states called the ‘Dicke’ or ‘bright’ states, which are characterized by the eigenvalues J and M of Ĵ² and Ĵz, respectively. In the absence of interactions between the atoms, Ĵ² commutes with ĤS+ĤL and thus its eigenvalue is a conserved quantity. The Dicke states form a ladder from |J, −J⟩ to |J, J⟩ (with J=N/2), shown in Fig. 2a; the N+1 rungs correspond to the fully symmetric superpositions of N/2+M excited atoms for each value of M. The collective excitation operators

\hat{J}_{\pm}\,|J, M\rangle = \sqrt{(J \mp M)(J \pm M + 1)}\,|J, M \pm 1\rangle \qquad (4)
Figure 2: Engineering the Dicke ladder.

(a) The ladder of Dicke states of an N-atom system, with emission (red) and absorption (blue) processes. In the presence of interactions (Ω≠0), each transition is shifted to the frequency ωA+δM. (b) The effective two-level system (E2LS) picture with the optional trapping process for energy extraction in the dashed box. (c) A scheme for using the environment to confine the ladder of states into an effective two-level system either by tailoring the spectral density κ(ω) or the mode occupation n(ω).

explore this ladder of states, and the transition rates between adjacent Dicke ladder states are then readily calculated:

\Gamma_{M \to M-1} = \gamma\,(J+M)(J-M+1), \qquad (5)

where γ=8π²d²/(3ε0ħλ³) is the free atom decay rate.
If the system is initialized in the fully excited state with no environmental photons, then the system cascades down the ladder, as shown by the red arrows in Fig. 2a. Upon reaching the midway point (M=0) its emission rate exceeds the rate γN expected of N uncorrelated atoms for N>2. For a larger number of atoms the peak transition rate of equation (5) follows a quadratic dependence on N and is well-approximated by

\Gamma_{\max} \approx \frac{\gamma N^2}{4}. \qquad (6)

This is the essence of superradiance: constructive interference between the different possible decay paths greatly enhances the emission rate, producing a high intensity pulse. The enhancement is the result of simple combinatorics: near the middle of the ladder, |J, 0⟩, there are a large number of possible configurations of excited atoms that contribute to each respective Dicke state.
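As a quick numerical illustration of this scaling (a minimal sketch with arbitrary parameters, not the paper's code), the Python snippet below evaluates the ladder rates of equation (5) for a few system sizes and compares the peak value with the uncorrelated rate Nγ and the γN²/4 approximation of equation (6); setting γ=1 simply fixes the unit of time.

```python
# Minimal sketch (not from the paper): peak Dicke-ladder emission rate vs N.
gamma = 1.0   # single-atom decay rate; sets the unit of time

for N in [2, 4, 8, 16, 32]:
    J = N / 2
    # Gamma_{M -> M-1} = gamma (J+M)(J-M+1), largest near the middle of the ladder
    peak = max(gamma * (J + M) * (J - M + 1) for M in range(-int(J), int(J) + 1))
    print(f"N={N:3d}  peak collective rate={peak:7.1f}  "
          f"N*gamma={N * gamma:5.1f}  gamma*N^2/4={gamma * N * N / 4:6.1f}")
```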

Superradiance is not an intrinsically transient effect: steady-state operation can occur through repumping13, or in cavities14, 15, and recently a superradiant laser with potential for extraordinary stability and narrow linewidth has been demonstrated7.

Superabsorption

The crucial ingredient for achieving superabsorption is to engineer the transition rates in a way that primarily confines the dynamics to an effective two-level system (E2LS) around the M=0 transition (see Fig. 2b), which exhibits the required quadratic absorption rate as depicted in Fig. 3c.

Figure 3: Enhanced absorption probability.
(a) The probability of absorbing a photon within the lifetime of the superabsorbing E2LS comprising N atoms, compared with that of N individual atoms over the same duration. The relative advantage is linear in N, as expected, and the coloured shading indicates the quantum advantage. (b) The lifetime of the E2LS for growing N, relative to the four-atom case. Note that the decrease in lifetime corresponds to an increasing time resolution of a superabsorbing photon detector: after initialization the system is receptive to a photon of the requisite frequency only during this time window. (c) The absorption rate at the midpoint of the Dicke ladder (blue) and for N individual absorbers (red). The clearly visible N² scaling that is typical of superradiant pulses also applies to the absorption rate.
To ensure that most transitions take place within the E2LS we must either suppress the total loss rate from the E2LS or enhance the probability of transitions within it. This becomes possible if the frequency of the E2LS transition is distinct from that of other transitions, and in particular the one immediately below the targeted transition within the E2LS. This will never be the case for a non-interacting set of atoms, which must have a degenerate set of ladder transition energies, but it can occur once suitable interactions are included. Dicke physics requires that the atoms remain identical, but interactions are still permissible in certain symmetric geometries such as rings4, 16, and these structures will continue to exhibit superradiance, and are therefore also capable of superabsorption.

To show this, we consider the candidate superabsorber depicted in Fig. 1. We assume that the interactions act between adjacent atoms only and are due to Förster-type coupling. This leads to a Dicke ladder of non-degenerate transitions whose dynamics are found from a collective quantum optical master equation:

\dot{\rho} = -i\big[\hat{H}_S + \hat{H}_I,\,\rho\big] + \sum_{\beta}\kappa(\omega_\beta)\Big[\big(n(\omega_\beta)+1\big)\,\mathcal{D}[\hat{J}_-^{\beta}]\rho + n(\omega_\beta)\,\mathcal{D}[\hat{J}_+^{\beta}]\rho\Big], \qquad (7)

where κ(ω)=∑k|gk|²δ(ω−ωk)≡χ(ω)|g(ω)|² is the spectral density at frequency ω; n(ωβ) is the occupation number of the ωβ mode, and 𝒟[Â]ρ = ÂρÂ† − ½{Â†Â, ρ} is the Lindbladian dissipator. Ĵ+β moves the system up the Dicke ladder transition with frequency ωβ, and Ĵ−β is the corresponding lowering operator.
Equation (7) also features unitary dynamics due to the field interaction that comprises two components: the Lamb shift, accounted for by renormalising ωA in the system Hamiltonian ĤS, and the field-induced dipole–dipole interaction

\hat{H}_I = \sum_{i \neq j}\Omega_{i,j}\,\hat{\sigma}_+^{(i)}\hat{\sigma}_-^{(j)}, \qquad (8)
which describes energy conserving ‘hopping’ of excitons between sites mediated by virtual photon exchange. Such interactions can also be added to the system implicitly, yielding analogous results (see Supplementary Method 4). The hopping interaction strength Ωi,j is given by ref. 4



with d̂ being a unit vector parallel to the direction of the dipoles. For a circular geometry with dipoles perpendicular to rij and retaining only nearest-neighbour interactions (a good approximation for larger rings, since the coupling falls off rapidly with separation), Ω:=Ω(i, i+1) is a constant. However, note that the restriction to nearest-neighbour coupling is not a requirement; please see the Supplementary Method 3 for a full discussion. Owing to the high degree of symmetry of the ring geometry, to first order ĤI does not mix the |J,M⟩ eigenstates, only shifting their energies4 according to


The shift of the transition frequencies is given by the difference of two adjacent levels, EM−EM−1:


These altered frequencies break the degeneracy in the Dicke ladder: each transition now has a unique frequency. For example, the transition frequency from the ground state to the first Dicke state is ω−N/2+1→−N/2=ωA−2Ω. Crucially, the Dicke states still represent a very good approximation of the eigenbasis of the system, yet each transition in the ladder now samples both κ(ω) and n(ω) at its own unique frequency. One might expect that the speed of the collective transitions could cause sufficient lifetime broadening to mask the shifts. However, in Supplementary Method 2, we show that this is not the case.
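To see this degeneracy breaking numerically, here is a small self-contained Python sketch (my own illustration with assumed parameters N=6, ωA=1 and Ω=0.05, not taken from the paper). It builds the symmetric Dicke states of a ring in the occupation basis, evaluates the first-order shift ⟨J,M|ĤI|J,M⟩ for nearest-neighbour hopping, and prints the ladder energies; the adjacent spacings EM−EM−1 then differ from ωA by M-dependent amounts. The sign of Ω (and hence of the shifts) depends on the dipole geometry; the point here is only that each transition acquires its own frequency.

```python
import numpy as np
from itertools import combinations

# Illustrative sketch (not the paper's code): first-order energies of the
# symmetric Dicke states of a ring with nearest-neighbour hopping of strength
# Omega, showing that adjacent ladder spacings are no longer all equal to omega_A.

N, omega_A, Omega = 6, 1.0, 0.05     # assumed example parameters
dim = 2 ** N                         # occupation basis: bit i set => atom i excited

def dicke_state(n_exc):
    """Normalized symmetric superposition of all states with n_exc excited atoms."""
    psi = np.zeros(dim)
    for occ in combinations(range(N), n_exc):
        psi[sum(1 << i for i in occ)] = 1.0
    return psi / np.linalg.norm(psi)

def hop(psi, i, j):
    """Apply sigma_+^(i) sigma_-^(j) to a state vector."""
    out = np.zeros_like(psi)
    for idx in np.nonzero(psi)[0]:
        if (idx >> j) & 1 and not (idx >> i) & 1:   # atom j excited, atom i in ground
            out[idx ^ (1 << j) ^ (1 << i)] += psi[idx]
    return out

previous = None
for n_exc in range(N + 1):
    M = n_exc - N / 2
    psi = dicke_state(n_exc)
    # <H_I> with H_I = Omega * sum over ring neighbours of (hopping + h.c.)
    h_psi = sum(hop(psi, i, (i + 1) % N) + hop(psi, (i + 1) % N, i) for i in range(N))
    energy = M * omega_A + Omega * (psi @ h_psi)
    line = f"M={M:+.1f}  E_M={energy:+.4f}"
    if previous is not None:
        line += f"  E_M - E_(M-1) = {energy - previous:.4f}"
    print(line)
    previous = energy
```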

Transition rate engineering

Our aim is to enhance transition rates at the frequency of the E2LS, which we shall call the ‘good’ frequency (ω0→−1=ωgood), and suppress those for transitions directly out of the E2LS at the ‘bad’ frequency (ω−1→−2=ωbad). The required type of control of the environment is known as reservoir engineering17; in principle we have a choice between tailoring κ(ω), n(ω) or both. Tailoring the spectral density has the advantage that it can, in theory, completely eliminate the rate of loss from our E2LS when there is no mode of the right frequency present to allow decay. This requires placing the device inside a suitably designed cavity or a photonic bandgap (PBG) crystal with a stop band at ωbad (see Fig. 2c), where the required dimensionality of the PBG depends on the orientation of the optical dipoles. Suppression of emission rates by several orders of magnitude is then achievable with state-of-the-art systems18, 19, 20, 21. Photonic crystal cavities can offer both enhancement of a resonant transition (ωgood) and suppression of an off-resonant one (ωbad; ref. 22), making them ideal for the type of control required.

Control of n(ω) is technically easier to achieve, for example, by using filtered thermal or pseudothermal23, 24 light. However, this approach has the limitation that even in the optimal control regime, where n(ω)=0 everywhere except in a narrow region around ωgood, spontaneous emission will still cause loss from the E2LS.

Since both environmental control approaches rely on frequency selectivity, a sufficiently large detuning between adjacent Dicke transitions will be critical for achieving effective containment within the E2LS. Fortunately, this detuning is already within the frequency selectivity of current experimental controls for moderately sized rings, of say N~10: see the Supplementary Tables 1 and 2.

In practice the environmental control will never be quite perfect and our system will, over long times, inevitably evolve away from the E2LS. For example, one may only have control over n(ω) but not κ(ω), or an imperfect PBG with κ(ωbad)>0, and both cases lead to an exponential decay of the E2LS population with a finite lifetime. Dephasing processes will also lead to leakage out of the fully symmetric subspace and thus shorten the effective lifetime of the E2LS. However, these imperfections need not dominate the behaviour and destroy the effect. We shall discuss the issue of sustained operation in the reinitialization section.

Let us now consider the properties of the system immediately following initialization: Figure 3 shows the increased photon absorption rate of the superabsorbing E2LS, Γ−1→0, compared with that of N uncorrelated atoms. Clearly, the probability of absorbing a photon within a given time window (up to the E2LS lifetime) is much higher in the superabsorbing configuration, providing an opportunity for photon detection with improved sensitivity. The inset of Fig. 3 shows the lifetime of the E2LS as a function of N, here assumed to be limited by an imperfect PBG with κ(ωbad)/κ(ωgood)=1/100. For photon sensing, the reduction of the operational window with increasing N may even be a desirable attribute (offering time-resolved detection). Generally, the system we have so far described can function as a sensor as long as the temporary presence of an additional exciton can be registered, for example, through continuously monitoring the system's charge state with a quantum point contact25, 26, 27, 28.

Trapping

We have detailed how to create a photon sensor using superabsorption. We can also employ the superabsorption phenomenon in the context of energy harvesting if we can meet a further requirement: a non-radiative channel to extract excitons from the upper of these two levels, turning them into useful work as depicted in the dashed box of Fig. 2b. Specifically, we require an irreversible trapping process that extracts only the excitons that are absorbed by the E2LS, and does not extract excitons from levels below the E2LS. Moreover, the trapping process competes with re-emission of the photons, which occurs at a rate proportional to n(ωgood)+1, so ideally the trapping is much faster than the re-emission. Note that in this limit saturation is not an issue, since absorbed photons are quickly transferred and converted, leaving the system free to absorb the next photon.

The excitons are delocalized across the ring and need to be extracted collectively to preserve the symmetry of the Dicke states. In designing this process we take inspiration from natural light harvesting systems: a ‘trap’ atom is located at the centre of the ring and symmetrically coupled via a resonant hopping interaction to all the other atoms (see Fig. 1). The corresponding trapping Hamiltonian is

\hat{H}_T = g\sum_{i=1}^{N}\big(\hat{\sigma}_+^{T}\hat{\sigma}_-^{(i)} + \hat{\sigma}_-^{T}\hat{\sigma}_+^{(i)}\big), \qquad (12)
where the superscript T denotes the trap site, g is the strength of the coupling between the ring and the trap, and the trap’s transition frequency ωtrap ideally matches ωgood. In this case the interaction is mediated by the electromagnetic field as described in the previous section, but it could have other physical origins depending on the system of interest. Once an exciton has moved to the trap site we assume that it is then removed into the wider environment by a process which irreversibly absorbs its energy. We note that more exotic and potentially far more efficient trapping implementations can be envisioned, such as, for example, a reservoir of excitons with an effective ‘Fermi level’ capable of accepting excitons only above the energy level E−1. However, at present our aim is to focus on the simplest system capable of exhibiting enhanced photon energy harvesting by superabsorption.

The above trapping process is adequately described phenomenologically (see Supplementary Method 6, Supplementary Figs 1 & 2) as collective exciton extraction from the midpoint (M=0) by adding a trapping dissipator acting on the M=0 level to the right-hand side of equation (7), where Γtrap is the rate of the trapping process. The rate of exciton extraction Itrap is then simply given by the population of the trapping level multiplied by the trapping rate:

I_{\mathrm{trap}}(t) = \Gamma_{\mathrm{trap}}\,\langle J, 0|\,\rho(t)\,|J, 0\rangle. \qquad (13)
Consider an ideal E2LS realized by a PBG completely blocking ωbad, that is, a vanishing Γloss:=κ(ωbad)(n(ωbad)+1)Γ−1→−2. Assuming a trapping rate much faster than the emission rate, Γtrap>>Γemit:=κ(ωgood)(n(ωgood)+1)Γ0→−1, our figure of merit Itrap matches the absorption rate Γabsorb:=κ(ωgood)n(ωgood)Γ−1→0 for all t:

I_{\mathrm{trap}} = \mu\,\frac{N}{2}\left(\frac{N}{2}+1\right), \qquad (14)
where μ=γκ(ωgood)n(ωgood). It is clear from this equation that under these conditions we achieve a superlinear scaling of the exciton current flowing out of the superabsorber. Trapping processes like the one described here have been demonstrated experimentally and meet the requirement Γtrap>>Γemit, see Supplementary Method 6.
The inevitable loss out of the E2LS entails an exponential decay of Itrap(t) with the E2LS lifetime, as shown in Fig. 4. The initial net superabsorption rate far exceeds that possible from uncorrelated atoms; however, it is only a transient effect and the system needs to be reinitialized periodically to maintain its advantage. This aspect will be discussed in the next section.

Figure 4: A superabsorption cycle.
Superabsorption of the effective two-level system indicated in Fig. 2. The green shading indicates the superabsorption region; the red shading indicates when the extraction rate is below what could be extracted from uncorrelated atoms. Both are for a system of 20 atoms and mode occupancy n(ωgood)=10. The maximum extraction possible from independent atoms (Γind=n(ωgood)Nγ) is used for comparison.
We have detailed the case where a PBG is used to increase the lifetime of the E2LS. If instead intense filtered thermal light is used to ensure n(ωgood)>>1, then many absorption-trapping cycles can take place before a spontaneous emission event happens. This set-up would enable quantum-enhanced light-based power transmission, where a large number of photons need to be harvested quickly in a confined area.

Reinitialization

Reinitialization could be achieved by exploiting a chirped pulse of laser light to re-excite the system, or through a temporary reversal of the trapping process. In practice there will be an energy cost associated with reinitialization but, as we show below, in all but the most severe cases this cost is more than offset by the faster photon to exciton conversion rate during the transient superabsorption period. Furthermore, the frequency with which one has to reinitialize does not have a fundamental lower bound, it is limited only by the quality of the control one can apply.

Perhaps the most elegant way of implementing the reinitialization step (short of self-initialization, see below) would make use of quantum feedback control29: The superabsorption enhancement is derived from coherence between states that all possess the same number of excitons. Therefore, the number of excitons could be continually monitored (for example, by a quantum point contact or by monitoring fluorescence of a probe field tuned to a level or two below the desired manifold) without destroying the desired effect. A suitably designed feedback system could then feed in an excitation only when a loss event occurs, providing optimal efficiency

I_{\mathrm{trap}} = (\mu - \sigma)\,\frac{N}{2}\left(\frac{N}{2}+1\right), \qquad (15)
where σ=γκ(ωbad)(1+n(ωbad)). Provided μ>σ superabsorption will occur, and for σ=0, we recover the theoretical maximum of the idealized case in equation (14).

A far simpler reinitialization scheme would only require periodic reinitialization following a fixed time interval, and does away with the need for feedback control. To account for the relative cost of such reinitialization, we need to quantify the total number of excitons absorbed in a given time. Let us fix the time at which reinitialization is performed to the natural lifetime of the E2LS. Integrating the trapping rate Itrap(t) over one lifetime and subtracting the reinitialization cost gives a fair measure of the number of excitons the system has absorbed within the given time. We can then consider the extreme limits of the reinitialization cost, from simply replacing a single lost exciton, to having to replace all of the N/2 excitons that make up the superabsorbing state. A larger system requires more frequent reinitializations, since its loss rate is also enhanced by the system size. However, the bias in favour of absorption created by the environmental control is sufficient to ensure this does not negate the superabsorption process. Figure 5 shows how the number of excitons absorbed in a given time scales with the number of atoms, and for all cost models we find a superlinear scaling.
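The accounting just described is easy to mock up. The Python sketch below is a rough toy version (my own, not the paper's model) under several stated assumptions: the absorption and loss rates are built from the E2LS definitions quoted earlier with κ(ωbad)/κ(ωgood)=0.01 and n(ω)=1, Itrap(t) decays exponentially over one E2LS lifetime, the common reference time is taken to be the lifetime of the smallest (N=8) system, and the two cost models replace either one exciton or N/2 excitons per cycle.

```python
import numpy as np

# Toy accounting (my own sketch): excitons absorbed within a common reference
# time, for a cheap and a pessimistic reinitialization cost model, compared
# with N uncorrelated atoms. All rates are in units of the single-atom gamma.

gamma, kappa_good, kappa_bad, n = 1.0, 1.0, 0.01, 1.0

def rate(J, M):                                   # Gamma_{M -> M-1} = gamma (J+M)(J-M+1)
    return gamma * (J + M) * (J - M + 1)

def lifetime(N):
    J = N / 2
    return 1.0 / (kappa_bad * (n + 1) * rate(J, -1))     # loss-limited E2LS lifetime

t_ref = lifetime(8)                               # common reference time
print("  N   cheap cost (1)   full cost (N/2)   independent atoms")
for N in [8, 16, 32, 64]:
    J, tau = N / 2, lifetime(N)
    absorbed_per_cycle = kappa_good * n * rate(J, 0) * tau * (1 - np.exp(-1))
    cycles = t_ref / tau                          # larger systems fit in more cycles
    cheap = cycles * (absorbed_per_cycle - 1)     # replace one lost exciton per cycle
    full  = cycles * (absorbed_per_cycle - N / 2) # rebuild the whole state per cycle
    indep = N * kappa_good * n * gamma * t_ref    # N uncorrelated atoms over t_ref
    print(f"{N:4d}   {cheap:14.0f}   {full:15.0f}   {indep:17.0f}")
```

With these assumed numbers the cheap-cost column grows roughly quadratically with N, the pessimistic full-cost model falls away (and eventually goes negative) for large N, and the uncorrelated-atom column grows only linearly, qualitatively echoing the behaviour described for Fig. 5.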

Figure 5: Superlinear exciton absorption.
The total number of excitons absorbed within the common reference time as a function of the number of atoms N. The coloured curves represent the reinitialization cost models described in the main text, and the red line shows the maximum extracted from independent atoms for comparison. The scaling is superlinear in all coupled atom cases, approximately following the ideal N2 law (green), except for large N in the pessimistic cost model of full reinitialization (blue). If quantum feedback control enables the replacement of a single exciton as soon as a loss event has happened, then the nearly quadratic scaling persists up to an arbitrary number of atoms (olive).

Discussion

We have shown that the absorptive analogue of quantum superradiance can be engineered in structures with suitably symmetric interactions. We have provided an intuitive explanation of this many-body light–matter effect by introducing an E2LS. Despite its simplicity this analytic model can provide highly accurate predictions, as we have validated through the extensive exact numerical calculations that are summarized in the Methods section, see Supplementary Method 5 and Supplementary Fig. 3. As we have already discussed, absorbing light beyond the limits of classical physics raises prospects for at least two new types of technology, and superabsorption could be realized in a broad range of candidate systems.

The foremost application of the phenomenon may be in the context of optical or microwave sensors, either in future cameras or for scientific instruments. In addition to the obvious merits of being sensitive to low light levels, the frequency specificity of the superabsorber may be a desirable attribute. The small size of the ring structure and collective ‘antenna array’ could lead to high spatial and angular resolution, and the fact that the superabsorber is (re)initialized into its fully receptive state by an excitation pulse allows detection events to be confined into a narrowly defined time window. Note that for sensing applications the cost of (re)initialization is likely unimportant, and a trapping mechanism is not required if the number of excitons in the system can be monitored differently, for example, with a quantum point contact.
Light harvesting technologies represent another potential application, and indeed our Fig. 5 indicates that one can obtain a net increase in the number of excitons absorbed compared with conventional systems, even allowing for the energy cost of sustaining the superabsorbing state. The technique would be particularly suited to wireless power transfer using narrowband light, for example, for remote sensors or biologically implanted devices, where wired electrical power is impractical. For solar light harvesting a given superabsorber can only achieve optimal performance for a specific frequency range; however, one could engineer a range of different systems to jointly cover the solar spectrum.

There are multiple candidate systems for engineering the above applications. Molecular rings have the advantage of featuring a natural symmetry and intrinsically low levels of disorder. Taking Ω=350 cm−1 as appropriate for a B850 ring (ref. 30) with eight atoms produces transition wavelength shifts exceeding 6 nm, and a wavelength selectivity on the scale of nanometres is readily available with current laser and cavity linewidths. Of course, the dipole alignment of the B850 ring is not optimized for this purpose. However, complex ring structures can be designed and synthesized artificially (for example, porphyrin rings31) and this route should enable far superior properties. Self-assembly into much larger molecular J or H aggregates with established superradiant properties32, 33 may provide further opportunities. Alternatively, superradiance, long-range interactions and optical control have been demonstrated in quantum dots2, 34, and there has been recent progress in synthesizing ring-like clusters with high spectral and spatial order35. Further, suppression of the local density of optical states by two orders of magnitude at specific frequencies has been demonstrated in an appropriate semiconductor photonic crystal environment18. For typical parameters of those systems that would be consistent with the requirements for superabsorption see the Supplementary Discussion 1.

To demonstrate the effect of superabsorption (that is, sustained confinement into an E2LS with enhanced absorption and emission rates) as an instance of an engineered physical phenomenon, several additional possibilities present themselves. For example, circuit QED experiments possess long coherence times and have already demonstrated sub- and superradiant effects36, 37, as well as tuneable cross Lamb shifts38, and recent three-dimensional structures39 provide further flexibility. Bose–Einstein condensates offer similar properties but with much larger numbers of atoms40, 41. Dissipative Dicke model studies with nonlinear atom–photon interaction can enable a steady state at the midpoint of the Dicke ladder (M=0; refs 42, 43), which may provide a route to self-initialising superabsorbing systems.

Methods

Collective master equation

The master equation (7) is an N atom generalization of the standard quantum optical master equation; we give the full derivation in the Supplementary Method 1. In particular, it assumes that all N atoms are spatially indistinguishable due to occupying a volume with linear dimensions much smaller than the relevant wavelength of light. In addition, interactions between atoms must respect certain symmetry requirements so that, to first order, they only shift the Dicke states (as exemplified by equation (10)). However, as we also discuss in Supplementary Method 5, and verify with numerical calculations, superlinear scaling of the absorption rate with the number of atoms remains possible beyond a first-order perturbative treatment of suitably symmetrical interactions.

Numerical calculations

The E2LS model reduces the complexity of the problem and makes it analytically tractable. To verify this approach we have compared it with two different independent numerical models. Supplementary Fig. 3 shows excellent agreement between the E2LS model and the Monte Carlo simulations of the master equation (7). In Supplementary Figs 1 and 2, we extend the model further by incorporating an explicit trap site and allow coherent transfer from the ring to the trap, as described in the Trapping Section, showing that superabsorption is still realized in that case and that the E2LS model still provides a good description of the behaviour. This model uses a generalized master equation solved numerically.

Imperfections

Any real physical system used to demonstrate superabsorption, or indeed superradiance, will have imperfections such as slightly varying frequencies for each atom, or a deviation away from perfect ring symmetry. In essence all such imperfections in superradiance are alike; they diminish the collective effect because they lead to the emission of distinguishable photons. It might therefore be a concern that these collective effects could only be realized in the ideal case. However, superradiant effects of molecular aggregates with a spatial extent smaller than the wavelength of light are known to possess a certain degree of robustness against inhomogeneous broadening44, dephasing processes45 and exciton–phonon coupling46. This is because the increased transition rates produced by superradiance serve to counterbalance the effect of disorder: the faster rate broadens the natural linewidth of the transitions, effectively preventing the introduction of distinguishability from the disorder. Intuitively, we expect a superabsorption advantage to be achievable whenever an imperfect system is still capable of displaying superradiant behaviour (of course with the additional requirement that the energy shifts of adjacent decay processes are resolvable). In Supplementary Method 5 and Supplementary Figs 4 and 5, we model realistic imperfections by considering static energy disorder and show that superabsorption can still be realized in the presence of disorder.

Neil deGrasse Tyson on liberal science denial and GMOs


IMAGE: PATRICK ECCELSINE/FOX

Science denial is a major epidemic in the US: public policy is suffering while the Republican Party refuses to accept climate change, religious groups are fighting to inject creationism into the classroom, and all the while the country's scientific literacy falls behind the rest of the industrialized world.

However, we are wrong to say that science denial is a problem coming only from the right, or so says astrophysicist Neil deGrasse Tyson, host of the popular Cosmos series, who sat down with me for a one-on-one interview. When asked about the right's constant denial of science and how we can address it, he brought up a different issue:
“It’s wrong to simply attack the right for science denial. Liberals cannot claim to fully embrace science, there is plenty of science denial from the left.”
What Tyson is talking about is the anti-vaccine movement, which is made up of a lot of liberals, or, as The Daily Show called them, the climate-denying nutjobs of the left, who at the same time deny things like modern medicine and seek alternative treatments that have either never been confirmed by science or have been fully debunked. And let's not forget that the debate over genetically modified foods is almost completely run by the left.

Even the liberals' flagship grocery store, Whole Foods, is a pseudoscientific drugstore unto itself, and yet these same shoppers will be quick to laugh at someone denying climate change, all while popping $35 placebos to heal imaginary ailments.

So the conversation shifted towards leftist science denial, and since Tyson recently made headlines (and liberal enemies) by speaking out against the anti-GMO movement, a good amount of the interview focused on GMO science and denial.

When asked about labeling campaigns for GMO foods, Tyson said: “I don't care if you want to label GMOs in the grocery store, but do so knowing that you will be labeling 80-90% of the food on the shelves. Or go ahead and tell me you want to remove all GMOs from food, and know that the same 80-90% of food will have to be removed.”

Yet this is a debate we will have to face. So how do we do that?
“You have to adjust people's point of view on the topic,” Tyson said. “People need to know cows don't exist in the wild, we have been modifying our food for over 10,000 years, and you need to know this stuff! You have to make informed decisions. I am not telling you to embrace GMOs, but know the facts!”
The anti-GMO left makes a lot of compelling-sounding arguments about getting back to nature and eating healthier, more natural foods, but Tyson doesn't think this is the right argument: “The advocates keep saying ‘let's go back to nature’, but those cows don't exist in nature, corn doesn't exist, those red apples you love don't exist, we genetically engineered all of that. Don't pretend what is going on in the laboratory is fundamentally different than what is going on in agriculture.

We need to be having rational conversations about these issues, instead of just repeating your opinion.”

Tyson did say that speaking up for GMO science has made him an “enemy of the liberals; they keep claiming I must have been paid off by Monsanto.”

That argument sounds a lot like the conservatives' argument against climate change, the rallying cry of “follow the money.” So does this make liberals as bad as conservatives?
“Liberals always assert scientific literacy, and that just isn't the case. Funding for science under Republican administrations has been historically higher than under Democrats. Under Bush, the big denial issue was stem cells, but he increased military research. Republicans just invest in things like military research instead of biology and NASA. Both sides fund scientific research; they just fund different projects.
If you are running the government, I care where the money is.”
So while it is true that both sides deny science, it may in fact still stand to note that while funding defense science projects is a form of scientific spending, blocking the funding for life-saving medical research is far worse than worrying about labels on food. And while the left may have to answer for its vaccine denial, it is far from a left-only issue; it could easily be argued that the Religious Right is doing far more damage to vaccine awareness than the liberals.

Read more: http://www.patheos.com/blogs/danthropology/2014/08/neil-degrasse-tyson-on-liberal-science-denial-and-gmos/#ixzz3BrLmdXqK

Thursday, August 28, 2014

Climate sceptics see a conspiracy in Australia's record breaking heat


Bureau of Meteorology says claims from one climate sceptic that it has corrupted temperature data are false
Original link:  http://www.theguardian.com/environment/planet-oz/2014/aug/27/climate-sceptics-see-a-conspiracy-in-australias-record-breaking-heat
Australia’s hottest year of 2013 started with a heatwave that caused widespread bush fires. In January the Holmes family from Tasmania took refuge under a jetty as wild fires raged around them. Photograph: Tim Holmes/AP
You could cut the triumphalism on the climate science denialist blogs right now with a hardback copy of George Orwell’s Nineteen Eighty-Four.

Their unbridled joy comes not in the wake of some key research published in the scientific literature but in the fact that a climate sceptic has got a mainstream newspaper to give their conspiracy theory another airing.

The sceptic in question is Dr Jennifer Marohasy, a long-time doubter of human-caused climate change whose research at Central Queensland University (CQU) is funded by another climate change sceptic.

I choose the Nineteen Eighty-Four analogy in my introduction because it is one of Marohasy’s favourites. She likes to compare the work of the Bureau of Meteorology (BoM) to the various goings on in Orwell’s fictional dystopian novel.

The conspiracy theory is that BoM is using a technique to selectively tamper with its temperature data so that it better fits with the global warming narrative.

The people at NASA are in on it too.

Now the great thing about conspiracy theories is that, for believers, attempts to correct the record just serve to reinforce the conspiracy. Like a video clip of the moon landing on a constant loop, the whole thing feeds back on itself.

Correspondence posted on Marohasy’s blog shows she has been pushing her claims for months that BoM has “corrupted the official temperature record so it more closely accords with the theory of anthropogenic global warming”, according to a letter she wrote to Liberal Senator Simon Birmingham, whose parliamentary secretary portfolio includes responsibility for the agency.

Marohasy lays it on thick in the letter, accusing the bureau of engaging in “propaganda” and littering the text with claims of “corruption”.

The Australian’s environment editor Graham Lloyd was approached to cover the “story” and stepped bravely forward with four pieces in recent days covering Marohasy’s claims.

Lloyd wrote there was now an “escalating row” over the “competence and integrity” of the BoM despite the fact that Marohasy has not published her claims in a peer reviewed journal (the two papers mentioned in Lloyd’s story actually relate to rainfall prediction, not temperature).

Yet this matters not.

The climate science denialists, contrarians and anti-environmental culture warriors are lapping it up with headlines like “Australia Government Climate Office Accused Of Manipulating Temperature Data” and “Australian Bureau of Meteorology Accused of Criminally Adjusting Global Warming”.

This evening the BoM has released a statement that explains the processes at the bureau. I’ve posted it in full at the bottom of this post, but here’s a quote:
Contrary to assertions in some parts of the media, the Bureau is not altering climate records to exaggerate estimates of global warming.

Homogenise this

The BoM maintains several sets of data on temperatures in Australia and the agency makes all that data available online.

One of those datasets is known as the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) and this is the one BoM used to declare 2013 was the hottest year on record.

Marohasy has been looking at some of the temperature stations that are included in ACORN-SAT and analysing the impact of a method known as “homogenisation” that the BoM sometimes employs with the ACORN-SAT data.

It’s no secret or even a revelation that the Bureau of Meteorology employs these techniques and others.

On the bureau’s website, anyone is free to lose themselves in a world of homogenised data sets, gridded temperature analysis and temporal homogeneity adjustments. Go for your life.

While Marohasy’s central claim – that BoM is doctoring figures to make them more acceptable to a narrative of warming - remains entirely untested in the scientific literature, the bureau’s methods used to compile ACORN-SAT have been peer reviewed.

Unusually, the bureau’s full response to one set of questions from Graham Lloyd has found its way onto at least one climate sceptic blog.

In the response the bureau explained why three specific site records it was asked about had been homogenised.

At Bourke, for example, the station had been moved three times in its history. Detective work had found that a noticeable shift in the readings in the 1950s had likely been due to changes in vegetation around the instrument.

At Amberley, the bureau noticed a marked shift in the minimum temperatures it had been recording, which was also likely due to the station being moved.

Another site at Rutherglen had data adjusted to account for two intervals – 1966 and 1974 – when it's thought the site was moved from close to buildings to low, flat ground.

Marohasy wants heads to roll [rolls eyes] because she claims that the Rutherglen site was never moved and so there was no need to homogenise the data.

However, the bureau has documentary evidence showing that sometime before the 1970s the weather station was not in the place where it is now.

The bureau had initially spotted a break or jump in the data that pointed to a likely move at Rutherglen.

Perhaps all of these movements of temperature stations were a conspiracy in themselves, cooked up in the 1950s?

Professor Neville Nicholls, of Monash University, worked at BoM for more than 30 years and from 1990 until he left in 2005 had led efforts to analyse rainfall and temperature readings from across the country. He told me:
The original raw data is all still there – it has not been corrupted. Anyone can go and get that original data.
Pre-1910 there was not much of a spread but also there was more uncertainty about how the temperatures were being measured. By 1910, most temperatures were being measured in a Stevenson Screen. A lot of measurements were taken at Post Offices but in many cases these were moved out to airports around the middle of the 20th century. That produces artificial cooling in the data.
Towns for example in coastal New South Wales originally had temperatures taken near the ocean because that’s where the town was. But as the town grew the observations would move inland and that is enough to affect temperature and rainfall.
Are we supposed to just ignore that? A scientist can’t ignore those effects. It’s not science to just go ahead and plot that raw data.
Nicholls said if people didn’t trust the way the BoM was presenting the data they could look elsewhere, such as a major project known as Berkeley Earth undertaken by former sceptic Professor Richard Muller which also used BoM data from as early as 1852 to mid-2013.
A chart from the Berkeley Earth analysis of global temperatures used data from the Bureau of Meteorology to reconstruct average temperatures for Australia going back to 1852. Photograph: Berkeley Earth

Sceptic funding

Before joining CQU, Marohasy spent many years working at the Institute of Public Affairs – a Melbourne-based free market think tank that has been promoting climate science denialism for more than two decades.

After leaving there, she became the chair of the Australian Environment Foundation, a spin-off from the IPA.

Marohasy has said that Bryant Macfie, a Perth-based climate science sceptic, funds her research at Central Queensland University.

In 2008, after Macfie had gifted $350,000 to the University of Queensland in a donation facilitated by the IPA to pay for environmental research scholarships there, he wrote that science had been corrupted by a “newer religion” of environmentalism.

In June, Marohasy made her claims about BoM to the Sydney Institute. In July she travelled to Las Vegas to speak at the Heartland Institute’s gathering of climate science denialists and assorted contrarians.

The Heartland Institute is the “free market” think tank that once ran a billboard advert with a picture of terrorist and murderer Ted “Unabomber” Kaczynski alongside the question: “I still believe in Global Warming. Do You?”

Also speaking in Las Vegas was federal MP for the Queensland electorate of Dawson, George Christensen, who appeared on a panel alongside Marohasy.

Christensen described mainstream climate science as “a lot of fiction dressed up as science”.

Data shows warming

Dr Lisa Alexander, the chief investigator at the ARC Centre of Excellence for Climate System Science, explained that in Australia it was not uncommon for temperature stations to be moved, often away from urban environments.

She said that, for example, sites moved only a kilometre or so to more exposed areas such as airports would tend to record lower temperatures.
That then creates a jump in the time series that’s not related to a jump in the climate. The bureau is altering the temperature data to remove those non-climatic effects that are due to changes like new instrumentation or site movements.
Is the bureau fiddling the figures to fit with a global warming conspiracy? No! Are they amending the records to make them consistent through time? Yes.
Also included in the BoM's statement is the following graph, which overlays 18 different sets of temperature data for Australia – including (in yellow) another BoM dataset which is not homogenised. The graph also includes temperature measurements by satellite.

Now either the satellites are also in on the warming conspiracy, or there’s something else going on. I wonder what that might be?
Comparison of 18 different sources of temperature data between 1911 and 2010 including from adjusted and unadjusted data and from analyses from international authorities. Photograph: Bureau of Meteorology, Australia
Here is the Bureau’s statement in full.
Contrary to assertions in some parts of the media, the Bureau is not altering climate records to exaggerate estimates of global warming.
Our role is to make meteorological measurements, and to curate, analyse and communicate the data for use in decision making and to support public understanding.
To undertake these tasks, the Bureau employs highly skilled technicians and scientists and invests in high quality monitoring equipment.
The Bureau measures temperature at nearly 800 sites across Australia, chiefly for the purpose of weather forecasting. The Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) is a subset of this network comprising 112 locations that are used for climate analysis. The ACORN-SAT stations have been chosen to maximise both length of record and network coverage across the continent. For several years, all of this data has been made publicly available on the Bureau’s web site.
Temperature records are influenced by a range of factors such as changes to site surrounds (eg. trees casting shade or influencing wind), measurement methods and the relocation of stations (eg. from a coastal to more inland location). Such changes introduce biases into the climate record that need to be adjusted for prior to analysis.
Adjusting for these biases, a process known as homogenisation, is carried out by meteorological authorities around the world as best practice, to ensure that climate data is consistent through time.
At the Bureau’s request, our climate data management practices were subject to a rigorous independent peer-review in 2012. A panel of international experts found the Bureau’s data and methods were amongst the best in the world.
The Bureau’s submissions to the review were published on the Bureau’s website, as were the findings of the review panel.
The Bureau’s methods have also been published in peer-reviewed scientific journals.
Both the raw and adjusted ACORN-SAT data and the larger unadjusted national data set all indicate that Australian air temperatures have warmed over the last century. This finding is consistent with observed warming in the oceans surrounding Australia. These findings are also consistent with those of other leading international meteorological authorities, such as NOAA and NASA in the United States and the UK MetOffice. The high degree of similarity is demonstrated in Figure 1 (above).
The Bureau strives to ensure that its data sets and analysis methods are as robust as possible. For this reason we place considerable emphasis on quality assurance, transparency and communication. The Bureau welcomes critical analysis of the Australian climate record by others through rigorous scientific peer review processes.

Monday, August 25, 2014

Olbers' paradox


From Wikipedia, the free encyclopedia
 
Olbers' paradox in action
 
In astrophysics and physical cosmology, Olbers' paradox, named after the German astronomer Heinrich Wilhelm Olbers (1758–1840) and also called the "dark night sky paradox", is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. The darkness of the night sky is one of the pieces of evidence for a non-static universe such as the Big Bang model. If the universe is static, homogeneous at a large scale, and populated by an infinite number of stars, any sight line from Earth must end at the (very bright) surface of a star, so the night sky should be completely bright. This contradicts the observed darkness of the night.

History

Edward Robert Harrison's Darkness at Night: A Riddle of the Universe (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges, who was also the first to expound the Copernican system in English and who postulated an infinite universe with infinitely many stars.[1] Kepler also posed the problem in 1610, and the paradox took its mature form in the 18th century work of Halley and Cheseaux.[2] The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers, who described it in 1823, but Harrison shows convincingly that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable.
Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little known 1901 paper,[3] and that Edgar Allan Poe's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument:
Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.[4]

The paradox

What if every line of sight ended in a star? (Infinite universe assumption #2)

The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark.

To show this, we divide the universe into a series of concentric shells, 1 light year thick. Thus, a certain number of stars will be in the shell 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell between 2,000,000,000 to 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear four times dimmer than the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell.

Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light. And with infinitely many shells there would be a bright night sky.
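The bookkeeping in this argument is easy to check numerically. The short Python sketch below (my own illustration, using arbitrary units for the star density, luminosity and shell thickness) adds up the flux from ten equally thick shells and shows that each shell contributes the same amount, so the running total grows without limit as more shells are included.

```python
import math

# Sketch of the shell argument (arbitrary units): every shell of equal
# thickness contributes the same flux, so the total grows without bound.
star_density = 1.0   # stars per unit volume
luminosity = 1.0     # power emitted per star
thickness = 1.0      # shell thickness

total_flux = 0.0
for i in range(1, 11):
    r = 1.0e9 * i                                          # shell radius
    n_stars = 4 * math.pi * r**2 * thickness * star_density
    flux_per_star = luminosity / (4 * math.pi * r**2)      # inverse-square dimming
    shell_flux = n_stars * flux_per_star                   # the r^2 factors cancel
    total_flux += shell_flux
    print(f"shell {i:2d}: flux = {shell_flux:.3f}, running total = {total_flux:.3f}")
```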

Dark clouds could obstruct the light. But in that case the clouds would heat up, until they were as hot as stars, and then radiate the same amount of light.

Kepler saw this as an argument for a finite observable universe, or at least for a finite number of stars. In general relativity theory, it is still possible for the paradox to hold in a finite universe:[5] though the sky would not be infinitely bright, every point in the sky would still be like the surface of a star.
In a universe of three dimensions with stars distributed evenly, the number of stars would be proportional to volume. If the surface of concentric sphere shells were considered, the number of stars on each shell would be proportional to the square of the radius of the shell. In the picture above, the shells are reduced to rings in two dimensions with all of the stars on them.

The mainstream explanation

Poet Edgar Allan Poe suggested that the finite size of the observable universe resolves the apparent paradox.[6] More specifically, because the universe is finitely old and the speed of light is finite, only finitely many stars can be observed within a given volume of space visible from Earth (although the whole universe can be infinite in space).[7] The density of stars within this finite volume is sufficiently low that any line of sight from Earth is unlikely to reach a star.
However, the Big Bang theory introduces a new paradox: it states that the sky was much brighter in the past, especially at the end of the recombination era, when it first became transparent. All points of the local sky at that era were comparable in brightness to the surface of the sun, due to the high temperature of the universe in that era; and most light rays will terminate not in a star but in the relic of the Big Bang.

This paradox is explained by the fact that the Big Bang theory also involves the expansion of space which can cause the energy of emitted light to be reduced via redshift. More specifically, the extreme levels of radiation from the Big Bang have been redshifted to microwave wavelengths (1100 times longer than its original wavelength) as a result of the cosmic expansion, and thus form the cosmic microwave background radiation. This explains the relatively low light densities present in most of our sky despite the assumed bright nature of the Big Bang. The redshift also affects light from distant stars and quasars, but the diminution is minor, since the most distant galaxies and quasars have redshifts of only around 5 to 8.6.
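As a quick cross-check of the quoted stretch factor (my own sketch, using the roughly 3000 K temperature at the end of recombination mentioned in the Brightness section below and today's CMB temperature of about 2.7 K), the ratio of the two temperatures and the corresponding Wien-peak wavelengths come out as follows.

```python
# Quick check of the ~1100x redshift factor using standard values.
b = 2.898e-3                        # Wien displacement constant, m*K
T_recomb, T_now = 3000.0, 2.725     # recombination-era vs present-day temperature, K

print(f"stretch factor: {T_recomb / T_now:.0f}")                 # ~1100
print(f"peak wavelength then: {b / T_recomb:.2e} m (near-infrared)")
print(f"peak wavelength now : {b / T_now:.2e} m (microwave)")
```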

Alternative explanations

Steady state

The redshift hypothesised in the Big Bang model would by itself explain the darkness of the night sky, even if the universe were infinitely old. The steady state cosmological model assumed that the universe is infinitely old and uniform in time as well as space. There is no Big Bang in this model, but there are stars and quasars at arbitrarily great distances. The light from these distant stars and quasars will be redshifted accordingly (by the Doppler effect and thermalisation[8]), so that the total light flux from the sky remains finite. Thus the observed radiation density (the sky brightness of extragalactic background light) can be independent of finiteness of the Universe. Mathematically, the total electromagnetic energy density (radiation energy density) in thermodynamic equilibrium from Planck's law is
{U\over V} = \frac{8\pi^5(kT)^4}{15 (hc)^3},
e.g. for temperature 2.7 K it is 40 fJ/m3 (equivalent to a mass density of 4.5×10−31 kg/m3) and for visible temperature 6000 K we get 1 J/m3 (1.1×10−17 kg/m3). But the total radiation emitted by a star (or other cosmic object) is at most equal to the total nuclear binding energy of isotopes in the star. For the density of the observable universe of about 4.6×10−28 kg/m3 and given the known abundance of the chemical elements, the corresponding maximal radiation energy density is 9.2×10−31 kg/m3, i.e. a temperature of 3.2 K.[9][10]
This is close to the summed energy density of the cosmic microwave background and the cosmic neutrino background. The Big Bang hypothesis, by contrast, predicts that the CBR should have the same energy density as the binding energy density of the primordial helium, which is much greater than the binding energy density of the non-primordial elements; so it gives almost the same result. But (neglecting quantum fluctuations in the early universe) the Big Bang would also predict a uniform distribution of CBR, while the steady-state model predicts nothing about its distribution[citation needed]. Nevertheless the isotropy is very probable in steady state as in the kinetic theory.
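The two radiation densities quoted above follow directly from the Planck-law formula; the short Python sketch below (not part of the article) plugs T = 2.7 K and T = 6000 K into it using standard constants.

```python
import math

# Sanity check of U/V = 8 pi^5 (kT)^4 / (15 (hc)^3) for the temperatures above.
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s

def planck_energy_density(T):
    return 8 * math.pi**5 * (k * T)**4 / (15 * (h * c)**3)   # J/m^3

for T in [2.7, 6000.0]:
    u = planck_energy_density(T)
    print(f"T = {T:6.1f} K  ->  {u:.2e} J/m^3  ({u / c**2:.2e} kg/m^3)")
# T = 2.7 K gives ~4e-14 J/m^3 (about 40 fJ/m^3) and T = 6000 K gives ~1 J/m^3.
```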

Since the speed of light is a constant value, regardless of the shift towards infrared frequencies, the universe is still sharply constrained to finite sizes in space as well as in time. Some models of an infinite universe (the Steady State theory or a static universe) are still viable, and Olbers' paradox cannot sharply distinguish them from some variants of the Big Bang model.

Finite age of stars

Stars have a finite age and a finite power, thereby implying that each star has a finite impact on a sky's light field density. But if the universe were infinitely old, there would be infinitely many other stars in the same angular direction, with an infinite total impact.

Absorption

A commonly proposed alternative explanation is that the universe is not transparent, and the light from distant stars is blocked by intermediate dark stars or absorbed by dust or gas, so that there is a bound on the distance from which light can reach the observer.

This would not resolve the paradox given the following argument: according to the laws of thermodynamics, the intermediate matter must eventually heat up (or cool down, if it was initially hotter) until it is in thermal equilibrium with the surrounding stars. Once this happens, the matter would then radiate the energy it receives from the stars at the same (average) temperature, so the sky would still appear uniformly bright.

Brightness

Suppose that the universe were not expanding, and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. Eventually, it would reach 3000 K (corresponding to a typical photon energy of 0.3 eV and so a frequency of 7.5×1013 Hz), and the photons would begin to be absorbed by the hydrogen plasma filling most of the universe, rendering outer space opaque. This maximal radiation density corresponds to about 1.2×1017 eV/m3 = 2.1×10−19 kg/m3, which is much greater than the observed value of 4.7×10−31 kg/m3.[2] So the sky is about fifty billion times darker than it would be if the universe were neither expanding nor too young to have reached equilibrium yet.

Fractal star distribution

A different resolution, which does not rely on the Big Bang theory, was first proposed by Carl Charlier in 1908 and later rediscovered by Benoît Mandelbrot in 1974. They both postulated that if the stars in the universe were distributed in a hierarchical fractal cosmology (e.g., similar to Cantor dust)—the average density of any region diminishes as the region considered increases—it would not be necessary to rely on the Big Bang theory to explain Olbers' paradox. This model would not rule out a Big Bang but would allow for a dark sky even if the Big Bang had not occurred.

Mathematically, the light received from stars as a function of star distance in a hypothetical fractal cosmos is:
\text{light}=\int_{r_0}^\infty L(r) N(r)\,dr
where:
r0 = the distance of the nearest star. r0 > 0;
r = the variable measuring distance from the Earth;
L(r) = average luminosity per star at distance r;
N(r) = number of stars at distance r.
The function of luminosity from a given distance, L(r)N(r), determines whether the light received is finite or infinite. For any luminosity from a given distance L(r)N(r) proportional to r^a, \text{light} is infinite for a ≥ −1 but finite for a < −1. So if L(r) is proportional to r^−2, then for \text{light} to be finite, N(r) must be proportional to r^b, where b < 1. For b = 1, the number of stars at a given radius is proportional to that radius. When integrated over the radius, this implies that for b = 1, the total number of stars is proportional to r^2. This would correspond to a fractal dimension of 2. Thus the fractal dimension of the universe would need to be less than 2 for this explanation to work.
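The convergence criterion can be checked directly. The small Python sketch below (my own illustration, taking r0 = 1 and a unit proportionality constant) evaluates the integral of r^a from r0 out to increasing cut-offs R: it keeps growing for a ≥ −1 but levels off at a finite value for a < −1.

```python
import math

# Convergence check: integral of r^a from r0 to R for several exponents a.
r0 = 1.0
for a in [-0.5, -1.0, -1.5, -2.5]:
    for R in [1e3, 1e6, 1e9]:
        if a == -1.0:
            total = math.log(R / r0)                         # logarithmic growth
        else:
            total = (R**(a + 1) - r0**(a + 1)) / (a + 1)     # power-law antiderivative
        print(f"a = {a:4.1f}  R = {R:.0e}  integral = {total:.4g}")
    print()
```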

This explanation is not widely accepted among cosmologists since the evidence suggests that the fractal dimension of the universe is at least 2.[11][12][13] Moreover, the majority of cosmologists take the cosmological principle as a given, which assumes that matter at the scale of billions of light years is distributed isotropically. Contrasting this, fractal cosmology requires anisotropic matter distribution at the largest scales.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...