Search This Blog

Sunday, May 10, 2015

Scientific American Demonstrates How To Commit Major Science Fraud


Original link:  https://stevengoddard.wordpress.com/2015/05/10/scientific-american-demonstrates-how-to-commit-major-science-fraud/

Scientific American published this map showing an increase in heavy rains since 1958, and blamed them on global warming.
Global Warming May Mean More Downpours like in Oklahoma – Scientific American
Now check out the spectacular fraud being committed. The author cherry-picked the start date of 1958 because it was the minimum in the US climate record. In fact, heavy rainfall events were much more common in the early 20th century, when temperatures were cooler.
There is no correlation between US temperature and heavy precipitation events
The author is engaged in scientific malfeasance, in an effort to mislead Scientific American readers and steer them toward the wrong conclusion.

Dark energy


From Wikipedia, the free encyclopedia

In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe.[1] Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. According to the Planck mission team, and based on the standard model of cosmology, on a mass–energy equivalence basis, the observable universe contains 26.8% dark matter, 68.3% dark energy (for a total of 95.1%) and 4.9% ordinary matter.[2][3][4][5] Again on a mass–energy equivalence basis, the density of dark energy (6.91 × 10⁻²⁷ kg/m³) is very low, much less than the density of ordinary matter or dark matter within galaxies. However, it comes to dominate the mass–energy of the universe because it is uniform across space.[6]

Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously,[7] and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy. Scalar fields that do change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is parameterized by the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today.

Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model" of cosmology because of its precise agreement with observations. Dark energy has been used as a crucial ingredient in a recent attempt to formulate a cyclic model for the universe.[8]

Nature of dark energy

Many things about the nature of dark energy remain matters of speculation. The evidence for dark energy is indirect but comes from three independent sources:
  • Distance measurements and their relation to redshift, which suggest the universe has expanded more in the last half of its life.[9]
  • The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature).
  • It can be inferred from measures of large scale wave-patterns of mass density in the universe.
Dark energy is thought to be very homogeneous, not very dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is quite rarefied (roughly 10⁻³⁰ g/cm³), it is unlikely to be detectable in laboratory experiments. Dark energy can have such a profound effect on the universe, making up 68% of universal density, only because it uniformly fills otherwise empty space. The two leading models are a cosmological constant and quintessence. Both models include the common characteristic that dark energy must have negative pressure.

Effect of dark energy: a small constant negative pressure of vacuum

Independently of its actual nature, dark energy would need to have a strong negative pressure (acting repulsively) in order to explain the observed acceleration of the expansion of the universe.

According to general relativity, the pressure within a substance contributes to its gravitational attraction for other things just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity.

In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure in all the universe causes an acceleration in universe expansion if the universe is already expanding, or a deceleration in universe contraction if the universe is already contracting. More exactly, the second time derivative of the universe scale factor, ä, is positive if the equation of state of the universe is such that w < −1/3 (see Friedmann equations).
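To make the w < −1/3 threshold concrete, here is a minimal Python sketch of the acceleration (second Friedmann) equation, ä/a = −(4πG/3)(ρ + 3p/c²) with p = wρc²; the density value is illustrative only, not a fitted cosmological parameter.

```python
import math

# Sketch of the acceleration equation: a_ddot/a = -(4*pi*G/3) * (rho + 3*p/c^2),
# with equation of state p = w * rho * c^2.  The density below is illustrative only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
RHO = 6.9e-27        # a representative total mass-energy density, kg/m^3

def scale_factor_acceleration(w, rho=RHO, a=1.0):
    """Return a_ddot for equation-of-state parameter w (p = w * rho * c^2)."""
    pressure = w * rho * C**2
    return -(4.0 * math.pi / 3.0) * G * (rho + 3.0 * pressure / C**2) * a

for w in (0.0, -1.0 / 3.0, -0.7, -1.0):
    a_ddot = scale_factor_acceleration(w)
    verdict = "accelerating" if a_ddot > 0 else "not accelerating"
    print(f"w = {w:+.2f}: a_ddot = {a_ddot:+.3e}  ({verdict})")
```

Only w strictly below −1/3 gives a positive ä; w = −1/3 is the boundary case, and w = −1 corresponds to a cosmological constant.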

This accelerating expansion effect is sometimes labeled "gravitational repulsion", which is a colorful but possibly confusing expression. In fact a negative pressure does not influence the gravitational interaction between masses—which remains attractive—but rather alters the overall evolution of the universe at the cosmological scale, typically resulting in the accelerating expansion of the universe despite the attraction among the masses present in the universe.

The acceleration is simply a function of dark energy density. Dark energy is persistent: its density remains constant (experimentally, within a factor of 1:10), i.e. it does not get diluted when space expands.

Accelerated expansion of spacetime

Evidence of existence

Supernovae


A Type Ia supernova (bright spot on the bottom-left) near a galaxy

In 1998, published observations of Type Ia supernovae ("one-A") by the High-Z Supernova Search Team[10] followed in 1999 by the Supernova Cosmology Project[11] suggested that the expansion of the universe is accelerating.[12] The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt and Adam G. Riess for their leadership in the discovery.[13][14]

Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model.[15] Some argue that the only indication for the existence of dark energy is observations of distance measurements and associated redshifts; cosmic microwave background anisotropies and baryon acoustic oscillations only show that redshifts are larger than expected from a "dusty" Friedmann–Lemaître universe and the locally measured Hubble constant.[16]

Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow the expansion history of the universe to be measured by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, the absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity.
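As an illustration of the standard-candle logic, the sketch below inverts the distance-modulus relation m − M = 5 log10(d / 10 pc). The peak absolute magnitude of roughly −19.3 for Type Ia supernovae is a commonly quoted figure, and the apparent magnitude is invented purely to show the arithmetic.

```python
def luminosity_distance_from_magnitudes(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance-modulus relation m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Type Ia supernovae peak near M ~ -19.3 (commonly quoted); the apparent magnitude
# below is made up purely for illustration.
m_observed = 24.0
M_sn_ia = -19.3
d_pc = luminosity_distance_from_magnitudes(m_observed, M_sn_ia)
print(f"distance ~ {d_pc:.3e} pc ~ {d_pc / 1.0e6:.0f} Mpc")
```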

Recent observations of supernovae are consistent with a universe made up 71.3% of dark energy and 27.4% of a combination of dark matter and baryonic matter.[17]

Cosmic microwave background


Estimated distribution of matter and energy in the universe

The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass/energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%.[15] The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter and 4.5% ordinary matter.[4] Work done in 2013 based on the Planck spacecraft observations of the CMB gave a more accurate estimate of 68.3% of dark energy, 26.8% of dark matter and 4.9% of ordinary matter.[18]
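A small numerical sketch of the bookkeeping described above: the critical density is ρ_c = 3H0²/(8πG), and the Planck percentages quoted in this section are then applied to it. The H0 value used is the Planck figure cited elsewhere in this article.

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), with fractions from the Planck 2013 figures above.
G = 6.674e-11                      # m^3 kg^-1 s^-2
H0 = 67.8 * 1000 / 3.086e22        # ~67.8 (km/s)/Mpc converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.2e} kg/m^3")

for name, fraction in [("dark energy", 0.683), ("dark matter", 0.268), ("ordinary matter", 0.049)]:
    print(f"{name:>15}: {fraction * rho_crit:.2e} kg/m^3")
```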

Large-scale structure

The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density.

A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence for the existence of dark energy, although the exact physics behind it remains unknown.[19][20] The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids roughly 150 Mpc in diameter, surrounded by galaxies, the voids were used as standard rulers to determine distances to galaxies as far as 2,000 Mpc (redshift 0.6), which allowed astronomers to determine more accurately the speeds of the galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10.[20] This provides a confirmation of cosmic acceleration independent of supernovae.

Late-time integrated Sachs-Wolfe effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe.[21] It was reported at high significance in 2008 by Ho et al.[22] and Giannantonio et al.[23]

Observational Hubble constant data

A new approach to testing for dark energy through observational Hubble constant data (OHD) has gained significant attention in recent years.[24][25][26][27] The Hubble constant is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers",[28] which in effect provide standard clocks in the universe. The core of this idea is the measurement of the differential age evolution of these cosmic chronometers as a function of redshift, which provides a direct estimate of the Hubble parameter H(z) = −1/(1+z) · dz/dt ≈ −1/(1+z) · Δz/Δt. The merit of this approach is clear: the reliance on a differential quantity, Δz/Δt, minimizes many common issues and systematic effects; and because it is a direct measurement of the Hubble parameter rather than of its integral (as with supernovae and baryon acoustic oscillations, BAO), it carries more information and is convenient computationally. For these reasons, it has been widely used to examine the accelerated cosmic expansion and to study the properties of dark energy.
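The following sketch shows the cosmic-chronometer arithmetic on two invented galaxy populations; the redshifts and ages are made up solely to illustrate H(z) ≈ −1/(1+z) · Δz/Δt and are not taken from any survey.

```python
# Cosmic-chronometer sketch: H(z) ~ -1/(1+z) * dz/dt, estimated from the age
# difference of two passively evolving galaxy populations.  All numbers invented.
GYR_IN_SECONDS = 3.156e16
KM_PER_MPC = 3.086e19

z1, age1_gyr = 0.40, 9.5      # hypothetical population at z = 0.40
z2, age2_gyr = 0.48, 8.8      # hypothetical, slightly younger population at z = 0.48

dz = z2 - z1
dt = (age2_gyr - age1_gyr) * GYR_IN_SECONDS   # negative: higher redshift, younger universe
z_mid = 0.5 * (z1 + z2)

H = -1.0 / (1.0 + z_mid) * dz / dt            # Hubble parameter in s^-1
print(f"H(z ~ {z_mid:.2f}) ~ {H * KM_PER_MPC:.0f} (km/s)/Mpc")
```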

Theories of explanation

Cosmological constant

Lambda, the letter that represents the cosmological constant

The simplest explanation for dark energy is that it is simply the "cost of having space": that is, a volume of space has some intrinsic, fundamental energy. This is the cosmological constant, sometimes called Lambda (hence Lambda-CDM model) after the Greek letter Λ, the symbol used to represent this quantity mathematically. Since energy and mass are related by E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum. In fact, most theories of particle physics predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation). The cosmological constant is estimated by cosmologists to be on the order of 10⁻²⁹ g/cm³, or about 10⁻¹²⁰ in reduced Planck units[citation needed]. Particle physics predicts a natural value of 1 in reduced Planck units, leading to a large discrepancy.

The cosmological constant has negative pressure equal to its energy density and so causes the expansion of the universe to accelerate. The reason a cosmological constant has negative pressure can be seen from classical thermodynamics: energy must be lost from inside a container to do work on the container. A change in volume dV requires work done equal to a change of energy −P dV, where P is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases (dV is positive), because the energy is equal to ρV, where ρ (rho) is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ.
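The same argument in compact form, as a sketch in units with c = 1 and following the paragraph above:

```latex
% First law for a comoving volume of vacuum with constant energy density rho:
% E = rho V  =>  dE = rho dV, while the work done on the surroundings gives dE = -P dV.
\begin{aligned}
  dE &= -P\,dV, \qquad E = \rho V,\ \rho = \text{const} \\
  \rho\,dV &= -P\,dV \\
  \Rightarrow\quad P &= -\rho \qquad (w \equiv P/\rho = -1)
\end{aligned}
```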

A major outstanding problem is that most quantum field theories predict a huge cosmological constant from the energy of the quantum vacuum, more than 100 orders of magnitude too large.[7] This would need to be cancelled almost, but not exactly, by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero,[citation needed] which does not help because supersymmetry must be broken. The present scientific consensus amounts to extrapolating the empirical evidence where it is relevant to predictions, and fine-tuning theories until a more elegant solution is found. Technically, this amounts to checking theories against macroscopic observations. Unfortunately, because the known error margin in the constant says more about the fate of the universe than about its present state, many such "deeper" questions remain open.

In spite of its problems, the cosmological constant is in many respects the most economical solution to the problem of cosmic acceleration. One number successfully explains a multitude of observations. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.

Quintessence

In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.
No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time.[citation needed] Scalar fields are predicted by the standard model and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmic inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.

The cosmic coincidence problem asks why the cosmic acceleration began when it did. If cosmic acceleration began earlier in the universe, structures such as galaxies would never have had time to form and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called tracker behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.[citation needed]

In 2004, when scientists fit the evolution of dark energy to the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires at least two degrees of freedom in dark energy models; it is the so-called quintom scenario.

Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.

Alternative ideas

Some alternatives to dark energy aim to explain the observational data by a more refined use of established theories, focusing, for example, on the gravitational effects of density inhomogeneities, or on consequences of electroweak symmetry breaking in the early universe. If we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration.[29][30][31][32] A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble.[33][34][35]

Another class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. An example of this type of theory is the theory of dark fluid. Another class of theories that unifies dark matter and dark energy is the covariant theories of modified gravity. These theories alter the dynamics of spacetime so that the modified dynamics account for what has been attributed to the presence of dark energy and dark matter.[36]

A 2011 paper in the journal Physical Review D by Christos Tsagas, a cosmologist at Aristotle University of Thessaloniki in Greece, argued that the accelerated expansion of the universe is likely an illusion caused by our motion relative to the rest of the universe. The paper cites data showing that the 2.5-billion-light-year-wide region of space we are inside of is moving very quickly relative to everything around it. If the theory is confirmed, then dark energy would not exist (but the "dark flow" still might).[37][38]

Some theorists think that dark energy and cosmic acceleration are a failure of general relativity on very large scales, larger than superclusters.[citation needed] However most attempts at modifying general relativity have turned out to be either equivalent to theories of quintessence, or inconsistent with observations.[citation needed] Other ideas for dark energy have come from string theory, brane cosmology and the holographic principle, but have not yet proved[citation needed] as compelling as quintessence and the cosmological constant.

On string theory, an article in the journal Nature described:
String theories, popular with many particle physicists, make it possible, even desirable, to think that the observable universe is just one of 10⁵⁰⁰ universes in a grander multiverse, says Leonard Susskind, a cosmologist at Stanford University in California. The vacuum energy will have different values in different universes, and in many or most it might indeed be vast. But it must be small in ours because it is only in such a universe that observers such as ourselves can evolve.
[39]
Paul Steinhardt in the same article criticizes string theory's explanation of dark energy stating "...Anthropics and randomness don't explain anything... I am disappointed with what most theorists are willing to accept".[39]

Another set of proposals is based on the possibility of a double metric tensor for space-time.[40][41] It has been argued that time reversed solutions in general relativity require such double metric for consistency, and that both dark matter and dark energy can be understood in terms of time reversed solutions of general relativity.[42]

It has been shown that if inertia is assumed to be due to the effect of horizons on Unruh radiation then this predicts galaxy rotation and a cosmic acceleration similar to that observed.[43]

Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of dark matter and baryons. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates.
Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).
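A small sketch of that scaling, using the density fractions quoted earlier in this article as starting values: matter dilutes in proportion to volume, while a cosmological constant does not dilute at all.

```python
# Matter density scales as 1/volume; a cosmological constant stays fixed.
# Present-day fractions of the critical density are taken from the figures quoted above.
RHO_MATTER_NOW = 0.268 + 0.049   # dark matter + ordinary matter
RHO_LAMBDA_NOW = 0.683           # dark energy (cosmological constant)

for volume_factor in (1, 2, 4, 8):          # 1 = today; 2 = volume has doubled; ...
    rho_matter = RHO_MATTER_NOW / volume_factor
    rho_lambda = RHO_LAMBDA_NOW             # unchanged by expansion
    share = rho_lambda / (rho_matter + rho_lambda)
    print(f"volume x{volume_factor}: matter {rho_matter:.3f}, "
          f"dark energy {rho_lambda:.3f}, dark-energy share {share:.0%}")
```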

If the acceleration continues indefinitely, the ultimate result will be that galaxies outside the local supercluster will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light.[44] This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.[45][46]
However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future[47] because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away.[46]

As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely[48][49] (see Future of an expanding universe). The Earth, the Milky Way, and the Virgo Supercluster[contradictory], however, would remain virtually undisturbed while the rest of the universe recedes and disappears from view. In this scenario, the local supercluster would ultimately suffer heat death, just as was thought for the flat, matter-dominated universe before measurements of cosmic acceleration.

There are some very speculative ideas about the future of the universe. One suggests that phantom energy causes divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a "Big Crunch".[50] Some scenarios, such as the cyclic model, suggest this could be the case. It is also possible the universe may never have an end and continue in its present state forever (see The Second Law as a law of disorder). While these ideas are not supported by observations, they are not ruled out.

History of discovery and previous speculation

The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity.[51] Not only was the mechanism an inelegant example of fine-tuning but it was also later realized that Einstein's static universe would actually be unstable because local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: If the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. More importantly, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and not static at all. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder.[52]

Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.

Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: notably, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al.[10] and in Perlmutter et al.,[11] and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters.

The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael Turner in 1998.[53]

As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%.[54] Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.

The UN is using climate change as a tool not an issue

Christiana Figueres

It’s a well-kept secret, but 95 per cent of the climate models we are told prove the link between human CO2 emissions and catastrophic global warming have been found, after nearly two decades of temperature stasis, to be in error. It’s not surprising.

We have been subjected to extravagance from climate catastrophists for close to 50 years.

In January 1970, Life magazine, based on “solid scientific evidence”, claimed that by 1985 air pollution would reduce the sunlight reaching the Earth by half. In fact, across that period sunlight fell by between 3 per cent and 5 per cent. In a 1971 speech, Paul Ehrlich said: “If I were a gambler I would take even money that ­England will not exist in the year 2000.”

Fast forward to March 2000 and David Viner, senior research scientist at the Climatic Research Unit, University of East Anglia, told The Independent, “Snowfalls are now a thing of the past.” In December 2010, the Mail Online reported, “Coldest December since records began as temperatures plummet to minus 10C bringing travel chaos across Britain”.

We’ve had our own busted predictions. Perhaps the most preposterous was climate alarmist Tim Flannery’s 2005 observation: “If the computer records are right, these drought conditions will become permanent in eastern Australia.” Subsequent rainfall and severe flooding have shown the records or his analysis are wrong. We’ve swallowed dud prediction after dud prediction. What’s more, the Intergovernmental Panel on Climate Change, which we were instructed was the gold standard on global warming, has been exposed repeatedly for ­mis­rep­resentation and shoddy methods.

Weather bureaus appear to have “homogenised” data to suit narratives. NASA’s claim that 2014 was the warmest year on record was revised, after challenge, to only 38 per cent probability. Extreme weather events, once blamed on global warming, no longer are, as their frequency and intensity decline.

Why then, with such little evidence, does the UN insist the world spend hundreds of billions of dollars a year on futile climate change policies? Perhaps Christiana Figueres, executive secretary of the UN’s Framework on Climate Change has the answer?

In Brussels last February she said, “This is the first time in the history of mankind that we are setting ourselves the task of intentionally, within a defined period of time, to change the economic development model that has been reigning for at least 150 years since the Industrial Revolution.”

In other words, the real agenda is concentrated political authority. Global warming is the hook.

Figueres is on record saying democracy is a poor political system for fighting global warming. Communist China, she says, is the best model. This is not about facts or logic. It’s about a new world order under the control of the UN. It is opposed to capitalism and freedom and has made environmental catastrophism a household topic to achieve its objective.

Figueres says that, unlike the Industrial Revolution, “This is a centralised transformation that is taking place.” She sees the US partisan divide on global warming as “very detrimental”. Of course. In her authoritarian world there will be no room for debate or ­disagreement.

Make no mistake, climate change is a must-win battlefield for authoritarians and fellow travellers. As Timothy Wirth, president of the UN Foundation, says: “Even if the ­(climate change) theory is wrong, we will be doing the right thing in terms of economic and environmental policy.”

Having gained so much ground, eco-catastrophists won’t let up. After all, they have captured the UN and are extremely well funded. They have a hugely powerful ally in the White House. They have successfully enlisted compliant academics and an obedient and gullible mainstream media (the ABC and Fairfax in Australia) to push the scriptures regardless of evidence.

They will continue to present the climate change movement as an independent, spontaneous consensus of concerned scientists, politicians and citizens who believe human activity is “extremely likely” to be the dominant cause of global warming. (“Extremely likely” is a scientific term?)

And they will keep mobilising public opinion using fear and appeals to morality. UN support will be assured through promised wealth redistribution from the West, even though its anti-growth policy prescriptions will needlessly prolong poverty, hunger, sickness and illiteracy for the world’s poorest.

Figueres said at a climate ­summit in Melbourne recently that she was “truly counting on Australia’s leadership” to ensure most coal stayed in the ground.

Hopefully, like India’s Prime Minister Narendra Modi, Tony Abbott isn’t listening. India knows the importance of cheap energy and is set to overtake China as the world’s leading importer of coal. Even Germany is about to commission the most coal-fired power stations in 20 years.

There is a real chance Figueres and those who share her centralised power ambitions will succeed. As the UN’s December climate change conference in Paris approaches, Australia will be pressed to sign even more futile job-destroying climate change treaties.

Resisting will be politically difficult. But resist we should. We are already paying an unnecessary social and economic price for empty gestures. Enough is enough.

Maurice Newman is chairman of the Prime Minister’s Business Advisory Council. The views expressed here are his own.

6 Things You Probably Didn’t Know About Monsanto

Original link:  http://www.forwardprogressives.com/6-things-didnt-know-monsanto/ 
Seed manufacturer Monsanto Company has been the target of a lot of criticism over the past few years, including a couple of articles that I wrote when I first started writing for Forward Progressives. In 2013, the first annual March Against Monsanto took place, supposedly in response to the failure of California Proposition 37 in 2012, which would have mandated labeling of foods that came from seeds that were genetically enhanced.

After a couple of articles on the subject in which I expressed concern over certain Monsanto practices, I was urged by people who have a background in science to “do some research” – so since then I’ve spent literally hundreds of hours researching not only Monsanto itself, but GE technology as well. As a result of this research, I came to the conclusion that Monsanto – and genetic engineering of seed technology – isn’t the horrible Frankenstein experiment the March Against Monsanto crowd would have people believe.

Here are a few of the things that I’ve learned that I want to share with you.

6. You’ll often hear about how Monsanto is suing farmers for alleged cross-contamination. However, out of the hundreds of thousands of farmers the company sells seed to annually, they’ve only sued 144 between 1997 and 2010 and that was for violating their patent rights or contract with the company. The company also notes that out of all of those lawsuits, only 9 have gone to trial and any recovered funds are donated.
Even though Monsanto's policy of enforcing patents might seem strict, other companies that sell biotechnology-enhanced seeds enforce their patents too. In other words, it's not some evil plot, it's simply a business being a business. "Monsanto is not the only seed company that enforces its intellectual property rights," Baucum said. "Pioneer, Agripro, Syngenta, all these companies have engaged in enforcement actions against other people who had violated their rights in seed and seed products they're creating." Baucum also said people should weigh the small number of lawsuits against the "hundreds of thousands of people" to whom the company has licensed seed over the past ten years.
Overall, both Baucum and Reat agree growers are usually more than willing to settle matters with Monsanto representatives in a polite, respectable way out-of-court. “A lot of times growers are worried that Monsanto is going to take their farm, but we will do everything possible to reach a settlement in these matters,” Reat said.
Whether the farmer settles directly with Monsanto, or the case goes to trial, the proceeds are donated to youth leadership initiatives including scholarship programs. (Source)
5. Monsanto is not the only seed company out there that uses biotechnology to modify seed lines to create plants that are more resistant to drought and pests. Dow, Syngenta and Pioneer are just a few other companies that do the same thing, but you will probably never hear a March Against Monsanto activist talk about them. I wonder why that is?

4. Monsanto received a 100 percent rating from the Human Rights campaign for LGBT equality in the workplace in 2013, and this wasn’t a one-time fluke either.
It’s the fourth consecutive time the company has been designated a “Best Places to Work for LGBT Equality” by the Human Rights Campaign.
The campaign’s Corporate Equality Index rates companies based on LGBT-friendly policies and procedures. Monsanto, for example, offers domestic partner and transgender-inclusive health care coverage to employees.
“We are proud of our company’s diversity and our focus on inclusion to insure that every voice is heard and every person is treated equally as these are critical to our success,” said Nicole Ringenberg, Monsanto vice president and controller, as well as the executive sponsor for the company’s LGBT employee network, Encompass. “We’re thrilled to share the news that we are being recognized again by the Human Rights Campaign.”
3. Monsanto and GE technology have often been blamed for the decline of the monarch butterfly, but the actual decline of the butterfly is due to farming practices which have killed off a lot of the plant they depend on, the milkweed.
Milkweed is the only plant on which monarch butterflies will lay their eggs, and it is the primary food source for monarch caterpillars. Despite its necessity to the species, the plant decreased 21 percent in the United States between 1995 and 2013. Scientists, conservationists, and butterfly enthusiasts are encouraging people to grow the plant in their own yards and gardens.
Monsanto has since pledged $4 million to restore the habitat needed for monarch butterflies and is encouraging people to leave patches of milkweed intact whenever possible.

2. Monsanto is often vilified by Big Organic activists (Yes, organic is a very real industry with a global market of $63 billion. Been to Whole Foods lately?) as trying to starve or poison the world, but they’ve actually done a lot to combat hunger and promote agriculture in the developing world, including countries like Bangladesh.
Monsanto actually supports common sense labeling laws, but does not agree with labels lobbied for by the organic industry which attempts to vilify GE technology despite the absence of any scientifically proven risks – and no, the retracted Seralini study doesn’t count.

1. Monsanto doesn’t control the United States government, including the FDA. You’re thinking of defense contractors, the oil industry, and Wall Street. While it is true they’re a multi-billion dollar company and may have some lobbyists, they pale in comparison to companies like Exxon, Lockheed Martin, Verizon, or Goldman Sachs.

In the interest of fairness, Monsanto isn’t a perfect company. In their past, they’ve been involved in lawsuits over PCBs contaminating creeks from their chemical division Solutia, which was spun off in 1997 and is now owned by the Eastman Chemical Company (which itself was spun off from Eastman Kodak in 1994). Another surprising fact is that their transition from a chemical company that notoriously produced Agent Orange for the United States (along with other companies) as well as some other environmentally-damaging products, to a bio-tech corporation was partially steered by one Mitt Romney who worked for Bain Capital at the time. In other words, they’ve moved from a polluting chemical giant to a player in the green biotech world along with companies like Dow, Pioneer, Syngenta and others with the help of a former presidential candidate.

Many are also concerned about the active ingredient in Monsanto’s Roundup weed killer, saying it has been linked to cancer – but there’s more to that story as well. The U.S. Environmental Protection Agency’s official stance currently is that there’s no evidence that glyphosate can be linked to cancer. However, the UN’s International Agency for Research on Cancer declared in March that glyphosate “probably” raises the risk of cancer in people exposed. What’s important to note here is that the levels of exposure according to the UN’s findings have to be extremely high and sustained over some period of time, which means farmworkers are the main group who would be most at risk – if there is a risk. Due to the new findings from the UN, the EPA is reviewing glyphosate’s risks and expects to release a new assessment later this year, which could include new restrictions on use but will not call for an outright ban on the chemical.

Another common claim by the organics industry and their blogs is that GMOs are killing off the bees. That claim is false. Neonicotinoid pesticides (which are not a feature of GE seeds) have been implicated as a possible culprit in colony collapse along with a variety of parasites and fungi. It’s also worth pointing out that Rotenone, a similar pesticide based on nicotine, has been used for decades in the organic industry and has also likely killed its fair share of pollinating insects. When I was a kid and we did organic farming, we used Rotenone pretty heavily on crops and nobody ever discussed how toxic the pesticide could be to the ecosystem, including bees.

Now, I understand that the vast majority of my audience is left of center and most of them laugh at conservatives who believe climate change is a liberal conspiracy, despite the fact that science has shown over and over again that climate change is a very real thing. What is very troubling is that these same people who believe Fox News viewers are idiots for denying climate change or evolution deny science themselves when it comes to topics like GE technology or even vaccines. It is also important to point out that corporations can and do change over time based on science, profit, and public image. That’s why Monsanto isn’t producing industrial insulators these days; it has forgone PCBs and is designing new strains of vegetables, including organic vegetables using traditional cross-breeding methods.

I support sustainable, locally sourced produce and independent farmers wholeheartedly every chance I get. Trust me, I’d much rather spend my money on food where I can go see the farm from which it came, but that’s because I prefer food that hasn’t been in a storage locker for months or possibly produced in countries using slave labor. However, it’s arrogant and ignorant to insist that farmers and consumers in developing countries follow first world ideals, which aren’t based on ethical concerns, but rather on the inability of some to understand science, business law, or how capitalism works.

There are important things for us as liberals and progressives to work on, including making sure that all Americans have access to fresh, nutritious food – but demonizing a company out of uninformed fear peddled by conspiracy nuts and snake oil salesmen like Dr. Oz takes us backwards, not forward.

Saturday, May 9, 2015

Hubble's law


From Wikipedia, the free encyclopedia

Hubble's law is the name for the observation in physical cosmology that: (1) objects observed in deep space (extragalactic space, ~10 megaparsecs or more) are found to have a Doppler shift interpretable as relative velocity away from the Earth; and (2) that this Doppler-shift-measured velocity, of various galaxies receding from the Earth, is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away.[1][2] This is normally interpreted as a direct, physical observation of the expansion of the spatial volume of the observable universe.[3]

The motion of astronomical objects due solely to this expansion is known as the Hubble flow.[4] Hubble's law is considered the first observational basis for the expanding space paradigm and today serves as one of the pieces of evidence most often cited in support of the Big Bang model.

Although widely attributed to Edwin Hubble, the law was first derived from the general relativity equations by Georges Lemaître in a 1927 article where he proposed the expansion of the universe and suggested an estimated value of the rate of expansion, now called the Hubble constant.[5][6][7][8][9][10] Two years later Edwin Hubble confirmed the existence of that law and determined a more accurate value for the constant that now bears his name.[11] Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917.[12]

The law is often expressed by the equation v = H0D, with H0 the constant of proportionality (Hubble constant) between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its velocity v (i.e. the derivative of proper distance with respect to cosmological time coordinate; see Uses of the proper distance for some discussion of the subtleties of this definition of 'velocity'). The SI unit of H0 is s⁻¹ but it is most frequently quoted in (km/s)/Mpc, thus giving the speed in km/s of a galaxy 1 megaparsec (3.09×10¹⁹ km) away. The reciprocal of H0 is the Hubble time.
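A quick unit-conversion sketch, using the Planck value of H0 quoted in the table below and the conversion factor 3.09×10¹⁹ km/Mpc given above.

```python
# Convert H0 from (km/s)/Mpc to SI units (s^-1) and form the Hubble time 1/H0.
KM_PER_MPC = 3.09e19              # kilometres in one megaparsec
SECONDS_PER_YEAR = 365.25 * 24 * 3600

H0_kms_per_mpc = 67.8             # Planck value quoted in the table below
H0_si = H0_kms_per_mpc / KM_PER_MPC

print(f"H0 = {H0_si:.3e} s^-1")
print(f"Hubble time 1/H0 = {1.0 / H0_si / SECONDS_PER_YEAR / 1e9:.1f} billion years")
print(f"Hubble's law: v = {H0_kms_per_mpc * 100:.0f} km/s for a galaxy at D = 100 Mpc")
```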

Observed values

Date published | Hubble constant ((km/s)/Mpc) | Observer | Citation | Remarks / methodology
2013-03-21 | 67.80±0.77 | Planck Mission | [13][14][15][16][17] | The ESA Planck Surveyor was launched in May 2009. Over a four-year period, it performed a significantly more detailed investigation of cosmic microwave radiation than earlier investigations using HEMT radiometers and bolometer technology to measure the CMB at a smaller scale than WMAP. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's data including a new CMB all-sky map and their determination of the Hubble constant.
2012-12-20 | 69.32±0.80 | WMAP (9 years) | [18] |
2010 | 70.4 +1.3/−1.4 | WMAP (7 years), combined with other measurements | [19] | These values arise from fitting a combination of WMAP and other cosmological data to the simplest version of the ΛCDM model. If the data are fit with more general versions, H0 tends to be smaller and more uncertain: typically around 67±4 (km/s)/Mpc, although some models allow values near 63 (km/s)/Mpc.[20]
2010 | 71.0±2.5 | WMAP only (7 years) | [19] |
2009-02 | 70.1±1.3 | WMAP (5 years), combined with other measurements | [21] |
2009-02 | 71.9 +2.6/−2.7 | WMAP only (5 years) | [21] |
2006-08 | 77.6 +14.9/−12.5 | Chandra X-ray Observatory | [22] |
2007 | 70.4 +1.5/−1.6 | WMAP (3 years) | [23] |
2001-05 | 72±8 | Hubble Space Telescope | [24] | This project established the most precise optical determination, consistent with a measurement of H0 based upon Sunyaev-Zel'dovich effect observations of many galaxy clusters having a similar accuracy.
prior to 1996 | 50–90 (est.) | | [25] |
1958 | 75 (est.) | Allan Sandage | [26] | This was the first good estimate of H0, but it would be decades before a consensus was achieved.

Discovery

A decade before Hubble made his observations, a number of physicists and mathematicians had established a consistent theory of the relationship between space and time by using Einstein's field equations of general relativity. Applying the most general principles to the nature of the universe yielded a dynamic solution that conflicted with the then-prevailing notion of a static universe.

FLRW equations

In 1922, Alexander Friedmann derived his Friedmann equations from Einstein's field equations, showing that the Universe might expand at a rate calculable by the equations.[27] The parameter used by Friedmann is known today as the scale factor which can be considered as a scale invariant form of the proportionality constant of Hubble's law. Georges Lemaître independently found a similar solution in 1927. The Friedmann equations are derived by inserting the metric for a homogeneous and isotropic universe into Einstein's field equations for a fluid with a given density and pressure. This idea of an expanding spacetime would eventually lead to the Big Bang and Steady State theories of cosmology.

Lemaître's equation

In 1927, two years before Hubble published his own article, the Belgian priest and astronomer Georges Lemaître was the first to publish research deriving what is now known as Hubble's Law. Unfortunately, for reasons unknown, "all discussions of radial velocities and distances (and the very first empirical determination of "H") were omitted".[28] It is speculated that these omissions were deliberate. According to the Canadian astronomer Sidney van den Bergh, "The 1927 discovery of the expansion of the Universe by Lemaitre was published in French in a low-impact journal. In the 1931 high-impact English translation of this article a critical equation was changed by omitting reference to what is now known as the Hubble constant. That the section of the text of this paper dealing with the expansion of the Universe was also deleted from that English translation suggests a deliberate omission by the unknown translator."[29]

Shape of the universe

Before the advent of modern cosmology, there was considerable talk about the size and shape of the universe. In 1920, the famous Shapley-Curtis debate took place between Harlow Shapley and Heber D. Curtis over this issue. Shapley argued for a small universe the size of the Milky Way galaxy and Curtis argued that the Universe was much larger. The issue was resolved in the coming decade with Hubble's improved observations.

Cepheid variable stars outside of the Milky Way

Edwin Hubble did most of his professional astronomical observing work at Mount Wilson Observatory, home to the world's most powerful telescope at the time. His observations of Cepheid variable stars in spiral nebulae enabled him to calculate the distances to these objects. Surprisingly, these objects were discovered to be at distances which placed them well outside the Milky Way. They continued to be called "nebulae" and it was only gradually that the term "galaxies" took over.

Combining redshifts with distance measurements


Fit of redshift velocities to Hubble's law.[30] Various estimates for the Hubble constant exist. The HST Key H0 Group fitted type Ia supernovae for redshifts between 0.01 and 0.1 to find that H0 = 71 ± 2 (statistical) ± 6 (systematic) km s⁻¹ Mpc⁻¹,[24] while Sandage et al. find H0 = 62.3 ± 1.3 (statistical) ± 5 (systematic) km s⁻¹ Mpc⁻¹.[31]

The parameters that appear in Hubble’s law, velocities and distances, are not directly measured. In reality we determine, say, a supernova brightness, which provides information about its distance, and the redshift z = ∆λ/λ of its spectrum of radiation. Hubble correlated brightness and parameter z.

Combining his measurements of galaxy distances with Vesto Slipher and Milton Humason's measurements of the redshifts associated with the galaxies, Hubble discovered a rough proportionality between redshift of an object and its distance. Though there was considerable scatter (now known to be caused by peculiar velocities – the 'Hubble flow' is used to refer to the region of space far enough out that the recession velocity is larger than local peculiar velocities), Hubble was able to plot a trend line from the 46 galaxies he studied and obtain a value for the Hubble constant of 500 km/s/Mpc (much higher than the currently accepted value due to errors in his distance calibrations).
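A minimal sketch of the same kind of straight-line fit, v = H0·D, with fabricated distance-velocity pairs (they are not Hubble's 1929 data); NumPy is assumed to be available.

```python
# Least-squares estimate of H0 from (distance, velocity) pairs: a line through the origin.
import numpy as np

distance_mpc = np.array([5.0, 10.0, 20.0, 35.0, 50.0, 80.0])        # fabricated distances
velocity_kms = np.array([380., 640., 1450., 2400., 3600., 5500.])   # fabricated, with scatter

# For v = H0 * D, the least-squares slope is sum(D*v) / sum(D^2)
H0_fit = np.sum(distance_mpc * velocity_kms) / np.sum(distance_mpc**2)
print(f"fitted H0 ~ {H0_fit:.1f} (km/s)/Mpc")
```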

At the time of discovery and development of Hubble's law, it was acceptable to explain redshift phenomenon as a Doppler shift in the context of special relativity, and use the Doppler formula to associate redshift z with velocity.
Today, the velocity-distance relationship of Hubble's law is viewed as a theoretical result with velocity to be connected with observed redshift not by the Doppler effect, but by a cosmological model relating recessional velocity to the expansion of the Universe. Even for small z the velocity entering the Hubble law is no longer interpreted as a Doppler effect, although at small z the velocity-redshift relation for both interpretations is the same.

Hubble Diagram

Hubble's law can be easily depicted in a "Hubble Diagram" in which the velocity (assumed approximately proportional to the redshift) of an object is plotted with respect to its distance from the observer.[32] A straight line of positive slope on this diagram is the visual depiction of Hubble's law.

Cosmological constant abandoned

After Hubble's discovery was published, Albert Einstein abandoned his work on the cosmological constant, which he had designed to modify his equations of general relativity so that they would produce a static solution; in their simplest form, the equations model either an expanding or a contracting universe.[33] After Hubble's discovery that the Universe was, in fact, expanding, Einstein called his faulty assumption that the Universe is static his "biggest mistake".[33] On its own, general relativity could predict the expansion of the Universe, which (through observations such as the bending of light by large masses, or the precession of the orbit of Mercury) could be experimentally observed and compared to his theoretical calculations using particular solutions of the equations he had originally formulated.
In 1931, Einstein made a trip to Mount Wilson to thank Hubble for providing the observational basis for modern cosmology.[34]

The cosmological constant has regained attention in recent decades as a hypothesis for dark energy.[35]

Interpretation


A variety of possible recessional velocity vs. redshift functions including the simple linear relation v = cz; a variety of possible shapes from theories related to general relativity; and a curve that does not permit speeds faster than light in accordance with special relativity. All curves are linear at low redshifts.

The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's Law as follows:
v = H0D
where
  • v is the recessional velocity, typically expressed in km/s.
  • H0 is Hubble's constant and corresponds to the value of H (often termed the Hubble parameter which is a value that is time dependent and which can be expressed in terms of the scale factor) in the Friedmann equations taken at the time of observation denoted by the subscript 0. This value is the same throughout the Universe for a given comoving time.
  • D is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in megaparsecs (Mpc), in the 3-space defined by the given cosmological time. (The recession velocity is just v = dD/dt.)
Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted, and is not established except for small redshifts.
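As a worked numerical sketch (assuming H0 ≈ 67.8 km/s/Mpc, the value used later in this article), a galaxy at a proper distance of 100 Mpc would have a recession velocity of roughly 6,800 km/s:

H0 = 67.8      # Hubble constant in km/s/Mpc (assumed value)
D = 100.0      # proper distance in Mpc
v = H0 * D     # recession velocity from Hubble's law
print(v)       # 6780.0 km/s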

For distances D larger than the radius of the Hubble sphere rHS , objects recede at a rate faster than the speed of light:
r_{HS} = \frac{c}{H_0} \ .
Since the Hubble "constant" is a constant only in space, not in time, the radius of the Hubble sphere may increase or decrease over various time intervals. The subscript '0' indicates the value of the Hubble constant today.[30] Current evidence suggests that the expansion of the Universe is accelerating (see Accelerating universe), meaning that, for any given galaxy, the recession velocity dD/dt is increasing over time as the galaxy moves to greater and greater distances; however, the Hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some fixed distance D and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.[37]
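For a sense of scale, a rough worked value of the Hubble sphere radius, again assuming H0 ≈ 67.8 km/s/Mpc, is
r_{HS} = \frac{c}{H_0} \approx \frac{299792\ \textrm{km/s}}{67.8\ \textrm{km/s/Mpc}} \approx 4.4\times 10^{3}\ \textrm{Mpc} \ ,
i.e. roughly 14 billion light years.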

Redshift velocity and recessional velocity

Redshift can be measured by determining the wavelength of a known transition, such as hydrogen α-lines for distant quasars, and finding the fractional shift compared to a stationary reference. Thus redshift is a quantity unambiguous for experimental observation. The relation of redshift to recessional velocity is another matter. For an extensive discussion, see Harrison.[38]
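For example, a minimal Python sketch of this measurement: if the hydrogen Hα line, with rest wavelength 656.28 nm, were observed at 720 nm (a hypothetical value), the fractional shift gives the redshift directly.

lambda_emitted = 656.28    # rest wavelength of the H-alpha line in nm
lambda_observed = 720.0    # hypothetical observed wavelength in nm
z = (lambda_observed - lambda_emitted) / lambda_emitted
print(z)                   # ~ 0.097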

Redshift velocity

The redshift z is often described as a redshift velocity, which is the recessional velocity that would produce the same redshift if it were caused by a linear Doppler effect (which, however, is not the case, as the shift is caused in part by a cosmological expansion of space, and because the velocities involved are too large to use a non-relativistic formula for Doppler shift). This redshift velocity can easily exceed the speed of light.[39] In other words, to determine the redshift velocity vrs, the relation:
 v_{rs} \equiv cz \ ,
is used.[40][41] That is, there is no fundamental difference between redshift velocity and redshift: they are rigidly proportional, and not related by any theoretical reasoning. The motivation behind the "redshift velocity" terminology is that the redshift velocity agrees with the velocity from a low-velocity simplification of the so-called Fizeau-Doppler formula[42]
z = \frac{\lambda_o}{\lambda_e}-1 = \sqrt{\frac{1+v/c}{1-v/c}}-1 \approx \frac{v}{c} \ .
Here, λo, λe are the observed and emitted wavelengths respectively. The "redshift velocity" vrs is not so simply related to real velocity at larger velocities, however, and this terminology leads to confusion if interpreted as a real velocity. Next, the connection between redshift or redshift velocity and recessional velocity is discussed. This discussion is based on Sartori.[43]
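Before turning to that, a small Python sketch comparing the two conventions makes the divergence at larger z explicit; the redshift values below are illustrative.

c = 299792.458  # speed of light in km/s
def redshift_velocity(z):
    # "Redshift velocity" as defined above: v_rs = c z; it can exceed c.
    return c * z
def doppler_velocity(z):
    # Velocity obtained by inverting the special-relativistic (Fizeau-Doppler)
    # formula z = sqrt((1 + v/c)/(1 - v/c)) - 1; it is always below c.
    return c * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)
for z in (0.01, 0.1, 1.0, 3.0):
    print(z, redshift_velocity(z), doppler_velocity(z))

At z = 0.01 the two agree to better than a percent; at z = 3 the redshift velocity is 3c, while the Doppler formula gives about 0.88c.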

Recessional velocity

Suppose R(t) is the scale factor of the Universe, which increases as the Universe expands in a manner that depends upon the cosmological model selected. Its meaning is that all measured distances D(t) between co-moving points increase proportionally to R. (The co-moving points are not moving relative to each other except as a result of the expansion of space.) In other words:
\frac {D(t)}{D(t_0)} = \frac {R(t)}{R(t_0)} \ ,
where t0 is some reference time. If light is emitted from a galaxy at time te and received by us at t0, it is red shifted due to the expansion of space, and this redshift z is simply:
z = \frac {R(t_0)}{R(t_e)} - 1 \ .
Suppose a galaxy is at distance D, and this distance changes with time at a rate dD/dt. We call this rate of recession the "recession velocity" vr:
v_r = \frac{dD}{dt} = \frac{\dot R}{R} D \ .
We now define the Hubble constant as
H \equiv \frac{\dot R}{R} \ ,
and discover the Hubble law:
 v_r = H D \ .
From this perspective, Hubble's law is a fundamental relation between (i) the recessional velocity contributed by the expansion of space and (ii) the distance to an object; the connection between redshift and distance is a crutch used to connect Hubble's law with observations. This law can be related to redshift z approximately by making a Taylor series expansion:
 z = \frac {R(t_0)}{R(t_e)} - 1 \approx \frac {R(t_0)} {R(t_0)\left(1+(t_e-t_0)H(t_0)\right)}-1 \approx (t_0-t_e)H(t_0) \ .
If the distance is not too large, all other complications of the model become small corrections and the time interval is simply the distance divided by the speed of light:
 z \approx (t_0-t_e)H(t_0) \approx \frac {D}{c} H(t_0) \ , or  cz \approx D H(t_0) = v_r \ .
According to this approach, the relation cz = vr is an approximation valid at low redshifts, to be replaced by a relation at large redshifts that is model-dependent. See velocity-redshift figure.
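A minimal Python sketch of the exact relation z = R(t0)/R(te) − 1, with purely illustrative scale-factor values:

def redshift_from_scale_factors(R_emit, R_now=1.0):
    # Exact cosmological redshift: 1 + z is the factor by which the Universe
    # has expanded between emission and observation of the light.
    return R_now / R_emit - 1.0
print(redshift_from_scale_factors(0.5))    # emitted when the Universe was half its present size -> z = 1
print(redshift_from_scale_factors(0.25))   # a quarter of its present size -> z = 3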

Observability of parameters

Strictly speaking, neither v nor D in the formula are directly observable, because they are properties now of a galaxy, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it.
For relatively nearby galaxies (redshift z much less than unity), v and D will not have changed much, and v can be estimated using the formula v = zc where c is the speed of light. This gives the empirical relation found by Hubble.

For distant galaxies, v (or D) cannot be calculated from z without specifying a detailed model for how H changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: (1+z) is the factor by which the Universe has expanded while the photon was travelling towards the observer.

Expansion velocity vs relative velocity

In using Hubble's law to determine distances, only the velocity due to the expansion of the Universe can be used.
Since gravitationally interacting galaxies move relative to each other independent of the expansion of the Universe, these relative velocities, called peculiar velocities, need to be accounted for in the application of Hubble's law.

The Finger of God effect is one result of this phenomenon. In systems that are gravitationally bound, such as galaxies or our planetary system, the expansion of space is a much weaker effect than the attractive force of gravity.

Idealized Hubble's Law

The mathematical derivation of an idealized Hubble's Law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated the theorem is this:
Any two points which are moving away from the origin, each along straight lines and with speed proportional to distance from the origin, will be moving away from each other with a speed proportional to their distance apart.
In fact this applies to non-Cartesian spaces as long as they are locally homogeneous and isotropic; specifically to the negatively- and positively-curved spaces frequently considered as cosmological models (see shape of the universe).
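A brief sketch of why the theorem holds: if two points have position vectors r1 and r2 measured from the origin and each moves radially outward with speed proportional to its distance, then
\dot{\mathbf{r}}_1 = H\,\mathbf{r}_1 \ , \quad \dot{\mathbf{r}}_2 = H\,\mathbf{r}_2 \quad\Rightarrow\quad \frac{d}{dt}\left(\mathbf{r}_2-\mathbf{r}_1\right) = H\left(\mathbf{r}_2-\mathbf{r}_1\right) \ ,
so their separation also grows at a rate proportional to its own length, independently of where the origin was placed.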

An observation stemming from this theorem is that seeing objects recede from us on Earth is not an indication that Earth is near to a center from which the expansion is occurring, but rather that every observer in an expanding universe will see objects receding from them.

Ultimate fate and age of the universe


The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterized by values of density parameters (ΩM for matter and ΩΛ for dark energy). A "closed universe" with ΩM > 1 and ΩΛ = 0 comes to an end in a Big Crunch and is considerably younger than its Hubble age. An "open universe" with ΩM ≤ 1 and ΩΛ = 0 expands forever and has an age that is closer to its Hubble age. For the accelerating universe with nonzero ΩΛ that we inhabit, the age of the universe is coincidentally very close to the Hubble age.
The value of the Hubble parameter changes over time, either increasing or decreasing depending on the value of the so-called deceleration parameter q, which is defined by
q = -\left(1+\frac{\dot H}{H^2}\right).
In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang. A non-zero, time-dependent value of q simply requires integration of the Friedmann equations backwards from the present time to the time when the comoving horizon size was zero.

It was long thought that q was positive, indicating that the expansion is slowing down due to gravitational attraction. This would imply an age of the Universe less than 1/H (which is about 14 billion years). For instance, a value for q of 1/2 (once favoured by most theorists) would give the age of the Universe as 2/(3H). The discovery in 1998 that q is apparently negative means that the Universe could actually be older than 1/H. However, estimates of the age of the universe are very close to 1/H.
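A minimal Python sketch of these limiting-case ages, assuming H0 ≈ 67.8 km/s/Mpc and standard unit conversions:

KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16       # seconds in one billion years
H0 = 67.8                    # Hubble constant in km/s/Mpc (assumed value)
hubble_time_gyr = KM_PER_MPC / H0 / SEC_PER_GYR
print(hubble_time_gyr)              # 1/H   ~ 14.4 Gyr (age if q = 0)
print(2.0 / 3.0 * hubble_time_gyr)  # 2/(3H) ~ 9.6 Gyr (age if q = 1/2)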

Olbers' paradox

The expansion of space summarized by the Big Bang interpretation of Hubble's Law is relevant to the old conundrum known as Olbers' paradox: if the Universe were infinite, static, and filled with a uniform distribution of stars, then every line of sight in the sky would end on a star, and the sky would be as bright as the surface of a star.
However, the night sky is largely dark. Since the 17th century, astronomers and other thinkers have proposed many possible ways to resolve this paradox, but the currently accepted resolution depends in part on the Big Bang theory and in part on the Hubble expansion. In a universe that exists for a finite amount of time, only the light of a finite number of stars has had a chance to reach us yet, and the paradox is resolved. Additionally, in an expanding universe, distant objects recede from us, which causes the light emanating from them to be redshifted and diminished in brightness.[44]

Dimensionless Hubble parameter

Instead of working with Hubble's constant, a common practice is to introduce the dimensionless Hubble parameter, usually denoted by h, and to write the Hubble constant H0 as h × 100 km s−1 Mpc−1, with all the relative uncertainty in the value of H0 then relegated to h.[45] Currently h = 0.678. This should not be confused with the dimensionless value of Hubble's constant, usually expressed in terms of Planck units, with current value of H0×tP = 1.18 × 10−61.
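For example, with h = 0.678 this convention simply gives back H_0 = 0.678 \times 100\ \textrm{km s}^{-1}\,\textrm{Mpc}^{-1} = 67.8\ \textrm{km s}^{-1}\,\textrm{Mpc}^{-1}, the value used elsewhere in this article.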

Determining the Hubble constant


Value of the Hubble Constant including measurement uncertainty for recent surveys.[13]

The value of the Hubble constant is estimated by measuring the redshift of distant galaxies and then determining the distances to the same galaxies (by some other method than Hubble's law). Uncertainties in the physical assumptions used to determine these distances have caused varying estimates of the Hubble constant.

Earlier measurement and discussion approaches

For most of the second half of the 20th century the value of H_0 was estimated to be between 50 and 90 (km/s)/Mpc.

The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50.[25] In 1996, a debate moderated by John Bahcall between Gustav Tammann and Sidney van den Bergh was held in similar fashion to the earlier Shapley-Curtis debate over these two competing values.

This previously wide variance in estimates was partially resolved with the introduction of the ΛCDM model of the Universe in the late 1990s. With the ΛCDM model, observations of high-redshift clusters at X-ray and microwave wavelengths using the Sunyaev-Zel'dovich effect, measurements of anisotropies in the cosmic microwave background radiation, and optical surveys all gave a value of around 70 for the constant.[citation needed]

More recent measurements from the Planck mission indicate a lower value of around 67.[13]

Acceleration of the expansion

A value for q measured from standard candle observations of Type Ia supernovae, which was determined in 1998 to be negative, surprised many astronomers with the implication that the expansion of the Universe is currently "accelerating"[46] (although the Hubble factor is still decreasing with time, as mentioned above in the Interpretation section; see the articles on dark energy and the ΛCDM model).

Derivation of the Hubble parameter

Start with the Friedmann equation:
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2 = \frac{8 \pi G}{3}\rho - \frac{kc^2}{a^2}+ \frac{\Lambda c^2}{3},
where H is the Hubble parameter, a is the scale factor, G is the gravitational constant, k is the normalised spatial curvature of the Universe and equal to −1, 0, or +1, and \Lambda is the cosmological constant.

Matter-dominated universe (with a cosmological constant)

If the Universe is matter-dominated, then the mass density of the Universe \rho can just be taken to include matter so
\rho = \rho_m(a) = \frac{\rho_{m_{0}}}{a^3},
where \rho_{m_{0}} is the density of matter today. We know that for nonrelativistic particles the mass density decreases in proportion to the inverse volume of the Universe, so the equation above must be true. We can also define (see density parameter for \Omega_m)
\rho_c = \frac{3 H^2}{8 \pi G};
\Omega_m \equiv \frac{\rho_{m_{0}}}{\rho_c} = \frac{8 \pi G}{3 H_0^2}\rho_{m_{0}};
so \rho=\rho_c \Omega_m /a^3. Also, by definition,
\Omega_k \equiv \frac{-kc^2}{(a_0H_0)^2}
and
\Omega_{\Lambda} \equiv \frac{\Lambda c^2}{3H_0^2},
where the subscript nought refers to the values today, and a_0=1. Substituting all of this into the Friedmann equation at the start of this section and replacing a with a=1/(1+z) gives
H^2(z)= H_0^2 \left( \Omega_M (1+z)^{3} + \Omega_k (1+z)^{2} + \Omega_{\Lambda} \right).
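A minimal Python sketch of this expression; the density parameters here are illustrative assumptions (roughly Ωm ≈ 0.3, ΩΛ ≈ 0.7, spatially flat), not values asserted by this article.

import math
def hubble_parameter(z, H0=67.8, Om=0.3, Ok=0.0, OL=0.7):
    # H(z) in km/s/Mpc for a universe containing matter, spatial curvature
    # and a cosmological constant, per the Friedmann equation above.
    return H0 * math.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + OL)
print(hubble_parameter(0.0))   # equals H0 today
print(hubble_parameter(1.0))   # expansion rate at z = 1, ~ 119 km/s/Mpc here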

Matter- and dark energy-dominated universe

If the Universe is both matter-dominated and dark energy-dominated, then the above equation for the Hubble parameter will also be a function of the equation of state of dark energy. So now:
\rho = \rho_m (a)+\rho_{de}(a),
where \rho_{de} is the mass density of the dark energy. By definition, an equation of state in cosmology is P=w\rho c^2, and if we substitute this into the fluid equation, which describes how the mass density of the Universe evolves with time,
\dot{\rho}+3\frac{\dot{a}}{a}\left(\rho+\frac{P}{c^2}\right)=0;
\frac{d\rho}{\rho}=-3\frac{da}{a}\left(1+w\right).
If w is constant,
\ln{\rho}=-3\left(1+w\right)\ln{a} + \textrm{const.};
\rho \propto a^{-3\left(1+w\right)}.
Therefore, for dark energy with a constant equation of state w, \rho_{de}(a)= \rho_{de0}a^{-3\left(1+w\right)}. If we substitute this into the Friedmann equation in a similar way as before, but this time set k=0 (which assumes a spatially flat universe; see Shape of the Universe), then
H^2(z)= H_0^2 \left( \Omega_M (1+z)^{3} + \Omega_{de}(1+z)^{3\left(1+w \right)} \right).
If dark energy does not have a constant equation-of-state w, then
\rho_{de}(a)= \rho_{de0}e^{-3\int\frac{da}{a}\left(1+w(a)\right)},
and to solve this we must parametrize w(a), for example if w(a)=w_0+w_a(1-a), giving
H^2(z)= H_0^2 \left( \Omega_M a^{-3} + \Omega_{de}a^{-3\left(1+w_0 +w_a \right)}e^{-3w_a(1-a)} \right).
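A sketch extending the same idea to a time-varying equation of state with the w0-wa parametrization used above; the parameter values are again illustrative assumptions.

import math
def hubble_parameter_w0wa(z, H0=67.8, Om=0.3, Ode=0.7, w0=-1.0, wa=0.0):
    # Flat universe with matter plus dark energy obeying w(a) = w0 + wa*(1 - a).
    a = 1.0 / (1.0 + z)
    de = Ode * a**(-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
    return H0 * math.sqrt(Om * a**-3.0 + de)
print(hubble_parameter_w0wa(1.0))                    # w0 = -1, wa = 0 reproduces the cosmological-constant case
print(hubble_parameter_w0wa(1.0, w0=-0.9, wa=0.1))   # a mildly evolving dark-energy model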
Other ingredients have been formulated recently.[47][48][49] In certain eras, where high-energy experiments appear to give reliable access to the properties of the matter dominating the background geometry (by this we mean the quark-gluon plasma era), transport properties have been taken into consideration. The evolution of the Hubble parameter and of other essential cosmological parameters in such a background is then found to be considerably (non-negligibly) different from their evolution in an ideal, gaseous, non-viscous background.

Units derived from the Hubble constant

Hubble time

The Hubble constant H_0 has units of inverse time; its inverse, the Hubble time t_H, is t_H \equiv \frac{1}{H_0} = \frac{1}{67.8\ \textrm{km/s/Mpc}} \approx 4.55\cdot 10^{17}\ \textrm{s}, or about 14.4 billion years.
This is somewhat longer than the age of the universe of 13.8 billion years. The Hubble time is the age the universe would have had if the expansion had been linear; it differs from the real age of the universe because the expansion is not linear.

We currently appear to be approaching a period where the expansion is exponential due to the increasing dominance of vacuum energy. In this regime, the Hubble parameter is constant, and the universe grows by a factor e each Hubble time:
H \equiv \frac{\dot a}{a} = \textrm{const.} \Rightarrow a \propto \textrm{e}^{Ht} = \textrm{e}^{t/t_H}
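As a small illustrative sketch of this exponential regime (the elapsed time chosen is arbitrary):

import math
t_H = 14.4     # Hubble time in Gyr, from the value above
dt = 28.8      # elapsed time in Gyr; two Hubble times, chosen for illustration
print(math.exp(dt / t_H))   # ~ 7.39: distances grow by a factor e^2 over two Hubble times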
Over long periods of time, the dynamics are complicated by general relativity, dark energy, inflation, etc., as explained above.

Hubble length

The Hubble length or Hubble distance is a unit of distance in cosmology, defined as c/H0, the speed of light multiplied by the Hubble time. It is equivalent to about 4,420 million parsecs or 14.4 billion light years. (The numerical value of the Hubble length in light years is, by definition, equal to that of the Hubble time in years.) The Hubble distance would be the distance between the Earth and the galaxies which are currently receding from us at the speed of light, as can be seen by substituting D = c/H0 into the equation for Hubble's law, v = H0D.

Hubble volume

The Hubble volume is sometimes defined as a volume of the Universe with a comoving size of c/H0. The exact definition varies: it is sometimes defined as the volume of a sphere with radius c/H0, or alternatively, a cube of side c/H0. Some cosmologists even use the term Hubble volume to refer to the volume of the observable universe, although this has a radius approximately three times larger.
