
Saturday, May 30, 2015

Fluid dynamics


From Wikipedia, the free encyclopedia


Typical aerodynamic teardrop shape, assuming a viscous medium passing from left to right. The diagram shows the pressure distribution as the thickness of the black line and the velocity in the boundary layer as the violet triangles. The green vortex generators prompt the transition to turbulent flow and prevent back-flow, also called flow separation, from the high-pressure region in the back. The surface in front is as smooth as possible or even employs shark-like skin, as any turbulence here will reduce the energy of the airflow. The truncation on the right, known as a Kammback, also prevents backflow from the high-pressure region in the back across the spoilers to the convergent part.

In physics, fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the natural science of fluids (liquids and gases) in motion. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation. Some of its principles are even used in traffic engineering, where traffic is treated as a continuous fluid, and crowd dynamics.

Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.

Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.[1]

Equations of fluid dynamics

The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum (also known as Newton's Second Law of Motion), and conservation of energy (also known as First Law of Thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds Transport Theorem.

In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption considers fluids to be continuous, rather than discrete. Consequently, properties such as density, pressure, temperature, and flow velocity are taken to be well-defined at infinitesimally small points, and are assumed to vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.

For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations, a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve. Some of the simplifications allow appropriate fluid dynamics problems to be solved in closed form.[citation needed]

In addition to the mass, momentum, and energy conservation equations, a thermodynamical equation of state giving the pressure as a function of other thermodynamic variables for the fluid is required to completely specify the problem. An example of this would be the perfect gas equation of state:
p= \frac{\rho R_u T}{M}
where p is pressure, ρ is density, Ru is the gas constant, M is molar mass and T is temperature.
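As a quick numerical illustration, the perfect gas equation of state can be rearranged for density; the sea-level air values in this sketch are illustrative assumptions, not taken from the text:

```python
R_U = 8.314          # universal gas constant, J/(mol*K)
M_AIR = 0.028964     # molar mass of dry air, kg/mol (an assumed example fluid)

def density(p, T, M=M_AIR):
    """Rearranged equation of state: rho = p * M / (R_u * T)."""
    return p * M / (R_U * T)

# Standard sea-level conditions (assumed): 101325 Pa, 288.15 K.
print(round(density(101325.0, 288.15), 3))   # 1.225 kg/m^3
```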

Conservation laws

Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. Mathematical formulations of these conservation laws may be interpreted by considering the concept of a control volume. A control volume is a specified volume in space through which fluid can flow in and out. Integral formulations of the conservation laws consider the change in mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression which may be interpreted as the integral form of the law applied to an infinitesimal volume at a point within the flow.
  • Mass continuity (conservation of mass): The rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Physically, this statement requires that mass is neither created nor destroyed in the control volume,[2] and can be translated into the integral form of the continuity equation:
{\partial \over \partial t} \iiint_V \rho \, dV = - \oiint_S \rho\mathbf{u}\cdot d\mathbf{S}
Above, \rho is the fluid density, u is the flow velocity vector, and t is time. The left-hand side of the above expression contains a triple integral over the control volume, whereas the right-hand side contains a surface integral over the surface of the control volume. The differential form of the continuity equation is, by the divergence theorem:
{\partial \rho \over \partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
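The continuity equation can be checked numerically for any candidate flow. The sketch below uses an assumed density field and velocity field (chosen purely for illustration, not from the text) and verifies by finite differences that ∂ρ/∂t + ∇·(ρu) vanishes:

```python
import math

def rho(x, y, t):
    """Assumed density field: decays uniformly in time."""
    return math.exp(-t)

def u(x, y, t):
    """Assumed velocity field (u_x, u_y): a uniform expansion."""
    return (x / 2.0, y / 2.0)

def continuity_residual(x, y, t, h=1e-5):
    """Finite-difference estimate of d(rho)/dt + div(rho * u)."""
    drho_dt = (rho(x, y, t + h) - rho(x, y, t - h)) / (2 * h)
    div_x = (rho(x + h, y, t) * u(x + h, y, t)[0]
             - rho(x - h, y, t) * u(x - h, y, t)[0]) / (2 * h)
    div_y = (rho(x, y + h, t) * u(x, y + h, t)[1]
             - rho(x, y - h, t) * u(x, y - h, t)[1]) / (2 * h)
    return drho_dt + div_x + div_y

# The outflow of mass exactly balances the density decay, so the residual
# vanishes (to finite-difference accuracy) at every point.
print(abs(continuity_residual(1.0, 2.0, 0.5)) < 1e-8)   # True
```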
  • Conservation of momentum: This equation applies Newton's second law of motion to the control volume, requiring that any change in momentum of the air within a control volume be due to the net flow of air into the volume and the action of external forces on the air within the volume. In the integral formulation of this equation, body forces here are represented by fbody, the body force per unit mass. Surface forces, such as viscous forces, are represented by \mathbf{F}_\text{surf}, the net force due to stresses on the control volume surface.
\frac{\partial}{\partial t} \iiint_V \rho\mathbf{u} \, dV = - \oiint_S (\rho\mathbf{u}\cdot d\mathbf{S})\,\mathbf{u} - \oiint_S p \, d\mathbf{S} + \iiint_V \rho \mathbf{f}_\text{body} \, dV + \mathbf{F}_\text{surf}
The differential form of the momentum conservation equation is as follows. Here, both surface and body forces are accounted for in one total force, F. For example, F may be expanded into an expression for the frictional and gravitational forces acting on an internal flow.
{D \mathbf{u} \over D t} = \mathbf{F} - {\nabla p \over \rho}
In aerodynamics, air is assumed to be a Newtonian fluid, which posits a linear relationship between the shear stress (due to internal friction forces) and the rate of strain of the fluid. The equation above is a vector equation: in a three-dimensional flow, it can be expressed as three scalar equations. The conservation of momentum equations for the compressible, viscous flow case are called the Navier–Stokes equations.[citation needed]
  • Conservation of energy: Although energy can be converted from one form to another, the total energy in a closed system remains constant. In differential form, the energy equation is:
\rho {Dh \over Dt} = {D p \over D t} + \nabla \cdot \left( k \nabla T\right) + \Phi
Above, h is enthalpy, k is the thermal conductivity of the fluid, T is temperature, and \Phi is the viscous dissipation function. The viscous dissipation function governs the rate at which mechanical energy of the flow is converted to heat. The second law of thermodynamics requires that the dissipation term is always positive: viscosity cannot create energy within the control volume.[3] The expression on the left side is a material derivative.

Compressible vs incompressible flow

All fluids are compressible to some extent, that is, changes in pressure or temperature will result in changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used.

Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, i.e.,
\frac{\mathrm{D} \rho}{\mathrm{D}t} = 0 \, ,
where D/Dt is the substantial derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.

For the flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow must be evaluated. As a rough guide, compressibility effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing for compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.
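The Mach 0.3 rule of thumb lends itself to a small helper function; the regime boundaries other than 0.3 (transonic, supersonic, hypersonic cut-offs) are conventional rough values assumed here for illustration:

```python
def flow_regime(speed, sound_speed=343.0):
    """Classify a gas flow by Mach number.

    sound_speed defaults to air at 20 degrees C; the cut-offs other than
    0.3 are conventional approximations, assumed for illustration.
    """
    mach = speed / sound_speed
    if mach < 0.3:
        return "incompressible approximation reasonable"
    if mach < 0.8:
        return "subsonic, compressibility matters"
    if mach < 1.2:
        return "transonic"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"

print(flow_regime(50.0))    # incompressible approximation reasonable
print(flow_regime(680.0))   # supersonic
```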

Inviscid vs viscous flow; Newtonian vs non-Newtonian fluids


Potential flow around a wing

Viscous problems are those in which fluid friction has significant effects on the fluid motion.

The Reynolds number, which is a ratio between inertial and viscous forces, can be used to evaluate whether viscous or inviscid equations are appropriate to the problem.

Stokes flow is flow at very low Reynolds numbers, Re << 1, such that inertial forces can be neglected compared to viscous forces.

Conversely, high Reynolds numbers indicate that the inertial forces are more significant than the viscous (friction) forces. In that case the flow may be treated as an inviscid flow, an approximation in which viscosity is neglected completely in comparison with the inertial terms.
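The Reynolds number itself is straightforward to compute as Re = ρuL/μ; the fluid properties below are assumed illustrative values for sea-level air and for water, not figures from the text:

```python
def reynolds(rho, speed, length, mu):
    """Reynolds number Re = rho * u * L / mu (inertial / viscous forces)."""
    return rho * speed * length / mu

# Assumed illustrative values: sea-level air over a 1 m chord at 50 m/s.
re = reynolds(1.225, 50.0, 1.0, 1.81e-5)
print(f"{re:.2e}")   # ~3.4e6: a high-Re flow, nearly inviscid away from boundary layers

# A microorganism-scale flow in water: Re << 1, the Stokes-flow regime.
print(reynolds(1000.0, 1e-5, 1e-5, 1e-3) < 1.0)   # True
```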

This idea can work fairly well when the Reynolds number is high. However, certain problems, such as those involving solid boundaries, may require that the viscosity be included. Viscosity often cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate (known as a boundary layer) which enhances the effect of even a small amount of viscosity, thus generating vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations should be used. As illustrated by d'Alembert's paradox, a body in an inviscid fluid will experience no drag force. The standard equations of inviscid flow are the Euler equations. Another often-used model, especially in computational fluid dynamics, is to use the Euler equations away from the body and the boundary layer equations, which incorporate viscosity, in a region close to the body.

The Euler equations can be integrated along a streamline to get Bernoulli's equation. When the flow is everywhere irrotational and inviscid, Bernoulli's equation can be used throughout the flow field. Such flows are called potential flows.

Sir Isaac Newton showed how stress and the rate of strain are very close to linearly related for many familiar fluids, such as water and air. These Newtonian fluids are modelled by a viscosity that is independent of strain rate, depending primarily on the specific fluid.

However, some other materials, such as emulsions, slurries, and some visco-elastic materials (e.g. blood, some polymers), have more complicated non-Newtonian stress-strain behaviours. These materials, which include sticky liquids such as latex, honey, and lubricants, are studied in the sub-discipline of rheology.

Steady vs unsteady flow


Hydrodynamics simulation of the Rayleigh–Taylor instability [4]

When all the time derivatives of a flow field vanish, the flow is considered to be a steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Otherwise, flow is called unsteady (also called transient[5]). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady.

Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. According to Pope:[6]
The random field U(x,t) is statistically stationary if all statistics are invariant under a shift in time.
This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow.

Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.

Laminar vs turbulent flow

Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow, however; these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component.
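A Reynolds decomposition can be sketched on a synthetic velocity signal; the signal itself (a constant mean flow plus Gaussian fluctuations) is an assumption made purely for illustration:

```python
import random

random.seed(0)
# Synthetic "measured" velocity record: a 10 m/s mean flow plus Gaussian
# fluctuations (both values are illustrative assumptions).
samples = [10.0 + random.gauss(0.0, 1.5) for _ in range(10000)]

mean = sum(samples) / len(samples)          # average component
fluctuations = [u - mean for u in samples]  # perturbation component

# By construction the perturbations average to zero: the decomposition
# splits the flow into mean + fluctuation with nothing left over.
print(round(mean, 1))
print(abs(sum(fluctuations)) < 1e-6)   # True
```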

It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows.[7]

Most flows of interest have Reynolds numbers much too high for DNS to be a viable option,[8] given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 72 km/h (20 m/s) is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord). In order to solve these real-life flow problems, turbulence models will be a necessity for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provide a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the guise of detached eddy simulation (DES)—which is a combination of RANS turbulence modelling and large eddy simulation.

Subsonic vs transonic, supersonic and hypersonic flows

While many terrestrial flows (e.g. flow of water through a pipe) occur at low Mach numbers, many flows of practical interest (e.g. in aerodynamics) occur at high fractions of Mach 1 or in excess of it (supersonic flows). New phenomena occur in these Mach number regimes (e.g. shock waves for supersonic flow, transonic instability in a regime of flows with M nearly equal to 1, non-equilibrium chemical behaviour due to ionization in hypersonic flows), and it is necessary to treat each of these flow regimes separately.

Magnetohydrodynamics

Magnetohydrodynamics is the multi-disciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.

Other approximations

There are a large number of other possible approximations to fluid dynamic problems; commonly used examples include the Boussinesq approximation and lubrication theory.

Terminology in fluid dynamics

The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods.

Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.

Terminology in incompressible fluid dynamics

The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.

In Aerodynamics, L.J. Clancy writes:[9] To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure.

A point in a fluid flow where the flow has come to rest (i.e. speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
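In incompressible flow the relationship between static, dynamic, and total pressure can be written p0 = p + ½ρu² (from Bernoulli's equation). A minimal sketch, with assumed sea-level air values rather than figures from the text:

```python
def stagnation_pressure(p_static, rho, speed):
    """Incompressible Bernoulli: total = static + dynamic pressure."""
    return p_static + 0.5 * rho * speed**2   # dynamic pressure = rho * u^2 / 2

# Assumed values: sea-level air (101325 Pa, 1.225 kg/m^3) moving at 40 m/s.
p0 = stagnation_pressure(101325.0, 1.225, 40.0)
print(p0)   # about 102305 Pa, the static pressure at a stagnation point
```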

Terminology in compressible fluid dynamics

In a compressible fluid, such as air, the temperature and density are essential when determining the state of the fluid. In addition to the concept of total pressure (also known as stagnation pressure), the concepts of total (or stagnation) temperature and total (or stagnation) density are also essential in any study of compressible fluid flows. To avoid potential ambiguity when referring to temperature and density, many authors use the terms static temperature and static density. Static temperature is identical to temperature; and static density is identical to density; and both can be identified for every point in a fluid flow field.

The temperature and density at a stagnation point are called stagnation temperature and stagnation density.
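For a perfect gas, the stagnation temperature follows from the standard isentropic relation T0 = T(1 + (γ − 1)/2 · M²); this formula and the numbers below are standard results assumed here for illustration, not stated in the text:

```python
def stagnation_temperature(T_static, mach, gamma=1.4):
    """Isentropic relation T0 = T * (1 + (gamma - 1) / 2 * M**2).

    gamma = 1.4 is the heat-capacity ratio of air (an assumed value).
    """
    return T_static * (1.0 + 0.5 * (gamma - 1.0) * mach**2)

# Standard-day static temperature (288.15 K) at Mach 2: T0 is about 519 K,
# which is why compressible flows need separate static and total quantities.
print(stagnation_temperature(288.15, 2.0))
```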

A similar approach is also taken with the thermodynamic properties of compressible fluids. Many authors use the terms total (or stagnation) enthalpy and total (or stagnation) entropy. The terms static enthalpy and static entropy appear to be less common, but where they are used they mean nothing more than enthalpy and entropy respectively, and the prefix "static" is being used to avoid ambiguity with their 'total' or 'stagnation' counterparts. Because the 'total' flow conditions are defined by isentropically bringing the fluid to rest, the total (or stagnation) entropy is by definition always equal to the "static" entropy.

Updated NASA Data: Global Warming Not Causing Any Polar Ice [Area] Retreat



Original link:   http://www.forbes.com/sites/jamestaylor/2015/05/19/updated-nasa-data-polar-ice-not-receding-after-all/ 

Updated data from NASA satellite instruments reveal the Earth’s polar ice caps have not receded at all [in area] since the satellite instruments began measuring the ice caps in 1979. Since the end of 2012, moreover, total polar ice extent has largely remained above the post-1979 average. The updated data contradict one of the most frequently asserted global warming claims – that global warming is causing the polar ice caps to recede.

The timing of the 1979 NASA satellite instrument launch could not have been better for global warming alarmists.

The late 1970s marked the end of a 30-year cooling trend. As a result, the polar ice caps were quite likely more extensive than they had been since at least the 1920s. Nevertheless, this abnormally extensive 1979 polar ice extent came to serve as the "normal" baseline for comparing post-1979 polar ice extent.

Updated NASA satellite data show the polar ice caps remained at approximately their 1979 extent until the middle of the last decade. Beginning in 2005, however, polar ice modestly receded for several years. By 2012, polar sea ice had receded by approximately 10 percent from 1979 measurements. (Total polar ice area – factoring in both sea and land ice – had receded by much less than 10 percent, but alarmists focused on the sea ice loss as “proof” of a global warming crisis.)
NASA satellite measurements show the polar ice caps have not retreated at all.

A 10-percent decline in polar sea ice is not very remarkable, especially considering the 1979 baseline was abnormally high anyway. Regardless, global warming activists and a compliant news media frequently and vociferously claimed the modest polar ice cap retreat was a sign of impending catastrophe. Al Gore even predicted the Arctic ice cap could completely disappear by 2014.

In late 2012, however, polar ice dramatically rebounded and quickly surpassed the post-1979 average. Ever since, the polar ice caps have been at a greater average extent than the post-1979 mean.

Now, in May 2015, the updated NASA data show polar sea ice is approximately 5 percent above the post-1979 average.

During the modest decline in 2005 through 2012, the media presented a daily barrage of melting ice cap stories.
Since the ice caps rebounded – and then some – how have the media reported the issue?

The frequency of polar ice cap stories may have abated, but the tone and content have not changed at all. Here are some of the titles of news items I pulled yesterday from the front two pages of a Google News search for “polar ice caps”:

Climate change is melting more than just the polar ice caps

2020: Antarctic ice shelf could collapse

An Arctic ice cap’s shockingly rapid slide into the sea

New satellite maps show polar ice caps melting at ‘unprecedented rate’

The only Google News items even hinting that the polar ice caps may not have melted so much (indeed not at all) came from overtly conservative websites. The “mainstream” media is alternating between maintaining radio silence on the extended run of above-average polar ice and falsely asserting the polar ice caps are receding at an alarming rate.

To be sure, receding polar ice caps are an expected result of the modest global warming we can expect in the years ahead. In and of themselves, receding polar ice caps have little if any negative impact on human health and welfare, and likely a positive benefit by opening up previously ice-entombed land to human, animal, and plant life.

Nevertheless, polar ice cap extent will likely be a measuring stick for how much the planet is or is not warming.

The Earth has warmed modestly since the Little Ice Age ended a little over 100 years ago, and the Earth will likely continue to warm modestly as a result of natural and human factors. As a result, at some point in time, NASA satellite instruments should begin to report a modest retreat of polar ice caps. The modest retreat – like that which happened briefly from 2005 through 2012 – would not be proof or evidence of a global warming crisis. Such a retreat would merely illustrate that global temperatures are continuing their gradual recovery from the Little Ice Age. Such a recovery – despite alarmist claims to the contrary – would not be uniformly or even on balance detrimental to human health and welfare. Instead, an avalanche of scientific evidence indicates recently warming temperatures have significantly improved human health and welfare, just as warming temperatures have always done.

Friday, May 29, 2015

Olbers' paradox


From Wikipedia, the free encyclopedia


Olbers' paradox in action

In astrophysics and physical cosmology, Olbers' paradox, named after the German astronomer Heinrich Wilhelm Olbers (1758–1840) and also called the "dark night sky paradox", is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. The darkness of the night sky is one of the pieces of evidence for a non-static universe such as the Big Bang model. If the universe is static, homogeneous at a large scale, and populated by an infinite number of stars, any sight line from Earth must end at the (very bright) surface of a star, so the night sky should be completely bright. This contradicts the observed darkness of the night.

History

Edward Robert Harrison's Darkness at Night: A Riddle of the Universe (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges, who was also the first to expound the Copernican system in English and also postulated an infinite universe with infinitely many stars.[1] Kepler also posed the problem in 1610, and the paradox took its mature form in the 18th century work of Halley and Cheseaux.[2] The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers, who described it in 1823, but Harrison shows convincingly that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable. Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little known 1901 paper,[3] and that Edgar Allan Poe's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument:
Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.[4]

The paradox


What if every line of sight ended in a star? (infinite universe assumption)

The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark.

To show this, we divide the universe into a series of concentric shells, 1 light year thick. A certain number of stars will be in the shell 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell between 2,000,000,000 and 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear four times dimmer than an equivalent star in the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell.

Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light. And with infinitely many shells there would be a bright night sky.
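The equal-contribution argument can be made concrete: the number of stars in a thin shell grows as r², while each star's apparent brightness falls as 1/r², so the two factors cancel. A sketch with arbitrary unit values (the densities and luminosities are assumptions for illustration):

```python
import math

def shell_light(r, thickness=1.0, star_density=1.0, luminosity=1.0):
    """Light received from a thin spherical shell of stars at radius r."""
    n_stars = star_density * 4 * math.pi * r**2 * thickness  # stars in the shell
    per_star = luminosity / (4 * math.pi * r**2)             # inverse-square dimming
    return n_stars * per_star

# The r**2 growth and 1/r**2 dimming cancel: every shell contributes equally,
# so infinitely many shells would sum to an arbitrarily bright sky.
print([shell_light(r) for r in (1.0, 10.0, 1e6, 1e9)])
```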

Dark clouds could obstruct the light. But in that case the clouds would heat up, until they were as hot as stars, and then radiate the same amount of light.

Kepler saw this as an argument for a finite observable universe, or at least for a finite number of stars. In general relativity theory, it is still possible for the paradox to hold in a finite universe:[5] though the sky would not be infinitely bright, every point in the sky would still be like the surface of a star.

In a universe of three dimensions with stars distributed evenly, the number of stars would be proportional to volume. If the surfaces of concentric spherical shells are considered, the number of stars on each shell would be proportional to the square of the radius of the shell. In the picture above, the shells are reduced to rings in two dimensions with all of the stars on them.

The mainstream explanation

Poet Edgar Allan Poe suggested that the finite size of the observable universe resolves the apparent paradox.[6] More specifically, because the universe is finitely old and the speed of light is finite, only finitely many stars can be observed within a given volume of space visible from Earth (although the whole universe can be infinite in space).[7] The density of stars within this finite volume is sufficiently low that any line of sight from Earth is unlikely to reach a star.
However, the Big Bang theory introduces a new paradox: it states that the sky was much brighter in the past, especially at the end of the recombination era, when it first became transparent. All points of the local sky at that era were comparable in brightness to the surface of the Sun, due to the high temperature of the universe in that era; and most light rays will terminate not in a star but in the relic of the Big Bang.

This paradox is explained by the fact that the Big Bang theory also involves the expansion of space which can cause the energy of emitted light to be reduced via redshift. More specifically, the extreme levels of radiation from the Big Bang have been redshifted to microwave wavelengths (1100 times longer than its original wavelength) as a result of the cosmic expansion, and thus form the cosmic microwave background radiation. This explains the relatively low light densities present in most of our sky despite the assumed bright nature of the Big Bang. The redshift also affects light from distant stars and quasars, but the diminution is minor, since the most distant galaxies and quasars have redshifts of only around 5 to 8.6.

Alternative explanations

Steady state

The redshift hypothesised in the Big Bang model would by itself explain the darkness of the night sky, even if the universe were infinitely old. The steady state cosmological model assumed that the universe is infinitely old and uniform in time as well as space. There is no Big Bang in this model, but there are stars and quasars at arbitrarily great distances. The expansion of the universe will cause the light from these distant stars and quasars to be redshifted (by the Doppler effect), so that the total light flux from the sky remains finite. However, observations of the reduction in [radio] light-flux with distance in the 1950s and 1960s showed that it did not drop as rapidly as the Steady State model predicted. Moreover, the Steady State model predicts that stars should (collectively) be visible at all redshifts (provided that their light is not drowned out by nearer stars, of course). Thus, it does not predict a distinct background at fixed temperature as the Big Bang does. And the steady-state model cannot be modified to predict the temperature distribution of the microwave background accurately.[8]

Finite age of stars

Stars have a finite age and a finite power, thereby implying that each star has a finite impact on a sky's light field density. Edgar Allan Poe suggested that this idea could provide a resolution to Olbers' paradox; a related theory was also proposed by Jean-Philippe de Chéseaux. However, stars are continually being born as well as dying. As long as the density of stars throughout the universe remains constant, regardless of whether the universe itself has a finite or infinite age, there would be infinitely many other stars in the same angular direction, with an infinite total impact. So the finite age of the stars does not explain the paradox.[9]

Brightness

Suppose that the universe were not expanding, and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. Eventually, it would reach 3000 K (corresponding to a typical photon energy of 0.3 eV and so a frequency of 7.5×10^13 Hz), and the photons would begin to be absorbed by the hydrogen plasma filling most of the universe, rendering outer space opaque. This maximal radiation density corresponds to about 1.2×10^17 eV/m^3 = 2.1×10^−19 kg/m^3, which is much greater than the observed value of 4.7×10^−31 kg/m^3.[2] So the sky is about fifty billion times darker than it would be if the universe were neither expanding nor too young to have reached equilibrium yet.
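These figures can be cross-checked with a few lines of arithmetic. Below is a minimal sketch in plain Python (constant values taken from CODATA) that converts the 0.3 eV photon energy to a frequency via ν = E/h, and the quoted radiation density to a mass density via E = mc^2:

```python
# Rough check of the equilibrium-sky figures quoted above.
h = 6.62607015e-34      # Planck constant, J*s
eV = 1.602176634e-19    # joules per electron-volt
c = 2.99792458e8        # speed of light, m/s

# A typical photon energy of 0.3 eV corresponds to a frequency nu = E/h.
nu = 0.3 * eV / h
print(f"frequency: {nu:.2e} Hz")    # close to the 7.5e13 Hz quoted above

# The maximal radiation density, 1.2e17 eV/m^3, expressed as a mass
# density via E = m c^2.
rho_max = 1.2e17 * eV / c**2
print(f"equilibrium density: {rho_max:.2e} kg/m^3")   # ~2.1e-19 kg/m^3
```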

Fractal star distribution

A different resolution, which does not rely on the Big Bang theory, was first proposed by Carl Charlier in 1908 and later rediscovered by Benoît Mandelbrot in 1974. They both postulated that if the stars in the universe were distributed in a hierarchical fractal cosmology (e.g., similar to Cantor dust)—the average density of any region diminishes as the region considered increases—it would not be necessary to rely on the Big Bang theory to explain Olbers' paradox. This model would not rule out a Big Bang but would allow for a dark sky even if the Big Bang had not occurred.

Mathematically, the light received from stars as a function of star distance in a hypothetical fractal cosmos is:
\text{light}=\int_{r_0}^\infty L(r) N(r)\,dr
where:
r0 = the distance of the nearest star. r0 > 0;
r = the variable measuring distance from the Earth;
L(r) = average luminosity per star at distance r;
N(r) = number of stars at distance r.

The function of luminosity from a given distance L(r)N(r) determines whether the light received is finite or infinite. For any luminosity from a given distance L(r)N(r) proportional to r^a, \text{light} is infinite for a ≥ −1 but finite for a < −1. So if L(r) is proportional to r^−2, then for \text{light} to be finite, N(r) must be proportional to r^b, where b < 1. For b = 1, the number of stars at a given radius is proportional to that radius. When integrated over the radius, this implies that for b = 1, the total number of stars is proportional to r^2. This would correspond to a fractal dimension of 2. Thus the fractal dimension of the universe would need to be less than 2 for this explanation to work.
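The convergence condition above can be illustrated numerically. The sketch below (plain Python; the cutoff radii are arbitrary choices for illustration) integrates r^(b−2), i.e. inverse-square dimming L(r) ∝ r^−2 times star counts N(r) ∝ r^b, and shows that the received light saturates for b < 1 but grows without bound for b = 1:

```python
import math

def total_light(b, R, r0=1.0):
    """Integral of r**(b-2) from r0 to R: L(r) ~ r**-2 times N(r) ~ r**b."""
    a = b - 2  # exponent of the combined integrand
    if a == -1:                       # b = 1: logarithmic divergence
        return math.log(R / r0)
    return (R**(a + 1) - r0**(a + 1)) / (a + 1)

# Fractal dimension below 2 (b = 0.5): the integral converges.
print(total_light(0.5, 1e6))    # ~2.0
print(total_light(0.5, 1e12))   # still ~2.0: the light received saturates

# Star count proportional to radius (b = 1): diverges, slowly, as log R.
print(total_light(1.0, 1e6))    # ln(1e6) ~ 13.8
print(total_light(1.0, 1e12))   # ~27.6, doubling with the exponent of R
```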

This explanation is not widely accepted among cosmologists since the evidence suggests that the fractal dimension of the universe is at least 2.[10][11][12] Moreover, the majority of cosmologists accept the cosmological principle, which assumes that matter at the scale of billions of light years is distributed isotropically. Contrarily, fractal cosmology requires anisotropic matter distribution at the largest scales.

Companies rush to build ‘biofactories’ for medicines, flavorings and fuels

For scientist Jack Newman, creating a new life-form has become as simple as this: He types out a DNA sequence on his laptop. Clicks “send.” And a few yards away in the laboratory, robotic arms mix together some compounds to produce the desired cells.

Newman’s biotech company is creating new organisms, most of them genetically modified yeast, at the dizzying rate of more than 1,500 a day. Some convert sugar into medicines. Others create moisturizers that can be used in cosmetics. And still others make biofuel, a renewable energy source usually made from corn.

“You can now build a cell the same way you might build an app for your iPhone,” said Newman, chief science officer of Amyris.

Some believe this kind of work marks the beginning of a third industrial revolution — one based on using living systems as “bio-factories” for creating substances that are either too tricky or too expensive to grow in nature or to make with petrochemicals.

The rush to biological means of production promises to revolutionize the chemical industry and transform the economy, but it also raises questions about environmental safety and biosecurity and revives ethical debates about “playing God.” Hundreds of products are in the pipeline.

Laboratory-grown artemisinin, a key anti-malarial drug, went on sale in April with the potential to help stabilize supply issues. A vanilla flavoring that promises to be significantly cheaper than the costly extract made from beans grown in rain forests is scheduled to hit the markets in 2014.

On Wednesday, Amyris announced another milestone — a memorandum of understanding with Brazil’s largest low-cost airline, GOL Linhas Aereas, to begin using a jet fuel produced by yeast starting in 2014.

Proponents characterize bio-factories as examples of “green technology” that are sustainable and immune to fickle weather and disease. Backers say they will reshape how we use land globally, reducing the cultivation of cash crops in places where that practice hurts the environment, breaking our dependence on pesticides and closing countless industrial factories that pollute the air and water.

But some environmental groups are skeptical.

They compare the spread of bio-factories to the large-scale burning of coal at the turn of the 20th century — a development with implications for carbon dioxide emissions and global warming that weren’t understood until decades later.

Much of the early hype surrounding this technology was about biofuels — the dream of engineering colonies of yeast that could produce enough fuel to power whole cities. It turned out that the technical hurdles were easier to overcome than the economic ones. Companies haven’t been able to find a way to produce enough of it to make the price affordable, and so far the biofuels have been used only in smaller projects, such as local buses and Amyris’s experiment with GOL’s planes.

But dozens of other products are close to market, including synthetic versions of fragrances extracted from grass, coconut oil and saffron powder, as well as a gas used to make car tires. Other applications are being studied in the laboratory: biosensors that light up when a parasite is detected in water; goats with spider genes that produce super-strength silk in their milk; and synthetic bacteria that decompose trash and break down oil spills and other contaminated waste at a rapid pace.

Revenue from industrial chemicals made through synthetic biology is already as high as $1.5 billion, and it will increase at an annual rate of 15 to 25 percent for the next few years, according to an estimate by Mark Bünger, an analyst for Lux Research, a Boston-based advisory firm that focuses on emerging technologies.
 
Reengineering yeast

Since it was founded a decade ago, Amyris has become a legend in the field that sits at the intersection of biology and engineering, creating more than 3 million organisms. Unlike traditional genetic engineering, which typically involves swapping a few genes, the scientists are building entire genomes from scratch.

Keeping bar-code-stamped vials in giant refrigerators at minus-80 degrees, the company’s repository in Emeryville, Calif., is one of the world’s largest collections of living organisms that do not exist in nature.

Ten years ago, when Newman was a postdoctoral student at the University of California at Berkeley, the idea of being able to program cells on a computer was fanciful.

Newman was working in a chemical engineering lab run by biotech pioneer Jay Keasling and helping conduct research on how to rewrite the metabolic pathways of microorganisms to produce useful substances.

Their first target was yeast.

The product of millions of years of evolution, the single-celled organism was capable of a miraculous feat: When fed sugar, it produced energy and excreted alcohol and carbon dioxide. Humans have harnessed this power for centuries to make wine, beer, cheese and other products. Could they tinker with some genes in the yeast to create a biological machine capable of producing medicine?

Excited about the idea of trying to apply the technology to a commercial product, Keasling, Newman and two other young post-docs — Keith Kinkead Reiling and Neil Renninger — started Amyris in 2003 and set their sights on artemisinin, an ancient herbal remedy found to be more than 90 percent effective at curing those infected with malaria.

It is harvested from the leaves of the sweet wormwood plant, but the supply of the plant had sometimes fluctuated in the past, causing shortages.

The new company lined up high-profile investors: the Bill & Melinda Gates Foundation, which gave $42.6 million to a nonprofit organization to help finance the research, and Silicon Valley luminaries John Doerr and Vinod Khosla, who as part of a group invested $20 million.

As of this month, Amyris said its partner, pharmaceutical giant Sanofi, has manufactured 35 tons of artemisinin — roughly equivalent to 70 million courses of treatment. The World Health Organization gave its stamp of approval to the drug in May, and the pills are being used widely.
 
Concerns about risks

The early scientific breakthroughs by the Amyris founders paved the way for dozens of other companies to do similar work. The next major product to be released is likely to be a vanilla flavoring by Evolva, a Swiss company that has laboratories in the San Francisco Bay area.

Cultivated in the remote forests of Madagascar, Mexico and the West Indies, natural vanilla is one of the world’s most revered spices. But companies that depend on the ingredient to flavor their products have long struggled with its scarcity and the volatility of its price.

Its chemically synthesized cousins, which are made from petrochemicals and paper pulp waste and are three to five times cheaper, have 99 percent of the vanilla market but have failed to match the natural version’s complexity.

Now scientists in a lab in Denmark believe they’ve created a type of vanilla flavoring produced by yeast that they say will be more satisfying to the palate and cheaper at the same time.

In Evolva’s case, much of the controversy has focused on whether the flavoring can be considered “natural.” Evolva boasts that it is, because only the substance used to produce the flavoring was genetically modified — not what people actually consume.

“From my point of view it’s fundamentally as natural as beer or bread,” said Evolva chief executive Neil Goldsmith, who is a co-founder of the company. “Neither brewer’s or baker’s yeast is identical to yeast in the wild. I’m comfortable that if beer is natural, then this is natural.”

That justification has caused an uproar among some consumer protection and environmental groups. They say that representing Evolva’s laboratory-grown flavoring as something similar to vanilla extract from an orchid plant is deceptive, and they have mounted a global campaign urging food companies to boycott the “vanilla grown in a petri dish.”

“Any ice-cream company that calls this all-natural vanilla would be committing fraud,” argues Jaydee Hanson, a senior policy analyst at the Center for Food Safety, a nonprofit public interest group based in Washington.

Jim Thomas, a researcher for the ETC Group, said there is a larger issue that applies to all organisms produced by synthetic biology techniques: What if they are accidentally released and evolve to have harmful characteristics?

“There is no regulatory structure or even protocols for assessing the safety of synthetic organisms in the environment,” Thomas said.

Then there’s the potential economic impact. What about the hundreds of thousands of small farmers who produce these crops now?

Artemisinin is farmed by an estimated 100,000 people in Kenya, Tanzania, Vietnam and China and the vanilla plant by 200,000 in Madagascar, Mexico and beyond.

Evolva officials say they believe there will still be a strong market for artisan ingredients like vanilla from real beans and that history has shown that these products typically attract an even higher premium when new products hit the market.

Other biotech executives say they are sympathetic, but that it is the price of progress. Amyris’s Newman says he is confused by environmental groups’ criticism and points to the final chapter of Rachel Carson’s “Silent Spring” — the seminal book that is credited with launching the environmental movement. In it, Carson mentions ways that science can solve the environmental hazards we have endured through years of use of fossil fuels and petrochemicals.

“The question you have to ask yourself is, ‘Is the status quo enough?’ ” Newman said. “We live in a world where things can be improved upon.”

Paradox of a charge in a gravitational field


From Wikipedia, the free encyclopedia

The special theory of relativity is known for its paradoxes: the twin paradox and the ladder-in-barn paradox, for example. Neither is a true paradox; they merely expose flaws in our understanding, and point the way toward a deeper understanding of nature. The ladder paradox exposes the breakdown of simultaneity, while the twin paradox highlights the distinct role of accelerated frames of reference.

So it is with the paradox of a charged particle at rest in a gravitational field: an apparent conflict between the theories of electrodynamics and general relativity.

Recap of Key Points of Gravitation and Electrodynamics

It is a standard result from the Maxwell equations of classical electrodynamics that an accelerated charge radiates. That is, it produces an electric field that falls off as 1/r in addition to its rest-frame 1/r^2 Coulomb field. This radiation electric field has an accompanying magnetic field, and the whole oscillating electromagnetic radiation field propagates independently of the accelerated charge, carrying away momentum and energy. The energy in the radiation is provided by the work that accelerates the charge. We understand a photon to be the quantum of the electromagnetic radiation field, but the radiation field is a classical concept.

The theory of general relativity is built on the principle of the equivalence of gravitation and inertia. This means that it is impossible to distinguish through any local measurement whether one is in a gravitational field or being accelerated. An elevator out in deep space, far from any planet, could mimic a gravitational field to its occupants if it could be accelerated continuously "upward". Whether the acceleration is from motion or from gravity makes no difference in the laws of physics. This can also be understood in terms of the equivalence of so-called gravitational mass and inertial mass. The mass in Newton's law of gravity (gravitational mass) is the same as the mass in Newton's second law of motion (inertial mass). They cancel out when equated, with the result discovered by Galileo that all bodies fall at the same rate in a gravitational field, independent of their mass. This was famously demonstrated on the Moon during the Apollo 15 mission, when a hammer and a feather were dropped at the same time and, of course, struck the surface at the same time.

Closely tied in with this equivalence is the fact that gravity vanishes in free fall. For objects falling in an elevator whose cable is cut, all gravitational forces vanish, and things begin to look like the free-floating absence of forces one sees in videos from the International Space Station. One can find the weightlessness of outer space right here on earth: just jump out of an airplane. It is a lynchpin of general relativity that everything must fall together in free fall. Just as with acceleration versus gravity, no experiment should be able to distinguish the effects of free fall in a gravitational field, and being out in deep space far from any forces.

Statement of the Paradox

Putting together these two basic facts of general relativity and electrodynamics, we seem to encounter a paradox. For if we dropped a neutral particle and a charged particle together in a gravitational field, the charged particle should begin to radiate as it is accelerated under gravity, thereby losing energy, and slowing relative to the neutral particle. Then a free-falling observer could distinguish free fall from true absence of forces, because a charged particle in a free-falling laboratory would begin to be pulled relative to the neutral parts of the laboratory, even though no obvious electric fields were present.

Equivalently, we can think about a charged particle at rest in a laboratory on the surface of the earth. Since we know the earth's gravitational field of 1 g is equivalent to being accelerated constantly upward at 1 g, and we know a charged particle accelerated upward at 1 g would radiate, why don't we see radiation from charged particles at rest in the laboratory? It would seem that we could distinguish between a gravitational field and acceleration, because an electric charge apparently only radiates when it is being accelerated through motion, but not through gravitation.

Resolution of the Paradox

The resolution of this paradox, like the twin paradox and ladder paradox, comes through appropriate care in distinguishing frames of reference. We follow the excellent development of Rohrlich (1965),[1] section 8-3, who shows that a charged particle and a neutral particle fall equally fast in a gravitational field, despite the fact that the charged one loses energy by radiation. Likewise, a charged particle at rest in a gravitational field does not radiate in its rest frame. The equivalence principle is preserved for charged particles.

The key is to realize that the laws of electrodynamics, the Maxwell equations, hold only in an inertial frame, that is, in a frame in which no forces act locally. Such a frame could be in free fall under gravity, or far out in space away from any forces. The surface of the earth is not an inertial frame; it is being constantly accelerated. We know the surface of the earth is not an inertial frame because an object at rest there may not remain at rest—objects at rest fall to the ground when released. So we cannot naively formulate expectations based on the Maxwell equations in this frame. It is remarkable that we now understand the special-relativistic Maxwell equations do not hold, strictly speaking, on the surface of the earth—even though they were of course discovered in electrical and magnetic experiments conducted in laboratories on the surface of the earth. Nevertheless, in this case we cannot apply the Maxwell equations to the description of a falling charge relative to a "supported", non-inertial observer.

The Maxwell equations can be applied relative to an observer in free fall, because free fall is an inertial frame. So the starting point of considerations is to work in the free-fall frame in a gravitational field—the frame of a "falling" observer. In the free-fall frame the Maxwell equations have their usual, flat-spacetime form. In this frame, the electric and magnetic fields of the charge are simple: the electric field is just the Coulomb field of a charge at rest, and the magnetic field is zero. As an aside, note that we are building in the equivalence principle from the start, including the assumption that a charged particle falls just as fast as a neutral particle. Let us see if any contradictions arise.

Now we are in a position to establish what an observer at rest in a gravitational field, the supported observer, will see. Given the electric and magnetic fields in the falling frame, we merely have to transform those fields into the frame of the supported observer. This is not a Lorentz transformation, because the two frames have a relative acceleration. Instead we must bring to bear the machinery of general relativity.

In this case our gravitational field is fictitious because it can be transformed away in an accelerating frame. Unlike the total gravitational field of the earth, here we are assuming that spacetime is locally flat, so that the curvature tensor vanishes. Equivalently, the lines of gravitational acceleration are everywhere parallel, with no convergences measurable in the laboratory. Then the most general static, flat-space, cylindrical metric and line element can be written:

c^2 d\tau^2 = u^2(z)c^2dt^2 - \left ( {c^2\over g} {du\over dz}  \right )^2 dz^2 - dx^2 - dy^2
where c is the speed of light, \tau is proper time, x,y,z,t are the usual coordinates of space and time, g is the acceleration of the gravitational field, and u(z) is an arbitrary function of the coordinate z that must approach the observed Newtonian value of 1 + gz/c^2. This is the metric for the gravitational field measured by the supported observer.
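As a quick consistency check (this particular choice is an illustration, not part of Rohrlich's development): taking u(z) = 1 + gz/c^2 exactly gives du/dz = g/c^2, so the coefficient of dz^2 becomes unity and the line element reduces to the familiar Rindler form for a uniformly accelerated frame:

c^2 d\tau^2 = \left(1 + {gz\over c^2}\right)^2 c^2 dt^2 - dz^2 - dx^2 - dy^2

which makes explicit the equivalence between the supported observer and a uniformly accelerated one.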

Meanwhile, the metric in the frame of the falling observer is simply the Minkowski metric:

c^2 d\tau^2 = c^2 dt'^2 - dx'^2 - dy'^2 - dz'^2
From these two metrics Rohrlich constructs the coordinate transformation between them:
\begin{align}
x'=x &\qquad y'=y  \\
{g\over c^2} (z'-z_0') &= u(z) \cosh{g(t-t_0)} -1\\
{g\over c} (t'-t_0') &= u(z) \sinh{g(t-t_0) }
\end{align}
When this coordinate transformation is applied to the rest-frame electric and magnetic fields of the charge, the charge is found to be radiating—as expected for a charge falling away from a supported observer. Rohrlich emphasizes that this charge remains at rest in its free-fall frame, just as a neutral particle would. Furthermore, the radiation rate for this situation is Lorentz invariant, but it is not invariant under the coordinate transformation above, because it is not a Lorentz transformation.

So a falling charge will appear to radiate to a supported observer, as expected. What about a supported charge, then? Does it not radiate due to the equivalence principle? To answer this question, start again in the falling frame.
In the falling frame, the supported charge appears to be accelerated uniformly upward. The case of constant acceleration of a charge is treated by Rohrlich [1] in section 5-3. He finds a charge e uniformly accelerated at rate g has a radiation rate given by the Lorentz invariant:

R={2\over 3}{e^2\over c^3} g^2
The corresponding electric and magnetic fields of an accelerated charge are also given in Rohrlich section 5-3. To find the fields of the charge in the supported frame, the fields of the uniformly accelerated charge are transformed according to the coordinate transformation previously given. When that is done, one finds no radiation in the supported frame from a supported charge, because the magnetic field is zero in this frame. Rohrlich does note that the gravitational field slightly distorts the Coulomb field of the supported charge, but the distortion is too small to be observable. So although the Coulomb law was of course discovered in a supported frame, relativity tells us that the field of such a charge is not precisely 1/r^2.
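For a sense of scale, the sketch below (plain Python, SI units) evaluates this radiation rate for an electron accelerated at 1 g, using P = q^2 a^2 / (6 π ε0 c^3), the SI form of the Gaussian-units expression above:

```python
import math

q    = 1.602176634e-19   # electron charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c    = 2.99792458e8      # speed of light, m/s
g    = 9.81              # acceleration, m/s^2

# Larmor radiation rate in SI units: P = q^2 a^2 / (6 pi eps0 c^3),
# the SI equivalent of R = (2/3)(e^2/c^3) g^2 in Gaussian units.
P = q**2 * g**2 / (6 * math.pi * eps0 * c**3)
print(f"radiated power: {P:.2e} W")   # ~5.5e-52 W
```

The rate is some fifty orders of magnitude below a watt, which is why the slight distortion of the supported charge's field is of no practical consequence.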

The radiation from the supported charge is something of a curiosity: where does it go? Boulware (1980) [2] finds that the radiation goes into a region of spacetime inaccessible to the co-accelerating, supported observer. In effect, a uniformly accelerated observer has an event horizon, and there are regions of spacetime inaccessible to this observer. de Almeida and Saa (2006) [3] give a more accessible treatment of the event horizon of the accelerated observer.

Political psychology

From Wikipedia, the free encyclopedia ...