
Tuesday, July 29, 2014

New theory says the Universe isn't expanding — it's just getting fat

Conventional thinking says the Universe has been expanding ever since the Big Bang. But theoretical astrophysicist Christof Wetterich says it's not expanding at all. It’s just that the mass of all the particles within it is steadily increasing.
Top Image: T. Piner.
We think the Universe is expanding because all the galaxies within it are moving away from one another. Scientists see this in the redshift — a Doppler-like shift in the frequencies of light that atoms emit or absorb. When those frequencies appear shifted toward the red, it indicates that the source is moving away from us. Galaxies exhibit this redshift, which is why scientists say the Universe is expanding.
But Wetterich, who works out of the University of Heidelberg in Germany, says the characteristic light emitted by atoms is also governed by the masses of the atoms’ elementary particles, particularly their electrons.
Writing in Nature News, Jon Cartwright explains:
If an atom were to grow in mass, the photons it emits would become more energetic. Because higher energies correspond to higher frequencies, the emission and absorption frequencies would move towards the blue part of the spectrum. Conversely, if the particles were to become lighter, the frequencies would become redshifted.
Because the speed of light is finite, when we look at distant galaxies we are looking backwards in time — seeing them as they would have been when they emitted the light that we observe. If all masses were once lower, and had been constantly increasing, the colours of old galaxies would look redshifted in comparison to current frequencies, and the amount of redshift would be proportionate to their distances from Earth. Thus, the redshift would make galaxies seem to be receding even if they were not.
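To see how that works quantitatively, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not Wetterich's actual formalism): hydrogen-like transition energies scale with the electron mass, so light emitted when electrons were lighter arrives shifted to the red relative to today's laboratory frequencies.

# Toy illustration of the mass-redshift idea described above: atomic transition
# energies scale with the electron mass, so light emitted when particles were
# lighter looks redshifted today. A back-of-the-envelope sketch only.

def apparent_redshift(mass_then_over_mass_now: float) -> float:
    """Redshift z implied by a past electron mass that was a fraction of today's."""
    # Emitted frequency scales linearly with electron mass, so
    # 1 + z = nu_reference_today / nu_emitted = m_now / m_then.
    return 1.0 / mass_then_over_mass_now - 1.0

if __name__ == "__main__":
    for fraction in (0.9, 0.5, 0.25):
        print(f"electron mass at emission = {fraction:.2f} of today's "
              f"-> apparent z = {apparent_redshift(fraction):.2f}")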
Work through the maths in this alternative interpretation of redshift, and all of cosmology looks very different. The Universe still expands rapidly during a short-lived period known as inflation. But prior to inflation, according to Wetterich, the Big Bang no longer contains a 'singularity' where the density of the Universe would be infinite. Instead, the Big Bang stretches out in the past over an essentially infinite period of time. And the current cosmos could be static, or even beginning to contract.
Whoa. That is a radically different picture of the Universe than what we're used to.
Unfortunately, there’s no way for us to test this. Well, at least not yet.
But Wetterich says his theory is useful for thinking about different cosmological models. And indeed, it may offer some fresh insights into the spooky dark energy that's apparently pushing the Universe outwards at an accelerating rate.

Read the entire study — which has not yet been peer reviewed — at arXiv: “A Universe without expansion.” But as Cartwright notes in his article, other physicists are not hating the idea.

To AGW Doubters, Skeptics, "Deniers", and Anyone Interested in the Science Behind Global Warming

Almost all of what follows comes from Wikipedia, but as it agrees with my own knowledge of chemistry and physics from many sources over the years, it makes a good if sometimes hard-to-follow summary of the science behind anthropogenic, CO2-enhanced global warming.  It is theory, however, so how much warming it has caused in the Earth's atmosphere over the last ~150 years, and the climatological consequences of that, are left to scientific debate.
___________________________________________________________

We start with Svante Arrhenius, in the late 19th century:

Greenhouse effect

Arrhenius developed a theory to explain the ice ages, and in 1896 he was the first scientist to attempt to calculate how changes in the levels of carbon dioxide in the atmosphere could alter the surface temperature through the greenhouse effect.[8] He was influenced by the work of others, including Joseph Fourier and John Tyndall. Arrhenius used the infrared observations of the moon by Frank Washington Very and Samuel Pierpont Langley at the Allegheny Observatory in Pittsburgh to calculate the absorption of infrared radiation by atmospheric CO2 and water vapour. Using 'Stefan's law' (better known as the Stefan-Boltzmann law), he formulated his greenhouse law. In its original form, Arrhenius' greenhouse law reads as follows:
if the quantity of carbonic acid [CO2] increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.
The following equivalent formulation of Arrhenius' greenhouse law is still used today:[9]
ΔF = α ln(C/C_0)
Here C is carbon dioxide (CO2) concentration measured in parts per million by volume (ppmv); C_0 denotes a baseline or unperturbed concentration of CO2, and ΔF is the radiative forcing, measured in watts per square meter. The constant alpha (α) has been assigned a value between five and seven.[9]
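As a quick numerical illustration of the law above, the sketch below evaluates ΔF for a few concentrations. The default α = 5.35 W/m² is a commonly used modern value and is my assumption here; the text only says α lies between five and seven.

import math

# Arrhenius' greenhouse law as given above: delta_F = alpha * ln(C / C0).

def radiative_forcing(c_ppmv: float, c0_ppmv: float, alpha: float = 5.35) -> float:
    """Radiative forcing in W/m^2 for CO2 concentration C relative to baseline C0."""
    return alpha * math.log(c_ppmv / c0_ppmv)

if __name__ == "__main__":
    # Doubling CO2 from a pre-industrial ~280 ppmv baseline:
    print(f"2x CO2:   {radiative_forcing(560, 280):.2f} W/m^2")
    # Roughly present-day (2014) concentration of ~400 ppmv:
    print(f"400 ppmv: {radiative_forcing(400, 280):.2f} W/m^2")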
Arrhenius at the first Solvay conference on chemistry in 1922 in Brussels.
 
Based on information from his colleague Arvid Högbom, Arrhenius was the first person to predict that emissions of carbon dioxide from the burning of fossil fuels and other combustion processes were large enough to cause global warming. In his calculation Arrhenius included the feedback from changes in water vapor as well as latitudinal effects, but he omitted clouds, convection of heat upward in the atmosphere, and other essential factors. His work is currently seen less as an accurate prediction of global warming than as the first demonstration that it should be taken as a serious possibility.

Arrhenius' absorption values for CO2 and his conclusions met criticism by Knut Ångström in 1900, who published the first modern infrared spectrum of CO2 with two absorption bands, and published experimental results that seemed to show that absorption of infrared radiation by the gas in the atmosphere was already "saturated" so that adding more could make no difference. Arrhenius replied strongly in 1901 (Annalen der Physik), dismissing the critique altogether. He touched the subject briefly in a technical book titled Lehrbuch der kosmischen Physik (1903). He later wrote Världarnas utveckling (1906) (German: Das Werden der Welten [1907], English: Worlds in the Making [1908]) directed at a general audience, where he suggested that the human emission of CO2 would be strong enough to prevent the world from entering a new ice age, and that a warmer earth would be needed to feed the rapidly increasing population:
"To a certain extent the temperature of the earth's surface, as we shall presently see, is conditioned by the properties of the atmosphere surrounding it, and particularly by the permeability of the latter for the rays of heat." (p46)
"That the atmospheric envelopes limit the heat losses from the planets had been suggested about 1800 by the great French physicist Fourier. His ideas were further developed afterwards by Pouillet and Tyndall. Their theory has been styled the hot-house theory, because they thought that the atmosphere acted after the manner of the glass panes of hot-houses." (p51)
 
"If the quantity of carbonic acid [CO2] in the air should sink to one-half its present percentage, the temperature would fall by about 4°; a diminution to one-quarter would reduce the temperature by 8°. On the other hand, any doubling of the percentage of carbon dioxide in the air would raise the temperature of the earth's surface by 4°; and if the carbon dioxide were increased fourfold, the temperature would rise by 8°." (p53)
 
"Although the sea, by absorbing carbonic acid, acts as a regulator of huge capacity, which takes up about five-sixths of the produced carbonic acid, we yet recognize that the slight percentage of carbonic acid in the atmosphere may by the advances of industry be changed to a noticeable degree in the course of a few centuries." (p54)
 
"Since, now, warm ages have alternated with glacial periods, even after man appeared on the earth, we have to ask ourselves: Is it probable that we shall in the coming geological ages be visited by a new ice period that will drive us from our temperate countries into the hotter climates of Africa? There does not appear to be much ground for such an apprehension. The enormous combustion of coal by our industrial establishments suffices to increase the percentage of carbon dioxide in the air to a perceptible degree." (p61)
 
"We often hear lamentations that the coal stored up in the earth is wasted by the present generation without any thought of the future, and we are terrified by the awful destruction of life and property which has followed the volcanic eruptions of our days. We may find a kind of consolation in the consideration that here, as in every other case, there is good mixed with the evil. By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates, especially as regards the colder regions of the earth, ages when the earth will bring forth much more abundant crops than at present, for the benefit of rapidly propagating mankind." (p63)
Arrhenius clearly believed that a warmer world would be a positive change. His ideas remained in circulation, but until about 1960 most scientists doubted that global warming would occur, believing the oceans would absorb CO2 faster than humanity emitted the gas. Most scientists also dismissed the greenhouse effect as an implausible cause of ice ages, since Milutin Milankovitch had presented a mechanism based on orbital changes of the earth (Milankovitch cycles).
Nowadays, the accepted explanation is that orbital forcing sets the timing for ice ages with CO2 acting as an essential amplifying feedback.

Arrhenius estimated that halving of CO2 would decrease temperatures by 4–5 °C (Celsius) and a doubling of CO2 would cause a temperature rise of 5–6 °C.[10] In his 1906 publication, Arrhenius adjusted the value downwards to 1.6 °C (including water vapor feedback: 2.1 °C). Recent (2014) estimates from IPCC say this value (the Climate sensitivity) is likely to be between 1.5 and 4.5 °C. Arrhenius expected CO2 levels to rise at a rate given by emissions in his time. Since then, industrial carbon dioxide levels have risen at a much faster rate: Arrhenius expected CO2 doubling to take about 3000 years; it is now estimated in most scenarios to take about a century.
___________________________________________________________

And now on to the 20th century, and the quantum-mechanical explanation of why certain gases exhibit the greenhouse effect, i.e., molecular vibrations and infrared radiation absorption:

Molecular vibration

Molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency, and the typical frequencies of molecular vibrations range from less than 10^12 to approximately 10^14 Hz.

In general, a molecule with N atoms has 3N – 6 normal modes of vibration, but a linear molecule has 3N – 5 such modes, as rotation about its molecular axis cannot be observed.[1] A diatomic molecule has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other but each normal mode will involve simultaneous vibrations of different parts of the molecule such as different chemical bonds.

A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E = hν (where h is Planck's constant). A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher overtones.

To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential.

The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.

Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas state.

In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by
E_n = h(n + 1/2)ν = h(n + 1/2)(1/2π)√(k/m),
where n is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.[7][8]

The difference in energy when n (or v) changes by 1 is therefore equal to hν, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency ν (in the harmonic oscillator approximation).
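The following short sketch evaluates ν = (1/2π)√(k/m) and the level spacing hν. The force constant and reduced mass are illustrative numbers of the right order of magnitude for a small diatomic molecule, not values taken from the article; the point is simply that the resulting frequencies fall in the infrared range quoted above.

import math

# Harmonic-oscillator energy levels from the formula above:
#   E_n = (n + 1/2) * h * nu,   nu = (1 / (2*pi)) * sqrt(k / m)

H_PLANCK = 6.626e-34      # J*s
K_FORCE = 1.9e3           # N/m, illustrative bond force constant
M_REDUCED = 1.14e-26      # kg, illustrative reduced mass

def vibration_frequency(k: float, m: float) -> float:
    return math.sqrt(k / m) / (2.0 * math.pi)

def energy_level(n: int, nu: float) -> float:
    return (n + 0.5) * H_PLANCK * nu

if __name__ == "__main__":
    nu = vibration_frequency(K_FORCE, M_REDUCED)
    print(f"vibration frequency: {nu:.2e} Hz")       # ~1e13-1e14 Hz, as in the text
    spacing = energy_level(1, nu) - energy_level(0, nu)
    print(f"level spacing (h*nu): {spacing:.2e} J")  # energy of an absorbed IR photon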

See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one,
Δn = ±1
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band.

Intensities

In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate.[9] The intensity of Raman bands depends on polarizability.

The normal modes include symmetrical stretching, asymmetrical stretching, scissoring (bending), rocking, wagging, and twisting (animated figures omitted).

Americans' Attitudes Toward Muslims And Arabs Are Getting Worse, Poll Finds


WASHINGTON -- Americans were outraged to learn they were being spied on by the National Security Agency, but many support law enforcement profiling of Muslims, according to a poll released Tuesday by the Arab American Institute.

The survey, conducted by Zogby Analytics for the advocacy group, found that 42 percent of Americans believe law enforcement is justified in using profiling tactics against Muslim-Americans and Arab-Americans. The survey also shows American attitudes toward Arab-Americans and Muslim-Americans have turned for the worse since the Arab American Institute first began polling on the subject in 2010. The new poll found favorability toward Arab-Americans at 36 percent, down from 43 percent in 2010. For Muslim-Americans, favorability was just 27 percent, compared with 36 percent in 2010.

Recent news headlines associated with Muslims have focused on the ongoing civil war in Syria; the rise in Iraq of ISIS, the Islamic State in Iraq and the Levant; the abduction of Nigerian schoolgirls by the Islamist group Boko Haram; and the 2012 terrorist attack on a U.S. diplomatic mission in Benghazi, Libya.

"The way forward is clear," the pollsters wrote in the survey's executive summary. "Education about and greater exposure to Arab Americans and American Muslims are the keys both to greater understanding of these growing communities of American citizens and to ensuring that their rights are secured."

The poll found a growing number of Americans doubt that Muslim-Americans or Arab-Americans would be able to perform in a government post without their ethnicity or religion affecting their work. Thirty-six percent of respondents felt that Arab-Americans would be influenced by their ethnicity, and 42 percent said Muslim-Americans would be influenced by religion.

Results differed by political party, with the majority of Republicans holding negative views of both Arab-Americans and Muslims. Democrats gave Arab-Americans a 30 percent unfavorable rating and Muslim-Americans a 33 percent unfavorable rating, while Republicans gave Arab-Americans a 54 percent unfavorable rating and Muslim-Americans a 63 percent unfavorable rating.

Similarly, Republicans were more likely to think that Arab-Americans and Muslim-Americans would be unable to hold a role in government without being influenced by ethnicity or religion. Fifty-seven percent of Republicans said they believed Muslim-Americans would be influenced by their religion, while half said the same for Arab-Americans. Almost half of Democrats said they were confident Muslim-Americans and Arab-Americans could do their jobs without influence.

The survey also showed a generational gap in attitudes toward Arab-Americans and Muslim-Americans, with younger respondents showing more favorability toward both groups. Part of that, according to the pollsters, has to do with exposure -- those ages 18 to 29 were likely to know Arab-Americans or Muslim-Americans, while respondents older than 65 were almost evenly split on that question.

Previous polls also have shown Americans holding a cold view of Muslims. A Pew poll this month found that Muslims were perceived as negatively as atheists.

The Arab American Institute survey was conducted online among 1,110 likely U.S. voters from June 27 to June 29, a period of unrest in the Muslim world.

Several Muslim-American groups are dedicated to changing the negative perception of Islam, and have encouraged Muslims to pursue more public engagement, both within the federal government and individual communities.

The Littlest Victims of Anti-Science Rhetoric


Vitamin K is a critical compound in our bodies that allows blood to coagulate. Infants are naturally low in it and are at risk for terrible problems that can be otherwise prevented with a simple shot.
Photo by Shutterstock/Natalia Karpova

After all these years advocating for science, and hammering away at those who deny it, I’m surprised I can still be surprised at how bad anti-science can get.
Yet here we are. Babies across the U.S. are suffering from horrific injuries—including hemorrhages, brain damage, and even strokes (yes, strokes, in babies)—because of parents refusing a vitamin K shot. This vitamin is needed to coagulate blood, and without it internal bleeding can result.
Vitamin K deficiency is rare in adults, but it doesn’t cross the placental barrier except in limited amounts, so newborn babies are generally low in it. That’s why it’s been a routine injection for infants for more than 50 years—while vitamin K deficiency is not as big a risk as other problems, the shot is essentially 100 percent effective, and is quite safe.
Mind you, this is not a vaccine, which contains minuscule doses of killed or severely weakened microbes to prime the immune system. It’s a shot of a critical vitamin.
Phil Plait writes Slate’s Bad Astronomy blog and is an astronomer, public speaker, science evangelizer, and author of Death from the Skies! 
 
Nevertheless, as my friend Chris Mooney writes in Mother Jones, there is an overlap with the anti-vax and “natural health” community. As an example, as reported by the Centers for Disease Control and Prevention, in the Nashville, Tennessee, area, more than 3 percent of parents who gave birth in hospitals refused the injection overall, but in “natural birth” centers that rate shot up to 28 percent.
My Slate colleague Amanda Marcotte points out that vitamin K levels in breast milk are very low as well, and that’s the preferred technique for baby feeding among those who are also hostile to vaccines. In those cases, getting the shot is even more critical.

But the anti-vax rhetoric has apparently crossed over into simple injections. Chris has examples in his Mother Jones article, and there's this in an article in the St. Louis Post-Dispatch:
The CDC learned that parents refused the injection for several reasons, including an impression it was unnecessary if they had healthy pregnancies, and a desire to minimize exposure to “toxins.” A 1992 study associated vitamin K with childhood leukemia, but the findings have been debunked by subsequent research.
“We sort of came to the realization that parents were relying on a lot of sources out there that were providing misleading and inaccurate information,” said Dr. Lauren Marcewicz, a pediatrician with the CDC’s Division of Blood Disorders. 
By “sources,” they mean various anti-science websites and alt-med anti-vaxxers like Joe Mercola (who has decidedly odd things to say about the vitamin K shot, which you can read about at Science-Based Medicine). Despite the lack of evidence of harm, some parents are still buying into the nonsense, and it’s babies who are suffering the ghastly consequences.

These include infants with brain damage, children with severe developmental disabilities, and more, because of parents refusing a simple shot for their infants. The irony here is extreme: These are precisely the sorts of things the anti-vaxxers claim they are trying to prevent.
The Centers for Disease Control and Prevention has a great Web page about Vitamin K: what it is, why we need it, and why babies need it even more so. It will answer any questions you have about this necessary vitamin.

If you’re about to have a baby or have had one recently: Congratulations! It’s one of the most amazing things we can do as humans, and I will always remember watching and participating in my daughter’s birth. I would have done anything to make her ready for the world, and for me—for every parent—that includes getting the real facts about health.

George Will stuns Fox panel: ‘Preposterous’ that U.S. can’t shelter child refugees from violence

 By David Edwards
Sunday, July 27, 2014 11:12 EDT


George Will speaks to Fox News
                                       
     
Fox News contributor George Will shocked his fellow panelists on Sunday by asserting that the United States should not deport child refugees who were fleeing violence in Central America.

“We ought to say to these children, ‘Welcome to America, you’re going to go to school, and get a job, and become American,’” Will suggested. “We have 3,141 counties in this country. That would be 20 per county.”

“The idea that we can’t assimilate these 8-year-old criminals with their teddy bears is preposterous,” he added.
At that point, Fox News host Chris Wallace interrupted: “You got to know, we’re going to get tons of email saying, ‘This guy doesn’t understand the border. Why should we be dealing with Central America’s problem? We can’t import the problem, they’ve got to deal with it there, and our border has to mean something.’”
“We can handle this problem is what I’m saying,” Will explained. “We’ve handled what [American poet] Emma Lazarus called the ‘wretched refuse of your teeming shore’ a long time ago, and a lot more people than this.”
Watch the video below from Fox News’ Fox News Sunday, broadcast July 27, 2014.

How Do Intelligent Machines Learn?

How and under what conditions is it possible for an intelligent machine to learn? To address this question, let’s start with a definition of machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In simple terms, machine learning requires a machine to learn in a way similar to how humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience, and it has been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The study of these algorithms’ performance is a branch of theoretical computer science known as “computational learning theory.”
What this means in simple terms is that an intelligent machine has in its memory data relating to a finite set of experiences. The machine-learning algorithms (i.e., software) assess this data for its similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of this new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they associate a probability with a specific outcome and act in accordance with the highest probability.
Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications:
  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output; a minimal sketch of this case follows the list below.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example a numerical value associated with utility). In effect the agent receives rewards for good responses and punishment for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken that step. At this point the agent takes another time step and again calculates the reward, which provides feedback to guide the agent’s next action. The agent’s goal is to collect as much reward as possible.
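Here is a minimal sketch of the supervised-learning case (item 1 above), assuming the scikit-learn library is available; the tiny training set is invented purely for illustration. The fitted model maps unseen inputs to predicted outputs along with the probabilities mentioned earlier.

# Supervised learning sketch: training examples are (input, desired output) pairs;
# the fitted model then predicts outputs for unseen inputs.
from sklearn.linear_model import LogisticRegression

# Inputs: [hours studied, hours slept]; outputs: 1 = passed exam, 0 = failed.
X_train = [[8, 7], [6, 8], [2, 4], [1, 6], [7, 5], [3, 3]]
y_train = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)        # infer a function from the training examples

X_new = [[5, 6], [1, 2]]           # unseen inputs
print(model.predict(X_new))        # predicted outputs
print(model.predict_proba(X_new))  # probabilities behind each prediction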
In essence machine learning incorporates four essential elements (a skeletal code sketch follows the list).
  1. Representation: The intelligent machine must be able to assimilate data (input) and transform it in a way that makes it useful for a specific algorithm.
  2. Generalization: The intelligent machine must be able to accurately map unseen data to similar data in the learning data set.
  3. Algorithm selection: After generalization the intelligent machine must choose and/or combine algorithms to make a computation (such as a decision or an evaluation).
  4. Feedback: After a computation, the intelligent machine must use feedback (such as a reward or punishment) to improve its ability to perform steps 1 through 3 above.
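For illustration only, the skeleton below arranges those four elements as a generic agent class; the names and structure are my own assumptions, not something taken from the source.

from typing import Any, Callable, List

class LearningAgent:
    def __init__(self, model: Callable[[List[float]], float]):
        self.model = model             # the algorithm chosen in step 3
        self.errors: List[float] = []  # feedback collected in step 4

    def represent(self, raw: Any) -> List[float]:
        # 1. Representation: turn raw input into numeric features the model can use.
        return [float(x) for x in raw]

    def predict(self, raw: Any) -> float:
        # 2./3. Generalization via the chosen algorithm: map unseen data to an output.
        return self.model(self.represent(raw))

    def feedback(self, raw: Any, target: float) -> None:
        # 4. Feedback: record the error so the model or its parameters can be improved.
        self.errors.append(target - self.predict(raw))

# Example usage with a trivial "model" that averages the features:
agent = LearningAgent(model=lambda features: sum(features) / len(features))
print(agent.predict([1, 2, 3]))
agent.feedback([1, 2, 3], target=2.5)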
Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization or what is often referred to as abstraction. This is simply the ability to determine the features and structures of an object (i.e., data) relevant to solving the problem. Humans are excellent when it comes to abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Machine learning has come a long way since the 1972 introduction of Pong, the first game developed by Atari Inc. Today’s computer games are incredibly realistic, and the graphics are similar to watching a movie. Few of us can win a chess game on our computer or smartphone unless we set the difficulty level to low. In general machine learning appears to be accelerating, even faster than the field of AI as a whole. We may, however, see a bootstrap effect, in which machine learning results in highly intelligent agents that accelerate the development of artificial general intelligence, but there is more to the human mind than intelligence. One of the most important characteristics of our humanity is our ability to feel human emotions.

This raises an important question. When will computers be capable of feeling human emotions? A new science is emerging to address how to develop and program computers to be capable of simulating and eventually feeling human emotions. This new science is termed “affective computing.”  We will discuss affective computing in a future post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte
 

Hang in there just a few more years.  SLS (and SpaceX's Falcon Heavy) are on their way.  To the Moon and beyond!

We're putting a forest on a climate-change fast-track


An ambitious experiment that exposes a natural woodland to rising carbon dioxide levels will tell us what's in store for the world's trees, says Rob Mackenzie
 
You head the Birmingham Institute of Forest Research. How will it stand out?
One way it will stand out is a novel experiment called FACE – Free-Air Carbon Dioxide Enrichment. It will be the first in the world to take a mature, temperate, broad-leafed woodland ecosystem and, where it stands, expose it to predicted future atmospheric concentrations of carbon dioxide. We will look at the effects of the CO2 on the structure and functioning of the woodland.
With FACE we are responding to a lack of long-term data on the effects of CO2 on woodland. People have been saying we need something like this for a long time.
 
How long will the experiment last?
The FACE experiment has been on the wish-list of UK scientists for years, but has never been possible at this scale because of funding insecurities. Now we are in the extremely fortunate situation of having received philanthropic funding. This allows us to plan for an experiment lasting at least 10 years. If our results are as significant as we expect, then we should be able to extend the run beyond 10 years.
 
How far forward will it look?
The CO2 we will be adding corresponds to what we expect to be in the air 75 years from now at current rates of change.

How will you be monitoring the woodland?
We will be using developments in genomics to characterise biodiversity in unprecedented detail. For plant health we have a dedicated lab with the latest biomedical technology. And we will use the latest sensor technology to provide us with never-before-seen levels of detail about how semi-natural woodlands function.
 
Can't you just do all this in a lab?
You can learn a lot about how plants respond to changing CO2 using greenhouses, plant growth chambers, even cell lines. But in nature 1+1 has a habit of not equalling 2, so you need to take away the walls, the fake growing media, the artificial climate and watch actual nature working. FACE is Gaia science, if you like.
 
What else will the institute be looking at?
The other topic in the early years is figuring out the microbiology of pathogen threats to plants.
 
Why focus your research on these things?
We don't think it's possible to understand the true value of woodlands and forests if we are uncertain about how resilient they are to biological and environmental challenges. These threats include things like ash dieback disease and, of course, human-induced climate change.
 
How vital are experiments like this?
This is part of an emerging experimental array that will do for ecology what the great atom smashers and telescopes have done for physics. Ultimately, we aim to provide fundamental science, social science and cultural research of relevance to forests anywhere in the world.
 
This article appeared in print under the headline "Fast-forwarding forests"

Profile

Rob Mackenzie is the director of the newly established Birmingham Institute of Forest Research at the University of Birmingham in the UK, where he is also a professor of atmospheric science

Genetic moderation is needed to debate our food future


GM is now a term loaded with baggage. Scientists must allow for people's objections to show the public there's nothing "spooky" about it
 
WITH food security firmly on the international agenda, there's a growing appetite to look again at the opportunities promised by agricultural biotechnology.
Scientists working in this area are excited by new techniques that enable them to edit plant DNA with unprecedented accuracy. Even epigenetic markers, which modulate the activity of genes, can now be altered. The promise is to modify crops to make them more nutritious or resistant to disease.
 
But there's a problem, notably in Europe: genetic modification.
Much of agricultural biotechnology – including conventional breeding – involves genetic modification of one kind or another. But "GM" has come to mean something quite specific, and is loaded with baggage. To many people it means risky or unnatural mixing of genes from widely disparate species, even across the plant and animal kingdoms, to create hybrids such as corn with scorpion genes. That baggage now threatens to undermine mature debate about the future of food production.
 
It is no longer a simple yes/no choice between high-tech agribusiness and conventional production driven by something ill-defined as more "natural".

The battle lines of this latest wave of agricultural advance are already being drawn. The UK's Biotechnology and Biological Sciences Research Council, for example, is working on a position statement on the new technologies, which it expects to release later this summer.
It is clear that, over the coming years, the general public will have to decide which of these technologies we find acceptable and which we do not.
 
So where did it all go wrong to begin with? In the late 1990s, when I was reporting on early GM research for the BBC's current affairs programme Newsnight, anti-GM protestors realised that vivid images made good TV and rampaged through fields in white boiler suits destroying trial crops.
 
On the other side, industry representatives brushed aside public concerns and tried to control the media message, thumping the table in the office of at least one bemused newspaper editor (who went on to co-script a TV drama about a darker side to GM). They also lobbied hard for the relaxation of regulations governing agribusiness.
 
In the middle was the public, just coming to terms with farming's role in the BSE crisis. There was little space for calm, rational debate. Instead, GM became the cuckoo in the nest of agricultural biotechnology and its industry backers became ogres, shouting down any discussion of alternatives.
 
As a result, many people remain unaware that there are other high-tech ways to create crops. Many of these techniques involve the manipulation of genes, but they are not primarily about the transfer of genes across species.
 
But for GM to be discussed alongside such approaches as just another technology, scientists will have to work harder to dispel the public's remaining suspicions.
 
I recently chaired a debate on biotech at the UK's Cambridge Festival of Plants, where one audience member identified a public unease about what he called the slightly "spooky" aspect of GM crops. He meant those scorpion genes, or fish genes placed into tomatoes – the type of research that helped to coin the phrase "yuck factor".
To my surprise, a leading plant scientist on the panel said she would be prepared to see cross-species manipulation of food crops put on hold if the public was overwhelmingly uncomfortable with it. Ottoline Leyser, director of the University of Cambridge's Sainsbury Laboratory, said she believed valuable GM crop development could still be done even if scientists were initially restricted to species that can swap their genes naturally, outside of the laboratory. An example of this might be adding a trait from one variety of rice to another.
 
Nevertheless, Leyser remains adamant that there is "nothing immensely fishy about a fish gene". What's more, she added, the notion of a natural separation between species is misplaced: gene-swapping between species in the wild is far more prevalent than once thought.
But Leyser insisted that scientists must respect the views of objectors – even if "yuck" is their only complaint. That concession from a scientist is unusual. I've spoken to many of her peers who think such objections are irrational.
 
Scientists cannot expect people to accept their work blindly and they must make time to listen. Above all, more of them should be prepared to halt experiments that the public is uncomfortable with. And it's beginning to happen.
 
Paul Freemont is co-director of the Centre for Synthetic Biology and Innovation at Imperial College London. He designs organisms from scratch but would be prepared to discontinue projects that the public is unhappy about. He says scientists need an occasional reality check.
 
"We are going to have to address some of the consequences of what we're doing, and have agreements about what's acceptable to society in terms of manipulating biology at this level," Freemont says.
 
Scientists funded with public money may already feel some obligation to adopt this approach. But those working in industry should consider its advantages too. A more open and engaged conversation with the public could surely benefit the companies trying to sell us novel crop technologies.
 
Society, for its part, will need to listen to the experts with an open mind. And as we work out how to feed an expanding population, we will need to ask questions that are bigger than "GM: yes or no?"
 
This article appeared in print under the headline "Genetic moderation"

Susan Watts is a journalist and broadcaster. She was science editor of Newsnight until the post was closed

Strange dark stuff is making the universe too bright


LIGHT is in crisis. The universe is far brighter than it should be based on the number of light-emitting objects we can find, a cosmic accounting problem that has astronomers baffled.
"Something is very wrong," says Juna Kollmeier at the Observatories of the Carnegie Institution of Washington in Pasadena, California.
 
Solving the mystery could show us novel ways to hunt for dark matter, or reveal the presence of another unknown "dark" component to the cosmos.
 
"It's such a big discrepancy that whatever we find is going to be amazing, and it will overturn something we currently think is true," says Kollmeier.
The trouble stems from the most recent census of objects that produce high-energy ultraviolet light.
Some of the biggest known sources are quasars – galaxies with actively feeding black holes at their centres. These behemoths spit out plenty of UV light as matter falling into them is heated and compressed. Young galaxies filled with hot, bright stars are also contributors.

Ultraviolet light from these objects ionises the gas that permeates intergalactic space, stripping hydrogen atoms of their electrons. Observations of the gas can tell us how much of it has been ionised, helping astronomers to estimate the amount of UV light that must be flying about.
But as our images of the cosmos became sharper, astronomers found that these measurements don't seem to tally with the number of sources found.
 
Kollmeier started worrying in 2012, when Francesco Haardt at the University of Insubria in Como, Italy, and Piero Madau at the University of California, Santa Cruz, compiled the results of several sky surveys and found far fewer UV sources than previously suggested.
 
Then in February, Charles Danforth at the University of Colorado, Boulder, and his colleagues released the latest observations of intergalactic hydrogen by the Hubble Space Telescope. That work confirmed the large amount of gas being ionised. "It could have been that there was much more neutral hydrogen than we thought, and therefore there would be no light crisis," says Kollmeier. "But that loophole has been shut."
 
Now Kollmeier and her colleagues have run computer simulations of intergalactic gas and compared them with the Hubble data, just to be sure. They found that there is five times too much ionised gas for the number of known UV sources in the modern, nearby universe.
 
Strangely, their simulations also show that, for the early, more distant universe, UV sources and ionised gas match up perfectly, suggesting something has changed with time (Astrophysical Journal Letters, doi.org/tqm).
This could be down to dark matter, the mysterious stuff thought to make up more than 80 per cent of the matter in the universe.
 
The leading theoretical candidates for dark matter are weakly interacting massive particles, or WIMPs. There are many proposed versions of WIMPs, including some non-standard varieties that would decay and release UV photons.
 
Knowing that dark matter in the early universe worked like a scaffold to create the cosmic structure we see today, we have a good idea how much must have existed in the past. That suggests dark matter particles are stable for billions of years before they begin to decay.
 
Theorists can now consider the UV problem in their calculations and see if any of the proposed particles start to decay at the right time to account for the extra light, says Kathryn Zurek, a dark matter expert at the University of Michigan in Ann Arbor. If so, that could explain why the excess only shows up in the modern cosmos.
 
If WIMPs aren't the answer, the possible explanations become even more bizarre, such as mysterious "dark" objects that can emit UV light but remain shrouded from view. And if all else fails, there's even a chance something is wrong with our basic understanding of hydrogen.
 
"We don't know what it is, or we would be reporting discovery instead of crisis," says Kollmeier.
"The point is to bring this to everyone's attention so we can figure it out as a community."
 
This article appeared in print under the headline "Why is the cosmos too bright to bear?"

Psychedelic cells are fruit of Alan Turing's equations


(Image: Jonathan McCabe)
 
WE ALL know the world can look weird and wonderful under the microscope, but who knew cells could look this pretty? Actually, you won't find these psychedelic blobs in any living creature on Earth, because contrary to appearances this image has been created by a computer.
Generative artist Jonathan McCabe works with algorithms first developed by mathematician Alan Turing to create pictures like this. "I don't guide the production of any particular image, the program runs from start to finish without input," McCabe says, though he does tweak the software to produce different results. "The trick is to try to make a system that generates interesting output by itself."
Turing is most famous for his pioneering work in computing, but he was also interested in how living creatures produce biological patterns such as a tiger's stripes. He came up with a system of equations that describe how two chemicals react together, resulting in surprisingly lifelike arrangements.
 
McCabe developed his algorithm based on Turing's ideas. His program treats colours as different liquids that can't mix together because of an artificial surface tension, which is what gives them a cell-like appearance. "You get structures which look like cell membranes and mitochondria because at the microscopic scale surface tension forces are strong," says McCabe.
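For readers who want to experiment, here is a compact reaction-diffusion sketch in Python using the well-known Gray-Scott model, a standard two-chemical system in the spirit of Turing's equations. It is a generic illustration, not McCabe's own algorithm, and the parameter values are just one choice that produces organic-looking patterns.

# Gray-Scott reaction-diffusion: two "chemicals" U and V diffuse and react on a grid,
# growing Turing-style spots and stripes from a small seeded square.
import numpy as np

N = 128                      # grid size
Du, Dv = 0.16, 0.08          # diffusion rates of the two chemicals
F, K = 0.035, 0.060          # feed and kill rates (pattern-shaping parameters)

U = np.ones((N, N))
V = np.zeros((N, N))
U[N//2-8:N//2+8, N//2-8:N//2+8] = 0.50   # seed a small square of chemical V
V[N//2-8:N//2+8, N//2-8:N//2+8] = 0.25

def laplacian(Z):
    # Discrete Laplacian with wrap-around (periodic) edges.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for step in range(5000):
    UVV = U * V * V
    U += Du * laplacian(U) - UVV + F * (1 - U)
    V += Dv * laplacian(V) + UVV - (F + K) * V

# V now holds a Turing-style pattern; save it as a grayscale image if desired,
# e.g. with matplotlib: plt.imsave("pattern.png", V, cmap="magma")

Adjusting the feed and kill rates F and K shifts the output between spots, stripes, and maze-like textures, which is the kind of tweaking McCabe describes.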
 
This article appeared in print under the headline "Rise of the blobs"

Cagey material acts as alcohol factory

By Kate Greene

Jeff Long, Materials Sciences scientist, with student Dianne Xiao. The team’s research enabled MOFs to oxidize ethane to ethanol. Credit: Roy Kaltschmidt
Some chemical conversions are harder than others. Refining natural gas into an easy-to-transport, easy-to-store liquid alcohol has so far been a logistic and economic challenge. But now, a new material, designed and patented by researchers at Lawrence Berkeley National Laboratory (Berkeley Lab), is making this process a little easier. The research, published earlier this year in Nature Chemistry, could pave the way for the adoption of cheaper, cleaner-burning fuels.


"Hydrocarbons like ethane and methane could be used as fuel, but they're hard to store and transport because they're gases," says Dianne Xiao, graduate student at the University of California Berkeley.
"But if you have a catalyst that can selectively turn them into alcohols, which are much easier to transfer and store," she says, "that would make things a lot easier."
Xiao and Jeffrey Long, a scientist in Berkeley Lab's Materials Sciences Division and professor of chemistry at UC Berkeley, focused this project on converting ethane to ethanol.
Ethanol is a potential alternative fuel that burns cleaner and has a higher energy density than other alternative fuels like methanol. One problem with ethanol, however, is that current production methods make it expensive.

The innovation came when Long and Xiao designed a material called Fe-MOF-74, in a class of materials called metal-organic frameworks, or MOFs. Because of their cage-shaped structures, MOFs boast a high surface area, which means they can absorb extremely large amounts of gas or liquid compared to the weight of the MOF itself.

A view inside the MOF: hexagonal channels lined with iron. Credit: Dianne Xiao, Berkeley

Since MOFs are essentially structured like a collection of tiny cages, they can capture other molecules, acting as a filter. Additionally, they can perform chemistry as molecules pass through the cages, becoming little chemical factories that convert one substance to another.
It's this chemical-conversion feature of MOFs that Long and Xiao took advantage of. Ethane is a molecule made of two carbon atoms where each atom is surrounded by atoms of hydrogen. Ethanol is also made of two carbon atoms bonded to hydrogen atoms, but one of its carbon atoms is also bonded to a hydrogen-oxygen ion called a hydroxyl.

Previous attempts to add a hydroxyl ion to ethane to make ethanol have required high pressure and high temperatures that range from 200 to 300 degrees Celsius. It's costly and inconvenient.
But by using a specially designed MOF—one in which a kind of iron was added inside the tiny molecular cages—the researchers were able to reduce the need for extreme heat, converting ethane to alcohol at just 75 degrees Celsius.


"This is getting toward a holy grail in chemistry which is to be able to cleanly take alkanes to alcohols without a lot of energy," says Long. Long and Xiao worked closely with researchers at the National Institute of Standards and Technology, the University of Minnesota, the University of Delaware, and the University of Turin to design, model, and characterize the MOF and resultant ethanol production.

Next steps involve tweaking the concentrations of iron in the MOF to produce a more efficient conversion, says Xiao. "It's a promising proof of principle," she says. "It's exciting that we can do this now at low temperature and low pressures."

More information: "Oxidation of ethane to ethanol by N2O in a metal–organic framework with coordinatively unsaturated iron(II) sites." Dianne J. Xiao, et al. Nature Chemistry 6, 590–595 (2014) DOI: 10.1038/nchem.1956. Received 17 December 2013 Accepted 14 April 2014 Published online 18 May 2014
Journal reference: Nature Chemistry

Read more at: http://phys.org/news/2014-07-cagey-material-alcohol-factory.html#jCp

Direct reaction of heavy atoms with catalyst surface demonstrated


Ruthenium crystal covered with oxygen atoms in the experimental set-up Harpoen. Credit: Fundamental Research on Matter (FOM)
Researchers from FOM Institute DIFFER are the first to have demonstrated that heavier atoms in a material surface can react directly with a surrounding gas. The so-called Eley-Rideal reaction has never previously been demonstrated for atoms heavier than hydrogen. The Eley-Rideal process requires less energy than a reaction between two atoms that are both attached to the material. The discovery could lead to more efficient catalysts for the production of synthetic fuel, for example. The researchers published the results on 29 July online in Physical Review Letters.

Most chemical reactions on a material surface (catalyst) follow the Langmuir-Hinshelwood scheme: atoms from the surroundings adhere to the material and move randomly across the surface until they meet each other. At that spot the atoms react with each other and are subsequently released from the surface. In Eley-Rideal reactions a particle on the surface instead reacts directly with an atom from the surroundings that is rapidly moving past it. According to the theory, this type of reaction takes place most easily with light, rapidly moving atoms. In practice, the Eley-Rideal reaction has only been demonstrated with the lightest atom, hydrogen. The team from DIFFER, the Materials innovation institute M2i and the Van 't Hoff Institute for Molecular Sciences in Amsterdam has now demonstrated for the first time that heavier atoms such as nitrogen and oxygen can also undergo an Eley-Rideal reaction.

The direct Eley-Rideal reaction between the surrounding gas and an atom that is attached to the surface had never previously been observed for heavier atoms. Credit: Fundamental Research on Matter (FOM)
Rebound

"In our set-up, Harpoen, we can directly observe the difference between the two types of reaction", explains research leader Dr Teodor Zaharia. His team covered a surface of ruthenium with a layer of and fired a focused beam of at this to obtain the reaction product nitrogen oxide. "The Eley-Rideal reaction takes place within a fraction of a second: the original kinetic energy of the nitrogen is conserved and you can therefore observe the reaction product rebounding from the surface at the same angle as which the original nitrogen atom collided with it." In the Langmuir-Hinshelwood reaction, however, there is no link between the direction of movement of the original atoms and the reaction products; due to the random walk across the surface the information about the original direction of movement is lost. Using detectors that can measure the direction of the reaction product, Zaharia and his team could unequivocally observe the fingerprint of the Eley-Rideal reaction.

The higher energy of the reaction products also revealed that an Eley-Rideal reaction had taken place: just one of the reacting atoms needs to break its attachment to the surface, so less energy is needed. The Eley-Rideal reaction between heavier atoms is therefore attractive for applications in catalysis. The direct reaction offers extra control over which particles react, and that could lead to new ways of producing and processing materials. The research will be continued in a collaboration between DIFFER and the Center of Interface Dynamic for Sustainability that fellow researcher and former director of DIFFER Aart Kleyn has set up in the Chinese city of Chengdu.
 
More information: 'Eley-Rideal reactions with N atoms at Ru(0001): Formation of NO and N2', T. Zaharia, A. Kleijn, M. Gleeson, Physical Review Letters, 21 July 2014.
Read more at: http://phys.org/news/2014-07-reaction-heavy-atoms-catalyst-surface.html#jCp

Space travel in science fiction

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Space_travel_in_science_fiction Rock...