
Tuesday, July 29, 2014

House Republicans Pass Bill to Lower Taxes on the Rich and Raise Taxes on the Poor

Mon Jul. 28, 2014 2:12 PM EDT
So what are Republicans in the House of Representatives up to these days? According to Danny Vinik, they just passed a bill that would reduce taxes on the rich and raise them on the poor.
I know, I know: you're shocked. But in a way, I think this whole episode is even worse than Vinik makes it sound.

Here's the background: The child tax credit reduces your income tax by $1,000 for each child you have. It phases out for upper middle-income folks, but—and this is the key point—it phases out differently for singles and couples. The way the numbers sort out, it treats singles better than couples. This is the dreaded "marriage penalty," which is bad because we want to encourage people to get married, not discourage them.

So what did House Republicans do? Naturally, they raised the phase-out threshold for married couples so that well-off couples would get a higher benefit. They didn't have to do this, of course. They could have lowered the benefit for singles instead. Or they could have jiggled the numbers so that everyone got equal benefits but the overall result was revenue neutral.

But they didn't. They chose the path that would increase the benefit—and thus lower taxes—for married couples making high incomes. The bill also indexes the credit to inflation, which helps only those with incomes high enough to claim the full credit. And it does nothing to make permanent a reduction in the earnings threshold that benefits poor working families. Here's the net result:
If the House legislation became law, the Center on Budget and Policy Priorities estimated that a couple making $160,000 a year would receive a new tax cut of $2,200. On the other hand, the expiring provisions of the CTC would cause a single mother with two kids making $14,500 to lose her full CTC, worth $1,725.
So inflation indexing, which is verboten when the subject is the minimum wage, is A-OK when it comes to high-income taxpayers. And eliminating the marriage penalty is also a good idea—but again, only for high-income couples.
Which is crazy. I don't really have a firm opinion on whether the government should be in the business of encouraging marriage, but if it is, surely it should focus its attention on the people who need encouragement in the first place. And that is very decidedly not the upper middle class, which continues to get married at the same rate as ever.

So we have a deficit-busting tax cut. It's a cut only for the upper middle class. It's indexed for inflation, even though we're not allowed to index things like the minimum wage. And the poor are still scheduled for a tax increase in 2017 because this bill does nothing to stop it. It's a real quad-fecta. I wonder what Paul Ryan thinks of all this?

Third Siberian Crater “Doesn’t Look Like Natural Formation”

Is it possible that someone has been playing a big hoax on us?


OK, this hole definitely needs to be looked into. A third mysterious crater has been found in Siberia. This one was discovered in the Taymyr Peninsula by local reindeer herders who live in the northern village of Nosok.
Map showing locations of the three Siberian craters reported so far.

The area is east of Yamal, where the first crater was reported last week, and northeast of the Taz district where the second one was found. This hole is smaller than those two – about 4 feet in diameter – and observers say its perimeter is perfectly round and the 100-to-300-feet-deep hole is shaped like a cone. According to The Siberian Times, one observer described it this way:
It is not like this is the work of men, but also doesn’t look like natural formation.
News travels slowly in Siberia. Local residents say the hole was formed on September 27, 2013. Shortly after, Mikhail Lapsui, a deputy of the regional parliament, flew over the area by helicopter and gave this account:

“There is ground outside, as if it was thrown as a result of an underground explosion. Observers give several versions. According to the first, initially at the place was smoking, and then there was a bright flash. In the second version, a celestial body fell there.”

Geologists, ecologists and historians are unable to come to a consensus on the causes of the craters. The prevailing theory on the other two is methane gas releases caused by global warming melting the permafrost. However, the Taymyr crater is far from the gas fields of the Yamal and Taz areas and is smaller, cone-shaped and perfectly formed.

A view inside the Taymyr crater, taken by a herder who stayed far enough away to not fall in.
Could it have another cause? Marina Leibman, chief scientist of the Earth Cryosphere Institute, says more information is needed.
"Undoubtedly, we need to study all such formations. It is necessary to be able to predict their occurrence."
Especially if it “doesn’t look like natural formation.”

No, Earth Wasn’t Nearly Destroyed by a 2012 Solar Storm

Credit: NASA/SDO/AIA
Yes, a large glob of plasma and magnetic fields from the sun did just miss us two years ago, as news organizations have feverishly reported over the past few days, following a NASA press release. At the time, scientists were hugely relieved it flew by Earth and missed us entirely. If it had hit, the coronal mass ejection (CME) could have decimated power grids on the ground and sizzled satellite electronics in space, causing widespread communications and electricity blackouts. It would not, however, have destroyed the entire planet (unless you equate the loss of the internet with the end of existence).

Solar storms are a real threat, however, and we might not be lucky enough to dodge the next one.  Our sun is currently in the middle of its “solar maximum,” a period of especially high magnetic activity—the root cause of solar storms and CMEs. So far, we’ve avoided major trouble from our nearest star this year, and physicists are even pondering why this cycle appears to be unusually quiescent. But solar activity is hard to predict, and no one knows when the next tempest might erupt.

In the August 2008 issue of Scientific American, astrophysicists Sten F. Odenwald and James L. Green argued that better solar forecasting is a must. If another major CME hits Earth, similar in scale to the superstorm of 1859, the infrastructure damage could be catastrophic, they wrote, but an early warning would help mitigate the damage. The authors laid out a plan for steeling Earth against future solar attacks by reinforcing power grids and shielding satellites, and by improving our ability to predict solar storms through better monitoring of the sun. For more on what we should be doing to protect ourselves, check out the article (subscription required) here:

Bracing the Satellite Infrastructure for a Solar Superstorm
About the Author: Clara Moskowitz is Scientific American's associate editor covering space and physics. Follow on Twitter @ClaraMoskowitz.
The views expressed are those of the author and are not necessarily those of Scientific American.

RestoringHistory.US

Mosab Hassan Yousef (Son of Hamas Founder) tells the truth about Hamas

https://www.youtube.com/watch?feature=player_embedded&v=KakxXN5Z-XI

New theory says the Universe isn't expanding — it's just getting fat

Conventional thinking says the Universe has been expanding ever since the Big Bang. But theoretical astrophysicist Christof Wetterich says it's not expanding at all. It’s just that the mass of all the particles within it is steadily increasing.
Top Image: T. Piner.
We think the Universe is expanding because all the galaxies within it are pushing away from one another. Scientists see this as the redshift — a kind of Doppler effect that happens when atoms emit or absorb light. We see these frequencies as appearing in the red, an indication that the light's source is moving away from us. Galaxies exhibit this redshift, which is why scientists say the Universe is expanding.
But Wetterich, who works out of the University of Heidelberg in Germany, says the characteristic light emitted by atoms is also governed by the masses of the atoms’ elementary particles, particularly their electrons.
Writing in Nature News, Jon Cartwright explains:
If an atom were to grow in mass, the photons it emits would become more energetic. Because higher energies correspond to higher frequencies, the emission and absorption frequencies would move towards the blue part of the spectrum. Conversely, if the particles were to become lighter, the frequencies would become redshifted.
Because the speed of light is finite, when we look at distant galaxies we are looking backwards in time — seeing them as they would have been when they emitted the light that we observe. If all masses were once lower, and had been constantly increasing, the colours of old galaxies would look redshifted in comparison to current frequencies, and the amount of redshift would be proportionate to their distances from Earth. Thus, the redshift would make galaxies seem to be receding even if they were not.
Work through the maths in this alternative interpretation of redshift, and all of cosmology looks very different. The Universe still expands rapidly during a short-lived period known as inflation. But prior to inflation, according to Wetterich, the Big Bang no longer contains a 'singularity' where the density of the Universe would be infinite. Instead, the Big Bang stretches out in the past over an essentially infinite period of time. And the current cosmos could be static, or even beginning to contract.
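To see how this would play out numerically, here is a rough illustrative sketch (not from Wetterich's paper or Cartwright's article). It assumes that atomic transition frequencies scale linearly with the electron mass, as they do for hydrogen-like (Rydberg) levels, so light emitted when masses were lower arrives looking redshifted even with no expansion.

```python
# Illustrative sketch (assumed numbers, not from the paper): if transition
# frequencies scale with electron mass, then 1 + z = nu_now / nu_then
# = m_now / m_then for light emitted when masses were lower.

def apparent_redshift(mass_then_over_now):
    """Redshift z measured today for light emitted when particle masses were
    `mass_then_over_now` times their present value."""
    return 1.0 / mass_then_over_now - 1.0

# If masses were 80% of today's value when a distant galaxy emitted its light,
# its spectral lines would appear redshifted by z = 0.25, with no expansion.
print(apparent_redshift(0.80))  # ≈ 0.25
```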
Whoa. That is a radically different picture of the Universe than what we're used to.
Unfortunately, there’s no way for us to test this. Well, at least not yet.
But Wetterich says his theory is useful for thinking about different cosmological models. And indeed, it may offer some fresh insights into the spooky dark energy that's apparently pushing the Universe outwards at an accelerating rate.

Read the entire study — which has not yet been peer reviewed — at arXiv: “A Universe without expansion.” But as Cartwright notes in his article, other physicists are not hating the idea.

To AGW Doubters, Skeptics, "Deniers", and Anyone Interested in the Science Behind Global Warming

Almost all of what follows comes from Wikipedia, but as it agrees with my own knowledge of chemistry and physics from many sources over the years, it makes a good, if sometimes hard-to-follow, summary of the science behind anthropogenic CO2-enhanced global warming.  It is theory, however, so how much warming it has caused in the Earth's atmosphere over the last ~150 years, and what the climatological consequences have been, are left to scientific debate.
___________________________________________________________

We start with Svante Arrhenius, in the late 19th century:

Greenhouse effect

Arrhenius developed a theory to explain the ice ages, and in 1896 he was the first scientist to attempt to calculate how changes in the levels of carbon dioxide in the atmosphere could alter the surface temperature through the greenhouse effect.[8] He was influenced by the work of others, including Joseph Fourier and John Tyndall. Arrhenius used the infrared observations of the moon by Frank Washington Very and Samuel Pierpont Langley at the Allegheny Observatory in Pittsburgh to calculate the absorption of infrared radiation by atmospheric CO2 and water vapour. Using 'Stefan's law' (better known as the Stefan-Boltzmann law), he formulated his greenhouse law. In its original form, Arrhenius' greenhouse law reads as follows:
if the quantity of carbonic acid [CO2] increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.
The following equivalent formulation of Arrhenius' greenhouse law is still used today:[9]
ΔF = α ln(C/C_0)
Here C is carbon dioxide (CO2) concentration measured in parts per million by volume (ppmv); C_0 denotes a baseline or unperturbed concentration of CO2, and ΔF is the radiative forcing, measured in watts per square meter. The constant alpha (α) has been assigned a value between five and seven.[9]
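To put numbers on the formula, here is a small illustrative calculation (not part of the Wikipedia text). The value α = 5.35 W/m² used below is one commonly quoted modern estimate, within the five-to-seven range above, and 280 → 560 ppmv is the usual pre-industrial concentration and its doubling.

```python
import math

def radiative_forcing(c_ppmv, c0_ppmv, alpha=5.35):
    """Arrhenius' greenhouse law, ΔF = α ln(C/C_0), in watts per square meter.
    alpha defaults to 5.35 W/m², a commonly quoted modern value inside the
    5-7 range given above."""
    return alpha * math.log(c_ppmv / c0_ppmv)

# Doubling CO2 from the pre-industrial ~280 ppmv to 560 ppmv:
print(radiative_forcing(560, 280))            # ≈ 3.7 W/m² with α = 5.35
print(radiative_forcing(560, 280, alpha=7.0)) # ≈ 4.9 W/m² at the top of the range
```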
Arrhenius at the first Solvay conference on chemistry in 1922 in Brussels.
 
Based on information from his colleague Arvid Högbom, Arrhenius was the first person to predict that emissions of carbon dioxide from the burning of fossil fuels and other combustion processes were large enough to cause global warming. In his calculation Arrhenius included the feedback from changes in water vapor as well as latitudinal effects, but he omitted clouds, convection of heat upward in the atmosphere, and other essential factors. His work is currently seen less as an accurate prediction of global warming than as the first demonstration that it should be taken as a serious possibility.

Arrhenius' absorption values for CO2 and his conclusions met criticism by Knut Ångström in 1900, who published the first modern infrared spectrum of CO2 with two absorption bands, and published experimental results that seemed to show that absorption of infrared radiation by the gas in the atmosphere was already "saturated" so that adding more could make no difference. Arrhenius replied strongly in 1901 (Annalen der Physik), dismissing the critique altogether. He touched the subject briefly in a technical book titled Lehrbuch der kosmischen Physik (1903). He later wrote Världarnas utveckling (1906) (German: Das Werden der Welten [1907], English: Worlds in the Making [1908]) directed at a general audience, where he suggested that the human emission of CO2 would be strong enough to prevent the world from entering a new ice age, and that a warmer earth would be needed to feed the rapidly increasing population:
"To a certain extent the temperature of the earth's surface, as we shall presently see, is conditioned by the properties of the atmosphere surrounding it, and particularly by the permeability of the latter for the rays of heat." (p46)
"That the atmospheric envelopes limit the heat losses from the planets had been suggested about 1800 by the great French physicist Fourier. His ideas were further developed afterwards by Pouillet and Tyndall. Their theory has been styled the hot-house theory, because they thought that the atmosphere acted after the manner of the glass panes of hot-houses." (p51)
 
"If the quantity of carbonic acid [CO2] in the air should sink to one-half its present percentage, the temperature would fall by about 4°; a diminution to one-quarter would reduce the temperature by 8°. On the other hand, any doubling of the percentage of carbon dioxide in the air would raise the temperature of the earth's surface by 4°; and if the carbon dioxide were increased fourfold, the temperature would rise by 8°." (p53)
 
"Although the sea, by absorbing carbonic acid, acts as a regulator of huge capacity, which takes up about five-sixths of the produced carbonic acid, we yet recognize that the slight percentage of carbonic acid in the atmosphere may by the advances of industry be changed to a noticeable degree in the course of a few centuries." (p54)
 
"Since, now, warm ages have alternated with glacial periods, even after man appeared on the earth, we have to ask ourselves: Is it probable that we shall in the coming geological ages be visited by a new ice period that will drive us from our temperate countries into the hotter climates of Africa? There does not appear to be much ground for such an apprehension. The enormous combustion of coal by our industrial establishments suffices to increase the percentage of carbon dioxide in the air to a perceptible degree." (p61)
 
"We often hear lamentations that the coal stored up in the earth is wasted by the present generation without any thought of the future, and we are terrified by the awful destruction of life and property which has followed the volcanic eruptions of our days. We may find a kind of consolation in the consideration that here, as in every other case, there is good mixed with the evil. By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates, especially as regards the colder regions of the earth, ages when the earth will bring forth much more abundant crops than at present, for the benefit of rapidly propagating mankind." (p63)
Arrhenius clearly believed that a warmer world would be a positive change. His ideas remained in circulation, but until about 1960 most scientists doubted that global warming would occur, believing the oceans would absorb CO2 faster than humanity emitted the gas. Most scientists also dismissed the greenhouse effect as an implausible cause of the ice ages, since Milutin Milankovitch had presented a mechanism based on orbital changes of the earth (Milankovitch cycles).
Nowadays, the accepted explanation is that orbital forcing sets the timing for ice ages with CO2 acting as an essential amplifying feedback.

Arrhenius estimated that halving of CO2 would decrease temperatures by 4–5 °C (Celsius) and a doubling of CO2 would cause a temperature rise of 5–6 °C.[10] In his 1906 publication, Arrhenius adjusted the value downwards to 1.6 °C (including water vapor feedback: 2.1 °C). Recent (2014) estimates from the IPCC say this value (the climate sensitivity) is likely to be between 1.5 and 4.5 °C. Arrhenius expected CO2 levels to rise at a rate given by emissions in his time. Since then, industrial carbon dioxide levels have risen at a much faster rate: Arrhenius expected CO2 doubling to take about 3,000 years; it is now estimated in most scenarios to take about a century.
___________________________________________________________

And now on to the 20th century, and the quantum-mechanical explanation of why certain gases exhibit the greenhouse effect, i.e., molecular vibrations and infrared radiation absorption:

Molecular vibration

Molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency, and the typical frequencies of molecular vibrations range from less than 10^12 to approximately 10^14 Hz.

In general, a molecule with N atoms has 3N – 6 normal modes of vibration, but a linear molecule has 3N – 5 such modes, as rotation about its molecular axis cannot be observed.[1] A diatomic molecule has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other but each normal mode will involve simultaneous vibrations of different parts of the molecule such as different chemical bonds.

A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E = hν (where h is Planck's constant). A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher overtones.

To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential.

The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.

Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas state.

In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by
E_n = (n + 1/2) hν = (n + 1/2) (h/2π) √(k/m),
where n is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.[7][8]

The difference in energy when n (or v) changes by 1 is therefore equal to hν, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency ν (in the harmonic oscillator approximation).
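As a small worked example (an illustrative sketch, not part of the Wikipedia article), the formula above gives the energy of the infrared photon a vibrational mode absorbs. The 2349 cm⁻¹ wavenumber used below is the well-known asymmetric-stretch band of CO2; treat the numbers as illustrative inputs.

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e10    # speed of light in cm/s, so wavenumbers in cm^-1 convert directly

def vibrational_energy(n, wavenumber_cm):
    """Energy (in joules) of level n of a harmonic mode, E_n = (n + 1/2) h nu."""
    nu = wavenumber_cm * C          # vibration frequency in Hz
    return (n + 0.5) * H * nu

co2_asym_stretch = 2349.0           # cm^-1, the CO2 asymmetric-stretch band
e0 = vibrational_energy(0, co2_asym_stretch)
e1 = vibrational_energy(1, co2_asym_stretch)
# The n = 0 -> 1 transition energy is h*nu: the infrared photon CO2 absorbs.
print(e1 - e0)                      # ≈ 4.7e-20 J (about 0.29 eV)
```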

See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one,
Δn = ±1,
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band.

Intensities

In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate.[9] The intensity of Raman bands depends on polarizability.

The principal modes of vibration: symmetrical stretching, asymmetrical stretching, scissoring (bending), rocking, wagging, and twisting.

Americans' Attitudes Toward Muslims And Arabs Are Getting Worse, Poll Finds


WASHINGTON -- Americans were outraged to learn they were being spied on by the National Security Agency, but many support law enforcement profiling of Muslims, according to a poll released Tuesday by the Arab American Institute.

The survey, conducted by Zogby Analytics for the advocacy group, found that 42 percent of Americans believe law enforcement is justified in using profiling tactics against Muslim-Americans and Arab-Americans. The survey also shows American attitudes toward Arab-Americans and Muslim-Americans have turned for the worse since the Arab American Institute first began polling on the subject in 2010. The new poll found favorability toward Arab-Americans at 36 percent, down from 43 percent in 2010. For Muslim-Americans, favorability was just 27 percent, compared with 36 percent in 2010.

Recent news headlines associated with Muslims have focused on the ongoing civil war in Syria; the rise of ISIS, or the Islamic State in Iraq and the Levant, in Iraq; the abduction of Nigerian schoolgirls by the Islamist group Boko Haram; and the 2012 terrorist attack on a U.S. diplomatic mission in Benghazi, Libya.

"The way forward is clear," the pollsters wrote in the survey's executive summary. "Education about and greater exposure to Arab Americans and American Muslims are the keys both to greater understanding of these growing communities of American citizens and to ensuring that their rights are secured."

The poll found a growing number of Americans doubt that Muslim-Americans or Arab-Americans would be able to perform in a government post without their ethnicity or religion affecting their work. Thirty-six percent of respondents felt that Arab-Americans would be influenced by their ethnicity, and 42 percent said Muslim-Americans would be influenced by religion.

Results differed by political party, with the majority of Republicans holding negative views of both Arab-Americans and Muslims. Democrats gave Arab-Americans a 30 percent unfavorable rating and Muslim-Americans a 33 percent unfavorable rating, while Republicans gave Arab-Americans a 54 percent unfavorable rating and Muslim-Americans a 63 percent unfavorable rating.

Similarly, Republicans were more likely to think that Arab-Americans and Muslim-Americans would be unable to hold a role in government without being influenced by ethnicity or religion. Fifty-seven percent of Republicans said they believed Muslim-Americans would be influenced by their religion, while half said the same for Arab-Americans. Almost half of Democrats said they were confident Muslim-Americans and Arab-Americans could do their jobs without influence.

The survey also showed a generational gap in attitudes toward Arab-Americans and Muslim-Americans, with younger respondents showing more favorability toward both groups. Part of that, according to the pollsters, has to do with exposure -- those ages 18 to 29 were likely to know Arab-Americans or Muslim-Americans, while respondents older than 65 were almost evenly split on that question.

Previous polls also have shown Americans holding a cold view of Muslims. A Pew poll this month found that Muslims were perceived as negatively as atheists.

The Arab American Institute survey was conducted online among 1,110 likely U.S. voters from June 27 to June 29, a period of unrest in the Muslim world.

Several Muslim-American groups are dedicated to changing the negative perception of Islam, and have encouraged Muslims to pursue more public engagement, both within the federal government and individual communities.

The Littlest Victims of Anti-Science Rhetoric


Vitamin K is a critical compound in our bodies that allows blood to coagulate. Infants are naturally low in it and are at risk for terrible problems that can be otherwise prevented with a simple shot.
Photo by Shutterstock/Natalia Karpova

After all these years advocating for science, and hammering away at those who deny it, I’m surprised I can still be surprised at how bad anti-science can get.
Yet here we are. Babies across the U.S. are suffering from horrific injuries—including hemorrhages, brain damage, and even strokes (yes, strokes, in babies)—because of parents refusing a vitamin K shot. This vitamin is needed to coagulate blood, and without it internal bleeding can result.
Vitamin K deficiency is rare in adults, but the vitamin crosses the placental barrier only in limited amounts, so newborn babies are generally low in it. That’s why it’s been a routine injection for infants for more than 50 years—while vitamin K deficiency is not as big a risk as other problems, the shot is essentially 100 percent effective, and is quite safe.
Mind you, this is not a vaccine, which contains minuscule doses of killed or severely weakened microbes to prime the immune system. It’s a shot of a critical vitamin.
Phil Plait
Phil Plait writes Slate’s Bad Astronomy blog and is an astronomer, public speaker, science evangelizer, and author of Death from the Skies! 
 
Nevertheless, as my friend Chris Mooney writes in Mother Jones, there is an overlap with the anti-vax and “natural health” community. As an example, as reported by the Centers for Disease Control and Prevention, in the Nashville, Tennessee, area, more than 3 percent of parents who gave birth in hospitals refused the injection overall, but in “natural birth” centers that rate shot up to 28 percent.
My Slate colleague Amanda Marcotte points out that vitamin K levels in breast milk are very low as well, and that’s the preferred technique for baby feeding among those who are also hostile to vaccines. In those cases, getting the shot is even more critical.

But the anti-vax rhetoric has apparently crossed over into simple injections. Chris has examples in his Mother Jones article, and there’s this in an article in the St. Louis Post-Dispatch:
The CDC learned that parents refused the injection for several reasons, including an impression it was unnecessary if they had healthy pregnancies, and a desire to minimize exposure to “toxins.” A 1992 study associated vitamin K and childhood leukemia, but the findings have been debunked by subsequent research.
“We sort of came to the realization that parents were relying on a lot of sources out there that were providing misleading and inaccurate information,” said Dr. Lauren Marcewicz, a pediatrician with the CDC’s Division of Blood Disorders. 
By “sources,” they mean various anti-science websites and alt-med anti-vaxxers like Joe Mercola (who has decidedly odd things to say about the vitamin K shot, which you can read about at Science-Based Medicine). Despite the lack of evidence of harm, some parents are still buying into the nonsense, and it’s babies who are suffering the ghastly consequences.

These include infants with brain damage, children with severe developmental disabilities, and more, because of parents refusing a simple shot for their infants. The irony here is extreme: These are precisely the sorts of things the anti-vaxxers claim they are trying to prevent.
The Centers for Disease Control and Prevention has a great Web page about Vitamin K: what it is, why we need it, and why babies need it even more so. It will answer any questions you have about this necessary vitamin.

If you’re about to have a baby or have had one recently: Congratulations! It’s one of the most amazing things we can do as humans, and I will always remember watching and participating in my daughter’s birth. I would have done anything to make her ready for the world, and for me—for every parent—that includes getting the real facts about health.

George Will stuns Fox panel: ‘Preposterous’ that U.S. can’t shelter child refugees from violence

 By David Edwards
Sunday, July 27, 2014 11:12 EDT


George Will speaks to Fox News
                                       
Fox News contributor George Will shocked his fellow panelists on Sunday by asserting that the United States should not deport child refugees who were fleeing violence in Central America.

“We ought to say to these children, ‘Welcome to America, you’re going to go to school, and get a job, and become American,’” Will suggested. “We have 3,141 counties in this country. That would be 20 per county.”

“The idea that we can’t assimilate these 8-year-old criminals with their teddy bears is preposterous,” he added.
At that point, Fox News host Chris Wallace interrupted: “You got to know, we’re going to get tons of email saying, ‘This guy doesn’t understand the border. Why should we be dealing with Central America’s problem? We can’t import the problem, they’ve got to deal with it there, and our border has to mean something.’”
“We can handle this problem is what I’m saying,” Will explained. “We’ve handled what [American poet] Emma Lazarus called the ‘wretched refuse of your teeming shore’ a long time ago, and a lot more people than this.”
Watch the video below from Fox News’ Fox News Sunday, broadcast July 27, 2014.

How Do Intelligent Machines Learn?

How and under what conditions is it possible for an intelligent machine to learn? To address this question, let’s start with a definition of machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In simple terms, machine learning requires a machine to learn in much the way humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience, and it has been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The study of these algorithms’ performance is a branch of theoretical computer science known as “computational learning theory.”
What this means in simple terms is that an intelligent machine has in its memory data that relates to a finite set of experiences. The machine-learning algorithms (i.e., software) assess this data for its similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of that new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they associate a probability with a specific outcome and act in accordance with the highest probability.
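A minimal sketch of that idea (an illustration, not code from the book): store a handful of past "experiences," find the ones most similar to a new input, and report the majority outcome along with the fraction of neighbors that voted for it.

```python
from collections import Counter

def predict(memory, new_point, k=3):
    """memory: list of (features, outcome) pairs, with numeric feature tuples.
    Returns (most_likely_outcome, probability) from the k most similar examples."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(memory, key=lambda ex: distance(ex[0], new_point))[:k]
    votes = Counter(outcome for _, outcome in nearest)
    outcome, count = votes.most_common(1)[0]
    return outcome, count / k

# Toy "experience" data: (hours of sun, rainfall) -> whether a crop thrived.
memory = [((8, 2), "thrived"), ((7, 3), "thrived"), ((3, 9), "failed"),
          ((2, 8), "failed"), ((6, 4), "thrived")]
print(predict(memory, (5, 5)))   # ('thrived', 1.0): all 3 nearest neighbors agree
```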
Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications:
  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example a numerical value associated with utility). In effect the agent receives rewards for good responses and punishment for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken that step. At this point the agent takes another time step and again calculates the reward, which provides feedback to guide the agent’s next action. The agent’s goal is to collect as much reward as possible; a minimal sketch of this loop follows the list.
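Here is that reward-driven loop in miniature (an illustration, not from the book): a two-action agent that mostly exploits its current best estimate, occasionally explores, and updates its value estimates from the rewards it receives.

```python
import random

def run_agent(steps=1000, epsilon=0.1):
    """Tiny reinforcement-learning loop: choose an action, get a reward,
    update the estimated value of that action, repeat."""
    true_reward = {"A": 0.3, "B": 0.7}   # hidden payoff probabilities, unknown to the agent
    estimates = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}

    for _ in range(steps):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < epsilon:
            action = random.choice(["A", "B"])
        else:
            action = max(estimates, key=estimates.get)
        reward = 1.0 if random.random() < true_reward[action] else 0.0
        counts[action] += 1
        # Incremental average: feedback nudges the estimate toward the truth.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

print(run_agent())   # estimates should end up near {'A': ~0.3, 'B': ~0.7}
```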
In essence machine learning incorporates four essential elements.
  1. Representation: The intelligent machine must be able to assimilate data (input) and transform it in a way that makes it useful for a specific algorithm.
  2. Generalization: The intelligent machine must be able to accurately map unseen data to similar data in the learning data set.
  3. Algorithm selection: After generalization the intelligent machine must choose and/or combine algorithms to make a computation (such as a decision or an evaluation).
  4. Feedback: After a computation, the intelligent machine must use feedback (such as a reward or punishment) to improve its ability to perform steps 1 through 3 above.
Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization or what is often referred to as abstraction. This is simply the ability to determine the features and structures of an object (i.e., data) relevant to solving the problem. Humans are excellent when it comes to abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Machine learning has come a long way since the 1972 introduction of Pong, the first game developed by Atari Inc. Today’s computer games are incredibly realistic, and the graphics are similar to watching a movie. Few of us can win a chess game on our computer or smartphone unless we set the difficulty level to low. In general machine learning appears to be accelerating, even faster than the field of AI as a whole. We may, however, see a bootstrap effect, in which machine learning results in highly intelligent agents that accelerate the development of artificial general intelligence, but there is more to the human mind than intelligence. One of the most important characteristics of our humanity is our ability to feel human emotions.

This raises an important question. When will computers be capable of feeling human emotions? A new science is emerging to address how to develop and program computers to be capable of simulating and eventually feeling human emotions. This new science is termed “affective computing.”  We will discuss affective computing in a future post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte
 

Hang in there just a few more years.  SLS (and SpaceX's Falcon Heavy) are on their way.  To the moon and beyond!

We're putting a forest on a climate-change fast-track


An ambitious experiment that exposes a natural woodland to rising carbon dioxide levels will tell us what's in store for the world's trees, says Rob Mackenzie
 
You head the Birmingham Institute of Forest Research. How will it stand out?
One way it will stand out is a novel experiment called FACE – Free-Air Carbon Dioxide Enrichment. It will be the first in the world to take a mature, temperate, broad-leafed woodland ecosystem and, where it stands, expose it to predicted future atmospheric concentrations of carbon dioxide. We will look at the effects of the CO2 on the structure and functioning of the woodland.
With FACE we are responding to a lack of long-term data on the effects of CO2 on woodland. People have been saying we need something like this for a long time.
 
How long will the experiment last?
The FACE experiment has been on the wish-list of UK scientists for years, but has never been possible at this scale because of funding insecurities. Now we are in the extremely fortunate situation of having received philanthropic funding. This allows us to plan for an experiment lasting at least 10 years. If our results are as significant as we expect, then we should be able to extend the run beyond 10 years.
 
How far forward will it look?
The CO2 we will be adding corresponds to what we expect to be in the air 75 years from now at current rates of change.
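As a back-of-the-envelope check on that figure (assumed numbers, not from the interview): atmospheric CO2 was roughly 400 parts per million in 2014 and has been rising by roughly 2 ppm per year, which puts the 75-year level in the region of 550 ppm.

```python
# Rough projection under the assumptions stated above (illustrative only).
current_ppm = 400        # approximate atmospheric CO2 in 2014
growth_per_year = 2      # approximate recent growth rate, ppm per year
years_ahead = 75

print(current_ppm + growth_per_year * years_ahead)   # 550 ppm
```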

How will you be monitoring the woodland?
We will be using developments in genomics to characterise biodiversity in unprecedented detail. For plant health we have a dedicated lab with the latest biomedical technology. And we will use the latest sensor technology to provide us with never-before-seen levels of detail about how semi-natural woodlands function.
 
Can't you just do all this in a lab?
You can learn a lot about how plants respond to changing CO2 using greenhouses, plant growth chambers, even cell lines. But in nature 1+1 has a habit of not equalling 2, so you need to take away the walls, the fake growing media, the artificial climate and watch actual nature working. FACE is Gaia science, if you like.
 
What else will the institute be looking at?
The other topic in the early years is figuring out the microbiology of pathogen threats to plants.
 
Why focus your research on these things?
We don't think it's possible to understand the true value of woodlands and forests if we are uncertain about how resilient they are to biological and environmental challenges. These threats include things like ash dieback disease and, of course, human-induced climate change.
 
How vital are experiments like this?
This is part of an emerging experimental array that will do for ecology what the great atom smashers and telescopes have done for physics. Ultimately, we aim to provide fundamental science, social science and cultural research of relevance to forests anywhere in the world.
 
This article appeared in print under the headline "Fast-forwarding forests"

Profile

Rob Mackenzie is the director of the newly established Birmingham Institute of Forest Research at the University of Birmingham in the UK, where he is also a professor of atmospheric science

Genetic moderation is needed to debate our food future


GM is now a term loaded with baggage. Scientists must allow for people's objections to show the public there's nothing "spooky" about it
 
WITH food security firmly on the international agenda, there's a growing appetite to look again at the opportunities promised by agricultural biotechnology.
Scientists working in this area are excited by new techniques that enable them to edit plant DNA with unprecedented accuracy. Even epigenetic markers, which modulate the activity of genes, can now be altered. The promise is to modify crops to make them more nutritious or resistant to disease.
 
But there's a problem, notably in Europe: genetic modification.
Much of agricultural biotechnology – including conventional breeding – involves genetic modification of one kind or another. But "GM" has come to mean something quite specific, and is loaded with baggage. To many people it means risky or unnatural mixing of genes from widely disparate species, even across the plant and animal kingdoms, to create hybrids such as corn with scorpion genes. That baggage now threatens to undermine mature debate about the future of food production.
 
It is no longer a simple yes/no choice between high-tech agribusiness and conventional production driven by something ill-defined as more "natural".

The battle lines of this latest wave of agricultural advance are already being drawn. The UK's Biotechnology and Biological Sciences Research Council, for example, is working on a position statement on the new technologies, which it expects to release later this summer.
It is clear that, over the coming years, the general public will have to decide which of these technologies we find acceptable and which we do not.
 
So where did it all go wrong to begin with? In the late 1990s, when I was reporting on early GM research for the BBC's current affairs programme Newsnight, anti-GM protestors realised that vivid images made good TV and rampaged through fields in white boiler suits destroying trial crops.
 
On the other side, industry representatives brushed aside public concerns and tried to control the media message, thumping the table in the office of at least one bemused newspaper editor (who went on to co-script a TV drama about a darker side to GM). They also lobbied hard for the relaxation of regulations governing agribusiness.
 
In the middle was the public, just coming to terms with farming's role in the BSE crisis. There was little space for calm, rational debate. Instead, GM became the cuckoo in the nest of agricultural biotechnology and its industry backers became ogres, shouting down any discussion of alternatives.
 
As a result, many people remain unaware that there are other high-tech ways to create crops. Many of these techniques involve the manipulation of genes, but they are not primarily about the transfer of genes across species.
 
But for GM to be discussed alongside such approaches as just another technology, scientists will have to work harder to dispel the public's remaining suspicions.
 
I recently chaired a debate on biotech at the UK's Cambridge Festival of Plants, where one audience member identified a public unease about what he called the slightly "spooky" aspect of GM crops. He meant those scorpion genes, or fish genes placed into tomatoes – the type of research that helped to coin the phrase "yuck factor".
To my surprise, a leading plant scientist on the panel said she would be prepared to see cross-species manipulation of food crops put on hold if the public was overwhelmingly uncomfortable with it. Ottoline Leyser, director of the University of Cambridge's Sainsbury Laboratory, said she believed valuable GM crop development could still be done even if scientists were initially restricted to species that can swap their genes naturally, outside of the laboratory. An example of this might be adding a trait from one variety of rice to another.
 
Nevertheless, Leyser remains adamant that there is "nothing immensely fishy about a fish gene". What's more, she added, the notion of a natural separation between species is misplaced: gene-swapping between species in the wild is far more prevalent than once thought.
But Leyser insisted that scientists must respect the views of objectors – even if "yuck" is their only complaint. That concession from a scientist is unusual. I've spoken to many of her peers who think such objections are irrational.
 
Scientists cannot expect people to accept their work blindly and they must make time to listen. Above all, more of them should be prepared to halt experiments that the public is uncomfortable with. And it's beginning to happen.
 
Paul Freemont is co-director of the Centre for Synthetic Biology and Innovation at Imperial College London. He designs organisms from scratch but would be prepared to discontinue projects that the public is unhappy about. He says scientists need an occasional reality check.
 
"We are going to have to address some of the consequences of what we're doing, and have agreements about what's acceptable to society in terms of manipulating biology at this level," Freemont says.
 
Scientists funded with public money may already feel some obligation to adopt this approach. But those working in industry should consider its advantages too. A more open and engaged conversation with the public could surely benefit the companies trying to sell us novel crop technologies.
 
Society, for its part, will need to listen to the experts with an open mind. And as we work out how to feed an expanding population, we will need to ask questions that are bigger than "GM: yes or no?"
 
This article appeared in print under the headline "Genetic moderation"

Susan Watts is a journalist and broadcaster. She was science editor of Newsnight until the post was closed

Rejuvenation

From Wikipedia, the free encyclopedia https://en.wikipedia.org/w...