
Monday, December 30, 2013

Simon And Garfunkel – I Am A Rock Lyrics

A winter's day
In a deep and dark December;
I am alone,
Gazing from my window to the streets below
On a freshly fallen silent shroud of snow.
I am a rock,
I am an island.

I've built walls,
A fortress deep and mighty,
That none may penetrate.
I have no need of friendship; friendship causes pain.
It's laughter and it's loving I disdain.
I am a rock,
I am an island.

Don't talk of love,
But I've heard the words before;
It's sleeping in my memory.
I won't disturb the slumber of feelings that have died.
If I never loved I never would have cried.
I am a rock,
I am an island.

I have my books
And my poetry to protect me;
I am shielded in my armor,
Hiding in my room, safe within my womb.
I touch no one and no one touches me.
I am a rock,
I am an island.

And a rock feels no pain;
And an island never cries.

Nothing Can't Exist!


Many of you, I am certain, have heard what would seem to be (and for a while was) the most "profound" question in philosophy and science, particularly physics: Why is there something rather than nothing?  It is also a question religious people often ask, ironically, as proof of their particular deity's existence.

And yet the answer is easy.  Something exists because nothing logically cannot.

I am not speaking only about recent developments in quantum mechanics (QM) and virtual particles, but I should sum some things up.  QM is physically founded on the so-called Uncertainty Principle.  This principle declares that non-commuting variables of a particle -- the typical example being position and momentum -- can never both be measured simultaneously with perfect precision.  If you demand perfect certainty in one variable, you must sacrifice all knowledge of the other.
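
In standard notation (my addition; the essay states this only in words), the position-momentum form of the Uncertainty Principle reads

\[
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
\]

where \hbar is the reduced Planck constant.  Squeeze the uncertainty in position toward zero and the uncertainty in momentum must blow up, which is exactly the trade-off described above.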

Another pair of non-commuting properties is energy and time.  If we measure over shorter and shorter time intervals, an uncertainty builds up in energy (or, equivalently, mass) as a result of the Uncertainty Principle.  This means that if we sample a region of space over exceedingly short intervals (a trillionth of a trillionth of a second and smaller), we will find it filled with so-called "virtual particles" popping into and out of existence at all times.  Nor are they trivial -- not at all.  The total mass/energy of these particles can be enormous, and we are fortunate they exist: all known physical forces (with the possible exception of gravity) require virtual particles to carry them, and if they didn't exist -- well, we wouldn't either.
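
A back-of-envelope sketch of that energy-time trade-off (my own illustration, not from the essay), using the standard relation ΔE ≈ ħ/(2Δt):

```python
# Rough estimate of the energy uncertainty available over a very short
# time window, via the energy-time uncertainty relation dE ~ hbar / (2 dt).
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
EV   = 1.602176634e-19 # joules per electron-volt

dt = 1e-24             # "a trillionth of a trillionth of a second", in s
dE_joules = HBAR / (2 * dt)
dE_MeV = dE_joules / EV / 1e6

print(f"dE ~ {dE_MeV:.0f} MeV")   # ~330 MeV
# For comparison, an electron-positron pair costs about 1.02 MeV,
# so a window this short allows hundreds of such virtual pairs to appear briefly.
```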

Thus, from a QM (and experimental) point of view, nothing isn't nothing, and can't be, as long as the laws of physics still hold in our otherwise empty space.  But what if we press further (assuming we can), and remove all physical laws, and perhaps even logic, from the space?  Would it then be empty?  My answer is still NO.  For example, without the First Law of Thermodynamics (the ordinary law of conservation of mass/energy), what would there be to prevent not just virtual but permanent particles, of all kinds, from coming into existence?  And without the Second Law of Thermodynamics, which governs order and the inexorable evolution toward disorder (entropy), what would prevent all this mass/energy from assuming the most highly ordered form possible?  Nothing.

Add back all the other physical laws and logic and -- is it possible? -- we might find ourselves right at the very beginning of the Big Bang.  Of course, many other arrangements are possible too, so multiverses of all kinds are possible.  We may be living in an infinite reality containing an infinite number of universes, with more being created all the time.  But I leave it here for the theoreticians and philosophers to seek truth, and close my essay with my conclusion: nothing cannot exist, any time, anywhere.  Oh, and no deities needed at all.

Significant Science of 2013: An Explosion of Exoplanets



This year was a banner year for planet-hunters. Though 2013 doesn’t hold the record for number of exoplanets detected, many of them are Earth-like, meaning they have masses, compositions, and orbits that put them in the sweet spot of habitability. Astronomers have found so many that some estimate that up to 22% of sunlike stars could harbor Earth-like planets.

Leading the charge has been the Kepler space telescope, an orbiting, purpose-built, planet-seeking machine that has been spotting potential exoplanets by the hundreds.
An artist's impression of exoplanet Kepler-22b

John Timmer, writing for Ars Technica:

With 34 months of data in total, the number of planet candidates has grown to over 3,500, a rise of roughly 30 percent. Although larger planets are easier to spot since they block more light, 600 of these candidates are now Earth-sized or smaller.

Kepler operates by observing the faint dimming that occurs when a planet passes between its star and the telescope. Astronomers have focused on sunlike stars, 42,000 of which have been in Kepler’s view.
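
As a rough illustration of why Earth-sized planets are so hard to spot (my own sketch; the radii below are standard values, not figures from the article), the fractional dimming during a transit is roughly the ratio of the planet's disk area to the star's:

```python
# Approximate transit depth: fraction of starlight blocked when a planet
# crosses the face of its star, assuming simple circular disks.
R_SUN_KM   = 696_000   # solar radius
R_EARTH_KM = 6_371     # Earth radius
R_JUP_KM   = 69_911    # Jupiter radius

def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
    return (planet_radius_km / star_radius_km) ** 2

print(f"Earth-like transit:   {transit_depth(R_EARTH_KM):.6%}")  # ~0.008%
print(f"Jupiter-like transit: {transit_depth(R_JUP_KM):.4%}")    # ~1%
```

A Jupiter-sized planet dims its star by about one part in a hundred; an Earth-sized one by less than one part in ten thousand, which is why larger planets are easier to find.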

Unfortunately, Kepler suffered the debilitating loss of two of its four reaction wheels, devices which keep the craft steady. Without them, its vision isn’t nearly clear enough to keep up its planet-hunting mission, and astronomers can’t shift its gaze to different parts of the universe.

But all is not lost. Kepler may soldier on with a new mission—searching for starquakes—and the time it spent looking for exoplanets has yielded so much data that it’ll be another few years before scientists have sifted through the backlog. Who knows? Maybe 2014 will be an even better year for exoplanet enthusiasts.

 

 

Study: Fracking saves water

Chuck Ross
Reporter, Daily Caller News Foundation

Hydraulic fracturing conserves water compared to other energy-generation methods, according to a recent study that undermines claims by fracking opponents.
Bridget Scanlon and a team of researchers at the Bureau of Economic Geology at the University of Texas compared the state’s water consumption levels for 2010, a non-drought year, and 2011, a drought year, at the state’s 423 power plants.

Even after accounting for the water used in obtaining natural gas from the ground, natural gas-powered plants use much less water than coal-powered plants do to produce the same amount of energy.
 
“Although water use for gas production is controversial, these data show that water saved by using natural gas combined cycle plants relative to coal steam turbine plants is 25-50 times greater than the amount of water used in hydraulic fracturing to extract the gas,” reads the report, published in Environmental Research Letters.

“Natural gas, now ~50% of power generation in Texas, enhances drought resilience by increasing the flexibility of power plant generators,” the report continues. The researchers predict that reductions in water use from the increased use of natural gas will continue through 2030.
This is good news for the state of Texas, which is prone to drought. Even counting the amount of water used in the hydraulic fracturing process — which uses water and other chemicals to break shale below the earth’s surface to free up natural gas — the researchers estimated that if Texas’ natural gas plants had instead burned coal, the state would have used an extra 32 billion gallons of water, enough to supply about 870,000 residents.
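
A quick sanity check on that comparison (my own arithmetic, assuming the 32 billion gallons and the 870,000 residents both refer to one year of use):

```python
# Back-of-envelope check: implied per-person water use behind the
# "32 billion gallons ~ 870,000 residents" comparison.
gallons_saved = 32e9
residents = 870_000

per_person_per_year = gallons_saved / residents
per_person_per_day = per_person_per_year / 365

print(f"{per_person_per_year:,.0f} gallons per resident per year")  # ~36,800
print(f"{per_person_per_day:,.0f} gallons per resident per day")    # ~100
# Roughly 100 gallons per day is in line with typical U.S. per-capita
# residential water use, so the comparison is internally consistent.
```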

Scanlon and her team looked at what is known as the “water-energy nexus.” Drought conditions can severely limit energy generation. In turn, the increased energy usage brought on by drought requires more precious water. But the recent study suggests that switching to natural gas from other forms of energy generation, such as coal, would improve the drought situation.

“The bottom line is that hydraulic fracturing, by boosting natural gas production and moving the state from water-intensive coal technologies, makes our electric power system more drought resilient,” said Scanlon in a press release.
 
Environmentalists believe fracking is unsafe and have tried to regulate, and even ban, the drilling practice.

But Josiah Neeley, a policy analyst at the Texas Public Policy Foundation, calls the new study smart, saying that it shows that fracking is “actually a net water saver” when compared to other energy generation methods.

“As with anything else, you have to compare fracking to the available alternatives, instead of looking at it in the abstract,” Neeley told The Daily Caller News Foundation.

“The latest charge has been that fracking uses too much water,” he said. “That’s a big concern in Texas, because of the recent drought. What this study does is look not just at how much water gets used in fracking, but compares this to how much water you would need to generate the same amount of electricity from other sources.”

Neeley said that this study pokes another hole in environmentalists’ objections to fracking. “When each of them is proved baseless they simply move on to the next allegation,” he concluded.

The recent report focused solely on Texas, but the researchers felt that the findings could apply to other states. “These changes in water and electricity in Texas may also apply to the US, which has seen a 30% increase in natural gas consumption for electric power production since 2005.”


Read more: http://dailycaller.com/2013/12/26/study-fracking-saves-water/#ixzz2oyU7DNdN

Atheist Says Challenging Religion is ‘Cruel,’ Nonbelief is for the Wealthy

Chris Arnade has a PhD in physics, used to work on Wall Street, and now works with the homeless. He is an atheist, but just about none of the people in trouble that he works with are, calling them “some of the strongest believers I have met, steeped in a combination of Bible, superstition, and folklore.” In a piece he wrote for The Guardian, he seems to be saying that this is more or less how it should be. And why? Because it is in this religion and superstition that they find hope.

In doing so, he unfortunately invents a heartless atheist strawman:
They have their faith because what they believe in doesn’t judge them. Who am I to tell them that what they believe is irrational? Who am I to tell them the one thing that gives them hope and allows them to find some beauty in an awful world is inconsistent? I cannot tell them that there is nothing beyond this physical life. It would be cruel and pointless.
Is there anyone doing this? Is there any atheist activist or celebrity who is targeting the downtrodden and brazenly attempting to force the blessings of godlessness on them? Of course not.

Instead, many organized atheist groups and individuals are trying to lend aid without any theological (or atheological) strings attached.

Arnade concludes that atheism is something that is really only tenable for those who “have done well,” or at least are not struggling to such an extent as the subjects of his work are. Certainly, it is easier to step back and take a critical look at supernatural claims if one is not constantly worried about one’s safety or ability to feed one’s family. Of course those who are desperate are more vulnerable to seeking a grain of hope wherever they can find it, even in the ephemeral or fictional.

Arnade recalls his 16-year-old self who, as he tells it, snidely turned his nose up at believers in fragile and desperate situations.
I want to go back to that 16-year-old self and tell him to shut up with the “see how clever I am attitude”. I want to tell him to appreciate how easy he had it, with a path out. A path to riches. 
I also see Richard Dawkins differently. I see him as a grown up version of that 16-year-old kid, proud of being smart, unable to understand why anyone would believe or think differently from himself. I see a person so removed from humanity and so removed from the ambiguity of life that he finds himself judging those who think differently.
I suppose Arnade has caught Dawkins lurking around, being extremely nasty to people in the streets, telling them how stupid they are.

Look, I understand that many atheists can be uncomfortable with confrontation of religious claims, and I even understand that one can take issue with the tactics or rhetoric of certain groups or figures.
None of them, not Dawkins, not Hemant, not the big atheist groups (including my own), and definitely not me, get it right all the time. (I’m kidding, Hemant, you always get everything right. Please don’t fire me.) The magic force field our culture has placed around religious belief and superstition makes every discussion and debate fraught with tension and tender sensitivities.

But Arnade makes a mistake by castigating atheism-writ-large as some heartless, elitist club of buzzkills and dream-crushers. For many, if not most of us, our decision to be public and active about our atheism and our opposition to religion stems from a desire to see the world at large lifted out of a morass of bad and oppressive magical thinking. Flawed as we are, we are trying to make things better.

If religion is giving desperate people hope, rather than shake a finger at those who argue against religion, perhaps we should be working as hard as we can to give these people something other than religion to lean on. Something real that actually solves problems, rather than mystical falsehoods.

To leave things as they are, to allow religion to continue its infestation in the lives of those who deserve something better, just because it seems like the nicer thing to do in the short term, I think that’s what’s patronizing and elitist.

Image via Shutterstock.

Sunday, December 29, 2013

Welcome to the 'Era of Behavior'

by Dov Seidman      
December 29, 2013, 8:00 AM
As the world has gone from connected to interconnected to interdependent, I believe we’ve entered a new era.  What I call the era of behavior.  I acknowledge that behavior has always mattered.  What I’m saying is that behavior now matters more than ever and in ways it never has before.  And what I mean by behavior: it’s not just doing the right thing, the principled, responsible thing.
 
Of course that’s fundamentally what I mean by behavior.  But every tweet is a behavior.  Every email is a behavior.  We call these things communications, tweeting and friending and unfriending, but every one of them is a behavior.  Collaboration is a behavior.  Innovation is a behavior.  How we lead is a behavior.  How we engender trust in our relationships.  How we say we’re sorry when we should.  It’s all behavior.  And the more connected the world gets, the more every one of these behaviors matters; and the more that leaders can create cultures where these behaviors can flourish and be scaled and be embedded into the DNA, the more these are the organizations that will succeed and, more importantly, achieve significance in their endeavors and therefore lasting, enduring success.  So I believe that we have entered deeply into the era of behavior.
 
Now I want to make a distinction when it comes to behavior.  Carrots and sticks.  The proverbial carrots and sticks, you know, bonuses and compensation and threats of punishment or discipline: they can shift behavior.  We went through an election where tiny slivers of swing-state voters were bombarded with ads trying to get them to shift from one camp to the other.  If you put a product on sale, you’re shifting behavior: buy now, not later.
 
So we’ve scaled up marketplaces and mechanics of shifting behavior: left, right, forward, back, now not later.  But if you sit with corporate managers and you say, “What behaviors do you want from your colleagues, from your people?” they say, “I want creativity.  I want collaboration, loyalty, passion.”  And they go on and on and on: responsible, principled conduct.  These are not behaviors you can simply shift.  You can’t say, “You two, go in the room and don’t emerge until you have a brilliant idea.”  Or, “You two from different cultures, go in a room and don’t come out, but I’ll pay you double if you figure out how to truly collaborate and move us forward.”  The behaviors we want today are behaviors that we elevate.  These are elevated behaviors.
 
And what’s elevated people since the beginning of time is a mission worthy of their loyalty.  A purpose worthy of their dedication.  Core values that they share with others and that really animate them.  Beliefs that they hold near and dear, and a kind of leadership rooted in moral authority that inspires them.  So not only are we in the era of behavior, where competitive advantage has shifted to behavior; we are in the era of elevated behavior, and what elevates behavior are fundamental values that we share with others, and missions and purposes worthy of our commitment and dedication that we also share with others.
 
And twenty-first century leadership is about connecting with people from within.  And that’s what I mean by this notion that there are only three ways to get another human being, a friend, a colleague, a worker, to do anything.  You can coerce them: do this or else.  You can motivate them with the carrot, with the bonus, with the stock option.  And coercion (fireable offenses, for example) and paying people well continue to have their place.  But the freest, cheapest, most enduring and cleanest form of human energy is inspiration.
 
And when you inspire somebody, you get in touch with the first two letters of the word inspire: IN.  And what’s in us are beliefs and values and missions and purposes worthy of our commitment.  And leadership today is fast going from command and control, with carrots and sticks as its mechanics, to inspirational leadership.  Leadership that’s animated by moral authority and connects with people in an inspired way from within.  And I think twenty-first century leadership is about becoming an inspirational leader.
 
In Their Own Words is recorded in Big Think's studio.
Image courtesy of Shutterstock

Why America is NOT the greatest country in the world anymore.


More Good News for Solar Power

Solar cell performance improves with ion-conducting polymer

Sep 04, 2013
A dye-sensitized solar cell panel is tested in the laboratory at the School of Chemical Science and Engineering. Dye-sensitized solar photovoltaics can be greatly improved as a result of research done at KTH Royal Institute of Technology. Credit: David Callahan

Researchers at Stockholm's KTH Royal Institute of Technology have found a way to make dye-sensitized solar cells more energy-efficient and longer-lasting.

Drawing their inspiration from photosynthesis, dye-sensitized solar cells offer the promise of low-cost solar power and – when coupled with catalysts – even the possibility of generating hydrogen and oxygen, just like plants. A study published in August could lead to more efficient and longer-lasting dye-sensitized solar cells, says one of the researchers from KTH Royal Institute of Technology in Stockholm.

A research team that included James Gardner, Assistant Professor of Photoelectrochemistry at KTH, reported the success of a new quasi-liquid, polymer-based electrolyte that increases a dye-sensitized solar cell's voltage and current, and lowers resistance between its electrodes.

The study highlights the advantages of speeding up the movement of oxidized electrolytes in a dye-sensitized solar cell, or DSSC. Also on the team from KTH were Lars Kloo, Professor of Inorganic Chemistry and researcher Muthuraaman Bhagavathi Achari.

Their research was published in the Royal Society of Chemistry's journal, Physical Chemistry Chemical Physics on August 19.

"We now have clear evidence that by adding the ion-conducting polymer to the solar cell's cobalt redox electrolyte, the transport of oxidized electrolytes is greatly enhanced," Gardner says. "The fast transport increases by 20 percent."

A dye-sensitized solar cell absorbs photons and injects electrons into the anode, a transparent semiconductor. This anode is actually a plate with a highly porous, thin layer of titanium dioxide that is sensitized with dyes that absorb visible light. The electrons in the semiconductor diffuse through the anode, out into the external circuit.

In the electrolyte, a cobalt complex redox shuttle acts as a catalyst, providing the internal electrical continuity between the anode and cathode. When the dye releases electrons and becomes oxidized by the titanium dioxide, the electrolyte supplies electrons to replenish the deficiency. This "resets" the dye molecules, reducing them back to their original states. As a result, the electrolyte becomes oxidized and electron-deficient and migrates toward the cathode to recover its missing electrons. Electrons migrating through the circuit recombine with the oxidized form of the cobalt complex when they reach the cathode.

In the most efficient solar cells this transport of ions relies on acetonitrile, a low viscosity, volatile organic solvent. But in order to build a stable, commercially-viable solar cell, a low volatility solvent is used instead, usually methoxypropionitrile. The problem is that while methoxypropionitrile is more stable, it is also more viscous than acetonitrile, and it impedes the flow of ions.
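
The viscosity penalty can be made quantitative with the standard Stokes-Einstein relation (my addition, not part of the article):

\[
D \;=\; \frac{k_B T}{6 \pi \eta r}
\]

where D is the diffusion coefficient of the redox species, k_B is Boltzmann's constant, T the temperature, \eta the solvent viscosity, and r the effective radius of the diffusing ion. Since D scales as 1/\eta, replacing acetonitrile with the more viscous methoxypropionitrile directly slows the cobalt shuttle; the polymer additive described next is meant to recover that lost transport without sacrificing low volatility.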

But with the introduction of a new quasi-liquid, polymer-based electrolyte (containing the Co3+/Co2+ redox mediator in 3-methoxypropionitrile solvent), the research team has overcome the viscosity problem, Gardner says. At the same time, adding the ion-conducting polymer to the electrolyte maintains its low volatility. This makes it possible for the oxidized form of the cobalt complex to reach the cathode, and get reduced, faster.

Speeding up this transport is important because when slowed down, more of the cobalt complexes react with electrons in the semiconductor instead of with the electrons at the cathode, resulting in rapid recombination losses. Speeding up the cobalt lowers resistance and increases voltage and current in the solar cell, Gardner says.

Prebiotic Molecules May Form in Exoplanet Atmospheres

New research suggests that the building blocks of life — prebiotic molecules — may form in the atmospheres of planets, where dust provides a safe platform for them to form on and reactions with the surrounding plasma provide the energy necessary to create them.

It's time to approve the full Keystone XL pipeline project.

Mark Green, Posted Dec 27, 2013 by Energy Tomorrow

Brainlike Computers, Learning From Experience

Erin Lubin/The New York Times
 
Kwabena Boahen holding a biologically inspired processor attached to a robotic arm in a laboratory at Stanford University.
 
PALO ALTO, Calif. — Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.       
 
The first commercial version of the new kind of computer chip is scheduled to be released in 2014.
Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.       
 
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
 
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
 
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
 
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
 
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
 
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
 
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
 
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.
 
“We have no clue,” he said. “I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.”
 
Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.
 
The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
 
The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
 
They are not “programmed.” Rather, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
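
To make the "weighted connections that spike and then adjust" idea concrete, here is a minimal toy sketch (my own illustration, not the actual design of any chip described in the article): a leaky integrate-and-fire neuron whose input weights are nudged whenever its inputs contribute to an output spike.

```python
# Toy neuromorphic-style unit: a leaky integrate-and-fire neuron with a
# crude Hebbian weight update ("inputs that help cause a spike get stronger").
# Purely illustrative; real neuromorphic chips implement this in hardware.
import random

class SpikingNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9, learn_rate=0.05):
        self.weights = [random.uniform(0.1, 0.3) for _ in range(n_inputs)]
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.learn_rate = learn_rate

    def step(self, inputs):
        # Integrate weighted input spikes; the potential leaks away over time.
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(self.weights, inputs)
        )
        fired = self.potential >= self.threshold
        if fired:
            self.potential = 0.0  # reset after the spike
            # Strengthen the weights of inputs that contributed to this spike.
            self.weights = [
                w + self.learn_rate * x for w, x in zip(self.weights, inputs)
            ]
        return fired

neuron = SpikingNeuron(n_inputs=3)
pattern = [1, 1, 0]  # a recurring input pattern the neuron gradually "learns"
for t in range(20):
    if neuron.step(pattern):
        print(f"t={t}: spike, weights={['%.2f' % w for w in neuron.weights]}")
```

There is no separate "program" here: the behavior of the unit is entirely in its weights, which change as data flows through it.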
 
“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”
 
The new computers, which are still based on silicon chips, will not replace today’s computers, but will augment them, at least for now. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the giant centralized computers that make up the cloud. Modern computers already consist of a variety of coprocessors that perform specialized tasks, like producing graphics on your cellphone and converting visual, audio and other data for your laptop.
 
One great advantage of the new approach is its ability to tolerate glitches. Traditional computers are precise, but they cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks.
 
Traditional computers are also remarkably energy inefficient, especially when compared to actual brains, which the new processors are built to mimic.
 
I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain.
 
Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.
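
A rough sense of the gap the article describes (my own arithmetic; the ~86 billion neuron count for a human brain and the 2 MW stand-in for "several megawatts" are my assumptions, not figures from the article):

```python
# Back-of-envelope comparison of IBM's Compass simulation vs. a biological brain.
sim_neurons = 10e9          # neurons in the simulation
brain_neurons = 86e9        # commonly cited neuron count of a human brain
sim_power_watts = 2e6       # "several megawatts" -- take 2 MW as a stand-in
brain_power_watts = 20      # biological brain

print(f"Fraction of brain simulated: {sim_neurons / brain_neurons:.0%}")          # ~12%
print(f"Power ratio (simulation / brain): {sim_power_watts / brain_power_watts:,.0f}x")
# Even ignoring the ~1,500x slowdown, the simulation needs on the order of
# 100,000 times more power than the organ it models.
```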
 
I.B.M. and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out in 2014 with a commercial version, which is expected to be used largely for further development. Moreover, many universities are now focused on this new style of computing. This fall the National Science Foundation financed the Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.
 
The largest class on campus this fall at Stanford was a graduate-level machine-learning course covering both statistical and biological approaches, taught by the computer scientist Andrew Ng. More than 760 students enrolled. “That reflects the zeitgeist,” said Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms. “Everyone knows there is something big happening, and they’re trying to find out what it is.”

The Act 13 decision unravels protections


Our industry has worked, and continues to work, closely with the communities in which we operate toward shared goals: to be good neighbors and stewards of our environment. And the outcomes are clear. The federal Environmental Protection Agency confirmed in October that U.S. carbon emissions are at their lowest since 1994, thanks to expanded natural gas use. The independent State Review of Oil & Natural Gas Environmental Regulations again gave Pennsylvania’s oil and gas regulatory framework high marks — noting that regulations were strengthened significantly through Act 13’s enhancements.

Consumers are winning, too. A recent Boston Consulting Group analysis projects average U.S. household savings tied to shale development to reach $1,200. And in addition to more than $1.8 billion in state taxes since 2008, our industry has paid $406 million in impact fees over the last two years to communities across the commonwealth. Unfortunately, with this ruling, these shared benefits may unnecessarily be limited rather than maximized.

If we are to remain competitive and if our focus is truly an improved environment, more job creation and economic prosperity, we must continue to work together toward common-sense proposals that encourage investment in the commonwealth.
DAVE SPIGELMYER
President
Marcellus Shale Coalition
North Fayette


Read more: http://www.post-gazette.com/opinion/letters/2013/12/29/The-Act-13-decision-unravels-protections/stories/201312290024#ixzz2osdnciFd

GMOs May Feed the World Using Fewer Pesticides (and Bring Back the Bees)

De Jong produced the plants in the same old, laborious way that his father did before him. He collected pollen from a plant that produces potatoes that fry as potato chips should and then sprinkled the pollen on the flower of a potato plant that resists viruses. If the resulting potatoes bear their parents’ finest features—and none of the bad ones—De Jong will bury them in the ground next year and test their mettle against a common potato virus. If they survive—and are good for frying and eating—he and his team will repeat this for 13 years to ensure that problematic genes did not creep in during the initial cross.

Walter De Jong, a second-generation potato breeder, walks the fields in upstate New York.

Each year, the chance of failure is high. Potatoes that resist viruses, for example, often have genes that make them taste bitter. Others turn an unappetizing shade of brown when fried. If anything like that happens, De Jong will have to start from scratch. Tedious as it is, he loves the work. Kicking up dirt in the furrows that cascade along the hillsides of upstate New York, he says, “I’m never stressed in the potato fields.”

De Jong has some serious cred in the agriculture world. Not only was his father a potato breeder, he’s also descended from a long line of farmers. The potato farmers he works with appreciate this deeply, along with his commitment to the age-old craft of producing new potato varieties through selective breeding. They even advocated on his behalf during his hiring and when he was up for tenure at Cornell, a school with a long history of agriculture research. “All of our farmers like Walter,” says Melanie Wickham, the executive secretary of the Empire State Growers organization in Stanley, New York. Often, he’s in the fields in a big hat, she says. Other times “you’ll see him in the grocery store, looking over the potatoes.”

De Jong has been working with farmers long enough to know that our food supply is never more than a step ahead of devastating insect infestations and disease. Selective breeders like De Jong work hard to develop resistant crops, but farmers still have to turn to chemical pesticides, some of which are toxic to human health and the environment. De Jong enjoys dabbing pollen from plant-to-plant the old-fashioned way, but he knows that selective breeding can only do so much.

Seedlings from De Jong's selective breeding experiments poke their shoots above the soil in a greenhouse.
 
So while De Jong still devotes most of his time to honing his craft, he has recently begun to experiment in an entirely different way, with genetic engineering. To him, genetic engineering represents a far more exact way to produce new varieties, rather than simply scrambling the potato genome’s 39,000 genes the way traditional breeding does. By inserting a specific fungus-defeating gene into a tasty potato, for example, De Jong knows he could offer farmers a product that requires fewer pesticides.

“We want to make food production truly sustainable,” De Jong says, “and right now I cannot pretend that it is.”

The need to protect crops from ruin grows more vital every day. By 2050, farmers must produce 40% more food to feed an estimated 9 billion people on the planet. Either current yields will have to increase or farmland will expand farther into forests and jungles. In some cases, genetically modified organisms (GMOs) would offer an alternative way to boost yields without sacrificing more land or using more pesticides, De Jong says. But he fears this approach won’t blossom if the public rejects GMOs out of hand.

When It All Began

In the late 1990s, the agriculture corporation Monsanto began to sell corn engineered to include a protein from the bacterium Bacillus thuringiensis, better known as Bt. The bacterium wasn’t new to agriculture—organic farmers spray it on their crops to kill certain insects. Today more than 60% of the corn grown within the United States is Bt corn. Farmers have adopted it in droves because it saves them money that they would otherwise spend on insecticide and the fuel and labor needed to apply it.
They also earn more money for an acre of Bt corn compared with a conventional variety because fewer kernels are damaged. Between 1996 and 2011, Bt corn reduced insecticide use in corn production by 45% worldwide (110 million pounds, or roughly the equivalent of 20,000 Olympic swimming pools).

Between 1996 and 2001, Monsanto also produced Bt potato plants. Farmers like Duane Grant of Grant 4D Farms in Rupert, Idaho, welcomed the new variety. Grant grew up on his family’s farm, and his distaste for insecticides started at a young age. As a teenager, he recalls feeling so nauseous and fatigued after spraying the fields that he could hardly move until the next day. Today, pesticides are safer than those used 40 years ago, and stiffer U.S. federal regulations require that employees take more precautionary measures when applying them, but Grant occasionally tells his workers to head home early when they feel dizzy after spraying the fields. He was relieved when GM potatoes were introduced because he didn’t have to spray them with insecticide. He was warned that pests might overcome the modification in 15 to 20 years, but that didn’t deter him—he says the same thing happens with chemicals, too.

Unlike Bt corn, you can’t find any fields planted today with Bt potatoes. Soon after the breed hit the market, protestors began to single out McDonald’s restaurants, which collectively are the biggest buyer of potatoes worldwide. In response, McDonald’s, Wendy’s, and Frito-Lay stopped purchasing GM potatoes. In 2001, Monsanto dropped the product and Grant returned to conventional potatoes and the handful of insecticides he sprays on them throughout the summer.

“There is not a single documented case of anyone being hurt by genetically modified food, and yet this is a bigger problem for people than pesticides, which we know have caused harm,” he says. “I just shake my head in bewilderment at the folks who take these stringent positions that biotech should be banned.”
In the decade after Monsanto pulled their GM potatoes from the market, dozens of long-term animal feeding studies concluded that various GM crops were as safe as traditional varieties. And statements from science policy bodies, such as those issued by the American Association for the Advancement of Science, the U.S. National Academy of Sciences, the World Health Organization, and the European Commission, uphold that conclusion. In addition, techniques to tweak genomes have become remarkably precise. Specific genes can be switched off without lodging foreign material into a plant’s genome. Scientists don’t necessarily have to mix disparate organisms with one another, either. In cisgenic engineering, organisms are engineered by transferring genes between individuals that could breed naturally.

Even some organic farmers bristle when asked about the anti-GMO movement. Under the U.S. Organic Foods Production Act, they are not allowed to grow GMOs, despite their ability to reduce pesticide applications. Organic farmers still spray their crops, just with different chemicals, such as sulfur and copper. Amy Hepworth, an organic farmer at Hepworth Farms in Milton, New York, says that they, too, can take a toll on the environment.

Hepworth would like to continuously evaluate new avenues towards sustainable agriculture as technology advances. However, her views often clash with her customers’ in the affluent Brooklyn, New York, neighborhood of Park Slope. Many of them see no benefit in GMOs’ ability to reduce pesticides because they say farmers should rely strictly on traditional farming methods.

“What people don’t understand is that without pesticides there is not enough food for the masses,” Hepworth says. “The fact is that GM is a tool that can help us use less pesticide.”

Saturday, December 28, 2013

Why Do We Need the Space Launch System?

Why is NASA using its precious, and shrinking, funding to construct a massive, Saturn V-type rocket (the Saturn V took Apollo astronauts to the moon), the Space Launch System (SLS), when private companies such as SpaceX and Orbital Sciences Corporation already have, or soon will have, rockets that can accomplish all the same functions sooner and more cheaply?
SpaceX's Falcon 9 rocket is already delivering cargo regularly to the ISS (International Space Station) and has even launched payloads toward geosynchronous orbit (22,000 miles high).  It is already close to being man-rated in terms of safety, and will be carrying astronauts into orbit within a couple of years.  A more powerful version of the Falcon, the Falcon Heavy, is due for testing in 2014 and will also be ready for missions within the same time span.  While not as powerful as the SLS, the combination of a Falcon Heavy (to carry the needed equipment) and a Falcon 9 (to carry a crew) will have the capacity to send humans to the moon or beyond -- even to Mars, perhaps -- before the end of this decade, ahead of the SLS schedule.  SpaceX is also working on recoverable rocket stages and other advanced technologies.
 
 
Antares, known during early development as Taurus II, is an expendable launch system developed by Orbital Sciences Corporation. Designed to launch payloads of up to 5,000 kg (11,000 lb) into low-Earth orbit, it made its maiden flight on April 21, 2013. Built to carry the Cygnus spacecraft to the International Space Station as part of NASA's COTS and CRS programs, Antares is the largest rocket operated by Orbital Sciences and is scheduled to start supplying the ISS in early 2014.

NASA awarded Orbital a Commercial Orbital Transportation Services (COTS) Space Act Agreement (SAA) in 2008 to demonstrate delivery of cargo to the International Space Station. For these COTS missions Orbital intends to use Antares to launch its Cygnus spacecraft. In addition, Antares will compete for small-to-medium missions. On December 12, 2011, Orbital Sciences renamed the launch vehicle "Antares," from the previous designation of Taurus II, after the star of the same name.
 
I cannot see how SpaceX and Orbital Sciences, with their rockets alone, don't obviate the need for the SLS.  If it were scrubbed (it is an Obama Administration project), large amounts of money could be freed up for more unmanned planetary missions, such as to Europa, Uranus, and Neptune, and to asteroids to explore mining possibilities.
 
 
Linus Torvalds


On this date in 1969, Linus Torvalds was born in Helsinki, Finland. He started using computers when he was about 10 years old, and soon began designing simple computer programs. Torvalds earned his M.S. in computer science from the University of Helsinki in 1996; it was there that he was introduced to the Unix operating system. In 1991, while still a student, Torvalds began creating Linux, an innovative operating system similar to Unix. Later that year, he released Linux for free as an open source operating system, allowing anyone to edit its source code with Torvalds’ permission. Linux’s open source nature has contributed to its popularity and reliability, since it is regularly updated and improved by dedicated users. For his work with Linux, Torvalds received the 2008 Computer History Museum Fellow Award and the 2005 Vollum Award for Distinguished Accomplishment in Science and Technology. The asteroid 9793 Torvalds was named after him.

After developing Linux, Torvalds worked for Transmeta Corporation from 1997 to 2003. He appeared in the 2001 documentary “Revolution OS,” and authored an autobiography titled Just for Fun: The Story of an Accidental Revolutionary (2001). He is married to Tove Torvalds, who also attended the University of Helsinki for Computer Science. They live in the U.S. and have three daughters, Patricia, born in 1996, Daniela, born in 1998, and Celeste, born in 2000.

In a Nov. 1, 1999 interview with Linux Journal, Torvalds described himself as “completely a-religious” and “atheist.” He explained his reasons for being an atheist: “I find it kind of distasteful having religions that tell you what you can do and what you can’t do.” He also believes in the separation of church and state, telling Linux Journal, “In practice, religion has absolutely nothing to do with everyday life.”
“I find that people seem to think religion brings morals and appreciation of nature. I actually think it detracts from both . . . I think we can have morals without getting religion into it, and a lot of bad things have come from organized religion in particular. I actually fear organized religion because it usually leads to misuses of power.”
—Linus Torvalds, Linux Journal, Nov. 1, 1999.
Compiled by Sabrina Gaylor
© Freedom From Religion Foundation. All rights reserved.

Why Consciousness Can't Keep Up with Fear

by Joseph LeDoux      
December 27, 2013, 6:00 AM
 
Whenever you encounter some sudden danger out there in the world - let’s say it’s a snake on a path - the information from that stimulus goes into your brain through your visual system. It then rises through the visual system along the standard pathways.  Every sensory system has this very well organized set of circuits that ultimately leads to a stop in the part of the brain called the thalamus en route to the sensory cortex.  So each sensory system has an area in the cortex.  The cortex is that wrinkled part of the brain that you see whenever you see a picture of the brain.  And that’s where we have our perceptions and thoughts and all of that.
 
In order to have a visual perception, information has to be transmitted from the eye, from the retina, through the optic nerve, into the visual thalamus, and from the visual thalamus to the visual cortex, where the processing continues and you can ultimately have the perception.  So the visual cortex connects directly with the amygdala, and that’s one route by which the information can get in: retina, thalamus, cortex, amygdala.  But one of the first things that I discovered when I started studying fear was that that pathway, the usual pathway that we think about for sensory processing, was not the way that fear was elicited, or not the only way.
 
What we found was that if the cortical pathway was blocked completely, rats could still form a memory about a sound.  We were studying sounds and shocks.  But we’ve also done it with a visual stimulus.  So, what we found was the sound had to go up to the level of the thalamus, but then it didn’t need to go on to the auditory cortex (or the visual cortex, if it happened to be a visual stimulus).
Instead, it made an exit from the sensory system and went directly into the amygdala, below the level of the cortex.  That was really important because we generally think that the cortex is required for any kind of conscious experience.  So, this is a way that information was being sent through the brain and triggering emotions unconsciously.  So, the psychoanalysts love this because it vindicated the idea that you could have this unconscious fear that the cortex has no understanding of.  
 
This is important because a lot of people have fears and phobias and anxieties about things they don’t understand.  They don’t know why they’re afraid or anxious in a particular moment.  It may be that, through various kinds of experiences, the low road gets potentiated in a way that activates fears and phobias outside of conscious awareness, in a way that doesn’t make sense in terms of what the conscious brain is seeing or hearing in the world, because the two streams have been separately parsed out.
 
As for the subcortical pathway, we’ve been able to time all of this very precisely in the rat brain: for a sound to get to the amygdala through the subcortical pathway takes about 10 or 12 milliseconds.  So take a second and divide it into 1,000 parts; after 12 of those little parts, the amygdala is already getting the sound.  For you to be consciously aware of the stimulus, it takes 250-300 milliseconds.  So the amygdala is being triggered much, much faster than consciousness can process.
 
So, the brain ticks in milliseconds, the neurons process information on the level of milliseconds, but the mind is processing things on the order of seconds and half seconds here.  So, if you have a fear response that is being triggered very rapidly like that, consciously you’re going to be interpreting what’s going on, but it’s not going to necessarily match what’s really going on.  
 
In Their Own Words is recorded in Big Think's studio.
Image courtesy of Shutterstock

Lake Vostok: Life in one of the most Inhospitable Places on Earth

Posted on December 28, 2013 at 9:00 am
From Quarks to Quasars


Credit: Nicolle Rager-Fuller / NSF
 
On the seemingly inhospitable continent of Antarctica (home to the lowest temperature ever recorded on Earth, -89 degrees Celsius) lies a subglacial lake 160 miles long and 30 miles wide. This lake has been dubbed Lake Vostok. It is believed that the lake formed some 20 million years ago. Isolated from the rest of the world for at LEAST 100,000 years, Lake Vostok was one of the last untouched places on this globe.

The lake presents itself as an analog for the study of both extremophilic microbial life (and possibly larger organisms) and evolutionary isolation. This inhospitable environment parallels some environments that we think might exist elsewhere in the solar system – either in the subsurface of Mars or on icy moons like Enceladus or even Europa. Ultimately, the search for life on other planets could start here on Earth at Lake Vostok.
Drilling is restricted to 3590 meters (2.2 miles) below the surface ice (just a few hundred meters above the lake) as the lake itself is considered inaccessible due to fears of contamination. A number of scientists have examined ice cores taken from above the lake, focusing on the so-called “accretion ice” at the base of these cores. Accretion ice was once lake water that later froze and adhered to the overlying ice sheet—and what’s in that ice might therefore provide clues to what’s in the lake itself.

Radar satellite image of Lake Vostok (Credit: NASA)

Lake Vostok has made news again because of its most recent findings, published in PLOS ONE. Scientists led by Yury Shtarkman, a postdoc at Bowling Green State University in Ohio, identified a startlingly diverse array of microbes in the accretion ice—the most diverse yet suggested.

By cultivating and sequencing nucleic acids found in the ice, they identified more than 3,500 unique genetic sequences (mostly from bacteria, but there were some multicellular eukaryotes). These sequences were similar to those of organisms found in all sorts of habitats on the planet: lakes, marine environments, deep-sea sediments, thermal vents, and, of course, icy environments.

Overall, researchers have generally observed low concentrations of such microbes relative to most environments on Earth. But they found the POTENTIAL for a complex microbial ecosystem of bacteria and fungi, along with genetic sequences from crustaceans, mollusks, sea anemones, and fish. Note that Lake Vostok was in contact with the atmosphere millions of years ago, so a complex network of organisms likely populated the lake during that time. The team also found bacterial sequences that are common symbionts (organisms that live in symbiosis with each other) of larger species. There is also the possibility of distinct ecological zones. As they are only studying accretion ice, whether fish actually live in the lake remains unclear.

A scientist traversing the ice miles above Lake Vostok (Credit: M. Studinger, 2001)

Either way, this lake is far from being devoid of life. This does not conclusively demonstrate that life exists on other planets and moons in our solar system. But if abiogenesis is correct (still unproven, though evidence is starting to suggest it is), it makes the case that much stronger for those seeking life beyond our planet.

Fractals and Scale Invariance
Fractals are plots of non-linear equations (equations in which the result is fed back in as the next input), which can build up into astonishingly complex and beautiful designs.  Typical of fractals is their scale invariance, which means that no matter how you view them, zoomed in or zoomed out, you will find self-similar, repeating geometric patterns.  This distinguishes them from most natural patterns, in which zooming in or out completely changes what you see (e.g., from galaxies to stars down to atoms and sub-atomic nuclei and so forth), though some natural forms are fractal-like, such as mountains, the branches of a tree, or the stars in the sky.  Nevertheless, what is most fascinating (to me) about fractals is that they allow us to simulate reality in so many ways.

The Mandelbrot set shown above is the most famous example of fractal design known. 
The Mandelbrot set is a mathematical set of points whose boundary is a distinctive and easily recognizable two-dimensional fractal shape. The set is closely related to Julia sets (which include similarly complex shapes), and is named after the mathematician Benoit Mandelbrot, who studied and popularized it.

Mandelbrot set images are made by sampling complex numbers and determining for each whether the result tends towards infinity when a particular mathematical operation is iterated on it. Treating the real and imaginary parts of each number as image coordinates, pixels are colored according to how rapidly the sequence diverges, if at all.

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial

    z_{n+1} = z_n^2 + c

remains bounded.[1] That is, a complex number c is part of the Mandelbrot set if, when starting with z_0 = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets.
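
A minimal sketch of the escape-time procedure described above (my own illustration, not taken from the quoted text; it uses the standard fact that once |z| exceeds 2 the orbit is guaranteed to diverge):

```python
# Escape-time test for Mandelbrot set membership: iterate z -> z^2 + c
# and count how many steps it takes |z| to exceed 2 (if it ever does).
def escape_time(c, max_iter=100):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:    # orbit has escaped; c is not in the set
            return n
    return None           # still bounded after max_iter steps: treat as "in the set"

# Crude ASCII rendering of the set: '#' marks points that never escaped.
for im in [y / 10.0 for y in range(12, -13, -1)]:
    row = ""
    for re in [x / 30.0 for x in range(-60, 31)]:
        row += "#" if escape_time(complex(re, im)) is None else " "
    print(row)
```

Coloring each pixel by the returned escape count, rather than just in/out, is what produces the familiar psychedelic renderings.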

In general, a fractal is a mathematical set that typically displays self-similar patterns, which means they are "the same from near as from far".[1] Often, they have an "irregular" and "fractured" appearance, but not always. Fractals may be exactly the same at every scale, or they may be nearly the same at different scales.[2][3][4][5] The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.[2][3][6]

One feature of fractals that distinguishes them from "regular" shapes is how their spatial content scales, which is the concept of fractal dimension. If the edge lengths of a square are all doubled, the area is scaled by four, because squares are two dimensional; similarly, if the radius of a sphere is doubled, its volume scales by eight, because spheres are three dimensional. In the case of fractals, if all one-dimensional lengths are doubled, the spatial content of the fractal scales by a number which is not an integer. A fractal has a fractal dimension that usually exceeds its topological dimension[7] and may fall between the integers.[2]
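
A worked example of that non-integer scaling (my addition; the Koch curve is a standard textbook case, not one discussed in the text above): a self-similar fractal built from N copies of itself, each scaled down by a factor s, has similarity dimension

\[
D \;=\; \frac{\log N}{\log s}
\]

For the Koch curve, each segment is replaced by N = 4 copies at 1/3 the size (s = 3), giving D = log 4 / log 3 ≈ 1.26: more than a line (dimension 1) but less than a plane (dimension 2), exactly the "in between the integers" behavior described above.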

As mathematical equations, fractals are usually nowhere differentiable.[2][5][8] An infinite fractal curve can be conceived of as winding through space differently from an ordinary line: it is still a 1-dimensional line, yet it has a fractal dimension indicating that it also resembles a surface.[7][2]

The mathematical roots of the idea of fractals have been traced through a formal path of published works, starting in the 17th century with notions of recursion, then moving through increasingly rigorous mathematical treatment of the concept to the study of continuous but not differentiable functions in the 19th century, and on to the coining of the word fractal in the 20th century with a subsequent burgeoning of interest in fractals and computer-based modelling in the 21st century.[9][10] The term "fractal" was first used by mathematician Benoît Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus meaning "broken" or "fractured", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature.[2]:405[6]

There is some disagreement amongst authorities about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals."[11] The general consensus is that theoretical fractals are infinitely self-similar, iterated, and detailed mathematical constructs having fractal dimensions, of which many examples have been formulated and studied in great depth.[2][3][4] Fractals are not limited to geometric patterns, but can also describe processes in time.[1][5][12] Fractal patterns with various degrees of self-similarity have been rendered or studied in images, structures and sounds[13] and found in nature, technology, art, and law.

Much of this was taken from the Wikipedia articles "Mandelbrot set" and "Fractal".

Proto-metabolism

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wi...