
Tuesday, January 7, 2014

To The Horror Of Global Warming Alarmists, Global Cooling Is Here by Peter Ferrara

[Image: Ice age Earth at glacial maximum]

From David Strumfels:

Some notes before reading this article.  Looking into the future is always hazardous, whatever your prediction.  Furthermore, facts are usually messy, and can sometimes be used in opposition to a theory as well as in support of it.  A good example is the very basis of anthropogenic global warming: the cyclic but overall warming from 1850-1900 to ~2005 correlates well with the anthropogenic CO2 rise from 280 ppm to almost 400 ppm, which shows no sign of abating (yet -- I believe it will).  But is it cause and effect, or mere correlation?  Or some of both?  That CO2 is a greenhouse gas weighs, I believe, toward cause and effect.  But careful study of the data seems to me to show that much of the warming until 1970-1980 was largely natural, a combination of effects Earth's climate has always lived with.  Only after the mid-to-late '70s does another effect exert more serious influence, and the only one I can think of is CO2.

My study of global temperature graphs also made one detail clear (although I'm not the first to notice it): as I've just alluded, the warming is not a straight line, even allowing for the random variation that makes it messy.  Look at the chart below:

[Chart: global temperature record with straight trend lines overlaid, marking alternating multidecadal warming and cooling phases]

The lines make the cyclic nature of the warming stand out, although by themselves they do not prove it.  Have we studied earlier historical records to see whether these cycles hold up there?  Honestly, I don't know -- but it's an important analysis if we can do it.  Historical records, though, are based largely on temperature proxies (ice cores, tree rings, etc.), which are notoriously prone to scatter, so it may be impossible.

These cycles (which are, of course, not the straight lines I illustrate them with) have met with dismissal by most warming alarmists as statistical massaging, and they may prove to be right about that.  May be proved -- or may not.  Since detailed, strongly agreeing temperature proxies don't exist for history or "deep time," such short cycles would never be found whether they are there or not.  My point is that when it comes to climate predictions, I don't see how we could even have sufficient knowledge or understanding to make solid predictions, like a 95% probability that the temperature will be 4-8 degrees higher than today ("if nothing changes" is sometimes added, as if nothing will change).
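
As an illustration of that detectability problem, here is a minimal sketch in Python using purely synthetic data -- an assumed warming trend, an assumed ~60-year cycle, and proxy-like scatter; none of these numbers come from real records -- of how one might ask whether such a cycle stands out of the noise:

    # Synthetic test: can a multidecadal cycle be recovered from a noisy,
    # trending temperature series? All numbers are illustrative assumptions.
    import numpy as np

    years = np.arange(1850, 2014)
    rng = np.random.default_rng(0)

    trend = 0.005 * (years - years[0])               # assumed warming, C/yr
    cycle = 0.15 * np.sin(2 * np.pi * years / 60.0)  # assumed ~60-yr cycle
    noise = rng.normal(0.0, 0.15, years.size)        # proxy-like scatter
    temps = trend + cycle + noise

    # Remove the linear trend, then find the dominant period of the residual.
    resid = temps - np.polyval(np.polyfit(years, temps, 1), years)
    power = np.abs(np.fft.rfft(resid)) ** 2
    freqs = np.fft.rfftfreq(years.size, d=1.0)       # cycles per year
    peak = freqs[1:][np.argmax(power[1:])]           # skip zero frequency
    print(f"dominant period ~ {1.0 / peak:.0f} years")

With scatter comparable to the cycle's amplitude, the peak is usually recoverable over a 160-year record; shorten the record or inflate the noise, as real proxies force us to, and it is not -- which is exactly the difficulty described above.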

I might have been dismissive of this hypothesis too, if temperatures had kept climbing as they had done since the '70s.  But they didn't.  Starting somewhere in the 1997-2005 range (exact numbers vary, but I lean toward the latter), the warming clearly stopped, leaving us in at least a holding pattern, or even with some mild cooling (a disputable claim).  It may be just a little blip, of course.  But it fits the cyclic warming hypothesis strongly; it is almost exactly what we would have predicted if that hypothesis is true.  If it is true, expect up to thirty years of cooling before warming (if nothing changes, remember) resumes.  Since much will change, and has already been changing for ~20+ years -- developments in renewable energy, the replacement of high-carbon fossil fuels with natural gas, ongoing efficiency improvements in cars and appliances, CO2 reclamation, etc. -- the CO2 trend must flatten out and perhaps even decline, perhaps starting as soon as twenty years from now, although as always with predictions I will not bet my fortune on it.  But by 2050 and later, I would make that bet (though I'll be 94 years old by then and won't have enough money if I lose).

I'll say no more and let Ferrara speak for himself.

Around 1250 A.D., historical records show, ice packs began showing up farther south in the North Atlantic. Glaciers also began expanding on Greenland, soon to threaten Norse settlements on the island. From 1275 to 1300 A.D., glaciers began expanding more broadly, according to radiocarbon dating of plants killed by the glacier growth. The period known today as the Little Ice Age was just starting to poke through.

Summers began cooling in Northern Europe after 1300 A.D., negatively impacting growing seasons, as reflected in the Great Famine of 1315 to 1317. Expanding glaciers and ice cover spreading across Greenland began driving the Norse settlers out. The last, surviving, written records of the Norse Greenland settlements, which had persisted for centuries, concern a marriage in 1408 A.D. in the church of Hvalsey, today the best preserved Norse ruin.

Colder winters began regularly freezing rivers and canals in Great Britain, the Netherlands and Northern France, with both the Thames in London and the Seine in Paris frozen solid annually. The first River Thames Frost Fair was held in 1607. In 1607-1608, early European settlers in North America reported ice persisting on Lake Superior until June. In January, 1658, a Swedish army marched across the ice to invade Copenhagen. By the end of the 17th century, famines had spread from northern France, across Norway and Sweden, to Finland and Estonia.

Reflecting its global scope, evidence of the Little Ice Age appears in the Southern Hemisphere as well. Sediment cores from Lake Malawi in southern Africa show colder weather from 1570 to 1820. A 3,000-year temperature reconstruction based on varying rates of stalagmite growth in a cave in South Africa also indicates a colder period from 1500 to 1800. A 1997 study comparing West Antarctic ice cores with the results of the Greenland Ice Sheet Project Two (GISP2) indicates a global Little Ice Age affecting the two ice sheets in tandem.

The Siple Dome, an ice dome roughly 100 km long and 100 km wide, about 100 km east of the Siple Coast of Antarctica, also reflects effects of the Little Ice Age synchronously with the GISP2 record, as do sediment cores from the Bransfield Basin of the Antarctic Peninsula. Oxygen-isotope analysis from the Pacific Islands indicates a 1.5 degree Celsius temperature decline between 1270 and 1475 A.D.

The Franz Josef glacier on the west side of the Southern Alps of New Zealand advanced sharply during the period of the Little Ice Age, actually invading a rain forest at its maximum extent in the early 1700s. The Mueller glacier on the east side of New Zealand’s Southern Alps expanded to its maximum extent at roughly the same time.

Ice cores from the Andes mountains in South America show a colder period from 1600 to 1800. Tree ring data from Patagonia in South America show cold periods from 1270 to 1380 and from 1520 to 1670. Spanish explorers noted the expansion of the San Rafael Glacier in Chile from 1675 to 1766, which continued into the 19th century.

The height of the Little Ice Age is generally dated as 1650 to 1850 A.D. The American Revolutionary Army under General George Washington shivered at Valley Forge in the winter of 1777-78, and New York harbor was frozen in the winter of 1780. Historic snowstorms struck Lisbon, Portugal in 1665, 1744 and 1886. Glaciers in Glacier National Park in Montana advanced until the late 18th or early 19th centuries. The last River Thames Frost Fair was held in 1814. The Little Ice Age phased out during the middle to late 19th century.

The Little Ice Age, following the historically warm temperatures of the Medieval Warm Period, which lasted from about 950 to 1250 A.D., has been attributed to natural cycles in solar activity, particularly sunspots. A period of sharply lower sunspot activity known as the Wolf Minimum began in 1280 and persisted for 70 years until 1350. That was followed by a period of even lower sunspot activity that lasted 90 years from 1460 to 1550 known as the Spörer Minimum. During the period 1645 to 1715, the low point of the Little Ice Age, the number of sunspots declined to zero for the entire time. This is known as the Maunder Minimum, named after English astronomer Walter Maunder. That was followed by the Dalton Minimum from 1790 to 1830, another period of well below normal sunspot activity.

The increase in global temperatures since the late 19th century just reflects the end of the Little Ice Age. The global temperature trends since then have followed not rising CO2 trends but the ocean temperature cycles of the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Every 20 to 30 years, the much colder water near the bottom of the oceans cycles up to the top, where it has a slight cooling effect on global temperatures until the sun warms that water. That warmed water then contributes to slightly warmer global temperatures, until the next churning cycle.

Those ocean temperature cycles, and the continued recovery from the Little Ice Age, are primarily why global temperatures rose from 1915 until 1945, when CO2 emissions were much lower than in recent years. The change to a cold ocean temperature cycle, primarily the PDO, is the main reason that global temperatures declined from 1945 until the late 1970s, despite the soaring CO2 emissions during that time from the postwar industrialization spreading across the globe.

The 20 to 30 year ocean temperature cycles turned back to warm from the late 1970s until the late 1990s, which is the primary reason that global temperatures warmed during this period. But that warming ended 15 years ago, and global temperatures have not increased since then, and may even have cooled, even though global CO2 emissions have soared over this period. As The Economist magazine reported in March, “The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO2 put there by humanity since 1750.” Yet, still no warming during that time. That is because the CO2 greenhouse effect is weak and marginal compared to natural causes of global temperature changes.

At first, the current stall in global warming was due to the ocean cycles turning back to cold. But something much more ominous has developed over this period. Sunspots run in 11-year short-term cycles, with longer cyclical trends of 90 and even 200 years. The number of sunspots declined substantially in the last 11-year cycle, after flattening out over the previous 20 years. But in the current cycle, sunspot activity has collapsed. NASA’s Science News report for January 8, 2013 states,
“Indeed, the sun could be on the threshold of a mini-Maunder event right now. Ongoing Solar Cycle 24 [the current short term 11 year cycle] is the weakest in more than 50 years. Moreover, there is (controversial) evidence of a long-term weakening trend in the magnetic field strength of sunspots. Matt Penn and William Livingston of the National Solar Observatory predict that by the time Solar Cycle 25 arrives, magnetic fields on the sun will be so weak that few if any sunspots will be formed. Independent lines of research involving helioseismology and surface polar fields tend to support their conclusion.”

That is even more significant because NASA’s climate science has been controlled for years by global warming hysteric James Hansen, who recently announced his retirement.

But this same concern is increasingly being echoed worldwide. The Voice of Russia reported on April 22, 2013,

“Global warming, which has been the subject of so many discussions in recent years, may give way to global cooling. According to scientists from the Pulkovo Observatory in St. Petersburg, solar activity is waning, so the average yearly temperature will begin to decline as well. Scientists from Britain and the US chime in saying that forecasts for global cooling are far from groundless.”

That report quoted Yuri Nagovitsyn of the Pulkovo Observatory saying, “Evidently, solar activity is on the decrease. The 11-year cycle doesn’t bring about considerable climate change – only 1-2%. The impact of the 200-year cycle is greater – up to 50%. In this respect, we could be in for a cooling period that lasts 200-250 years.” In other words, another Little Ice Age.

Faith in Global Warming is collapsing in formerly staunch Europe following increasingly severe winters which now continue into spring. Christopher Booker explained in The Sunday Telegraph on April 27, 2013,

“Here in Britain, where we had our fifth freezing winter in a row, the Central England Temperature record – according to an expert analysis on the US science blog Watts Up With That – shows that in this century, average winter temperatures have dropped by 1.45C, more than twice as much as their rise between 1850 and 1999, and twice as much as the entire net rise in global temperatures recorded in the 20th century.”
A news report from India (The Hindu, April 22, 2013) stated, “March in Russia saw the harshest frosts in 50 years, with temperatures dropping to –25° Celsius in central parts of the country and –45° in the north. It was the coldest spring month in Moscow in half a century…. Weathermen say spring is a full month behind schedule in Russia.” The news report summarized,

“Russia is famous for its biting frosts but this year, abnormally icy weather also hit much of Europe, the United States, China and India. Record snowfalls brought Kiev, capital of Ukraine, to a standstill for several days in late March, closed roads across many parts of Britain, buried thousands of sheep beneath six-metre deep snowdrifts in Northern Ireland, and left more than 1,000,000 homes without electricity in Poland. British authorities said March was the second coldest in its records dating back to 1910. China experienced the severest winter weather in 30 years and New Delhi in January recorded the lowest temperature in 44 years.”

Booker adds, “Last week it was reported that 3,318 places in the USA had recorded their lowest temperatures for this time of year since records began. Similar record cold was experienced by places in every province of Canada. So cold has the Russian winter been that Moscow had its deepest snowfall in 134 years of observations.”

Britain’s Met Office, an international cheerleading headquarters for global warming hysteria, did concede last December that there would be no further warming at least through 2017, which would make 20 years with no global warming. That reflects grudging recognition of the newly developing trends. But it also reflects the growing divergence between real-world temperatures and the projections of the climate models at the foundation of the global warming alarmism of the UN’s Intergovernmental Panel on Climate Change (IPCC). Since those models have never been validated, they are not science at this point, but just made-up fantasies. That is why, “In the 12 years to 2011, 11 out of 12 [global temperature] forecasts [of the Met Office] were too high — and… none were colder than [resulted],” as BBC climate correspondent Paul Hudson wrote in January.

Global warming was never going to be the problem that the Lysenkoists who have brought down western science made it out to be. Human emissions of CO2 are only 4 to 5% of total global emissions, once natural sources are counted. Much was made of the total atmospheric concentration of CO2 exceeding 400 parts per million. But if you asked the daffy NBC correspondent who hysterically reported on that what portion of the atmosphere 400 parts per million is, she transparently wouldn’t be able to tell you. One percent of the atmosphere would be 10,000 parts per million. The atmospheric concentrations of CO2 deep in the geologic past were much, much greater than today, yet life survived, and we have no record of any of the catastrophes the hysterics have claimed. Maybe that is because the temperature impact of increased concentrations of CO2 declines logarithmically. That means there is a natural limit to how much increased CO2 can effectively warm the planet, and it would be reached well before any of the supposed climate catastrophes the warming hysterics have tried to use to shut down capitalist prosperity.
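
For readers who want the logarithmic point in numbers, here is a minimal back-of-the-envelope sketch. The 5.35 ln(C/C0) W/m^2 expression is the standard simplified forcing formula from the radiative-transfer literature; the no-feedback sensitivity used here is an illustrative round number, not a measurement.

    # Two back-of-the-envelope checks: what fraction of the atmosphere
    # 400 ppm is, and how a logarithmic response behaves per CO2 doubling.
    import math

    print(f"400 ppm = {400 / 1e6:.3%} of the atmosphere")   # 0.040%

    def warming(c_new, c_old=280.0, sensitivity=0.3):
        """Direct (no-feedback) warming in deg C for a CO2 change.

        Forcing dF = 5.35 * ln(C/C0) W/m^2 (standard simplified formula);
        sensitivity, in deg C per W/m^2, is an assumed round number.
        """
        return sensitivity * 5.35 * math.log(c_new / c_old)

    # Each doubling adds the same increment -- the logarithm at work.
    for c in (280, 560, 1120):
        print(f"{c:5d} ppm -> {warming(c):+.2f} C relative to 280 ppm")

Each doubling of CO2 adds the same temperature increment, which is why going from 280 to 560 ppm matters as much as going from 560 to 1120 ppm.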

Monday, January 6, 2014

Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians, by Steven Pinker


The great thinkers of the Age of Reason and the Enlightenment were scientists. Not only did many of them contribute to mathematics, physics, and physiology, but all of them were avid theorists in the sciences of human nature. They were cognitive neuroscientists, who tried to explain thought and emotion in terms of physical mechanisms of the nervous system. They were evolutionary psychologists, who speculated on life in a state of nature and on animal instincts that are “infused into our bosoms.” And they were social psychologists, who wrote of the moral sentiments that draw us together, the selfish passions that inflame us, and the foibles of shortsightedness that frustrate our best-laid plans.
 
These thinkers—Descartes, Spinoza, Hobbes, Locke, Hume, Rousseau, Leibniz, Kant, Smith—are all the more remarkable for having crafted their ideas in the absence of formal theory and empirical data.
The mathematical theories of information, computation, and games had yet to be invented. The words “neuron,” “hormone,” and “gene” meant nothing to them. When reading these thinkers, I often long to travel back in time and offer them some bit of twenty-first-century freshman science that would fill a gap in their arguments or guide them around a stumbling block. What would these Fausts have given for such knowledge? What could they have done with it?
 
We don’t have to fantasize about this scenario, because we are living it. We have the works of the great thinkers and their heirs, and we have scientific knowledge they could not have dreamed of. This is an extraordinary time for the understanding of the human condition. Intellectual problems from antiquity are being illuminated by insights from the sciences of mind, brain, genes, and evolution.
Powerful tools have been developed to explore them, from genetically engineered neurons that can be controlled with pinpoints of light to the mining of “big data” as a means of understanding how ideas propagate.
 
One would think that writers in the humanities would be delighted and energized by the efflorescence of new ideas from the sciences. But one would be wrong. Though everyone endorses science when it can cure disease, monitor the environment, or bash political opponents, the intrusion of science into the territories of the humanities has been deeply resented. Just as reviled is the application of scientific reasoning to religion; many writers without a trace of a belief in God maintain that there is something unseemly about scientists weighing in on the biggest questions. In the major journals of opinion, scientific carpetbaggers are regularly accused of determinism, reductionism, essentialism, positivism, and worst of all, something called “scientism.” The past couple years have seen four denunciations of scientism in this magazine alone, together with attacks in Bookforum, The Claremont Review of Books, The Huffington Post, The Nation, National Review Online, The New Atlantis, The New York Times, and Standpoint.
 
The eclectic politics of these publications reflects the bipartisan nature of the resentment. This passage, from a 2011 review in The Nation of three books by Sam Harris by the historian Jackson Lears, makes the standard case for the prosecution by the left:
Positivist assumptions provided the epistemological foundations for Social Darwinism and pop-evolutionary notions of progress, as well as for scientific racism and imperialism. These tendencies coalesced in eugenics, the doctrine that human well-being could be improved and eventually perfected through the selective breeding of the "fit" and the sterilization or elimination of the "unfit." ... Every schoolkid knows about what happened next: the catastrophic twentieth century. Two world wars, the systematic slaughter of innocents on an unprecedented scale, the proliferation of unimaginable destructive weapons, brushfire wars on the periphery of empire—all these events involved, in various degrees, the application of scientific research to advanced technology.
The case from the right, captured in this 2007 speech from Leon Kass, George W. Bush’s bioethics adviser, is just as measured:
Scientific ideas and discoveries about living nature and man, perfectly welcome and harmless in themselves, are being enlisted to do battle against our traditional religious and moral teachings, and even our self-understanding as creatures with freedom and dignity. A quasi-religious faith has sprung up among us—let me call it "soul-less scientism"—which believes that our new biology, eliminating all mystery, can give a complete account of human life, giving purely scientific explanations of human thought, love, creativity, moral judgment, and even why we believe in God. ... Make no mistake. The stakes in this contest are high: at issue are the moral and spiritual health of our nation, the continued vitality of science, and our own self-understanding as human beings and as children of the West. 
These are zealous prosecutors indeed. But their cases are weak. The mindset of science cannot be blamed for genocide and war and does not threaten the moral and spiritual health of our nation. It is, rather, indispensable in all areas of human concern, including politics, the arts, and the search for meaning, purpose, and morality.
 
The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.
 
Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable. Scientism does not mean that all current scientific hypotheses are true; most new ones are not, since the cycle of conjecture and refutation is the lifeblood of science. It is not an imperialistic drive to occupy the humanities; the promise of science is to enrich and diversify the intellectual tools of humanistic scholarship, not to obliterate them. And it is not the dogma that physical stuff is the only thing that exists. Scientists themselves are immersed in the ethereal medium of information, including the truths of mathematics, the logic of their theories, and the values that guide their enterprise. In this conception, science is of a piece with philosophy, reason, and Enlightenment humanism. It is distinguished by an explicit commitment to two ideals, and it is these that scientism seeks to export to the rest of intellectual life.

[Image: The Linder Gallery, c. 1622-1629, Cordover Collection, LLC]
 
The first is that the world is intelligible. The phenomena we experience may be explained by principles that are more general than the phenomena themselves. These principles may in turn be explained by more fundamental principles, and so on. In making sense of our world, there should be few occasions in which we are forced to concede “It just is” or “It’s magic” or “Because I said so.”
The commitment to intelligibility is not a matter of brute faith, but gradually validates itself as more and more of the world becomes explicable in scientific terms. The processes of life, for example, used to be attributed to a mysterious élan vital; now we know they are powered by chemical and physical reactions among complex molecules.
 
Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.
 
The second ideal is that the acquisition of knowledge is hard. The world does not go out of its way to reveal its workings, and even if it did, our minds are prone to illusions, fallacies, and superstitions. Most of the traditional causes of belief—faith, revelation, dogma, authority, charisma, conventional wisdom, the invigorating glow of subjective certainty—are generators of error and should be dismissed as sources of knowledge. To understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity. Any movement that calls itself “scientific” but fails to nurture opportunities for the falsification of its own beliefs (most obviously when it murders or imprisons the people who disagree with it) is not a scientific movement.
 
In which ways, then, does science illuminate human affairs? Let me start with the most ambitious: the deepest questions about who we are, where we came from, and how we define the meaning and purpose of our lives. This is the traditional territory of religion, and its defenders tend to be the most excitable critics of scientism. They are apt to endorse the partition plan proposed by Stephen Jay Gould in his worst book, Rocks of Ages, according to which the proper concerns of science and religion belong to “non-overlapping magisteria.” Science gets the empirical universe; religion gets the questions of moral meaning and value.
 
Unfortunately, this entente unravels as soon as you begin to examine it. The moral worldview of any scientifically literate person—one who is not blinkered by fundamentalism—requires a radical break from religious conceptions of meaning and value.
 
To begin with, the findings of science entail that the belief systems of all the world’s traditional religions and cultures—their theories of the origins of life, humans, and societies—are factually mistaken. We know, but our ancestors did not, that humans belong to a single species of African primate that developed agriculture, government, and writing late in its history. We know that our species is a tiny twig of a genealogical tree that embraces all living things and that emerged from prebiotic chemicals almost four billion years ago. We know that we live on a planet that revolves around one of a hundred billion stars in our galaxy, which is one of a hundred billion galaxies in a 13.8-billion-year-old universe, possibly one of a vast number of universes. We know that our intuitions about space, time, matter, and causation are incommensurable with the nature of reality on scales that are very large and very small. We know that the laws governing the physical world (including accidents, disease, and other misfortunes) have no goals that pertain to human well-being.
There is no such thing as fate, providence, karma, spells, curses, augury, divine retribution, or answered prayers—though the discrepancy between the laws of probability and the workings of cognition may explain why people believe there are. And we know that we did not always know these things, that the beloved convictions of every time and culture may be decisively falsified, doubtless including some we hold today.
 
In other words, the worldview that guides the moral and spiritual values of an educated person today is the worldview given to us by science. Though the scientific facts do not by themselves dictate values, they certainly hem in the possibilities. By stripping ecclesiastical authority of its credibility on factual matters, they cast doubt on its claims to certitude in matters of morality. The scientific refutation of the theory of vengeful gods and occult forces undermines practices such as human sacrifice, witch hunts, faith healing, trial by ordeal, and the persecution of heretics. The facts of science, by exposing the absence of purpose in the laws governing the universe, force us to take responsibility for the welfare of ourselves, our species, and our planet. For the same reason, they undercut any moral or political system based on mystical forces, quests, destinies, dialectics, struggles, or messianic ages. And in combination with a few unexceptionable convictions— that all of us value our own welfare and that we are social beings who impinge on each other and can negotiate codes of conduct—the scientific facts militate toward a defensible morality, namely adhering to principles that maximize the flourishing of humans and other sentient beings. This humanism, which is inextricable from a scientific understanding of the world, is becoming the de facto morality of modern democracies, international organizations, and liberalizing religions, and its unfulfilled promises define the moral imperatives we face today.
 
Moreover, science has contributed—directly and enormously—to the fulfillment of these values. If one were to list the proudest accomplishments of our species (setting aside the removal of obstacles we set in our own path, such as the abolition of slavery and the defeat of fascism), many would be gifts bestowed by science.
 
The most obvious is the exhilarating achievement of scientific knowledge itself. We can say much about the history of the universe, the forces that make it tick, the stuff we’re made of, the origin of living things, and the machinery of life, including our own mental life. Better still, this understanding consists not in a mere listing of facts, but in deep and elegant principles, like the insight that life depends on a molecule that carries information, directs metabolism, and replicates itself.
 
Science has also provided the world with images of sublime beauty: stroboscopically frozen motion, exotic organisms, distant galaxies and outer planets, fluorescing neural circuitry, and a luminous planet Earth rising above the moon’s horizon into the blackness of space. Like great works of art, these are not just pretty pictures but prods to contemplation, which deepen our understanding of what it means to be human and of our place in nature.
 
And contrary to the widespread canard that technology has created a dystopia of deprivation and violence, every global measure of human flourishing is on the rise. The numbers show that after millennia of near-universal poverty, a steadily growing proportion of humanity is surviving the first year of life, going to school, voting in democracies, living in peace, communicating on cell phones, enjoying small luxuries, and surviving to old age. The Green Revolution in agronomy alone saved a billion people from starvation. And if you want examples of true moral greatness, go to Wikipedia and look up the entries for “smallpox” and “rinderpest” (cattle plague). The definitions are in the past tense, indicating that human ingenuity has eradicated two of the cruelest causes of suffering in the history of our kind. 
 
Though science is beneficially embedded in our material, moral, and intellectual lives, many of our cultural institutions, including the liberal arts programs of many universities, cultivate a philistine indifference to science that shades into contempt. Students can graduate from elite colleges with a trifling exposure to science. They are commonly misinformed that scientists no longer care about truth but merely chase the fashions of shifting paradigms. A demonization campaign anachronistically impugns science for crimes that are as old as civilization, including racism, slavery, conquest, and genocide.
 
Just as common, and as historically illiterate, is the blaming of science for political movements with a pseudoscientific patina, particularly Social Darwinism and eugenics. Social Darwinism was the misnamed laissez-faire philosophy of Herbert Spencer. It was inspired not by Darwin’s theory of natural selection, but by Spencer’s Victorian-era conception of a mysterious natural force for progress, which was best left unimpeded. Today the term is often used to smear any application of evolution to the understanding of human beings. Eugenics was the campaign, popular among leftists and progressives in the early decades of the twentieth century, for the ultimate form of social progress, improving the genetic stock of humanity. Today the term is commonly used to assail behavioral genetics, the study of the genetic contributions to individual differences.
 
I can testify that this recrimination is not a relic of the 1990s science wars. When Harvard reformed its general education requirement in 2006-2007, the preliminary task force report introduced the teaching of science without any mention of its place in human knowledge: “Science and technology directly affect our students in many ways, both positive and negative: they have led to life-saving medicines, the internet, more efficient energy storage, and digital entertainment; they also have shepherded nuclear weapons, biological warfare agents, electronic eavesdropping, and damage to the environment.” This strange equivocation between the utilitarian and the nefarious was not applied to other disciplines. (Just imagine motivating the study of classical music by noting that it both generates economic activity and inspired the Nazis.) And there was no acknowledgment that we might have good reasons to prefer science and know-how over ignorance and superstition.
 
At a 2011 conference, another colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.
 
A major goad for the recent denunciations of scientism has been the application of neuroscience, evolution, and genetics to human affairs. Certainly many of these applications are glib or wrong, and they are fair game for criticism: scanning the brains of voters as they look at politicians’ faces, attributing war to a gene for aggression, explaining religion as an evolutionary adaptation to bond the group. Yet it’s not unheard of for intellectuals who are innocent of science to advance ideas that are glib or wrong, and no one is calling for humanities scholars to go back to their carrels and stay out of discussions of things that matter. It is a mistake to use a few wrongheaded examples as an excuse to quarantine the sciences of human nature from our attempt to understand the human condition.
 
Take our understanding of politics. “What is government itself,” asked James Madison, “but the greatest of all reflections on human nature?” The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation. These impulses ordinarily operate beneath our conscious awareness, but in some circumstances they can be turned around by reason and debate. We are starting to grasp why these moralistic impulses evolved; how they are implemented in the brain; how they differ among individuals, cultures, and subcultures; and which conditions turn them on and off.
 
The application of science to politics not only enriches our stock of ideas, but also offers the means to ascertain which of them are likely to be correct. Political debates have traditionally been deliberated through case studies, rhetoric, and what software engineers call HiPPO (highest-paid person’s opinion). Not surprisingly, the controversies have careened without resolution. Do democracies fight each other? What about trading partners? Do neighboring ethnic groups inevitably play out ancient hatreds in bloody conflict? Do peacekeeping forces really keep the peace? Do terrorist organizations get what they want? How about Gandhian nonviolent movements? Are post-conflict reconciliation rituals effective at preventing the renewal of conflict?
 
History nerds can adduce examples that support either answer, but that does not mean the questions are irresolvable. Political events are buffeted by many forces, so it’s possible that a given force is potent in general but submerged in a particular instance. With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively. As best we can tell at present, the answers to the questions listed above are (on average, and all things being equal) no, no, no, yes, no, yes, and yes.
 
The humanities are the domain in which the intrusion of science has produced the strongest recoil. Yet it is just that domain that would seem to be most in need of an infusion of new ideas. By most accounts, the humanities are in trouble. University programs are downsizing, the next generation of scholars is un- or underemployed, morale is sinking, students are staying away in droves. No thinking person should be indifferent to our society’s disinvestment from the humanities, which are indispensable to a civilized democracy.
 
Diagnoses of the malaise of the humanities rightly point to anti-intellectual trends in our culture and to the commercialization of our universities. But an honest appraisal would have to acknowledge that some of the damage is self-inflicted. The humanities have yet to recover from the disaster of postmodernism, with its defiant obscurantism, dogmatic relativism, and suffocating political correctness. And they have failed to define a progressive agenda. Several university presidents and provosts have lamented to me that when a scientist comes into their office, it’s to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it’s to plead for respect for the way things have always been done.
 
Those ways do deserve respect, and there can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains.
They originate in our faculties of perception, thought, and emotion, and they cumulate and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.
 
In some disciplines, this consilience is a fait accompli. Archeology has grown from a branch of art history to a high-tech science. Linguistics and the philosophy of mind shade into cognitive science and neuroscience.
 
Similar opportunities are there for the exploring. The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.
 
As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.
And as with politics, the advent of data science applied to books, periodicals, correspondence, and musical scores holds the promise for an expansive new “digital humanities.” The possibilities for theory and discovery are limited only by the imagination and include the origin and spread of ideas, networks of intellectual and artistic influence, the persistence of historical memory, the waxing and waning of themes in literature, and patterns of unofficial censorship and taboo.
 
Nonetheless, many humanities scholars have reacted to these opportunities like the protagonist of the grammar-book example of the volitional future tense: “I will drown; no one shall save me.” Noting that these analyses flatten the richness of individual works, they reach for the usual adjectives: simplistic, reductionist, naïve, vulgar, and of course, scientistic.
 
The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic. An appreciation of the particulars of a work can co-exist with explanations at many other levels, from the personality of an author to the cultural milieu, the faculties of human nature, and the laws governing social beings. The rejection of a search for general trends and principles calls to mind Jorge Luis Borges’s fictitious empire in which “the Cartographers Guild drew a map of the Empire whose size was that of the Empire, coinciding point for point with it. The following Generations ... saw the vast Map to be Useless and permitted it to decay and fray under the Sun and winters.”
And the critics should be careful with the adjectives. If anything is naïve and simplistic, it is the conviction that the legacy silos of academia should be fortified and that we should be forever content with current ways of making sense of the world. Surely our conceptions of politics, culture, and morality have much to learn from our best understanding of the physical universe and of our makeup as a species.

Gaseous planet with same mass as Earth is discovered by scientists

KOI-314c, 200 light years away, is 60% larger than Earth, with a thick gaseous atmosphere, orbiting a red dwarf star
[Artist's impression from the Harvard-Smithsonian Centre for Astrophysics of KOI-314c in orbit around its star. Photograph: C. Pulliam & D. Aguilar (CfA)/PA]

Earth's gassy 'twin' has been discovered in another solar system 200 light years away.
The planet, known as KOI-314c, weighs the same as Earth but is 60% larger, leading scientists to suspect it has a thick gaseous atmosphere.

It orbits a dim red dwarf star at such a close distance that temperatures on its surface could be as high as 104C – too hot for most forms of life on Earth.

KOI-314c is only 30% more dense than water. This suggests that the world is enveloped by a blanket of hydrogen and helium hundreds of miles thick.
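
Those two figures can be checked against each other. The sketch below, assuming Earth's mean density of 5.51 g/cm^3, shows how an Earth mass spread through a 60%-larger radius lands close to the quoted density:

    # Consistency check: Earth's mass inside a radius 1.6x Earth's.
    EARTH_DENSITY = 5.51                  # g/cm^3, Earth's mean density
    density = EARTH_DENSITY / 1.6 ** 3    # density scales as M / R^3
    print(f"~{density:.2f} g/cm^3, ~{density - 1.0:.0%} denser than water")
    # -> ~1.35 g/cm^3, about 35% denser than water: in rough agreement
    #    with the quoted 30%, given the uncertainty in the measured mass.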

Scientists believe it may have started life as a mini-Neptune before some of its atmospheric gases were blasted away by intense radiation from the parent star.

Lead astronomer Dr David Kipping, from the Harvard-Smithsonian Centre for Astrophysics in the US, said: "This planet might have the same mass as Earth, but it is certainly not Earth-like.

"It proves that there is no clear dividing line between rocky worlds like Earth and fluffier planets like water worlds or gas giants."

The findings were presented at the annual meeting of the American Astronomical Society in Washington DC.

To weigh KOI-314c, the scientists used a new technique called transit timing variations (TTV), which only works when more than one planet orbits a star.

The two planets tug on each other, slightly altering the time they take to cross or "transit" the star's face. Analysing the way the planetary wobbles affect light coming from the star makes it possible to calculate their mass.
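
As a rough illustration of the technique, the sketch below builds the "observed minus calculated" (O-C) diagram astronomers read such wobbles from. The ~23-day period matches the value reported for KOI-314c, but the perturbation itself is invented for illustration:

    # Toy transit-timing-variation (TTV) signal: a lone planet transits on
    # a strict schedule, so residuals against a best-fit linear ephemeris
    # are flat; a perturbing companion makes them oscillate, with an
    # amplitude that grows with the companion's mass.
    import numpy as np

    P = 23.09                  # orbital period, days (reported for KOI-314c)
    n = np.arange(50)          # transit number
    ttv = 0.02 * np.sin(2 * np.pi * n / 12.0)   # assumed perturbation, days
    t_obs = n * P + ttv

    # Fit the best linear ephemeris, then inspect the residuals (O - C).
    coef = np.polyfit(n, t_obs, 1)
    o_minus_c = t_obs - np.polyval(coef, n)
    print(f"peak-to-peak TTV: {np.ptp(o_minus_c) * 24 * 60:.0f} minutes")

Matching the amplitude and period of that oscillation against an orbital model is what yields the companion's mass.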

KOI-314c's companion world is similar to it in size but weighs four times more than Earth.

The new discovery was made by chance as scientists scoured data from the Kepler space telescope looking for evidence of moons rather than planets.

"When we noticed this planet showed transit timing variations, the signature was clearly due to the other planet in the system and not a moon," said Kipping.

"At first we were disappointed it wasn't a moon, but then we soon realised it was an extraordinary measurement."

What energy problem? Doom-and-gloom activists have missed the real solution: tech innovation.


[Illustration: Andrew Saeger for The Boston Globe]

 


By John E. Sununu

Whoever coined the phrase “everyone loves a good mystery” was wrong. What people really love is a good mystery story, with an exciting plot and — more important — a tidy resolution where everything becomes clear. It’s a formula that Edgar Allan Poe invented, Arthur Conan Doyle perfected, and Agatha Christie employed to make millions. By comparison, a mystery without resolution can be frustrating. And a mystery whose conclusion contradicts the spirit of the narrative can be downright annoying.
 
That’s the problem with the tale of electricity in the United States. Consumption is falling, no one saw this coming, and it contradicts the gloom-and-doom narrative that so many environmental activists use to raise money. As a result, it’s barely getting any attention. When the Associated Press published a year-end story on the dramatic trend, it appeared in this newspaper on page B6.
 
Yet the facts are striking. Since 2007, total electricity consumption in the United States has fallen by over 100,000 megawatt hours. Consumption on a per-person basis is down even more dramatically, reaching levels not seen since 2001. In an age of ubiquitous handheld devices and laptops, that appears counterintuitive. Surprisingly, those devices are helping to fuel the decline.
 
Laptops use less electricity than desktops; tablets use less than laptops; smartphones use less than tablets. As smaller devices displace their clunkier brethren, we use less power even as we spend more time online. Often unwittingly, consumers are applying enormous pressure for greater efficiency: As they demand longer and longer battery life, manufacturers must find ever-more clever ways to minimize power consumption. According to the Electric Power Research Institute, today’s iPad consumes less than 5 percent of the electricity used by a desktop computer.
In other parts of the home, traditional electricity hogs like televisions have turned over a new leaf. Today’s flat-screen models use 80 percent less energy than the monster cathode-ray sets from my childhood. That’s primarily the work of humble light-emitting diodes — low-power units that are fast becoming the light source of choice for everything from stadium jumbotrons to car headlights.
 
It’s noteworthy that the lion’s share of this transformation has occurred without government intervention. Computers, TVs, and industrial lighting are generally free from regulation. Consumers have been helped by energy-efficiency labels on appliances, but as electricity prices continue to rise, companies see this less as a federal imperative than as a competitive necessity.
 
Nor has the government been especially adept at noticing, let alone understanding, the trend. After years of erroneously forecasting usage growth, the Energy Department has at last projected a drop in household electricity consumption for 2014. The agency still maintains, however, that total consumption will increase once industrial and commercial uses are included. We’ll see. It’s difficult to argue that three years of declines were simply an anomaly when Canada and the United Kingdom have seen the same pattern.
 
The laws of supply and demand remain as powerful as ever. Ultimately, lower demand should help counter electricity rates that are skyrocketing because of high-priced renewable energy projects. Like the shale gas revolution rocking US energy markets, it’s a technological phenomenon that’s good for consumers, good for industry, and good for the environment.
 
Unfortunately, this news undermines the Malthusian narrative that the only path to salvation involves carbon taxes, renewable energy mandates, and a government that decides what kind of light bulbs you can buy. Environmentalists on the left raise lots of cash off the claim that current energy consumption trends are unsustainable and we’re running out of everything. Lower electricity consumption could really hurt their business model.
 
Innovation has given consumers better, faster, and more nimble electronic products. In a highly competitive marketplace, that same innovation has delivered greater efficiency and productivity as well. As always, activists and government officials would love to claim that their prescription of regulation and intervention is essential to save the world, when in fact their best course of action might be to get out of the way. Why, for example, can’t they make it easier to import cheap surplus hydroelectricity from Canada?
 
That remains a mystery, but perhaps not for long. As the public becomes more aware that the sky isn’t falling, the appetite for exotic solutions and hypothetical energy sources will decline even faster than electricity consumption. That conclusion may not be as entertaining as a Sherlock Holmes story. But, as the detective would appreciate, it is at least built on common sense.
 

Does cloudy super-Earth hold life? Weather may be factor

A cloudy super-Earth is raising questions.


Science Recorder | Delila James | Wednesday, January 01, 2014
 
Astronomers are having a tough time figuring out what some super-Earths are made of, thanks to layers of high-altitude clouds blanketing the planets.

The name “super-Earth” is just a bit misleading. In fact, these planets bear little resemblance to planet Earth. The term “super-Earth” only refers to the mass of the planet and doesn’t suggest anything
about its surface characteristics or potential for life.

Super-Earths are exoplanets – planets outside our solar system – that are larger than Earth but smaller than Neptune. And despite super-Earths being rather common in our Milky Way, scientists still know very little about them. Super-Earths could be watery worlds or gas balls like Jupiter, with atmospheres similar to Earth’s or completely different. Unlocking their mysteries would not only enhance understanding of how planets and solar systems form, but help narrow down the search for extraterrestrial life forms.

In 2009, astronomers discovered a super-Earth exoplanet, classified as GJ 1214b, that is 2.7 times the size of Earth, lies a relatively nearby 40 light years away in the constellation Ophiuchus, and races around its red dwarf star once every 38 hours. Despite the planet being otherwise well-suited for study, scientists remained puzzled because they couldn’t determine its composition.

Now, NASA’s Hubble Space Telescope is shedding some light (literally) on the mystery by allowing scientists to observe GJ 1214b as it passes in front of, or transits, its host star. This gives them the chance to study the planet as starlight filters through its atmosphere. The researchers look for changes in certain wavelengths of light, which indicate what chemicals are in the atmosphere, and for apparent changes in the planet’s observed size. A watery world, for example, would look bigger at wavelengths where water vapor is opaque, because the vapor blocks starlight and enlarges the planet’s silhouette.
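
A toy calculation shows why an opaque atmosphere changes the apparent size. The numbers below are illustrative assumptions -- a red dwarf of roughly 0.2 solar radii and a few hundred kilometers of opaque atmosphere -- not the measured values:

    # Transit depth is (R_planet / R_star)^2; an atmosphere that is opaque
    # at some wavelengths pads the effective radius and deepens the transit
    # at those wavelengths. All numbers are illustrative assumptions.
    R_SUN_KM, R_EARTH_KM = 696_000.0, 6_371.0

    r_star = 0.20 * R_SUN_KM         # small red dwarf (assumed)
    r_planet = 2.7 * R_EARTH_KM      # GJ 1214b's quoted size

    def depth(pad_km=0.0):
        """Fraction of starlight blocked, with an opaque layer pad_km thick."""
        return ((r_planet + pad_km) / r_star) ** 2

    signal = (depth(200.0) - depth(0.0)) * 1e6    # parts per million
    print(f"water-vapor-like feature: ~{signal:.0f} ppm deeper transit")
    # The observed spectrum was flat across wavelengths -- no such feature --
    # which is what forced the high-cloud interpretation.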

Completely contrary to the research team’s expectations, even with Hubble’s precision, they observed no apparent change in the size of the planet. According to lead author Laura Kreidberg of the University of Chicago and her colleagues, this could only mean that GJ 1214b was blanketed in a cloud cover composed not of water, but of zinc sulfide or potassium chloride. The team’s paper is published in the journal Nature.

While it’s nice to have the mystery partly solved, the downside is that the planet’s clouds, like those here on Earth, reduce visibility. So, whether GJ 1214b is home to any sort of biological activity is likely to remain a mystery for the time being.

Read more: http://www.sciencerecorder.com/news/does-cloudy-super-earth-hold-life-weather-may-be-factor/#ixzz2pffUQpEA

Mars One moves closer to launching humans to Mars

Are humans headed to Mars?


Science Recorder | Rick Docksai | Tuesday, December 31, 2013
 
As 2013 draws to a close, the founders of Mars One will be making at least one New Year’s resolution: make even more progress toward the goal of flying humans to Mars. It’s been less than eight months since the Dutch nonprofit went public with its plan to send several humans on a one-way voyage to the red planet by 2024, but the venture has made some noteworthy headway in that short time frame.

First, the venture has secured the buy-in of several respected industry partners. Aerospace heavyweight Lockheed Martin has agreed to build a Mars lander vessel for the future expedition, and satellite firm SSTL has signed on to build a satellite that will relay communications between Earth and the Mars base. Lockheed Martin has been involved in nearly every one of NASA’s robotic missions to Mars, and it has a lead role in NASA’s research and development of technologies for a human expedition to Mars. The aerospace firm will be designing its Mars One lander based on the Phoenix, a robotic NASA lander that explored Mars back in 2008. If all goes according to plan, Lockheed Martin could have a Mars One lander prototype ready for launch into space by 2018.

This year has thrown up a few difficulties, however. Among these are the volunteer signups. Mars One seeks interested volunteers from the general public. Applications have been coming in, but in lower numbers than the project had been expecting: it had received 165,000 at the time of writing but had been hoping for one million. Also, the applicant pool is overwhelmingly male, whereas Mars One’s ideal pioneer group would be an even mix of men and women, so as to ensure enough procreation to get a thriving new human settlement on the red planet up and running.

Still, this initial pool of 165,000 applicants is arguably already more than enough: the initial pioneering expedition isn’t supposed to take more than a dozen finalists when it launches.

In addition, crew selection won’t move to the next phase until 2015. That’s when large groups of applicants will be chosen to form into teams and compete in tests of mental and physical capability. Only those who outshine all fellow competitors through round after round of these tests will be approved to join the mission.

The years 2018 to 2023 will see several unmanned missions take off for Mars and lay out infrastructure for the base. When the humans arrive, an event that Mars One expects will happen in 2024, an operational base will be there waiting for them.

Read more: http://www.sciencerecorder.com/news/mars-one-moves-closer-to-launching-humans-to-mars/#ixzz2pfaQUGlL

One-of-a-kind triple star system may offer clue to true nature of gravity

The system offers the scientists the best-yet opportunity to discover a violation of a concept called the Equivalence Principle.

Science Recorder | Jonathan Marker | Monday, January 06, 2014

According to a January 5 news release from the National Radio Astronomy Observatory, a team of astronomers using the NSF’s Green Bank Telescope has discovered a one-of-a-kind triple star system consisting of two white dwarf stars and a super-dense neutron star. Intriguingly, all three stars occupy a region of space smaller than Earth’s orbit around the sun. This unique configuration has permitted scientists to make the most accurate measurements yet of the intricate gravitational interactions in this type of star system. Eventually, detailed analysis of this system may offer a major clue toward understanding the true nature of gravity.

“This triple system gives us a natural cosmic laboratory far better than anything found before for learning exactly how such three-body systems work and potentially for detecting problems with General Relativity that physicists expect to see under extreme conditions,” said Scott Ransom, of the National Radio Astronomy Observatory.  “This is the first millisecond pulsar found in such a system, and we immediately recognized that it provides us a tremendous opportunity to study the effects and nature of gravity.”

The astronomers embarked on an exhaustive observational program using the Green Bank Telescope, the Arecibo radio telescope in Puerto Rico, and the Westerbork Synthesis Radio Telescope in the Netherlands. In addition, they observed the system using data from the Sloan Digital Sky Survey, the GALEX satellite, the WIYN telescope on Kitt Peak, Arizona, and the Spitzer Space Telescope.

“The gravitational perturbations imposed on each member of this system by the others are incredibly pure and strong,” Ransom said. “The millisecond pulsar serves as an extremely powerful tool for measuring those perturbations incredibly well.”

By accurately recording the time of appearance of the pulsar’s pulses, the scientists calculated the geometry of the system and the masses of the stars with unequaled precision.
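Note: a toy illustration of how pulse timing encodes orbital geometry (my own sketch with assumed round numbers, not the team’s actual analysis): the pulses arrive early or late by the light-travel time across the pulsar’s orbit, the so-called Roemer delay, and tracking those shifts over time is what pins down the geometry and the masses.

```python
# Toy illustration (assumed numbers, not the team's pipeline): pulse
# arrival times shift by the light-travel time across the pulsar's orbit
# (the Roemer delay) as the pulsar swings around its companion.
import math

C = 299_792_458.0        # speed of light, m/s
A_SIN_I = 1.2 * C        # assumed projected orbit radius: ~1.2 light-seconds
P_ORB = 1.6 * 86400.0    # assumed inner orbital period: ~1.6 days, in seconds

def roemer_delay(t):
    """Pulse arrival delay (seconds) at time t due to orbital motion."""
    phase = 2.0 * math.pi * t / P_ORB
    return (A_SIN_I / C) * math.sin(phase)

for frac in (0.0, 0.25, 0.5, 0.75):
    print(f"orbital phase {frac:.2f}: delay = {roemer_delay(frac * P_ORB):+.3f} s")
```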

“We have made some of the most accurate measurements of masses in astrophysics,” said Anne Archibald, a researcher at the Netherlands Institute for Radio Astronomy. “Some of our measurements of the relative positions of the stars in the system are accurate to hundreds of meters.”

The system offers scientists the best-yet opportunity to discover a violation of a concept called the Equivalence Principle. According to this principle, the effect of gravity on a body does not depend on the nature or internal structure of that body.

“While Einstein’s Theory of General Relativity has so far been confirmed by every experiment, it is not compatible with quantum theory. Because of that, physicists expect that it will break down under extreme conditions,” Ransom said. “This triple system of compact stars gives us a great opportunity to look for a violation of a specific form of the equivalence principle called the Strong Equivalence Principle.”

The complete research findings appear online January 5 in the journal Nature.

Scientists split water into hydrogen, oxygen utilizing light, nanoparticles

The experiments used different sources of light, ranging from a laser to white light simulating the solar spectrum.

Science Recorder | Jonathan Marker | Monday, December 16, 2013
According to a December 15 news release from the University of Houston (UH), researchers there have discovered a catalyst that can rapidly separate hydrogen and oxygen from water using the sun’s rays and cobalt oxide nanoparticles.

The technology potentially could create a clean, renewable source of energy

Researchers from the University of Houston have found a catalyst that can quickly generate hydrogen from water using sunlight, potentially creating a clean and renewable source of energy.

Their research, published online Sunday in Nature Nanotechnology, involved the use of cobalt oxide nanoparticles to split water into hydrogen and oxygen.

Jiming Bao, lead author of the paper and an assistant professor in the Department of Electrical and Computer Engineering at UH, said the research discovered a new photocatalyst and demonstrated the potential of nanotechnology in engineering a material's properties, although more work remains to be done.

Bao said photocatalytic water-splitting experiments have been tried since the 1970s, but this was the first to use cobalt oxide and the first to use neutral water under visible light at a high energy conversion efficiency without co-catalysts or sacrificial chemicals. The project involved researchers from UH, along with those from Sam Houston State University, the Chinese Academy of Sciences, Texas State University, Carl Zeiss Microscopy LLC, and Sichuan University.

Researchers prepared the nanoparticles in two ways, using femtosecond laser ablation and through mechanical ball milling. Despite some differences, Bao said both worked equally well.

Different sources of light were used, ranging from a laser to white light simulating the solar spectrum. Bao said he would expect the reaction to work equally well using natural sunlight.

Once the nanoparticles are added and light is applied, the water separates into hydrogen and oxygen almost immediately, producing twice as much hydrogen as oxygen, as expected from the 2:1 hydrogen-to-oxygen ratio in H2O molecules, Bao said.
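Note: that 2:1 ratio follows directly from the balanced water-splitting reaction:

2 H2O → 2 H2 + O2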

The experiment has potential as a source of renewable fuel, but at a solar-to-hydrogen efficiency rate of around 5 percent, the conversion rate is still too low to be commercially viable. Bao suggested a more feasible efficiency rate would be about 10 percent, meaning that 10 percent of the incident solar energy would be converted to hydrogen chemical energy by the process.
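Note: to make those percentages concrete, here is a rough back-of-the-envelope sketch using round assumed numbers (typical full sunlight of about 1,000 W per square meter and hydrogen’s higher heating value); these are my illustrative figures, not numbers from the paper.

```python
# Back-of-the-envelope sketch of what a solar-to-hydrogen efficiency means.
# Round assumed numbers, not figures from the paper.
SOLAR_FLUX = 1000.0   # W per square meter, typical full sunlight
H2_ENERGY = 142e6     # J per kg, higher heating value of hydrogen

def h2_grams_per_hour(efficiency, area_m2=1.0):
    """Hydrogen output per hour at a given solar-to-hydrogen efficiency."""
    watts_into_h2 = efficiency * SOLAR_FLUX * area_m2
    return watts_into_h2 * 3600.0 / H2_ENERGY * 1000.0  # grams per hour

for eff in (0.05, 0.10):
    print(f"{eff:.0%} efficiency: {h2_grams_per_hour(eff):.1f} g of H2 per m^2 per hour")
```

So even at the “feasible” 10 percent, a square meter of catalyst in full sun yields only a few grams of hydrogen per hour, which makes clear why efficiency and catalyst lifetime are the sticking points.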

Other issues remain to be resolved, as well, including reducing costs and extending the lifespan of cobalt oxide nanoparticles, which the researchers found became deactivated after about an hour of reaction.

"It degrades too quickly," said Bao, who also has appointments in materials engineering and the Department of Chemistry.

The work, supported by the Welch Foundation, will lead to future research, he said, including the question of why cobalt oxide nanoparticles have such a short lifespan, and questions involving chemical and electronic properties of the material.

Extinct ancient ape did not walk like a human, study shows

Jul 25, 2013 
Read more at: http://phys.org/news/2013-07-extinct-ancient-ape-human.html#jCp

    
According to a new study led by University of Texas at Austin anthropologists Gabrielle A. Russo and Liza Shapiro, Oreopithecus bambolii, a 9- to 7-million-year-old ape from Italy, did not, in fact, walk habitually on two legs.

The findings refute a long-standing body of evidence suggesting that Oreopithecus had the capabilities for bipedal (two-legged) walking.

The study, published in a forthcoming issue of the Journal of Human Evolution, confirms that the anatomical features related to habitual upright, two-legged walking remain exclusively associated with humans and their fossil ancestors.

"Our findings offer new insight into the Oreopithecus locomotor debate," says Russo, who is currently a postdoctoral research fellow at Northeast Ohio Medical University. "While it's certainly possible that Oreopithecus walked on two legs to some extent, as apes are known to employ short bouts of this activity, an increasing amount of anatomical evidence clearly demonstrates that it didn't do so habitually."

As part of the study, the researchers analyzed the fossil ape to see whether it possessed lower spine anatomy consistent with bipedal walking. They compared measurements of its lumbar vertebrae (lower back) and sacrum (a triangular bone at the base of the spine) to those of modern humans, fossil hominins (extinct bipedal relatives of humans), and a sample of mammals that commonly move around in trees, including apes, sloths and an extinct lemur.

The lower spine serves as a good basis for testing the habitual bipedal locomotion hypothesis because lumbar vertebrae and sacra exhibit distinct features that facilitate the transmission of body weight for habitual bipedalism, says Russo.

According to the findings, the anatomy of the Oreopithecus lumbar vertebrae and sacrum is unlike that of humans and more similar to that of apes, indicating that it was incompatible with the functional demands of walking upright as a human does.

"The lower spine of humans is highly specialized for habitual bipedalism, and is therefore a key region for assessing whether this uniquely human form of locomotion was present in Oreopithecus," says Shapiro, a professor of anthropology. "Previous debate on the locomotor behavior of Oreopithecus had focused on the anatomy of the limbs and pelvis, but no one had reassessed the controversial claim that its lower back was human-like."
 

'Ardi' skull reveals links to human lineage -- did our pre-chimp ancestors walk upright?

 Read more at: http://phys.org/news/2014-01-ardi-skull-reveals-links-human.html#jCp
This is the 4.4 million-year-old cranial base of Ardipithecus ramidus from Aramis, Middle Awash research area, Ethiopia. Credit: Tim White.
One of the most hotly debated issues in current human origins research focuses on how the 4.4 million-year-old African species Ardipithecus ramidus is related to the human lineage. "Ardi" was an unusual primate. Though it possessed a tiny brain and a grasping big toe used for clambering in the trees, it had small, humanlike canine teeth and an upper pelvis modified for bipedal walking on the ground.
 
Scientists disagree about where this mixture of features positions Ardipithecus ramidus on the tree of human and ape relationships. Was Ardi an ape with a few humanlike features retained from an ancestor near in time (between 6 and 8 million years ago, according to DNA evidence) to the split between the chimpanzee and human lines? Or was it a true relative of the human line that had yet to shed many signs of its remote tree-dwelling ancestry?

New research led by ASU paleoanthropologist William Kimbel confirms Ardi's close evolutionary relationship to humans. Kimbel and his collaborators turned to the underside (or base) of a beautifully preserved partial cranium of Ardi. Their study revealed a pattern of similarity that links Ardi to Australopithecus and modern humans but not to apes.

The research appears in the January 6, 2014, online edition of Proceedings of the National Academy of Sciences. Kimbel is director of the ASU Institute of Human Origins, a research center of the College of Liberal Arts and Sciences in the School of Human Evolution and Social Change. Joining ASU's Kimbel as co-authors are Gen Suwa (University of Tokyo Museum), Berhane Asfaw (Rift Valley Research Service, Addis Ababa), Yoel Rak (Tel Aviv University), and Tim White (University of California at Berkeley).

White's field-research team has been recovering fossil remains of Ardipithecus ramidus in the Middle Awash research area, Ethiopia, since the 1990s. The most recent study of the Ardi skull, led by Suwa, whose work with the Middle Awash team first revealed humanlike aspects of its base, was published in Science in 2009. Kimbel co-leads the team that recovered the earliest known Australopithecus skulls from the Hadar site, home of the "Lucy" skeleton, in Ethiopia.

"Given the very tiny size of the Ardi skull, the similarity of its cranial base to a human's is astonishing," says Kimbel.

The cranial base is a valuable resource for studying phylogenetic, or natural evolutionary relationships, because its anatomical complexity and association with the brain, posture, and chewing system have provided numerous opportunities for adaptive evolution over time. The human cranial base, accordingly, differs profoundly from that of apes and other primates.

In humans, the structures marking the articulation of the spine with the skull are more forwardly located than in apes, the base is shorter from front to back, and the openings on each side for passage of blood vessels and nerves are more widely separated.

These shape differences affect the way the bones are arranged on the skull base such that it is fairly easy to tell apart even isolated fragments of ape and human basicrania.

Ardi's cranial base shows the distinguishing features that separate humans and Australopithecus from the apes. Kimbel's earlier research (with collaborator Rak) had shown that these human peculiarities were present in the earliest known Australopithecus skulls by 3.4 million years ago.

The new work expands the catalogue of anatomical similarities linking humans, Australopithecus, and Ardipithecus on the tree of life and shows that the human cranial base pattern is at least a million years older than Lucy's species, A. afarensis.

Paleoanthropologists generally fall into one of two camps on the cause of evolutionary changes in the human cranial base. Was it the adoption of upright posture and bipedality causing a shift in the poise of the head on the vertebral column? If so, does the humanlike cranial base of Ar. ramidus confirm postcranial evidence for partial bipedality in this species? Or, do the changes tell us about the shape of the brain (and of the base on which it sits), perhaps an early sign of brain reorganization in the human lineage? Both alternatives will need to be re-evaluated in light of the finding that Ardi does indeed appear to be more closely related to humans than to chimpanzees.

"The Ardi cranial base fills some important gaps in our understanding of human evolution above the neck," adds Kimbel. "But it opens up a host of new questions…just as it should!" 

Anthropologists confirm link between cranial anatomy and two-legged walking

Read more at: http://phys.org/news/2013-09-anthropologists-link-cranial-anatomy-two-legged.html#jCp
Sep 27, 2013

Comparison of the skeletons of three bipedal mammals: an Egyptian jerboa, an eastern gray kangaroo and a human.

Anthropology researchers from The University of Texas at Austin have confirmed a direct link between upright two-legged (bipedal) walking and the position of the foramen magnum, a hole in the base of the skull that transmits the spinal cord.

The study, published in a forthcoming issue of the Journal of Human Evolution, confirms a controversial finding made by anatomist Raymond Dart, who discovered the first known two-legged walking (bipedal) human ancestor, Australopithecus africanus. Since Dart's discovery in 1925, physical anthropologists have continued to debate whether this feature of the cranial base can serve as a direct link to bipedal locomotion.

Chris Kirk, associate professor of anthropology and co-author of the study, says the findings validate foramen magnum position as a useful tool for fossil research and shed further insight into human evolution.

"Now that we know that a forward-shifted foramen magnum is characteristic of bipedal mammals generally, we can be more confident that fossil species showing this feature were also habitual bipeds," Kirk says. "Our methods can be applied to fossil material belonging to some of the earliest potential ."

The foramen magnum in humans is centrally positioned under the braincase because the head sits atop the upright spine in bipedal postures. In contrast, the foramen magnum is located further toward the back of the skull in apes and most other mammals, as the spine is positioned more behind the head in four-legged postures.

As part of the study, the researchers measured the position of the foramen magnum in 71 species from three mammalian groups: marsupials, rodents and primates. By comparing foramen magnum position broadly across mammals, the researchers were able to rule out other potential explanations for a forward-shifted foramen magnum, such as differences in brain size.

According to the findings, a foramen magnum positioned toward the base of the skull is found not only in humans, but in other habitually bipedal mammals as well. Kangaroos, kangaroo rats and jerboas all have a more forward-shifted foramen magnum compared with their quadrupedal (four-legged walking) close relatives.

These particular mammals evolved bipedal locomotion and anteriorly positioned foramina magna independently of one another, as a result of convergent evolution, says Gabrielle Russo, a postdoctoral research fellow at Northeast Ohio Medical University and the study's lead researcher.

"As one of the few cranial features directly linked to locomotion, the position of the foramen magnum is an important feature for the study of human evolution," Russo says. "This is the case for early hominin species such as Sahelanthropus tchadensis, which shows a forward shift of the foramen magnum but has aroused some controversy as to whether it is more closely related to humans or African apes."




The current, and very disturbing, long-term trend is clear. As noted above by Composer, the 1998 (and 2002) record has fallen since 2004. That record was also exceeded in 2007 and 2009, although those were not record years because they fell below the 2005 (and 2010) record. (Data) None of those records is certain, because they all lie within error of one another, but all clearly exceed any record prior to 1998.

Note:  given the admitted uncertainty of the historical data, if these records "lie within error of each other," then the Medieval Warm Period could have been at around current temperatures.  More accurate readings are needed.

A graph of modern temperatures from 1880 to 2012 is shown below.  (The black lines are mine.)



 
Besides the obvious warming trend, other interesting features stand out here, which my lines attempt to demonstrate more clearly.  Notice that the warming follows a step pattern, and a highly repetitive one, with longer periods of warming back to back with shorter periods of cooling.  The line is of course rough, but I think the pattern comes out pretty clearly.
 

 
Perhaps better than straight lines is a sine-wave fit (sorry for the difficulty reading this).  I prefer this because it is a more natural model of temperature change, the fit is still good, and the last ~decade of stabilizing (or even decreasing) temperatures is explained in a straightforward way.
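For anyone who wants to try this at home, here is a minimal sketch of the kind of fit I mean, run on synthetic stand-in data (I don't reproduce the real series here); the roughly 60-year period is an assumption of mine, not a published result:

```python
# Minimal sketch: fit a linear warming trend plus one multidecadal sine
# cycle. The data below are synthetic stand-ins, not a real record.
import numpy as np
from scipy.optimize import curve_fit

def trend_plus_cycle(t, slope, offset, amp, period, phase):
    """Linear trend plus a single sinusoidal cycle; t in years since 1880."""
    return slope * t + offset + amp * np.sin(2 * np.pi * t / period + phase)

t = np.arange(0, 133)  # 1880 through 2012
rng = np.random.default_rng(0)
anoms = trend_plus_cycle(t, 0.006, -0.3, 0.12, 60.0, 0.0) + rng.normal(0, 0.08, t.size)

p0 = [0.005, 0.0, 0.1, 60.0, 0.0]  # starting guesses for the fitter
params, _ = curve_fit(trend_plus_cycle, t, anoms, p0=p0)
print(f"fitted period: {params[3]:.1f} yr, trend: {params[0]*100:.2f} C/century")
```

Whether such a cycle is physically real is exactly the point in dispute, of course; the fit only shows the model is consistent with the shape of the record.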
 
For reasons that aren't entirely clear to me, the warming-catastrophe clique objects to these lines and curves, regarding them as statistical chicanery.  For example, I happened on the chart below:
 
 
This, it claims, is how believers think skeptics view the data.  In this case, I agree: the time period is too short and the random noise too large.  You can always fit some pattern to noise if you're determined to.
 
Having said this, however, believers then produce this graph:
 
The lower chart, which the catastrophist community has adopted without a quibble, uses decades (of course we all know that nature follows human conventions) to create decade-by-decade average bars.  Why?  Because, quite suddenly, the 2005-2013 stabilization or even slight cooling instantly disappears.  I see no logic here other than to blind us to what any open-eyed mind can plainly see.  Hypocrisy knows no shame.
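To see the effect concretely, here is a small sketch on synthetic numbers of my own (not any published record): a series that warms steadily until about 2005 and then goes flat still produces decade averages that climb right through the plateau.

```python
# Demonstration of the decade-bar complaint: average a series that warms
# until ~2005 and then flattens, and the plateau vanishes into the means.
# Synthetic data with assumed shapes, not a real temperature record.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1970, 2014)
temps = np.where(years < 2005, 0.018 * (years - 1970), 0.018 * 35)  # warm, then flat
temps = temps + rng.normal(0, 0.05, years.size)

for start in range(1970, 2014, 10):
    decade = temps[(years >= start) & (years < start + 10)]
    print(f"{start}s mean anomaly: {decade.mean():+.2f} C")
```

Each decade's bar still comes out higher than the last, even though the underlying series stopped rising in 2005; that is the whole trick.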
 
Well, enough for a while.  What have we found?  Some time back I reposted an article showing that much of published, even peer-reviewed, science is simply wrong.  I recently found a similar post showing the same is true of medical science.  Combine that with our ideological, political, cultural, religious, and other biases, not to mention group-think, and it is no wonder there is so much confusion and contradiction on this subject.

Hate speech

From Wikipedia, the free encyclopedia ...