
Monday, January 6, 2014

What Catastrophe? MIT’s Richard Lindzen, the unalarmed climate scientist


Jan 13, 2014, Vol. 19, No. 17 • By ETHAN EPSTEIN
When you first meet Richard Lindzen, the Alfred P. Sloan professor of meteorology at MIT, senior fellow at the Cato Institute, leading climate “skeptic,” and all-around scourge of James Hansen, Bill McKibben, Al Gore, the Intergovernmental Panel on Climate Change (IPCC), and sundry other climate “alarmists,” as Lindzen calls them, you may find yourself a bit surprised. If you know Lindzen only from the way his opponents characterize him—variously, a liar, a lunatic, a charlatan, a denier, a shyster, a crazy person, corrupt—you might expect a spittle-flecked, wild-eyed loon. But in person, Lindzen cuts a rather different figure. With his gray beard, thick glasses, gentle laugh, and disarmingly soft voice, he comes across as nothing short of grandfatherly.
 
Thomas Fluharty

 
Granted, Lindzen is no shrinking violet. A pioneering climate scientist with decades at Harvard and MIT, Lindzen sees his discipline as being deeply compromised by political pressure, data fudging, out-and-out guesswork, and wholly unwarranted alarmism. In a shot across the bow of what many insist is indisputable scientific truth, Lindzen characterizes global warming as “small and .  .  . nothing to be alarmed about.” In the climate debate—on which hinge far-reaching questions of public policy—them’s fightin’ words.
 
In his mid-seventies, married with two sons, and now emeritus at MIT, Lindzen spends between four and six months a year at his second home in Paris. But that doesn’t mean he’s no longer in the thick of the climate controversy; he writes, gives myriad talks, participates in debates, and occasionally testifies before Congress. In an eventful life, Lindzen has made the strange journey from being a pioneer in his field and eventual IPCC coauthor to an outlier in the discipline—if not an outcast. 
 
Richard Lindzen was born in 1940 in Webster, Massachusetts, to Jewish immigrants from Germany. His bootmaker father moved the family to the Bronx shortly after Richard was born. Lindzen attended the Bronx High School of Science before winning a scholarship to the only place he applied that was out of town, the Rensselaer Polytechnic Institute, in Troy, New York. After a couple of years at Rensselaer, he transferred to Harvard, where he completed his bachelor’s degree and, in 1964, a doctorate. 
 
Lindzen wasn’t a climatologist from the start—“climate science” as such didn’t exist when he was beginning his career in academia. Rather, Lindzen studied math. “I liked applied math,” he says, “[and] I was a bit turned off by modern physics, but I really enjoyed classical physics, fluid mechanics, things like that.” A few years after arriving at Harvard, he began his transition to meteorology. “Harvard actually got a grant from the Ford Foundation to offer generous fellowships to people in the atmospheric sciences,” he explains. “Harvard had no department in atmospheric sciences, so these fellowships allowed you to take a degree in applied math or applied physics, and that worked out very well because in applied math the atmosphere and oceans were considered a good area for problems. .  .  . I discovered I really liked atmospheric sciences—meteorology. So I stuck with it and picked out a thesis.”
 
And with that, Lindzen began his meteoric rise through the nascent field. In the 1970s, while a professor at Harvard, Lindzen disproved the then-accepted theory of how heat moves around the Earth’s atmosphere, winning numerous awards in the process. Before his 40th birthday, he was a member of the National Academy of Sciences. In the mid-1980s, he made the short move from Harvard to MIT, and he’s remained there ever since. Over the decades, he’s authored or coauthored some 200 peer-reviewed papers on climate.
 
Where Lindzen hasn’t remained is in the mainstream of his discipline. By the 1980s, global warming was becoming a major political issue. Already, Lindzen was having doubts about the more catastrophic predictions being made. The public rollout of the “alarmist” case, he notes, “was immediately accompanied by an issue of Newsweek declaring all scientists agreed. And that was the beginning of a ‘consensus’ argument. Already by ’88 the New York Times had literally a global warming beat.” Lindzen wasn’t buying it. Nonetheless, he remained in the good graces of mainstream climate science, and in the early 1990s, he was invited to join the IPCC, a U.N.-backed multinational consortium of scientists charged with synthesizing and analyzing the current state of the world’s climate science. Lindzen accepted, and he ended up as a contributor to the 1995 report and the lead author of Chapter 7 (“Physical Climate Processes and Feedbacks”) of the 2001 report. Since then, however, he’s grown increasingly distant from prevalent (he would say “hysterical”) climate science, and he is voluminously on record disputing the predictions of catastrophe. 
 
The Earth’s climate is immensely complex, but the basic principle behind the “greenhouse effect” is easy to understand. The burning of oil, gas, and especially coal pumps carbon dioxide and other gases into the atmosphere, where they allow the sun’s heat to penetrate to the Earth’s surface but impede its escape, thus causing the lower atmosphere and the Earth’s surface to warm. Essentially everybody, Lindzen included, agrees. The question at issue is how sensitive the planet is to increasing concentrations of greenhouse gases (this is called climate sensitivity), and how much the planet will heat up as a result of our pumping into the sky ever more CO2, which remains in the atmosphere for upwards of 1,000 years. (Carbon dioxide, it may be needless to point out, is not a poison. On the contrary, it is necessary for plant life.) 
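The notion of climate sensitivity can be made concrete with a small sketch: because the warming response is roughly logarithmic in CO2 concentration, sensitivity is conventionally quoted as degrees of warming per doubling of CO2. The function name and the two sensitivity values below are illustrative assumptions, not figures taken from the article.

```python
import math

def equilibrium_warming(c_new_ppm, c_old_ppm, sensitivity_per_doubling):
    """Equilibrium warming (deg C) implied by a change in CO2 concentration.

    "Climate sensitivity" is the warming per doubling of CO2; since the
    radiative response is roughly logarithmic in concentration, the
    warming scales with log2 of the concentration ratio.
    """
    return sensitivity_per_doubling * math.log2(c_new_ppm / c_old_ppm)

# Pre-industrial ~280 ppm to ~400 ppm, under two illustrative sensitivities:
low = equilibrium_warming(400, 280, 1.0)   # a low, skeptic-leaning value
high = equilibrium_warming(400, 280, 3.0)  # near the IPCC's central estimate
print(round(low, 2), round(high, 2))
```

The whole dispute described in this article is, in effect, a dispute over which `sensitivity_per_doubling` value nature has chosen.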
 
Lindzen doesn’t deny that the climate has changed or that the planet has warmed. “We all agree that temperature has increased since 1800,” he tells me. There’s a caveat, though: It’s increased by “a very small amount. We’re talking about tenths of a degree [Celsius]. We all agree that CO2 is a greenhouse gas. All other things kept equal, [there has been] some warming. As a result, there’s hardly anyone serious who says that man has no role. And in many ways, those have never been the questions. The questions have always been, as they ought to be in science, how much?”
Lindzen says not much at all—and he contends that the “alarmists” vastly overstate the Earth’s climate sensitivity. Judging by where we are now, he appears to have a point; so far, 150 years of burning fossil fuels in large quantities has had a relatively minimal effect on the climate. By some measurements, there is now more CO2 in the atmosphere than there has been at any time in the past 15 million years. Yet since the advent of the Industrial Revolution, the average global temperature has risen by, at most, 1 degree Celsius, or 1.8 degrees Fahrenheit. And while it’s true that sea levels have risen over the same period, it’s believed they’ve been doing so for roughly 20,000 years. What’s more, despite common misconceptions stoked by the media in the wake of Katrina, Sandy, and the recent typhoon in the Philippines, even the IPCC concedes that it has “low confidence” that there has been any measurable uptick in storm intensity thanks to human activity. Moreover, over the past 15 years, as man has emitted record levels of carbon dioxide year after year, the warming trend of previous decades has stopped. Lindzen says this is all consistent with what he holds responsible for climate change: a small bit of man-made impact and a whole lot of natural variability.
 
The real fight, though, is over what’s coming in the future if humans continue to burn fossil fuels unabated. According to the IPCC, the answer is nothing good. Its most recent Summary for Policymakers, which was released early this fall—and which some scientists reject as too sanguine—predicts that if emissions continue to rise, by the year 2100, global temperatures could increase as much as 5.5 degrees Celsius from current averages, while sea levels could rise by nearly a meter. If we hit those projections, it’s generally thought that the Earth would be rife with crop failures, drought, extreme weather, and epochal flooding. Adios, Miami.
 
It is to avoid those disasters that the “alarmists” call on governments to adopt policies reducing the amounts of greenhouse gases released into the atmosphere. As a result of such policies—and a fortuitous increase in natural gas production—U.S. greenhouse emissions are at a 20-year low and falling. But global emissions are rising, thanks to massive increases in energy use in the developing world, particularly in China and India. If the “alarmists” are right, then, a way must be found to compel the major developing countries to reduce carbon emissions.
 
But Lindzen rejects the dire projections. For one thing, he says that the Summary for Policymakers is an inherently problematic document. The IPCC report itself, weighing in at thousands of pages, is “not terrible. It’s not unbiased, but the bias [is] more or less to limit your criticism of models,” he says. The Summary for Policymakers, on the other hand—the only part of the report that the media and the politicians pay any attention to—“rips out doubts to a large extent. .  .  . [Furthermore], government representatives have the final say on the summary.” Thus, while the full IPCC report demonstrates a significant amount of doubt among scientists, the essentially political Summary for Policymakers filters it out.
 
Lindzen also questions the “alarmist” line on water vapor. Water vapor (and its close cousin, clouds) is one of the most prevalent greenhouse gases in the atmosphere. According to most climate scientists, the hotter the planet gets, the more water vapor there will be, magnifying the effects of other greenhouse gases, like CO2, in a sort of hellish positive feedback loop. Lindzen disputes this, contending that water vapor could very well end up having a cooling effect on the planet. As the science writer Justin Gillis explained in a 2012 New York Times piece, Lindzen “says the earth is not especially sensitive to greenhouse gases because clouds will react to counter them, and he believes he has identified a specific mechanism. On a warming planet, he says, less coverage by high clouds in the tropics will allow more heat to escape to space, countering the temperature increase.”
 
If Lindzen is right about this and global warming is nothing to worry about, why do so many climate scientists, many with résumés just as impressive as his, preach imminent doom? He says it mostly comes down to the money—to the incentive structure of academic research funded by government grants. Almost all funding for climate research comes from the government, which, he says, makes scientists essentially vassals of the state. And generating fear, Lindzen contends, is now the best way to ensure that policymakers keep the spigot open. 
 
Lindzen contrasts this with the immediate aftermath of World War II, when American science was at something of a peak. “Science had established its relevance with the A-bomb, with radar, for that matter the proximity fuse,” he notes. Americans and their political leadership were profoundly grateful to the science community; scientists, unlike today, didn’t have to abase themselves by approaching the government hat in hand. Science funding was all but assured. 
 
But with the cuts to basic science funding that occurred around the time of the Vietnam war, taxpayer support for research was no longer a political no-brainer. “It was recognized that gratitude only went so far,” Lindzen says, “and fear was going to be a much greater motivator. And so that’s when people began thinking about .  .  . how to perpetuate fear that would motivate the support of science.”
A need to generate fear, in Lindzen’s telling, is what’s driving the apocalyptic rhetoric heard from many climate scientists and their media allies. “The idea was, to engage the public you needed an event .  .  . not just a Sputnik—a drought, a storm, a sand demon. You know, something you could latch onto. [Climate scientists] carefully arranged a congressional hearing. And they arranged for [James] Hansen [author of Storms of My Grandchildren, and one of the leading global warming “alarmists”] to come and say something vague that would somehow relate a heat wave or a drought to global warming.” (This theme, by the way, is developed to characteristic extremes in the late Michael Crichton’s entertaining 2004 novel State of Fear, in which environmental activists engineer a series of fake “natural” disasters to sow fear over global warming.) 
 
Lindzen also says that the “consensus”—the oft-heard contention that “virtually all” climate scientists believe in catastrophic, anthropogenic global warming—is overblown, primarily for structural reasons. “When you have an issue that is somewhat bogus, the opposition is always scattered and without resources,” he explains. “But the environmental movement is highly organized. There are hundreds of NGOs. To coordinate these hundreds, they quickly organized the Climate Action Network, the central body on climate. There would be, I think, actual meetings to tell them what the party line is for the year, and so on.” Skeptics, on the other hand, are more scattered across disciplines and continents. As such, they have a much harder time getting their message across.
Because CO2 is invisible and the climate is so complex (your local weatherman doesn’t know for sure whether it will rain tomorrow, let alone conditions in 2100), expertise is particularly important. Lindzen sees a danger here. “I think the example, the paradigm of this, was medical practice.” He says that in the past, “one went to a physician because something hurt or bothered you, and you tended to judge him or her according to whether you felt better. That may not always have been accurate, but at least it had some operational content. .  .  . [Now, you] go to an annual checkup, get a blood test. And the physician tells you if you’re better or not and it’s out of your hands.” Because climate change is invisible, only the experts can tell us whether the planet is sick or not. And because of the way funds are granted, they have an incentive to say that the Earth belongs in intensive care.
 
Richard Lindzen presents a problem for those who say that the science behind climate change is “settled.” So many “alarmists” prefer to ignore him and instead highlight straw men: less credible skeptics, such as climatologist Roy Spencer of the University of Alabama (signatory to a declaration that “Earth and its ecosystems—created by God’s intelligent design and infinite power and sustained by His faithful providence—are robust, resilient, self-regulating, and self-correcting”), the Heartland Institute (which likened climate “alarmists” to the Unabomber), and Senator Jim Inhofe of Oklahoma (a major energy-producing state). The idea is to make it seem as though the choice is between accepting the view of, say, journalist James Delingpole (B.A., English literature), who says global warming is a hoax, and that of, say, James Hansen (Ph.D., physics, former head of the NASA Goddard Institute for Space Studies), who says that we are moving toward “an ice-free Antarctica and a desolate planet without human inhabitants.” 
 
But Lindzen, plainly, is different. He can’t be dismissed. Nor, of course, is he the only skeptic with serious scientific credentials. Judith Curry, the chair of the School of Earth and Atmospheric Sciences at Georgia Tech, William Happer, professor of physics at Princeton, John Christy, a climate scientist honored by NASA, now at the University of Alabama, and the famed physicist Freeman Dyson are among dozens of scientists who have gone on record questioning various aspects of the IPCC’s line on climate change. Lindzen, for his part, has said that scientists have called him privately to thank him for the work he’s doing.
 
But Lindzen, perhaps because of his safely tenured status at MIT, or just because of the contours of his personality, is a particularly outspoken and public critic of the consensus. It’s clear that he relishes taking on the “alarmists.” It’s little wonder, then, that he’s come under exceptionally vituperative attack from many of those who are concerned about the impact of climate change. It also stands to reason that they might take umbrage at his essentially accusing them of mass corruption with his charge that they are “stoking fear.” 
 
Take Joe Romm, himself an MIT Ph.D., who runs the climate desk at the left-wing Center for American Progress. On the center’s blog, Romm regularly lights into Lindzen. “Lindzen could not be more discredited,” he says in one post. In another post, he calls Lindzen an “uber-hypocritical anti-scientific scientist.” (Romm, it should be noted, is a bit more measured, if no less condescending, when the klieg lights are off. “I tend to think Lindzen is just one of those scientists whom time and science has passed by, like the ones who held out against plate tectonics for so long,” he tells me.) Seldom, however, does Romm stoop to explain what grounds justify dismissing Lindzen’s views with such disdain. 
 
Andrew Dessler, a climatologist at Texas A&M University, is another harsh critic of Lindzen. As he told me in an emailed statement, “Over the past 25 years, Dr. Lindzen has published several theories about climate, all of which suggest that the climate will not warm much in response to increases in atmospheric CO2. These theories have been tested by the scientific community and found to be completely without merit. Lindzen knows this, of course, and no longer makes any effort to engage with the scientific community about his theories (e.g., he does not present his work at scientific conferences). It seems his main audience today is Fox News and the editorial board of the Wall Street Journal.”
 
The Internet, meanwhile, is filled with hostile missives directed at Lindzen. They’re of varying quality. Some, written by climate scientists, are point-by-point rebuttals of Lindzen’s scholarly work; others, angry ad hominem screeds full of heat, signifying nothing. (When Lindzen transitioned to emeritus status last year, one blog headlined the news “Denier Down: Lindzen Retires.”)
 
For decades, Lindzen has also been dogged by unsubstantiated accusations of corruption—specifically, that he’s being paid off by the energy industry. He denies this with a laugh.
“I wish it were so!” What appears to be the primary source for this calumny—a Harper’s magazine article from 1995—provides no documentation for its assertions. But that hasn’t stopped the charge from being widely disseminated on the Internet. 
 
One frustrating feature of the climate debate is that people’s outlook on global warming usually correlates with their political views. So if a person wants low taxes and restrictions on abortion, he probably isn’t worried about climate change. And if a person supports gay marriage and raising the minimum wage, he most likely thinks the threat from global warming warrants costly public-policy remedies. And of course, even though Lindzen is an accomplished climate scientist, he has his own political outlook—a conservative one. 
 
He wasn’t reared that way. “Growing up in the Bronx, politics, I would say, was an automatic issue. I grew up with a picture of Franklin Roosevelt over my bed.” But his views started to shift in the late ’60s and ’70s. “I think [my politics] began changing in the Vietnam war. I was deeply disturbed by the way vets were being treated,” he says. He also says that his experience in the climate debate—and the rise in political correctness in the universities throughout the ’70s and ’80s—further pushed him to the right. So, yes, Lindzen, a climate skeptic, is also a political conservative whom one would expect to oppose many environmental regulations for ideological, as opposed to scientific, reasons.
By the same token, it is well known that the vast majority of “alarmist” climate scientists, dependent as they are on federal largesse, are liberal Democrats. 
 
But whatever buried ideological component there may be to any given scientist’s work, it doesn’t tell us who has the science right. In a 2012 public letter, Lindzen noted, “Critics accuse me of doing a disservice to the scientific method. I would suggest that in questioning the views of the critics and subjecting them to specific tests, I am holding to the scientific method.” Whoever is right about computer models, climate sensitivity, aerosols, and water vapor, Lindzen is certainly right about that. Skepticism is essential to science.
 
In a 2007 debate with Lindzen in New York City, climate scientist Richard C. J. Somerville, who is firmly in the “alarmist” camp, likened climate skeptics to “some eminent earth scientists [who] couldn’t be persuaded that plate tectonics were real .  .  . when the revolution of continental drift was sweeping through geology and geophysics.” 
 
“Most people who think they’re a Galileo are just wrong,” he said, much to the delight of a friendly audience of Manhattanites. 
 
But Somerville botched the analogy. The story of plate tectonics is the story of how one man, Alfred Wegener, came up with the theory of continental drift, only to be widely opposed and mocked. Wegener challenged the earth science “consensus” of his day. And in the end, his view prevailed.
 
Ethan Epstein is an assistant editor at The Weekly Standard.

Sunday, January 5, 2014

Medieval Warm Period

From Wikipedia, the free encyclopedia
    
Northern hemisphere temperature reconstructions for the past 2,000 years.

The Medieval Warm Period (MWP), Medieval Climate Optimum, or Medieval Climatic Anomaly was a time of warm climate in the North Atlantic region that may also have been related to other climate events around the world during that time, including in China[1] and other countries,[2][3][4][5][6][7] lasting from about AD 950 to 1250.[8] It was followed by a cooler period in the North Atlantic termed the Little Ice Age. Some refer to the event as the Medieval Climatic Anomaly, as this term emphasizes that effects other than temperature were important.[9][10]

Despite substantial uncertainties, especially for the period prior to 1600 for which data are scarce, the warmest period of the last 2,000 years prior to the 20th century very likely occurred between 950 and 1100, but temperatures were probably between 0.1 °C and 0.2 °C below the 1961 to 1990 mean and significantly below the level shown by instrumental data after 1980. Proxy records from different regions show peak warmth at different times during the Medieval Warm Period, indicating the heterogeneous nature of climate at the time.[11] Temperatures in some regions matched or exceeded recent temperatures in these regions, but globally the Medieval Warm Period was cooler than recent global temperatures.[8]
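Figures like “0.1 °C to 0.2 °C below the 1961 to 1990 mean” are temperature anomalies: departures from the average of a fixed reference period. The sketch below shows how such anomalies are computed; the record values in it are made up for illustration and are not real reconstruction data.

```python
def anomalies(series, ref_start, ref_end):
    """Convert absolute temperatures to anomalies vs. a reference-period mean.

    series: {year: temperature in deg C}; the baseline is the mean of all
    entries whose year falls within [ref_start, ref_end].
    """
    ref = [t for year, t in series.items() if ref_start <= year <= ref_end]
    baseline = sum(ref) / len(ref)
    return {year: round(t - baseline, 2) for year, t in series.items()}

# Hypothetical decadal values, not a real proxy reconstruction:
fake_record = {1000: 13.9, 1500: 13.6, 1975: 14.0, 1985: 14.0, 2000: 14.4}
print(anomalies(fake_record, 1961, 1990))
```

In this toy record the medieval entry (year 1000) comes out slightly below the 1961–1990 mean, mirroring the kind of deficit the reconstruction describes.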
 

Initial research

The Medieval Warm Period (MWP) is generally thought to have occurred from about AD 950–1250, during the European Middle Ages.[8] In 1965 Hubert Lamb, one of the first paleoclimatologists, published research based on data from botany, historical document research and meteorology combined with records indicating prevailing temperature and rainfall in England around 1200 and around 1600. He proposed that "Evidence has been accumulating in many fields of investigation pointing to a notably warm climate in many parts of the world, that lasted a few centuries around A.D. 1000–1200, and was followed by a decline of temperature levels till between 1500 and 1700 the coldest phase since the last ice age occurred."[12]

The warm period became known as the MWP, and the cold period was called the Little Ice Age (LIA). However, this view was questioned by other researchers; the IPCC First Assessment Report of 1990 discussed the "Medieval Warm Period around 1000 AD (which may not have been global) and the Little Ice Age which ended only in the middle to late nineteenth century."[13] The IPCC Third Assessment Report from 2001 summarised research at that time, saying "…current evidence does not support globally synchronous periods of anomalous cold or warmth over this time frame, and the conventional terms of 'Little Ice Age' and 'Medieval Warm Period' appear to have limited utility in describing trends in hemispheric or global mean temperature changes in past centuries".[14] Global temperature records taken from ice cores, tree rings, and lake deposits have shown that, taken globally, the Earth may have been slightly cooler (by 0.03 degrees Celsius) during the 'Medieval Warm Period' than in the early and mid-20th century.[15][16]

Palaeoclimatologists developing region-specific climate reconstructions of past centuries conventionally label their coldest interval as the "LIA" and their warmest interval as the "MWP".[15][17] Others follow this convention: when a significant climate event is found within the "LIA" or "MWP" time frame, they associate it with that period. Some "MWP" events are thus wet events or cold events rather than strictly warm events, particularly in central Antarctica, where climate patterns opposite to those of the North Atlantic area have been noticed.

By world region

Evidence exists across the world, often very sparsely, for changes in climatic conditions over time. Some of the "warm period" events documented below are actually "dry periods" or "wet periods."[18]

Globally

A 2009 study by Michael Mann et al. examining spatial patterns of surface temperatures shown in multi-proxy reconstructions finds that the MWP shows "warmth that matches or exceeds that of the past decade in some regions, but which falls well below recent levels globally."[8] Their reconstruction of the MWP pattern is characterised by warmth over a large part of the North Atlantic, southern Greenland, the Eurasian Arctic, and parts of North America that appears to substantially exceed the late-20th-century (1961–1990) baseline and is comparable to or exceeds that of the past one to two decades in some regions. Certain regions, such as central Eurasia, northwestern North America, and (with less confidence) parts of the South Atlantic, exhibit anomalous coolness.

North Atlantic

Central Greenland reconstructed temperature.
The last written records of the Norse Greenlanders are from a 1408 marriage in the church of Hvalsey — today the best-preserved of the Norse ruins.

A radiocarbon-dated box core in the Sargasso Sea shows that the sea surface temperature was approximately 1 °C (1.8 °F) cooler than today approximately 400 years ago (the Little Ice Age) and 1700 years ago, and approximately 1 °C warmer than today 1000 years ago (the Medieval Warm Period).[5]

Using sediment samples from Puerto Rico, the Gulf Coast and the Atlantic Coast from Florida to New England, Mann et al. (2009) found consistent evidence of a peak in North Atlantic tropical cyclone activity during the Medieval Warm Period followed by a subsequent lull in activity.[19]
Through retrieval and isotope analysis of marine cores and examination of mollusc growth patterns from Iceland, Patterson et al. were able to reconstruct a mollusc growth record at decadal resolution from the Roman Warm Period through the Medieval Warm Period and into the Little Ice Age.[20]

North America

1690 copy of the 1570 Skálholt map, based on documentary information about earlier Norse sites in America.

The 2009 Mann et al. study found warmth exceeding 1961–1990 levels in southern Greenland and parts of North America during the Medieval climate anomaly (defined for this purpose as 950 to 1250), with warmth in some regions exceeding temperatures of the 1990–2010 period. Much of the Northern hemisphere showed significant cooling during the Little Ice Age (defined for this purpose as 1400 to 1700), but Labrador and isolated parts of the United States appeared to be approximately as warm as during the 1961–1990 period.[8]

Norse colonization of the Americas has been associated with warmer periods. The Vikings took advantage of ice-free seas to colonize areas in Greenland and other outlying lands of the far north.[21]
From around AD 1000, Vikings formed settlements in two areas near the southern tip of Greenland, at a similar latitude to Iceland: the Eastern Settlement at the southern tip, and the Western Settlement to its north. A smaller group of farms between them has been identified by archaeologists as the "Middle Settlement". At that time they farmed cattle and pigs, with around a quarter of their diet from seafood, but after the climate became colder and stormier around 1250, smaller farms gradually changed to farming sheep and goats rather than cows. Around 1300 they abandoned pig farming, and from then on seal hunting provided over three quarters of their food. While there are no signs that this adversely affected their health, by mid-century trade with Norway fell away and there was little demand for their exports of seal skins and walrus tusks. One of the last documents of their occupation dates from 1408, and over the remainder of that century the remaining Vikings left in what seems to have been an orderly withdrawal, largely due to social factors such as the increased availability of farms in Scandinavian countries.[22]

Around AD 1000 the climate was sufficiently warm for the north of Newfoundland to support a Viking colony, which led to the descriptor "Vinland." An extensive settlement at L'Anse aux Meadows was found and originally excavated by Helge Ingstad.[23]
L'Anse aux Meadows, Newfoundland, today, with reconstruction of Viking settlement.

In the Chesapeake Bay, researchers found large temperature excursions (changes from the mean temperature of that time) during the Medieval Warm Period (about 950–1250) and the Little Ice Age (about 1400–1700, with cold periods persisting into the early 20th century), possibly related to changes in the strength of North Atlantic thermohaline circulation.[24] Sediments in Piermont Marsh of the lower Hudson Valley show a dry Medieval Warm period from AD 800–1300.[25]

Prolonged droughts affected many parts of the western United States, especially eastern California and the western Great Basin.[15][26] Alaska experienced three intervals of comparable warmth: AD 1–300, 850–1200, and post-1800.[27] Knowledge of the North American Medieval Warm Period has been useful in dating occupancy periods of certain Native American habitation sites, especially in arid parts of the western U.S.[28][29] Review of more recent archaeological research shows that as the search for signs of unusual cultural changes during the MWP has broadened, some of these early patterns (for example, violence and health problems) have been found to be more complicated and regionally varied than previously thought, while others (for example, settlement disruption, deterioration of long-distance trade, and population movements) have been further corroborated.[30]

Other regions

The climate in equatorial east Africa has alternated between drier than today and relatively wet. The drier climate occurred during the Medieval Warm Period (~AD 1000–1270).[31]

A sediment core from the eastern Bransfield Basin, Antarctic Peninsula, preserves climatic events in the Little Ice Age and Medieval Warm Period.[32] The core shows a distinctly cold period about AD 1000–1100, illustrating that during the "warm" period there were, regionally, periods of both warmth and cold.

Corals in the tropical Pacific Ocean suggest that relatively cool, dry conditions may have persisted early in the millennium, consistent with a La Niña-like configuration of the El Niño-Southern Oscillation patterns.[33] Although there is an extreme scarcity of data from Australia (for both the Medieval Warm Period and Little Ice Age) evidence from wave-built shingle terraces for a permanently full Lake Eyre[34] during the 9th and 10th centuries is consistent with this La Niña-like configuration, though of itself inadequate to show how lake levels varied from year to year or what climatic conditions elsewhere in Australia were like.

The MWP has been noted in Chile in a 1500-year lake bed sediment core,[35] as well as in the Eastern Cordillera of Ecuador.[36]

Adhikari and Kumon (2001), investigating sediments in Lake Nakatsuna in central Japan, found a warm period from AD 900 to 1200 that corresponded to the Medieval Warm Period, as well as three cool phases, two of which could be related to the Little Ice Age.[37] Other research in northeastern Japan shows one warm/humid interval from AD 750 to 1200 and two cold/dry intervals, from AD 1 to 750 and from 1200 to the present.[7] Ge et al. studied temperatures in China during the past 2,000 years; they found high uncertainty prior to the 16th century but good consistency over the last 500 years, highlighted by the two cold periods of the 1620s–1710s and 1800s–1860s and by the warming during the 20th century. They also found that the warming during the 10th–14th centuries in some regions might be comparable in magnitude to the warming of the last few decades of the 20th century, which was unprecedented within the past 500 years.[38]

A 1979 study from the University of Waikato found that "Temperatures derived from an 18O/16O profile through a stalagmite found in a New Zealand cave (40.67°S, 172.43°E) suggested the Medieval Warm Period to have occurred between AD 1050 and 1400 and to have been 0.75 °C warmer than the Current Warm Period."[39] The MWP has also been evidenced in New Zealand by an 1100-year tree-ring record.[40]

A reconstruction based on ice cores found the Medieval Warm Period could be distinguished in tropical South America from about 1050 to 1300, followed in the 15th century by the Little Ice Age. Peak temperatures did not rise as high as those from the late 20th century, which were unprecedented in the area during the study period going back around 1600 years.[41]

Connection, Connection, Connection…

There are approximately 86 billion neurons in the human brain. Over the past decades, we have made enormous progress in understanding their molecular, genetic, and structural makeup as well as their function. However, the real power of the central nervous system lies in the smooth coordination of large numbers of neurons. Neurons are thus organized on many different scales, from small microcircuits and assemblies all the way to regional brain networks. To interact effectively on all these levels, neurons, nuclei, cortical columns, and larger areas need to be connected. The study of neuronal connectivity has expanded rapidly in recent years. Large research groups have recently joined forces and formed consortia to tackle the difficult problems of how to experimentally investigate connections in the brain and how to analyze and make sense of the enormous amount of data that arises in the process.
 
This year's neuroscience special issue is devoted to general as well as several more specific aspects of research on connectivity in the brain. We invited researchers to review the most recent progress in their fields and to provide us with an outlook on what the future may hold in store.

To make sense of larger structures, we first have to understand the composition of their basic building blocks. Markov et al. (p. 578) describe how interareal connectivity at the single-cell level, revealed by quantitative anatomical tract tracing, is relevant to our understanding of large-scale cortical networks and their hierarchical organization.
 
A different but also rapidly growing research direction deals with the use of connectivity measures to link brain structure and cognition. From the perspective of network theory, Park and Friston (p. 579) review our current understanding of structure-function relationships in large-scale brain networks and their underlying mechanisms.
 
One of the biggest breakthroughs in understanding the heavily connected brain has been the development of noninvasive brain-scanning methods, especially functional magnetic resonance imaging (fMRI). Turk-Browne (p. 580) provides an overview of recent exciting developments in large-scale fMRI data analysis, with a focus on unbiased approaches for examining whole-brain functional connectivity during cognitive tasks. Increased computational power now allows investigation of the whole-brain correlation matrix, the temporal correlation of every voxel with every other voxel throughout the brain, and the application of multivariate pattern analysis to these correlational data.
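The whole-brain correlation matrix described above can be sketched in a few lines of NumPy. This is purely illustrative: the array sizes are toy values (real whole-brain scans have on the order of 10^5 voxels, which is exactly why this computation demands the increased computational power mentioned above), and the data here are random noise standing in for BOLD time series.

```python
import numpy as np

# Toy stand-in for an fMRI run: 200 time points x 50 "voxels".
# Real whole-brain data has ~10^5 voxels, so the full matrix
# (every voxel against every other) is very large.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 50))

# Whole-brain correlation matrix: the temporal correlation of every
# voxel with every other voxel (shape: voxels x voxels).
# np.corrcoef treats rows as variables, hence the transpose.
corr = np.corrcoef(timeseries.T)

print(corr.shape)                        # (50, 50)
print(np.allclose(np.diag(corr), 1.0))   # each voxel correlates perfectly with itself
```

Multivariate pattern analysis would then operate on this voxels-by-voxels matrix rather than on individual voxel activations.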
 
A sophisticated system that depends on the astonishingly precise interaction of a large number of cortical areas is the human ability to produce and understand language and music. Zatorre (p. 585) discusses how brain plasticity in the music and speech domains can be affected by predisposing factors that relate to brain structure and function.

Lies, Damned Lies, and Medical Science

Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.
      
Robyn Twomey/Redux

In 2001, rumors were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis. At the University of Ioannina medical school’s teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she’d like to try to prove whether they were true—he seemed to be almost daring her. She accepted the challenge and, with the professor’s and other colleagues’ help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names. “It was hard to find a journal willing to publish it, but we did,” recalls Tatsioni. “I also discovered that I really liked research.” Good thing, because the study had actually been a sort of audition. The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.

Last spring, I sat in on one of the team’s weekly meetings on the medical school’s campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.

One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti’s study didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”). Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal. Salanti remained poised, as if the grilling were par for the course, and gamely acknowledged that the suggestions were all good—but a single study can’t prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began? “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?

That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.

The city of Ioannina is a big college town a short drive from the ruins of a 20,000-seat amphitheater and a Zeusian sanctuary built at the site of the Dodona oracle. The oracle was said to have issued pronouncements to priests through the rustling of a sacred oak tree. Today, a different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy. “I take all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me, as we contemplate the tree the day after the team’s meeting. “‘Will my research grant be approved?’” He chuckles, but Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack. And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.

He first stumbled on the sorts of problems plaguing the field, he explains, as a young physician-researcher in the early 1990s at Harvard. At the time, he was interested in diagnosing rare diseases, for which a lack of case data can leave doctors with little to go on other than intuition and rules of thumb. But he noticed that doctors seemed to proceed in much the same manner even when it came to cancer, heart disease, and other common ailments. Where were the hard data that would back up their treatment decisions? There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases. A new “evidence-based medicine” movement was just starting to gather force, and Ioannidis decided to throw himself into it, working first with prominent researchers at Tufts University and then taking positions at Johns Hopkins University and the National Institutes of Health. He was unusually well armed: he had been a math prodigy of near-celebrity status in high school in Greece, and had followed his parents, who were both physician-researchers, into medicine. Now he’d have a chance to combine math and medicine by applying rigorous statistical analysis to what seemed a surprisingly sloppy field. “I assumed that everything we physicians did was basically right, but now I was going to help verify it,” he says. “All we’d have to do was systematically review the evidence, trust what it told us, and then everything would be perfect.”

It didn’t turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science “never minds” are hardly secret. And they sometimes make headlines, as when in recent years large studies or growing consensuses of researchers concluded that mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; or when widely prescribed antidepressants such as Prozac, Zoloft, and Paxil were revealed to be no more effective than a placebo for most cases of depression; or when we learned that staying out of the sun entirely can actually increase cancer risks; or when we were told that the advice to drink lots of water during intense exercise was potentially fatal; or when, last April, we were informed that taking fish oil, exercising, and doing puzzles doesn’t really help fend off Alzheimer’s disease, as long claimed. Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.

But beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research. “Randomized controlled trials,” which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time. “I realized even our gold-standard research had a lot of problems,” he says. Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.

This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”

Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.

In the late 1990s, Ioannidis set up a base at the University of Ioannina. He pulled together his team, which remains largely intact today, and started chipping away at the problem in a series of papers that pointed out specific ways certain studies were getting misleading results. Other meta-researchers were also starting to spotlight disturbingly high rates of error in the medical literature. But Ioannidis wanted to get the big picture across, and to do so with solid data, clear reasoning, and good statistical analysis. The project dragged on, until finally he retreated to the tiny island of Sikinos in the Aegean Sea, where he drew inspiration from the relatively primitive surroundings and the intellectual traditions they recalled. “A pervasive theme of ancient Greek literature is that you need to pursue the truth, no matter what the truth might be,” he says. In 2005, he unleashed two papers that challenged the foundations of medical research.

He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.
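At bottom, the model is a Bayesian calculation: the fraction of "significant" findings that are actually true depends on the prior odds that a tested relationship is real, the study's power and significance threshold, and how much bias leaks into the analysis. The sketch below follows the structure of the published formula; the parameter values are illustrative, not taken from the paper's tables.

```python
def ppv(prior_odds, alpha=0.05, power=0.8, bias=0.0):
    """Positive predictive value of a statistically significant finding,
    following the structure of Ioannidis's 2005 PLoS Medicine model.

    prior_odds (R): ratio of true to false relationships among those tested.
    bias (u):       fraction of analyses that would not otherwise be
                    positive but are reported as positive anyway.
    """
    R, a, b, u = prior_odds, alpha, 1 - power, bias
    true_positives = (1 - b) * R + u * b * R
    all_positives = R + a - b * R + u - u * a + u * b * R
    return true_positives / all_positives

# A well-powered test of a 50/50 hypothesis with no bias: findings are
# usually right.
print(round(ppv(1.0), 3))               # 0.941

# Long-shot hypotheses (1 true per 1,000 tested) with modest bias:
# the overwhelming majority of "positive" findings are false.
print(round(ppv(0.001, bias=0.2), 3))   # 0.003
```

The qualitative point survives any reasonable choice of parameters: the more speculative the hypotheses and the more wiggle room in the analysis, the lower the chance that a published positive finding is true.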

Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.

Driving me back to campus in his smallish SUV—after insisting, as he apparently does with all his visitors, on showing me a nearby lake and the six monasteries situated on an islet within it—Ioannidis apologized profusely for running a yellow light, explaining with a laugh that he didn’t trust the truck behind him to stop. Considering his willingness, even eagerness, to slap the face of the medical-research community, Ioannidis comes off as thoughtful, upbeat, and deeply civil. He’s a careful listener, and his frequent grin and semi-apologetic chuckle can make the sharp prodding of his arguments seem almost good-natured. He is as quick, if not quicker, to question his own motives and competence as anyone else’s. A neat and compact 45-year-old with a trim mustache, he presents as a sort of dashing nerd—Giancarlo Giannini with a bit of Mr. Bean.

The humility and graciousness seem to serve him well in getting across a message that is not easy to digest or, for that matter, believe: that even highly regarded researchers at prestigious institutions sometimes churn out attention-grabbing findings rather than findings likely to be right. But Ioannidis points out that obviously questionable findings cram the pages of top medical journals, not to mention the morning headlines. Consider, he says, the endless stream of results from nutritional studies in which researchers follow thousands of people for some number of years, tracking what they eat and what supplements they take, and how their health changes over the course of the study. “Then the researchers start asking, ‘What did vitamin E do? What did vitamin C or D or A do? What changed with calorie intake, or protein or fat intake? What happened to cholesterol levels? Who got what type of cancer?’” he says. “They run everything through the mill, one at a time, and they start finding associations, and eventually conclude that vitamin X lowers the risk of cancer Y, or this food helps with the risk of that disease.” In a single week this fall, Google’s news page offered these headlines: “More Omega-3 Fats Didn’t Aid Heart Patients”; “Fruits, Vegetables Cut Cancer Risk for Smokers”; “Soy May Ease Sleep Problems in Older Women”; and dozens of similar stories.
 

The Problem of Perfection

Image source: Om Warrior - http://inspireusall.wordpress.com/tag/perfection/

Entropy seems to be one of those things that professions use to keep outsiders outside. One can almost hear the physicists conspiring: Order is too easy so we need some mystery. Thus when something has more order it has less entropy.

Physicists agree the early universe’s entropy was very low. Low entropy means almost perfect order. This may be the way it was; it’s not the way it goes. If one watches anything that’s isolated, so disorder can’t slip out the back door and go someplace else, one sees it get disordered over time. This is sometimes called the Second Law. Some situations seem to show that it is wrong. What could be more striking than the order of a growing human brain? Well, this is where that back-door rule comes in. A human brain won’t grow in isolation. To check the Second Law, one must watch the infant that comes with it, and its inputs and its outputs: the baby food and diapers; the factories that make them; the garbage dumps; the once-clean water going, dirty, down the drain; CO2 emissions and the whole shebang. Life forms make their order using energy to move disorder someplace else. Total entropy goes up.
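The no-back-door argument can be made concrete with a toy model: particles sealed in a two-chamber box, started in a perfectly ordered state (all on one side), hop randomly between sides. This is purely illustrative; the "entropy" here is the Shannon entropy of the left/right occupation fractions, a simple stand-in for thermodynamic entropy of mixing.

```python
import math
import random

random.seed(1)
N = 1000        # particles in an isolated two-chamber box
left = N        # start perfectly ordered: everything on the left

def mixing_entropy(n_left, n_total):
    """Shannon entropy (bits) of the left/right occupation fractions."""
    s = 0.0
    for n in (n_left, n_total - n_left):
        p = n / n_total
        if p > 0:
            s -= p * math.log2(p)
    return s

entropies = [mixing_entropy(left, N)]
for step in range(200):
    # Each step, one randomly chosen particle hops to the other side.
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    entropies.append(mixing_entropy(left, N))

print(entropies[0])                              # 0.0 (perfect order)
print(f"final entropy: {entropies[-1]:.2f} bits")  # disorder has grown
```

Run the box forward and the entropy climbs toward its maximum (1 bit, an even split); run it in reverse, and nothing in the dynamics pulls it back to the ordered state. That asymmetry is the Second Law in miniature.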

The order of the universe is a big problem. Since the universe is everything there’s nothing that can interfere. It’s a truly isolated system, no back door. The Second Law says that its order must run down. So it must be more ordered as we look far back in time. The order of the universe at its beginning must have been extremely high. Ever since―every microsecond, every million years―its order has gone down. Translated into English this means the Beginning was orderly to an almost inconceivable degree. Barrow called it very special. So in entropic terms the Problem of Perfection is: In the first instant of the universe its entropy was very low. How did it get to be that way?

Argumentum Ad Monsantum: Bill Maher and The Lure of a Liberal Logical Fallacy by Kyle Hill

Let’s get real. It doesn’t matter if you think Monsanto is evil. Genetically modified food is safe—no matter what logical fallacies lead liberals like Bill Maher to believe otherwise.
 
AP Photo/HBO, Janet Van Ham

If Monsanto has anything to do with it, it must be evil. That seems to be the prevailing opinion on the monolithic biotech company. Following that logic, if they produce corn or soybeans or another crop that has been genetically modified (GM), those too must be evil. That’s Bill Maher’s reasoning at least—reasoning that lures liberals away from science and towards denial.

Making the leap from Monsanto’s business practices—whatever you may think of them—to the “dangers” of GM foods is a mistake in logical reasoning. It is akin to saying landscape paintings are potentially evil because the painter was a serial killer. The conclusion does not follow from the premise. And giving some product or process the attributes of its user is the logical fallacy that currently leads typically pro-science liberals like Maher astray on questions of nuclear power, vaccination, and especially GMOs. Whether genetically modified foods are safe is a scientific, not a political, question. To intertwine views of Monsanto with GM foods is therefore an argumentum ad monsantum, a disturbingly popular logical fallacy, and Bill Maher is the classic example.

I am a fan of Real Time with Bill Maher. It’s HBO’s version of The Daily Show, with a liberal host poking fun at the foibles of government and politicians. But every so often, satire can veer off course, lampooning scientific findings as if they were the latest sex scandal. This is the case with Bill Maher. Though you will hear him on Real Time staunchly defending the science of climate change and evolution against politically-charged deniers, you will also hear him railing against vaccines, nuclear power, and GMOs with the same polemic language he satirizes.

For example, in episode #294 of Real Time, Maher invites the director of “GMO OMG” for a conversation about the “dangers” of GM foods. (Note that fellow Scientific American writer Ferris Jabr has convincingly argued why “GMO OMG” is an emotionally manipulative film that skimps on the science.) Maher begins the conversation with a question: “I don’t want to start things off by asking why Monsanto is evil…but why is Monsanto evil?” The director goes on to explain why, with the rest of the panel chiming in. Then you see something very telling. CNN contributor David Frum, a Republican, interrupts to explain how humans have been genetically modifying food ever since we prioritized seeds from desirable crops at the dawn of agriculture. He was booed and hissed by the crowd. I mention Frum’s political affiliation because Real Time has an admitted slant towards liberalism, and Republicans encounter much resistance on each episode. This time was no different. Though Frum was exactly right on the science, he was treated as exactly wrong. The argumentum ad monsantum struck again.

Maher, who I think gets a lot of science right, gets the science of GM food so wrong because he is unable or unwilling to disentangle the politics from the science. Many liberals seem to have the same problem.

The first component of the liberal opposition to genetically modified food appears to be a genuine misunderstanding of how it works. The genetic modification of food is a much more exact science than many opponents realize. As this fantastic explainer outlines, genetic modification is typically about inserting a single gene—whose effects we test for toxicity and allergenic properties—into a crop. It is not a haphazard Frankenstein process of sowing and suturing animal and plant parts together. In fact, a Frankenstein-style process is exactly what was done before genetic modification.

In the early days of agriculture, farmers crossbred plants to take advantage of the genetic diversity thrown up by evolutionary processes. Whatever beneficial properties emerged were saved in the seeds and transplanted into the next generation. This is a Mary Shelley-style process, with more recent farmers exposing their plants to radiation in the hopes of increasing the genetic variations at their disposal. That’s a fact that is absent from many a Monsanto discussion. If anything exemplifies the messy, unknown nature of altering crops, it’s what farming looked like before genetic modification.

Even when we are taking genes from animals and inserting them into plants or vice-versa, the results are still safe, reduce pesticide use, and dramatically increase crop yields. In fact, this year, a review of over 1,700 papers [PDF] concerning the safety of GM food in the journal Critical Reviews in Biotechnology concluded, “The scientific research conducted so far has not detected any significant hazards directly connected with the use of genetically engineered crops.”

Increasing the hardiness of our crops to better feed the world is also the main benefit of genetic modification, often omitted from the curious liberal opposition to GM food. As climate change picks up its pace, we will need crops that can feed more people while at the same time resisting parasites, infections, and drought. Scientifically established safety is bolstered by moral obligation.

While Bill Maher has a habit of outright denying the safety of GM food, he does sometimes temper his views by offering the alternative—growing food “organically” (GM food is still organic material, of course, but it may not fit the USDA’s designations of what “organic” food is). However, the supposed superiority of organically grown food has little scientific justification. Organically grown food still uses pesticides, those pesticides are largely untested, the pesticide reduction that organic food does offer is insignificant at best, and the food itself is no more nutritious or safe than its engineered alternative.

Still, even though the scientific community is in agreement over the safety of GM foods, there is a question of disclosure, the second component of the argumentum ad monsantum. To Maher, the “evil” nature of Monsanto is bound up in the fact that GM foods are not currently labeled as such. We deserve to know what we are eating, and if Monsanto won’t tell us, GM food must be bad for us, or so the argument seems to go.

But again, the science must be separated from the politics. No one will deny that Monsanto had a dog in the fight to prevent GM labeling in California, but Maher might be surprised to hear that labeling genetically modified foods is a bad idea, despite the benefits of transparency. From a safety standpoint there is no scientific reason to label, and doing so would likely only create more fear around an already beleaguered technology. That fear would probably have damaging implications for all advances in food technology. Just look at what happens when people realize that fluoride, a safe and amazingly effective addition to our public water supply, is coming from their tap.

For questions that science, and not politics, has a bearing on, it really does not matter what you think of Monsanto. It does not matter what you think of the corporation’s business tactics or how it treats its customers or employees. Similarly, it does not matter if you think Al Gore a hypocrite or Charles Darwin a heathen: climate change and evolution are real and established. By calling GMOs “poison” and “evil,” Bill Maher poisons the well of reasoned scientific discussion with ideologically driven fear mongering.

It’s fashionable to think that the conservative parties in America are the science deniers. You certainly wouldn’t have trouble supporting that claim. But liberals are not exempt. Though the denial of evolution, climate change, and stem cell research tends to find a home on the right of the aisle, the denial of vaccine, nuclear power, and genetic modification safety has found a home on the left (though the extent to which each side denies the science is debatable). It makes one wonder: why do liberals like Maher, psychologically considered open to new ideas, deny the science of GM food while accepting the science in other fields?

The answer to that giant question is an unsettled one, but themes do leap out of the literature. Simplifying greatly, cognitive bias and ideology play a large role. We tend to accept information that confirms our prior beliefs and ignore or discredit information that does not. This confirmation bias settles over our eyes like distorting spectacles for everything we look at. Could this be at the root of the argumentum ad monsantum? It isn’t inconsistent with the trend Maher has shown repeatedly on his show. A liberal opposition to corporate power, to capitalistic considerations of human welfare, could be incorrectly coloring the GM discussion. Perhaps GMOs are the latest casualty in a cognitive battle between confirmation bias and reality.

But just how much psychology plays into the opposition of GMOs is a question that can’t even be asked until the politics and the science are untangled.

To his credit, Bill Maher has a record of seeing the science forest for the political trees when it comes to topics like climate change and evolution. He spots the political manipulation of climate change when the Koch brothers fund disinformation. He picks out when arguments to “teach the controversy” are just semantic manipulations to get religious ideology into science classes. I hope that he, and the liberal bastion of science denial he sometimes represents, one day gets real and recognizes how much his political views are manipulating his stance on genetically modified foods.


Tip of the hat to Brian Dunning who came up with the phrase “argumentum ad monsantum” on Twitter.
About the Author: Kyle Hill is a freelance science writer and communicator who specializes in finding the secret science in your favorite fandom. Follow on Twitter @Sci_Phile.

SPS-ALPHA: The First Practical Solar Power Satellite via Arbitrarily Large Phased Array

The vision of delivering solar power to Earth from platforms in space has been known for decades. However, early architectures to accomplish this vision were technically complex and unlikely to prove economically viable...A new SPS concept has been proposed that resolves many, if not all, of those uncertainties: “SPS-ALPHA” (Solar Power Satellite by means of Arbitrarily Large Phased Array).

http://www.nss.org/settlement/ssp/library/SPS_Alpha_2012_Mankins.pdf

Read more!

http://www.nss.org/settlement/ssp/library/index.htm
(A 2011-2012 NASA NIAC Phase 1 Project)

The vision of delivering solar power to Earth from platforms in space has been known for decades. However, early architectures to accomplish this vision were technically complex and unlikely to prove economically viable. Some of the issues with these earlier solar power satellite (SPS) concepts – particularly involving technical feasibility – were addressed by NASA’s space solar power (SSP) studies and technology research in the mid to late 1990s. Despite that progress, ten years ago a number of key technical and economic uncertainties remained. A new SPS concept has been proposed that resolves many, if not all, of those uncertainties: “SPS-ALPHA” (Solar Power Satellite by means of Arbitrarily Large Phased Array).

During 2011-2012 the NASA Innovative Advanced Concepts (NIAC) Program supported a Phase 1 “SPS-ALPHA” project, the goal of which was to establish the technical and economic viability of the SPS-ALPHA concept to an early TRL 3 – analytical proof-of-concept – and provide a framework for further study and technology development. The objectives of this project were to: (1) conduct an initial end-to-end systems analysis of the SPS-ALPHA concept in order to determine its technical feasibility; (2) identify and assess in greater detail the key technology challenges inherent in the architecture (including figures of merit for each critical technology area); (3) conduct an initial evaluation of the economic viability of the concept (as a function of key performance parameters); and (4) define a preliminary roadmap for the further development of the SPS-ALPHA concept. This report presents the results of that study.
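The “phased array” in the name refers to the standard technique of steering a microwave beam electronically: each transmitting element is given a small phase offset so that the emissions add up in phase along the desired direction, with no moving parts. As a minimal sketch of that idea for a uniform linear array (the function, frequency, and parameters below are illustrative, not taken from the report):

```python
import math

def steering_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Per-element phase shifts (radians) that steer a uniform linear
    array's main beam to steer_deg off boresight.

    Element n is offset by phi_n = -k * d * n * sin(theta), where
    k = 2*pi/lambda, so all emissions arrive in phase along theta.
    """
    theta = math.radians(steer_deg)
    k = 2 * math.pi / wavelength_m  # wavenumber
    return [-k * spacing_m * n * math.sin(theta) for n in range(n_elements)]

# Example: a 2.45 GHz microwave link (a frequency often discussed in the
# space solar power literature), half-wavelength element spacing, beam
# steered 10 degrees off boresight.
wavelength = 3e8 / 2.45e9  # ~0.12 m
phases = steering_phases(8, wavelength / 2, wavelength, 10.0)
```

The “arbitrarily large” part of SPS-ALPHA is then a question of scale: the same phase relationship applies whether the array has eight elements or millions of mass-produced modules.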

This work was performed under NASA Grant NNX11AR34G.

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...