
Friday, May 22, 2015

Creationism – Are We Winning The Battle and Losing The War?



One of the major ambitions of my life is to promote science and critical thinking, which I do under the related banners of scientific skepticism and science-based medicine. This is a huge endeavor, with many layers of complexity. For that reason it is tempting to keep one’s head down, focus on small manageable problems and goals, and not worry too much about the big picture. Worrying about the big picture causes stress and anxiety.

I have been doing this too long to keep my head down, however. I have to worry about the big picture: are we making progress, are we doing it right, how should we alter our strategy, is there anything we are missing?

The answers to these questions are different for each topic we face. While we are involved in one large meta-goal, it is composed of hundreds of sub-goals, each of which may pose its own challenges. Creationism, for example, is one specific topic that we confront within our broader mission of promoting science.

Over my life the defenders of science have won every major battle against creationism, in the form of major court battles, many at the Supreme Court level. The most recent was Kitzmiller v. Dover, which effectively killed Intelligent Design as a strategy for pushing creationism into public schools. The courts are a great venue for the side of science, because of the Establishment Clause of the Constitution and the way it has been interpreted by the courts. Creationism is a religious belief, pure and simple, and it has no place in a science classroom. Evolution, meanwhile, is an established scientific theory with overwhelming support in the scientific community. It is exactly the kind of consensus science that should be taught in the classroom. When we have this debate in the courtroom, where there are rules of evidence and logic, it’s no contest. Logic, facts, and the law are clearly on the side of evolution.

Despite the consistent legal defeat of creationism, over the last 30 years Gallup’s poll of American public belief in creationism has not changed. In 1982, 44% of Americans endorsed the statement: “God created humans in their present form.” In 2014 the figure was 42%; in between, it fluctuated between 40% and 47% without any trend.

There has been a trend in the number of people willing to endorse the statement that humans evolved without any involvement from God, which increased from 9% to 19%. This likely reflects a general trend, especially among younger people, away from religious affiliation – but one that apparently has not penetrated the fundamentalist Christian segments of society.

Meanwhile creationism has become, if anything, more of an issue for the Republican party. It seems that any Republican primary candidate must either endorse creationism or at least pander with evasive answers such as, “I am not a scientist” or “teach the controversy” or something similar.

Further, in many parts of the country with a strong fundamentalist Christian population, schools are simply ignoring the law with impunity and teaching outright creationism, or at least the made-up “weaknesses” of evolutionary theory. They are receiving cover from pandering or believing politicians. This is the latest creationist strategy – use “academic freedom” laws to provide cover for teachers who want to introduce creationist propaganda into their science classrooms.

Louisiana is the model for this. Zack Kopplin was a high school student when Bobby Jindal signed the law that allows teachers to introduce creationist material into Louisiana classrooms. He has since made it his mission to oppose such laws, and he writes about his frustrations in trying to make any progress. Creationists are simply too politically powerful in the Bible Belt.

This brings me back to my core question – how are we doing (at least with respect to the creationism issue)? The battles we have fought needed to be fought and it is definitely a good thing that science and reason won. There are now powerful legal precedents defending the teaching of evolution and opposing the teaching of creationism in public schools, and I don’t mean to diminish the meaning of these victories.

But we have not penetrated in the slightest the creationist culture and political power, which remains solid at around 42% of the US population. It seems to me that the problem is self-perpetuating. Students raised in schools that teach creationism or watered-down evolution and live in families and go to churches that preach creationism are very likely to grow up to be creationists. Some of them will be teachers and politicians.

From one perspective we might say that we held the line defensively against a creationist offense, but that is all – we held the line. Perhaps we need to now figure out a way to go on offense, rather than just waiting to defend against the next creationist offense. The creationists have think tanks who spend their time thinking about the next strategy. At best we have people and organizations (like the excellent National Center for Science Education) who spend their time trying to anticipate the next strategy.

The NCSE’s own description of their mission is, “Defending the teaching of evolution and climate science.” They are in a defensive posture. Again, to be clear, they do excellent and much needed work and I have nothing but praise for them. But looking at the big picture, perhaps we need to add some offensive strategies to our defensive strategies.

I don’t know exactly what form those offensive strategies would take. This would be a great conversation for skeptics to have, however. Rather than just fighting against creationist laws, for example, perhaps we could craft a model pro-science law that will make it more difficult for science teachers to hide their teaching of creationism.
Perhaps we need a federal law to trump any pro-creationist state laws. It’s worth thinking about.

I also think we need a cultural change within the fundamentalist Christian community. This will be a tougher nut to crack. We should, however, be having a conversation with them about how Christian faith can be compatible with science. Faith does not have to directly conflict with the current findings of science. Modeling ways in which Christians can accommodate their faith to science may be helpful. And to be clear – I am not saying that science should accommodate itself to faith, that is exactly what we are fighting against.

Conclusion

As the skeptical movement grows and evolves, I would like to see it mature in the direction where high-level strategizing on major issues can occur. It is still very much a grassroots movement without any real organization. At best there is networking going on, and perhaps that is enough. At the very least we should parlay those networks into goal-oriented strategies on specific issues.

Creationism is one such issue that needs some high-level think tank attention.

‘Natural’ illusions: Biologist’s failed attempt to defend organic food

| May 22, 2015 |


Here I write of my attempt to defend organic, of the risks of repeating slogans, and of how pieces of worldview are built and change, sometimes as easily as with a comment or two.

“Yes, Monsanto is pure evil,” I said. This was about a year ago, in 2013, and I was defending science and nuanced thinking in the same sentence, no less. “Monsanto is pure evil,” I said, “but genetic engineering is just a tool and in itself is neither good nor bad.” My university course literature had given a balanced view of many possible benefits of GM while highlighting a couple of areas of caution. My main insight on Monsanto came from the movie Food Inc., confirmed by plenty of common internet knowledge and a couple of trusted friends of mine.

I had always considered myself a rational and science-minded person so I was upset when I first heard people object to GMOs for reasons such as not wanting genes in their food (in the late nineties, when the topic was still very new and knowledge scarce) or just because ‘it wasn’t natural’, which I saw as a fear of the unknown.

Later on I was incredibly frustrated to find that a lot of people opposed standard vaccinations, contrary to scientific evidence. So when I stumbled on a Facebook page called “We love vaccines and GMOs” – though I didn’t exactly think of my view on genetic engineering as ‘love’ – I was happy to find a place to share my frustration. But as I started following their posts I was confronted with something that gave me pause: several of them criticised organic farming.

I had been a loyal organic consumer for a decade. My vegan friends had talked a lot about how detrimental industrial agriculture was for the environment, and even my favourite ecology teacher back at university mentioned how important it was to buy organic milk and meat. Living on student subsidies and saving on just about everything else, I was convinced that buying ecological produce was vital for the environment. (In Finland the label actually goes under the name ‘Eco’, and the Swedish label, roughly translated as ‘Demand’, also states the food is ecologically produced. In Switzerland it’s called ‘Bio’, for biologically farmed.) Paying twice the price was more than worth it.

I couldn’t just leave the criticism unaddressed. Somebody needed to present a nuanced voice for organic farming, so that people would not group it together with anti-science sentiments. So I started digging. I read about comprehensive meta-analyses of studies which found that organic food was no more nutritious than conventional produce – News in Stanford Medicine – Little evidence of health benefits from organic foods (or click for the paper behind the paywall), for instance, said:
science simply cannot find any evidence that organic foods are in any way healthier than non-organic ones – and scientists have been comparing the two for over 50 years.
and Scientific American ran Mythbusting 101: Organic farming vs. conventional agriculture. Interesting, I thought, but hardly devastating. That wasn’t my reason for choosing organic. I read about how organic was an industry like any other, looking for profit, with all the dirt that entails. Even Michael Pollan criticized the organic industry in his book The Omnivore’s Dilemma. Well, sure. It couldn’t exactly be a charity, could it? Not every company was perfectly principled. It didn’t mean that the whole organic label was bad.

Then I read a Swiss animal welfare organisation’s statement (article in German) that organic did not necessarily translate into greater well-being for the animals – that the label was more narrowly focused on the farming of crops. As a great animal lover I thought, okay, that’s a pity; for animal products I would have to look for different labels. But I would continue to support organic for the most important point, for the sake of the environment.

I continued. Then there was a study about organic pesticides being no more benign than conventional ones. That was surprising, but it made sense now that I thought of it – they would all have to be chemicals of some kind that kill plants and insects. It turns out that when researchers compared natural and synthetic chemicals, they came to the conclusion that natural or not does not make a big difference. As one commenter concluded:
Until recently, nobody bothered to look at natural chemicals (such as organic pesticides), because it was assumed that they posed little risk. But when the studies were done, the results were somewhat shocking: you find that about half of the natural chemicals studied are carcinogenic as well. This is a case where everyone (consumers, farmers, researchers) made the same, dangerous mistake. We assumed that “natural” chemicals were automatically better and safer than synthetic materials, and we were wrong. It’s important that we be more prudent in our acceptance of “natural” as being innocuous and harmless.
I further read about how the risks from pesticides for the consumer were actually very small, and that people feared them out of all proportion! What a relief. Why did so many seem to think the opposite?

Further, there was a study that said organic farming actually contributed more to the pollution of groundwater, and then a meta-analysis of more than a hundred studies saying organic had more ammonia and nitrogen run-off per product unit, leading to greater eutrophication and acidification potential. Ouch. That was not what I would have thought. But considering the imprecise mode of fertilisation (spreading out manure), that too did make sense. Most importantly, also confirmed by several sources (and here and here), I found out that the big issue with organic farming was the yield – forgoing the more efficient synthetic methods meant ending up with roughly one third less end product (estimates range from one fifth to one half less). Nature News:
Crop yields from organic farming are as much as 34 percent lower than those from comparable conventional farming practices, the analysis finds. Organic agriculture performs particularly poorly for vegetables and some cereal crops such as wheat, which make up the lion’s share of the food consumed around the world.
Cereals and vegetables need lots of nitrogen to grow, suggesting that the yield differences are in large part attributable to nitrogen deficiencies in organic systems, says Seufert.
This in turn meant that if organic farming were scaled up, we would need to find a third more land to make up for its inefficiency.

When I looked at these studies one by one, my immediate reaction was: surely now that these results were available, where necessary, organic farming practices could be adapted so that they would continue to provide consumers with the best environmentally friendly sources of food. But that relied on an assumption I held that I had so far not even thought of checking.

I thought organic farming was based on evidence, but it wasn’t. It wasn’t designed by studying what would be best for the environment. On the contrary, to my surprise I found its roots were actually in biodynamic agriculture – a method that emphasizes spiritual and mystical perspectives on farming. What? How could I have missed such a point for a decade? The picture I was beginning to piece together was that being ‘organic’ was based on the idea that modern farming – industrial agriculture – was bad, and that the old ways of farming were better. That whatever was natural was better.

So anything created specifically in a lab, with intention, aim, and knowledge – anything synthetic – had to be bad. Genetic engineering (which I had thought would go hand-in-hand with many of the ecological intentions of organic farming) had to be especially bad. And companies working on modern agricultural approaches were simply the worst.
I couldn’t believe what I had just read.

While I was in the midst of what I call my organic crisis, I saw another post that was at odds with my world view. But this one was over the top: a YouTube video called “I love Monsanto”. I clicked on the link in disbelief, as I had never seen those three words in the same sentence before. Obviously it was an attention-seeking stunt, and it worked. The man in the video, Dusty, went through one Monsanto claim after another and punched them full of holes – quite easily, too. He urged his viewers not to take his word for it but to read up on the claims themselves. I did. Alleged lawsuits, bad treatment of employees, terminator seeds, Indian farmer suicides, abusing and controlling farmers, patents, notorious history, being evil, falsifying research, and on and on. I came up empty. There was nothing terrible left that I could accuse Monsanto of. I even skimmed back and forth in the movie Food Inc. and looked for supporting sources online, but instead of finding ammunition, I found more holes. With a few emotional testimonies and dramatised footage the movie painted a worldview which made all its subsequent insinuations plausible. I couldn’t believe I had not seen the gaps in its presentation on the first viewing. Why didn’t they interview any science experts or organisations? What about the FDA? Union representatives? Farming organisations? Lawyers? Immigration officials? Where was the actual evidence?

I was embarrassed and angry over how easily I had been fooled. Not only had I parroted silly slogans such as ‘Monsanto is evil’, but I had long and determinedly supported a branch of agriculture that I thought was making the world better. It dawned on me that the only improvements in fact being made were the ones in the minds of myself and the other organic supporters – thinking better of ourselves for making such ethical choices. I had shunned others for using the ‘natural’ argument, but with my wallet I had supported the idea that ‘natural’ methods were best in a mysterious way that was above and beyond evidence.

I began to question whether there even was a ‘natural way to farm’. If natural was defined by, say, the exclusion of human activities, then surely there was nothing natural about farming. On the other hand, if we accepted humans as a part of nature, and our continued innovations as part of *our nature*, then all farming was natural. Whether more traditional farming practices were inherently better than those using more advanced technology wasn’t a question that could be settled by a romantic appeal to nature. Only careful definitions of ‘better’, followed by observations, testing, and evaluation of evidence, could tell us something about that.

Another thing which may or may not be considered natural is how incredibly many humans there are on this planet today. My reading has made me accept that innovations like synthetic pesticides, fertilisers, and enhanced crops are important in the quest to keep everybody fed. I have even begun to accept that Monsanto – gasp – could play a part in making the world better. As I see it, the best kind of agriculture going forward is a scientifically oriented one. It should be free to combine the best methods, whether derived from old traditions or created in the lab, using whatever makes most sense, in order to arrive at efficient and environmentally friendly ways of farming. And what has made me happy indeed is realising that this is already being done – just look at Integrated Pest Management, crops adapted to withstand harsher environments, and conservation tillage or no-till.

Organic labels, on the other hand, are not adapting. Actually, it appears they are spending considerable sums of money to mislead the public about science (see here, here and here). That is not something I can approve of. And I am not ready to give up one third more land to support the appealing idea of ‘being natural’. That is land which isn’t there – land which comprises the last dwindling habitats for wildlife, the actual nature.

I am still searching for that label that would say ‘buying this will make the world a better place’. And if I do find one, I will do a proper background check to see if I can verify its claims. I’ve realised that I am in no way immune to basing my views on unchecked assumptions, and I shouldn’t judge others for making the same mistake. Having to change a deep-seated world view can be exhausting and painful. I am thankful for this experience and see it as a reminder to stay respectful of others, no matter what beliefs they may hold. We can help each other in remaining open to opportunities to learn.

Published originally in the Skepti Forum blog: Iida Ruishalme’s 500 words – Natural Assumptions and my own blog Thoughtscapism.

Iida Ruishalme is a writer and a science communicator who holds an M.Sc. in Biology from Sweden. She is a contributor to both the Genetic Literacy Project and Skepti-Forum.org. She blogs at Thoughtscapism. Follow her on Twitter: @Thoughtscapism or on her Facebook page.

Some pushback against Obama’s ridiculous climate remarks at the Coast Guard commencement

 
Original link:  http://wattsupwiththat.com/2015/05/21/some-pushback-against-obamas-ridiculous-climate-remarks-at-the-coast-guard-commencement/
 
Did human-caused climate change lead to war in Syria? Based only on the mainstream press headlines, you almost certainly would think so.

Reading further into the articles where the case is laid out, a few caveats appear, but the chain of events seems strong.

The mechanism? An extreme drought in the Fertile Crescent region—one that a new study finds was made worse by human greenhouse gas emissions—added a spark to the tinderbox of tensions that had been amassing in Syria for a number of years under the Assad regime (including poor water management policies).

It is not until you dig pretty deep into the technical scientific literature that you find out that the anthropogenic climate change impact on drought conditions in the Fertile Crescent is extremely minimal and tenuous—so much so that it is debatable whether it is detectable at all.

This is not to say that a strong and prolonged drought didn’t play some role in Syria’s pre-war unrest—perhaps it did, perhaps it didn’t (a debate we leave to folks much more qualified than we are on the topic)—but that the human-influenced climate change impact on the drought conditions was almost certainly too small to have mattered.

In other words, the violence would almost certainly have occurred anyway.

Several tidbits buried in the scientific literature are relevant to assessing the human impact on the meteorology behind recent drought conditions there.

It is true that climate models do project a general drying trend in the Mediterranean region (including the Fertile Crescent region in the Eastern Mediterranean) as the climate warms under increasing greenhouse gas concentrations.  There are two components to the projected drying. The first is a northward expansion of the subtropical high pressure system that typically dominates the southern portion of the region. This poleward expansion of the high pressure system would act to shunt wintertime storm systems northward, increasing precipitation over Europe but decreasing precipitation across the Mediterranean.  The second component is an increase in the temperature which would lead to increased evaporation and enhanced drying.

Our analysis will show that the connection between this drought and human-induced climate change is tenuous at best,  and tendentious at worst.

An analysis in the new headline-generating paper by Colin Kelley and colleagues, which just appeared in the Proceedings of the National Academy of Sciences, shows the observed trend in sea level pressure across the eastern Mediterranean as well as the trend projected to have taken place there by a collection of climate models. We reproduce this graphic as Figure 1. If the subtropical high is expanding northward over the region, the sea level pressure ought to be on the rise. Indeed, the climate models (bottom panel) project a rise in the surface pressure over the 20th century (blue portion of the curve) and predict even more of a rise into the future (red portion of the curve).
However, the observations (top panel, green line) do not corroborate the model hypothesis under the normative rules of science. Ignoring the confusing horizontal lines included by the authors, several things are obvious. First, the level of natural variability is such that no overall trend is readily apparent.

[Note: The authors identify an upwards trend in the observations and describe it as being “marginally significant (P < 0.14)”. In nobody’s book (except, we guess, these authors’) is a P-value of 0.14 “marginally significant”—it is widely accepted in the scientific literature that P-values must be less than 0.05 to be considered statistically significant (i.e., there is a less than 1 in 20 chance that chance alone would produce a similar result). That’s normative science. We’ve seen some rather rare cases where authors attached the term “marginally” significant to P-values up to 0.10, but 0.14 (about a 1 in 7 chance that chance alone would produce it) is taking things a bit far, hence our previous usage of the word “tendentious.”]
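To make the statistical point concrete, here is a minimal sketch (with entirely synthetic numbers, not the Kelley et al. data) of how a trend P-value is typically obtained and read:

```python
# Minimal sketch: how a P-value for a linear trend is commonly computed.
# The data below are synthetic placeholders, not the observed pressure record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1930, 2011)
# Synthetic "pressure anomaly": a weak trend buried in large year-to-year variability
pressure = 0.002 * (years - years[0]) + rng.normal(0.0, 0.5, size=years.size)

result = stats.linregress(years, pressure)
print(f"slope = {result.slope:.4f} per year, P-value = {result.pvalue:.3f}")

# Conventional reading: P < 0.05 counts as "statistically significant".
# P ~ 0.14 means roughly a 1-in-7 chance that noise alone would produce
# a trend at least this large, so it fails the conventional test.
```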

Whether or not there is an identifiable overall upwards trend, the barometric pressure in the region during the last decade of the record (when the Syrian drought took place) is not at all unusual when compared to other periods in the region’s pressure history—including periods long before large-scale greenhouse gas emissions were taking place.

Consequently, there is little in the pressure record to lend credence to the notion that human-induced climate change played a significant role in the region’s recent drought.

Figure 1. Observed (top) and modeled (bottom) sea level pressure for
the Eastern Mediterranean region (figure adapted from Kelley et al., 2015).

Another clue that the human impact on the recent drought was minimal (at best) comes from a 2012 paper in the Journal of Climate by Martin Hoerling and colleagues. In that paper, Hoerling et al. concluded that about half of the trend towards late-20th-century dry conditions in the Mediterranean region was potentially attributable to human emissions of greenhouse gases and aerosols. They found that climate models run with increasing concentrations of greenhouse gases and aerosols produce drying across the Mediterranean region in general. However, the subregional patterns of the drying are sensitive to the patterns of sea surface temperature (SST) variability and change. Alas, the patterns of SST changes are quite different in reality than they were projected to be by the climate models. Hoerling et al. describe the differences this way: “In general, the observed SST differences have stronger meridional [North-South] contrast between the tropics and NH extratropics and also a stronger zonal [East-West] contrast between the Indian Ocean and the tropical Pacific Ocean.”

Figure 2 shows visually what Hoerling was describing—the observed SST change (top) along with the model projected changes (bottom) for the period 1971-2010 minus 1902-1970. Note the complexity that accompanies reality.

Figure 2. Cold season (November–April) sea surface temperature
departures (°C) for the period 1971–2010 minus 1902–70: (top)
observed and (bottom) mean from climate model projections
(from Hoerling et al., 2012).

Hoerling et al. show that in the Fertile Crescent region, the drying produced by climate models is particularly enhanced (by some 2-3 times) if the observed patterns of sea surface temperatures are incorporated into the models rather than patterns that would otherwise be projected by the models (i.e., the top portion of Figure 2 is used to drive the model output rather than the bottom portion).

Let’s be clear here. The models were unable to accurately reproduce the patterns of SST that have been observed as greenhouse gas concentrations increased. So the observed data were substituted for the predicted values, and then that was used to generate forecasts of changed rainfall. We can’t emphasize this enough: what was not supposed to happen from climate change was forced into the models that then synthesized rainfall.

Figure 3 shows these results and Figure 4 shows what has been observed. Note that even using the prescribed SST, the model-predicted changes in Figure 3 (lower panel) are only about half as much as has been observed to have taken place in the region around Syria (Figure 4, note scale difference). This leaves the other half of the moisture decline largely unexplained. From Figure 3 (top), you can also see that only about 10mm out of more than 60mm of observed precipitation decline around Syria during the cold season is “consistent with” human-caused climate change as predicted by climate models left to their own devices.

Nor does “consistent with” mean “caused by.”

Figure 3. Simulated change in cold season precipitation
(mm) over the Mediterranean region based on the ensemble
average (top) of 22 IPCC models run with observed emissions
of greenhouse gases and aerosols and (bottom) of 40 models
run with observed emissions of greenhouse gases and aerosols
with prescribed sea surface temperatures. The difference plots
in the panels are for the period 1971–2010 minus 1902–70
(source: Hoerling et al., 2012).

For comparative purposes, according to the University of East Anglia climate history, the average cold-season rainfall in Syria is 261mm (10.28 inches). Climate models, when left to their own devices, predict a decline averaging about 10mm, or 3.8 per cent of the total. When “prescribed” (some would use the word “fudged”) sea surface temperatures are substituted for their wrong numbers, the decline in rainfall goes up to a whopping 24mm, or 9.1 per cent of the total. For additional comparative purposes, population has roughly tripled in the last three decades.
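For readers who want to check the arithmetic, a quick sketch using the figures quoted above:

```python
# Checking the percentages quoted above against the 261 mm cold-season baseline.
baseline_mm = 261.0                # average cold-season rainfall in Syria (UEA climate history)
free_running_decline_mm = 10.0     # models left to their own devices
prescribed_sst_decline_mm = 24.0   # models run with prescribed (observed) SSTs

for label, decline in [("free-running models", free_running_decline_mm),
                       ("prescribed-SST models", prescribed_sst_decline_mm)]:
    print(f"{label}: {decline / baseline_mm * 100:.1f}% of the baseline")
# -> roughly 3.8% and 9%, in line with the figures cited in the text.
```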

Figure 4. Observed change in cold season precipitation for the period
1971–2010 minus 1902–70. Anomalies (mm) are relative to the
1902–2010 period (source: Hoerling et al., 2012).

So what you are left with, after carefully comparing the patterns of observed changes in the meteorology and climatology of Syria and the Fertile Crescent region to those produced by climate models, is that the lion’s share of the observed changes is left unexplained by the models run with increasing greenhouse gases. Lacking a better explanation, these unexplained changes get chalked up to “natural variability”—and natural variability dominates the observed climate history.

You wouldn’t come to this conclusion from the cursory treatment of climate that is afforded in the mainstream press. It requires an examination of the scientific literature and a good background in, and understanding of, the rather technical research being discussed. Like all issues related to climate change, the devil is in the details, and, in the haste to produce attention-grabbing headlines, the details often get glossed over or dismissed.

Our bottom line: the identifiable influence of human-caused climate change on recent drought conditions in the Fertile Crescent was almost certainly not the proverbial straw that broke the camel’s back and led to the outbreak of conflict in Syria. The pre-existing (political) climate in the region was plenty hot enough for a conflict to ignite, perhaps partly fuelled by recent drought conditions—conditions which are part and parcel of the region’s climate, and whose intensity and frequency remain dominated by natural variability, even in this era of increasing greenhouse gas emissions from human activities.

References:

Hoerling, M., et al., 2012. On the increased frequency of Mediterranean drought. Journal of Climate, 25, 2146-2161.

Kelley, C. P., et al., 2015. Climate change in the Fertile Crescent and implications of the recent Syrian drought. Proceedings of the National Academy of Sciences, doi:10.1073/pnas.1421533112

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels, director of the Center for the Study of Science, reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

Arctic and Antarctic Temperature Anomaly & The AMO

The following is partially borrowed from https://www.facebook.com/ClimateNews.ca/photos/a.464539090317538.1073741852.306212519483530/673459936092118/?type=1&theater

NASA recently posted that Greenland is currently losing ice at 238 (or maybe 287) billion tons/year.  When I pointed out that this is only 0.1% of the total Greenland ice mass, I was criticized for not taking into account (a) that even a very small fraction of the melt could significantly raise sea levels, and (b) that the melting had recently been rising roughly two-fold every six years, such that the entire melt could happen before 2100.  Fair enough, though I could point out other facts that would challenge those criticisms, and the point of my comment -- that scary sound bites out of context do not promote scientific literacy -- was being ignored.

A new, and critical, fact about Arctic warming has just been pointed out by "Climate News" (see link above), one that throws a serious monkey wrench into the entire debate. I'll repeat the entire post below:


Arctic and Antarctic Temperature Anomaly & The AMO



The 'Atlantic Multidecadal Oscillation' (AMO) entered its warm-phase in the mid 1990's. When this happened, Arctic temperature began to warm and Arctic ice began to decline.

The attached graph [above] shows Arctic and Antarctic temperature anomalies. Notice the increase in Arctic temperature anomaly in the mid 1990's? Something triggered it; and I bet it was the warm-phase of the AMO.

The top graph is the Arctic Temperature Anomaly. The graphic on the right is the AMO Index (graphed [above]). We can clearly see that the Arctic began to warm when the AMO flipped to its warm-phase.
 
Observations seem to show that the Atlantic Multidecadal Oscillation has an influence on Arctic temperature, and on sea ice.

The graph at the bottom shows Antarctic temperature anomaly. As you can see, Antarctica hasn't been warming.

Climate data Source: Remote Sensing Systems (RSS)
__________________________________________________________________________________
These data, which I doubt were first discovered by Climate News, strongly suggest that the current Arctic warm period will come to an end in the early to mid 2020s; that the Arctic will cool (at least relatively), and that the melting will be reduced significantly.  If data from the Antarctic are any guide, Arctic temperatures could even cool back to pre-1990 levels, and ice melting likewise.

All of which brings me back to my point. Frightening numbers posted out of any context only confuse people and counter scientific literacy.  It also suggests that NASA is biased (probably due to Hansen's reign), and so should not be in the climate prediction business (though they are of course a necessary part of it).

Greenhouse effect


From Wikipedia, the free encyclopedia


A representation of the exchanges of energy between the source (the Sun), the Earth's surface, the Earth's atmosphere, and the ultimate sink outer space. The ability of the atmosphere to capture and recycle energy emitted by the Earth surface is the defining characteristic of the greenhouse effect.

Another diagram of the greenhouse effect

The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases, and is re-radiated in all directions. Since part of this re-radiation is back towards the surface and the lower atmosphere, it results in an elevation of the average surface temperature above what it would be in the absence of the gases.[1][2]

Solar radiation at the frequencies of visible light largely passes through the atmosphere to warm the planetary surface, which then emits this energy at the lower frequencies of infrared thermal radiation. Infrared radiation is absorbed by greenhouse gases, which in turn re-radiate much of the energy to the surface and lower atmosphere. The mechanism is named after the effect of solar radiation passing through glass and warming a greenhouse, but the way it retains heat is fundamentally different as a greenhouse works by reducing airflow, isolating the warm air inside the structure so that heat is not lost by convection.[2][3][4]

If an ideal thermally conductive blackbody were the same distance from the Sun as the Earth is, it would have a temperature of about 5.3 °C. However, since the Earth reflects about 30%[5][6] of the incoming sunlight, this idealized planet's effective temperature (the temperature of a blackbody that would emit the same amount of radiation) would be about −18 °C.[7][8] The surface temperature of this hypothetical planet is 33 °C below Earth's actual surface temperature of approximately 14 °C.[9] The mechanism that produces this difference between the actual surface temperature and the effective temperature is due to the atmosphere and is known as the greenhouse effect.[10]
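A minimal sketch of the standard calculation behind those numbers, assuming a solar constant of roughly 1361 W/m² and a Bond albedo of 0.30 (the exact inputs vary slightly between sources):

```python
# Effective-temperature calculation sketched from the figures in the text.
# Assumed inputs: solar constant ~1361 W/m^2, Bond albedo ~0.30.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar irradiance at Earth's distance, W m^-2
albedo = 0.30

# Thermally conductive blackbody that absorbs all incoming sunlight:
T_blackbody = (S / (4 * sigma)) ** 0.25
# Idealized planet that reflects 30% of the sunlight:
T_effective = (S * (1 - albedo) / (4 * sigma)) ** 0.25

print(f"blackbody:  {T_blackbody - 273.15:5.1f} C")   # roughly  5 C
print(f"effective:  {T_effective - 273.15:5.1f} C")   # roughly -18 C
print(f"difference from the ~14 C actual surface: {14 - (T_effective - 273.15):.0f} C")
```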

Earth’s natural greenhouse effect makes life as we know it possible. However, human activities, primarily the burning of fossil fuels and clearing of forests, have intensified the natural greenhouse effect, causing global warming.[11]

History

The existence of the greenhouse effect was argued for by Joseph Fourier in 1824. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838, reasoned from experimental observations by John Tyndall in 1859, and more fully quantified by Svante Arrhenius in 1896.[12][13]
In 1917 Alexander Graham Bell wrote “[The unchecked burning of fossil fuels] would have a sort of greenhouse effect”, and “The net result is the greenhouse becomes a sort of hot-house.”[14][15] Bell went on to also advocate for the use of alternate energy sources, such as solar energy.[16]

Mechanism

The Earth receives energy from the Sun in the form of UV, visible, and near-IR radiation, most of which passes through the atmosphere without being absorbed. Of the total amount of energy available at the top of the atmosphere (TOA), about 50% is absorbed at the Earth's surface. Because it is warm, the surface radiates far-IR thermal radiation that consists of wavelengths that are predominantly much longer than the wavelengths that were absorbed (the overlap between the incident solar spectrum and the terrestrial thermal spectrum is small enough to be neglected for most purposes). Most of this thermal radiation is absorbed by the atmosphere and re-radiated both upwards and downwards; that radiated downwards is absorbed by the Earth's surface. This trapping of long-wavelength thermal radiation leads to a higher equilibrium temperature than if the atmosphere were absent.
This highly simplified picture of the basic mechanism needs to be qualified in a number of ways, none of which affect the fundamental process.

The solar radiation spectrum for direct light at both the top of the Earth's atmosphere and at sea level

Synthetic stick absorption spectrum of a simple gas mixture corresponding to the Earth's atmosphere composition based on HITRAN data [17] created using Hitran on the Web system.[18] Green color - water vapor, red - carbon dioxide, WN - wavenumber (caution: lower wavelengths on the right, higher on the left).
  • The incoming radiation from the Sun is mostly in the form of visible light and nearby wavelengths, largely in the range 0.2–4 μm, corresponding to the Sun's radiative temperature of 6,000 K.[19] Almost half the radiation is in the form of "visible" light, which our eyes are adapted to use.[20]
  • About 50% of the Sun's energy is absorbed at the Earth's surface and the rest is reflected or absorbed by the atmosphere. The reflection of light back into space—largely by clouds—does not much affect the basic mechanism; this light, effectively, is lost to the system.
  • The absorbed energy warms the surface. Simple presentations of the greenhouse effect, such as the idealized greenhouse model, show this heat being lost as thermal radiation. The reality is more complex: the atmosphere near the surface is largely opaque to thermal radiation (with important exceptions for "window" bands), and most heat loss from the surface is by sensible heat and latent heat transport. Radiative energy losses become increasingly important higher in the atmosphere largely because of the decreasing concentration of water vapor, an important greenhouse gas. It is more realistic to think of the greenhouse effect as applying to a "surface" in the mid-troposphere, which is effectively coupled to the surface by a lapse rate.
  • The simple picture assumes a steady state. In the real world there is the diurnal cycle as well as seasonal cycles and weather. Solar heating only applies during daytime. During the night, the atmosphere cools somewhat, but not greatly, because its emissivity is low, and during the day the atmosphere warms. Diurnal temperature changes decrease with height in the atmosphere.
  • Within the region where radiative effects are important the description given by the idealized greenhouse model becomes realistic: The surface of the Earth, warmed to a temperature around 255 K, radiates long-wavelength, infrared heat in the range 4–100 μm.[19] At these wavelengths, greenhouse gases that were largely transparent to incoming solar radiation are more absorbent.[19] Each layer of atmosphere with greenhouse gases absorbs some of the heat being radiated upwards from lower layers. It re-radiates in all directions, both upwards and downwards; in equilibrium (by definition) the same amount as it has absorbed. This results in more warmth below. Increasing the concentration of the gases increases the amount of absorption and re-radiation, and thereby further warms the layers and ultimately the surface below.[8] (A minimal numerical sketch of this layered picture appears after this list.)
  • Greenhouse gases—including most diatomic gases with two different atoms (such as carbon monoxide, CO) and all gases with three or more atoms—are able to absorb and emit infrared radiation. Though more than 99% of the dry atmosphere is IR transparent (because the main constituents—N2, O2, and Ar—are not able to directly absorb or emit infrared radiation), intermolecular collisions cause the energy absorbed and emitted by the greenhouse gases to be shared with the other, non-IR-active, gases.
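As a rough numerical illustration of the layered absorption and re-radiation described above, here is a minimal sketch of the textbook one-layer greenhouse model; it is a deliberate idealization, not a full radiative-transfer calculation:

```python
# Idealized single-layer greenhouse model: one fully absorbing atmospheric layer
# in radiative equilibrium above the surface. A textbook approximation only.
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_effective = 255.0    # K, effective emission temperature of the Earth

# In equilibrium the layer radiates sigma*T_layer^4 both up and down, and its
# upward flux to space must equal sigma*T_effective^4, so T_layer = T_effective.
# The surface receives absorbed sunlight plus the layer's downward flux:
#   sigma*T_surface^4 = 2 * sigma*T_effective^4  =>  T_surface = 2**0.25 * T_effective
T_surface = 2 ** 0.25 * T_effective
print(f"surface temperature with one absorbing layer: {T_surface:.0f} K")  # ~303 K

# The real atmosphere is neither a single layer nor fully absorbing, so the actual
# surface temperature (~288 K) lies between 255 K and this crude upper estimate.
```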

Greenhouse gases

By their percentage contribution to the greenhouse effect on Earth, the four major gases are:[21][22]
  • water vapor, 36–70%
  • carbon dioxide, 9–26%
  • methane, 4–9%
  • ozone, 3–7%
The major non-gas contributor to the Earth's greenhouse effect, clouds, also absorb and emit infrared radiation and thus have an effect on the radiative properties of the atmosphere.[22]

Role in climate change

The Keeling Curve of atmospheric CO2 concentrations measured at Mauna Loa Observatory.

Atmospheric gases only absorb some wavelengths of energy but are transparent to others. The absorption patterns of water vapor (blue peaks) and carbon dioxide (pink peaks) overlap in some wavelengths. Carbon dioxide is not as strong a greenhouse gas as water vapor, but it absorbs energy in wavelengths (12-15 micrometers) that water vapor does not, partially closing the “window” through which heat radiated by the surface would normally escape to space. (Illustration NASA, Robert Rohde)[23]

Strengthening of the greenhouse effect through human activities is known as the enhanced (or anthropogenic) greenhouse effect.[24] This increase in radiative forcing from human activity is attributable mainly to increased atmospheric carbon dioxide levels.[25] According to the latest Assessment Report from the Intergovernmental Panel on Climate Change, "most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations".[26]

CO2 is produced by fossil fuel burning and other activities such as cement production and tropical deforestation.[27] Measurements of CO2 from the Mauna Loa observatory show that concentrations have increased from about 313 ppm[28] in 1960 to about 389 ppm in 2010. It reached the 400 ppm milestone on May 9, 2013.[29] The current observed amount of CO2 exceeds the geological record maxima (~300 ppm) from ice core data.[30] The effect of combustion-produced carbon dioxide on the global climate, a special case of the greenhouse effect first described in 1896 by Svante Arrhenius, has also been called the Callendar effect.

Over the past 800,000 years,[31] ice core data show that carbon dioxide has varied from values as low as 180 parts per million (ppm) to the pre-industrial level of 270 ppm.[32] Paleoclimatologists consider variations in carbon dioxide concentration to be a fundamental factor influencing climate variations over this time scale.[33][34]

Real greenhouses


A modern Greenhouse in RHS Wisley

The "greenhouse effect" of the atmosphere is named by analogy to greenhouses which get warmer in sunlight, but the mechanism by which the atmosphere retains heat is different.[35] A greenhouse works primarily by allowing sunlight to warm surfaces inside the structure, but then preventing absorbed heat from leaving the structure through convection, i.e. sensible heat transport. The "greenhouse effect" heats the Earth because greenhouse gases absorb outgoing radiative energy, heating the atmosphere which then emits radiative energy with some of it going back towards the Earth.

A greenhouse is built of any material that passes sunlight, usually glass or plastic. It mainly heats up because the Sun warms the ground inside, which then warms the air in the greenhouse. The air continues to heat because it is confined within the greenhouse, unlike the environment outside the greenhouse where warm air near the surface rises and mixes with cooler air aloft. This can be demonstrated by opening a small window near the roof of a greenhouse: the temperature will drop considerably. It has also been demonstrated experimentally (R. W. Wood, 1909) that a "greenhouse" with a cover of rock salt (which is transparent to infrared) heats up an enclosure similarly to one with a glass cover.[4] Thus greenhouses work primarily by preventing convective cooling.[3][36]

In contrast, the greenhouse effect heats the Earth because rather than retaining (sensible) heat by physically preventing movement of the air, greenhouse gases act to warm the Earth by re-radiating some of the energy back towards the surface. This process may exist in real greenhouses, but is comparatively unimportant there.

Bodies other than Earth

In the Solar System, Mars, Venus, and the moon Titan also exhibit greenhouse effects; that on Venus is particularly large, due to its atmosphere, which consists mainly of dense carbon dioxide.[37] Titan has an anti-greenhouse effect, in that its atmosphere absorbs solar radiation but is relatively transparent to infrared radiation. Pluto also exhibits behavior superficially similar to the anti-greenhouse effect.[38][39]

A runaway greenhouse effect occurs if positive feedbacks lead to the evaporation of all greenhouse gases into the atmosphere.[40] A runaway greenhouse effect involving carbon dioxide and water vapor is thought to have occurred on Venus.[41]

Thursday, May 21, 2015

Black body


From Wikipedia, the free encyclopedia


As the temperature of a black body decreases, its intensity also decreases and its peak moves to longer wavelengths. Shown for comparison is the classical Rayleigh–Jeans law and its ultraviolet catastrophe.

A black body (also, blackbody) is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. A white body is one with a "rough surface [that] reflects all incident rays completely and uniformly in all directions."[1]

A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic radiation called black-body radiation. The radiation is emitted according to Planck's law, meaning that it has a spectrum that is determined by the temperature alone (see figure at right), not by the body's shape or composition.

A black body in thermal equilibrium has two notable properties:[2]
  1. It is an ideal emitter: at every frequency, it emits as much energy as – or more energy than – any other body at the same temperature.
  2. It is a diffuse emitter: the energy is radiated isotropically, independent of direction.
An approximate realization of a black surface is a hole in the wall of a large enclosure (see below). Any light entering the hole is reflected indefinitely or absorbed inside and is unlikely to re-emerge, making the hole a nearly perfect absorber. The radiation confined in such an enclosure may or may not be in thermal equilibrium, depending upon the nature of the walls and the other contents of the enclosure.[3][4]

Real materials emit energy at a fraction—called the emissivity—of black-body energy levels. By definition, a black body in thermal equilibrium has an emissivity of ε = 1.0. A source with lower emissivity independent of frequency often is referred to as a gray body.[5][6] Construction of black bodies with emissivity as close to one as possible remains a topic of current interest.[7]

In astronomy, the radiation from stars and planets is sometimes characterized in terms of an effective temperature, the temperature of a black body that would emit the same total flux of electromagnetic energy.

Definition

The idea of a black body originally was introduced by Gustav Kirchhoff in 1860 as follows:

...the supposition that bodies can be imagined which, for infinitely small thicknesses, completely absorb all incident rays, and neither reflect nor transmit any. I shall call such bodies perfectly black, or, more briefly, black bodies.[8]

A more modern definition drops the reference to "infinitely small thicknesses":[9]

An ideal body is now defined, called a blackbody. A blackbody allows all incident radiation to pass into it (no reflected energy) and internally absorbs all the incident radiation (no energy transmitted through the body). This is true for radiation of all wavelengths and for all angles of incidence. Hence the blackbody is a perfect absorber for all incident radiation. [10]

Idealizations

This section describes some concepts developed in connection with black bodies.

An approximate realization of a black body as a tiny hole in an insulated enclosure.

Cavity with a hole

A widely used model of a black surface is a small hole in a cavity with walls that are opaque to radiation.[10]
Radiation incident on the hole will pass into the cavity, and is very unlikely to be re-emitted if the cavity is large. The hole is not quite a perfect black surface — in particular, if the wavelength of the incident radiation is longer than the diameter of the hole, part will be reflected. Similarly, even in perfect thermal equilibrium, the radiation inside a finite-sized cavity will not have an ideal Planck spectrum for wavelengths comparable to or larger than the size of the cavity.[11]

Suppose the cavity is held at a fixed temperature T and the radiation trapped inside the enclosure is at thermal equilibrium with the enclosure. The hole in the enclosure will allow some radiation to escape. If the hole is small, radiation passing in and out of the hole has negligible effect upon the equilibrium of the radiation inside the cavity. This escaping radiation will approximate black-body radiation that exhibits a distribution in energy characteristic of the temperature T and does not depend upon the properties of the cavity or the hole, at least for wavelengths smaller than the size of the hole.[11] See the figure in the Introduction for the spectrum as a function of the frequency of the radiation, which is related to the energy of the radiation by the equation E=hf, with E = energy, h = Planck's constant, f = frequency.
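As an illustration of the Planck distribution and the relation E = hf mentioned above, a short numerical sketch (the cavity temperature is chosen arbitrarily for illustration):

```python
# Planck's law for the spectral radiance of cavity (black-body) radiation,
# plus the photon-energy relation E = h*f. The temperature is illustrative.
import numpy as np

h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s
k_B = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(frequency, temperature):
    """Spectral radiance B_f(T) in W sr^-1 m^-2 Hz^-1."""
    return (2 * h * frequency**3 / c**2) / np.expm1(h * frequency / (k_B * temperature))

T = 5000.0                             # K, arbitrary cavity temperature
f = np.linspace(1e12, 3e15, 100_000)   # Hz
peak_f = f[np.argmax(planck_radiance(f, T))]
print(f"peak frequency ~ {peak_f:.2e} Hz, photon energy E = h*f ~ {h * peak_f:.2e} J")
# Wien's displacement law for frequency predicts a peak near 5.88e10 * T ~ 2.9e14 Hz.
```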

At any given time the radiation in the cavity may not be in thermal equilibrium, but the second law of thermodynamics states that if left undisturbed it will eventually reach equilibrium,[12] although the time it takes to do so may be very long.[13] Typically, equilibrium is reached by continual absorption and emission of radiation by material in the cavity or its walls.[3][4][14][15] Radiation entering the cavity will be "thermalized" by this mechanism: the energy will be redistributed until the ensemble of photons achieves a Planck distribution. The time taken for thermalization is much faster with condensed matter present than with rarefied matter such as a dilute gas.
At temperatures below billions of Kelvin, direct photon–photon interactions[16] are usually negligible compared to interactions with matter.[17] Photons are an example of an interacting boson gas,[18] and as described by the H-theorem,[19] under very general conditions any interacting boson gas will approach thermal equilibrium.

Transmission, absorption, and reflection

A body's behavior with regard to thermal radiation is characterized by its transmission τ, absorption α, and reflection ρ.

The boundary of a body forms an interface with its surroundings, and this interface may be rough or smooth. A nonreflecting interface separating regions with different refractive indices must be rough, because the laws of reflection and refraction governed by the Fresnel equations for a smooth interface require a reflected ray when the refractive indices of the material and its surroundings differ.[20] A few idealized types of behavior are given particular names:

An opaque body is one that transmits none of the radiation that reaches it, although some may be reflected.[21][22]
That is, τ=0 and α+ρ=1.

A transparent body is one that transmits all the radiation that reaches it. That is, τ=1 and α=ρ=0.

A gray body is one where α, ρ and τ are uniform for all wavelengths. This term also is used to mean a body for which α is temperature and wavelength independent.

A white body is one for which all incident radiation is reflected uniformly in all directions: τ=0, α=0, and ρ=1.

For a black body, τ=0, α=1, and ρ=0. Planck offers a theoretical model for perfectly black bodies, which he noted do not exist in nature: besides their opaque interior, they have interfaces that are perfectly transmitting and non-reflective.[23]

Kirchhoff's perfect black bodies

Kirchhoff in 1860 introduced the theoretical concept of a perfect black body with a completely absorbing surface layer of infinitely small thickness, but Planck noted some severe restrictions upon this idea. Planck noted three requirements upon a black body: the body must (i) allow radiation to enter but not reflect; (ii) possess a minimum thickness adequate to absorb the incident radiation and prevent its re-emission; (iii) satisfy severe limitations upon scattering to prevent radiation from entering and bouncing back out. As a consequence, Kirchhoff's perfect black bodies that absorb all the radiation that falls on them cannot be realized in an infinitely thin surface layer, and impose conditions upon scattering of the light within the black body that are difficult to satisfy.[24][25]

Realizations

A realization of a black body is a real world, physical embodiment. Here are a few.

Cavity with a hole

In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source.[26] Their design has been used largely unchanged for radiation measurements to the present day. It was a hole in the wall of a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law.[27][28] A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides.[29]

Near-black materials

There is interest in blackbody-like materials for camouflage and radar-absorbent materials for radar invisibility.[30][31] They also have application as solar energy collectors, and infrared thermal detectors. As a perfect emitter of radiation, a hot material with black body behavior would create an efficient infrared heater, particularly in space or in a vacuum where convective heating is unavailable.[32] They are also useful in telescopes and cameras as anti-reflection surfaces to reduce stray light, and to gather information about objects in high-contrast areas (for example, observation of planets in orbit around their stars), where blackbody-like materials absorb light that comes from the wrong sources.

It has long been known that a lamp-black coating will make a body nearly black. An improvement on lamp-black is found in manufactured carbon nanotubes. Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining average reflectance of 0.045%.[7][33] In 2009, a team of Japanese scientists created a material called nanoblack which is close to an ideal black body, based on vertically aligned single-walled carbon nanotubes. This absorbs between 98% and 99% of the incoming light in the spectral range from the ultra-violet to the far-infrared regions.[32]

Another example of a nearly perfect black material is super black, made by chemically etching a nickel-phosphorus alloy.[34]

Stars and planets

An idealized view of the cross-section of a star. The photosphere contains photons of light nearly in thermal equilibrium, and some escape into space as near-black-body radiation.

A star or planet often is modeled as a black body, and electromagnetic radiation emitted from these bodies as black-body radiation. The figure shows a highly schematic cross-section to illustrate the idea. The photosphere of the star, where the emitted light is generated, is idealized as a layer within which the photons of light interact with the material in the photosphere and achieve a common temperature T that is maintained over a long period of time.
Some photons escape and are emitted into space, but the energy they carry away is replaced by energy from within the star, so that the temperature of the photosphere is nearly steady. Changes in the core lead to changes in the supply of energy to the photosphere, but such changes are slow on the time scale of interest here. Assuming these circumstances can be realized, the outer layer of the star is somewhat analogous to the example of an enclosure with a small hole in it, with the hole replaced by the limited transmission into space at the outside of the photosphere.
With all these assumptions in place, the star emits black-body radiation at the temperature of the photosphere.[35]

Effective temperature of a black body compared with the B-V and U-B color index of main sequence and super giant stars in what is called a color-color diagram.[36]

Using this model the effective temperature of stars is estimated, defined as the temperature of a black body that yields the same surface flux of energy as the star. If a star were a black body, the same effective temperature would result from any region of the spectrum. For example, comparisons in the B (blue) or V (visible) range lead to the so-called B-V color index, which increases the redder the star,[37] with the Sun having an index of +0.648 ± 0.006.[38] Combining the U (ultraviolet) and the B indices leads to the U-B index, which becomes more negative the hotter the star and the more the UV radiation. Assuming the Sun is a type G2 V star, its U-B index is +0.12.[39] The two indices for two types of stars are compared in the figure with the effective surface temperature of the stars, assuming they are black bodies. It can be seen that there is only a rough correlation. For example, for a given B-V index from the blue-visible region of the spectrum, the curves for both types of star lie below the corresponding black-body U-B index that includes the ultraviolet spectrum, showing that both types of star emit less ultraviolet light than a black body with the same B-V index. It is perhaps surprising that they fit a black body curve as well as they do, considering that stars have greatly different temperatures at different depths.[40] For example, the Sun has an effective temperature of 5780 K,[41] which can be compared to the temperature of the photosphere of the Sun (the region generating the light), which ranges from about 5000 K at its outer boundary with the chromosphere to about 9500 K at its inner boundary with the convection zone approximately 500 km (310 mi) deep.[42]

Black holes

A black hole is a region of spacetime from which nothing escapes. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, making it almost an ideal black body[43] (radiation with a wavelength equal to or larger than the radius of the hole may not be absorbed, so black holes are not perfect black bodies).[44] Physicists believe that to an outside observer, black holes have a non-zero temperature and emit radiation with a nearly perfect black-body spectrum, ultimately evaporating.[45] The mechanism for this emission is related to vacuum fluctuations in which a virtual pair of particles is separated by the gravity of the hole, one member being sucked into the hole, and the other being emitted.[46] The energy distribution of emission is described by Planck's law with a temperature T:
T = \frac{\hbar c^3}{8 \pi G k_B M},
where c is the speed of light, ℏ is the reduced Planck constant, k_B is the Boltzmann constant, G is the gravitational constant and M is the mass of the black hole.[47] These predictions have not yet been tested either observationally or experimentally.[48]
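To get a sense of the scale this formula implies, here is a quick sketch evaluating it for a black hole of one solar mass (constants in SI units):

```python
# Evaluating the Hawking-temperature formula above for a solar-mass black hole.
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

T = hbar * c**3 / (8 * math.pi * G * k_B * M_sun)
print(f"Hawking temperature of a solar-mass black hole ~ {T:.1e} K")  # ~6e-8 K
# Far colder than the 2.7 K cosmic microwave background, so such a hole
# currently absorbs more radiation than it emits.
```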

Cosmic microwave background radiation

The big bang theory is based upon the cosmological principle, which states that on large scales the Universe is homogeneous and isotropic. According to theory, the Universe approximately a second after its formation was a near-ideal black body in thermal equilibrium at a temperature above 10¹⁰ K. The temperature decreased as the Universe expanded and the matter and radiation in it cooled. The cosmic microwave background radiation observed today is "the most perfect black body ever measured in nature".[49] It has a nearly ideal Planck spectrum at a temperature of about 2.7 K. It departs from the perfect isotropy of true black-body radiation by an observed anisotropy that varies with angle on the sky only to about one part in 100,000.

Radiative cooling

The integration of Planck's law over all frequencies provides the total energy per unit of time per unit of surface area radiated by a black body maintained at a temperature T, and is known as the Stefan–Boltzmann law:
P/A = \sigma T^4,
where σ is the Stefan–Boltzmann constant, σ ≈ 5.67 × 10⁻⁸ W/(m² K⁴).[50] To remain in thermal equilibrium at constant temperature T, the black body must absorb or internally generate this amount of power P over the given area A.

The cooling of a body due to thermal radiation is often approximated using the Stefan–Boltzmann law supplemented with a "gray body" emissivity ε ≤ 1 (P/A = εσT⁴). The rate of decrease of the temperature of the emitting body can be estimated from the power radiated and the body's heat capacity.[51] This approach is a simplification that ignores details of the mechanisms behind heat redistribution (which may include changing composition, phase transitions or restructuring of the body) that occur within the body while it cools, and assumes that at each moment in time the body is characterized by a single temperature. It also ignores other possible complications, such as changes in the emissivity with temperature,[52][53] and the role of other accompanying forms of energy emission, for example, emission of particles like neutrinos.[54]
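As a concrete illustration of the kind of estimate described above, here is a minimal sketch for a small gray-body sphere with a single uniform temperature; the object's properties are illustrative assumptions, and radiation absorbed from the surroundings is neglected:

```python
# Crude radiative-cooling estimate: a gray-body sphere at one uniform temperature.
# All object properties below are illustrative; absorption from surroundings is ignored.
import math

sigma = 5.67e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
emissivity = 0.9        # assumed gray-body emissivity
radius = 0.05           # m, sphere radius
mass = 4.0              # kg
heat_capacity = 450.0   # J kg^-1 K^-1 (roughly that of iron)
T = 600.0               # K, current temperature

area = 4 * math.pi * radius**2
power = emissivity * sigma * area * T**4        # radiated power, W
cooling_rate = power / (mass * heat_capacity)   # instantaneous |dT/dt|, K/s
print(f"radiated power ~ {power:.0f} W, cooling ~ {cooling_rate * 60:.1f} K per minute")
```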

If a hot emitting body is assumed to follow the Stefan–Boltzmann law and its power emission P and temperature T is known, this law can be used to estimate the dimensions of the emitting object, because the total emitted power is proportional to the area of the emitting surface. In this way it was found that X-ray bursts observed by astronomers originated in neutron stars with a radius of about 10 km, rather than black holes as originally conjectured.[55] It should be noted that an accurate estimate of size requires some knowledge of the emissivity, particularly its spectral and angular dependence.[56]
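The size estimate mentioned above follows from equating the observed luminosity to that of a spherical black body, L = 4πR²σT⁴, so R = √(L / 4πσT⁴). A rough sketch with illustrative (not measured) burst values:

```python
# Estimating the emitting radius from luminosity and temperature, as described above.
# The luminosity and temperature are illustrative values, not data from a specific burst.
import math

sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L = 2e31          # W, illustrative X-ray burst luminosity
T = 2e7           # K, illustrative burst black-body temperature

R = math.sqrt(L / (4 * math.pi * sigma * T**4))
print(f"inferred emitting radius ~ {R / 1000:.0f} km")  # order 10 km, a neutron-star scale
```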

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...