
Sunday, January 5, 2014

The Baloney Detection Kit: Carl Sagan’s Rules for Bullshit-Busting and Critical Thinking by Maria Popova

Necessary cognitive fortification against propaganda, pseudoscience, and general falsehood.
Carl Sagan was many things — a cosmic sage, voracious reader, hopeless romantic, and brilliant philosopher. But above all, he endures as our era’s greatest patron saint of reason and common sense, a master of the vital balance between skepticism and openness. In The Demon-Haunted World: Science as a Candle in the Dark (public library) — the same indispensable volume that gave us
Sagan’s timeless meditation on science and spirituality, published mere months before his death in 1996 — Sagan shares his secret to upholding the rites of reason, even in the face of society’s most shameless untruths and outrageous propaganda.

In a chapter titled “The Fine Art of Baloney Detection,” Sagan reflects on the many types of deception to which we’re susceptible — from psychics to religious zealotry to paid product endorsements by scientists, which he held in especially low regard, noting that they “betray contempt for the intelligence of their customers” and “introduce an insidious corruption of popular attitudes about scientific objectivity.” (Cue in PBS’s Joe Hanson on how to read science news.) But rather than preaching from the ivory tower of self-righteousness, Sagan approaches the subject from the most vulnerable of places — having just lost both of his parents, he reflects on the all too human allure of promises of supernatural reunions in the afterlife, reminding us that falling for such fictions doesn’t make us stupid or bad people, but simply means that we need to equip ourselves with the right tools against them.



Through their training, scientists are equipped with what Sagan calls a “baloney detection kit” — a set of cognitive tools and techniques that fortify the mind against penetration by falsehoods:
The kit is brought out as a matter of course whenever new ideas are offered for consideration. If the new idea survives examination by the tools in our kit, we grant it warm, although tentative, acceptance. If you’re so inclined, if you don’t want to buy baloney even when it’s reassuring to do so, there are precautions that can be taken; there’s a tried-and-true, consumer-tested method.
But the kit, Sagan argues, isn’t merely a tool of science — rather, it contains invaluable tools of healthy skepticism that apply just as elegantly, and just as necessarily, to everyday life. By adopting the kit, we can all shield ourselves against clueless guile and deliberate manipulation. Sagan shares nine of these tools:
Wherever possible there must be independent confirmation of the “facts.”
Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.
Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.
Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will. 
Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.
If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.
Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.
Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle — an electron, say — in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

Just as important as learning these helpful tools, however, is unlearning and avoiding the most common pitfalls of common sense. Reminding us of where society is most vulnerable to those, Sagan writes:
In addition to teaching us what to do when evaluating a claim to knowledge, any good baloney detection kit must also teach us what not to do. It helps us recognize the most common and perilous fallacies of logic and rhetoric. Many good examples can be found in religion and politics, because their practitioners are so often obliged to justify two contradictory propositions.
He admonishes against the twenty most common and perilous ones — many rooted in our chronic discomfort with ambiguity — with examples of each in action:
  1. ad hominem — Latin for “to the man,” attacking the arguer and not the argument (e.g., The Reverend Dr. Smith is a known Biblical fundamentalist, so her objections to evolution need not be taken seriously)
  2. argument from authority (e.g., President Richard Nixon should be re-elected because he has a secret plan to end the war in Southeast Asia — but because it was secret, there was no way for the electorate to evaluate it on its merits; the argument amounted to trusting him because he was President: a mistake, as it turned out)
  3. argument from adverse consequences (e.g., A God meting out punishment and reward must exist, because if He didn’t, society would be much more lawless and dangerous — perhaps even ungovernable. Or: The defendant in a widely publicized murder trial must be found guilty; otherwise, it will be an encouragement for other men to murder their wives)
  4. appeal to ignorance — the claim that whatever has not been proved false must be true, and vice versa (e.g., There is no compelling evidence that UFOs are not visiting the Earth; therefore UFOs exist — and there is intelligent life elsewhere in the Universe. Or: There may be seventy kazillion other worlds, but not one is known to have the moral advancement of the Earth, so we’re still central to the Universe.) This impatience with ambiguity can be criticized in the phrase: absence of evidence is not evidence of absence.
  5. special pleading, often to rescue a proposition in deep rhetorical trouble (e.g., How can a merciful God condemn future generations to torment because, against orders, one woman induced one man to eat an apple? Special plead: you don’t understand the subtle Doctrine of Free Will. Or: How can there be an equally godlike Father, Son, and Holy Ghost in the same Person? Special plead: You don’t understand the Divine Mystery of the Trinity. Or: How could God permit the followers of Judaism, Christianity, and Islam — each in their own way enjoined to heroic measures of loving kindness and compassion — to have perpetrated so much cruelty for so long? Special plead: You don’t understand Free Will again. And anyway, God moves in mysterious ways.)
  6. begging the question, also called assuming the answer (e.g., We must institute the death penalty to discourage violent crime. But does the violent crime rate in fact fall when the death penalty is imposed? Or: The stock market fell yesterday because of a technical adjustment and profit-taking by investors — but is there any independent evidence for the causal role of “adjustment” and profit-taking; have we learned anything at all from this purported explanation?)
  7. observational selection, also called the enumeration of favorable circumstances, or as the philosopher Francis Bacon described it, counting the hits and forgetting the misses (e.g., A state boasts of the Presidents it has produced, but is silent on its serial killers)
  8. statistics of small numbers — a close relative of observational selection (e.g., “They say 1 out of every 5 people is Chinese. How is this possible? I know hundreds of people, and none of them is Chinese. Yours truly.” Or: “I’ve thrown three sevens in a row. Tonight I can’t lose.”)
  9. misunderstanding of the nature of statistics (e.g., President Dwight Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence);
  10. inconsistency (e.g., Prudently plan for the worst of which a potential military adversary is capable, but thriftily ignore scientific projections on environmental dangers because they’re not “proved.” Or: Attribute the declining life expectancy in the former Soviet Union to the failures of communism many years ago, but never attribute the high infant mortality rate in the United States (now highest of the major industrial nations) to the failures of capitalism. Or: Consider it reasonable for the Universe to continue to exist forever into the future, but judge absurd the possibility that it has infinite duration into the past);
  11. non sequitur — Latin for “It doesn’t follow” (e.g., Our nation will prevail because God is great. But nearly every nation pretends this to be true; the German formulation was “Gott mit uns”). Often those falling into the non sequitur fallacy have simply failed to recognize alternative possibilities;
  12. post hoc, ergo propter hoc — Latin for “It happened after, so it was caused by” (e.g., Jaime Cardinal Sin, Archbishop of Manila: “I know of … a 26-year-old who looks 60 because she takes [contraceptive] pills.” Or: Before women got the vote, there were no nuclear weapons)
  13. meaningless question (e.g., What happens when an irresistible force meets an immovable object? But if there is such a thing as an irresistible force there can be no immovable objects, and vice versa)
  14. excluded middle, or false dichotomy — considering only the two extremes in a continuum of intermediate possibilities (e.g., “Sure, take his side; my husband’s perfect; I’m always wrong.” Or: “Either you love your country or you hate it.” Or: “If you’re not part of the solution, you’re part of the problem”)
  15. short-term vs. long-term — a subset of the excluded middle, but so important I’ve pulled it out for special attention (e.g., We can’t afford programs to feed malnourished children and educate pre-school kids. We need to urgently deal with crime on the streets. Or: Why explore space or pursue fundamental science when we have so huge a budget deficit?);
  16. slippery slope, related to excluded middle (e.g., If we allow abortion in the first weeks of pregnancy, it will be impossible to prevent the killing of a full-term infant. Or, conversely: If the state prohibits abortion even in the ninth month, it will soon be telling us what to do with our bodies around the time of conception);
  17. confusion of correlation and causation (e.g., A survey shows that more college graduates are homosexual than those with lesser education; therefore education makes people gay. Or: Andean earthquakes are correlated with closest approaches of the planet Uranus; therefore — despite the absence of any such correlation for the nearer, more massive planet Jupiter — the latter causes the former)
  18. straw man — caricaturing a position to make it easier to attack (e.g., Scientists suppose that living things simply fell together by chance — a formulation that willfully ignores the central Darwinian insight, that Nature ratchets up by saving what works and discarding what doesn’t. Or — this is also a short-term/long-term fallacy — environmentalists care more for snail darters and spotted owls than they do for people)
  19. suppressed evidence, or half-truths (e.g., An amazingly accurate and widely quoted “prophecy” of the assassination attempt on President Reagan is shown on television; but — an important detail — was it recorded before or after the event? Or: These government abuses demand revolution, even if you can’t make an omelette without breaking some eggs. Yes, but is this likely to be a revolution in which far more people are killed than under the previous regime? What does the experience of other revolutions suggest? Are all revolutions against oppressive regimes desirable and in the interests of the people?)
  20. weasel words (e.g., The separation of powers of the U.S. Constitution specifies that the United States may not conduct a war without a declaration by Congress. On the other hand, Presidents are given control of foreign policy and the conduct of wars, which are potentially powerful tools for getting themselves re-elected. Presidents of either political party may therefore be tempted to arrange wars while waving the flag and calling the wars something else — “police actions,” “armed incursions,” “protective reaction strikes,” “pacification,” “safeguarding American interests,” and a wide variety of “operations,” such as “Operation Just Cause.” Euphemisms for war are one of a broad class of reinventions of language for political purposes. Talleyrand said, “An important art of politicians is to find new names for institutions which under old names have become odious to the public”)
Sagan ends the chapter with a necessary disclaimer:
Like all tools, the baloney detection kit can be misused, applied out of context, or even employed as a rote alternative to thinking. But applied judiciously, it can make all the difference in the world — not least in evaluating our own arguments before we present them to others.
The Demon-Haunted World is a timelessly fantastic read in its entirety, timelier than ever in a great many ways amidst our present media landscape of propaganda, pseudoscience, and various commercial motives. Complement it with Sagan on science and “God”.

Saturday, January 4, 2014

Biofuels Vital Graphics - Powering Green Economy


      
Biofuels Vital Graphics visualizes the opportunities, the need for safeguards, and the options that help ensure the sustainability of biofuels to make them a cornerstone for a Green Economy. Stories from around the world have been highlighted to exemplify possible approaches, lessons learned, risks and opportunities. Biofuels Vital Graphics is meant as a communications tool.

It builds on an earlier report by the International Panel for Sustainable Resource Management of the United Nations Environment Programme, Towards Sustainable Production and Use of Resources: Assessing Biofuels, as well as research produced since.

Read online: 
Web | Mobile | PDF (4mb) | E-book (flash) | iTunes app | Maps & Graphics collection

Liquid, gaseous or solid biofuels hold great promise to deliver an increasing share of the energy required to power a new global green economy. Many in government and the energy industry believe this modern bioenergy can play a significant role in reducing pollution and greenhouse gases, and promoting development through new business opportunities and jobs. Modern bioenergy can be a mechanism for economic development enabling local communities to secure the energy they need, with farmers earning additional income and achieving greater price stability for their production.
 

Third generation photovoltaic cell

From Wikipedia, the free encyclopedia
    
Third generation photovoltaic cells are solar cells that are potentially able to overcome the Shockley–Queisser limit of 31–41% power efficiency for single bandgap solar cells. This includes a range of alternatives to the so-called "first generation solar cells" (which are solar cells made of semiconducting p-n junctions) and "second generation solar cells" (based on reducing the cost of first generation cells by employing thin-film technologies). Common third-generation systems include multi-layer ("tandem") cells made of amorphous silicon or gallium arsenide, while more theoretical developments include frequency conversion, hot-carrier effects and other multiple-carrier ejection.[1][2][3][4]

Background

Solar cells can be thought of as visible light counterparts to radio receivers. A receiver consists of three basic parts: an antenna that converts the radio waves (light) into wave-like motions of electrons in the antenna material, an electronic valve that traps the electrons as they pop off the end of the antenna, and a tuner that amplifies electrons of a selected frequency. It is possible to build a solar cell identical to a radio, a system known as an optical rectenna, but to date these have not been practical.

Instead, the vast majority of the solar electric market is made up of silicon-based devices. In silicon cells, the silicon acts as both the antenna (or electron donor, technically) as well as the electronic valve. Silicon is almost ideal as a solar cell material; it is widely available, relatively inexpensive, and has a bandgap that is ideal for solar collection. On the downside it is energetically expensive to produce silicon in bulk, and great efforts have been made to reduce or eliminate the silicon in a cell. Moreover it is mechanically fragile, which typically requires a sheet of strong glass to be used as mechanical support and protection from the elements. The glass alone is a significant portion of the cost of a typical solar module.

According to the Shockley–Queisser analysis, most of a cell's theoretical efficiency loss comes from the mismatch between the bandgap energy and the energies of solar photons. Any photon with more energy than the bandgap can cause photoexcitation, but any energy above and beyond the bandgap energy is lost as heat. Consider the solar spectrum: only a small portion of the light reaching the ground is blue, but those photons carry roughly one and a half times the energy of red light. Silicon's bandgap is 1.1 eV, about that of red light, so the extra energy carried by blue light is lost in a silicon cell. If the bandgap is tuned higher, say to blue, that energy is captured, but only at the cost of rejecting all the lower-energy photons.
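The thermalization argument can be sanity-checked with the standard conversion E (eV) ≈ 1239.84 eV·nm / λ (nm). The particular wavelengths used for "blue" and "red" below are illustrative choices, not figures from the article:

```python
# Photon energy vs. silicon's bandgap, using E (eV) = 1239.84 eV·nm / wavelength (nm).
# The wavelengths chosen for blue and red light are illustrative assumptions.
H_C_EV_NM = 1239.84  # hc expressed in eV·nm

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron-volts."""
    return H_C_EV_NM / wavelength_nm

SI_BANDGAP_EV = 1.1

blue_ev = photon_energy_ev(450)         # ~2.76 eV
red_ev = photon_energy_ev(650)          # ~1.91 eV
blue_excess = blue_ev - SI_BANDGAP_EV   # ~1.66 eV per blue photon lost as heat
```

With these numbers, roughly 60 percent of each blue photon's energy is thermalized away in a silicon cell, which is the loss a tandem stack tries to recover.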

It is possible to greatly improve on a single-junction cell by stacking extremely thin cells with different bandgaps on top of each other - the "tandem cell" or "multi-junction" approach. Traditional silicon preparation methods do not lend themselves to this approach. There has been some progress using thin-films of amorphous silicon, notably Uni-Solar's products, but other issues have prevented these from matching the performance of traditional cells. Most tandem-cell structures are based on higher performance semiconductors, notably gallium arsenide (GaAs). Three-layer GaAs cells hold the production record of 41.6% for experimental examples.[5]

Numerical analysis shows that the "perfect" single-layer solar cell should have a bandgap of 1.13 eV, almost exactly that of silicon. Such a cell can have a maximum theoretical power conversion efficiency of 33.7% - the solar power below red (in the infrared) is lost, and the extra energy of the higher colors is also lost. For a two-layer cell, one layer should be tuned to 1.64 eV and the other to 0.94 eV, with a theoretical performance of 44%. A three-layer cell should be tuned to 1.83, 1.16 and 0.71 eV, with an efficiency of 48%. A theoretical "infinity-layer" cell would have a theoretical efficiency of 64%.

Thursday, January 2, 2014

Home electricity use in US falling to 2001 levels

Dec 30, 2013 by Jonathan Fahey on phys.org
Read more at: http://phys.org/news/2013-12-home-electricity-falling.html#jCp
This combination of Associated Press file photos shows, left, a Cingular "Fast Forward" cradle and Motorola mobile phone in New York on Tuesday Nov. 4, 2003, and an Apple ultracompact USB Power Adapter, on Friday, Sept. 19, 2008, in New York.
The average amount of electricity consumed in U.S. homes has fallen to levels last seen more than a decade ago, back when the smartest device in people's pockets was a Palm pilot and anyone talking about a tablet was probably an archaeologist or a preacher.
Because of more energy-efficient housing, appliances and gadgets, usage is on track to decline in 2013 for the third year in a row, to its lowest point since 2001, even though our lives are more electrified.
 
Here's a look at what has changed since the last time consumption was so low.
BETTER HOMES
 
In the early 2000s, as energy prices rose, more states adopted or toughened building codes to force builders to better seal homes so heat or air-conditioned air doesn't seep out so fast. That means newer homes waste less energy.
 
Also, insulated windows and other building technologies have dropped in price, making retrofits of existing homes more affordable. In the wake of the financial crisis, billions of dollars in Recovery Act funding was directed toward home-efficiency programs.
 
BETTER GADGETS
Big appliances such as refrigerators and air conditioners have gotten more efficient thanks to federal energy standards that get stricter every few years as technology evolves.
A typical room air conditioner, one of the biggest power hogs in the home, uses 20 percent less electricity per hour of full operation than it did in 2001, according to the Association of Home Appliance Manufacturers.
This combination of Associated Press file photos shows, top, Switch75 LED light bulbs in clear and frosted, on Tuesday, Nov. 8, 2011 in New York and, bottom, a 100-watt incandescent light bulb at Royal Lighting in Los Angeles on Jan. 21, 2011.

 
Central air conditioners, refrigerators, dishwashers, water heaters, washing machines and dryers also have gotten more efficient.
 
Other devices are using less juice, too. Some 40-inch (1-meter) LED televisions bought today use 80 percent less power than the cathode ray tube televisions of the past. Some use just $8 worth of electricity over a year when used five hours a day—less than a 60-watt incandescent bulb would use.
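The $8-a-year figure can be checked with back-of-envelope arithmetic. The electricity rate assumed below (about 12 cents per kWh, near the US residential average) is my assumption, not a figure from the article:

```python
# Implied power draw behind "$8 worth of electricity over a year at five hours a day".
# The electricity rate is an assumed US-average figure, not from the article.
RATE_USD_PER_KWH = 0.12       # assumption: ~12 cents per kWh
hours_per_year = 5 * 365      # five hours of viewing a day

tv_kwh = 8.0 / RATE_USD_PER_KWH             # ~67 kWh per year
tv_watts = tv_kwh / hours_per_year * 1000   # ~37 W implied draw while on

# A 60 W incandescent bulb lit for the same hours costs more:
bulb_cost = 0.060 * hours_per_year * RATE_USD_PER_KWH  # ~$13 per year
```

Under that assumed rate, the TV draws on the order of 35-40 watts, comfortably below a single 60-watt bulb run for the same hours, which is consistent with the article's comparison.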
 
Incandescent bulbs are being replaced with CFLs and LEDs that use 70 to 80 percent less power. According to the Energy Department, widespread use of LED bulbs could save output equivalent to that of 44 large power plants by 2027.

The move to mobile also is helping. Desktop computers with big CRT monitors are being replaced with laptops, tablet computers and smart phones, and these mobile devices are specifically designed to sip power to prolong battery life.

It costs $1.36 to power an iPad for a year, compared with $28.21 for a desktop computer, according to the Electric Power Research Institute.  
 
ON THE OTHER HAND...
We are using more devices, and that is offsetting what would otherwise be a more dramatic reduction in power consumption.

This combination of Associated Press file photos shows, top, a house in Duluth, Minn., with triple-paned, south-facing windows that draw heat from the sun, and, bottom, an undated photo provided by Lowe's shows weatherstripping being applied.
 
 DVRs spin at all hours of the day, often under more than one television in a home. Game consoles are getting more sophisticated to process better graphics and connect with other players, and therefore use more power.

More homes have central air conditioners instead of window units. They are more efficient, but people use them more often.

Still, Jennifer Amman, the buildings program director at the American Council for an Energy-Efficient Economy, says she is encouraged.
In this combination of Associated Press file photos, a man, top, looks at the back of a Sony 4K XBR LED television in Las Vegas, on Monday, Jan. 7, 2013, and, bottom, a man looks at a CRT television in Redwood City, Calif., on Wednesday, Oct.

"It's great to see this movement, to see the shift in the national numbers," she says. "I expect we'll see greater improvement over time. There is so much more that can be done."

The Energy Department predicts average residential electricity use per customer will fall again in 2014, by 1 percent.

In a world first, Japan extracted natural gas from frozen undersea deposits this year.

By Lisa Raffensperger @ http://discovermagazine.com/2014/jan-feb/19-fuel-from-fire-ice

Global fuel supplies may soon be dramatically enlarged thanks to new techniques to tap into huge reserves of natural gas trapped under the seafloor. In March, Japan became the first country to successfully extract methane from frozen undersea deposits called gas hydrates. 
These lacy structures of ice, found around the globe buried under permafrost and the ocean floor, have pores filled with highly flammable gas. By some estimates, hydrates could store more than 10 quadrillion cubic feet of harvestable methane — enough to fulfill the present gas needs of the entire United States for the next 400 years.
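The 400-year figure above is easy to check against present US demand. The annual consumption value used below (roughly 26 trillion cubic feet per year) is an assumed round number, not from the article:

```python
# Does 10 quadrillion cubic feet really cover ~400 years of US gas demand?
# The annual consumption figure is an assumed round number, not from the article.
hydrate_reserve_cf = 10e15   # 10 quadrillion cubic feet of harvestable methane
us_annual_use_cf = 26e12     # assumption: ~26 trillion cubic feet per year

years_of_supply = hydrate_reserve_cf / us_annual_use_cf  # ~385 years
```

Dividing the estimated reserve by the assumed annual draw gives a figure in the high 300s, consistent with the article's "next 400 years."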
 
The following is from Wikipedia, under methane clathrates.
 
Methane clathrate (CH4•5.75H2O[1]), also called methane hydrate, hydromethane, methane ice, fire ice, natural gas hydrate, or gas hydrate, is a solid clathrate compound (more specifically, a clathrate hydrate) in which a large amount of methane is trapped within a crystal structure of water, forming a solid similar to ice.[2] Originally thought to occur only in the outer regions of the Solar System where temperatures are low and water ice is common, significant deposits of methane clathrate have been found under sediments on the ocean floors of Earth.[3]
Methane clathrates are common constituents of the shallow marine geosphere, and they occur both in deep sedimentary structures, and form outcrops on the ocean floor. Methane hydrates are believed to form by migration of gas from depth along geological faults, followed by precipitation, or crystallization, on contact of the rising gas stream with cold sea water. Methane clathrates are also present in deep Antarctic ice cores, and record a history of atmospheric methane concentrations, dating to 800,000 years ago.[4] The ice-core methane clathrate record is a primary source of data for global warming research, along with oxygen and carbon dioxide.

The sedimentary methane hydrate reservoir probably contains 2–10 times the currently known reserves of conventional natural gas, as of 2013.[25] This represents a potentially important future source of hydrocarbon fuel. However, in the majority of sites deposits are thought to be too dispersed for economic extraction.[18] Other problems facing commercial exploitation are detection of viable reserves and development of the technology for extracting methane gas from the hydrate deposits.

A research and development project in Japan is aiming for commercial-scale extraction near Aichi Prefecture by 2016.[26][27] In August 2006, China announced plans to spend 800 million yuan (US$100 million) over the next 10 years to study natural gas hydrates.[28] A potentially economic reserve in the Gulf of Mexico may contain approximately 100 billion cubic metres (3.5×10^12 cu ft) of gas.[18] Bjørn Kvamme and Arne Graue at the Institute for Physics and Technology at the University of Bergen have developed a method for injecting CO2 into hydrates and reversing the process, thereby extracting CH4 by direct exchange.[29] The University of Bergen's method is being field tested by ConocoPhillips and state-owned Japan Oil, Gas and Metals National Corporation (JOGMEC), and partially funded by the U.S. Department of Energy. The project reached the injection phase and was analyzing the resulting data by March 12, 2012.[30]

On March 12, 2013, JOGMEC researchers announced that they had successfully extracted natural gas from frozen methane hydrate.[31] In order to extract the gas, specialized equipment was used to drill into and depressurize the hydrate deposits, causing the methane to separate from the ice. The gas was then collected and piped to surface where it was ignited to prove its presence.[32] According to an industry spokesperson, "It [was] the world's first offshore experiment producing gas from methane hydrate".[31] Previously, gas had been extracted from onshore deposits, but never from offshore deposits which are much more common.[32] The hydrate field from which the gas was extracted is located 50 kilometres (31 mi) from central Japan in the Nankai Trough, 300 metres (980 ft) under the sea.[31][32] A spokesperson for JOGMEC remarked "Japan could finally have an energy source to call its own".[32] The experiment will continue for two weeks before it is determined how efficient the gas extraction process has been.[32] Marine geologist Mikio Satoh remarked "Now we know that extraction is possible. The next step is to see how far Japan can get costs down to make the technology economically viable."[32] Japan estimates that there are at least 1.1 trillion cubic meters of methane trapped in the Nankai Trough, enough to meet the country's needs for more than ten years.[32]
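The "more than ten years" claim for the Nankai Trough estimate can also be checked. Japan's annual natural gas consumption of roughly 100 billion cubic meters, used below, is an assumed figure rather than one from the article:

```python
# Checking the "more than ten years" claim for the Nankai Trough estimate.
# Japan's annual gas consumption below is an assumed figure, not from the article.
nankai_m3 = 1.1e12         # at least 1.1 trillion cubic meters of trapped methane
japan_annual_m3 = 1.0e11   # assumption: ~100 billion cubic meters per year

years_of_supply = nankai_m3 / japan_annual_m3  # ~11 years
```

Under that assumption the trough holds about eleven years of national supply, matching the "more than ten years" estimate quoted above.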

NASA's Cold Fusion Folly Posted by Buzz Skyline at http://physicsbuzz.physicscentral.com/2013/04/nasas-cold-fusion-folly.html

I am sad - horrified really - to learn that some NASA scientists have caught cold fusion madness. As is so often the case with companies and research groups that get involved in this fruitless enterprise, they tend to make their case by first pointing out how nice it would be to have a clean, cheap, safe, effectively limitless source of power. Who could say no to that?
NASA Langley scientists are hoping to build spacecraft powered with cold fusion. Image courtesy of NASA.
Here's a word of caution: anytime anyone, especially a scientist, starts by telling you about glorious, nigh-unbelievable futuristic applications of their idea, be very, very skeptical.

NASA, for example, is promoting a cold fusion scheme that they say will power your house and car, and even a space plane that is apparently under development, despite the fact that  cold fusion power supplies don't exist yet and almost certainly never will. And if that's not enough, NASA's brand of cold fusion can solve our climate change problems by converting carbon directly into nitrogen.

The one hitch in the plan, unfortunately, is that they're going to have to violate some very well established physics to make it happen. To say the least, I wouldn't count on it.

To be clear, cold fusion does indeed work - provided you use a heavier cousin of the electron, known as a muon, to make it happen. There is no question that muon-catalyzed fusion is a perfectly sound, well-understood process that would be an abundant source of energy, if only we could find or create a cheap source of muons. Unfortunately, it takes way more energy to create the muons that go into muon-catalyzed fusion than comes out of the reaction.

Cold fusion that doesn't involve muons, on the other hand, doesn't work. In fact, the very same physics principles that make muon-catalyzed fusion possible are the ones that guarantee that the muon-less version isn't possible.

To get around the problem presented by nature and her physical laws, NASA's scientists have joined other cold fusion advocates in rebranding their work under the deceptively scientific moniker LENR (Low Energy Nuclear Reactions), and backing it up with various sketchy theories.

The main theory currently in fashion among cold fusion people is the Widom-Larsen LENR theory, which claims that neutrons can result from interactions with "heavy electrons" and protons in a lump of material in a cold fusion experiment. These neutrons, so the argument goes, can then be absorbed in a material (copper is a popular choice) which becomes unstable and decays to form a lighter material (nickel, assuming you start with copper), giving off energy in the process.
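A quick mass-energy check shows why the "heavy" qualifier is doing all the work in that theory: converting an electron plus a proton into a neutron is endothermic by nearly 0.8 MeV, tens of millions of times more than the thermal energy available at room temperature. The particle masses below are standard values; the comparison itself is just illustrative arithmetic:

```python
# Mass deficit for e- + p -> n + neutrino (neutrino mass is negligible).
M_N = 939.565   # neutron mass (MeV/c^2)
M_P = 938.272   # proton mass (MeV/c^2)
M_E = 0.511     # electron mass (MeV/c^2)

deficit_mev = M_N - (M_P + M_E)    # energy the electron must somehow supply
kT_room_ev = 0.025                 # typical thermal energy at room temperature (eV)
shortfall = deficit_mev * 1e6 / kT_room_ev
print(f"required: {deficit_mev:.3f} MeV, "
      f"thermal budget: {kT_room_ev} eV (short by a factor of {shortfall:.1e})")
```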

At least one paper argues that Widom and Larsen made some serious errors in their calculations that thoroughly undermine their theory. But even if you assume the Widom-Larsen paper is correct, there should be detectable neutrons produced in cold fusion experiments. (Incidentally, it's primarily because no neutrons were detected in the original cold fusion experiments of Pons and Fleischmann that physicists were first clued in to the fact that no fusion was happening at all.)

Some proponents claim that the neutrons produced in the Widom-Larsen theory are trapped in the sample material and rapidly absorbed by atoms. But because the neutrons are formed at room temperature, they should have energies typical of thermal neutrons, which move on average at about 2000 meters a second. That means that a large fraction of them should escape the sample and be easily detectable. Those that don't escape, but instead are absorbed by atoms, would also lead to detectable radiation as the neutron-activated portions of the material decay. Either way, it would be pretty dangerous to be near an experiment like that, if it worked. The fact that cold fusion researchers are alive is fairly good evidence that their experiments aren't doing what they think they're doing.
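The speed quoted above falls straight out of the Maxwell-Boltzmann distribution; here's a quick sketch computing the most probable speed of a thermal neutron at room temperature (standard constants, nothing specific to cold fusion experiments):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant (J/K)
M_NEUTRON = 1.675e-27  # neutron mass (kg)
T = 293.0              # room temperature (K)

# Most probable speed of a Maxwell-Boltzmann distribution: v = sqrt(2kT/m)
v_p = math.sqrt(2 * K_B * T / M_NEUTRON)
print(f"most probable thermal neutron speed: {v_p:.0f} m/s")
```

That's on the order of 2 km/s, fast enough that plenty of them would escape a tabletop sample and register on an ordinary neutron detector.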

But if you're willing to believe Widom-Larsen, and you suspend your disbelief long enough to accept that the neutrons stay in the sample for some reason and that the energy released doesn't include any detectable radiation, it should still be pretty easy to determine whether the experiments work. All you'd have to do is look for nickel in a sample that initially consisted of pure copper. If published proof exists, I haven't found it yet (please send links to peer-reviewed publications, if you've seen something).

Instead, people like NASA's Dennis Bushnell are happy with decidedly unscientific evidence for cold fusion. Among other things, Bushnell notes that "... several labs have blown up studying LENR and windows have melted, indicating when the conditions are 'right' prodigious amounts of energy can be produced and released."

Of course, chemical reactions can blow things up and melt glass too. There's no reason to conclude nuclear reactions were responsible. And it certainly isn't publishable proof of cold fusion. Considering that most of these experiments involve hydrogen gas and electricity, it's not at all surprising that labs go up in flames on occasion.

On a related note, a recent article in Forbes magazine reported that Lewis Larsen, of the above-mentioned Widom-Larsen theory, claims that measurements of the isotopes of mercury in compact fluorescent bulbs indicate that LENR reactions are taking place in light fixtures everywhere. If only it were true, it would offer serious support for the Widom-Larsen theory.

It's too bad the paper Larsen cites says nothing of the sort. According to an article in Chemical and Engineering News, the scientists who performed the study of gas in fluorescent bulbs were motivated by the knowledge that some mercury isotopes are absorbed in the glass of the bulbs more readily than others. The isotope ratio inside isn't changing because of nuclear reactions, but because the isotopes soak into the glass at different rates. Sorry, Lewis Larsen - nice try.

Chimpanzee–human last common ancestor

From Wikipedia, the free encyclopedia

The chimpanzee–human last common ancestor (CHLCA, CLCA, or C/H LCA) is the last species that humans, bonobos and chimpanzees share as a common ancestor.

In human genetic studies, the CHLCA is useful as an anchor point for calculating single-nucleotide polymorphism (SNP) rates in human populations where chimpanzees are used as an outgroup.[citation needed] The CHLCA is frequently cited as an anchor for molecular time to most recent common ancestor (TMRCA) determination because the two species of the genus Pan, the bonobo and the chimpanzee, are the species most genetically similar to Homo sapiens.

Time estimates

The age of the CHLCA is an estimate. The fossil finds of Ardipithecus kadabba, Sahelanthropus tchadensis, and Orrorin tugenensis are closest in age and expected morphology to the CHLCA and suggest the LCA (last common ancestor) is older than 7 million years. The earliest studies of apes suggested the CHLCA may have been as old as 25 million years; however, protein studies in the 1970s suggested the CHLCA was less than 8 million years in age. Genetic methods based on orangutan/human and gibbon/human LCA times were then used to estimate a chimpanzee/human LCA of 6 million years, and LCA times between 5 and 7 million years ago are currently used in the literature.[note 1]
One no longer has the option of considering a fossil older than about eight million years as a hominid no matter what it looks like.
—V. Sarich, Background for man[1]
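The genetic estimates rest on simple molecular-clock arithmetic: sequence divergence accumulates along both lineages after the split, so the split time is the per-site divergence divided by twice the per-site substitution rate. The numbers in this sketch (about 1.2% human–chimpanzee divergence and a rate of 10⁻⁹ substitutions per site per year) are illustrative round figures, not values taken from this article:

```python
# Molecular-clock estimate of split time: d = 2 * mu * T  =>  T = d / (2 * mu)
divergence = 0.012   # human-chimp per-site divergence (~1.2%, illustrative)
mu = 1.0e-9          # substitution rate per site per year (illustrative)

t_split_years = divergence / (2 * mu)
print(f"estimated split: {t_split_years / 1e6:.1f} million years ago")
```

Shifting either the assumed divergence or the assumed rate moves the answer, which is one reason published estimates of the split span several million years.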

Because chimps and humans share a matrilineal ancestor, establishing the geological age of that last ancestor allows the estimation of the mutation rate. However, fossils of the exact last common ancestor would be an extremely rare find. The CHLCA is frequently cited as an anchor for mt-TMRCA determination because chimpanzees are the species most genetically similar to humans. However, no known fossils represent the CHLCA, and no fossils have been clearly identified as proto-chimpanzee or proto-gorilla. Richard Dawkins, in his book The Ancestor's Tale, proposes that robust australopithecines such as Paranthropus are the ancestors of gorillas, whereas some of the gracile australopithecines are the ancestors of chimpanzees (see Homininae).
In effect, there is now no a priori reason to presume that human-chimpanzee split times are especially recent, and the fossil evidence is now fully compatible with older chimpanzee-human divergence dates [7 to 10 Ma...
—White et al. (2009), [2]

Some researchers have tried to estimate the age of the CHLCA (TCHLCA) using biopolymer structures which differ slightly between closely related animals. Among these researchers, Allan C. Wilson and Vincent Sarich were pioneers in the development of the molecular clock for humans. Working on protein sequences, they eventually determined that apes were closer to humans than some paleontologists had perceived based on the fossil record.[note 2] Later, Vincent Sarich concluded that the TCHLCA was no greater than 8 million years, with a favored range between 4 and 6 million years before present.

This paradigmatic age stuck with molecular anthropology until the late 1990s, when others began questioning the certainty of the assumption. Currently, the estimation of the TCHLCA is less certain, and there is genetic as well as paleontological support for increasing it. A 13-million-year TCHLCA is one proposed age.[2][3]

A source of confusion in determining the exact age of the Pan–Homo split is evidence of a more complex speciation process rather than a clean split between the two lineages. Different chromosomes appear to have split at different times, possibly over a period as long as 4 million years, indicating a long, drawn-out speciation process with large-scale hybridization events between the two emerging lineages.[4] The X chromosome in particular shows very little difference between humans and chimpanzees, though this effect may also partly be the result of rapid evolution of the X chromosome in the last common ancestors.[5] Complex speciation and incomplete lineage sorting of genetic sequences seem also to have occurred in the split between our lineage and that of the gorilla, indicating that "messy" speciation is the rule rather than the exception in large-bodied primates.[6][7] Such a scenario would explain why the divergence age between Homo and Pan has varied with the chosen method and why a single split date has so far been hard to pin down.

Richard Wrangham argued that the CHLCA was so similar to the chimpanzee (Pan troglodytes) that it should be classified as a member of the genus Pan and called Pan prior.[8]

Notes

  1. ^ Studies have pointed to the slowing of the molecular clock as monkeys evolved into apes and apes evolved into humans. In particular, macaque mtDNA has evolved 30% more rapidly than African ape mtDNA.
  2. ^ "If man and old world monkeys last shared a common ancestor 30 million years ago, then man and African apes shared a common ancestor 5 million years ago..." Sarich & Wilson (1971)

References

  1. ^ Background for man: readings in physical anthropology, 1971.
  2. ^ a b White TD, Asfaw B, Beyene Y, et al. (October 2009). "Ardipithecus ramidus and the paleobiology of early hominids". Science 326 (5949): 75–86. doi:10.1126/science.1175802. PMID 19810190.
  3. ^ Arnason U, Gullberg A, Janke A (December 1998). "Molecular timing of primate divergences as estimated by two nonprimate calibration points". J. Mol. Evol. 47 (6): 718–27. doi:10.1007/PL00006431. PMID 9847414.
  4. ^ Patterson N, Richter DJ, Gnerre S, Lander ES, Reich D (June 2006). "Genetic evidence for complex speciation of humans and chimpanzees". Nature 441 (7097): 1103–8. doi:10.1038/nature04789. PMID 16710306.
  5. ^ Wakeley J (March 2008). "Complex speciation of humans and chimpanzees". Nature 452 (7184): E3–4; discussion E4. doi:10.1038/nature06805. PMID 18337768.
  6. ^ Scally A, Dutheil JY, Hillier LW, et al. (March 2012). "Insights into hominid evolution from the gorilla genome sequence". Nature 483 (7388): 169–75. doi:10.1038/nature10842. PMC 3303130. PMID 22398555.
  7. ^ Van Arsdale, A.P. "Go, go, Gorilla genome". The Pleistocene Scene – A.P. Van Arsdale Blog. Retrieved 16 November 2012.
  8. ^ De Waal, Frans B. M. (2002-10-15). Tree of Origin: What Primate Behavior Can Tell Us About Human Social Evolution. pp. 124–126. ISBN 9780674010048.
 

Neurophilosophy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neurophilosophy ...