
Monday, December 23, 2013

To suppose that the eye -- Some wisdom from Charles Darwin

[Image: Charles Darwin, seated]

To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree. Yet reason tells me, that if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly, and the variations be inherited, which is certainly the case; and if variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real.

From Quarks to Quasars » The Physics of Death:

Posted on December 18, 2013 at 7:13 pm

Larger Image: http://imgur.com/gallery/cC8sAOw (via All Science, All the Time)
Credit: depositphotos

Attached to the bottom of this article, we have included one of the most profound poems we know of about death. It’s one of my favorites, as it doesn’t dwell on the many disturbing things that happen to the body during decomposition, when the cells and tissues break down and ravage the physical remnant of a person’s life. Instead, it looks at the subject from a physics standpoint: the redistribution of energy that occurs during the decomposition process.

In life, the human body is composed of matter and energy, the latter both electrical (neurons firing electrical impulses) and chemical. (The same can be said of plants, which are powered by photosynthesis, generating chemical energy from sunlight.) At any given moment, the body puts out some 20 watts of power (watts measure power, not energy), which is sufficient to run a small light bulb. This energy is acquired in many ways, mostly through the consumption of food, which gives us chemical energy. That chemical energy is then used to power our muscles and faculties, transforming into kinetic energy.
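Taking the article's 20-watt figure at face value (and remembering that a watt is a rate, one joule per second), a few lines of arithmetic show what a constant power output implies per day:

```python
# Convert a constant power output into energy burned per day.
# The 20 W input below is the article's figure, taken as an assumption.
JOULES_PER_KCAL = 4184          # 1 dietary Calorie (kcal) = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

def daily_kcal(power_watts: float) -> float:
    """Energy output per day, in kcal, at a constant power level."""
    return power_watts * SECONDS_PER_DAY / JOULES_PER_KCAL

print(round(daily_kcal(20)))    # ~413 kcal/day at 20 W
print(round(daily_kcal(100)))   # ~2065 kcal/day at 100 W
```

Notice that a typical 2,000-kcal daily diet corresponds to an average output closer to 100 watts for the whole body; the often-quoted 20-watt figure is roughly the brain's share alone.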

Credit: Jon Sullivan

As we know from thermodynamics, energy cannot be created or destroyed; it can only change form. The total amount of energy in an isolated system does not, and cannot, change (and thanks to Einstein, we also know that matter and energy are two rungs on the same ladder). The universe as a whole IS closed. However, human bodies (and ecosystems alike) are not closed systems but open systems, which essentially means that we exchange energy with our surroundings. We can gain energy (again, through chemical processes) and we can lose it (by expelling waste or emitting heat from our bodies).

In death, your atoms (and the energy contained within your body) are returned to the universe, where they are subsequently used in various other substances and forms. These same atoms and energy, which originated during the big bang, will always be around. Therefore, your “light,” the essence of your energy (not to be confused with your actual consciousness) will continue to echo throughout spacetime until the lights around here go out permanently.

And we leave you with this jewel:


Larger Image: http://imgur.com/gallery/cC8sAOw (via All Science, All the Time)

Someone Please Explain Why This Would Fool Anyone Knowledgeable About Evolution?

Source:  http://darwins-god.blogspot.com/2013/07/evolutionist-on-complexity-to-some.html
Evolutionist on Complexity: “To some extent, it just happens”
 
David Strumfels -- this is just another example of taking selected facts about evolution and exploiting people's ignorance of it to make the science seem absurd. There is nothing new here, and nothing that hasn't been refuted a hundred times.

Who Will Tell the People?

No sooner had we pointed out that we know “very little” about how evolutionary innovations originate (as Andreas Wagner admitted, “exactly how new traits emerge is a question that has long puzzled evolutionary biologists”) and that such inconvenient truths are rarely admitted in public, than leading science writer Carl Zimmer, as if on cue, writing for Scientific American on the topic of “how organisms can evolve elaborate structures,” informed his readers that when it comes to complexity, “to some extent, it just happens”; that “intricate systems of proteins can evolve from simpler ones”; and finally that “studies suggest” random mutations “can fuel the emergence of complexity.”

That incredible sequence of whoppers makes us wonder, why is it that we cannot simply tell the truth about the science? Who will tell the people?

Biosynthesis captured in motion

Original:  Biosynthesis captured in motion
Read more at: http://phys.org/news/2013-12-biosynthesis-captured-motion.html#jCp
by Susan Brown


Linking carrier proteins (red) to the enzymes (blue) that synthesize fatty acids revealed this snapshot of biosynthesis in action.

(Phys.org) —Chemists have caught molecules in the act of biosynthesis, revealing an animated view of how a fundamental piece of cellular machinery operates. The system they observed, a critical metabolic pathway, generates fatty acids, essential components of fats and of structures such as cell membranes. Nature published their findings in its early online edition on December 22.

Scientists would like to regulate this pathway for diverse purposes: to curb the chaotic growth of cancer cells or harmful bacteria, for example, or to boost the production of oils by algae to be harvested for fuel. But these efforts often fail, for lack of complete information about how molecules involved in the synthesis interact.

"We need to decipher the communication code, to understand how proteins work with each other on a molecular level," said Michael Burkart, a biological chemist at the University of California, San Diego, who led the project in partnership with Shiou-Chuan (Sheryl) Tsai, a biochemist at UC Irvine.

The challenge is that the molecules at work are usually in motion. "It's analogous to a huge machine," Tsai said. "We need to know how different parts of the engine fit and work together."

Capturing images of the varying configurations required a team of chemists with complementary areas of expertise. Their joint work has revealed the molecular dance of a carrier protein that protects the growing chain of fatty acids and ferries it between the active sites of the enzymes that assemble it.

Tsai is an expert in X-ray crystallography, a method chemists use to lock proteins in place and generate still images of their structures. The protein this team studied, acyl carrier protein or ACP, proved particularly challenging because it moves so much.

To pin down ACP and other proteins, Burkart's research group has developed a molecular toolkit, a set of small molecules that can lock proteins together in their working configurations.

"Mike pins them, and I take the pictures," said Tsai, who holds appointments in the departments of Molecular Biology and Biochemistry, Chemistry, and Pharmaceutical Sciences at UC Irvine. The strategy resulted in two snapshots, showing two different states of interaction.

Then, working with Stanley Opella, professor of chemistry and biochemistry at UC San Diego, they extended their observations using a different method, nuclear magnetic resonance (NMR) spectroscopy, which reveals the motions of the carrier protein and enzyme in solution. These observations were validated by dynamic modeling of the molecule by a research group led by J. Andrew McCammon, also a professor of chemistry and biochemistry at UC San Diego.

They saw salt bridges form and the enzyme grab helices and pull them apart. The growing chain of fatty acids also loops back into a pocket in the carrier molecule. "We call it the marsupial protein," Burkart says. "It protects its young." That is, until it's time for the product to go.

"We thought this tiny protein was just a transporter," Tsai said. "But when it ejects its cargo, it works like a piston. We see a helical collapse forcing the fatty acid out."

The approach will allow this team to continue to unravel interactions between proteins, a critical step to the successful manipulation of biosynthetic pathways.
Journal reference: Nature

Ode to a Flower: Richard Feynman’s Famous Monologue on Knowledge and Mystery, Animated | Brain Pickings

“The science knowledge only adds to the excitement, the mystery and the awe of a flower.”


Richard Feynman: champion of scientific culture, graphic novel hero, crusader for integrity, holder of the key to science, adviser of future generations, bongo player, no ordinary genius. In this fantastic animated adaptation of an excerpt from Christopher Sykes’s celebrated 1981 BBC documentary about Feynman, The Pleasure of Finding Things Out (which gave us the great physicist’s timeless words on beauty, honors, and curiosity, and his fascinating explanation of where trees actually come from), Fraser Davidson captures in stunning motion graphics Feynman’s short, sublime soliloquy on why knowledge enriches life rather than detracting from its mystery. It is the best thing since the animated adaptation of Carl Sagan’s Pale Blue Dot.

The message at the heart of Feynman’s monologue — to celebrate the beauty of the mysterious, embrace the unfamiliar, and live the questions — is a beautiful mantra on which to center the new year.

I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say “look how beautiful it is,” and I’ll agree. Then he says “I as an artist can see how beautiful this is but you as a scientist take this all apart and it becomes a dull thing,” and I think that he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe… 
I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don’t understand how it subtracts.

Scientists Petition U.S. Congress for Return to the Moon | Space.com

Original article:  http://www.space.com/24068-destination-moon-petition-congress.html?cmpid=514648
 
by Leonard David
Lunar Exploration
China’s Chang’e 3 robotic landing on the moon has helped spur a political crusade in the United States to more aggressively explore and utilize the moon.

Scientific Groupthink and a Summary of the Evidence that most Published Research is False

If these articles are both true, then are Americans right to distrust scientists and their claims?

http://www.american.com/archive/2013/december/scientific-groupthink-and-gay-parenting

Part one:  Scientific Groupthink

 
Wednesday, December 18, 2013
The controversy over a recent study on gay parenting illustrates a sociopolitical groupthink operating in the social scientific community. Scientists should go where the science takes them, not where their politics does.

University of Texas sociology professor Mark Regnerus’s study, “How Different Are the Adult Children of Parents Who Have Same-Sex Relationships? Findings from the New Family Structures Study,” published in the academic journal Social Science Research last year, caused a firestorm in the scientific community. Unlike most previous studies, Regnerus found that children of parents who had experienced a same-sex relationship fared worse than children of heterosexual parents on measures of social, emotional, and psychological adjustment as well as educational attainment, employment history, need for public assistance, substance abuse, and criminal justice system involvement.

The reaction to the Regnerus study was swift and harsh. Many of his academic colleagues said it was fatally flawed. Many questioned the motives of the author, reviewers, and journal editor. Did they have an anti-gay political agenda?

The controversy illustrates how tougher standards for assessing scientific worth are applied if a study produces results that are inconsistent with the scientists’ own political views. Suppose Regnerus had conducted an identical study, with the same methodological flaws, that had produced results consistent with previous studies, finding no differences between the children of gay or lesbian ("lesbigay") versus heterosexual parents. Would this one study (among the over 60 studies on lesbigay parenting) receive the same criticism, or any criticism at all, from the academic community? Would 201 scholars send a letter to the journal objecting to its publication of the study? Would the author’s former department chair publish an op-ed saying that she was “furious” about her junior colleague’s “pseudo-science”? Would academics make allegations in blogs and other forums about the integrity of the author, journal editor, and editorial review process?1 Would the professor’s university subject him to an intrusive investigation for possible scientific misconduct (of which it found no evidence)? And would similar attacks have been launched against other researchers who dared to question the scholarly consensus?2
This is not the first time that science has clashed with politics. The Bell Curve, a book about the heritability of intelligence and the resulting libertarian or conservative policy implications, created great controversy. The Regnerus case unfolded similarly to the controversy surrounding the publication of a meta-analysis of child sexual abuse studies that was published in the journal Psychological Bulletin and reported that childhood sexual abuse often caused few long-lasting psychological effects. The article caused outrage. The study was attacked as substandard, and many questioned the authors’ motives and alleged scientific misconduct.

Most would acknowledge that science, particularly policy-relevant social science, is often politicized. The Regnerus controversy illustrates that scientists’ sociopolitical views frequently affect the kind of science that is conducted on policy-relevant questions, how findings are interpreted and received, and the degree of critical scrutiny such studies receive.

Scientific Groupthink
“If when a study yields an unpopular conclusion it is subjected to greater scrutiny, and more effort is expended towards its refutation, an obvious bias to ‘find what the community is looking for’ will have been introduced.”3

The Regnerus case illustrates a sociopolitical groupthink operating in the social scientific community. Surveys of the professoriate consistently find faculties to be quite lopsidedly liberal. The political imbalance is particularly acute in the social sciences, with liberal-conservative ratios between 8:1 and 30:1 in most disciplines, particularly with respect to social issues like gay marriage.

Such homogeneity of sociopolitical views among social scientists almost invariably leads to “groupthink,” a phenomenon that occurs when group members have relatively homogeneous backgrounds or ideological views. With this groupthink comes self-censorship and pressure on dissenters, the negative stereotyping and discounting of conservative perspectives, and a failure to consider conservative-friendly (as compared with liberal-friendly) question framing and data interpretation. A recent national survey of psychology professors found that one in four reported that they would be less likely to give a positive recommendation on a journal manuscript or grant application having a conservative perspective, and one in six would be less likely to invite conservative colleagues to participate in a symposium. In sociology, Notre Dame University Sociology Professor Christian Smith notes that:
 
The temptation . . . to advance a political agenda is too often indulged in sociology, especially by activist faculty in certain fields, like marriage, family, sex, and gender ... Research programs that advance narrow agendas compatible with particular ideologies are privileged ... the influence of progressive orthodoxy in sociology is evident in decisions made by graduate students, junior faculty, and even senior faculty about what, why, and how to research, publish, and teach ... The result is predictable: Play it politically safe, avoid controversial questions, publish the right conclusions.

Regnerus did not, however, play it safe. He did not publish the right conclusions on a politically controversial topic. Politically correct sociologists, on the other hand, enjoy certain privileges in a very politically conscious and liberal discipline. Indeed, there sometimes is the belief “that social science should be an instrument for social change and thus should promote the ‘correct’ values and ideological positions.”4

No wonder there is so little research by academics that arguably supports conservative policy perspectives. When such research is published, the Regnerus controversy illustrates how it may be received. Critics used the liberal norms and privileges of their discipline to marginalize the Regnerus study. A point-by-point methodological comparison of the Regnerus study alongside previous lesbigay parenting studies reveals the selective scrutiny applied by the critics of the Regnerus study.5

Ideological Diversity Is the Antidote
“No one knows how many research programs [social scientists] have failed to launch, or how many research discoveries they have failed to make, as a result of the skew in the distribution of [political] views within their discipline.”6

Contrary to the critics’ concerns about the political conservatism of Regnerus and his funders, the Regnerus study illustrates the value of ideological diversity among both researchers and funders. The allegedly conservative researcher Regnerus, funded by advocacy organizations opposing gay marriage, conducted a study producing findings useful to gay marriage opponents. Many previous studies were conducted and/or funded by those favoring gay marriage, and they produced findings useful to the gay-marriage cause.
It is not surprising, nor is it indicative of nefarious scientific misconduct, that researchers of different ideological persuasions would produce findings consistent with their own ideology. It is human nature to frame research questions and interpret findings in ways that confirm one’s political beliefs. Such biases are the norm, even among scientists. This is particularly true when it comes to research on social issues because social scientists, many of whom were attracted to social science because of its progressive ideology, often have values invested in the issues they research. One can find such ideological tilt throughout social science research. For instance, how researchers interpret data on the relative contributions of hereditary factors versus environment to intelligence, or on biological factors in personality styles, seems to be partly a function of their political views.

Politics inevitably enter into the scientific endeavor as a consequence of the sociopolitical, parochial, financial, or career interests of researchers, funders, and professional organizations as well as those of the larger scientific community and polity. Scientists’ values and interests influence how they define and conceptualize social and behavioral issues, the data collection and analysis methods chosen, how results are interpreted, how scientists scrutinize and evaluate a study’s quality, and whether there are incentives or disincentives to advance research findings in policy advocacy.

Because biases are endemic to the scientific enterprise, the Regnerus case illustrates how research conducted or funded by those outside the sociopolitical mainstream, insofar as social scientists are concerned, may be the only way that “politically incorrect” research challenging the scientific consensus gets done. Theoretical or ideological homogeneity among researchers tends to produce myopic, one-sided research, whereas ideological diversity fosters a more dynamic climate that encourages unorthodox, diverse (and sometimes politically incorrect) research. Not only do those in the political minority bring diverse perspectives to the research endeavor, but their very presence has the effect of widening perspective and reducing bias in the rest of the scientific community. If social scientists were embedded in ideologically diverse networks of other scientists, they would be more likely to consider and test alternative hypotheses and perspectives on the social issues they research.

Science and Scientists in the Policy Debate
“Social scientists are never more revealing of themselves than when challenging the objectivity of one another’s work. In some fields almost any study is assumed to have a more or less discoverable political purpose.”7

Especially with controversies like the Regnerus study, it is no wonder that policymakers of all political persuasions are often skeptical about policy research coming from the academy, or that conservatives’ trust in science has dipped to an all-time low. This is what happens when policy-relevant research fails to be politically inclusive because virtually everyone funding and doing the research comes from the same political perspective.
Indeed, scientists who do research on policy issues arguably have an obligation to inform policymakers and the public about their research findings. But it is dangerous for science, policymaking, and the public’s trust in science when scientists are encouraged to do so only when the science supports liberal positions but are discouraged from doing so, or risk disapprobation from their colleagues, when the findings do not. Sadly, this is often the case. Scientists should go where the science takes them, not where their politics does. To attack a study based on the political incorrectness of its findings or its author’s and funder’s politics is scientifically irrelevant and ad hominem. Rather, studies must stand or fall on the weight of their methodological reliability and validity.

Part Two:  A summary of the evidence that most published research is false




One of the hottest debates in science right now comes down to two main claims:
  • Most published research is false
  • There is a reproducibility crisis in science
The first claim is often stated in a slightly different way: that most results of scientific experiments do not replicate. I recently got caught up in this debate and I frequently get asked about it.
So I thought I'd do a very brief review of the reported evidence for the two perceived crises. An important point is that all of the scientists below have made the best effort they can to tackle a fairly complicated problem, and these are early days in the study of science-wise false discovery rates. But the take-home message is that there is currently no definitive evidence one way or the other about whether most results are false.
  1. Paper: Why most published research findings are false. Main idea: People use hypothesis testing to determine if specific scientific discoveries are significant. This significance calculation is used as a screening mechanism in the scientific literature. Under assumptions about the way people perform these tests and report them, it is possible to construct a universe where most published findings are false positive results. Important drawback: The paper contains no real data; it is based purely on conjecture and simulation.
  2. Paper: Drug development: Raise standards for preclinical research. Main idea: Many drugs fail when they move through the development process. Amgen scientists tried to replicate 53 high-profile basic research findings in cancer and could only replicate 6. Important drawback: This is not a scientific paper. The study design, replication attempts, selected studies, and the statistical methods used to define "replicate" are not described. No data is available or provided.
  3. Paper: An estimate of the science-wise false discovery rate and application to the top medical literature. Main idea: The paper collects P-values from published abstracts of papers in the medical literature and uses a statistical method to estimate the false discovery rate proposed in paper 1 above. Important drawback: The paper only collected data from the abstracts of major medical journals, and P-values can be manipulated in many ways that could call the statistical results into question.
  4. Paper: Revised standards for statistical evidence. Main idea: The P-value cutoff of 0.05 is used by many journals to determine statistical significance. This paper proposes an alternative screening method for hypotheses based on Bayes factors. Important drawback: The paper is a theoretical and philosophical argument for simple hypothesis tests. The data analysis recalculates Bayes factors for reported t-statistics, plots the Bayes factor versus the t-test, and then argues for why one is better than the other.
  5. Paper: Contradicted and initially stronger effects in highly cited research. Main idea: This paper looks at studies that attempted to answer the same scientific question, where the second study had a larger sample size or a more robust (e.g., randomized trial) study design. Some effects reported in the second study do not exactly match the results from the first. Important drawback: The title does not match the results. 16% of studies were contradicted (meaning an effect in the opposite direction), 16% reported a smaller effect size, 44% were replicated, and 24% were unchallenged. So 16% + 44% + 24% = 84% were not contradicted. Lack of replication is also not proof of error.
  6. Paper: Modeling the effects of subjective and objective decision making in scientific peer review. Main idea: This paper considers a theoretical model of how referees of scientific papers may behave socially. The authors use simulations to point out how an effect called "herding" (basically peer-mimicking) may bias the review process. Important drawback: The model makes major simplifying assumptions about human behavior and supports its conclusions entirely with simulation. No data is presented.
  7. Paper: Repeatability of published microarray gene expression analyses. Main idea: This paper attempts to collect the data used in published papers and to repeat one randomly selected analysis from each paper. For many of the papers, the data was either unavailable or available in a format that made it difficult or impossible to repeat the original analysis; the software used was often unclear as well. Important drawback: This paper covered 18 data sets from 2005-2006. That is both early in the era of reproducibility and not comprehensive in any way. It says nothing about the rate of false discoveries in the medical literature, but it does speak to the reproducibility of genomics experiments 10 years ago.
  8. Paper: Investigating variation in replicability: The "Many Labs" replication project (not yet published). Main idea: The idea is to take a set of published high-profile results and have multiple labs try to replicate them. They successfully replicated 10 out of 13 results, and the distribution of results is about what you'd expect (see the embedded figure below). Important drawback: The paper isn't published yet and covers only 13 experiments. That said, it is by far the strongest, most comprehensive, and most reproducible analysis of replication among the papers surveyed here.
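The core argument of paper 1 can be reproduced in a few lines. If a fraction pi of tested hypotheses are actually true, tests have power (1 - beta), and the false-positive rate is alpha, then the positive predictive value (the share of "significant" findings that are actually true) follows from Bayes' rule. This sketch is my own illustration of that framework, not code from any of the papers:

```python
# Positive predictive value (PPV) of a statistically significant finding,
# in the spirit of "Why most published research findings are false".
# Assumptions: pre-study probability pi that a tested hypothesis is true,
# power 0.8, alpha 0.05; no bias or multiple-team effects modeled.

def ppv(pi: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """P(hypothesis true | test significant), by Bayes' rule."""
    true_positives = pi * power
    false_positives = (1 - pi) * alpha
    return true_positives / (true_positives + false_positives)

# If half of tested hypotheses are true, significant results are reliable:
print(round(ppv(0.5), 2))   # 0.94
# If only 1 in 100 tested hypotheses is true, most findings are false:
print(round(ppv(0.01), 2))  # 0.14
```

Whether most published findings are false thus hinges on the unobserved prior pi (and on the biases this simple model ignores), which is exactly why the evidence reviewed above remains inconclusive.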
I do think that the reviewed papers are important contributions because they draw attention to real concerns about the modern scientific process. Namely
  • We need more statistical literacy
  • We need more computational literacy
  • We need to require code be published
  • We need mechanisms of peer review that deal with code
  • We need a culture that doesn't use reproducibility as a weapon
  • We need increased transparency in review and evaluation of papers
Some of these have simple fixes (more statistics courses, publishing code); some are much, much harder (changing publication/review culture).
The Many Labs project (Paper 8) points out that statistical research is proceeding in a fairly reasonable fashion. Some effects are overestimated in individual studies, some are underestimated, and some are just about right. Regardless, no single study should stand alone as the last word about an important scientific issue. It obviously won't be possible to replicate every study as intensely as those in the Many Labs project, but this is a reassuring piece of evidence that things aren't as bad as some paper titles and headlines may make it seem.

Many labs data. Blue x's are original effect sizes. Other dots are effect sizes from replication experiments (http://rolfzwaan.blogspot.com/2013/11/what-can-we-learn-from-many-labs.html)
 
The Many Labs results suggest that the hype about the failures of science is, at the very least, premature. I think an equally important idea is that science has pretty much always worked with some number of false positive and irreplicable studies. This was beautifully described by Jared Horvath in this blog post from the Economist. I think the take-home message is that regardless of the rate of false discoveries, the scientific process has led to amazing and life-altering discoveries.

Hydrogen-like atom

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Hydrogen-like_atom ...