By Kate Becker on Wed, 11 Feb 2015
Original link: http://www.pbs.org/wgbh/nova/blogs/physics/2015/02/falsifiability/

If a theory doesn’t make a testable prediction, it isn’t science.
It’s a basic axiom of the scientific method, dubbed “falsifiability” by the 20th-century philosopher of science Karl Popper. General relativity passes the falsifiability test because, in addition to elegantly accounting for previously observed phenomena like the precession of Mercury’s orbit, it also made predictions about as-yet-unseen effects—how light should bend around the Sun, the way clocks should seem to run slower in a strong gravitational field, and other effects that have since been borne out by experiment. On the other hand, theories like Marxism and Freudian psychoanalysis failed the falsifiability test—in Popper’s mind, at least—because they could be twisted to explain nearly any “data” about the world. As Wolfgang Pauli is said to have put it, skewering one student’s apparently unfalsifiable idea: “This isn’t right. It’s not even wrong.”
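To get a feel for how concrete a falsifiable prediction can be, here is a rough sketch of the textbook numbers behind those classic tests (standard general relativity values, not figures drawn from the original article):

```latex
% Deflection of a light ray grazing the Sun's limb (about 1.75 arcseconds),
% the effect confirmed by Eddington's 1919 eclipse expedition:
\[
  \delta\phi = \frac{4 G M_{\odot}}{c^{2} R_{\odot}} \approx 1.75''
\]
% Anomalous precession of Mercury's perihelion, roughly 43 arcseconds per
% century beyond what Newtonian gravity accounts for:
\[
  \Delta\varpi \approx 43''\ \mathrm{per\ century}
\]
% Gravitational time dilation: a clock at radius r from a mass M runs slow,
% relative to a distant observer, by the factor
\[
  \frac{d\tau}{dt} = \sqrt{1 - \frac{2 G M}{r c^{2}}}
\]
```

Each expression names a definite number that observation could have contradicted, which is exactly what makes the theory falsifiable in Popper’s sense.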
Now, some physicists and philosophers think it is time to reconsider the notion of falsifiability. Could a theory that provides an elegant and accurate account of the world around us—even if its predictions can’t be tested by today’s experiments, or tomorrow’s—still “count” as science?
As theory pulls further and further ahead of the capabilities of experiment, physicists are taking this question seriously. “We are in various ways hitting the limits of what will ever be testable, unless we have misunderstood some essential point about the nature of reality,” says theoretical cosmologist George Ellis. “We have now seen all the visible universe (i.e., back to the visual horizon) and only gravitational waves remain to test further; and we are approaching the limits of what particle colliders it will ever be feasible to build, for economic and technical reasons.”
Case in point: String theory. The darling of many theorists, string theory represents the basic building blocks of matter as vibrating strings. The strings take on different properties depending on their modes of vibration, just as the strings of a violin produce different notes depending on how they are played. To string theorists, the whole universe is a boisterous symphony performed upon these strings.
It’s a lovely idea. Lovelier yet, string theory could unify general relativity with quantum mechanics, solving what is perhaps the most stubborn problem in fundamental physics. The trouble? To put string theory to the test, we may need experiments that operate at energies far higher than any modern collider can achieve. It’s possible that experimental tests of the predictions of string theory will never be within our reach.
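To put rough numbers on that gap (an illustrative aside, not a claim from the article): in many formulations, the characteristic string scale sits somewhere near the Planck energy, the scale at which quantum-gravity effects are expected to matter.

```latex
% Planck energy (standard rounded value):
\[
  E_{\mathrm{P}} = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.2 \times 10^{19}\ \mathrm{GeV}
\]
% For comparison, the LHC collides protons at about 13 TeV:
\[
  E_{\mathrm{LHC}} \approx 1.3 \times 10^{4}\ \mathrm{GeV}
\]
% The gap is roughly fifteen orders of magnitude, which is why direct
% collider tests at the string scale look out of reach for the foreseeable
% future.
```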
Meanwhile, cosmologists have found themselves at a similar impasse. We live in a universe that is, by some estimations, too good to be true. The fundamental constants of nature and the cosmological constant, which drives the accelerating expansion of the universe, seem “fine-tuned” to allow galaxies and stars to form. As Anil Ananthaswamy wrote elsewhere on this blog, “Tweak the charge on an electron, for instance, or change the strength of the gravitational force or the strong nuclear force just a smidgen, and the universe would look very different, and likely be lifeless.”
Why do these numbers, which are essential features of the universe and cannot be derived from more fundamental quantities, appear to conspire for our comfort?
One answer goes: If they were different, we wouldn’t be here to ask the question.
This is called the “anthropic principle,” and if you think it feels like a cosmic punt, you’re not alone. Researchers have been trying to underpin our apparent stroke of luck with hard science for decades.
String theory suggests a solution: It predicts that our universe is just one among a multitude of universes, each with its own fundamental constants. If the cosmic lottery has played out billions of times, it isn’t so remarkable that the winning numbers for life should come up at least once.
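The arithmetic behind that intuition is the familiar “at least one success” calculation. As a sketch, with a purely hypothetical probability p that any single universe draws life-friendly constants:

```latex
% With N independent universes, each with some (hypothetical, possibly
% tiny) probability p of drawing life-friendly constants, the chance that
% at least one of them does so is
\[
  P_{\geq 1} \;=\; 1 - (1 - p)^{N}
\]
% which approaches 1 as N grows, however small p is: a large enough
% multiverse makes a winning ticket all but inevitable.
```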
In fact, you can reason your way to the “multiverse” in at least four different ways, according to MIT physicist Max Tegmark’s accounting. The tricky part is testing the idea. You can’t send or receive messages from neighboring universes, and most formulations of multiverse theory don’t make any testable predictions. Yet the theory provides a neat solution to the fine-tuning problem. Must we throw it out because it fails the falsifiability test?
“It would be completely non-scientific to ignore that possibility just because it doesn’t conform with some preexisting philosophical prejudices,” says Sean Carroll, a physicist at Caltech, who called for the “retirement” of the falsifiability principle in a controversial essay for Edge last year. Falsifiability is “just a simple motto that non-philosophically-trained scientists have latched onto,” argues Carroll.
He also bristles at the notion that this viewpoint can be summed up as “elegance will suffice,” as Ellis put it in a stinging Nature comment written with cosmologist Joe Silk.
“Elegance can help us invent new theories, but does not count as empirical evidence in their favor,” says Carroll. “The criteria we use for judging theories are how good they are at accounting for the data, not how pretty or seductive or intuitive they are.”
But Ellis and Silk worry that if physicists abandon falsifiability, they could damage the public’s trust in science and scientists at a time when that trust is critical to policymaking. “This battle for the heart and soul of physics is opening up at a time when scientific results—in topics from climate change to the theory of evolution—are being questioned by some politicians and religious fundamentalists,” Ellis and Silk wrote in Nature.
“The fear is that it would become difficult to separate such ‘science’ from New Age thinking, or science fiction,” says Ellis. If scientists backpedal on falsifiability, Ellis fears, intellectual disputes that were once resolved by experiment will devolve into never-ending philosophical feuds, and both the progress and the reputation of science will suffer.
But Carroll argues that he is simply calling for greater openness and honesty about the way science really happens. “I think that it’s more important than ever that scientists tell the truth. And the truth is that in practice, falsifiability is not a good criterion for telling science from non-science,” he says.
Perhaps “falsifiability” isn’t up to shouldering the full scientific and philosophical burden that’s been placed on it. “Sean is right that ‘falsifiability’ is a crude slogan that fails to capture what science really aims at,” argues MIT computer scientist Scott Aaronson, writing on his blog Shtetl-Optimized. Yet, writes Aaronson, “falsifiability shouldn’t be ‘retired.’ Instead, falsifiability’s portfolio should be expanded, with full-time assistants (like explanatory power) hired to lighten falsifiability’s load.”
“I think falsifiability is not a perfect criterion, but it’s much less pernicious than what’s being served up by the ‘post-empirical’ faction,” says Frank Wilczek, a physicist at MIT. “Falsifiability is too impatient, in some sense,” putting immediate demands on theories that are not yet mature enough to meet them. “It’s an important discipline, but if it is applied too rigorously and too early, it can be stifling.”
So, where do we go from here?
“We need to rethink these issues in a philosophically sophisticated way that also takes the best interpretations of fundamental science, and its limitations, seriously,” says Ellis. “Maybe we have to accept uncertainty as a profound aspect of our understanding of the universe in cosmology as well as particle physics.”