Atheists are one of the most despised minorities in the U.S., and anti-atheist bigotry is both widespread and socially acceptable in many areas. Given that many religious believers have convinced themselves that our refusal to share their beliefs makes us inherently immoral, it is not surprising that they condemn us. Some go so far as to claim that we are less than fully human, weakening the prohibitions against harming us that would normally be in place.
One response I have routinely encountered from Christians, and even a few atheists, is that negative attitudes aside, atheists are not actually discriminated against. Ah, denial: is there nothing you can't do?
What is Discrimination?
Discrimination is not the same thing as being treated unfairly. In the legal context in which discrimination is most relevant, it can be defined broadly as unequal treatment for a reason other than ability or legal rights. More precise definitions and tests of discrimination depend on the context. Thus, employment discrimination may work a bit differently than discrimination involving educational opportunity. Still, we can abstract some general principles from U.S. law. Federal (and state) laws prohibit discrimination in areas such as employment, housing, voting rights, educational opportunity, and civil rights on the basis of race, age, sex, nationality, disability, and religion.
Both Title VII of the Civil Rights Act of 1964 and the Fair Housing Act of 1968 explicitly prohibit discrimination on the basis of religion and the other factors noted above. That is, it is unlawful to discriminate against someone (i.e., to treat them unequally in certain specified matters) on the basis of their religious beliefs (or lack thereof).
Examples of Discrimination Against Atheists
What follows is by no means intended to be an exhaustive list. I intend only to provide a handful of notable examples which can be used to educate those arguing that atheists in the U.S. do not face any sort of discrimination on the basis of their atheism.
Some judges consider atheism to be a sufficient reason for denying custody to a parent during custody hearings.
Many private organizations, such as the Boy Scouts of America, deny membership solely on the basis of lack of god-belief. Some of these organizations also manage to receive public funding.
Atheists face many forms of employment discrimination, ranging from differential hiring practices to wrongful termination. A school district in Texas went so far as to refuse to do business with an atheist.
In addition to widespread anti-atheist bigotry in the U.S. military, there are reports of institutionalized discrimination designed to quash complaints made by atheists who dare to speak out.
A handful of states retain laws to prevent atheists from being permitted to hold public office in clear violation of the U.S. Constitution.
The mainstream media in the U.S. regularly excludes atheists, even from stories about atheism, while giving voice to religious believers.
A survey of atheist and other freethought groups completed by Margaret Downey in 2000 revealed that the overwhelming majority of instances of discrimination against atheists are never reported. Why? According to Downey,
...the fear of suffering further discrimination as a “whistleblower” was widespread. Some victims told me that they did not want to go public lest still more hatred come their way. This is the trauma of discrimination, just the sort of intimidation that discourages discrimination reports and makes it difficult to find plaintiffs for needed litigation.
We can all find examples of discrimination against atheists on the basis of their lack of god-belief. We should also be able to understand why there are not many more examples in the public record.
Leonardo Da Vinci, in his Treatise on Painting (Trattato della Pittura), advises painters to pay particular attention to the motions of the mind, moti mentali. “The movement which is depicted must be appropriate to the mental state of the figure,” he advises; otherwise the figure will be considered twice dead: “dead because it is a depiction, and dead yet again in not exhibiting motion either of the mind or of the body.” Francesco Melzi, student and friend to Da Vinci, compiled the Treatise posthumously from fragmented notes left to him. The vivid portrayal of emotions in the paintings from Leonardo’s school shows that his students learned to read the moti mentali of their subjects in exquisite detail.
Associating an emotional expression of the face with a “motion of the mind” was an astonishing insight by Da Vinci and a surprisingly modern metaphor. Today we correlate specific patterns of electrochemical dynamics (i.e. “motions”) of the central nervous system with emotional feelings.
Consciousness, the substrate for any emotional feeling, is itself a “motion of the mind,” an ephemeral state characterized by certain dynamical patterns of electrical activity. Even if all the neurons, their constituent parts and neuronal circuitry remained structurally the same, a change in the dynamics can mean the difference between consciousness and unconsciousness.
But what kind of motion is it? What are the patterns of electrical activity that correspond to our subjective state of being conscious, and why? Can they be measured and quantified? This is not only a theoretical or philosophical question but also one of vital interest to the anesthesiologist trying to regulate the level of consciousness during surgery, or to the neurologist trying to differentiate between states of consciousness following brain trauma.
Recently, Casali et al. have presented a quantitative metric. It provides, according to the authors, a numerical measure of consciousness, separating vegetative states from minimally conscious states. The study provides hints of being able to identify the enigmatic locked-in state, in which the subject is conscious but unable to communicate with the external world due to motor deficits. Most interesting is the claim that, by providing an objective measure, the metric yields scientific insight into consciousness itself.
Their metric, like other existing clinical measures of consciousness, is based on electroencephalography (EEG), in which voltages recorded from electrodes placed on the scalp provide a coarse picture of neural activity in the brain. EEG can be used to measure either ongoing brain activity or activity evoked by an external stimulus. In Casali’s case, the activity in question is evoked directly in the brain using transcranial magnetic stimulation (TMS): a transient magnetic field generates, via Faraday’s law of induction, an electric field in a particular region of the brain, a bit like attaching a battery to the neural circuitry. This causes currents to flow in the brain, not just in the stimulated region, but in other regions connected to it as well. The spatial and temporal patterns of these currents are then inferred from the EEG measurements and quantified to produce the metric.
The novelty in the study lies in the method used to quantify the spatiotemporal distribution of current, which is also the basis of the theoretical claims. The idea is that when the brain is unconscious, the evoked activity is either localized (the authors call this “lack of integration”), or widespread and uniform, as might be expected during slow wave sleep or epileptic seizures (“lack of differentiation”). The conscious state, on the other hand, is supposed to correspond to a distributed but non-uniform spatiotemporal pattern of current sources. The authors apply a standard data compression scheme (the Lempel-Ziv algorithm, a variant of which is used in the GIF image format) to distinguish between the two scenarios. The degree of compressibility of the current distribution, as inferred from EEG, is the consciousness metric they propose.
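The compressibility idea can be illustrated with a toy Lempel-Ziv phrase count in Python. This is only a sketch of the general principle, not the authors' actual analysis pipeline (which parses binarized, EEG-derived current source activity and normalizes the result); the function name and the three stand-in "activity patterns" below are illustrative inventions:

```python
import random

def lempel_ziv_complexity(s):
    """LZ76 phrase count: the number of distinct phrases produced by a
    left-to-right Lempel-Ziv parsing of the string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Grow the candidate phrase while it already occurs earlier in the
        # string (overlap with its own last character allowed, as in LZ76).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1   # one new phrase completed
        i += l
    return c

# Three toy binary "activity patterns" of equal length:
flat     = "0" * 64                 # uniform: lack of differentiation
periodic = "01" * 32                # stereotyped oscillation
random.seed(0)                      # fixed seed for reproducibility
noisy    = "".join(random.choice("01") for _ in range(64))

print(lempel_ziv_complexity(flat),
      lempel_ziv_complexity(periodic),
      lempel_ziv_complexity(noisy))
```

The flat string parses into just 2 phrases and the periodic one into 3, while an irregular string of the same length parses into many more; in the study's terms, the first two stand for undifferentiated activity and the third for differentiated, less compressible activity.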
The scientists report that their measure performs impressively in distinguishing states of consciousness within subjects, as well as across subjects in different clinically identified consciousness stages. These promising results will no doubt attract further study. However, the claim that the measure is theoretically grounded in a conceptual understanding of consciousness deserves a closer look. It is tempting to think that a concretely grounded clinical study of consciousness naturally advances our scientific understanding of the phenomenon, but is this necessarily the case?
It is common in medicine to see engineering-style associative measurements, measurements which aid pragmatic actions but do not originate from a fundamental understanding. Physicians in antiquity were able to diagnose diabetes mellitus (etymologically “sweet urine”, a reference to this original diagnostic method), without any particular insights into the underlying pathology. Clinical utility is not automatically a guarantee of scientific understanding.
There is reason to be cautious even in clinical terms. Some previous attempts to numerically quantify consciousness have proven problematic, a serious matter since awareness during surgery could lead to real suffering. An anesthesiologist cautions in a commentary not to “trust the BIS or any other monitor over common sense and experience.” A human expert still remains the ultimate arbiter of the state of consciousness of another human. This is unlikely to change soon.
There are both practical and conceptual hurdles to developing a “consciousness metric.” In practical terms, we have very little access to the details of the neuronal dynamics in the human brain. DARPA, not shy of ambitious technical challenges, has limited itself to 200 electrodes in a recent call for proposals to directly record from and stimulate the human brain for deep brain stimulation therapy. That is about one billionth of the estimated number of neurons in the brain. The EEG provides a very low capacity, indirect measurement channel into the brain. If we can’t measure the dynamics of the brain neurons in any detail, this could limit any attempt to quantify consciousness.
However, it is theoretically possible that even a limited measurement channel could carry the necessary information. We are looking for a categorical judgment between conscious and unconscious states, a single bit of information that can be elicited from a conscious and communicative subject in an eye-blink or a nod of the head. The conceptual hurdle is the more significant one. The defining characteristic of the conscious state is subjective, first-person awareness, which fundamentally militates against objective measurement by an independent observer, who can have no access to the primary phenomena except through the subjective report of the conscious individual. It may be possible (and useful) to obtain better and better correlative measurements of this subjective report; but do the measurements themselves shed any light on the phenomenon of consciousness?
To clarify the underlying issues, consider a Turing-like test for consciousness metrics. If a measure of consciousness is to have scientific status, it should not ascribe a high degree of consciousness to a passive, inanimate system at thermodynamic equilibrium. Otherwise we are left with some kind of panpsychist notion of consciousness. Nevertheless, a simple thought experiment shows that it would be easy to construct such a system for the metric under discussion.
The measure in question relies on the spatiotemporal patterns of currents evoked by a transient magnetic field. However, Maxwell’s equations dictate that a transient magnetic field will generate a pattern of currents in any chunk of matter; producing some particular distribution of evoked currents is simply a matter of the material properties. Consider, for example, a network of resistors, capacitors and inductors with circuit time constants tuned to the hundred-millisecond range (to match EEG timescales). A radio antenna could be used to detect the changing magnetic field and absorb its energy. It should not be difficult to produce a circuit arrangement that yields a transient, spatiotemporally non-uniform current distribution that is sufficiently incompressible, and therefore fools the device into producing a high consciousness score.
One could also ask if the metric helps us answer a basic evolutionary question: can it differentiate organisms into “conscious” and “non-conscious” categories? While most neuroscientists would not hesitate to ascribe consciousness to vertebrate animals or to invertebrates with complex brains (think octopus or honeybee), they would hesitate when it comes to invertebrates with simpler nervous systems (are jellyfish conscious? How about sponges?). Since the methodology under discussion has been developed with humans in mind, and ultimately depends on correlating with subjective reporting, it is difficult to see how it could be extended across the phylogenetic tree in a way that would help resolve these basic science questions about consciousness.
Where to look for measures of consciousness that advance our scientific understanding? Most neuroscientists would agree that consciousness is associated specifically with animal nervous systems (not trees or rocks). Rather than look generically for abstract mathematical descriptions of consciousness, we may need to specifically study the detailed architecture of brain systems involved in arousal, attention, and so on. Complex animal nervous systems have presumably evolved consciousness because it has some important utility. If the architecture of brain systems involved in arousal shows convergent evolution between invertebrates and vertebrates, this could give us important scientific insights into consciousness as a biological phenomenon. Better neurobiological insights into consciousness could in turn generate advances in clinical measures.
We have come a long way since Da Vinci, but human observers, in the form of teams of expert physicians, remain essential to judging the subtleties of the “motions of the mind” that we call consciousness. No matter how sophisticated our tools, consciousness is still a core mystery with ample scope for conceptual breakthroughs and creative thinking.
ABOUT THE AUTHOR(S)
Partha Mitra is Crick-Clay Professor at Cold Spring Harbor Laboratory. He obtained a PhD in Theoretical Physics at Harvard and was a member of the Theory Group at Bell Laboratories. He has written a book on analyzing brain dynamics and is currently engaged in mapping mouse brain circuits. You can follow him @partha_mitra
Imagine you are sitting in a big symphony hall, listening to an orchestra for the first time. There’s a first time for everything, right? The orchestra is performing Beethoven’s Violin Concerto. As the soloist runs her hands and fingers along the neck of the violin, she produces different notes, or pitches. Every note the violin produces has a different sound, pitch, and vibration. With each note comes a different possible direction for the music being created. This concept can be applied to quantum physics as well, within string theory.
QUANTUM BASICS:
As you may know, everything is made up of small particles. Matter is made from atoms, which are in turn made of three basic components: electrons, neutrons, and protons. The electron is fundamental, but neutrons and protons are made of even smaller particles, known as quarks. Quarks are, as far as we know, truly elementary.
What we know about the subatomic composition of the universe is known as the Standard Model of particle physics. It describes the fundamental building blocks out of which the world is made, and the forces through which these blocks interact. There are twelve basic building blocks. Six of these building blocks are quarks. They go by the names up, down, charm, strange, bottom and top. (A proton, for instance, is made of two up quarks and one down quark.) The other six are leptons. These include the electron and its two heavier siblings, the muon and the tauon, as well as three neutrinos.
Within the universe there are four fundamental forces: gravity, electromagnetism, and the weak and strong nuclear forces. Each of these four forces is produced by particles that act as carriers of the force. The behavior of all of these particles and forces is described by the Standard Model, with one notable exception: gravity.
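The quark bookkeeping above can be checked with a few lines of arithmetic. The charges are standard physics (quarks carry +2/3 or -1/3 of the elementary charge), but the `quark_charge` table and `composite_charge` helper below are purely illustrative names, not part of any physics library:

```python
from fractions import Fraction as F

# Electric charges of the six quarks, in units of the elementary charge e.
quark_charge = {
    "up":   F(2, 3), "charm":   F(2, 3), "top":    F(2, 3),
    "down": F(-1, 3), "strange": F(-1, 3), "bottom": F(-1, 3),
}

def composite_charge(quarks):
    """Total electric charge of a bound state such as a proton."""
    return sum(quark_charge[q] for q in quarks)

proton  = composite_charge(["up", "up", "down"])     # uud
neutron = composite_charge(["up", "down", "down"])   # udd
print(proton, neutron)  # charges +1 and 0, as observed
```

Two up quarks and one down quark give 2/3 + 2/3 - 1/3 = +1, the proton's charge; swapping one up for a down gives the neutron's charge of zero.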
STRING THEORY:
In the past ten years, string theory has emerged as the most promising candidate for a quantum theory of gravity. But there is more to string theory than just explaining gravity at the quantum level; string theory aspires to be, essentially, the theory of everything.
The idea behind string theory is that each fundamental particle in the Standard Model is really a tiny vibrating string. These strings vibrate in different ways to produce the different particles that we observe. For example, if a string vibrates a certain way, it appears as a photon. If it vibrates another way, it forms a neutrino. Another way, a quark… and so on. So, if string theory is correct, the entire universe is made up of strings. And to make it even more complicated and mind-boggling, string theory works in ten dimensions!
MULTIPLE DIMENSIONS:
The original string theories of the 1980s require ten space-time dimensions rather than the four we are used to and can perceive. A later formulation called M-theory adds an eleventh dimension, which can take the form of a very small circle or a line segment. The idea of space-time dimension is not fixed in string theory; it is best thought of as different in different circumstances (Polchinski 1998).
Now, this is where things get really interesting. Like I said, each string can oscillate in different ways to form different particles. If you relate what I’ve already said to the violinist, you get a simplified version of string theory that is easy to explain and comprehend. Just like the different notes on a violin, each different vibration of the string is a different fundamental particle. You play one note and you get a quark. A different note and you get a photon. So, every different note is a different possible particle. And every different chord and harmony within the music is a different outcome. By different outcomes, I mean different universes. If string theory is correct and every different vibration of the string produces a different ‘something,’ then there could be hundreds of billions of different universes. Maybe even more universes than there are stars in the known universe.
A COSMIC CONCERTO:
Now, let’s return to the idea that the strings of string theory are like the notes played by our violinist. Each vibration of the strings creates the fundamental particles and the forces of nature, which make up everything in the universe. Just as the orchestra and violinist are playing Beethoven, they could just as well be playing a different piece of music, with different notes and different vibrations.
Mathematically, that different piece of music would produce different notes or vibrations, which would, in turn, create different particles and different forces of nature… meaning, a different universe. So, just as there are an endless number of possible pieces of music the orchestra could play, our universe may be one of billions of other universes.
We can’t see these other universes because they are outside of our own. They have a different history and background than our universe. Some universes are unstable and collapse back to where they came from in a big crunch. Others may not produce gravity, would never be able to produce stars, and would be dark and cold. Others will go on to produce stars, galaxies and planets, just like our universe.
According to Stephen Hawking, “we should not be surprised to find ourselves in a universe that is perfect for us. Our very presence means our universe must be just right” (The Grand Design, 2010). If string theory is correct, there may be a universe in which someone, exactly like you, decided not to go see the orchestra play. So, they would never be able to relate string theory to the beautiful music that you heard.
So, the next time you listen to a piece of music, regardless of the genre, try to think outside the box, or in this case, outside the universal box.
INNOVATION, the elixir of progress, has always cost people their jobs. In the Industrial Revolution artisan weavers were swept aside by the mechanical loom. Over the past 30 years the digital revolution has displaced many of the mid-skill jobs that underpinned 20th-century middle-class life. Typists, ticket agents, bank tellers and many production-line jobs have been dispensed with, just as the weavers were.
For those, including this newspaper, who believe that technological progress has made the world a better place, such churn is a natural part of rising prosperity. Although innovation kills some jobs, it creates new and better ones, as a more productive society becomes richer and its wealthier inhabitants demand more goods and services. A hundred years ago one in three American workers was employed on a farm. Today less than 2% of them produce far more food. The millions freed from the land were not consigned to joblessness, but found better-paid work as the economy grew more sophisticated. Today the pool of secretaries has shrunk, but there are ever more computer programmers and web designers.
Optimism remains the right starting-point, but for workers the dislocating effects of technology may make themselves evident faster than its benefits (see article). Even if new jobs and wonderful products emerge, in the short term income gaps will widen, causing huge social dislocation and perhaps even changing politics. Technology’s impact will feel like a tornado, hitting the rich world first, but eventually sweeping through poorer countries too. No government is prepared for it.
Why be worried? It is partly just a matter of history repeating itself. In the early part of the Industrial Revolution the rewards of increasing productivity went disproportionately to capital; later on, labour reaped most of the benefits. The pattern today is similar. The prosperity unleashed by the digital revolution has gone overwhelmingly to the owners of capital and the highest-skilled workers. Over the past three decades, labour’s share of output has shrunk globally from 64% to 59%. Meanwhile, the share of income going to the top 1% in America has risen from around 9% in the 1970s to 22% today. Unemployment is at alarming levels in much of the rich world, and not just for cyclical reasons. In 2000, 65% of working-age Americans were in work; since then the proportion has fallen, during good years as well as bad, to the current level of 59%.
Worse, it seems likely that this wave of technological disruption to the job market has only just started. From driverless cars to clever household gadgets (see article), innovations that already exist could destroy swathes of jobs that have hitherto been untouched. The public sector is one obvious target: it has proved singularly resistant to tech-driven reinvention. But the step change in what computers can do will have a powerful effect on middle-class jobs in the private sector too.
Until now the jobs most vulnerable to machines were those that involved routine, repetitive tasks. But thanks to the exponential rise in processing power and the ubiquity of digitised information (“big data”), computers are increasingly able to perform complicated tasks more cheaply and effectively than people. Clever industrial robots can quickly “learn” a set of human actions. Services may be even more vulnerable. Computers can already detect intruders in a closed-circuit camera picture more reliably than a human can. By comparing reams of financial or biometric data, they can often diagnose fraud or illness more accurately than any number of accountants or doctors. One recent study by academics at Oxford University suggests that 47% of today’s jobs could be automated in the next two decades.
At the same time, the digital revolution is transforming the process of innovation itself, as our special report explains. Thanks to off-the-shelf code from the internet and platforms that host services (such as Amazon’s cloud computing), provide distribution (Apple’s app store) and offer marketing (Facebook), the number of digital startups has exploded. Just as computer-games designers invented a product that humanity never knew it needed but now cannot do without, so these firms will no doubt dream up new goods and services to employ millions. But for now they are singularly light on workers. When Instagram, a popular photo-sharing site, was sold to Facebook for about $1 billion in 2012, it had 30m customers and employed 13 people. Kodak, which filed for bankruptcy a few months earlier, employed 145,000 people in its heyday.
The problem is one of timing as much as anything. Google now employs 46,000 people. But it takes years for new industries to grow, whereas the disruption a startup causes to incumbents is felt sooner.
Airbnb may turn homeowners with spare rooms into entrepreneurs, but it poses a direct threat to the lower end of the hotel business—a massive employer.
If this analysis is halfway correct, the social effects will be huge. Many of the jobs most at risk are lower down the ladder (logistics, haulage), whereas the skills that are least vulnerable to automation (creativity, managerial expertise) tend to be higher up, so median wages are likely to remain stagnant for some time and income gaps are likely to widen.
Anger about rising inequality is bound to grow, but politicians will find it hard to address the problem. Shunning progress would be as futile now as the Luddites’ protests against mechanised looms were in the 1810s, because any country that tried to stop would be left behind by competitors eager to embrace new technology. The freedom to raise taxes on the rich to punitive levels will be similarly constrained by the mobility of capital and highly skilled labour.
The main way in which governments can help their people through this dislocation is through education systems. One of the reasons for the improvement in workers’ fortunes in the latter part of the Industrial Revolution was that schools were built to educate them—a dramatic change at the time. Now those schools themselves need to be changed, to foster the creativity that humans will need to set them apart from computers. There should be less rote-learning and more critical thinking.
Technology itself will help, whether through MOOCs (massive open online courses) or even video games that simulate the skills needed for work.
The definition of “a state education” may also change. Far more money should be spent on pre-schooling, since the cognitive abilities and social skills that children learn in their first few years define much of their future potential. And adults will need continuous education. State education may well involve a year of study to be taken later in life, perhaps in stages.
Yet however well people are taught, their abilities will remain unequal, and in a world which is increasingly polarised economically, many will find their job prospects dimmed and wages squeezed.
The best way of helping them is not, as many on the left seem to think, to push up minimum wages. Jacking up the floor too far would accelerate the shift from human workers to computers. Better to top up low wages with public money so that anyone who works has a reasonable income, through a bold expansion of the tax credits that countries such as America and Britain use.
Innovation has brought great benefits to humanity. Nobody in their right mind would want to return to the world of handloom weavers. But the benefits of technological progress are unevenly distributed, especially in the early stages of each new wave, and it is up to governments to spread them. In the 19th century it took the threat of revolution to bring about progressive reforms. Today’s governments would do well to start making the changes needed before their people get angry.
The study suggests that our remarkably slow metabolisms explain why humans and other primates grow up so slowly and live such long lives.
“A human – even someone with a very physically active lifestyle – would need to run a marathon each day just to approach the average daily energy expenditure of a mammal their size.” – Herman Pontzer. Image via Wikimedia Commons
A study published January 14, 2014 in the Proceedings of the National Academy of Sciences suggests that humans and other primates burn 50% fewer calories each day than other mammals of similar size. This slow metabolism, the researchers say, may help explain why humans and other primates grow up slower and live longer than most mammals.
An international team of scientists examined 17 primate species in zoos, sanctuaries and in the wild. Using a safe and non-invasive technique, they measured the number of calories the primates burned over a 10-day period. Herman Pontzer, associate professor of anthropology at Hunter College, led the study. Pontzer said:
The results were a real surprise. Humans, chimpanzees, baboons and other primates expend only half the calories we’d expect for a mammal. To put that in perspective, a human – even someone with a very physically active lifestyle – would need to run a marathon each day just to approach the average daily energy expenditure of a mammal their size.
The study also found that primates in captivity expend as many calories each day as their counterparts living in the wild. The researchers say this suggests that physical activity may have less of an impact on daily energy expenditure than was previously believed.
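A back-of-envelope check, using assumed round numbers rather than figures from the study, shows why "a marathon each day" is roughly the right size for the gap between a human's daily energy budget and that of a typical mammal of the same mass:

```python
# All values are assumed round figures for illustration, not data
# from the PNAS study.
body_mass_kg   = 70      # typical adult human
human_tee_kcal = 2700    # rough human total daily energy expenditure

# If primates burn ~50% fewer calories than other mammals of similar
# size, a same-sized "ordinary" mammal would burn about twice as much:
expected_mammal_kcal = 2 * human_tee_kcal
shortfall_kcal = expected_mammal_kcal - human_tee_kcal

# Running costs roughly 1 kcal per kg of body mass per km,
# so a 42.2 km marathon costs about:
marathon_kcal = body_mass_kg * 42.2 * 1.0

print(shortfall_kcal, round(marathon_kcal))
```

The shortfall (2700 kcal under these assumptions) and the marathon cost (about 2950 kcal) come out within roughly 10% of each other, consistent with Pontzer's comparison.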
Primates, including humans, burn 50% fewer calories each day than other mammals of similar size. Image of chimpanzee family via Science Museum of Minnesota
Bottom line: Humans and other primates burn 50% fewer calories each day than other mammals of similar size. This slow metabolism, the researchers say, may help explain why humans and other primates grow up slower and live longer than most mammals.
Global warming is the increase in the Earth’s temperature owing to the greenhouse effect of the release of CO2 and other gases into the atmosphere, mainly by humans burning fossil fuels, but also by the release of methane from oil wells and melting Arctic permafrost, natural gas from leaky pipes, and so on. This increase in temperature occurs in the atmosphere, the oceans, and the land surface itself. During some periods most of the increase seems to happen in the atmosphere, while during other times it seems to occur more in the oceans. (As an aside: when you use passive geothermal technology to heat and cool your home, the heat in the ground around your house actually comes from the sun warming the Earth’s surface.)
“Weather” as we generally think of it consists partly of storms, perturbations in the atmosphere, and we would expect more of at least some kinds of storms, or more severe ones, if the atmosphere has more energy, which it does because of global warming. But “weather” is also temperature, and we recognize that severe heat waves and cold waves, long periods of heavy flooding rains, and droughts are very important, and it is hard to miss the fact that these phenomena have been occurring with increasing frequency in recent years.
We know that global warming changes the way air currents in the atmosphere work, and we know that atmospheric air currents can determine both the distribution and severity of storms and the occurrence of long periods of extreme heat or cold and wet or dry. One of the ways this seems to happen involves what are known as “high amplitude waves” in the jet stream. One of the Northern Hemisphere Jet Streams, which forms at the boundary between temperate air masses and polar air masses, is a fast-moving, high-altitude stream of air. There is a large difference in the temperature of the Troposphere north and south of any Jet Stream, so it can be thought of as the boundary between cooler and warmer conditions. Often, the northern Jet Stream encircles the planet as a more or less circular band of fast-moving air, following a relatively straight west-to-east path around the globe. However, under certain conditions the Jet Stream can become wavy, curving north, then south, then north, and so on around the planet. These waves can themselves be either stationary (not moving around the planet) or they can move from west to east. A “high amplitude” Jet Stream is a wavy jet stream, and the waves can be very dramatic. When the jet stream is wavy and the waves themselves are relatively stationary, the curves are said to be “blocking,” meaning that they are keeping masses of either cold (to the north) or warm (to the south) air in place. Also, the turning points of the waves set up large rotating systems of circulation that can control the formation of storms.
So, a major heat wave in a given region can be caused by the northern Jet Stream being both wavy (high amplitude) with a big wave curving north across the region, bringing very warm air with it, at the same time the Jet Stream’s waves are relatively stationary, causing that lobe of southerly warm air to stay in place for many days. Conversely, a lobe of cool air from the north can be spread across a region and kept in place for a while.
Here is a cross section of the Jet Streams in the Northern Hemisphere showing their relationship with major circulating air masses:
Cross section of the atmosphere of the Northern Hemisphere. The Jet Streams form at the highly energetic boundary between major circulating cells, near the top of the Troposphere.
Here is a cartoon of the Earth showing jet streams moving around the planet:
The Jet Streams moving around the planet. Not indicated is the Intertropical Convergence Zone (ITCZ) around the equator, which is both not a Jet Stream and the Mother of All Jet Streams. This post mainly concerns the “Polar Jet.” Note that the wind in the Jet Streams moves from west to east, and the Jet Streams can be either pretty straight or pretty curvy. Curvy = “high amplitude.” This figure and the one above are from NOAA.
Here is a depiction of the Jet Stream being very curvy. The waves in the Jet Stream are called Rossby waves.
The Jet Stream in a particularly wavy state.
(See also this animation on Wikicommons, which will open in a new window.)
Research published in the Proceedings of the National Academy of Sciences last February, in a paper titled “Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes,” links global warming to the setup of high amplitude waves in the Jet Stream, as well as to relatively stationary, blocking waves that cause extreme warm or cold conditions to persist for weeks rather than just a few days. According to lead author Vladimir Petoukhov, “An important part of the global air motion in the mid-latitudes of the Earth normally takes the form of waves wandering around the planet, oscillating between the tropical and the Arctic regions. So when they swing up, these waves suck warm air from the tropics to Europe, Russia, or the US, and when they swing down, they do the same thing with cold air from the Arctic…What we found is that during several recent extreme weather events these planetary waves almost freeze in their tracks for weeks. So instead of bringing in cool air after having brought warm air in before, the heat just stays.”
So how does global warming cause the northern Jet Stream to become wavy, with those waves being relatively stationary? It’s complicated. One way to think about it is to observe waves elsewhere in day-to-day life. On the highway, if there is enough traffic, waves of cars form: clusters of several cars moving together, with relatively few cars in the gaps between the clusters. Change the number of cars, or the speed limit, or other factors, and you may see the size and distribution of these clusters (waves) of cars change as well. If you run the water from your sink faucet at just the right rate, you can see waves moving up and down on the stream of water; adjust the flow and the size and behavior of these “standing waves” changes. At a baseball or football stadium, when people do “the wave,” their hand motions collectively form a wave of silliness that moves around the park, and the width and speed of that wave is a function of how quickly individuals react to their fellow fans’ waving activity. Waves form in a medium (of cars, water molecules, people, etc.) following a number of physical principles that determine the size, shape, speed, and stability of the waves.
The authors of this paper use math that is far beyond the scope of a mere blog post to link together all the relevant atmospheric factors and the shape of the northern Jet Stream. They found that when the effects of Global Warming are added in, the Jet Stream becomes less linear, and the deep meanders (sometimes called Rossby waves) that are set up tend to occur with a certain frequency (6, 7, or 8 major waves encircling the planet) and that these waves tend to not move for many days once they get going. They tested their mathematical model using actual weather data over a period of 32 years and found a good fit between atmospheric conditions, predicted wave patterns, and actual observed wave patterns.
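As a purely illustrative sketch of what a “wavy” jet stream with a fixed wavenumber looks like (this is not the paper's model; every parameter here is made up for illustration), you can picture the jet's latitude oscillating as a function of longitude:

```python
import math

# Toy picture of a high-amplitude jet stream: latitude as a function
# of longitude, with `wavenumber` meanders encircling the planet.
# All parameters are hypothetical, chosen only for illustration.
def jet_latitude(lon_deg, base_lat=50.0, amplitude=15.0, wavenumber=7):
    """Latitude (degrees N) of the toy jet at a given longitude."""
    return base_lat + amplitude * math.sin(math.radians(wavenumber * lon_deg))

# With wavenumber 7, the jet swings between 35N and 65N seven times
# as it circles the globe -- seven lobes of warm air pushed north and
# cold air pulled south.
path = [jet_latitude(lon) for lon in range(0, 360, 5)]
print(max(path), min(path))
```

In the paper's quasiresonance picture, waves with wavenumbers around 6, 7, or 8 are the ones that tend to grow large and stall in place.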
The northern Jet Stream originates as a function of the gradient of heat from the Equatorial regions to the Polar regions. If air temperature were very high at the equator and very low at the poles, the Jet Stream would look one way. If air temperatures were (and this is impossible) the same at the Equator and the poles, there would probably be no Jet Stream at all. At the various plausible gradients of temperature from Equator to poles in between, various different configurations of Jet Streams emerge.
One of the major effects of global warming has been the warming of the Arctic. This happens for at least two reasons. First, the atmosphere and oceans are simply warmer, so everything gets warmer. In addition, these warmer conditions cause the melting of Arctic ice to be much more extreme each summer, so that there is more exposed water in the Arctic Ocean, for a longer period of time. This means that less sunlight is reflected directly back into space (because there is less shiny ice) and the surface of the ice-free northern sea absorbs sunlight and converts it into heat. For these reasons, the Arctic region is warming at a higher rate than other regions farther to the south in the Northern Hemisphere. This, in turn, makes for a reduced gradient in the atmospheric temperature from tropical to temperate to polar regions.
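The ice-albedo feedback described above comes down to simple arithmetic: shiny ice reflects most incoming sunlight, open water absorbs most of it. The numbers below are typical textbook albedo values and a rough summer insolation figure, assumed here purely for illustration:

```python
# Rough illustration of the ice-albedo feedback.
# Albedo and flux values are typical textbook numbers (assumed).
solar_flux = 200.0    # W/m^2, rough summer average at high latitudes
albedo_ice = 0.6      # sea ice reflects most sunlight back to space
albedo_ocean = 0.06   # dark open water absorbs most of it

absorbed_ice = solar_flux * (1 - albedo_ice)      # 80 W/m^2
absorbed_ocean = solar_flux * (1 - albedo_ocean)  # 188 W/m^2

# Extra heating per square meter when ice is replaced by open water.
print(absorbed_ocean - absorbed_ice)
```

Every square meter of ice that melts into open water thus absorbs on the order of a hundred extra watts under these assumptions, which is why lost summer ice amplifies Arctic warming.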
Changing the gradient of the atmospheric temperature in a north-south axis is like adjusting the rate of water flowing from your faucet, or changing the number of cars on the highway, or replacing all the usual sports fans at the stadium with stoned people with arthritis. The nature of the waves changes.
In the case of the atmosphere of Earth’s Northern Hemisphere, global warming has changed the dynamics of the northern Jet Stream, and this has resulted in changes in weather extremes. This applies to heat waves, cold snaps, and the distribution of precipitation. The phenomenon that is increasingly being called “Weather Whiplash” … more extremes in all directions, heat vs. cold and wet vs. dry … seems to be largely caused by this effect.
This study is somewhat limited because it covers only a 32-year period, but its findings are in accord with expectations based on what we know about how the Earth’s climate system works, and the modeling matches empirical reality quite well.
Petoukhov, V., Rahmstorf, S., Petri, S., & Schellnhuber, H. (2013). Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes Proceedings of the National Academy of Sciences, 110 (14), 5336-5341 DOI: 10.1073/pnas.1222000110
Yesterday, I posted an article about a math video that showed how you can sum up an infinite series of numbers to get a result of, weirdly enough, -1/12.
A lot of stuff happened after I posted it. Some people were blown away by it, and others… not so much. A handful of mathematicians were less than happy with what I wrote, and even more were less than happy with the video. I got a few emails, a lot of tweets, and some very interesting conversations out of it.
I decided to write a follow-up post because I try to correct errors when I make them, and shine more light on a problem if it needs it. There are multiple pathways to take here (which is ironic because that’s actually part of the problem with the math). Therefore this post is part 1) update, 2) correction, 3) and mea culpa, with a defense (hopefully without being defensive).
Let me take a moment to explain right away. No, there is too much. Let me sum up*:
1) The infinite series in the video (1 + 2 + 3 + 4 + 5 …) can in fact be tackled using a rigorous mathematical method, and can in fact be assigned a value of -1/12! This method is quite real, and very useful. And yes, the weirdness of it is brain melting.
2) The method used in the video to write out some series and manipulate them algebraically is actually not a great way to figure this problem out. It uses a trick that’s against the rules, so strictly speaking it doesn’t work. It’s a nice demo to show some fun things, but its utility is questionable at best.
3) I had my suspicions about the method used in the video, but suppressed them. That was a mistake.
That’s the tl;dr version. Here are the details.
1) Terms of Endearment
In math, you have to set up rules that allow you to do whatever it is you want to do. These rules can be self-consistent, totally logical, and very useful. Or, they can be self-consistent, totally logical, and not useful. Let me give you an example, inspired by a conversation I had with the delightful mathematician Jordan Ellenberg, who contacted Slate and me after my article went up.
Imagine you come upon a society that uses numbers only as integer magnitudes, that is, to measure the amount of something in integer units (1, 2, 3 etc.). You can have three bricks, and your friend has five bricks. They also have a concept for ratios, so you have 3/5ths as many bricks as your friend.
But in their system, you can’t mix the two. You can’t have 3 and 3/4 bricks, because fractions are only for ratios. In their system, having a fractional brick doesn’t make sense, any more than saying you are six feet nine gallons tall in ours. Those units don’t play well together. Mind you, their system is self-consistent and logical, but I’d argue it has limited use. Fractions can be wildly multipurpose.
It’s similar to infinite series. In the method you learned in high school, the series 1 + 2 + 3 + 4 + 5 … doesn’t converge; it tends to infinity. That method is also self-consistent and logical, but of limited use in this case. The rules of how we deal with series don’t let you do much with that.
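The high-school (limit-based) protocol is easy to see in a few lines of code: the partial sums of 1 + 2 + 3 + … just keep growing, so that protocol declines to assign the series any value. A minimal sketch:

```python
# Partial sums of 1 + 2 + 3 + ... under the usual limit-based rules.
# They grow without bound, so the series has no limit and hence no sum.
def partial_sums(n):
    """Return the first n partial sums of 1 + 2 + 3 + ..."""
    total, out = 0, []
    for k in range(1, n + 1):
        total += k
        out.append(total)
    return out

print(partial_sums(10))  # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
```

These are the triangular numbers, growing roughly like n²/2; no finite limit anywhere in sight.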
Phil Plait writes Slate’s Bad Astronomy blog and is an astronomer, public speaker, science evangelizer, and author of Death from the Skies! Follow him on Twitter.
But there is a method called analytic continuation that does. It redefines things a bit and uses different rules, and those rules allow for dealing with such things. The mathematicians Euler and Riemann used it to get around the problems of infinite diverging series, and it allowed them to assign the value -1/12 to this one. Those rules are self-consistent, logical, and highly useful. In fact, as I pointed out in the previous post, they’re used to great success in many fields of physics. It gets complicated quickly, but you can read more about this here and especially here (that second one deals with this problem specifically, and in fact shows how analytic continuation can handle the problems of all the series presented in the Numberphile video). One of the greatest mathematicians the world has ever seen, Ramanujan, also did this. In fact, you should read about him; his story is as fascinating as it is tragic.
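One way to glimpse the -1/12 numerically, without the full analytic continuation machinery, is a smoothed (exponentially regulated) sum, a standard trick in physics: replace 1 + 2 + 3 + … with Σ n·e^(-n/N). The divergence is then cleanly isolated in an N² term, and what's left over settles down to -1/12 as N grows. A minimal sketch:

```python
import math

# Smoothed version of 1 + 2 + 3 + ...: each term n is damped by
# exp(-n/N). For large N this behaves like N**2 - 1/12, so subtracting
# the divergent N**2 piece exposes the regularized value.
def smoothed_sum(N):
    """Sum n * exp(-n/N), truncated once the tail is negligible."""
    return sum(n * math.exp(-n / N) for n in range(1, 40 * N))

for N in (10, 100, 1000):
    print(N, smoothed_sum(N) - N ** 2)  # approaches -1/12 = -0.08333...
```

The leftover converges to -1/12 from the same direction for every cutoff scale N, which is one reason physicists trust the assignment: the finite piece doesn't care how you regulate the divergence.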
Anyway, neither set of rules is wrong. One is just better at handling certain things than the other. And you have to be sure to color within the lines depending on which rules you use. Which brings me to…
2) Canceled Series
Before I get to this, I want to say that early on in the main Numberphile video (and in my post) there is a link to the rigorous analytic continuation solution to this problem. So that part was good. However, they then employ a trick that is a bit of a no-no.
There are rules for dealing with infinite series, many developed by Cauchy in the 1800s. One of them is that when you have a series that diverges, that is, does not approach a finite limit, you can’t go around adding and subtracting other series from it, or substituting values for it.
But in the video, they do just that. They write down Grandi’s series, show that it’s equal to ½ (more on that below), then use it to show that 1 – 2 + 3 – 4 + 5 … has ¼ as a solution. But given the rules of dealing with series in this way, that’s a fudge (ironically, similar to the trick of “proving” 1 = 0, something I mentioned in my first post). So in the video where they multiply through the series, shift them, and subtract them from one another… that’s not allowed. It works for a finite number of terms, but leaves that aggravating tail of infinite terms to mess things up. That tail winds up wagging the dog, negating the whole thing.
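The ½ assigned to Grandi's series isn't pulled from thin air, though; it comes out of Cesàro summation, a rigorous protocol that averages the partial sums instead of taking their limit. A quick sketch:

```python
# Cesaro summation: instead of asking whether the partial sums
# converge, take the average of the first k partial sums and let k
# grow. For Grandi's series 1 - 1 + 1 - 1 + ... the partial sums
# bounce between 1 and 0, but their average settles at 1/2.
def cesaro_value(terms):
    """Average of the partial sums (the first-order Cesaro mean)."""
    partial = 0.0
    running = 0.0
    for t in terms:
        partial += t
        running += partial
    return running / len(terms)

grandi = [(-1) ** n for n in range(10_000)]  # 1, -1, 1, -1, ...
print(cesaro_value(grandi))  # prints 0.5
```

Note that plain Cesàro summation handles Grandi's series but not 1 – 2 + 3 – 4 + …, which needs a stronger protocol (Abel summation, or higher-order Cesàro means) to yield its ¼.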
Again, using Riemann’s and Euler’s work, you can work this through legally. But using series written out in the way of the video, not so much.
3) Reaching My Limit
[This may be of more interest to writers than math people. Caveat emptor.]
Overall, a lot of what I wrote in the article was correct prima facie. A lot of it wasn’t. How this came to be makes me a bit red-faced, even as it has me chuckling at myself.
I did talk a bit about the analytic continuation method, called it rigorous, and said it shows that the series can have a value of -1/12. But I made a couple of mistakes: one was not trusting my instincts, and the other was trusting them too much.
The first case — and this is killing me — is that in my original draft of the post, I had a section pointing out that you can’t just add and subtract divergent series from each other! It was literally the first thing I wrote down after watching the video, because my math/science instinct told me there was a problem there. But I wound up removing it. Why?
Because of my writing instincts. I started digging into the cool Riemann stuff, and realized that since there really was a way to assign a value to the series, I didn’t need to worry about the actual way they did it in the video. It was a trick, but it got the value I expected, so I took the section out. It made some sense at the time; I had the analytic stuff first, and the video second. I figured I had established the -1/12 bit, and it was good.
But the article wasn’t working. I needed to rearrange it, put the video near the top of the post; starting it off with lots of thin-air math might not be the best bet. So I put the rigorous math after the video, and totally left out the deleted section on the trick. Had I left it in, I suspect the new arrangement would’ve triggered alarm bells in my head, and I wouldn’t have been so laissez faire with the video.
Still, they didn’t, and I wound up not dissecting something I should have.
On top of that, I should’ve stressed the analytic solution more. I also should have stressed the idea that the examples I put in (the zigzag graph and the staircase) were only there as thought experiments to help understand the problem; they weren’t meant to be rigorous. I probably should have just left that whole part out. Again, mea culpa.
It’s an interesting balancing act, this writing about science and math. Sometimes it tips the wrong way. I blew it, and I'll try to be more careful in the future.
Term Limits
One more bit of exposition: You may have noticed that all through this post, I have avoided writing “This series equals -1/12,” or “the value of the sum of the series is -1/12.” This is due to my conversation with Ellenberg, which was fascinating to me. We talked about different methods, different rules, how new concepts were not accepted at first, and that things we think are simple now (like using fractions) were at one point in history heatedly debated as to their reality and usefulness. He put it very well:
It's not quite right to describe what the video does as “proving” that 1 + 2 + 3 + 4 + .... = -1/12. When we ask “what is the value of the infinite sum,” we've made a mistake before we even answer! Infinite sums don't have values until we assign them a value, and there are different protocols for doing that. We should be asking not what IS the value, but what should we define the value to be? There are different protocols, each with their own strengths and weaknesses. The protocol you learn in calculus class, involving limits, would decline to assign any value at all to the sum in the video. A different protocol assigns it the value -1/12. Neither answer is more correct than the other.
Nice. Though I’ll add that one answer has more use than the other in certain circumstances; the point I made above.
This conversation led down the rabbit hole of how we use math and what for, and has inspired me to do some follow-up reading, more about the philosophy and development of various mathematical
methods than the methods themselves. This is pretty cool stuff.
Other Methods
Finally, I got a lot of polite and informative notes from folks correcting me and pointing out details of all this, many of which overlapped with each other (including, interestingly, the last bit about the value assigned to a sum). Thanks to everyone who did so. Here are a few links I was sent for those who want to venture a few more terms down the series:
One of the stated missions of the conference at Bielefeld’s Center for Interdisciplinary Research was to confront the leaky battleship called the disease model of addiction. Is it the name that needs changing, or the entire concept? Is addiction “hardwired,” or do things like learning and memory and choice and environmental circumstance play commanding roles that have been lost in the excitement over the latest fMRI scan?
What exactly is this neuroplasticity the conference was investigating? From a technical point of view, it refers to the brain’s ability to form new neural connections in response to illness, injury, or new environmental situations, just to name three. Nerve cells engage in a bit of conjuring known as “axonal sprouting,” which can include rerouting new connections around damaged axons. Alternatively, connections are pruned or reduced. Neuroplasticity is not an unmitigated blessing. Consider intrusive tinnitus, a loud and continuous ringing or hissing in the ears, which is thought to be the result of the rewiring of brain cells involved in the processing of sound, rather than the sole result of injury to cochlear hair cells.
The fact that the brain is malleable is not a new idea, to be sure. Psychologist Vaughn Bell, writing at Mind Hacks, has listed a number of scientific papers, from as early as 1896, which discuss the possibility of neural regeneration. But there is a problem with neuroplasticity, writes Bell, and it is that “there is no accepted scientific definition for the term, and, in its broad sense, it means nothing more than ‘something in the brain has changed.’” Bell quotes the introduction to the science text, Toward a Theory of Neuroplasticity: “While many scientists use the word neuroplasticity as an umbrella term, it means different things to different researchers in different subfields… In brief, a mutually agreed upon framework does not appear to exist.”
So the conference was dealing with two very slippery semantic concepts when it linked neuroplasticity and addiction. There were discussions of the epistemology of addiction, and at least one reference to Foucault, and plenty of arguments about dopamine, to keep things properly interdisciplinary. “Talking about ‘neuroscience,’” said Robert Malenka of Stanford University’s Institute for Neuro-Innovation and Translational Neurosciences, “is like talking about ‘art.’”
What do we really know about synaptic restructuring, or “brains in the wild,” as anthropologist Daniel Lende of the University of South Florida characterized it during his presentation? Lende, who called for using both neurobiology and ethnography in investigative research, said that more empirical work was needed if we are to better understand addiction “outside of clinical and laboratory settings.” Indeed, the prevailing conference notion was to open this discussion outwards, to include plasticity in all its ramifications—neural, medical, psychological, sociological, and legal—including, as well, the ethical issues surrounding addiction.
Among the addiction treatment modalities discussed in conference presentations were optogenetics, deep brain stimulation, psychedelic drugs, moderation, and cognitive therapies modeled after systems used to treat various obsessive-compulsive disorders. Some treatment approaches, such as optogenetics and deep brain stimulation, “have the potential to challenge previous notions of permanence and changeability, with enormous implications for legal strategies, treatment, stigmatization, and addicts’ conceptions of themselves,” in the words of Clark and Nagel.
Interestingly, there was little discussion of anti-craving medications, like naltrexone for alcohol and methadone for heroin. Nor was the standard “Minnesota Model” of 12 Step treatment much in evidence during the presentations oriented toward treatment. The emphasis was on future treatments, which was understandable, given that almost no one is satisfied with treatment as it is now generally offered. (There was also a running discussion of the extent to which America’s botched health care system and associated insurance companies have screwed up the addiction treatment landscape for everybody.)
It sometimes seems as if the more we study addiction, the farther it slips from our grasp, receding as we advance. Certainly health workers of every stripe, in every field from cancer to infectious diseases to mental health disorders, have despaired about their understanding of the terrain of the disorder they were studying. But even the term addiction is now officially under fire. The DSM-5 has banished the word from its pages, for starters.
Developmental psychologist Reinout Wiers of the University of Amsterdam used a common metaphor, the rider on an unruly horse, to stand in for the bewildering clash of top-down and bottom-up neural processes that underlie addictive behaviors. The impulsive horse and the reflective rider must come to terms, without entering into a mutually destructive spiral of negative behavior patterns. Not an easy task.