A Medley of Potpourri is just what it says; various thoughts, opinions, ruminations, and contemplations on a variety of subjects.
Saturday, February 1, 2014
Scientific Pride and Prejudice
Illustration: Olimpia Zagnola
By MICHAEL SUK-YOUNG CHWE
January 31, 2014
David Strumfels: Watch Jonathan Eisen wring his hands over this. What? Scientists are above cognitive biases, unlike ordinary mortals (who require them to function)? And they -- the NYT -- present no evidence! I have bad news for him. Cognitive bias is built into how the human brain functions. High intelligence and years of education and research do not make it go away -- far from it: by creating the illusion of being above it, the Eisens and every other scientist in the world (including me) are that much more susceptible to it. The truth is, it should be Eisen presenting evidence that scientists really are that far superior. I point to his outrage as a sign that he probably knows, albeit unconsciously, that he hasn't a leg to stand on.
It is the cross-checking, competition, and self-correctiveness of science that allow it to make progress, not the perfect lack of biases and other human frailties among the people who do it. Eisen, and every scientist, should know this -- know it by heart, one might say, if not by mind.
End Strumfels
SCIENCE is in crisis, just when we need it most. Two years ago, C. Glenn Begley and Lee M. Ellis reported in Nature that they were able to replicate only six out of 53 “landmark” cancer studies. Scientists now worry that many published scientific results are simply not true. The natural sciences often offer themselves as a model to other disciplines. But this time science might look for help to the humanities, and to literary criticism in particular.
A major root of the crisis is selective use of data. Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias”: We seek out information that confirms what we already believe. “We each begin probably with a little bias,” as Jane Austen writes in “Persuasion,” “and upon that bias build every circumstance in favor of it.”
Despite the popular belief that anything goes in literary criticism, the field has real standards of scholarly validity. In his 1967 book “Validity in Interpretation,” E. D. Hirsch writes that “an interpretive hypothesis” about a poem “is ultimately a probability judgment that is supported by evidence.” This is akin to the statistical approach used in the sciences; Mr. Hirsch was strongly influenced by John Maynard Keynes’s “A Treatise on Probability.”
However, Mr. Hirsch also finds that “every interpreter labors under the handicap of an inevitable circularity: All his internal evidence tends to support his hypothesis because much of it was constituted by his hypothesis.” This is essentially the problem faced by science today. According to Mr. Begley and Mr. Ellis’s report in Nature, some of the nonreproducible “landmark” studies inspired hundreds of new studies that tried to extend the original result without verifying if the original result was true. A claim is not likely to be disproved by an experiment that takes that claim as its starting point. Mr. Hirsch warns about falling “victim to the self-confirmability of interpretations.”
It’s a danger the humanities have long been aware of. In his 1960 book “Truth and Method,” the influential German philosopher Hans-Georg Gadamer argues that an interpreter of a text must first question “the validity — of the fore-meanings dwelling within him.” However, “this kind of sensitivity involves neither ‘neutrality’ with respect to content nor the extinction of one’s self.”
Rather, “the important thing is to be aware of one’s own bias.” To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades.
Sometimes prejudgments are hard to resist. In December 2010, for example, NASA-funded researchers, perhaps eager to generate public excitement for new forms of life, reported the existence of a bacterium that used arsenic instead of phosphorus in its DNA. Later, this study was found to have major errors. Even if such influences don’t affect one’s research results, we should at least be able to admit that they are possible.
Austen might say that researchers should emulate Mr. Darcy in “Pride and Prejudice,” who submits, “I will venture to say that my investigations and decisions are not usually influenced by my hopes and fears.” At least Mr. Darcy acknowledges the possibility that his personal feelings might influence his investigations.
But it would be wrong to say that the ideal scholar is somehow unbiased or dispassionate. In my freshman physics class at Caltech, David Goodstein, who later became vice provost of the university, showed us Robert Millikan’s lab notebooks for his famed 1909 oil drop experiment with Harvey Fletcher, which first established the electric charge of the electron.
The notebooks showed many fits and starts and many “results” that were obviously wrong, but as they progressed, the results got cleaner, and Millikan could not help but include comments such as “Best yet — Beauty — Publish.” In other words, Millikan excluded the data that seemed erroneous and included data that he liked, embracing his own confirmation bias.
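To make the mechanism concrete, here is a toy simulation in Python (an illustration of selective data use in general, not a model of Millikan's actual procedure): an experimenter who discards readings that stray too far from a preferred value will find that the surviving "clean" data confirm that value, whatever the truth is.

```python
# Toy illustration of confirmation bias in data selection: keep only the
# measurements that land near a preferred value, and the "clean" data
# end up confirming that value. All numbers here are made up.
import random

random.seed(42)
TRUE_VALUE = 1.60   # the quantity being measured (arbitrary units)
EXPECTED = 1.50     # the experimenter's prior belief
TOLERANCE = 0.05    # readings outside this band are dismissed as "bad runs"

measurements = [random.gauss(TRUE_VALUE, 0.08) for _ in range(200)]
kept = [m for m in measurements if abs(m - EXPECTED) <= TOLERANCE]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean of all data:  {mean(measurements):.3f}")  # close to TRUE_VALUE
print(f"mean of kept data: {mean(kept):.3f}")          # pulled toward EXPECTED
```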
Mr. Goodstein’s point was that the textbook “scientific method” of dispassionately testing a hypothesis is not how science really works. We often have a clear idea of what we want the results to be before we run an experiment. We freshman physics students found this a bit hard to take. What Mr. Goodstein was trying to teach us was that science as a lived, human process is different from our preconception of it. He was trying to give us a glimpse of self-understanding, a moment of self-doubt.
When I began to read the novels of Jane Austen, I became convinced that Austen, by placing sophisticated characters in challenging, complex situations, was trying to explicitly analyze how people acted strategically. There was no fancy name for this kind of analysis in Austen’s time, but today we call it game theory. I believe that Austen anticipated the main ideas of game theory by more than a century.
As a game theorist myself, how do I know I am not imposing my own way of thinking on Austen? I present lots of evidence to back up my claim, but I cannot deny my own preconceptions and training. As Mr. Gadamer writes, a researcher “cannot separate in advance the productive prejudices that enable understanding from the prejudices that hinder it.” We all bring different preconceptions to our inquiries, whether about Austen or the electron, and these preconceptions can spur as well as blind us.
Perhaps because of its self-awareness about what Austen would call the “whims and caprices” of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research. In an open email in September 2012 to fellow psychologists, the Nobel laureate Daniel Kahneman suggests that “to deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating.”
Everyone, including natural scientists, social scientists and humanists, could use a little more self-awareness. Understanding science as fundamentally a human process might be necessary to save science itself.
Michael Suk-Young Chwe, a political scientist at U.C.L.A., is the author of “Jane Austen, Game Theorist.”
Princeton’s Nanomesh Triples Solar Cell Efficiency
December 26, 2012
Researchers at Princeton University have used nanotechnology to develop a nanostructured metal mesh that increases the efficiency of organic solar cells nearly threefold.
The scientists published their findings in the journal Optics Express¹. The team was able to reduce reflectivity and capture more of the light that isn’t reflected. The resulting solar cell is thinner and less reflective, sandwiching the nanomesh between plastic and metal. The so-called “Plasmonic Cavity with Subwavelength Hole array,” or “PlaCSH,” reduces the potential for light to escape and reflects only 4% of direct sunlight, leading to a 52% increase in efficiency compared to conventional organic solar cells.
PlaCSH is capable of capturing large amounts of sunlight even when the sunlight is dispersed on cloudy days, which translates to an increase of 81% in efficiency in indirect lighting conditions. PlaCSH is up to 175% more efficient than traditional solar cells.
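For a rough sense of what those relative figures mean in absolute terms, the sketch below converts each quoted gain into an implied efficiency. The 5% baseline for a conventional organic cell is an assumption made for illustration, not a number from the paper.

```python
# Convert the quoted "X% more efficient" figures into implied efficiencies.
BASELINE = 5.0  # assumed efficiency (%) of a conventional organic cell --
                # an illustrative guess, not a value from the Optics Express paper

quoted_gains = {
    "direct sunlight (+52%)": 52,
    "indirect, cloudy-day light (+81%)": 81,
    "best case vs. traditional cells (+175%)": 175,
}

for condition, gain in quoted_gains.items():
    implied = BASELINE * (1 + gain / 100)
    print(f"{condition}: implied efficiency {implied:.1f}%")
```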
The mesh is only 30 nanometers thick, and the holes in it are only 175 nm in diameter. It replaces the thicker traditional top layer made of indium tin oxide (ITO). Since the mesh is smaller than the wavelength of the light it’s trying to collect, it can exploit the unusual way light behaves in subwavelength structures, allowing the cell to capture light once it enters the holes in the mesh instead of letting much of it reflect away.
The scientists believe that the cells can be made cost effectively and that this method should work for silicon and gallium arsenide solar cells as well.
References
- Chou, S. Y., et al., Optics Express, Vol. 21, Issue S1, pp. A60-A76 (2013), dx.doi.org/10.1364/OE.21.000A60
Researchers Discover a Simple Way to Increase Solar Cell Efficiency
January 3, 2014
By modifying the molecular structure of a polymer used in solar cells, an international team of researchers has increased solar cell efficiency by more than 30 percent.
Researchers from North Carolina State University and the Chinese Academy of Sciences have found an easy way to modify the molecular structure of a polymer commonly used in solar cells. Their modification can increase solar cell efficiency by more than 30 percent.
Polymer-based solar cells have two domains, consisting of an electron acceptor and an electron donor material. Excitons are the energy particles created by solar cells when light is absorbed. In order to be harnessed effectively as an energy source, excitons must be able to travel quickly to the interface of the donor and acceptor domains and retain as much of the light’s energy as possible.
One way to increase solar cell efficiency is to adjust the difference between the highest occupied molecular orbital (HOMO) level of the polymer and the lowest unoccupied molecular orbital (LUMO) level of the acceptor, so that the exciton can be harvested with minimal loss. One of the most common ways to accomplish this is by adding a fluorine atom to the polymer’s molecular backbone, a difficult, multi-step process that can increase the solar cell’s performance but has considerable material fabrication costs.
A team of chemists led by Jianhui Hou from the Chinese Academy of Sciences created a polymer known as PBT-OP from two commercially available monomers and one easily synthesized monomer. Wei Ma, a post-doctoral physics researcher from NC State and corresponding author on a paper describing the research, conducted the X-ray analysis of the polymer’s structure and the donor:acceptor morphology.
PBT-OP was not only easier to make than other commonly used polymers, but a simple manipulation of its chemical structure gave it a lower HOMO level than had been seen in other polymers with the same molecular backbone. PBT-OP showed an open circuit voltage (the voltage available from a solar cell) of 0.78 volts, a 36 percent increase over the ~0.6 volt average of similar polymers.
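As a quick back-of-the-envelope check of those numbers: a 36 percent increase to 0.78 volts implies a baseline of roughly 0.57 volts, which is consistent with the “~0.6 volt” figure after rounding.

```python
# Sanity check on the quoted open-circuit voltage figures.
v_new = 0.78     # open-circuit voltage of PBT-OP, volts
increase = 0.36  # quoted relative improvement

v_baseline = v_new / (1 + increase)
print(f"implied baseline: {v_baseline:.3f} V")           # ~0.574 V, rounds to ~0.6
print(f"check: +{(v_new / v_baseline - 1) * 100:.0f}%")  # 36%
```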
According to NC State physicist and co-author Harald Ade, the team’s approach has several advantages. “The possible drawback in changing the molecular structure of these materials is that you may enhance one aspect of the solar cell but inadvertently create unintended consequences in devices that defeat the initial intent,” he says. “In this case, we have found a chemically easy way to change the electronic structure and enhance device efficiency by capturing a larger fraction of the light’s energy, without changing the material’s ability to absorb, create and transport energy.”
The researchers’ findings appear in Advanced Materials. The research was funded by the U.S. Department of Energy’s Office of Science (Basic Energy Sciences) and the Chinese Ministry of Science and Technology. Dr. Maojie Zhang synthesized the polymers; Xia Guo, Shaoqing Zhang and Lijun Huo from the Chinese Academy of Sciences also contributed to the work.
Publication: Maojie Zhang, et al., “An Easy and Effective Method to Modulate Molecular Energy Level of the Polymer Based on Benzodithiophene for the Application in Polymer Solar Cells,” Advanced Materials, 2013; doi: 10.1002/adma.201304631
Source: Tracey Peake, North Carolina State University
Image: Maojie Zhang, et al. doi: 10.1002/adma.201304631
To calculate long-term conservation payoff, factor in people
"Short attention-span worlds with long attention-span problems" is how Xiaodong Chen, a former Michigan State University doctoral student now on faculty at the University of North Carolina-Chapel Hill sums it up.
Chen, together with his adviser Jianguo "Jack" Liu, director of the MSU Center for Systems Integration and Sustainability (CSIS), and others, has developed a new way to evaluate and model the long-term effectiveness of conservation investments. Their achievement is not only factoring in ecological gains, like more trees growing, but also putting the actions and reactions of people into the equation.
The paper, "Assessing the Effectiveness of Payments for Ecosystem Services: An Agent-Based Modeling Approach," appears in this week's online edition of Ecology and Society.
The paper examines payments for ecosystem services – the practice of paying people to perform tasks or engage in practices that aid conservation. The authors examined one of China's most sweeping programs, the Natural Forest Conservation Program, under which residents of Wolong Nature Reserve are paid to stop chopping down trees for timber and fuel wood.
Chen explained that they tapped into both social data and environmental information to create a computer model that simulates how the policy would fare over many years in a variety of scenarios.
Studies documenting results on land cover change and panda habitat dynamics were merged with studies revealing how people were likely to behave if new households were formed or incentives for conservation activities were varied.
"Usually studies are developed in either the social sciences or the natural sciences, and the importance of the other perspectives are not built into scientific exploration," Chen said. "We were able to develop this kind of simulation because of collaborative interdisciplinary research - by putting people with different backgrounds together."
He also said the model's ability to run scenarios about how policy could work over decades is crucial because many goals of conservation, like restoring wildlife habitat, can take decades. In the meantime, the actions of individuals living in the area can change.
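To give a flavor of what an agent-based model of a payment program looks like, here is a deliberately minimal sketch. The decision rule, parameter values, and dynamics are invented for illustration only; they are far simpler than the Wolong model described in the paper.

```python
# Minimal agent-based sketch of a payments-for-ecosystem-services policy.
# Illustrates the modeling style only; parameters and the decision rule
# are invented, not taken from Chen et al.'s Wolong model.
import random

random.seed(1)

class Household:
    def __init__(self):
        self.enrolled = True  # receiving payments, not harvesting wood

    def decide(self, payment, wood_value):
        # Stay enrolled when the payment beats the (noisy) value of harvesting.
        self.enrolled = payment + random.gauss(0, 0.1) > wood_value

def simulate(years=30, payment=1.0, wood_value=0.9, n=500):
    households = [Household() for _ in range(n)]
    forest = 0.5  # fraction of the area that is forested
    for _ in range(years):
        for h in households:
            h.decide(payment, wood_value)
        harvesting = sum(not h.enrolled for h in households) / n
        # Forest regrows a little each year and is cut in proportion
        # to the share of households still harvesting.
        forest = max(0.0, min(1.0, forest + 0.02 - 0.05 * harvesting))
    return forest

for pay in (0.7, 0.9, 1.1):
    print(f"payment={pay}: forest cover after 30 years = {simulate(payment=pay):.2f}")
```

Even this toy version shows the point of the approach: the long-run ecological outcome hinges on how individual households respond to the incentive, not just on the ecology.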
Read more at: http://phys.org/news/2014-01-long-term-factor-people.html#jCp
Energy is the Key to Solving Income Inequality
Posted on January 29, 2014 at 5:48 pm by David Holt in Electricity, featured, Jobs & Career Advice, People, Politics/Policy
When exploring solutions to income inequality, policy makers pay close attention to the costs. The cost of healthcare. The cost of food. The cost of child care. The cost of housing.
What about the cost of energy?
According to the Bureau of Labor Statistics, in 2012 the average U.S. family spent over $4,600, or about 9 percent of its budget, to heat and power the home and fuel the family vehicles. Families in the bottom fifth of income earners spent nearly 33 percent more of their budget on energy costs than average: $2,500 a year, or 12 percent of their annual budget.
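Those figures are easy to verify. The short sketch below reproduces the arithmetic; the implied total budgets are back-calculated from the numbers quoted above, not taken from the BLS release.

```python
# Budget-share arithmetic from the figures quoted above.
avg_energy, avg_share = 4600, 0.09  # average family: $4,600, about 9% of budget
low_energy, low_share = 2500, 0.12  # bottom fifth: $2,500, about 12% of budget

print(f"implied average-family budget: ${avg_energy / avg_share:,.0f}")    # ~$51,000
print(f"implied bottom-fifth budget:   ${low_energy / low_share:,.0f}")    # ~$21,000
print(f"budget-share gap: {(low_share / avg_share - 1) * 100:.0f}% more")  # ~33%
```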
Low-income families spend two and a half times more on energy than on health services. Unlike food and housing, consumers cannot shop around for the lowest-cost energy. Bargains can be found in the supermarket, but prices at the pump barely vary from one station to the next. Conservation, similarly, is not an option when the choice is between driving to work and saving a gallon of gasoline.
One remedy for income inequality is tackling rising energy costs. The U.S. Energy Information Administration projects that the price of electricity will rise 13.6 percent and the price of gasoline 15.7 percent between now and 2040. Rising global demand, aging and insufficient energy infrastructure, and restrictive government policies all play a role in increasing costs. President Obama has the ability to reverse this trend and lessen the blow to all consumers.
Take the shale gas boom for example. Increasing access to private and state lands and sound state regulatory programs have boosted production of natural gas and led to a significant lowering of prices. IHS CERA predicted that the shale revolution lifted household income by more than $1,200 in 2012 through lower energy costs, more job opportunities and greater federal and state tax revenues.
Policy makers should promote responsible energy development with the knowledge that it will have a positive effect on even the most vulnerable. The president has the power to act. Permitting energy infrastructure, including the Keystone XL Pipeline; opening new offshore areas to oil and natural gas development; and finalizing the nuclear waste confidence rulemaking could transform the energy economy.
If policy makers want to take meaningful action to help our nation’s low income families, they must pursue actions that help lower – not raise – the cost of energy.
What Actually Happens While You Sleep and How It Affects Your Every Waking Moment
by Maria Popova
The Ancient Greeks believed that one fell asleep when the brain filled with blood and awakened once it drained back out. Nineteenth-century philosophers contended that sleep happened when the brain was emptied of ambitions and stimulating thoughts. “If sleep doesn’t serve an absolutely vital function, it is the greatest mistake evolution ever made,” biologist Allan Rechtschaffen once remarked. Even today, sleep remains one of the most poorly understood human biological functions, despite some recent strides in understanding the “social jetlag” of our internal clocks and the relationship between dreaming and depression. In Dreamland: Adventures in the Strange Science of Sleep (public library), journalist David K. Randall — who stumbled upon the idea after crashing violently into a wall while sleepwalking — explores “the largest overlooked part of your life and how it affects you even if you don’t have a sleep problem.” From gender differences to why some people snore and others don’t to why we dream, he dives deep into this mysterious third of human existence to illuminate what happens when night falls and how it impacts every aspect of our days.

Most of us will spend a full third of our lives asleep, and yet we don’t have the faintest idea of what it does for our bodies and our brains. Research labs offer surprisingly few answers. Sleep is one of the dirty little secrets of science. My neurologist wasn’t kidding when he said there was a lot that we don’t know about sleep, starting with the most obvious question of all — why we, and every other animal, need to sleep in the first place.
But before we get too anthropocentrically arrogant in our assumptions, it turns out the quantitative requirement of sleep isn’t correlated with how high up the evolutionary chain an organism is:
Lions and gerbils sleep about thirteen hours a day. Tigers and squirrels nod off for about fifteen hours. At the other end of the spectrum, elephants typically sleep three and a half hours at a time, which seems lavish compared to the hour and a half of shut-eye that the average giraffe gets each night.
Humans need roughly one hour of sleep for every two hours they are awake, and the body innately knows when this ratio becomes out of whack. Each hour of missed sleep one night will result in deeper sleep the next, until the body’s sleep debt is wiped clean.
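Taken literally, that bookkeeping is easy to write down. The sketch below tracks sleep debt under the one-hour-per-two-awake rule; it is an illustration of the arithmetic, not a clinical model.

```python
# Toy sleep-debt tracker based on the 1-hour-per-2-awake rule described above.
def sleep_debt(nightly_hours, hours_awake=16):
    needed = hours_awake / 2  # ~8 hours of sleep for a 16-hour waking day
    debt = 0.0
    for slept in nightly_hours:
        debt += max(0.0, needed - slept)                  # shortfall adds debt
        debt = max(0.0, debt - max(0.0, slept - needed))  # extra sleep repays it
    return debt

print(sleep_debt([6, 6, 10, 10]))  # two short nights, two recovery nights -> 0.0
```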
What, then, happens as we doze off, exactly? Like all science, our understanding of sleep seems to be a constant “revision in progress”:
Despite taking up so much of life, sleep is one of the youngest fields of science. Until the middle of the twentieth century, scientists thought that sleep was an unchanging condition during which time the brain was quiet. The discovery of rapid eye movements in the 1950s upended that. Researchers then realized that sleep is made up of five distinct stages that the body cycles through over roughly ninety-minute periods.

The first is so light that if you wake up from it, you might not realize that you have been sleeping. The second is marked by the appearance of sleep-specific brain waves that last only a few seconds at a time. If you reach this point in the cycle, you will know you have been sleeping when you wake up. This stage marks the last drop before your brain takes a long ride away from consciousness.

Stages three and four are considered deep sleep. In three, the brain sends out long, rhythmic bursts called delta waves. Stage four is known as slow-wave sleep for the speed of its accompanying brain waves. The deepest form of sleep, this is the farthest that your brain travels from conscious thought. If you are woken up while in stage four, you will be disoriented, unable to answer basic questions, and want nothing more than to go back to sleep, a condition that researchers call sleep drunkenness.

The final stage is REM sleep, so named because of the rapid movements of your eyes dancing against your eyelids. In this stage of sleep, the brain is as active as it is when it is awake. This is when most dreams occur.
(Recall the role of REM sleep in regulating negative emotions.)
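Since the excerpt describes a fixed sequence cycling every ninety minutes or so, a simplified version of it can be written out directly. The equal-length stages below are a simplifying assumption; real stage durations shift over the course of the night.

```python
# Simplified map from minutes asleep to the five-stage cycle described above.
STAGES = ["1 (light sleep)", "2 (sleep-specific brain waves)",
          "3 (deep sleep, delta waves)", "4 (slow-wave sleep)",
          "5 (REM, most dreaming)"]
CYCLE_MINUTES = 90

def stage_at(minutes_asleep):
    # Position within the current ~90-minute cycle, split evenly across
    # the five stages (a simplification; real stages are unequal).
    position = minutes_asleep % CYCLE_MINUTES
    return STAGES[int(position / CYCLE_MINUTES * len(STAGES))]

for t in (10, 45, 85, 230):
    print(f"{t:>3} minutes in -> stage {stage_at(t)}")
```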
Randall’s most urgent point, however, echoes what we’ve already heard from German chronobiologist Till Roenneberg, who studies internal time — in our blind lust for the “luxuries” of modern life, with all its 24-hour news cycles, artificial lighting on demand, and expectations of round-the-clock telecommunications availability, we’ve thrown ourselves into a kind of circadian schizophrenia:
We are living in an age when sleep is more comfortable than ever and yet more elusive. Even the worst dorm-room mattress in America is luxurious compared to sleeping arrangements that were common not long ago. During the Victorian era, for instance, laborers living in workhouses slept sitting on benches, with their arms dangling over a taut rope in front of them. They paid for this privilege, implying that it was better than the alternatives. Families up to the time of the Industrial Revolution engaged in the nightly ritual of checking for rats and mites burrowing in the one shared bedroom. Modernity brought about a drastic improvement in living standards, but with it came electric lights, television, and other kinds of entertainment that have thrown our sleep patterns into chaos.
Work has morphed into a twenty-four-hour fact of life, bringing its own set of standards and expectations when it comes to sleep … Sleep is ingrained in our cultural ethos as something that can be put off, dosed with coffee, or ignored. And yet maintaining a healthy sleep schedule is now thought of as one of the best forms of preventative medicine.
Reflecting on his findings, Randall marvels:
As I spent more time investigating the science of sleep, I began to understand that these strange hours of the night underpin nearly every moment of our lives.
Indeed, Dreamland goes on to explore how sleep — its mechanisms, its absence, its cultural norms — affects everyone from police officers and truck drivers to artists and entrepreneurs, permeating everything from our decision-making to our emotional intelligence.