
Monday, September 28, 2015

Bioinorganic chemistry


From Wikipedia, the free encyclopedia

Bioinorganic chemistry is a field that examines the role of metals in biology. Bioinorganic chemistry includes the study of both natural phenomena such as the behavior of metalloproteins as well as artificially introduced metals, including those that are non-essential, in medicine and toxicology. Many biological processes such as respiration depend upon molecules that fall within the realm of inorganic chemistry. The discipline also includes the study of inorganic models or mimics that imitate the behaviour of metalloproteins.[1]
As a mix of biochemistry and inorganic chemistry, bioinorganic chemistry is important in elucidating the implications of electron-transfer proteins, substrate binding and activation, and atom- and group-transfer chemistry, as well as metal properties in biological chemistry.

Composition of living organisms

About 99% of a mammal's mass consists of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur.[2] The organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen, and most of the oxygen and hydrogen is present as water.[2] The entire collection of metal-containing biomolecules in a cell is called the metallome.

History

Paul Ehrlich used organoarsenic compounds (“arsenicals”) for the treatment of syphilis, demonstrating the relevance of metals, or at least metalloids, to medicine; this area blossomed with Rosenberg’s discovery of the anti-cancer activity of cisplatin (cis-PtCl2(NH3)2). The first protein ever crystallized (see James B. Sumner) was urease, later shown to contain nickel at its active site. Vitamin B12, the cure for pernicious anemia, was shown crystallographically by Dorothy Crowfoot Hodgkin to consist of a cobalt center in a corrin macrocycle. The Watson-Crick structure of DNA demonstrated the key structural role played by phosphate-containing polymers.

Themes in bioinorganic chemistry

Several distinct systems are identifiable in bioinorganic chemistry. Major areas include:

Metal ion transport and storage

This topic covers a diverse collection of ion channels, ion pumps (e.g. Na+/K+-ATPase), vacuoles, siderophores, and other proteins and small molecules that control the concentration of metal ions in cells. One issue is that many metabolically required metals are not readily available, owing to low solubility or scarcity. Organisms have developed a number of strategies for collecting and transporting such elements.

Enzymology

Many reactions in the life sciences involve water, and metal ions are often at the catalytic centers (active sites) of the enzymes that carry them out, i.e. these are metalloproteins. Often the reacting water is a ligand (see metal aquo complex). Examples of hydrolase enzymes are carbonic anhydrase, metallophosphatases, and metalloproteinases. Bioinorganic chemists seek to understand and replicate the function of these metalloproteins.

Metal-containing electron transfer proteins are also common. They can be organized into three major classes: iron-sulfur proteins (such as rubredoxins, ferredoxins, and Rieske proteins), blue copper proteins, and cytochromes. These electron transport proteins are complementary to the non-metal electron transporters nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD). The nitrogen cycle makes extensive use of metals for its redox interconversions.


4Fe-4S clusters serve as electron-relays in proteins.

Oxygen transport and activation proteins

Aerobic life makes extensive use of metals such as iron, copper, and manganese. Heme is utilized by red blood cells in the form of hemoglobin for oxygen transport and is perhaps the most recognized metal system in biology. Other oxygen transport systems include myoglobin, hemocyanin, and hemerythrin. Oxidases and oxygenases are metal systems found throughout nature that take advantage of oxygen to carry out important reactions such as energy generation in cytochrome c oxidase or small molecule oxidation in cytochrome P450 oxidases or methane monooxygenase. Some metalloproteins are designed to protect a biological system from the potentially harmful effects of oxygen and other reactive oxygen-containing molecules such as hydrogen peroxide. These systems include peroxidases, catalases, and superoxide dismutases. A complementary metalloprotein to those that react with oxygen is the oxygen evolving complex present in plants. This system is part of the complex protein machinery that produces oxygen as plants perform photosynthesis.

Myoglobin is a prominent subject in bioinorganic chemistry, with particular attention to the iron-heme complex that is anchored to the protein.

Bioorganometallic chemistry

Bioorganometallic systems feature metal-carbon bonds as structural elements or as intermediates. Bioorganometallic enzymes and proteins include the hydrogenases, FeMoco in nitrogenase, and methylcobalamin; these are naturally occurring organometallic compounds. This area is more focused on the utilization of metals by unicellular organisms. Bioorganometallic compounds are significant in environmental chemistry.[3]


Structure of FeMoco, the catalytic center of nitrogenase.

Metals in medicine

A number of drugs contain metals. This theme covers the study of the design and mechanism of action of metal-containing pharmaceuticals, and of compounds that interact with endogenous metal ions in enzyme active sites. The most widely used anti-cancer drug is cisplatin. MRI contrast agents commonly contain gadolinium. Lithium carbonate has been used to treat the manic phase of bipolar disorder. Gold antiarthritic drugs, e.g. auranofin, have been commercialized. Carbon monoxide-releasing molecules, which are metal complexes, have been developed to suppress inflammation by releasing small amounts of carbon monoxide. The cardiovascular and neuronal importance of nitric oxide has been examined, including the enzyme nitric oxide synthase. (See also: nitrogen assimilation.)

Environmental chemistry

Environmental chemistry traditionally emphasizes the interaction of heavy metals with organisms. Methylmercury caused the major disaster known as Minamata disease. Arsenic poisoning is a widespread problem owing largely to arsenic contamination of groundwater, which affects many millions of people in developing countries. The metabolism of mercury- and arsenic-containing compounds involves cobalamin-based enzymes.

Biomineralization

Biomineralization is the process by which living organisms produce minerals, often to harden or stiffen existing tissues. Such tissues are called mineralized tissues.[4][5][6] Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. Other examples include copper, iron and gold deposits involving bacteria. Biologically-formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin). Because extracellular[7] iron is strongly involved in inducing calcification,[8][9] its control is essential in developing shells; the protein ferritin plays an important role in controlling the distribution of iron.[10]

Types of inorganic elements in biology

Alkali and alkaline earth metals


Like many antibiotics, monensin-A is an ionophore that tightly binds Na+ (shown in yellow).[11]

The abundant inorganic elements act as ionic electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate, and the organic ion bicarbonate. The maintenance of precise gradients across cell membranes maintains osmotic pressure and pH.[12] Ions are also critical for nerves and muscles, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cytosol.[13] Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.[14]
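The ionic gradients described above can be quantified with the Nernst equation, which gives the membrane potential at which a given ion is in equilibrium. The sketch below uses commonly cited (here assumed) potassium concentrations for a mammalian cell; the specific numbers are illustrative, not measurements from this article:

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([ion]_out / [ion]_in)
R = 8.314      # gas constant, J mol^-1 K^-1
F = 96485.0    # Faraday constant, C mol^-1
T = 310.0      # body temperature, K (37 C)
z = 1          # charge of the K+ ion

# Assumed extracellular vs. cytosolic potassium concentrations (mM)
K_out, K_in = 5.0, 140.0

E = (R * T / (z * F)) * math.log(K_out / K_in)
print(f"{E * 1000:.1f} mV")  # about -89 mV: the K+ equilibrium potential
```

The negative sign reflects that potassium is concentrated inside the cell, so its equilibrium potential pulls the membrane toward a negative resting voltage.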

Transition metals

The transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant.[15][16][17] These metals are used in some proteins as cofactors and are essential for the activity of enzymes such as catalase and oxygen-carrier proteins such as hemoglobin.[18] These cofactors are bound tightly to a specific protein; although enzyme cofactors can be modified during catalysis, cofactors always return to their original state after catalysis has taken place. The metal micronutrients are taken up into organisms by specific transporters and bound to storage proteins such as ferritin or metallothionein when not being used.[19][20] Cobalt is essential for the functioning of vitamin B12.[21]

Main group compounds

Many other elements aside from metals are bio-active. Sulfur and phosphorus are required for all life. Phosphorus almost exclusively exists as phosphate and its various esters. Sulfur exists in a variety of oxidation states, ranging from sulfate (SO42−) down to sulfide (S2−). Selenium is a trace element involved in proteins that are antioxidants. Cadmium is important because of its toxicity.[22]

MIT Physicist Proposes New "Meaning of Life"

Original source:  http://bigthink.com/ideafeed/mit-physicist-proposes-new-meaning-of-life

MIT physicist Jeremy England claims that life may not be so mysterious after all, despite the fact it is apparently derived from non-living matter. In a new paper, England explains how simple physical laws make complex life more likely than not. In other words, it would be more surprising to find no life in the universe than a buzzing place like planet Earth.

What does all matter—rocks, plants, animals, and humans—have in common? We all absorb and dissipate energy. While a rock absorbs a small amount of energy before releasing what it doesn't use back into the universe, life takes in more energy and releases less. This makes life better at redistributing energy, and the process of converting and dissipating energy is simply a fundamental characteristic of the universe.
[S]imple physical laws make complex life more likely than not.
According to England, the second law of thermodynamics gives life its meaning. The law states that entropy, i.e. decay, will continuously increase. Imagine a hot cup of coffee sitting at room temperature. Eventually, the cup of coffee will reach room temperature and stay there: its energy will have dissipated. Now imagine molecules swimming in a warm primordial ocean. England claims that matter will slowly but inevitably reorganize itself into forms that better dissipate the warm oceanic energy.
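The coffee-cup picture above can be sketched numerically. The following is an illustrative simulation of Newton's law of cooling with an assumed rate constant (not a value from England's paper); it shows the cup's excess energy dissipating until the coffee matches the room:

```python
# Euler integration of Newton's law of cooling: dT/dt = -k * (T - T_room)
T_room = 20.0   # C, ambient temperature
T = 90.0        # C, initial coffee temperature
k = 0.05        # per minute, illustrative rate constant (assumed)
dt = 1.0        # time step in minutes

for minute in range(120):
    T += -k * (T - T_room) * dt

print(round(T, 1))  # after two hours the coffee has essentially reached room temperature
```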
[T]he second law of thermodynamics gives life its meaning.
The strength of England's theory is that it provides an underlying physical basis for Darwin's theory of evolution and helps explain some tendencies that evolutionary theory alone cannot. Adaptations that don't clearly benefit a species in terms of survivability can be explained thusly: "the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve."

Sunday, September 27, 2015

Why Is There Something Rather Than Nothing

By Robert Adler
6 November 2014
Original source:  http://www.bbc.com/earth/story/20141106-why-does-anything-exist-at-all

People have wrestled with the mystery of why the universe exists for thousands of years. Pretty much every ancient culture came up with its own creation story - most of them leaving the matter in the hands of the gods - and philosophers have written reams on the subject. But science has had little to say about this ultimate question.

However, in recent years a few physicists and cosmologists have started to tackle it. They point out that we now have an understanding of the history of the universe, and of the physical laws that describe how it works. That information, they say, should give us a clue about how and why the cosmos exists.

Their admittedly controversial answer is that the entire universe, from the fireball of the Big Bang to the star-studded cosmos we now inhabit, popped into existence from nothing at all. It had to happen, they say, because "nothing" is inherently unstable.

This idea may sound bizarre, or just another fanciful creation story. But the physicists argue that it follows naturally from science's two most powerful and successful theories: quantum mechanics and general relativity.

Here, then, is how everything could have come from nothing.


(Credit: NASA, ESA, M. Postman (STScI), CLASH Team, Hubble Heritage Team (STScI/AURA))

Particles from empty space


First we have to take a look at the realm of quantum mechanics. This is the branch of physics that deals with very small things: atoms and even tinier particles. It is an immensely successful theory, and it underpins most modern electronic gadgets.

Quantum mechanics tells us that there is no such thing as empty space. Even the most perfect vacuum is actually filled by a roiling cloud of particles and antiparticles, which flare into existence and almost instantaneously fade back into nothingness.

These so-called virtual particles don't last long enough to be observed directly, but we know they exist by their effects.


The Stephan's Quintet group of galaxies (Credit: NASA, ESA, and the Hubble SM4 ERO Team)

Space-time, from no space and no time

From tiny things like atoms, to really big things like galaxies. Our best theory for describing such large-scale structures is general relativity, Albert Einstein's crowning achievement, which sets out how space, time and gravity work.

Relativity is very different from quantum mechanics, and so far nobody has been able to combine the two seamlessly. However, some theorists have been able to bring the two theories to bear on particular problems by using carefully chosen approximations. For instance, this approach was used by Stephen Hawking at the University of Cambridge to describe black holes.

    In quantum physics, if something is not forbidden, it necessarily happens

One thing they have found is that, when quantum theory is applied to space at the smallest possible scale, space itself becomes unstable. Rather than remaining perfectly smooth and continuous, space and time destabilize, churning and frothing into a foam of space-time bubbles.

In other words, little bubbles of space and time can form spontaneously. "If space and time are quantized, they can fluctuate," says Lawrence Krauss at Arizona State University in Tempe. "So you can create virtual space-times just as you can create virtual particles."

What's more, if it's possible for these bubbles to form, you can guarantee that they will. "In quantum physics, if something is not forbidden, it necessarily happens with some non-zero probability," says Alexander Vilenkin of Tufts University in Boston, Massachusetts.


Maybe it all began with bubbles (Credit: amira_a, CC by 2.0)

A universe from a bubble

So it's not just particles and antiparticles that can snap in and out of nothingness: bubbles of space-time can do the same. Still, it seems like a big leap from an infinitesimal space-time bubble to a massive universe that hosts 100 billion galaxies. Surely, even if a bubble formed, it would be doomed to disappear again in the blink of an eye?

    If all the galaxies are flying apart, they must once have been close together

Actually, it is possible for the bubble to survive. But for that we need another trick: cosmic inflation.


Most physicists now think that the universe began with the Big Bang. At first all the matter and energy in the universe was crammed together in one unimaginably small dot, and this exploded. This follows from the discovery, in the early 20th century, that the universe is expanding. If all the galaxies are flying apart, they must once have been close together.

Inflation theory proposes that in the immediate aftermath of the Big Bang, the universe expanded much faster than it did later. This seemingly outlandish notion was put forward in the 1980s by Alan Guth at the Massachusetts Institute of Technology, and refined by Andrei Linde, now at Stanford University.

    As weird as it seems, inflation fits the facts

The idea is that, a fraction of a second after the Big Bang, the quantum-sized bubble of space expanded stupendously fast. In an incredibly brief moment, it went from being smaller than the nucleus of an atom to the size of a grain of sand. When the expansion finally slowed, the force field that had powered it was transformed into the matter and energy that fill the universe today. Guth calls inflation "the ultimate free lunch".

As weird as it seems, inflation fits the facts rather well. In particular, it neatly explains why the cosmic microwave background, the faint remnant of radiation left over from the Big Bang, is almost perfectly uniform across the sky. If the universe had not expanded so rapidly, we would expect the radiation to be patchier than it is.


The cosmic microwave background
(Credit: NASA / WMAP Science Team)

The universe is flat and why that's important


Inflation also gave cosmologists the measuring tool they needed to determine the underlying geometry of the universe. It turns out this is also crucial for understanding how the cosmos came from nothing.

Einstein's theory of general relativity tells us that the space-time we live in could take three different forms. It could be as flat as a table top. It could curve back on itself like the surface of a sphere, in which case if you travel far enough in the same direction you would end up back where you started. Alternatively, space-time could curve outward like a saddle. So which is it?

There is a way to tell. You might remember from maths class that the three angles of a triangle add up to exactly 180 degrees. Actually your teachers left out a crucial point: this is only true on a flat surface. If you draw a triangle on the surface of a balloon, its three angles will add up to more than 180 degrees. Alternatively, if you draw a triangle on a surface that curves outward like a saddle, its angles will add up to less than 180 degrees.

So to find out if the universe is flat, we need to measure the angles of a really big triangle. That's where inflation comes in. It determined the average size of the warmer and cooler patches in the cosmic microwave background. Those patches were measured in 2003, and that gave astronomers a selection of triangles. As a result, we know that on the largest observable scale our universe is flat.
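The balloon example above can be checked directly. This sketch computes the three angles of a triangle drawn on a unit sphere (the north pole plus two equator points 90 degrees of longitude apart); on a positively curved surface the angles sum to more than 180 degrees:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def vertex_angle(a, b, c):
    """Angle (degrees) at vertex a of the spherical triangle abc (unit vectors)."""
    # Project b and c into the tangent plane at a, then measure the angle between them.
    t_ab = [bi - dot(a, b) * ai for ai, bi in zip(a, b)]
    t_ac = [ci - dot(a, c) * ai for ai, ci in zip(a, c)]
    cosang = dot(t_ab, t_ac) / (norm(t_ab) * norm(t_ac))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# Triangle: the north pole and two equator points 90 degrees of longitude apart.
A, B, C = (0, 0, 1), (1, 0, 0), (0, 1, 0)
total = vertex_angle(A, B, C) + vertex_angle(B, C, A) + vertex_angle(C, A, B)
print(total)  # 270.0 -- well over 180 degrees, as expected on a sphere
```

A saddle-shaped surface would push the sum below 180 degrees; only a flat surface gives exactly 180.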


It may not look flat... (Credit: NASA, ESA, and The Hubble Heritage Team (AURA/STScI))

It turns out that a flat universe is crucial. That's because only a flat universe is likely to have come from nothing.

Everything that exists, from stars and galaxies to the light we see them by, must have sprung from somewhere. We already know that particles spring into existence at the quantum level, so we might expect the universe to contain a few odds and ends. But it takes a huge amount of energy to make all those stars and planets.

    The energy of matter is exactly balanced by the energy of the gravity the mass creates

Where did the universe get all this energy? Bizarrely, it may not have had to get any. That's because every object in the universe creates gravity, pulling other objects toward it. This balances the energy needed to create the matter in the first place.

It's a bit like an old-fashioned measuring scale. You can put a heavy weight on one side, so long as it is balanced by an equal weight on the other. In the case of the universe, the matter goes on one side of the scale, and has to be balanced by gravity.

Physicists have calculated that in a flat universe the energy of matter is exactly balanced by the energy of the gravity the mass creates. But this is only true in a flat universe. If the universe had been curved, the two sums would not cancel out.
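The matter-gravity balance can be illustrated with a rough Newtonian estimate: the rest-mass energy Mc^2 of the observable universe is of the same order as its gravitational energy, roughly GM^2/R. The mass and radius below are order-of-magnitude assumptions, not precise cosmological measurements:

```python
# Rough Newtonian comparison of the two sides of the cosmic "measuring scale".
# M and R are order-of-magnitude assumptions for the observable universe.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8       # speed of light, m/s
M = 1.5e53      # kg, rough mass of ordinary matter in the observable universe
R = 4.4e26      # m, rough radius of the observable universe

rest_energy = M * c**2        # energy "cost" of creating the matter
grav_energy = G * M**2 / R    # magnitude of the gravitational energy

ratio = rest_energy / grav_energy
print(ratio)  # within an order of magnitude of 1: the two nearly cancel
```

The exact cancellation claimed for a flat universe requires general relativity, not this back-of-the-envelope arithmetic, but the estimate shows the two quantities are genuinely comparable.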


Matter on one side, gravity on the other (Credit: Da Sal, CC by 2.0)

Universe or multiverse?


At this point, making a universe looks almost easy. Quantum mechanics tells us that "nothing" is inherently unstable, so the initial leap from nothing to something may have been inevitable. Then the resulting tiny bubble of space-time could have burgeoned into a massive, busy universe, thanks to inflation. As Krauss puts it, "The laws of physics as we understand them make it eminently plausible that our universe arose from nothing - no space, no time, no particles, nothing that we now know of."

So why did it only happen once? If one space-time bubble popped into existence and inflated to form our universe, what kept other bubbles from doing the same?

    There could be a mind-boggling smorgasbord of universes

Linde offers a simple but mind-bending answer. He thinks universes have always been springing into existence, and that this process will continue forever.

When a new universe stops inflating, says Linde, it is still surrounded by space that is continuing to inflate. That inflating space can spawn more universes, with yet more inflating space around them. So once inflation starts it should make an endless cascade of universes, which Linde calls eternal inflation. Our universe may be just one grain of sand on an endless beach.

Those universes might be profoundly different to ours. The universe next door might have five dimensions of space rather than the three – length, breadth and height – that ours does. Gravity might be ten times stronger or a thousand times weaker, or not exist at all. Matter might be built out of utterly different particles.

So there could be a mind-boggling smorgasbord of universes. Linde says eternal inflation is not just the ultimate free lunch: it is the only one at which all possible dishes are available.

As yet we don't have hard evidence that other universes exist. But either way, these ideas give a whole new meaning to the phrase "Thanks for nothing".

Lawrence M. Krauss


From Wikipedia, the free encyclopedia

Lawrence M. Krauss
Krauss at Ghent University, October 17, 2013
Born: Lawrence Maxwell Krauss, May 27, 1954, New York, New York, USA
Nationality: American
Thesis: Gravitation and phase transitions in the early universe (1982)
Doctoral advisor: Roscoe Giles[1]
Notable awards: Andrew Gemant Award (2001), Lilienfeld Prize (2001), Science Writing Award (2002), Oersted Medal (2004)
Spouses: Katherine Kelley (1980–2012; divorced, 1 child); Nancy Dahl (2014–present)
Website: krauss.faculty.asu.edu
Lawrence Maxwell Krauss (born May 27, 1954) is an American theoretical physicist and cosmologist who is Foundation Professor of the School of Earth and Space Exploration at Arizona State University and director of its Origins Project.[2] He is known as an advocate of the public understanding of science, of public policy based on sound empirical data, of scientific skepticism and of science education, and he works to reduce the impact of what he regards as superstition and religious dogma in pop culture.[3] Krauss is also the author of several bestselling books, including The Physics of Star Trek and A Universe from Nothing, and chairs the Bulletin of the Atomic Scientists Board of Sponsors.[4]

Biography

Early life and education

Krauss was born in New York City, but spent his childhood in Toronto, Ontario, Canada.[5] Krauss received undergraduate degrees in mathematics and physics with first class honours at Carleton University (Ottawa) in 1977, and was awarded a Ph.D. in physics at the Massachusetts Institute of Technology in 1982.[6][7]

Personal life

On January 19, 1980, he married Katherine Kelley, a native of Nova Scotia. Their daughter, Lilli, was born November 23, 1984. Krauss and Kelley separated in 2010 and were divorced in 2012. Krauss married Australian/American Nancy Dahl on January 7, 2014, and spends some of the Arizona summer in Australia at the Mount Stromlo Observatory.[8][9]

Career

After some time in the Harvard Society of Fellows, Krauss became an assistant professor at Yale University in 1985 and associate professor in 1988. He was named the Ambrose Swasey Professor of Physics, professor of astronomy, and was chairman of the physics department at Case Western Reserve University from 1993 to 2005. In 2006, Krauss led the initiative for the no-confidence vote against Case Western Reserve University's president Edward M. Hundert and provost Anderson by the College of Arts and Sciences faculty. On March 2, 2006, both no-confidence votes were carried: 131–44 against Hundert and 97–68 against Anderson.

In August 2008, Krauss joined the faculty at Arizona State University as a Foundation Professor in the School of Earth and Space Exploration at the Department of Physics in the College of Liberal Arts and Sciences. He also became the Director of the Origins Project, a university initiative.[10] In 2009, he helped inaugurate this initiative at the Origins Symposium, in which eighty scientists participated and three thousand people attended.[11]

Krauss appears in the media both at home and abroad to facilitate public outreach in science. He has also written editorials for The New York Times. As a result of his appearance in 2004 before the state school board of Ohio, his opposition to intelligent design has gained national prominence.[12]

Krauss attended and was a speaker at the Beyond Belief symposia in November 2006 and October 2008. He served on the science policy committee for Barack Obama's first (2008) presidential campaign and, also in 2008, was named co-president of the board of sponsors of the Bulletin of the Atomic Scientists. In 2010, he was elected to the board of directors of the Federation of American Scientists, and in June 2011, he joined the professoriate of the New College of the Humanities, a private college in London.[13] In 2013, he accepted a part-time professorship at the Research School of Astronomy and Astrophysics in the Physics Department of the Australian National University.[9]

Krauss is a critic of string theory, which he discusses in his 2005 book Hiding in the Mirror.[14] Another book, released in March 2011, was titled Quantum Man: Richard Feynman's Life in Science, while A Universe from Nothing, with an afterword by Richard Dawkins, was released in January 2012 and became a New York Times bestseller within a week. Originally, its foreword was to have been written by Christopher Hitchens, but Hitchens grew too ill to complete it.[15][16] The paperback version of the book appeared in January 2013 with a new question-and-answer section and a preface integrating the 2012 discovery of the Higgs boson at the LHC.

A July 2012 article in Newsweek, written by Krauss, indicates how the Higgs particle is related to our understanding of the Big Bang. He also wrote a longer piece in the New York Times explaining the science behind and significance of the particle.[17]

Scientific work


Krauss lecturing about cosmology at TAM 2012

Krauss mostly works in theoretical physics and has published research on a great variety of topics within that field. His primary contribution is to cosmology: he was one of the first physicists to suggest that most of the mass and energy of the universe resides in empty space, an idea now widely known as "dark energy". Furthermore, Krauss has formulated a model in which the universe could have potentially come from "nothing," as outlined in his 2012 book A Universe from Nothing. He explains that certain arrangements of relativistic quantum fields might explain the existence of the universe as we know it while disclaiming that he "has no idea if the notion [of taking quantum mechanics for granted] can be usefully dispensed with".[18] As his model appears to agree with experimental observations of the universe (such as of its shape and energy density), it is referred to as a "plausible hypothesis".[19][20]

Initially, Krauss was skeptical of the Higgs mechanism. However, after the existence of the Higgs boson was confirmed by CERN, he has been researching the implications of the Higgs field on the nature of dark energy.[21]

Atheist activism

Krauss describes himself as an antitheist[22] and takes part in public debates on religion. Krauss featured in the 2013 documentary The Unbelievers, in which he and Richard Dawkins travel across the globe speaking publicly about the importance of science and reason as opposed to religion and superstition. The documentary also contains short clips of prominent figures such as Ayaan Hirsi Ali, Cameron Diaz, Sam Harris, and Stephen Hawking.[23]

In his book, A Universe from Nothing: Why There is Something Rather than Nothing (2012), Krauss discusses the premise that something cannot come from nothing, which has often been used as an argument for the existence of a Prime mover. He has since argued in a debate with John Ellis and Don Cupitt that the laws of physics allow for the universe to be created from nothing. "What would be the characteristics of a universe that was created from nothing, just with the laws of physics and without any supernatural shenanigans? The characteristics of the universe would be precisely those of the ones we live in." [24] In an interview with The Atlantic, however, he states that he has never claimed that "questions about origins are over." According to Krauss, "I don't ever claim to resolve that infinite regress of why-why-why-why-why; as far as I'm concerned it's turtles all the way down."[25]

Krauss has participated in many debates with theologians and apologists, including William Lane Craig and Hamza Tzortzis.[26] The debate with Tzortzis resulted in controversy when Krauss complained to the iERA organisers about the gender segregation of the audience; he only stayed when men and women were allowed to sit together.[27] Later, in discussions around secular liberal democracies and homosexuality, Krauss was asked "Why is incest wrong?" and answered that "Generally incest produces genetic defects" leading to "an ingrained incest taboo in almost all societies" though it could be theoretically permissible under rare circumstances where contraception is used.[28][29]

Honors

Krauss is one of the few living physicists described by Scientific American as a "public intellectual"[20] and he is the only physicist to have received awards from all three major American physics societies: the American Physical Society, the American Association of Physics Teachers, and the American Institute of Physics. In 2012, he was awarded the National Science Board's Public Service Medal for his contributions to public education in science and engineering in the United States.[30]

During December 2011, Krauss was named as a non-voting honorary board member for the Center for Inquiry.[31]

Bibliography

Krauss has authored or co-authored more than three hundred scientific studies and review articles on cosmology and theoretical physics.

Contributor

  • 100 Things to Do Before You Die (plus a few to do afterwards). 2004. Profile Books.
  • The Religion and Science Debate: Why Does It Continue? 2009. Yale Press.

Articles

  • THE ENERGY OF EMPTY SPACE THAT ISN'T ZERO. 2006. Edge.org [33]
  • A dark future for cosmology. 2007. Physics World.
  • The End of Cosmology. 2008. Scientific American.
  • The return of a static universe and the end of cosmology. 2008. International journal of modern physics.
  • Late time behavior of false vacuum decay: Possible implications for cosmology and metastable inflating states. 2008. Physical Review Letters.
  • Why I Love Neutrinos. June 2010. Scientific American 302 (6): 19. doi:10.1038/scientificamerican0610-34.



Krauss (right) during TAM9 in 2011, with Neil deGrasse Tyson and Pamela Gay.

Saturday, September 26, 2015

Cognitive neuroscience

From Wikipedia, the free encyclopedia

Cognitive neuroscience is an academic field concerned with the scientific study of the biological substrates underlying cognition,[1] with a specific focus on the neural substrates of mental processes. It addresses the question of how psychological/cognitive functions are produced by neural circuits in the brain. Cognitive neuroscience is a branch of both psychology and neuroscience, overlapping with disciplines such as physiological psychology, cognitive psychology, and neuropsychology.[2] Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neuropsychology and computational modeling.[2]

Due to its multidisciplinary nature, cognitive neuroscientists may have various backgrounds. Other than the associated disciplines just mentioned, cognitive neuroscientists may have backgrounds in neurobiology, bioengineering, psychiatry, neurology, physics, computer science, linguistics, philosophy, and mathematics.

Methods employed in cognitive neuroscience include experimental paradigms from psychophysics and cognitive psychology, functional neuroimaging, electrophysiology, cognitive genomics, and behavioral genetics. Studies of patients with cognitive deficits due to brain lesions constitute an important aspect of cognitive neuroscience. Theoretical approaches include computational neuroscience and cognitive psychology.

Cognitive neuroscience can look at the effects of damage to the brain and the subsequent changes in thought processes caused by changes in neural circuitry resulting from that damage. Cognitive abilities grounded in brain development are also studied and examined under the subfield of developmental cognitive neuroscience.

Historical origins

Timeline showing major developments in science that led to the emergence of the field of cognitive neuroscience.

Cognitive neuroscience is an interdisciplinary area of study that has emerged from many other fields, perhaps most significantly neuroscience, psychology, and computer science.[3] There were several stages in these disciplines that changed the way researchers approached their investigations and that led to the field becoming fully established.

Although the task of cognitive neuroscience is to describe how the brain creates the mind, historically it has progressed by investigating how a certain area of the brain supports a given mental faculty. However, early efforts to subdivide the brain proved to be problematic. The phrenologist movement failed to supply a scientific basis for its theories and has since been rejected. The aggregate field view, meaning that all areas of the brain participated in all behavior,[4] was also rejected as a result of brain mapping, which began with Hitzig and Fritsch’s experiments [5] and eventually developed through methods such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI).[6] Gestalt theory, neuropsychology, and the cognitive revolution were major turning points in the creation of cognitive neuroscience as a field, bringing together ideas and techniques that enabled researchers to make more links between behavior and its neural substrates.

Origins in philosophy

Philosophers have always been interested in the mind. Aristotle, for example, thought the brain was the body's cooling system and that the capacity for intelligence was located in the heart. It has been suggested that the first person to believe otherwise was the Roman physician Galen in the second century AD, who declared that the brain was the source of mental activity,[7] although this has also been attributed to Alcmaeon.[8] Psychology, a major contributing field to cognitive neuroscience, emerged from philosophical reasoning about the mind.[9]

19th century

Phrenology


A page from the American Phrenological Journal
Main article: Phrenology

One of the predecessors to cognitive neuroscience was phrenology, a pseudoscientific approach that claimed that behavior could be determined by the shape of the skull. In the early 19th century, Franz Joseph Gall and J. G. Spurzheim believed that the human brain was localized into approximately 35 different sections. In his book, The Anatomy and Physiology of the Nervous System in General, and of the Brain in Particular, Gall claimed that a larger bump in one of these areas meant that the corresponding area of the brain was used more frequently by that person. This theory gained significant public attention, leading to the publication of phrenology journals and the creation of phrenometers, which measured the bumps on a human subject's head. While phrenology remained a fixture at fairs and carnivals, it did not enjoy wide acceptance within the scientific community.[10] The major criticism of phrenology is that its theories could not be tested empirically.[3]

Localizationist view

The localizationist view held that mental abilities were localized to specific areas of the brain, and was concerned less with what the characteristics of those abilities were or how they might be measured.[3] Studies performed in Europe, such as those of John Hughlings Jackson, supported this view. Jackson studied patients with brain damage, particularly those with epilepsy. He discovered that epileptic patients often made the same clonic and tonic muscle movements during their seizures, leading him to conclude that the seizures must originate in the same place each time. Jackson proposed that specific functions were localized to specific areas of the brain,[11] which was critical to the future understanding of the brain lobes.

Aggregate field view

According to the aggregate field view, all areas of the brain participate in every mental function.[4]
Pierre Flourens, a French experimental physiologist, challenged the localizationist view using animal experiments.[3] He discovered that removing the cerebellum in rabbits and pigeons affected their sense of muscular coordination, and that all cognitive functions in pigeons were disrupted when the cerebral hemispheres were removed. From this he concluded that the cerebral cortex, cerebellum, and brainstem functioned together as a whole.[12] His approach has been criticised on the basis that the tests were not sensitive enough to notice selective deficits had they been present.[3]

Emergence of neuropsychology

Perhaps the first serious attempts to localize mental functions to specific locations in the brain were made by Broca and Wernicke, mostly by studying the effects of injuries to different parts of the brain on psychological functions.[13] In 1861, French neurologist Paul Broca came across a man who was able to understand language but unable to speak: the man could only produce the sound "tan". It was later discovered that the man had damage to an area of his left frontal lobe now known as Broca's area. Carl Wernicke, a German neurologist, found a patient who could speak fluently but nonsensically. The patient had been the victim of a stroke and could not understand spoken or written language. This patient had a lesion in the area where the left parietal and temporal lobes meet, now known as Wernicke's area. These cases, in which lesions caused specific behavioral changes, strongly supported the localizationist view.

Mapping the brain

In 1870, German physicians Eduard Hitzig and Gustav Fritsch published their findings about the behavior of animals. Hitzig and Fritsch ran an electric current through the cerebral cortex of a dog, causing different muscles to contract depending on which areas of the brain were electrically stimulated. This led to the proposition that individual functions are localized to specific areas of the brain rather than the cerebrum as a whole, as the aggregate field view suggests.[5] Brodmann was also an important figure in brain mapping; his experiments based on Franz Nissl’s tissue staining techniques divided the brain into fifty-two areas.

20th century

Cognitive revolution

At the start of the 20th century, attitudes in America were characterised by pragmatism, which led to a preference for behaviorism as the primary approach in psychology. J. B. Watson was a key figure with his stimulus-response approach. By conducting experiments on animals, he aimed to be able to predict and control behaviour. Behaviourism eventually failed because it could not provide a realistic psychology of human action and thought: it was too grounded in observable physical concepts to explain phenomena like memory and thought. This led to what is often termed the "cognitive revolution".[14]

Neuron doctrine

In the early 20th century, Santiago Ramón y Cajal and Camillo Golgi began working on the structure of the neuron. Golgi developed a silver staining method that could entirely stain several cells in a particular area, leading him to believe that neurons were directly connected to each other in one continuous network. Cajal challenged this view after staining areas of the brain that had less myelin and discovering that neurons were discrete cells. Cajal also discovered that cells transmit electrical signals down the neuron in one direction only. Golgi and Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for this work on the neuron doctrine.[15]

Mid-late 20th century

Several findings in the 20th century continued to advance the field, such as the discovery of ocular dominance columns, recording of single nerve cells in animals, and coordination of eye and head movements. Experimental psychology was also significant in the foundation of cognitive neuroscience. Some particularly important results were the demonstration that some tasks are accomplished via discrete processing stages, the study of attention, and the notion that behavioural data do not provide enough information by themselves to explain mental processes. As a result, some experimental psychologists began to investigate neural bases of behaviour. Wilder Penfield built up maps of primary sensory and motor areas of the brain by stimulating cortices of patients during surgery. Sperry and Gazzaniga’s work on split brain patients in the 1950s was also instrumental in the progress of the field.[7]

Brain mapping

New brain mapping technology, particularly fMRI and PET, allowed researchers to investigate the experimental strategies of cognitive psychology by observing brain function. Although this is often thought of as a new method (most of the technology is relatively recent), the underlying principle goes back as far as 1878, when blood flow was first associated with brain function.[6] Angelo Mosso, an Italian physiologist of the 19th century, had monitored the pulsations of the adult brain through neurosurgically created bony defects in the skulls of patients. He noted that when the subjects engaged in tasks such as mathematical calculations, the pulsations of the brain increased locally. Such observations led Mosso to conclude that blood flow in the brain followed function.[6]

Emergence of a new discipline

Birth of cognitive science

On September 11, 1956, a large-scale meeting of cognitivists took place at the Massachusetts Institute of Technology. George A. Miller presented his "The Magical Number Seven, Plus or Minus Two" paper, while Noam Chomsky and Newell & Simon presented their findings in computer science. Ulric Neisser commented on many of these findings in his 1967 book Cognitive Psychology. The term "psychology" had been waning in the 1950s and 1960s, and the field came to be referred to as "cognitive science". Psychologists such as Miller began to focus on the representation of language rather than general behavior. David Marr concluded that any cognitive process should be understood at three levels of analysis: the computational, the algorithmic/representational, and the physical.[16]

Combining neuroscience and cognitive science

Before the 1980s, interaction between neuroscience and cognitive science was scarce.[17] The term "cognitive neuroscience" was coined by George Miller and Michael Gazzaniga toward the end of the 1970s.[17] Cognitive neuroscience began to integrate the newly laid theoretical ground in cognitive science, which had emerged in the 1950s and 1960s, with approaches in experimental psychology, neuropsychology, and neuroscience. (Neuroscience was not established as a unified discipline until 1971.[18])

In the very late 20th century, new technologies evolved that are now the mainstay of the methodology of cognitive neuroscience, including TMS (1985) and fMRI (1991). Earlier methods used in cognitive neuroscience include EEG (human EEG, 1920) and MEG (1968). Occasionally cognitive neuroscientists utilize other brain imaging methods such as PET and SPECT. A more recent technique is near-infrared spectroscopy (NIRS), which uses light absorption to calculate changes in oxy- and deoxyhemoglobin in cortical areas. In some animals, single-unit recording can be used. Other methods include microneurography, facial EMG, and eye-tracking. Integrative neuroscience attempts to consolidate data in databases and to form unified descriptive models from various fields and scales: biology, psychology, anatomy, and clinical practice.[19]

Brenda Milner, Marcus Raichle, and John O'Keefe received the 2014 Kavli Prize in Neuroscience "for the discovery of specialized brain networks for memory and cognition",[20] and O'Keefe shared the Nobel Prize in Physiology or Medicine in the same year with May-Britt Moser and Edvard Moser "for their discoveries of cells that constitute a positioning system in the brain".[21]

Recent trends

Research foci have recently expanded beyond localizing brain areas for specific functions in the adult brain using a single technology; studies have been diverging in several directions,[22] such as monitoring REM sleep via polygraphy, a machine capable of recording the electrical activity of a sleeping brain. Advances in non-invasive functional neuroimaging and associated data analysis methods have also made it possible to use highly naturalistic stimuli and tasks, such as feature films depicting social interactions, in cognitive neuroscience studies.[23]

Declaration of the Rights of Man and of the Citizen

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Declarati...