
Saturday, December 21, 2013

The Basic, but Surprising, Structure of the Atom


The Idiot’s Guide to Making Atoms

Avogadro's Number and Moles

Writing this chapter has reminded me of the opening of a story by a well-known science fiction author (whose name, needless to say, I can’t recall): “This is a warning, the only one you’ll get so don’t take it lightly.” Alice in Wonderland or “We’re not in Kansas anymore” also pop into mind. What I mean by this is that I could find no way of writing it without requiring the reader to put his thinking (and imagining) cap on. So: be prepared.

A few things about science in general before I plunge headlong into the subject I’m going to cover. I have already mentioned the way science is a step-by-step, often even torturous, process of discovering facts, running experiments, making observations, thinking about them, and so on; a slow but steady accumulation of knowledge and theory which gradually reveals to us the way nature works, as well as why. But there is more to science than this. This more has to do with the concept, or hope I might say, of trying to understand things like the universe as a whole, or things as tiny as atoms, or geological time, or events that happen over exceedingly short times scales, like billionths of a second. I say hope because in dealing with such things, we are extremely removed from reality as we deal with it every day, in the normal course of our lives.

The problem is that, when dealing with such extremes, we find that most of our normal ideas and expectations – our intuitive, "common sense", feeling grasp of reality – all too frequently start to break down. There is of course good reason why this should be, and is, so. Our intuitions and common sense reasoning have been sculpted by our evolution – I will resist the temptation to say designed, although it often feels that way, for, ironically, the same reasons – to grasp and deal with ordinary events over ordinary scales of time and space. Our minds are not well endowed with the ability to intuitively understand nature's extremes, which is why these extremes so often seem counter-intuitive and even absurd to us.

Take, as one of the best examples I know of this, biological evolution, à la Darwin. As the English biologist and author Richard Dawkins has noted several times in his books, one of the reasons so many people have a hard time accepting Darwinian evolution is the extremely long time scale over which it occurs, time scales in the millions of years and more. None of us can intuitively grasp a million years; we can't even grasp, for that matter, a thousand years, which is one-thousandth of a million. As a result, the claim that something like a mouse can evolve into something like an elephant feels "obviously" false. But that feeling is precisely what we should ignore in evaluating the possibility of such events, because we cannot have any such feeling for the exceedingly long time span it would take. Rather, we have to evaluate the likelihood using evidence and hard logic; common sense can seriously mislead us.

The same is true for nature on the scale of the extremely small. When we start poking around in this territory, among things like atoms and sub-atomic particles, we find ourselves in a world which bears little resemblance to the one we are used to. I am going to try various ways of giving you a sense of how the ultra-tiny works, but I know in advance that no matter what I do I am still going to be presenting concepts and ideas that seem, if anything, more outlandish than Darwinian evolution; ideas and concepts that might, no, probably will, leave your head spinning. If it is any comfort, they often leave my mind spinning as well. And again, the only reason to accept them is that they pass the scientific tests of requiring evidence and passing the muster of logic and reason; but they will often seem preposterous, nevertheless.

First, however, let's try to grab hold of just how tiny the world we are about to enter is. Remember Avogadro's number, the number of a mole of anything, from the last chapter? The reason we need such an enormous number when dealing with atoms is that they are so mind-overwhelmingly small. When I say mind-overwhelmingly, I really mean it. A good illustration of just how small that I enjoy is to compare the number of atoms in a glass of water to the number of glasses of water in all the oceans on our planet. As incredible as it sounds, the ratio of the former to the latter is around 10,000 to 1. This means that if you fill a glass with water, walk down to the seashore, pour the water into the ocean and wait long enough for it to disperse evenly throughout all the oceans (if anyone has managed to calculate how long this would take, please let me know), then dip your now empty glass into the sea and re-fill it, you will have scooped up some ten thousand of the original atoms it contained. Another good way of stressing the smallness of atoms is to note that every time you breathe in you are inhaling some of the atoms that some historical figure – say Benjamin Franklin or Muhammad – breathed in his lifetime. Or maybe just in one of their breaths; I can't remember which – that's how hard it is to grasp just how small they are.
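If you would like to check that glass-versus-ocean comparison yourself, here is a minimal back-of-the-envelope sketch in Python. The 250 mL glass and the ocean volume are my own assumed round numbers; with them the ratio comes out in the thousands, roughly the same order of magnitude as the figure quoted above.

```python
# Rough check: atoms in a glass of water vs. glasses of water in the oceans.
# The glass size and ocean volume below are assumed round numbers.
AVOGADRO = 6.022e23            # molecules per mole
GLASS_LITERS = 0.25            # an assumed 250 mL glass
OCEAN_LITERS = 1.33e21         # approximate volume of Earth's oceans, liters

moles_of_water = (GLASS_LITERS * 1000) / 18.0    # ~18 grams per mole of H2O
atoms_in_glass = moles_of_water * AVOGADRO * 3   # 3 atoms per water molecule
glasses_in_ocean = OCEAN_LITERS / GLASS_LITERS

print(f"atoms in one glass:   {atoms_in_glass:.2e}")
print(f"glasses in the ocean: {glasses_in_ocean:.2e}")
print(f"ratio:                {atoms_in_glass / glasses_in_ocean:,.0f}")
```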

One reason all this matters is that nature in general does not demonstrate the property that physicists and mathematicians call "scale invariance." Scale invariance simply means that, if you take an object or a system of objects, you can scale it up as large as you want, or shrink it down as small as you want, and its various properties and behaviors will not change. Some interesting systems that do possess scale invariance are found among the mathematical entities called fractals: no matter how much you enlarge or shrink these fractals, their patterns repeat themselves over and over ad infinitum without change. A good example of this is the Koch snowflake:

which is just a set of repeating triangles, to as much depth as you want. There are a number of physical systems that have scale invariance as well, but, as I just said, in general this is not true of nature. For example, going back to the mouse and the elephant, you could not scale the former up to the size of the latter and let it out to frolic in the African savannah with the other animals; our supermouse's proportionately tiny legs, for one thing, would not be strong enough to lift it from the ground. Making flies human-sized, or vice-versa, runs into similar kinds of problems (a fly can walk on walls and ceilings because it is so small that electrostatic forces dominate its behavior far more than gravity does).
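For readers who like to see the self-similarity rather than take it on faith, here is a small Python sketch of the rule behind the Koch curve (my own illustration): every segment is replaced by four segments one-third as long, with the middle two forming a triangular bump, and the rule is applied again to each new segment.

```python
# Build a Koch curve by recursively replacing each segment with four
# segments one-third as long; zooming into any piece repeats the pattern.
def koch(p1, p2, depth):
    """Return the points of a Koch curve running from p1 to p2."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)              # one third of the way along
    b = (x1 + 2 * dx, y1 + 2 * dy)      # two thirds of the way along
    # peak of the bump: the middle segment rotated by 60 degrees
    peak = (a[0] + dx / 2 - dy * 3**0.5 / 2,
            a[1] + dy / 2 + dx * 3**0.5 / 2)
    points = []
    for seg_start, seg_end in [(p1, a), (a, peak), (peak, b), (b, p2)]:
        points += koch(seg_start, seg_end, depth - 1)[:-1]
    return points + [p2]

curve = koch((0.0, 0.0), (1.0, 0.0), depth=4)
print(len(curve), "points; every third of the curve looks like the whole")
```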


Scale Invariance – Why it Matters

One natural phenomenon that we know lacks scale invariance, and which we met in the last chapter, is matter itself. We know now that you cannot take a piece of matter, a nugget of gold for example, and keep cutting it into smaller and smaller pieces until the end of time. Eventually we reach the scale of individual gold atoms, and then even smaller, into the electrons, protons, and neutrons that comprise the atoms, all of which are very different things from the nugget we started out with. I hardly need to say that all elements, and all their varied combinations, up to stars and galaxies and larger, including even the entire universe, suffer the same fate. I should add, for the sake of completeness, that we cannot go in the opposite direction either; as we move toward increasingly more massive objects, their behavior is more and more dominated by the field equations of Einstein's general relativity, which alter the space and time around and inside them to a more and more significant degree.

Why do I take the time to mention all this? Because we are en route to explaining how atoms, electrons and all, are built up and how they behave, and we need to understand that what goes on in nature at these scales is very different from what we are accustomed to; if we cannot adapt our thinking to these different behaviors we are going to find it very tough, actually impossible, sledding indeed.

In my previous book, Wondering About, I out of necessity gave a very rough picture of the world of atoms and electrons, and how that picture helped explain the various chemical and biological behaviors that a number of atoms (mostly carbon) displayed. I say "out of necessity" because I didn't, in that book, want to mire the reader in a morass of details and physics and equations which weren't needed to explain the things I was trying to explain in a chapter or two. But here, in a book largely dedicated to chemistry, I think the sledding is worth it, even necessary, even if we do still have to make some dashes around trees and skirt the edges of ponds and creeks, and so forth.

Actually, it seems to me that there are two approaches to this field, the field of quantum mechanics, the world we are about to enter, and how it applies to chemistry. One is to simply present the details, as if out of a cookbook: so we are presented our various dishes of, first, classical mechanics, then the Lagrangian equations of motion and Hamiltonian operators and so forth, followed by Schrödinger's various equations and Heisenberg's matrix approach, with eigenvectors and eigenvalues, and all sorts of stuff that one can bury one's head in and never come up for air. Incidentally, if you do want to summon your courage and take the plunge, a very good book to start with is Melvin Hanna's Quantum Mechanics in Chemistry, of which I possess the third edition and still peruse from time to time when I am in the mood for such fodder.

The problem with this approach is that, although it cuts straight to the chase, it leaves out the historical development of quantum mechanics, which, I believe, is needed if we are to understand why and how physicists came to present us with such a peculiar view of reality. They had very good reasons for doing so, and yet the development of modern quantum mechanical theory is something that took several decades to mature and is still in some respects an unfinished body of work. Again, this is largely because some of its premises and findings are at odds with what we would intuitively expect about the world (another reason is that the math can be very difficult). These are premises and findings such as the quantization of energy and other properties to discrete values in very small systems such as atoms. Then there is Heisenberg's famous though still largely misunderstood uncertainty principle (and how the latter leads to the former).


Talking About Light and its Nature

A good way of launching this discussion is to begin with light, or, more precisely, electromagnetic radiation. What do I mean by these polysyllabic words? Sticking with the historical approach, the phenomena of electricity and magnetism had been intensely studied in the 1800s by people like Faraday and Gauss and Ørsted, among others. The culmination of all this brilliant theoretical and experimental work was summarized by the Scottish physicist James Clerk Maxwell, who in 1865 published a set of eight equations describing the relationships between the two phenomena and all that had been discovered about them. These equations were then further condensed down into four and placed in one of their modern forms in 1884 by Oliver Heaviside. One version of these equations is (if you are a fan of partial differential equations):

∇ · E = ρ/ε₀
∇ · B = 0
∇ × E = −∂B/∂t
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t

Don’t worry if you don’t understand this symbolism (most of it I don’t). The important part here is that the equations predict the existence of electromagnetic waves propagating through free space at the speed of light; waves rather like water waves on the open ocean albeit different in important respects. Maxwell at once realized that light must be just such a wave, but, more importantly, that there must be a theoretically infinite number of such waves, each with different wavelengths ranging from the very longest, what we now call radio waves, to the shortest, or gamma rays. An example of such a wave is illustrated below:



To assist you in understanding this wave, look at just one component of it, the oscillating electric field, or the part that is going up and down. For those not familiar with the idea of an electric (or magnetic) field, simply take a bar magnet, set it on a piece of paper, and sprinkle iron filings around it. You will discover, to your pleasure I’m certain, that the filings quickly align themselves according to the following pattern:


The pattern literally traces out the, in this case, magnetic field of the bar magnet, but we could have used an electrically charged source to produce a somewhat different pattern. The point is, the field makes the iron filings move into their respective positions; furthermore, if we were to move the magnet back and forth or side to side the filings would continuously move with it to assume their desired places. This happens because the outermost electrons in the filings (which, in addition to carrying an electric charge, also behave as very tiny magnets) are basically free to orient themselves any way they want, so they respond to the bar's field with gusto, in the same way a compass needle responds to Earth's magnetic field. If we were using an electric dipole it would be the electric properties of the filings' electrons performing the trick, but the two phenomena are highly interrelated.

Go back to the previous figure, of the electromagnetic wave. The wave is a combination of oscillating electric and magnetic fields, at right angles (90°) to each other, propagating through space. Now, imagine this wave passing through a wire made of copper or any other metal. Hopefully you can perceive by now that, if the wave is within a certain frequency range, it will cause the electrons in the wire's atoms to start oscillating back and forth in order to accommodate the changing electric and magnetic fields, just as you saw with the iron filings and the bar magnet. Not only would they do that, but the resulting electron motions could be picked up by the right kinds of electronic gizmos, transistors and capacitors and resistors and the like – here, we have just explained the basic working principle of radio transmission and reception, assuming the wire is the antenna. Not bad for a few paragraphs of reading.
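One number ties all of these waves together: the speed of light. If you know a wave's wavelength, its frequency follows from c = λ × f. The short sketch below (the wavelengths are just representative round numbers I picked) runs that relation across the spectrum, from radio waves down to gamma rays.

```python
# Frequency from wavelength via c = wavelength * frequency.
C = 2.998e8   # speed of light in a vacuum, meters per second

examples = {
    "AM radio":        300.0,     # wavelengths in meters (representative values)
    "FM radio":          3.0,
    "visible (green)": 550e-9,
    "X-ray":            1e-10,
    "gamma ray":        1e-12,
}

for name, wavelength in examples.items():
    frequency = C / wavelength
    print(f"{name:16s} wavelength = {wavelength:9.2e} m, "
          f"frequency = {frequency:9.2e} Hz")
```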

This all sounds very nice and neat, yet it is but our first foot in the door of what leads to modern quantum theory. The reason is that this pat, pretty perception of light as a wave just didn't jibe with some other phenomena scientists were trying to explain at the end of the nineteenth century and the beginning of the twentieth. The main such phenomena, which quantum thinking solved, were the puzzles of the so-called "blackbody" radiation spectrum and the photo-electric effect.


Blackbody Radiation and the Photo-electric Effect

If you take an object, say, the tungsten filament of the familiar incandescent light bulb, and start pumping energy into it, not only will its temperature rise but at some point it will begin to emit visible light: first a dull red, then brighter red, then orange, then yellow – the filament eventually glows with a brilliant white light, meaning all of the colors of the visible spectrum are present in more or less equal amounts, illuminating the room in which we switched the light on. Even before it starts to visibly glow, the filament emits infrared radiation, which consists of longer wavelengths than visible red and is outside our range of vision. It does so in progressively greater amounts and at shorter and shorter wavelengths, until the red light region and above is finally reached. At not much higher temperatures the filament melts, or at least breaks at one of its ends (which is why it is made from tungsten, the metal with the highest melting point), breaking the electric current and causing us to replace the bulb.

The filament is a blackbody in the sense that, to a first approximation, it completely absorbs all radiation poured onto it, and so its electromagnetic spectrum depends only on its temperature and not on any properties of its physical or chemical composition. Other objects which are blackbodies include the sun and stars, and even our own bodies – if you could see into the right region of the infrared range of radiation, we would all be glowing. A set of five blackbody electromagnetic spectra are illustrated below:


Examine these spectra, the colored curves, carefully. Each curve corresponds to a single temperature and is plotted against wavelength (λ, a Greek letter pronounced lambda). They all start out at zero on the left, at the shortest wavelengths; the height of each curve then quickly rises to a maximum at a certain λ, followed by a gradual decline at progressively longer wavelengths until it is basically back at zero again. What is pertinent to the discussion here is that, if we were living around 1900, all these spectra would be experimental; it was not possible then, using the physical laws and equations known at the end of the 1800s, to explain or predict them theoretically. Instead, from the laws of physics as known then, the predicted spectra would simply keep increasing as λ grew shorter, resulting in what was called "the ultraviolet catastrophe."
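For the curious, here is a short numerical sketch of the formula Planck eventually found (his radiation law, mentioned again a few paragraphs below), evaluated at a few arbitrary temperatures. Each computed curve rises, peaks, and falls back toward zero, exactly as the measured spectra do and exactly what classical physics could not reproduce.

```python
import numpy as np

# Planck's law: spectral radiance of a blackbody as a function of
# wavelength (meters) and temperature (kelvin).
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann's constant, J/K

def planck(wavelength, temperature):
    a = 2.0 * h * c**2 / wavelength**5
    return a / (np.exp(h * c / (wavelength * k * temperature)) - 1.0)

wavelengths = np.linspace(100e-9, 3000e-9, 5000)    # 100 nm to 3000 nm
for T in (3000, 4000, 5000, 6000):                  # example temperatures
    radiance = planck(wavelengths, T)
    peak_nm = wavelengths[np.argmax(radiance)] * 1e9
    print(f"T = {T} K: spectrum peaks near {peak_nm:.0f} nm")
# Each curve rises from zero, peaks, and falls off again -- no runaway at
# short wavelengths, hence no ultraviolet catastrophe.
```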

Another, seemingly altogether different, phenomenon that could not be explained using classical physics principles was the so-called photoelectric effect. The general idea is simple enough: if you shine light of a short enough wavelength onto certain metals – the alkali metals, including sodium and potassium, show this effect the strongest – electrons will be ejected from the metal, which can then be easily detected:


This illustration not only shows the effect but also the problem 19th-century physicists had explaining it. There are three different light rays shown striking the potassium plate: red at a wavelength of 700 nanometers or nm (an nm is a billionth of a meter), green at 550 nm, and purple at 400 nm. Note that the red light fails to eject any electrons at all, while the green and purple rays each eject an electron, with the purple electron escaping with a higher velocity, meaning higher energy, than the green.

The reason this is so difficult to explain with the physics of the 1800s is that physics then defined the energy of all waves using both the wave's amplitude, which is related to the distance from crest, or highest point, to trough, or lowest point, in combination with the wavelength (the shorter the wavelength, the more waves strike within a given time). This is something you can easily appreciate by walking into the ocean until the water is up to your chest; the higher the waves are and the faster they hit you, the harder it is to stay on your feet.

Why don’t the electrons in the potassium plate above react in the same way? If light behaved as a classical wave it should not only be the wavelength but the intensity or brightness (assuming this is the equivalent of amplitude) that determines how many electrons are ejected and with what velocity. But this is not what we see: e.g., no matter how much red light, of what intensity, we shine on the plate no electrons are emitted at all, while for green and purple light only the shortening of the wavelength in and of itself increases the energy of the ejected electrons, once again, regardless of intensity. In fact, increasing the intensity only increases the number of escaping electrons, assuming any escape at all, not their velocity. All in all, a very strange situation, which, as I said, had physicists scratching their heads all over at the end of the 1800s.

The answers to these puzzles, and several others, come back to the point I made earlier about nature not being scale invariant. These conundrums were simply insolvable until scientists began to think of things like atoms and electrons and light waves as being quite unlike anything they were used to on the larger scale of human beings and the world as we perceive it. Using such an approach, the two men who cracked the blackbody spectrum problem and the photoelectric effect, Max Planck and Albert Einstein, did so by discarding the concept of light as a classical wave and instead, as Newton had insisted two hundred years earlier, thinking of it as a particle, a particle which came to be called a photon. But they did not treat the photon as a classical particle either, but as a particle with a wavelength; furthermore, the energy E of this particle was described, or quantized, by the equation


E = hc/λ

in which c was the speed of light, λ the photon's wavelength, and h Planck's constant, the latter of which is equal to 6.626 × 10⁻³⁴ joule·seconds – please note the extremely small value of this number. In contrast to our earlier, classical description of waves, the amplitude is nowhere to be found in the equation; only the wavelength, or frequency, of the photon determines its energy.
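To see how neatly this one equation disposes of the photoelectric puzzle, here is a small sketch using the three wavelengths from the illustration above. The work function I use for potassium, about 2.2 eV, is an assumed round value (tabulated figures vary by roughly a tenth of an electron-volt), but the pattern is the point: red photons fall short no matter how many of them arrive, while green and violet photons each carry enough energy on their own.

```python
# Photon energy E = h*c/wavelength, compared with the energy needed to free
# an electron from potassium (work function; ~2.2 eV assumed here).
h = 6.626e-34         # Planck's constant, J*s
c = 2.998e8           # speed of light, m/s
eV = 1.602e-19        # joules per electron-volt
WORK_FUNCTION = 2.2   # assumed work function of potassium, in eV

for color, nm in [("red", 700), ("green", 550), ("violet", 400)]:
    photon_energy = h * c / (nm * 1e-9) / eV       # energy per photon, in eV
    leftover = photon_energy - WORK_FUNCTION
    if leftover > 0:
        print(f"{color:6s} {nm} nm: {photon_energy:.2f} eV per photon -> "
              f"electron ejected with {leftover:.2f} eV left over")
    else:
        print(f"{color:6s} {nm} nm: {photon_energy:.2f} eV per photon -> "
              f"no electron, however bright the light")
```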

If you are starting to feel a little dizzy at this point in the story, don't worry; you are in good company. A particle with a wavelength? Or, conversely, a wave that acts like a particle, even if only under certain circumstances? A wavicle? Trying to wrap your mind around such a concept is like awakening from a strange dream in which bizarre things, only vaguely remembered, happened. And the only justification of this dream world is that it made sense of what was being seen in the laboratories of those who studied these phenomena. Max Planck, for example, was able, using this definition, to develop an equation which correctly predicted the shapes of blackbody spectra at all possible temperatures. And Einstein elegantly showed how it solved the mystery of the photoelectric effect: it takes a minimum energy to eject an electron from a metal atom, and whether the incoming photon can supply that energy is dictated by its wavelength; the velocity, or kinetic energy, of the emitted electron comes solely from the residual energy of the photon after the ejection. The number of electrons freed this way was simply equal to the number of photons that showered down on the metal, which is what the light's intensity measures. It all fit perfectly. The world of the quantum had made its first secure footprints in the field of physics.
There was much, much more to come.

The Quantum and the Atom

Another phenomenon that scientists couldn't explain until the concept of the quantum came along around 1900-1905 was the atom itself. Part of the reason for this is that, as I have said, atoms were not widely accepted as real, physical entities until electrons and radioactivity were discovered by people like the Curies and J. J. Thomson, Rutherford performed his experiments with alpha particles, and Einstein did his work on Brownian motion and the photo-electric effect (the results of which he published in 1905, the same year he published his papers on special relativity and the E = mc² equivalence of mass and energy, all at the tender age of twenty-six!). Another part is that, even if atoms were accepted, physics through the end of the 1800s simply could not explain how they could be stable entities.

The problem with the atomic structure became apparent in 1911, when Rutherford published his “solar system” model, in which a tiny, positively charged nucleus (again, neutrons were not discovered until 1932 so at the time physicists only knew about the atomic masses of elements) was surrounded by orbiting electrons, in much the same way as the planets orbit the sun. The snag with this rather intuitive model involved – here we go both with not trusting intuition and nature not being scale invariant again – something physicists had known for some time about charged particles.

When a charged particle changes direction, it will emit electromagnetic radiation and thereby lose energy. Orbiting electrons are electrons which are constantly changing direction and so, theoretically, should lose their energy and fall into the nucleus in a tiny fraction of a second (the same is true with planets orbiting a sun, but it takes many trillions of years for it to happen). It appeared that the Rutherford model, although still commonly evoked today, suffered from a lethal flaw.

And yet this model was compelling enough that there ought to be some means of rescuing it from its fate. That means was published two years later, in 1913, by Niels Bohr, possibly the most influential physicist of the twentieth century after Einstein. Bohr's insight was to take Planck's and Einstein's idea of the quantization of light and apply it to the electrons' orbits. It was a magnificent synthesis of scientific thinking; I cannot resist inserting here Jacob Bronowski's description of Bohr's idea, from his book The Ascent of Man:

Now in a sense, of course, Bohr’s task was easy. He had the Rutherford atom in one hand, he had the quantum in the other. What was there so wonderful about a young man of twenty-seven in 1913 putting the two together and making the modern image of the atom? Nothing but the wonderful, visible thought-process: nothing but the effort of synthesis. And the idea of seeking support for it in the one place where it could be found: the fingerprint of the atom, namely the spectrum in which its behavior becomes visible to us, looking at it from outside.

Reading this reminds me of another feature of atoms I have yet to mention. Just as blackbodies emit a spectrum of radiation, one based purely on their temperature, so do the different atoms have their own spectra. But the latter have the twist that, instead of being continuous, they consist of a series of sharp lines, and they are not temperature dependent but are usually evoked by electric discharges through a mass of the atoms. The best known of these spectra, and the one shown below, is that of atomic hydrogen (atomic because hydrogen usually exists as diatomic molecules, H2, but the electric discharge also dissociates the molecules into discrete atoms):


This is the visible part of the hydrogen atom spectrum, or so-called Balmer series, in which there are four distinct lines: from right to left, the red one at 656 nanometers (nm), the blue-green at 486 nm, the blue-violet at 434 nm, and the violet at 410 nm.

Bohr's dual challenge was to explain both why the atom, in this case hydrogen, the simplest of atoms, didn't wind down like a spinning top as classical physics predicted, and why its spectrum consisted of these sharp lines instead of a continuous band as the energy was lost. As I said, he accomplished both tasks by invoking quantum ideas. His reasoning was more or less as follows: the planets in their paths around the sun can potentially occupy any orbit, in the same continuous fashion we have learned to expect from the world at large. As we now might begin to suspect, however, this is not true for the electrons "orbiting" (I put this in quotes because we shall see that this is not actually the case) the nucleus. Indeed, this is the key concept which solves the puzzle of atomic structure, and which allowed scientists and other people to finally breathe freely as they accepted the reality of atoms.

Bohr kept the basic solar system model, but modified it by saying that there was not a continuous series of orbits the electrons could occupy but instead a set of discrete ones, in-between which there was a kind of no man’s land where electrons could never enter. Without going into details you can see how, at one stroke, this solved the riddle of the line spectra of atoms: each spectral line represented the transition of an electron from a higher orbit (more energy) to a lower one (less energy). For example, the 656 nm red line in the Balmer spectrum of hydrogen is caused by an electron dropping from orbit level three to orbit level two:


Here again we see the magical formula E = hc/λ, the energy of the emitted photon in this case being equal to ΔE, the difference in energy between the two orbits. Incidentally, if the electron falls further inward, from orbit level two to orbit level one – part of what is known as the Lyman series – the drop is accompanied by a photon emission at 122 nm, well into the ultraviolet and invisible to our visual systems. Likewise, falls to level three from above, the so-called Paschen series, occur in the equally invisible infrared region. There are also levels four, five, six … potentially out to infinity. It was the discovery of these and other series which confirmed Bohr's model and in part earned him the Nobel Prize in Physics in 1922.
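If you want to reproduce these numbers yourself, the Rydberg formula that summarizes Bohr's result, 1/λ = R(1/n₁² − 1/n₂²), takes only a few lines; the sketch below recovers the four Balmer lines quoted above, plus one Lyman and one Paschen line.

```python
# Wavelengths of hydrogen spectral lines from the Rydberg formula.
R = 1.097e7   # Rydberg constant, 1/m

def transition_nm(n_high, n_low):
    """Wavelength (nm) of the photon emitted in a fall from n_high to n_low."""
    inverse_wavelength = R * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inverse_wavelength

for n_high in (3, 4, 5, 6):     # the Balmer series: drops to level two
    print(f"{n_high} -> 2 : {transition_nm(n_high, 2):.0f} nm")
print(f"2 -> 1 : {transition_nm(2, 1):.0f} nm  (Lyman, ultraviolet)")
print(f"4 -> 3 : {transition_nm(4, 3):.0f} nm  (Paschen, infrared)")
```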

This is fundamentally the way science works. Inexplicable features of reality are solved, step by step, sweat drop by tear drop, and blood drop by drop, by the application of known physical laws; or, when needed, new laws and new ideas are summoned forth to explain them. Corks are popped, the bubbly flows, and awards are apportioned among the minds that made the breakthroughs. But then, as always, when the party is over and the guests start working off their hangovers, we realize that although, yes, progress has been made, there is still more territory to cover. Ironically, sometimes the new territory is a direct consequence of the conquests themselves.

Bohr's triumph over atomic structure is perhaps the best known entry in this genre of the story of scientific progress. There were two problems in particular, one empirical and one theoretical, which arose from it, problems which sobered up the scientific community. The empirical problem was that Bohr's atomic model, while it perfectly explained the behavior of atomic hydrogen, could not be successfully applied to any other atom or molecule, not even seemingly simple helium or molecular hydrogen (H2), the former of which comes just after hydrogen in the periodic table. The theoretical problem was that the quantization of orbits was done purely on an ad hoc basis, without any meaningful physical insight as to why it should be true.
And so the great minds returned to their offices and chalkboards, determined to answer these new questions.

Key Ideas in the Development of Quantum Mechanics

The key idea which came out of trying to solve these problems was that, if that which had been thought of as a wave – light – could also possess particle properties, then perhaps the reverse was also true: that which had been thought of as having a particle nature, such as the electron, could also have the characteristics of a wave. Louis de Broglie introduced this concept, which came to be called wave-particle duality, in his 1924 model of the hydrogen atom, explaining Bohr's discrete orbits by recasting them as the distances from the nucleus at which standing electron waves could fit in whole numbers of wavelengths, as the mathematical theory behind waves demanded:


De Broglie's model was supported in the late 1920s by experiments which showed that electrons did indeed show wave features, at least under the right conditions. Yet, though a critical step forward in the formulation of the quantum mechanical description of atoms, de Broglie's picture still fell short. For one thing, like Bohr, he could only predict the properties of the simplest atom, hydrogen. Second, and more importantly, he still gave no fundamental insight as to how or why particles could behave as waves and vice-versa. Although I have said that reality on such small scales should not be expected to behave in the same manner as on the scales we are used to, there still has to be some kind of underlying theory, an intellectual glue if you prefer, that allows us to make at least some sense of what is really going on. And scientists in the early 1920s still did not possess that glue.
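Still, it is worth seeing how far the standing-wave picture does get you. Here is a sketch of de Broglie's bookkeeping for hydrogen: require that a whole number of electron wavelengths (λ = h/mv) fit around the orbit's circumference, balance the Coulomb attraction against the centripetal force, and the radii of Bohr's discrete orbits fall right out.

```python
import math

# De Broglie's standing-wave condition, n * wavelength = 2 * pi * r, combined
# with a circular orbit (Coulomb force = centripetal force) gives the Bohr
# radii r_n = n^2 * h^2 / (4 * pi^2 * m * k * q^2).
h = 6.626e-34       # Planck's constant, J*s
m_e = 9.109e-31     # electron mass, kg
k_e = 8.988e9       # Coulomb constant, N*m^2/C^2
q = 1.602e-19       # elementary charge, C

for n in (1, 2, 3):
    r = n**2 * h**2 / (4 * math.pi**2 * m_e * k_e * q**2)   # orbit radius
    v = n * h / (2 * math.pi * m_e * r)                     # orbital speed
    wavelength = h / (m_e * v)                              # de Broglie wavelength
    waves_per_orbit = 2 * math.pi * r / wavelength
    print(f"n = {n}: radius = {r*1e9:.4f} nm, "
          f"wavelengths fitting around the orbit = {waves_per_orbit:.1f}")
```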

That glue was first provided by people like Werner Heisenberg and Max Born, who, only a few years after de Broglie's publication, created a revelation – or perhaps I should say revolution – with one of scientific, no, philosophic, history's most astonishing ideas. In 1925 Heisenberg, working with Born, introduced the technique of matrix mechanics, one of the modern ways of formulating quantum mechanical systems. Crucial to the technique was the concept that at the smallest levels of nature, such as with electrons in an atom, neither the positions nor the motions of particles could be defined exactly. Rather, these properties were "smeared out" in a way that left the particles with a defined uncertainty. This led, within two years, to Heisenberg's famous Uncertainty Principle, which declared that certain pairs of properties of a particle in any system could not be simultaneously known with perfect precision, but only within a region of uncertainty. One formulation of this principle is, as I have used before:

x × s ≥ h / (2π × m)

which states that the product of the uncertainty of a particle's position (x) and the uncertainty of its speed (s) is always greater than or equal to Planck's constant (h) divided by 2π times the object's mass (m). Now, there is something I must say upfront. It is critical to understand that this uncertainty is not due to deficiencies in our measuring instruments, but is built directly into nature, at a fundamental level. When I say fundamental I mean just that. One could say that, if God or Mother Nature really exists, even He Himself (or Herself, or Itself) does not and cannot know these properties with zero uncertainty. They simply do not have a certainty to reveal to any observer, not even to a supernatural one, should such an observer exist.
Yes, this is what I am saying. Yes, nature is this strange.


The Uncertainty Principle and Schrödinger’s Breakthrough

Another, more precise way of putting this idea is that you can specify the exact position of an object at a certain time, but then you can say nothing about its speed (or direction of motion); or the reverse, that speed / direction can be perfectly specified but then the position is a complete unknown. A critical point here is that the reason we do not notice this bizarre behavior in our ordinary lives – and so never suspected it until the 20th century – is that the smallest possible product of these two uncertainties is inversely proportional to the object's mass (that is, proportional to 1/m) as well as directly proportional to the tiny size of Planck's constant h. The result is that large objects, such as grains of sand, are simply much too massive for this infinitesimally small uncertainty product to be measurable by any known or even imaginable technique.
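To put a number on that last claim, here is a quick sketch comparing the minimum position-times-speed uncertainty, roughly h/(2πm) in the formulation above, for an electron and for a one-milligram grain of sand (the grain's mass is my own assumed figure).

```python
import math

# Minimum (position uncertainty) x (speed uncertainty), ~ h / (2*pi*m),
# for a very light particle and for an everyday object.
h = 6.626e-34    # Planck's constant, J*s

objects = [("electron", 9.11e-31), ("grain of sand (~1 mg)", 1e-6)]
for name, mass_kg in objects:
    minimum_product = h / (2 * math.pi * mass_kg)
    print(f"{name:22s} minimum uncertainty product ~ {minimum_product:.1e} m*(m/s)")
# The electron's figure is enormous compared with atomic dimensions; the sand
# grain's is some 24 orders of magnitude smaller, far beyond any measurement.
```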

Whew, I know. And just what does all this talk about uncertainty have to do with waves? Mainly it is that trigonometric wave functions, like sine and cosine, are closely related to probability functions, such as the well-known Gaussian, or bell-shaped, curve. Let's start with the latter. This function starts off near (but never at) zero at very large negative x, rises to a maximum y = f(x) value at a certain point, say x = 0, and then, as though reflected through a mirror, trails off again at large positive x. A simple example should help make it clear. Take a large group of people. It could be the entire planet's human population, though in practice that would make this exercise difficult. Record the heights of all these people, rounding the numbers off to a convenient unit, say centimeters, or cm. Now make sub-groups of these people, each sub-group consisting of all individuals of a certain height in cm. If you make a plot of the number of people within each sub-group, or the y value, versus the height of that sub-group, the x value, you will get a graph looking rather (but not exactly) like this:


Here, the y or f(x) value is called dnorm(x). The value x = 0 represents the average height of the population, and each other x point (the points have been connected together into a continuous line) represents a height that much greater or less than the average, on either side of x = 0. You see the bell shape of this curve, hence its common name.
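If you would rather not measure the whole planet, a few lines of Python can fake the exercise. The sketch below draws a large sample of heights (the 170 cm average and 8 cm spread are just assumed illustrative numbers), counts how many people land in a few of the one-centimeter sub-groups, and prints a crude sideways version of the bell curve.

```python
import numpy as np

# Simulated version of the height-measuring exercise described above.
rng = np.random.default_rng(0)
heights_cm = rng.normal(loc=170, scale=8, size=100_000).round()  # assumed values

for height in range(150, 191, 5):             # sample a few 1 cm sub-groups
    count = int((heights_cm == height).sum())
    print(f"{height} cm: {'#' * (count // 200):<30} ({count} people)")
# The counts climb to a maximum at the average height and fall away
# symmetrically on either side: the bell shape of the figure above.
```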

What about those trigonometric functions? As another example, a sine function, which is the typical shape of a wave, looks like this:


The resemblances, I assume, are obvious; this function looks a lot like a bunch of bell-shaped curves (both upright and upside-down) strung together. In fact the relationship is so significant that a probability curve such as the Gaussian can be modeled using a series of sine (and cosine) curves, in what mathematicians call a Fourier series or transformation. So significant, in fact, that Erwin Schrödinger, following up on de Broglie's work, produced in 1926 what is now known as the Schrödinger wave equation, or equations rather, which described the various properties of physical systems via one or more differential equations (if you know any calculus, these are equations which relate a function to one or more of its derivatives; if you don't, don't worry about it), whose solutions were a series of complex wave functions (a complex function or number is one that includes the imaginary number i, or square root of negative one), given the formal symbolic designation ψ. In addition to his work with Heisenberg, Max Born almost immediately followed Schrödinger's discovery with the description of the so-called complex square of ψ, or ψ*ψ, as the probability distribution of the object – in this case, the electron in the atom.
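The claim that a bell curve can be assembled out of cosine waves is easy to verify numerically. The sketch below builds a Fourier cosine series for a standard Gaussian on the interval from -5 to 5 (the interval width and the 21-term cutoff are arbitrary choices of mine) and reports how closely the sum of waves matches the original curve.

```python
import numpy as np

# Approximate a Gaussian with a Fourier cosine series on [-L, L].
L = 5.0
x = np.linspace(-L, L, 2001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # the bell curve

approximation = np.zeros_like(x)
for k in range(21):                                # 21 cosine terms
    basis = np.cos(k * np.pi * x / L)
    a_k = np.sum(gauss * basis) * dx / L           # Fourier coefficient
    approximation += (a_k / 2 if k == 0 else a_k) * basis

max_error = np.max(np.abs(approximation - gauss))
print(f"largest gap between the bell curve and its 21-wave sum: {max_error:.1e}")
```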

It is possible to set up Schrödinger's equation for any physical system, including any atom. Alas, for all atoms except hydrogen, the equation cannot be solved exactly, due to a stone wall in mathematical physics known as the three-body problem; any system with more than two interacting components, say the two electrons plus nucleus of helium, simply cannot be solved in closed form. Fortunately, for hydrogen, where there is only a single proton and a single electron, the proper form of the equation can be devised and then solved, albeit with some horrendous-looking mathematics, to yield a set of ψ, or wave functions. The complex squares of these functions, or solutions I should say, as there are an infinite number of them, describe the probability distributions and other properties of the hydrogen atom's electron.
The nut had at last been (almost) cracked.

Solving Other Atoms

So all of this brilliance and sweat and blood, from Planck to Born, came down to this bottom line: find the set of wave functions, or ψs, that solve the Schrödinger equation for hydrogen, and you have solved the riddle of how electrons behave in atoms.

Scientists, thanks to Robert Mulliken in 1932, even went so far as to propose a name for these squared functions, or probability distribution functions: the atomic orbital – a term I dislike because it still invokes the image of electrons orbiting the nucleus.

Despite what I just said, actually, we haven’t completely solved the riddle. As I said, the Schrödinger equation cannot be directly solved for any other atom besides hydrogen. But nature can be kind sometimes as well as capricious, and thus allows us to find side door entrances into her secret realms. In the case of orbitals, it turns out that their basic pattern holds for almost all the atoms, with a little tweaking here, and some further (often computer intensive) calculations there. For our purposes here, it is the basic pattern that matters in cooking up atoms.

Orbitals. Despite the name, again, the electrons do not circle the nucleus (although most of them do have what is called angular momentum, which is the physicists' fancy term for moving in a curved path). I've thought and thought about this, and decided that the only way to begin describing them is to present the general solution (a wave function, remember) to the Schrödinger equation for the hydrogen atom in all its brain-overloading detail:

ψ_nℓm(r, θ, φ) = √[ (2/(na₀))³ · (n − ℓ − 1)! / (2n·(n + ℓ)!) ] · e^(−r/(na₀)) · (2r/(na₀))^ℓ · L^(2ℓ+1)_(n−ℓ−1)(2r/(na₀)) · Y_ℓ^m(θ, φ)

(here a₀ is the Bohr radius, L an associated Laguerre polynomial, and Y a spherical harmonic)
Don't panic: we are not going to muddle through all the symbols and mathematics involved here. What I want you to do is focus on three especially interesting symbols in the equation: n, ℓ, and m. Each appears in the ψ function in one or more places (search carefully), and their numeric values determine the exact form of the ψ we are referring to. Excuse me, I mean the exact form of the ψ*ψ, or squared wave function, or orbital, that is.
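As a concrete taste of what these squared wave functions look like, here is a sketch of the very simplest one, the solution with n = 1, ℓ = 0, m = 0 (the 1s orbital, in the language introduced below). Squaring it and asking at what distance from the nucleus the electron is most likely to be found gives back one Bohr radius, the innermost orbit of Bohr's old model, now reinterpreted as the peak of a probability cloud.

```python
import numpy as np

# The 1s wave function of hydrogen and its radial probability distribution.
a0 = 5.29e-11                                         # Bohr radius, meters

r = np.linspace(1e-13, 10 * a0, 5000)                 # radial grid
psi_1s = np.exp(-r / a0) / np.sqrt(np.pi * a0**3)     # psi for n=1, l=0, m=0
radial_probability = 4 * np.pi * r**2 * psi_1s**2     # probability per unit radius

most_probable_r = r[np.argmax(radial_probability)]
print(f"most probable electron-nucleus distance: {most_probable_r / a0:.2f} Bohr radii")
```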

The importance of n, ℓ, and m lies in the fact that they are not free to take on any values, and that the values they can have are interrelated. Collectively, they are called quantum numbers, and since n is dubbed the principal quantum number, we will start with it. It is also the easiest to understand: its potential values are all the positive integers (whole numbers), from one on up. Historically, it roughly corresponds to the orbit numbers in Bohr's 1913 orbiting model of the hydrogen atom. Note that one is its lowest possible value; it cannot be zero, meaning that the electron cannot collapse into the nucleus. Also sprach Zarathustra!

The next entry in the quantum number menagerie is ℓ, the angular momentum quantum number. As with n it is also restricted to integer values, but with the additional caveat that for every n it can only have values from zero to n minus one. So, for example, if n is one, then ℓ can only equal one value, that of zero, while if n is two, then ℓ can be either zero or one, and so on. Another way of thinking about ℓ is that it describes the kind of orbital we are dealing with: a value of zero refers to what is called an s orbital, while a value of one means a so-called p orbital.

What about m, the magnetic quantum number? This can range in value from −ℓ to +ℓ, and it enumerates the individual orbitals of a given type, as designated by ℓ. Again, for an n of one, ℓ has just the one value of zero; furthermore, for ℓ equals zero m can only be zero (so there is only one s orbital), while for ℓ equals one m can be one of three integers: minus one, zero, and one. Seems complicated? Play around with this system for a while and you will get the hang of it. See? College chemistry isn't so bad after all.

* * *

Let's summarize before moving on. I have mentioned two kinds of orbitals, or electron probability distribution functions, so far: s and p. When ℓ equals zero we are dealing with an s orbital, while for ℓ equals one the orbital is type p. Furthermore, when ℓ equals one m can be either minus one, zero, or one, meaning that at each level where they occur (as determined by n) there are always three p orbitals, and only one s orbital.

What about when n equals three? Following our scheme, for this value of n there are three orbital types, as ℓ can now go from zero to one to two. The orbital designation when ℓ equals two is d; and as m can now vary from minus two to plus two (-2, -1, 0, 1, 2), there are five of these d type orbitals. I could press onward to ever increasing ns and their orbital types (f, g, etc.), but once again nature is cooperative, and for all known elements we rarely get past f orbitals, at least at the ground energy level (even though n reaches seven in the most massive atoms, as we shall see).
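All of these counting rules fit in a few lines of code, which is a handy way to check your own bookkeeping: for each n, let ℓ run from 0 to n minus 1, and for each ℓ, let m run from −ℓ to +ℓ.

```python
# Enumerate the allowed quantum numbers and count orbitals in each shell.
ORBITAL_LETTERS = "spdfg"   # the usual labels for l = 0, 1, 2, 3, 4

for n in range(1, 5):
    descriptions = []
    total = 0
    for l in range(n):                       # l runs from 0 to n - 1
        m_values = list(range(-l, l + 1))    # m runs from -l to +l
        descriptions.append(f"{len(m_values)} x {n}{ORBITAL_LETTERS[l]}")
        total += len(m_values)
    print(f"n = {n}: {', '.join(descriptions)}  ->  {total} orbitals in the shell")
```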

How solar power has changed over the last 10 years

Original article: 7 impressive solar energy facts (+ charts) - ABB Conversations

7 impressive solar energy facts (+ charts)
Solar power is in a tremendously different place today than it was 10 years ago. Below are a handful of impressive stats about solar power's growth, as well as some general stats about solar energy potential that are also quite noteworthy.
 
1. Even yearly energy potential from sunshine dwarfs total energy potential from any other source.
The annual energy potential from solar energy is 23,000 TWy. Energy potential from total recoverable reserves of coal is 900 TWy. For petroleum, it’s 240 TWy; and for natural gas, it’s 215 TWy. Wind energy’s yearly energy potential is 25–70 TWy.
[Source: A Fundamental Look at Energy Reserves for the Planet]
2. Approximately 66% of installed world solar PV power capacity has been installed in the past 2½ years.
Furthermore, total installed capacity is projected to double in the coming 2½ years.
[Source: GTM Research]
3. Global solar PV power capacity grew from about 2.2 GW in 2002 to 100 GW in 2012.
From 2007 to 2012, it grew 10 times over, from 10 GW to 100 GW.
[Source: Renewables 2013 Global Status Report]
4. There are now about 1.36 million jobs in the global solar PV industry.
There are also about 892,000 in the solar heating & cooling industry.
[Source: Renewables 2013 Global Status Report]
5. Germany accounted for nearly one third of global solar PV capacity at the end of 2012.
Italy (16%) and Germany (32%) combined accounted for nearly half of global solar PV capacity.
[Source: Renewables 2013 Global Status Report]
6. The price of solar PV panels dropped about 100 times over from 1977 to 2012.
Since 2008, the price of solar PV panels has dropped about 80%.
[Data Source: Bloomberg New Energy Finance / Chart Source: Cost of Solar/Unknown]
7. The sunshine hitting Texas in one month contains more energy than all the oil and gas ever pumped out of the state.
 
Nonetheless, New Jersey has about 10 times more solar PV power capacity installed than the entire state of Texas.
[Data Source: SEIA / Image Source: 1Sun4All.com]
Those are some of the most impressive solar energy facts and charts I’ve seen, but please let us know if there are some big ones you think I’m missing.
Editor’s note: This is a guest post written by Zachary Shahan, editor of CleanTechnica and Planetsave. The views expressed in this post do not necessarily reflect or represent the views of ABB or its employees.

Bolivian satellite successfully launches from China

 
By -
The Bolivian satellite Tupac Katari (TKSAT-1) was successfully launched on December 20 from the Xichang satellite launch center in China, Bolivian state news agency ABI reported.

The spacecraft is expected to enter operations in March 2014, after a three-month trial period once it is put in orbit.

The Túpac Katari satellite was constructed in China by Chinese firm China Great Wall Industry Corporation (CGWIC). It was launched by the LM-3B/E vehicle developed by the China Academy of Launch Vehicle Technology (CALT). The satellite required parts manufactured in the US, France and Germany, according to the report.

The satellite will provide the government of Bolivia with data for internet access and telemedicine projects as well as distance learning.

The overall financial benefit of the Bolivian satellite Tupac Katari (TKSAT-1) is expected to reach US$600mn, according to previous reports. This figure represents the amount that may be invested by both private and state firms to develop projects associated with the launch of the satellite.

The total cost of the satellite project is US$302mn. The project, which is jointly financed by China's Development Bank and the government of Bolivia, also stipulates the construction of terrestrial base stations in La Paz and Santa Cruz. According to the country's space agency (AEB) head Iván Zambrana, Bolivia will recover the investments made in the satellite in approximately 10 years. Zambrana also said that the satellite will generate annual revenues of nearly US$40mn for the provision of services to local and international firms.

In December 2011, Bolivia signed a contract with the Chinese firm to build the satellite, which is intended to improve communications in the country's rural areas, including TV, internet and telephony. The satellite will also facilitate the development of civil projects like remote education and telemedicine.

The Túpac Katari satellite will have 30 transponders and a useful life of 15 years.

Several countries have already expressed interest in acquiring capacity on the satellite, international press reported.

String Theories and Possible Universes?

Let me confess up front: I am an expert on none of these subjects, and most of what I'm about to say is my own reasonably informed speculation.

First, a brief description of strings in String Theories. Imagine a string, made of pure energy, far, far smaller than an electron; so small, in fact, that we can never observe it. Physicists say it is on the order of a Planck length.

Now vibrate that string in a space-time of eleven dimensions.  It turns out that the various vibrations correspond to the actual sub-atomic particles:  the electron, the quark, the photon, and so on (there are many more, though we aren't usually aware of them).  This is the fundamental essence of String Theories (there are quite a number of them) that allegedly tie together General Relativity and Quantum Mechanics -- if you are not familiar with the issue here you should do some further reading, perhaps starting at http://en.wikipedia.org/wiki/Theory_of_Everything.  The point of String Theories is that, if proven (this has been the snag so far), they would provide a Theory of Everything (TOE).

Onto branes.  "A brane, in string theory and related theories such as supergravity theories, is a physical object that generalizes the notion of a point particle to higher dimensions.[1] For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher dimensional branes. In dimension p, these are called p-branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane." (from Wikipedia)

Branes, if they exist, are fascinating objects, and can in fact be quite large, even much, much larger than our entire universe -- at least in theory.  There could be many branes of 2+ dimensions floating about the universe, sometimes interacting with ordinary matter, with possibly unpleasant results.

But I want to talk about the really big branes.  There is a hypothesis -- I think I can call it that -- that our universe (and probably many others, perhaps even an infinite number) was started by the contact of two branes, which supplied the energy to set off the Big Bang.

If this is so, then I don't see how my idea could be true.  But so much is uncertain (and all unproven) in this field that I can't resist.  Let me quote Wikipedia again:

"The holographic principle is a property of quantum gravity and string theories that states that the description of a volume of space can be thought of as encoded on a boundary to the region—preferably a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind[1] who combined his ideas with previous ones of 't Hooft and Charles Thorn.[1][2] As pointed out by Raphael Bousso,[3] Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way.

In a larger sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure "painted" on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time.[4][5]

The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole can be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory"

Back to my idea.  Is it possible that two two-dimensional branes, under the right conditions and situation, could intersect, such that their line of intersection becomes the hologram from which one or more holographic three-dimensional universes would be generated?  If so, I wonder how the cosmological horizon would be construed here.  At this point, let me remind the reader that I am no expert in these fields and don't even know how to evaluate my speculation (I cannot, of course, call it a hypothesis).  But I am hoping to generate some discussion -- even if just criticism -- on this.

Thank you.



Fly Agaric. Hallucinogen at modest doses, lethal at higher dosage. Beautiful in either case.

 
 
 


A 3d electron in an atom

Stephen Jeffers @stephenj66
 
 
I hear this is a computer simulation, not an actual photo.  Still, the stunning beauty, like a multi-winged butterfly, belies our ordinary conception of atoms being boring.  This is a work of art.

Breakthrough: One step closer to nuclear fusion power station – from Phys.org, 12/19/2013

Breakthrough: One step closer to nuclear fusion power station
The superconductivity research group of the University of Twente (UT) has made a technological breakthrough crucial to the success of nuclear fusion reactors, allowing for clean, inexhaustible energy generation based on the workings of the stars in our galaxy.

The crux of the new development is a highly ingenious and robust superconducting cable system. This makes for a remarkably strong magnet that controls the very hot, energy-generating plasma in the reactor core, laying the foundation for nuclear fusion. The new cables are far less susceptible to heating thanks to a clever way of interweaving, which allows for a significant increase in the possibilities to control the plasma. Moreover, in combination with an earlier UT invention, the cables are able to withstand the immense forces inside the reactor for a very long time. The increased working life of the superconductors and the improved control of the plasma will soon make fusion reactors more reliable: the magnet coils take up one third of the costs of a nuclear fusion power station. The longer their working life, the cheaper the energy will be. The research is a project within the context of the Green Energy Initiative of the University of Twente.

Cost-effective clean energy

Project leader Arend Nijhuis: 'The worldwide development of nuclear fusion is picking up steam, and this breakthrough gives it a new impulse. Our new cables have already been extensively tested in two institutes.' Mr Nijhuis has been invited to a new collaboration with China and expects that the UT system will become a global standard. The world's largest nuclear fusion reactor, ITER, is under construction in Cadarache in France, and is expected to start operation by 2020, as a joint project of the US, EU, Russia, India, Japan, South Korea and China. However, China and South Korea have also initiated their own national large-scale nuclear fusion projects, in which the UT technology can be incorporated.


How does it work?
Nuclear fusion takes place in the heart of the reactor, in plasma with a temperature of 150 million degrees Celsius. An enormously strong magnetic field (of 13 teslas) is required in order to control this incredibly hot plasma. Such a magnetic field can only be generated efficiently through superconductivity. That is why liquid helium flows through the hollow cables of the coils. This reduces the temperature to approximately 4.5 K (-269 °C), which allows for zero resistance inside the cables and lets the current increase up to 45,000 amperes, the generated magnetic field controlling the plasma. This immense current will, however, also put so much pressure on the wires that rapid wear of the wires has to be prevented. Moreover, rapid changes of the magnetic field can create excessive temperatures inside the cables, causing the superconductivity to break down and the fusion process to extinguish. It is exactly this problem which has now been solved by interweaving the superconducting wires of the coil in a special way.

Clever way of weaving
The wrist-thick cables wound around the (six) coils, with a total height of 13 metres inside the reactor, consist of interwoven wires with a thickness of 0.8 mm. The first step is to bundle three of these thin wires: two wires made of superconducting niobium-tin and one wire made of copper. This copper makes the whole bundle resistant to heating during any undesired sudden end of the superconducting state. Three of those first-level bundles are twisted around each other. After that, the weaving process continues until the desired thickness has been reached. The length across which a wire spirals once - the pitch - and the mutual proportions between the successive weave levels appear to be crucial. An increased pitch in the first weave levels ensures that the cables resist the immense mechanical forces better and prevents any strong distortions. However, the result which attracted the most international surprise, even though already predicted at the UT, is that the new 'pitch proportions' lead to such a strong reduction in the currents between the wires that there is much less heating of the cables, and the cables therefore remain superconducting. So the new cables have considerably increased the chance that fusion power stations will soon generate energy in a reliable way.
Explore further: Plasma experiment demonstrates admirable self-control

Obama Appears To Soften His Stance On The NSA, Edward Snowden

When the NSA scandal first broke six months ago, President Barack Obama was forceful in defending the government's surveillance programs, while criticizing Edward Snowden.

At his year-end press conference on Friday, the president's defense of the NSA and condemnation of Snowden appeared weaker. Speaking from the press briefing room, Obama fielded a range of questions over the hour-long session, and at the head of the list of issues was surveillance.
Here's how his Friday comments stacked up to remarks made over the summer:

ON THE NSA: Then
On June 7, Obama was asked at a press conference a) to react to the reports of secret government surveillance of phone records and the Internet and b) whether he could assure Americans that there wasn't some massive secret database containing all of their personal information. The president responded that the programs were classified for a reason, but far from secret to Congress.

"And in the abstract, you can complain about Big Brother and how this is a potential program run amuck, but when you actually look at the details, then I think we've struck the right balance," Obama said.

ON THE NSA: Now
By Dec. 20, Obama was asked if his credibility had taken a hit with that "right balance" comment. Earlier this week alone, a judge ruled the program was unconstitutional and a presidential task force urged limits on NSA spying.

The president replied that it was "important to note" that balance is subject to a series of judgment calls that "make sure the American people are protected."
"What is absolutely clear to me is that given the public debate that's taken place and the disclosures that have taken place over the last several months, this is only going to work if the American people have confidence and trust. Now, part of the challenge is that because of the manner in which these disclosures took place, in dribs and drabs, often times shaded in a particular way, and because of some of the constraints that we've had in terms of declassifying information and getting it out there, that trust in how many safeguards exist and how these programs are run has been diminished. So what's going to be important is how to build that back up."
ON EDWARD SNOWDEN: Then
On Aug. 9, Obama made his thoughts on Snowden clear, saying that he did not consider him a patriot.
"The fact is, Mr. Snowden has been charged with three felonies," Obama said.

ON EDWARD SNOWDEN: Now
By Dec. 20, CBS News' Major Garrett asked Obama what he would say to Americans who believe Snowden "set in motion something that is proper and just."

Obama replied:
I've got to be careful here, Major, because Mr. Snowden is under indictment. He has been charged with -- with crimes, and that's the province of the attorney general and ultimately, a judge and a jury. So I -- I can't weigh in specifically on this case at this point. I'll try to see if I can get at the spirit of the question, even if I can't talk about the specifics.  
I have said before and I believe that this is an important conversation that we needed to have. I have also said before that the way in which these disclosures happened have been -- have been damaging to the United States and damaging to our intelligence capabilities.
And I think that there was a way for us to have this conversation without that damage.

Researchers team up on potential fuel cell advance from PhysOrg.com

Researchers team up on potential fuel cell advance

Dec 19, 2013 by Lori Ann White

SLAC researchers Hernan Sanchez Casalongue (left) and Hirohito Ogasawara tune the custom fuel cell built for SSRL Beam Line 13-2. Credit: Brad Plummer/SLAC

Read more at: http://phys.org/news/2013-12-team-potential-fuel-cell-advance.html#jCp

Scientists at SLAC National Accelerator Laboratory put together clues from experiments and theory to discover subtle variations in the way fuel cells generate electricity – an advance that could lead to ways to make the cells more efficient.

As reported today in Nature Communications, researchers focused powerful X-rays from SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) on one half of a tiny but functional fuel cell and watched it combine oxygen and hydrogen to make water. They saw something they didn't expect.

"We were surprised to find two possible routes for this reaction to take place," said Hirohito Ogasawara, a staff scientist at SSRL and with the SLAC/Stanford SUNCAT Center for Interface Science and Catalysis. What's more, one route uses less of the fuel cell's energy to complete – leaving more energy to power a car, for example.

However, the news wasn't a surprise to SUNCAT theorists, who had already proposed the existence of such variations in fuel cell chemistry. These variations are important because fuel cells turn chemical energy to electricity, and even a subtle difference can add up to a considerable amount of electricity over time.


On one side of a fuel cell, hydrogen gas is split into protons and electrons, which travel to the other side of the cell along different paths, providing electricity along the way. There they combine with oxygen gas to form water, a process that requires a catalyst to propel the reaction along. The most commonly used catalyst is platinum, a metal more costly than gold; research has focused on ways to decrease the amount of platinum needed by making the catalyst as efficient as possible. This has been a difficult task without tools that show each reaction as it takes place, step by step.
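
For a sense of the energy budget behind that overall reaction, here is a small worked example of my own (using standard textbook values rather than figures from the article) for the maximum voltage an ideal hydrogen fuel cell can deliver:

```python
# Worked example (not from the article): the maximum voltage a hydrogen fuel
# cell can deliver follows from the Gibbs free energy of the overall reaction
# H2 + 1/2 O2 -> H2O (liquid), divided by the charge transferred.
delta_G = 237.1e3    # J/mol, standard Gibbs free energy released per mole of H2
n = 2                # electrons transferred per H2 molecule
F = 96485.0          # C/mol, Faraday constant

E_ideal = delta_G / (n * F)
print(f"Ideal cell voltage: {E_ideal:.2f} V")   # about 1.23 V
# Real cells deliver noticeably less, largely because the oxygen-reduction
# step at the platinum cathode wastes energy -- which is why the hydroxide
# pathways described below matter for efficiency.
```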

Ogasawara and colleagues used a technique called ambient-pressure X-ray photoelectron spectroscopy (APXPS) at SSRL to watch the reactions taking place on the surface of the platinum catalyst in minute detail, and under realistic conditions.

"At first, what was new was the technique, and that we could see what was happening under working conditions," said Hernan Sanchez Casalongue, a graduate student in chemistry who designed and built the miniature fuel cell used to help test the efficacy of APXPS in this research. "But as we analyzed our results, we saw there were two different kinds of hydroxide on the surface of the platinum."

Hydroxide is an "intermediate species" that briefly forms on the way to the creation of water. It consists of one hydrogen atom bonded to one oxygen atom – O-H instead of H2O. In the fuel cell, the researchers found that one type of hydroxide is "hydrated," or loosely bonded to a water molecule, and the other is not; the non-hydrated form requires less energy to take the final step to becoming H2O.
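
To see why such a small energetic difference matters in practice, here is a purely hypothetical scaling sketch; every number below is a placeholder I chose for illustration, not a value reported in the paper:

```python
# Purely hypothetical illustration: suppose favoring the non-hydrated
# hydroxide route saved 50 mV of overpotential per cell. All of these numbers
# are assumed placeholders, not results from the Nature Communications paper.
delta_V = 0.050          # volts saved per cell (assumed)
cells_in_stack = 370     # cells in an automotive-scale stack (assumed)
stack_current = 200.0    # amperes drawn from the stack (assumed)

extra_power_w = delta_V * cells_in_stack * stack_current
hours_of_operation = 1000.0                                  # assumed
extra_energy_kwh = extra_power_w * hours_of_operation / 1000.0

print(f"Extra usable power: {extra_power_w:.0f} W")
print(f"Over {hours_of_operation:.0f} hours: {extra_energy_kwh:.0f} kWh recovered")
# A few tens of millivolts per cell, multiplied across a stack and over time,
# is exactly the kind of 'subtle difference' the article describes adding up.
```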

Ogasawara and Sanchez Casalongue took their discovery to SUNCAT theorists, who had already theorized that a change in the voltage applied to the fuel cell could affect the formation of hydroxide.
"This led us to the insight that tuning the hydration of hydroxide may lead to more efficient catalysis, but at the time there was no experimental evidence to back that up," said Venkat Viswanathan, then a graduate student at SUNCAT and now a faculty member at Carnegie Mellon.

Ogasawara's experiment, possible only with APXPS, provided Viswanathan and the other SUNCAT theorists with their experimental evidence. It also gives scientists another tool for improving fuel cells: Figure out how to make more of the non-hydrated hydroxides, and the fuel cell efficiency will improve.

Anders Nilsson, deputy director of SUNCAT and a co-author on the paper, said, "This represents a real breakthrough in electrocatalysis. These intermediate chemical species have long been speculated on but have never before been directly observed. This discovery could lead to more efficient catalysts."

Ogasawara said the researchers can't give any firm numbers on how much this can boost energy production from fuel cells – "This was a proof-of-concept experiment" – but it's an encouraging development, and they are looking at reactions involving other catalysts for similar phenomena. They're also going to use APXPS to study the other side of the reaction – splitting water to make hydrogen and oxygen.

Explore further: New catalyst for fuel cells a potential substitute for platinum

More information: "Direct observation of the oxygenated species during oxygen reduction on a platinum fuel cell cathode." Hernan Sanchez Casalongue, Sarp Kaya, Venkatasubramanian Viswanathan, Daniel J. Miller, Daniel Friebel, Heine A. Hansen, Jens K. Nørskov, Anders Nilsson, Hirohito Ogasawara. Nature Communications 4, Article number: 2817 DOI: 10.1038/ncomms3817

Friday, December 20, 2013

Yew Trees Beckoning You to Enter


Sunset on Mars


2001: A Space Odyssey Redux

Hard to believe it has been 45 years since the best science fiction movie ever made came out. I remember going with my mother and sister and being utterly dazzled, if perplexed by its meaning (I found out when I read the book some years later). With the Apollo program in full progress (though we hadn't yet landed a man on the moon), it gave me the optimistic vision of the future I retain to this day.
 


2001: A Space Odyssey is a 1968 British-American science fiction film produced and directed by Stanley Kubrick. The screenplay was written by Kubrick and Arthur C. Clarke, and was partially inspired by Clarke's short story "The Sentinel". Clarke concurrently wrote the novel 2001: A Space Odyssey which was published soon after the film was released. The story deals with a series of encounters between humans and mysterious black monoliths that are apparently affecting human evolution, and a space voyage to Jupiter tracing a signal emitted by one such monolith found on the moon. Keir Dullea and Gary Lockwood star as the two astronauts on this voyage, with Douglas Rain as the voice of the sentient computer HAL 9000 who has full control over their spaceship. The film is frequently described as an "epic film", both for its length and scope, and for its affinity with classical epics.[2][3]
Produced and distributed by the American studio Metro-Goldwyn-Mayer, the film was made almost entirely in England, using both the studio facilities of MGM's subsidiary "MGM British" (among the last movies to be shot there before its closure in 1970)[4] and those of Shepperton Studios, mostly because of the availability of much larger sound stages than in the United States. The film was also co-produced by Kubrick's own "Stanley Kubrick Productions". Kubrick, having already shot his previous two films in England, decided to settle there permanently during the filming of Space Odyssey. Though Space Odyssey was released in the United States over a month before its release in the United Kingdom, and Encyclopædia Britannica calls this an American film,[5] other sources refer to it as an American, British, or American-British production.[6]
Thematically, the film deals with elements of human evolution, technology, artificial intelligence, and extraterrestrial life. It is notable for its scientific accuracy, pioneering special effects, ambiguous imagery, sound in place of traditional narrative techniques, and minimal use of dialogue. The film's memorable soundtrack is the result of the association that Kubrick made between the spinning motion of the satellites and the dancers of waltzes, which led him to use The Blue Danube waltz by Johann Strauss II,[7] and the symphonic poem Also sprach Zarathustra by Richard Strauss, to portray the philosophical concept of the Übermensch in Nietzsche's work of the same name.[8][9]
Despite initially receiving mixed reactions from critics and audiences alike, 2001: A Space Odyssey garnered a cult following and slowly became a box office hit. Some years after its initial release, it eventually became the highest grossing picture from 1968 in North America. Today it is near-universally recognized by critics, filmmakers, and audiences as one of the greatest and most influential films ever made. The 2002 Sight & Sound poll of critics ranked it among the top ten films of all time,[10] placing it #6 behind Tokyo Story. The film retained sixth place on the critics' list in 2012, and was named the second greatest film ever made by the directors' poll of the same magazine.[11] Two years before that, it was ranked the greatest film of all time by The Moving Arts Film Journal.[12] It was nominated for four Academy Awards, and received one for its visual effects. In 1991, it was deemed "culturally, historically, or aesthetically significant" by the United States Library of Congress and selected for preservation in the National Film Registry.[13]
In 1984, a sequel directed by Peter Hyams was released, titled 2010: The Year We Make Contact.

Sirtuins Reverse Aging in Mice; Human Trials to Start Soon


Anthony Loera

Science News (Pop Sci)  -  10:07 AM

 

The scientist who brought us news about resveratrol is now going further, activating all of the sirtuins to reverse aging.


New DNA sequence shows that Neanderthals liked incest

Adam Clark Estes on Sploid

Original article at:  http://sploid.gizmodo.com/new-dna-sequence-shows-that-neanderthals-liked-incest-1485937673
When you have a face that handsome, you do what you can to keep it in the family.
Berkeley scientists just generated a pristine genome sequence of Neanderthal DNA—the most complete ever created—and what they found might gross you out. It might also blow your mind.
 
The DNA sequence came from a 50,000-year-old Neanderthal bone, a woman's toe to be exact. The scientists compared it with DNA from modern humans as well as Denisovans, the Neanderthals' contemporaries and often their lovers too. The analysis revealed a new level of complexity in the family tree that connects Neanderthals, Denisovans, and a recently discovered mystery human species with modern man.
 
It also revealed that Neanderthals liked to have sex with their siblings. The woman whose toe bone was analyzed, research showed, was the daughter of a very closely related man and woman, likely half-siblings. The dataset as a whole suggests that inbreeding was more common among Neanderthals than among modern humans.
 
We've long known, however, that interbreeding was popular with everybody. The new study, which will be published in Nature on Thursday, says Neanderthals and Denisovans interbred often and are very closely related. Modern humans participated in the interbreeding, too, though they fancied the Neanderthals more than the Denisovans. The researchers estimate 1.5 to 2.1 percent of non-African, modern day genomes can be traced to Neanderthals, while about 0.2 percent can be traced to Denisovans.
 
Inevitably, we are different, though we're still figuring out exactly how. The new research shows that at least 87 specific genes in the modern day human genome differ from those in Neanderthals and Denisovans. Maybe one of them explains why we won out in the great game of evolution.

Quantum cryptography

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Quantum_crypto...