
Wednesday, January 16, 2013

Chapter Two of The Third Row


The Idiot’s Guide to Making Atoms

Avogadro’s Number and Moles

Writing this chapter has reminded me of the opening of a story by a well-known science fiction author (whose name, needless to say, I can’t recall): “This is a warning, the only one you’ll get, so don’t take it lightly.” Alice in Wonderland and “We’re not in Kansas anymore” also come to mind. What I mean by this is that I could find no way of writing it without requiring the reader to put his thinking (and imagining) cap on. So: be prepared.

A few things about science in general before I plunge headlong into the subject I’m going to cover. I have already mentioned the way science is a step-by-step, often even tortuous, process of discovering facts, running experiments, making observations, thinking about them, and so on; a slow but steady accumulation of knowledge and theory which gradually reveals to us the way nature works, as well as why. But there is more to science than this. This “more” has to do with the concept, or the hope I might say, of trying to understand things like the universe as a whole, or things as tiny as atoms, or geological time, or events that happen over exceedingly short time scales, like billionths of a second. I say hope because in dealing with such things, we are far removed from reality as we deal with it every day, in the normal course of our lives.

The problem is that, when dealing with such extremes, we find that most of our normal ideas and expectations – our intuitive, “common sense” grasp of reality – all too frequently start to break down. There is of course good reason why this should be, and is, so. Our intuitions and common sense reasoning have been sculpted by our evolution – I will resist the temptation to say designed, although it often feels that way, ironically for the same reasons – to grasp and deal with ordinary events over ordinary scales of time and space. Our minds are not well endowed with the ability to intuitively understand nature’s extremes, which is why these extremes so often seem counter-intuitive and even absurd to us.

Take, as one of the best examples I know of this, biological evolution, à la Darwin. As the English biologist and author Richard Dawkins has noted several times in his books, one of the reasons so many people have a hard time accepting Darwinian evolution is the extremely long time scale over which it occurs, time scales in the millions of years and more. None of us can intuitively grasp a million years; we can’t even grasp, for that matter, a thousand years, which is only one-thousandth of a million. As a result, the claim that something like a mouse can evolve into something like an elephant feels “obviously” false. But that feeling is precisely what we should ignore in evaluating the possibility of such events, because we cannot have any such feeling for the exceedingly long time span it would take. Rather, we have to evaluate the likelihood using evidence and hard logic; common sense can seriously mislead us.

The same is true for nature on the scale of the extremely small. When we start poking around in this territory, among things like atoms and sub-atomic particles, we find ourselves in a world which bears little resemblance to the one we are used to. I am going to try various ways of giving you a sense of how the ultra-tiny works, but I know in advance that no matter what I do I am still going to be presenting concepts and ideas that seem, if anything, more outlandish than Darwinian evolution; ideas and concepts that might, no, probably will, leave your head spinning. If it is any comfort, they often leave my mind spinning as well. And again, the only reason to accept them is that they pass the scientific tests of requiring evidence and passing muster with logic and reason; but they will often seem preposterous, nevertheless.

First, however, let’s try to grab hold of just how tiny the world we are about to enter is. Remember Avogadro’s number, the number of particles in a mole of anything, from the last chapter? The reason we need such an enormous number when dealing with atoms is that they are so mind-overwhelmingly small. When I say mind-overwhelmingly, I really mean it. A good illustration of just how small, one that I enjoy, is to compare the number of atoms in a glass of water to the number of glasses of water in all the oceans on our planet. As incredible as it sounds, the ratio of the former to the latter is around 10,000 to 1. This means that if you fill a glass with water, walk down to the seashore, pour the water into the ocean and wait long enough for it to disperse evenly throughout all the oceans (if anyone has managed to calculate how long this would take, please let me know), then dip your now empty glass into the sea and re-fill it, you will have scooped up some ten thousand of the original atoms that it contained. Another good way of stressing the smallness of atoms is to note that every time you breathe in you are inhaling some of the atoms that some historical figure – say Benjamin Franklin or Muhammad – breathed in his lifetime. Or maybe just in one of their breaths; I can’t remember which – that’s how hard it is to grasp just how small atoms are.
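If you would like to check this claim for yourself, here is a quick back-of-the-envelope sketch in Python. The 250 mL glass and the roughly 1.3 billion cubic kilometers of ocean are assumptions of mine, so treat the answer as an order-of-magnitude estimate only:

```python
# Back-of-the-envelope check: molecules in a glass of water vs.
# glasses of water in all the oceans. Glass size and ocean volume
# are rough assumptions, so expect order-of-magnitude accuracy.

AVOGADRO = 6.022e23        # particles per mole
GLASS_LITERS = 0.25        # assume a 250 mL glass
OCEAN_LITERS = 1.3e21      # ~1.3 billion km^3 of ocean, in liters

# Water: ~18 grams per mole, ~1000 grams per liter
moles_per_glass = GLASS_LITERS * 1000 / 18
molecules_per_glass = moles_per_glass * AVOGADRO   # ~8.4e24

glasses_per_ocean = OCEAN_LITERS / GLASS_LITERS    # ~5.2e21

print(f"molecules per glass : {molecules_per_glass:.2e}")
print(f"glasses per ocean   : {glasses_per_ocean:.2e}")
print(f"ratio               : {molecules_per_glass / glasses_per_ocean:.0f}")
# The ratio comes out to roughly 1,600 molecules recovered in the
# refilled glass -- the same ballpark as the text's ten-thousand-to-one
# figure (the exact number depends on the glass size and on whether
# you count whole molecules or individual atoms).
```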

One reason all this matters is that nature in general does not demonstrate the property that physicists and mathematicians call “scale invariance.” Scale invariance simply means that, if you take an object or a system of objects, you can scale it up as large as you want, or down as small as you want, and its various properties and behaviors will not change. Some interesting systems that do possess scale invariance are found among the mathematical entities called fractals: no matter how much you enlarge or shrink these fractals, their patterns repeat themselves over and over ad infinitum without change. A good example of this is the Koch snowflake:

which is just a set of repeating triangles, to as much depth as you want. There are a number of physical systems that have scale invariance as well, but, as I just said, in general this is not true. For example, going back to the mouse and the elephant, you could not scale the former up to the size of the latter and let it out to frolic on the African savannah with the other animals; our supermouse’s proportionately tiny legs, for one thing, would not be strong enough to lift it from the ground. Making flies human sized, or vice-versa, runs into similar kinds of problems (a fly can walk on walls and ceilings because it is so small that electrostatic forces dominate its behavior far more than gravity does).
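To put a number on the Koch snowflake’s self-similarity, here is a minimal sketch of its bookkeeping (my own illustration): each iteration replaces every edge with four edges one-third as long, so the perimeter grows without bound even while the enclosed area converges to a finite limit (8/5 of the starting triangle’s area).

```python
# Koch snowflake bookkeeping: every edge becomes 4 sub-edges,
# each 1/3 the previous length, so the perimeter grows by a
# factor of 4/3 at every iteration.

def koch_stats(iterations, side=1.0):
    edges, length = 3, side          # start from an equilateral triangle
    for _ in range(iterations):
        edges *= 4                   # each edge is replaced by 4 edges
        length /= 3                  # each new edge is 1/3 as long
    return edges, edges * length     # edge count, total perimeter

for n in range(6):
    edges, perimeter = koch_stats(n)
    print(f"iteration {n}: {edges:6d} edges, perimeter = {perimeter:.3f}")
# Zoom in anywhere and the pattern looks the same as the whole:
# that is scale invariance.
```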


Scale Invariance – Why it Matters

One natural phenomenon that we know lacks scale invariance – one we met in the last chapter – is matter itself. We know now that you cannot take a piece of matter, a nugget of gold for example, and keep cutting it into smaller and smaller pieces until the end of time. Eventually we reach the scale of individual gold atoms, and then even smaller pieces, the electrons, protons, and neutrons that comprise the atoms, all of which are much different things than the nugget we started out with. I hardly need to say that all elements, and all their varied combinations, up to stars and galaxies and larger, including even the entire universe, suffer the same fate. I should add, for the sake of completeness, that we cannot go in the opposite direction either: as we move toward increasingly more massive objects, their behavior is more and more dominated by the field equations of Einstein’s general relativity, which alter the space and time around and inside them to an ever more significant degree.

Why do I take the time to mention all this? Because we are en route to explaining how atoms, electrons and all, are built up and how they behave, and we need to understand that what goes on in nature at these scales is very different from what we are accustomed to. If we cannot adapt our thinking to these different behaviors, we are going to find the sledding very tough indeed – actually, impossible.

In my previous book, Wondering About, I of necessity gave a very rough picture of the world of atoms and electrons, and how that picture helped explain the various chemical and biological behaviors that a number of atoms (mostly carbon) displayed. I say “of necessity” because I didn’t, in that book, want to mire the reader in a morass of details and physics and equations which weren’t needed to explain the things I was trying to explain in a chapter or two. But here, in a book largely dedicated to chemistry, I think the sledding is worth it, even necessary, even if we do still have to make some dashes around trees and skirt the edges of ponds and creeks, and so forth.

Actually, it seems to me that there are two approaches to this field – the field of quantum mechanics, the world we are about to enter – and how it applies to chemistry. One is to simply present the details, as if out of a cookbook: we are presented our various dishes of, first, classical mechanics, then the Lagrangian equations of motion and Hamiltonian operators and so forth, followed by Schrödinger’s various equations and Heisenberg’s matrix approach, with eigenvectors and eigenvalues, and all sorts of stuff that one can bury one’s head in and never come up for air. Incidentally, if you do want to summon your courage and take the plunge, a very good book to start with is Melvin Hanna’s Quantum Mechanics in Chemistry, of which I possess the third edition, and which I go perusing through from time to time when I am in the mood for such fodder.

The problem with this approach is that, although it cuts straight to the chase, it leaves out the historical development of quantum mechanics, which, I believe, is needed if we are to understand why and how physicists came to present us with such a peculiar view of reality. They had very good reasons for doing so, and yet the development of modern quantum mechanical theory took several decades to mature and is still in some respects an unfinished body of work. Again, this is largely because some of its premises and findings are at odds with what we would intuitively expect about the world (another reason is that the math can be very difficult) – premises and findings such as the quantization of energy and other properties to discrete values in very small systems such as atoms. Then there is Heisenberg’s famous though still largely misunderstood uncertainty principle (and how the latter leads to the former).


Talking About Light and its Nature

A good way of launching this discussion is to begin with light, or, more precisely, electromagnetic radiation. What do I mean by these polysyllabic words? Sticking with the historical approach, the phenomena of electricity and magnetism had been intensely studied in the 1800s by people like Faraday and Gauss and Ørsted, among others. The culmination of all this brilliant theoretical and experimental work was summarized by the Scottish physicist James Clerk Maxwell, who in 1865 published a set of eight equations describing the relationships between the two phenomena and all that had been discovered about them. These equations were then further condensed down into four and placed in one of their modern forms in 1884 by Oliver Heaviside. One version of these equations is (if you are a fan of partial differential equations):

∇ · E = ρ/ε₀
∇ · B = 0
∇ × E = −∂B/∂t
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t

Don’t worry if you don’t understand this symbolism (most of it I don’t). The important part here is that the equations predict the existence of electromagnetic waves propagating through free space at the speed of light; waves rather like water waves on the open ocean, albeit different in important respects. Maxwell at once realized that light must be just such a wave, but, more importantly, that there must be a theoretically infinite number of such waves, each with different wavelengths, ranging from the very longest, what we now call radio waves, to the shortest, the gamma rays. An example of such a wave is illustrated below:



To assist you in understanding this wave, look at just one component of it, the oscillating electric field, or the part that is going up and down. For those not familiar with the idea of an electric (or magnetic) field, simply take a bar magnet, set it on a piece of paper, and sprinkle iron filings around it. You will discover, to your pleasure I’m certain, that the filings quickly align themselves according to the following pattern:


The pattern literally traces out the (in this case magnetic) field of the bar magnet, but we could have used an electrically charged source to produce a somewhat different pattern. The point is, the field makes the iron filings move into their respective positions; furthermore, if we were to move the magnet back and forth or side to side, the filings would continuously move with it to assume their desired places. This happens because the outermost electrons in the filings (which, in addition to carrying an electric charge, also behave as very tiny magnets) are basically free to orient themselves any way they want, so they respond to the bar’s field with gusto, in the same way a compass needle responds to Earth’s magnetic field. If we were using an electric dipole it would be the electric properties of the filings’ electrons performing the trick, but the two phenomena are highly interrelated.

Go back to the previous figure, of the electromagnetic wave. The wave is a combination of oscillating electric and magnetic fields, at right angles (90°) to each other, propagating through space. Now, imagine this wave passing through a wire made of copper or any other metal. Hopefully you can perceive by now that, if the wave is within a certain frequency range, it will cause the electrons in the wire’s atoms to start spinning around and gyrating in order to accommodate the changing electric and magnetic fields, just as you saw with the iron filings and the bar magnet. Not only would they do that, but the resulting electron motions could be picked up by the right kinds of electronic gizmos – transistors and capacitors and resistors and the like. Here, with the wire serving as the antenna, we have just explained the basic working principle of radio transmission and reception. Not bad for a few paragraphs of reading.

This all sounds very nice and neat, yet it is but our first foot in the door of what leads to modern quantum theory. The reason is that this pat, pretty perception of light as a wave just didn’t jibe with some other phenomena scientists were trying to explain at the end of the nineteenth century and the beginning of the twentieth. The main such phenomena, the ones which quantum thinking solved, were the puzzles of the so-called “blackbody” radiation spectrum and the photoelectric effect.


Blackbody Radiation and the Photoelectric Effect

If you take an object, say, the tungsten filament of the familiar incandescent light bulb, and start pumping energy into it, not only will its temperature rise but at some point it will begin to emit visible light: first a dull red, then brighter red, then orange, then yellow – the filament eventually glows with a brilliant white light, meaning all of the colors of the visible spectrum are present in more or less equal amounts, illuminating the room in which we switched the light on. Even before it starts to visibly glow, the filament emits infrared radiation, which consists of longer wavelengths than visible red and is outside our range of vision. It does so in progressively greater and greater amounts and at shorter and shorter wavelengths, until the red light region and above is finally reached. At not much higher temperatures the filament melts, or at least breaks at one of its ends (which is why it is made from tungsten, the metal with the highest melting point), interrupting the electric current and causing us to replace the bulb.

The filament is a blackbody in the sense that, to a first approximation, it completely absorbs all radiation poured onto it, and so its electromagnetic spectrum depends only on its temperature and not on any properties of its physical or chemical composition. Other objects which are blackbodies include the sun and stars, and even our own bodies – if you could see into the right region of the infrared, you would find that we are all glowing. A set of five blackbody electromagnetic spectra are illustrated below:


Examine these spectra, the colored curves, carefully. They all start out at zero on the left, which is the short end of the wavelength (λ, a Greek letter pronounced lambda) scale; the height of each curve then quickly rises to a maximum at a certain λ, followed by a gradual decline at progressively longer wavelengths until it is basically back at zero again – and the hotter the body, the shorter the wavelength of its maximum. What is pertinent to the discussion here is that, if we were living around 1900, all these spectra would be experimental; it was not possible then, using the physical laws and equations known at the end of the 1800s, to explain or predict them theoretically. Instead, from the laws of physics as known then, the predicted spectra would simply keep increasing as λ grew shorter, resulting in what was called “the ultraviolet catastrophe.”
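For the curious, here is a small sketch comparing Planck’s quantum formula (which we will meet shortly) with the classical Rayleigh-Jeans prediction that produced the catastrophe. The formulas are the standard spectral radiance expressions; the 5000 K temperature is an arbitrary choice of mine:

```python
# Planck's law vs. the classical Rayleigh-Jeans law, which blows up
# at short wavelengths -- the "ultraviolet catastrophe".
import math

H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann's constant, J/K

def planck(lam, T):
    """Planck spectral radiance B(lambda, T)."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)

def rayleigh_jeans(lam, T):
    """Classical prediction: diverges as lambda -> 0."""
    return 2 * C * KB * T / lam**4

T = 5000  # kelvin
for nm in (100, 500, 1000, 5000):
    lam = nm * 1e-9
    print(f"{nm:5d} nm: Planck {planck(lam, T):.3e}   "
          f"Rayleigh-Jeans {rayleigh_jeans(lam, T):.3e}")
# At long wavelengths the two roughly agree; at 100 nm the classical
# value is absurdly large while Planck's curve has already turned
# back down toward zero.
```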

Another, seemingly altogether different, phenomenon that could not be explained using classical physics principles was the so-called photoelectric effect. The general idea is simple enough: if you shine light of the right wavelength or shorter onto certain metals – the alkali metals, including sodium and potassium, show this effect the strongest – electrons will be ejected from the metal, which can then be easily detected:


This illustration shows not only the effect but also the problem nineteenth-century physicists had explaining it. There are three different light rays shown striking the potassium plate: red at a wavelength of 700 nanometers or nm (an nm is a billionth of a meter), green at 550 nm, and purple at 400 nm. Note that the red light fails to eject any electrons at all, while the green and purple rays each eject an electron, with the purple-ejected electron escaping at a higher velocity, meaning higher energy, than the green.

The reason this is so difficult to explain with the physics of the 1800s is that physics then defined the energy of all waves using both the wave’s amplitude – its height, the distance from its resting level up to its crest – in combination with the wavelength (the shorter the wavelength, the more waves strike within a given time). This is something you can easily appreciate by walking into the ocean until the water is up to your chest; the higher the waves are and the faster they hit you, the harder it is to stay on your feet.

Why don’t the electrons in the potassium plate above react in the same way? If light behaved as a classical wave, it should be not only the wavelength but the intensity or brightness (assuming this is the equivalent of amplitude) that determines how many electrons are ejected and with what velocity. But this is not what we see: no matter how much red light, of whatever intensity, we shine on the plate, no electrons are emitted at all, while for green and purple light the shortening of the wavelength in and of itself increases the energy of the ejected electrons, once again regardless of intensity. In fact, increasing the intensity only increases the number of escaping electrons, assuming any escape at all, not their velocity. All in all, a very strange situation, which, as I said, had physicists everywhere scratching their heads at the end of the 1800s.

The answers to these puzzles, and several others, come back to the point I made earlier about nature not being scale invariant. These conundrums were simply insoluble until scientists began to think of things like atoms and electrons and light waves as being quite unlike anything they were used to on the larger scale of human beings and the world as we perceive it. Using such an approach, the two men who cracked the blackbody spectrum problem and the photoelectric effect, Max Planck and Albert Einstein, did so by discarding the concept of light as a classical wave and instead, as Newton had insisted two hundred years earlier, thinking of it as a particle – a particle which came to be called a photon. But neither did they treat the photon as a classical particle; rather, it was a particle with a wavelength, and furthermore the energy E of this particle was described, or quantized, by the equation

E = hc/λ
in which c is the speed of light, λ the photon’s wavelength, and h Planck’s constant, the latter equal to 6.626 × 10⁻³⁴ joule·seconds – please note the extremely small value of this number. In contrast to our earlier, classical description of waves, the amplitude is nowhere to be found in the equation; only the wavelength, or equivalently the frequency, of the photon determines its energy.
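To see what this little equation buys us, here is a quick sketch applying E = hc/λ to the three rays in the photoelectric illustration above. The threshold energy I use for potassium (about 2.2 eV) is an approximate, looked-up value of mine, so take the exact numbers with a grain of salt:

```python
# Photon energies from E = hc/lambda, compared against an assumed
# ejection threshold (work function) for potassium of ~2.2 eV.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

THRESHOLD = 2.2  # eV -- approximate work function of potassium (assumption)

for color, nm in (("red", 700), ("green", 550), ("purple", 400)):
    photon_ev = H * C / (nm * 1e-9) / EV
    if photon_ev > THRESHOLD:
        verdict = f"electron ejected, {photon_ev - THRESHOLD:.2f} eV left over"
    else:
        verdict = "no electron ejected, however intense the light"
    print(f"{color:6s} {nm} nm: photon carries {photon_ev:.2f} eV -> {verdict}")
# red ~1.77 eV (below threshold), green ~2.25 eV (barely above),
# purple ~3.10 eV (well above) -- matching the illustration.
```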

If you are starting to feel a little dizzy at this point in the story, don’t worry; you are in good company. A particle with a wavelength? Or, conversely, a wave that acts like a particle, even if only under certain circumstances? A wavicle? Trying to wrap your mind around such a concept is like awakening from a strange dream in which bizarre things, only vaguely remembered, happened. The only justification for this dream world is that it made sense of what was being seen in the laboratories of those who studied these phenomena. Max Planck, for example, was able, using this definition, to develop an equation which correctly predicted the shapes of blackbody spectra at all possible temperatures. And Einstein elegantly showed how it solved the mystery of the photoelectric effect: it takes a minimum energy to eject an electron from a metal atom, an energy dictated by the wavelength of the incoming photon; the velocity, or kinetic energy, of the emitted electron comes solely from the residual energy of the photon after the ejection. The number of electrons freed this way is simply equal to the number of photons that shower down on the metal – the light’s intensity. It all fit perfectly. The world of the quantum had made its first secure footprints in the field of physics.
There was much, much more to come.

The Quantum and the Atom

Another phenomenon that scientists couldn’t explain until the concept of the quantum came along around 1900-1905 was the atom itself. Part of the reason for this is that, as I have said, atoms were not widely accepted as real, physical entities until electrons and radioactivity were discovered by people like the Curies and J. J. Thomson, Rutherford performed his experiments with alpha particles, and Einstein did his work on Brownian motion and the photoelectric effect (the results of which he published in 1905, the same year he published his papers on special relativity and the E = mc² equivalence of mass and energy, all at the tender age of twenty-six!). Another part is that, even if atoms were accepted, physics through the end of the 1800s simply could not explain how they could be stable entities.

The problem with atomic structure became apparent in 1911, when Rutherford published his “solar system” model, in which a tiny, positively charged nucleus (again, the neutron was not discovered until 1932, so at the time physicists knew only the atomic masses of elements) was surrounded by orbiting electrons, in much the same way as the planets orbit the sun. The snag with this rather intuitive model involved – here we go again, both with not trusting intuition and with nature not being scale invariant – something physicists had known for some time about charged particles.

When a charged particle changes direction, it emits electromagnetic radiation and thereby loses energy. Orbiting electrons are electrons which are constantly changing direction and so, theoretically, should lose their energy and fall into the nucleus in a tiny fraction of a second (the same is true of planets orbiting a sun, though there it takes many trillions of years to happen). It appeared that the Rutherford model, although still commonly invoked today, suffered from a lethal flaw.

And yet this model was compelling enough that there ought to be some means of rescuing it from its fate. That means was published two years later, in 1913, by Niels Bohr, possibly, after Einstein, the most influential physicist of the twentieth century. Bohr’s insight was to take Planck’s and Einstein’s idea of the quantization of light and apply it to the electrons’ orbits. It was a magnificent synthesis of scientific thinking; I cannot resist inserting here Jacob Bronowski’s description of Bohr’s idea, from his book The Ascent of Man:

Now in a sense, of course, Bohr’s task was easy. He had the Rutherford atom in one hand, he had the quantum in the other. What was there so wonderful about a young man of twenty-seven in 1913 putting the two together and making the modern image of the atom? Nothing but the wonderful, visible thought-process: nothing but the effort of synthesis. And the idea of seeking support for it in the one place where it could be found: the fingerprint of the atom, namely the spectrum in which its behavior becomes visible to us, looking at it from outside.

Reading this reminds me of another feature of atoms I have yet to mention. Just as blackbodies emit a spectrum of radiation, one based purely on their temperature, so do the different atoms have their own spectra. But the latter come with a twist: instead of being continuous, they consist of a series of sharp lines, and they are not temperature dependent but are usually evoked by electric discharges into a mass of the atoms. The best known of these spectra, and the one shown below, is that of atomic hydrogen (atomic because hydrogen usually exists as diatomic molecules, H2, but the electric discharge also dissociates the molecules into discrete atoms):


This is the visible part of the hydrogen atom spectrum, or so-called Balmer series, in which there are four distinct lines: from right to left, the red one at 656 nanometers (nm), the blue-green at 486 nm, the blue-violet at 434 nm, and the violet at 410 nm.

Bohr’s dual challenge was to explain both why the atom – in this case hydrogen, the simplest of atoms – didn’t wind down like a spinning top as classical physics predicted, and why its spectrum consisted of these sharp lines instead of a continuum as the energy was lost. As said, he accomplished both tasks by invoking quantum ideas. His reasoning went more or less like this: the planets in their paths around the sun can potentially occupy any orbit, in the same continuous fashion we have learned to expect from the world at large. As we now might begin to suspect, however, this is not true for the electrons “orbiting” (I put this in quotes because we shall see that this is not actually the case) the nucleus. Indeed, this is the key concept which solves the puzzle of atomic structure, and which allowed scientists and other people to finally breathe freely while they accepted the reality of atoms.

Bohr kept the basic solar system model, but modified it by saying that there was not a continuous series of orbits the electrons could occupy but instead a set of discrete ones, in-between which there was a kind of no man’s land where electrons could never enter. Without going into details you can see how, at one stroke, this solved the riddle of the line spectra of atoms: each spectral line represented the transition of an electron from a higher orbit (more energy) to a lower one (less energy). For example, the 656 nm red line in the Balmer spectrum of hydrogen is caused by an electron dropping from orbit level three to orbit level two:


Here again we see the magical formula E = hc/λ, the energy of the emitted photon in this case being equal to ΔE, the difference in energy between the two orbits. Incidentally, if the electron falls further inward, from orbit level two to orbit level one, we get what is known as the Lyman series, in this case accompanied by a photon emission at 122 nm, well into the ultraviolet and invisible to our visual systems. Likewise, falls to level three from above, the so-called Paschen series, occur in the equally invisible infrared. There are also levels four, five, six … potentially out to infinity. It was the discovery of these and other series which confirmed Bohr’s model and in part earned him the Nobel Prize in physics in 1922.
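If you would like to see the arithmetic behind these series, here is a sketch using the Bohr-model energy levels of hydrogen, E(n) = −13.6 eV / n², where 13.6 eV is hydrogen’s standard ionization energy:

```python
# Bohr-model photon wavelengths for hydrogen. A drop from level n_hi
# to n_lo releases E = 13.606 eV * (1/n_lo^2 - 1/n_hi^2), emitted as
# a photon of wavelength lambda = hc / E.
H, C, EV = 6.626e-34, 2.998e8, 1.602e-19
RYDBERG_EV = 13.606  # hydrogen ionization energy, eV

def line_nm(n_hi, n_lo):
    energy_ev = RYDBERG_EV * (1 / n_lo**2 - 1 / n_hi**2)
    return H * C / (energy_ev * EV) * 1e9

print("Balmer series (drops to n = 2, visible):")
for n in (3, 4, 5, 6):
    print(f"  {n} -> 2: {line_nm(n, 2):.0f} nm")
print(f"Lyman   2 -> 1: {line_nm(2, 1):.0f} nm (ultraviolet)")
print(f"Paschen 4 -> 3: {line_nm(4, 3):.0f} nm (infrared)")
# The Balmer loop reproduces the four visible lines quoted above:
# 656, 486, 434, and 410 nm.
```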

This is fundamentally the way science works. Inexplicable features of reality are solved, step by step, sweat drop by sweat drop, tear by tear, and blood drop by blood drop, by the application of known physical laws; or, when needed, new laws and new ideas are summoned forth to explain them. Corks are popped, the bubbly flows, and awards are apportioned among the minds that made the breakthroughs. But then, as always, when the party is over and the guests start working off their hangovers, we realize that although, yes, progress has been made, there is still more territory to cover. Ironically, sometimes the new territory is a direct consequence of the conquests themselves.

Bohr’s triumph over atomic structure is perhaps the best known entrée in this genre of the story of scientific progress. Two problems in particular, one empirical and one theoretical, arose from it, problems which sobered up the scientific community. The empirical problem was that Bohr’s atomic model, while it perfectly explained the behavior of atomic hydrogen, could not be successfully applied to any other atom or molecule, not even seemingly simple helium (which sits just after hydrogen in the periodic table) or molecular hydrogen (H2). The theoretical problem was that the quantization of orbits was done on a purely ad hoc basis, without any meaningful physical insight as to why it should be true.
And so the great minds returned to their offices and chalkboards, determined to answer these new questions.

Key Ideas in the Development of Quantum Mechanics

The key idea which came out of trying to solve these problems was that, if that which had been thought of as a wave – light – could also possess particle properties, then perhaps the reverse was also true: that which had been thought of as having a particle nature, such as the electron, could also have the characteristics of waves. Louis de Broglie introduced this concept, what came to be called wave-particle duality, in his 1924 model of the hydrogen atom, explaining Bohr’s discrete orbits by recasting them as the distances from the nucleus where standing electron waves could exist only in whole numbers of wavelengths, as the mathematical theory behind waves demanded:


De Broglie’s model was supported in the late 1920s by experiments which showed that electrons did indeed display wave features, at least under the right conditions. Yet, though a critical step forward in the formulation of the quantum mechanical description of atoms, de Broglie still fell short. For one thing, like Bohr, he could only predict the properties of the simplest atom, hydrogen. Second, and more importantly, he still gave no fundamental insight as to how or why particles could behave as waves and vice-versa. Although I have said that reality on such small scales should not be expected to behave in the same manner as on the scales we are used to, there still has to be some kind of underlying theory, an intellectual glue if you prefer, that allows us to make at least some sense of what is really going on. And scientists in the early 1920s still did not possess that glue.
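To get a feel for why wave behavior shows up for electrons but never for everyday objects, here is a sketch of de Broglie’s relation, λ = h/(m × v). The electron speed is roughly that of Bohr’s innermost orbit; the baseball is an arbitrary example of my own:

```python
# De Broglie wavelength, lambda = h / (m * v), for an electron
# versus an everyday object.
H = 6.626e-34           # Planck's constant, J*s

M_ELECTRON = 9.109e-31  # kg
V_ELECTRON = 2.2e6      # m/s, roughly the speed in Bohr's lowest orbit

M_BALL = 0.145          # kg, a baseball (illustrative assumption)
V_BALL = 40.0           # m/s

for name, m, v in (("electron", M_ELECTRON, V_ELECTRON),
                   ("baseball", M_BALL, V_BALL)):
    print(f"{name}: lambda = {H / (m * v):.3e} m")
# electron: ~3.3e-10 m, comparable to the size of an atom -- hence
# the standing waves in de Broglie's model; baseball: ~1e-34 m,
# hopelessly far below anything measurable.
```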

That glue was first provided by people like Werner Heisenberg and Max Born, who, only a few years after de Broglie’s publication, created a revolution built on one of scientific – no, philosophic – history’s most astonishing ideas. In 1925 Heisenberg, working with Born, introduced the technique of matrix mechanics, one of the modern ways of formulating quantum mechanical systems. Crucial to the technique was the concept that at the smallest levels of nature, such as with electrons in an atom, neither the positions nor the motions of particles could be defined exactly. Rather, these properties were “smeared out” in a way that left the particles with a defined uncertainty. This led, within two years, to Heisenberg’s famous Uncertainty Principle, which declared that certain pairs of properties of a particle in any system cannot be simultaneously known with perfect precision, but only within a region of uncertainty. One formulation of this principle is, as I have used before:

Δx × Δs ≥ h / (2π × m)

which states that the product of the uncertainty of a particle’s position (Δx) and the uncertainty of its speed (Δs) is always at least Planck’s constant (h) divided by 2π times the object’s mass (m). Now, there is something I must say upfront. It is critical to understand that this uncertainty is not due to deficiencies in our measuring instruments, but is built directly into nature, at a fundamental level. When I say fundamental I mean just that. One could say that, if God or Mother Nature really exists, even He Himself (or Herself, or Itself) does not and cannot know these properties with zero uncertainty. They simply do not have certain values to reveal to any observer, not even to a supernatural one, should such an observer exist.
Yes, this is what I am saying. Yes, nature is this strange.


The Uncertainty Principle and Schrödinger’s Breakthrough

Another, more precise way of putting this idea is that you can specify the exact position of an object at a certain time, but then you can say nothing about its speed (or direction of motion); or the reverse, that speed and direction can be perfectly specified but then the position is a complete unknown. A critical point here is the reason we do not notice this bizarre behavior in our ordinary lives – and so never suspected it until the 20th century – namely, that the minimum product of these two uncertainties is inversely proportional to the object’s mass (that is, proportional to 1/m) as well as directly proportional to the tiny size of Planck’s constant h. The result is that even small everyday objects, such as grains of sand, are simply much too massive for this infinitesimally small uncertainty product to be measurable by any known or even imaginable technique.
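Here is a quick numeric sketch of that mass dependence, using the formulation given above; the sand grain’s mass is a rough assumption of mine:

```python
# Minimum position-speed uncertainty product, h / (2*pi*m),
# for an electron versus a grain of sand.
import math

H = 6.626e-34            # Planck's constant, J*s
M_ELECTRON = 9.109e-31   # kg
M_SAND = 1e-8            # kg, a ~10 microgram grain (rough assumption)

for name, m in (("electron", M_ELECTRON), ("sand grain", M_SAND)):
    print(f"{name}: dx * ds >= {H / (2 * math.pi * m):.3e} m^2/s")
# electron:   ~1.2e-4  m^2/s -- enormous on the scale of an atom (~1e-10 m)
# sand grain: ~1.1e-26 m^2/s -- far too small for any instrument to notice
```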

Whew, I know. And just what does all this talk about uncertainty have to do with waves? Mainly it is that trigonometric wave functions, like sine and cosine, are closely related to probability functions, such as the well-known Gaussian, or bell-shaped, curve. Let’s start with the latter. This function starts off near (but never at) zero at very large negative x, rises to a maximum y = f(x) value at a certain point, say x = 0, and then, as though reflected through a mirror, trails off again toward zero at large positive x. A simple example should help make it clear. Take a large group of people. It could be the entire planet’s human population, though in practice that would make this exercise difficult. Record the heights of all these people, rounding the numbers off to a convenient unit, say, centimeters or cm. Now make sub-groups of these people, each sub-group consisting of all individuals of a certain height in cm. If you make a plot of the number of people within each sub-group, the y value, versus the height of that sub-group, the x value, you will get a graph looking rather (but not exactly) like this:


Here, the y or f(x) value is called dnorm(x). The value x = 0 represents the average height of the population, and each other x point (the points have been connected together in a continuous line) a greater or lesser height on either side of that average. You see the bell shape of this curve, hence its common name.
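If you want to build such a curve yourself, here is a minimal sketch using simulated rather than real heights; the mean and spread are illustrative assumptions of mine:

```python
# Building the height histogram described above, with simulated data
# (real human heights are roughly normally distributed).
import random
from collections import Counter

random.seed(1)
# assume a mean height of 170 cm with a 10 cm standard deviation
heights = [round(random.gauss(170, 10)) for _ in range(100_000)]

counts = Counter(heights)           # sub-group size for each height in cm
for cm in range(150, 191, 5):
    print(f"{cm} cm: {counts[cm]:5d} people")
# Counts peak near 170 cm and fall off symmetrically on either side:
# the bell shape of the Gaussian.
```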

What about those trigonometric functions? As another example, a sine function, which is the typical shape of a wave, looks like this:


The resemblance, I assume, is obvious; this function looks a lot like a bunch of bell-shaped curves (both upright and upside-down) strung together. In fact the relationship is so significant that a probability curve such as the Gaussian can be modeled using a series of sine (and cosine) curves, in what mathematicians call a Fourier transformation. So significant, indeed, that Erwin Schrödinger, following up de Broglie’s work, in 1926 produced what is now known as the Schrödinger wave equation – or equations, rather – which described the various properties of physical systems via one or more differential equations (if you know any calculus, these are equations which relate a function to one or more of its derivatives; if you don’t, don’t worry about it), whose solutions were a series of complex wave functions (a complex function or number is one that includes the imaginary number i, the square root of negative one), given the formal symbolic designation ψ. In addition to his work with Heisenberg, Max Born almost immediately followed Schrödinger’s discovery with the description of the so-called complex square of ψ, or ψ*ψ, as the probability distribution of the object – in this case, the electron in the atom.
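Here is a small sketch of that claim in action: rebuilding the bell curve out of nothing but cosine waves. It uses the standard Fourier integral for the Gaussian, approximated as a plain sum:

```python
# Rebuilding a Gaussian from cosine waves. The Fourier transform of
# the standard normal density is exp(-k^2 / 2), so
#   g(x) = (1/pi) * integral from 0 to infinity of exp(-k^2/2)*cos(k*x) dk,
# which we approximate with a midpoint Riemann sum of cosines.
import math

def gaussian_from_cosines(x, k_max=8.0, steps=200):
    dk = k_max / steps
    total = 0.0
    for j in range(steps):
        k = (j + 0.5) * dk
        total += math.exp(-k * k / 2) * math.cos(k * x) * dk
    return total / math.pi

for x in (0.0, 1.0, 2.0):
    exact = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    print(f"x = {x}: cosine sum {gaussian_from_cosines(x):.5f}, "
          f"exact Gaussian {exact:.5f}")
# The sum of cosines reproduces the bell curve to several decimal places.
```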

It is possible to set up Schrödinger’s equation for any physical system, including any atom. Alas, for all atoms except hydrogen, the equation is unsolvable in closed form, due to a stone wall in mathematical physics known as the three-body problem; any system with more than two interacting components, say the two electrons plus nucleus of helium, simply cannot be solved exactly by any closed formula. Fortunately, for hydrogen, where there is only a single proton and a single electron, the proper form of the equation can be devised and then solved, albeit with some horrendous looking mathematics, to yield a set of wave functions ψ. The complex squares of these functions – or of these solutions, I should say, as there are an infinite number of them – describe the probability distributions and other properties of the hydrogen atom’s electron.
The nut had at last been (almost) cracked.

Solving Other Atoms

So all of this brilliance and sweat and blood, from Planck to Born, came down to this bottom line: find the set of wave functions, or ψs, that solve the Schrödinger equation for hydrogen, and you have solved the riddle of how electrons behave in atoms.

Scientists, thanks to Robert Mulliken in 1932, even went so far as to propose a name for these squared functions, or probability distribution functions: the atomic orbital – a term I dislike because it still invokes the image of electrons orbiting the nucleus.

Despite what I just said, we actually haven’t completely solved the riddle. As I said, the Schrödinger equation cannot be exactly solved for any atom besides hydrogen. But nature can be kind sometimes as well as capricious, and thus allows us to find side door entrances into her secret realms. In the case of orbitals, it turns out that their basic pattern holds for almost all the atoms, with a little tweaking here, and some further (often computer intensive) calculations there. For our purposes here, it is the basic pattern that matters in cooking up atoms.

Orbitals. Despite the name, again, the electrons do not circle the nucleus (although most of them do have what is called angular momentum, which is the physicists’ fancy term for moving in a curved path). I’ve thought and thought about this, and decided that the only way to begin describing them is to present the general solution (a wave function, remember) to the Schrödinger equation for the hydrogen atom in all its brain-overloading detail:

ψ_nℓm(r, θ, φ) = √[ (2/(na₀))³ · (n−ℓ−1)! / (2n[(n+ℓ)!]) ] · e^(−r/(na₀)) · (2r/(na₀))^ℓ · L_(n−ℓ−1)^(2ℓ+1)(2r/(na₀)) · Y_ℓ^m(θ, φ)

(Here a₀ is the Bohr radius, L is an associated Laguerre polynomial, and Y is a spherical harmonic – the place where m appears.)
Don’t panic: we are not going to muddle through all the symbols and mathematics involved here. What I want you to do is focus on three especially interesting symbols in the equation: n, ℓ, and m. Each appears in the ψ function in one or more places (search carefully), and their numeric values determine the exact form of the ψ we are referring to. Excuse me – I mean the exact form of the ψ*ψ, or squared wave function, or orbital, that is.

The importance of n, ℓ, and m lies in the fact that they are not free to take on just any values, and that the values they can have are interrelated. Collectively, they are called quantum numbers, and since n is dubbed the principal quantum number, we will start with it. It is also the easiest to understand: its potential values are all the positive integers (whole numbers), from one on up. Historically, it roughly corresponds to the orbit numbers in Bohr’s 1913 orbiting model of the hydrogen atom. Note that one is its lowest possible value; it cannot be zero, meaning that the electron cannot collapse into the nucleus. Also sprach Zarathustra!

The next entry in the quantum number menagerie is ℓ, the angular momentum quantum number. As with n it is also restricted to integer values, but with the additional caveat that for every n it can only have values from zero to n − 1. So, for example, if n is one, then ℓ can only equal one value, that of zero, while if n is two, then ℓ can be either zero or one, and so on. Another way of thinking about ℓ is that it describes the kind of orbital we are dealing with: a value of zero refers to what is called an s orbital, while a value of one means a so-called p orbital.

What about m, the magnetic quantum number? This can range in value from −ℓ to ℓ, and it enumerates the orbitals of a given type, as designated by ℓ. Again, for an n of one, ℓ has just the one value of zero; furthermore, for ℓ equals zero m can only be zero (so there is only one s orbital), while for ℓ equals one m can be one of three integers: minus one, zero, and one. Seems complicated? Play around with this system for a while and you will get the hang of it. See? College chemistry isn’t so bad after all.

* * *

Let’s summarize before moving on. I have mentioned two kinds of orbitals, or electron probability distribution functions, so far: s and p. When ℓ equals zero we are dealing with an s orbital, while for ℓ equals one the orbital is type p. Furthermore, when ℓ equals one m can be either minus one, zero, or one, meaning that at each level (as determined by n) where p orbitals exist there are always three of them, while there is only ever one s orbital.

What about when n equals three? Following our scheme, for this value of n there are three orbital types, as ℓ can go from zero to one to two. The orbital designation when ℓ equals two is d; and as m can now vary from minus two to plus two (−2, −1, 0, 1, 2), there are five of these d type orbitals. I could press onward to ever increasing ns and their orbital types (f, g, etc.), but once again nature is cooperative, and for all known elements we rarely get past f orbitals, at least at the ground energy level (even though n reaches seven in the most massive atoms, as we shall see).
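All of these counting rules are easy to mechanize. Here is a minimal sketch that enumerates the allowed combinations of the three quantum numbers (the s, p, d, f letter assignments are the standard ones):

```python
# Enumerating the allowed (n, l, m) quantum number combinations and
# counting orbitals per shell, following the rules in the text.
LETTERS = "spdfg"

for n in range(1, 5):
    shell_total = 0
    for l in range(n):                  # l runs from 0 up to n - 1
        m_values = range(-l, l + 1)     # m runs from -l to +l
        count = len(m_values)           # 2l + 1 orbitals of this type
        shell_total += count
        print(f"n={n}, l={l}: {count} {LETTERS[l]} orbital(s)")
    print(f"  -> shell n={n} holds {shell_total} orbitals in all")
# n=1: one s orbital; n=2: one s + three p; n=3 adds five d; and so on.
```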

Tuesday, January 15, 2013

Dinosaurs Are Not Ancient!

Dinosaurs are often regarded as ancient, even early Earth life forms, but the following graph should dispel this notion once and for all:
 
 
You can see that the dinosaurs (excluding birds) only appeared about 230 million years ago – when the Earth had already reached ninety-five percent of its present age – and lasted only about three percent of that age.  Humans appeared a mere 65 million years after they vanished.  Even the first animal life appeared only 700-800 million years ago, in roughly the last eighteen percent of Earth's history.  The first life of any kind starts around 3,500-4,000 million years ago, when our planet was already close to a billion years old (a stretch longer than the entire history of animal life!).  This often comes as a surprise even to people who are scientifically educated.
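For those who like to check the arithmetic, here it is, assuming the standard ~4,550-million-year age for the Earth; the event dates are the rough figures quoted above:

```python
# Quick arithmetic behind the percentages in the text.
EARTH_AGE = 4550.0  # millions of years (standard estimate)

events = [
    ("first life",               3750),  # midpoint of 3,500-4,000 Mya
    ("first animals",             750),  # midpoint of 700-800 Mya
    ("dinosaurs appear",          230),
    ("non-avian dinosaurs gone",   65),
]
for name, mya in events:
    print(f"{name:26s}: {mya:5.0f} Mya = {mya / EARTH_AGE:.1%} of Earth's age ago")

span = (230 - 65) / EARTH_AGE
print(f"the dinosaur era lasted {span:.1%} of Earth's age")
```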
 
One consequence of this is that dinosaurs were probably as fully modern as today's mammals and birds; they were not "primitive" at all, and weren't driven into extinction by the latter.  A large asteroid impact is now the main theory behind the disappearance of the non-avian dinosaurs, perhaps combined with the massive volcanic eruptions in India.  Still, it is puzzling why at least some of the smaller, bird-like dinosaurs didn't sneak through; perhaps a few did (just as a few mammals and birds did) but the quicker evolution of the latter drove them into fossil grounds we simply haven't discovered yet.
 
More on non-avian dinosaurs (most had feathers, like the "Velociraptor" of Jurassic Park – which wasn't actually one) later.



Monday, January 14, 2013

Why do Women ... Well, you'll see.

It's often been asked, evolutionarily speaking (it's a harder question for a creationist, though): why do male mammals, including ourselves of course, have nipples?  No functional purpose can be assigned to them, so you would think natural selection would "prefer" males who don't have them.  I think there's an even better question, however: why do female mammals have a clitoris?

The answer is clearly not for pleasure.  First, nature doesn't give a hoot about pleasure in making its choices.  Second, outside of humans and some other mammals, females don't appear to enjoy sex at all (just watch two cats at it; she's clearly in pain and drives him off as soon as they have finished intercourse).  She's driven by hormonal changes to attract males to mate with her.  So why do they have an organ of pleasure if they don't use or need it?

I think the late, famous paleontologist Stephen Jay Gould would have had the answer to this.  It isn't known exactly when (pre?)mammalian genitalia evolved (maybe as much as 200-250 million years ago, or much more recently), but they probably evolved from those of reptilian ancestors who used a "primitive" cloaca system (everything comes out one tube) -- a system still basically in use by most modern reptiles and birds.  One of Gould's basic themes throughout his life was that natural selection is not all-powerful; it can't sculpt living things to exact specifications or proportions.  In a sense this is obvious (despite Richard Dawkins's ill-deserved reputation for "claiming" otherwise); natural selection can only work on gene selection, and most genes don't do just one thing -- we have about 20-25,000 genes, and these interact in the body of the embryo/fetus/child and with the physical and chemical environment of the womb to form all the millions of individual features we possess.

Here's the trick with us mammals.  As very young embryos we are all female: the basic genitalia and internal sex organs all develop within the abdomen.  No doubt the nerves that lead to sexual arousal and pleasure largely develop then too, and have done so since the beginning.  Why?  We'll probably never know, because soft tissues rarely fossilize -- we can make intelligent guesses about it (the same goes for why it isn't so in reptiles and birds), but that's about all.

There is a gene, dubbed SRY, on the Y chromosome (which, recall, only males have; females are XX) which, at several weeks into gestation, becomes active and causes the release of testosterone in the male body.  This causes a number of physical and chemical changes.  One of these changes is the descent of the proto-penis and testicles downward, into their position when he is born.  Prior to this, however, the progenitor of the penis exists in both sexes; in females it descends too.  In other words, the penis and the clitoris start off as the same structure, already equipped with sexual nerves.  There probably aren't any available mutations that would eliminate the clitoris or its sexual nerves (genes do many things, remember), so natural selection cannot achieve this.  The net result is that at least some female mammals (and potentially all of them) get to enjoy this accidental benefit.  But accident is just what it is.

Friday, December 21, 2012

Chapter One of The Third Row

A Panoply of Elements

On the Nature of Substances

Look around you. I do not know about your environment, but I can describe mine in considerable detail; actually, in more detail than you would probably be willing to slog through even if I were to write it all down. A computer, a lamp, a desk, television … the objects in my environment are (at the moment) pretty mundane, or so they seem at first sight. I’ll bet that your environment is much the same way. Instead of focusing on the objects in our environments, however, consider instead the substances they are composed of. These substances too are quite likely fairly common, and chances are there is a great overlap between your environment and mine. Wood, glass, living flesh, plastic, metal, paint, cardboard … or, if you are outside, plant and animal life, clouds, sunlight (or starlight or moonlight), dirt, rock, air, water … the list would appear to go on and on, with no end in sight.

 Or would it? This is an interesting point to ponder. There could be an infinite number of substances that things are composed of; or there could be a limited number, perhaps even a rather small number, of basic substances that combine in innumerable, different ways to make up the objects in our lives and in our universe.

The latter option – a limited number of basic substances of which everything is composed – seems preferable, if only because it makes figuring out the world around us a much simpler task. And indeed, there appears to be good evidence that this is so. Take the substance water, for example: we find it in all kinds of things, from milk to soda pop to our own bodies, to the great oceans of our home planet, and even elsewhere in the universe. Water, it seems, is found in a great variety of things. Perhaps this means that water is one of these basic, or fundamental, substances that we are trying to classify.

 Taking stock of things, we notice that water is not the only possibly basic substance. What about air? Although it is invisible, we are constantly aware of the existence of air merely by the act of breathing it in and out, or feeling a breeze on our face, or by watching it make the leaves of a tree rustle and sway as we walk through a park on a spring day. Noticing all this, we might want to classify air as one of our fundamental substances too, just like water.

 What about the earth beneath our feet? If we dig our fingers into the soil and pull some of it up, we see that earth is a substance as well. A fundamental substance? Well, we do find that it is almost always there, wherever we go, although it is not always of the same quality. Sometimes our digging will pull up sand, or rock, or clay, materials of different color and hardness and other attributes. Yet all of these things may simply be variations on the main theme, that of earth. So we will, at least for the time being, call earth a fundamental substance, adding it to water and air.

Clearly, there are many directions we can take in all this classifying of substances. What about fire? This is a very interesting substance, one reason being that it can turn one substance into another. For example, it can boil the substance water, converting it into the substance air, or what we call steam. It can also, when quite hot, be used to extract metals like copper and tin and iron from certain rocks, which is how we obtain most of these materials. Quite an amazing substance, isn’t it, this fire? Perhaps we should list it among our fundamental substances too.

 Let us stop here and recapitulate our findings. We have selected four substances, water, air, earth, and fire, and labeled them as fundamental substances. Before we proceed, I would like to introduce another term for fundamental substances. The term I am proposing is elements. An element is a fundamental substance in the sense that it cannot be broken down into, or reduced to, other elements. Each element stands on its own, composed of nothing but itself. If this is true, then all substances and objects that we perceive in the world are a combination, in one form or another, of these four elements we have identified.

 Using this kind of analysis, we seem to have made some progress in understanding the world around us. We have reduced all things into a combination of four elements. If indeed, this is how the world works, we are very fortunate to have stumbled upon its basic constitution. Using the right combination of the four elements, perhaps tempered in the right way by the element fire, we should be able to create any object or substance we desire, from gold and diamonds, to modern computers and all the other electronics which have made the Information Age possible. Amazing!

The question is, are our elements truly elements by our definition – fundamental substances which themselves cannot be broken down or reduced to any other elements? If not, then our quest is not finished. Furthermore, how can we make the determination whether they are or aren’t, and if not, what are these elements that we seek?

For an example, let us take our earth element, weigh it carefully in some kind of container such as a flask, and mix it with our water element, also carefully weighed in another flask, and stir the resulting mixture very thoroughly so that the two are as completely blended together as possible. We then take this mixture, which we all recognize from our childhoods as plain old mud, and pass it through a filter, collecting the resulting filtrate – the liquid that passes through the filter – in yet a third flask; preferably we use a scientific filter designed for such purposes, but a simple coffee filter should be quite effective as well. If the filter is good enough, meaning the holes are small enough to pass only the water plus anything dissolved (this is a suggestive concept in and of itself) in the water, and not the entire mixture, something very interesting will happen. We will notice that the residue that remains behind in the filter after all the water has passed through it probably looks essentially the same as the earth we originally placed in its flask, except that this residue is wet, or muddy, looking; while the watery filtrate at the bottom of the flask we collected it in still resembles ordinary water, though it too may be somewhat colored, probably a color much like our muddied earth.

Now here is the interesting part. If we take the water filtrate out of its flask and set it in the sun, or heat it over a kitchen stove – it’s amazing how much science you can do in a kitchen – then, unlike with plain ordinary water, once all the water has evaporated there will be a dry sediment left behind. Or at least I will bet there will be. This sediment might be white, or one of a number of different colors, or even the same brown or other hue as the earth it was extracted from.

Wait a minute, you say. Extracted? What exactly does that mean? How do I know that? Thinking about this, it would seem that, at the very least, we have separated the earth into at least two simpler substances: the part that passed through the filter with the water, and whatever remained in the filter. But how can that be if earth itself truly is an element? By our definition of the word element, it can’t be.

There is something else highly suggestive about this experiment, which is the concept of filtration itself. The whole idea of a filter is that it presents a solid barrier with very small holes, or perhaps passages is the better term, in it, which allow particles smaller than the passage to go through, while blocking all larger particles. What’s suggestive is that the earth + water mixture, or mud, is composed of small particles of varying size, such that they can be separated by filtration. This whole idea, the particulate concept of matter, is of course not at all surprising to us, because this is the twenty-first century and we all know about atoms; but what I want to emphasize is that the idea of atoms is not as obvious as it might seem. It only seems obvious to us because we have gone to school, where we were taught the atomic theory of matter; but if we hadn’t been so taught – or indoctrinated, as is perhaps the better term – then like most people throughout history and even today we wouldn’t know about atoms at all and probably wouldn’t stumble upon this explanation of filtration and how it works. I won’t say more about this idea here, the particulate nature of matter, because I am going to return to it in force fairly soon; but I hope you can see how it relates to the idea of elements and how they answer the riddle of matter. Our experiment with filtering earth + water mixtures gives us a small window of insight into this powerful idea.

There is a great deal more to our filtration experiment and how it might be interpreted. For example, in addition to the process of separation, maybe what I am looking at is the result of a reaction between the two original elements, earth and water, when I mixed them together. The filtrate, as well as the residue in the filter, may very well be the result of such a reaction. How can I distinguish among the various possibilities?

One way of going about this would be to weigh the original earth, the dried residue in the filter, and the dried filtrate in its flask, and add the various weights together. When we do this, and assuming that we have been very accurate and precise in our weighings, we discover that, as if magic ran the universe instead of blind physical laws – no, actually the reverse – the combined weights exactly equal the weight of the original earth we started out with. This is very revealing, for if there had been a reaction with the water, that would have increased the weight by the amount of water consumed, or perhaps decreased it by some fraction. But this has not happened. To clinch the issue, if instead of allowing it to escape we have been diligently collecting all the evaporated water during our experiment, and weigh it together with the remaining liquid, again we are chagrined – well, perhaps not too chagrined by now – to find that it too matches the weight of the original water.

 Even so, having done all these additional measurements, we can be certain that we have taken one of our original elements, earth, and broken it down into at least two new substances, one that passed through the filter and one that did not. That being so, we can hardly call earth an element any longer! And yet, contemplate this fact: should this really surprise us? After all, we never did have good reason for saying earth was an element in the first place. We just assumed it because earth is so ubiquitous that it seemed reasonable to call it an element; we followed common sense and our intuitions instead of investigating nature closely and clearly and methodically, as science teaches us we must do. So perhaps, in retrospect, we shouldn’t be surprised at all.

 The next question is, how about the water? Unfortunately, this turns out to be a little trickier. It was certainly not separated into different substances by the filter, so at first sight we might be justified in calling it one of the elements we are searching for. In fact, water passes a lot of tests to determine elementhood, and so it is easy to conclude that it is an element. But there is a very well-known experiment that will show otherwise: the electrolysis of water. This experiment is not as easy to set up as the filtration experiment, and we require some special materials and equipment. But it is still not that complicated. What is needed is two glass test tubes, connected near their tops – their open ends – by a glass tube or corridor. At the very top of each tube is a watertight cork or rubber stopper, through which has been inserted an electrode of platinum or some other suitably chemically inert, electrically conducting metal (even graphite, a form of carbon that conducts electricity, can be used). The connected tubes are filled with water – this of course is done before the electrode-bearing stoppers have been inserted. It is critical, for reasons I won’t go into right now but which are also suggestive, that the water be slightly salty, or have some other substance dissolved in it that helps its electrical conductivity. The entire apparatus is then turned upside down. The wires now coming down from the electrodes / bottoms of the stoppers are then connected to a source of direct current electricity, usually a battery or a set of batteries wired in series, one that can provide sufficient current and voltage. The final apparatus looks like this:

 [Drawing: the electrolysis apparatus – two inverted, water-filled test tubes with electrodes in their stoppers and collection balloons over the holes at their tops.]

 Here the “tops” of the test tubes (remember, these are actually now the bottoms) also have tubular holes in them, and collection balloons have been placed, tightly, around the holes. When the wires from the electrodes are connected to the anode and cathode of the battery, something very interesting starts to happen at the surfaces of the electrodes, also shown in the drawing. Bubbles of gas start to form around them, bubbles which, when they have grown large enough to break their adherence to the electrode, rise up and collect in the balloons. This process continues as long as the water level is high enough to reach the electrodes and the connecting tube; once it falls below them, the gases stop forming.

 At the end of the experiment, we weigh the collected gases (again, not an easy thing to do), and the remaining water, and again we find that the summed weights equal the original weight of water placed in the apparatus. This is because we have taken our “element” water and broken it down into two new substances, which I will now admit are the gases hydrogen and oxygen. Oh, and incidentally, if you mix the hydrogen and oxygen and burn them together, the product is … one guess … that’s right, water. Voila! Water is no more an element than earth.

 Air suffers the same fate. If we take a weighed volume of air and burn a weighed quantity of something – anything flammable, say paper – in it, we find that after the burning the air has gained some weight while the burned material has not only changed appearance but is also now lighter, by exactly the same weight; that is, the air plus paper is the same weight after the burning as before. Something in the paper has been transferred to the air somehow.

 Not only that: if you liquefy air – again not an easy process – you find you can distill from it (that is, boil out fractions at different temperatures) a number of separate liquefied gases, mostly nitrogen and oxygen, with a little argon and other gases.

 So air is not an element either; nor are water and earth. As for fire, how can something that appears and then vanishes, into thin air one might say, be an element? It isn’t even clear that fire can be called a substance; or if so, it is certainly a very mysterious one.

 After all this discussion, we seem to have come full circle with the most fundamental question: just what, precisely, is an element, and how do we determine it?

 One part of our definition is that it is a substance that cannot be broken down into other substances by ordinary physical or chemical means. If you take a chunk of gold, for example, no matter how you heat it, combine it with other materials, chop it up, or otherwise afflict it, you cannot reduce it to anything simpler. You can make more complicated substances from it, like the various alloys and compounds of gold, but not something simpler. All of this is not obvious, of course. It requires a great deal of careful experimentation to show that it is true. But chemists have been working with gold long enough that they can call it an element with great confidence. Of course, that’s the way science usually works; a lot of time and people and material, and many, many experiments done over many years, just to come to a firm conclusion. And even then we are not absolutely certain beyond any doubt, just adequately sure beyond any reasonable ones.

 I said that an element cannot be reduced to simpler substances by ordinary physical or chemical means. By that I meant we could heat it, freeze it, mix it with other substances (and then heat or freeze it) – all the things chemists do in their laboratories – and though this might yield materials with interesting properties, the gold or other elements it contains can still be extracted; the processes we put it through have not transformed it. Using other physical and/or chemical means we can restore the same gold, in its original condition.

 This leads me to another interesting subject, not just about gold but any physical substance: we can take a piece of it, divide that into two pieces, divide each of those pieces so that we have four, and divide again and again and again in this manner, each division yielding progressively smaller pieces of the substance. Actually, we are aware of course that we can only take this process so far; eventually we will reach a point where we cannot find a knife, or whatever we’re cutting the substance with, small enough to continue. But assuming you could, just how far can we go with this division and sub-division process? Could we go on forever? What exactly would happen?

 It is possible with modern scientific instruments to divide a piece of gold into many very tiny pieces. And lo and behold, each piece is still gold. But “very tiny” is a relative term. At some point, if we are somehow able to sub-divide it enough times, we may yet find gold to be composed of simpler things. Fortunately, there are ways of probing well beyond our method of divisions. But a brief discussion on the subject of radioactivity is necessary first.


Radioactivity and the Discovery of Sub-Atomic Particles

 By the end of the nineteenth century / beginning of the twentieth, a number of scientists had discovered an interesting property of certain kinds of substances. They appeared to be unstable at some very fundamental level, decomposing into other substances and emitting a variety of “rays” or radioactive emissions while doing so. The Curies, Marie and Pierre, are the most historically famous contributors in this field of work, although others were involved as well. Altogether, three main kinds of rays were initially discovered and labeled, using the first three letters of the Greek alphabet: alpha rays, beta rays, and gamma rays. Other kinds of rays were to be discovered later, but this is where the story begins. It was also not until later that the nature of these rays was determined: it turned out that alpha rays were actually particles, now known to be composed of two protons and two other particles, the latter of which today we call neutrons; beta rays were also particles, in fact what we now know as electrons, albeit moving at high velocities from the radioactive atomic nuclei emitting them; and gamma rays were electromagnetic radiation, like light but of very high energy, even higher than X-rays, which can easily penetrate flesh and show the bones in our bodies.

 One of the interesting things about these rays, or particles, or both, is their penetrating power. Alpha rays, although the most massive of the three, have the least penetrating ability; a simple sheet of paper can stop (most of) them cold in their tracks. Beta rays / particles are more penetrating and can get past your skin and well into the underlying flesh. Gamma rays are, as just noted, the most penetrating of all, even more so than X-rays. Alas, the penetrating powers of these radioactive emissions, and what they do to living tissues, make them extremely hazardous to living organisms such as ourselves, a fact which tragically was not really recognized for several decades after their discoveries, resulting in many unnecessary terrible diseases and deaths due to the handling of radioactive substances (Marie Curie, for one example, died of aplastic anemia brought on by her long exposure).

 Back to the beginning of the twentieth century. In 1911 the physicist Ernest Rutherford and his scientific team performed a remarkable experiment, firing alpha particles at a sheet of gold beaten very thin. The sheet was so thin that they expected the alpha particles to pass through it with very few if any deflections, in much the same manner as a hard-thrown baseball will go through tissue paper with virtually no resistance. I say “expected” with some reservation; if they were certain that this would happen they of course would never have bothered to do the experiment. In science you always start with some doubt or incomplete knowledge, and hope to be surprised, at least once in a while.

 Rutherford and his team were very much surprised by the results of their experiment. To their utter incredulity, although most of the particles did, as predicted, pass through the gold sheet without hindrance, a very small number of them were instead deflected; and not just deflected but by very large angles at that. It was as though, to paraphrase Rutherford’s description of the phenomenon at the time, a cannon ball had bounced off something in the gold sheet, to come straight back at the experimenter and strike him on the nose!

 Such behavior was very hard to explain unless one assumed that almost all of the mass of the gold was concentrated in a very large number of very tiny regions, regions spread throughout the sheet like raisins in a pudding. But if this were true then gold clearly is not infinitely divisible into ever smaller and smaller pieces. There is a smallest piece, which may or may not be subdivisible into other things.

 My guess is that none of this really surprises you, because you live in the year 2010 and almost everyone has heard about atoms by now. What those few alpha particles were bouncing off of were the tiny but quite massive nuclei of the gold atoms, while the rest of them blasted through the extremely light electrons circling, or doing whatever electrons do, around the nuclei. In fact, Rutherford’s experiment is usually considered the proof of the basic structure of atoms. At the time, however, it was groundbreaking work, because only recently had the truth of the existence of atoms been established beyond a reasonable doubt by men like Einstein and J. J. Thomson (who discovered the electron), although John Dalton, a century earlier, is usually given credit for the modern version of the atom.

 All this returns us to the question of whether gold should be regarded as an element, in the modern chemical sense. As the protons and neutrons of the gold nucleus cannot be subdivided by any ordinary physical or chemical means, and gold is composed solely of gold atoms, the answer is a clear yes; gold is an element. But what I want to emphasize is that this answer is not at all obvious; it took many people many years and enormous amounts of work to establish what seems to us today so straightforward and elementary-school a fact that we take it for granted.

 At this point there are many more, although not necessarily easy, experiments we can do on, say, the hydrogen and oxygen generated by breaking down water, which show these two gases to be elements as well. We could also work on our various pieces of earth and show that they too are composed of simpler elements, such as silicon, aluminum, oxygen, iron, magnesium, and others. The nitrogen and oxygen and argon in air are also elements (though other minor gases in it, such as water vapor and carbon dioxide, or CO2, are not). As for fire, it too is a mixture of elements, or compounds of elements, all undergoing a number of chemical reactions with each other and with oxygen in the air at high temperature, reactions which give fire its various colors.


The Modern Conception of Elements

 I hope you are asking the next logical question in this lecture. Gold is an element; nitrogen and oxygen and hydrogen are elements; silicon and aluminum are elements; and so on. In fact, there are some ninety elements in nature, and about two dozen manmade ones as of this writing. The question is, what makes them all different from each other? And, more to the point of this book, are their differences and similarities organized in any way?

 To answer these questions, we must look at the nuclei of the atoms which compose each element, remembering that in doing so we are jumping over the enormous amount of scientific work that had to be done to establish, not only the very existence of atoms, but also the fact that they have nuclei. In doing so, we find that each element is characterized, no, defined, by the specific number of protons – relatively massive, positively charged particles – in the nucleus. Hydrogen has one proton, helium two, oxygen eight, iron twenty-six, and so on. This is called the element’s atomic number. To maintain electrical neutrality, an equal number of electrons surround the nucleus: one electron for hydrogen, twenty-six for iron, ninety-two for uranium, the most massive naturally occurring element, and so on. As mentioned, a second kind of particle also resides in the nucleus, approximately the same mass as the proton but electrically neutral: the neutron, discovered by James Chadwick in 1932 and earning him the 1935 Nobel prize in physics. The total number of protons and neutrons in the nucleus is called the mass number; the atomic mass of the element, to which we shall return shortly, closely tracks it.
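
 To make these definitions concrete, here is a tiny sketch in Python – the atomic numbers are from the text, the isotope merely an illustration:

    # Atomic number Z (protons) defines the element; the mass number
    # A = Z + N (protons plus neutrons) identifies a particular isotope.
    atomic_number = {"hydrogen": 1, "helium": 2, "oxygen": 8,
                     "iron": 26, "uranium": 92}

    def mass_number(protons: int, neutrons: int) -> int:
        """Total count of nucleons (protons + neutrons) in the nucleus."""
        return protons + neutrons

    # Iron-56, the most common isotope of iron: Z = 26, N = 30.
    print(mass_number(atomic_number["iron"], 30))   # -> 56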

 I mentioned John Dalton a few moments ago as the author of the modern concept of the atom in the early 1800s. Yet what we see is that, despite Dalton’s elegant reasoning for his atomic theory, it took an entire century for scientists and philosophers to fully accept atoms as real things, not merely some bookkeeper’s way of keeping track of quantities in chemical reactions. Part of the reason for this lack of acceptance is that the scientific instrumentation capable of probing matter at the atomic level didn’t exist in Dalton’s time. Another part is that the concept of atoms didn’t fit neatly with either the edifice of Newtonian physics or the laws of thermodynamics as they unfolded in the 1700s and 1800s.

 Yet if atoms and the atomic theory of elements (an element is characterized by one and only one type of atom) had to wait until the twentieth century to be fully accepted, the modern concept of the chemical element was the offspring of work done in the late 1700s / early 1800s, by men like Lavoisier and Laplace and Scheele and Priestley, among others. Hundreds of years of (failed) experiments in alchemy plus the Enlightenment and Scientific Revolution had driven home the idea that there were certain substances which simply could not be broken down into simpler ones, or turned into other ones, by any known chemical or physical processes. Thus, the main dream of alchemy – to turn “base” metals into gold – was finally seen as a delusion, even if the greatest minds of the day still did not know why. Yet some substances that had been thought of as elements, water being the prime example used in this chapter, were shown to be chemically reducible to simpler substances, in this case hydrogen and oxygen, which in turn proved to be elemental in character. The element fire, as already mentioned, which had been thought of as the release of a mysterious substance called phlogiston, was shown in fact to be the chemical breakdown or reassembly of a variety of substances, followed by their reaction with atmospheric oxygen in the vapor phase. And so on, with most of the original substances believed to be elements.

 Throughout the nineteenth century, as scientific instruments and theory became better and better honed, many new elements came to be added to the list, while some substances, like carbon and sulfur and iron and copper, which had been known since antiquity, also found their way in. The net result of all this innovation and exploration was that by the latter half of the nineteenth century a veritable zoo of elements had been identified and characterized. So large was this zoo, in fact, that scientists began to wonder if there were an underlying order to them, some schema which naturally organized them according to their properties, both chemical and physical.


The Periodic Table
  Enter the brilliant Russian chemist Dmitri Ivanovich Mendeleev. Although others before him had noticed periodic trends in the elements, and even attempted to create tables of them, in which each column represented a series of similar elements, it wasn’t until 1869 that Mendeleev, via his own independent work, presented a table both complete and sophisticated enough that it was accepted by the scientific community. What was probably the most powerful feature of Mendeleev’s table, and what set it apart from others, was that it provided a means of testing it. It did this by predicting the existence of new, hitherto undiscovered elements to fill gaps in it. Specifically, he predicted the existence of what he called eka-aluminium and eka-silicon, amongst several others, and the properties these elements would have. When the elements gallium (Ga) and germanium (Ge) were found in 1875 and 1886, with properties that almost perfectly matched those predicted for eka-aluminium and eka-silicon, Mendeleev’s periodic table and his fame were secured. There were still more gaps to be filled, but over the next half century or so scientists teased the missing elements out from minerals in Earth’s crust (or, in the case of helium, discovered it via spectroscopic lines in the sun’s atmosphere), to the point where today the aptly named periodic table of elements is complete:

[The modern periodic table of the elements.]

As noted, there are ninety naturally occurring elements, the rest having been man-made through nuclear transmutation of existing elements. Some terminology is in order. The table is called periodic because each row is a period, one that begins at an “alkali” metal (Li, Na, K, etc.) and ends at a “noble” gas (He, Ne, Ar, Kr, etc.). Incidentally, hydrogen (H), while sitting atop the alkali metals, doesn’t fit neatly anywhere, for reasons we shall come to. Complementary to this designation, each column is dubbed a group. In modern terminology there are eighteen groups, numbered in order from left to right; thus, as the table is conventionally drawn, the greatest length a period can have is eighteen members.

 So: we have made a little headway into understanding the elements, and their relationships to each other. Just a little, however; I still need to explain what these groups and periods actually mean, in both the physical and chemical senses. What exactly was Mendeleev’s brilliance, that has made him one of the most important scientists in history?

 Go back and study the modern periodic table as just presented. In particular, single out groups 1 and 2 (known as the alkali metals and alkaline earths) as well as groups 17 and 18 (the halogens and the noble gases). Remember to exclude hydrogen, as it doesn’t neatly fit into any group. If you specifically examine group 1, the alkali metals, the similarity in their properties as you go up and down the group is remarkable: not only are they all highly metallic, they are also soft and malleable (becoming more so as you go down the group), react strongly with oxygen (O2) and water (H2O) to form highly basic oxides and hydroxides in which the ratio of metal to oxide (O2-) and hydroxide (OH-) is exactly the same, react with other elements and compounds in very similar ways as well, and so on. The same can be said for the other groups I mentioned, the alkaline earths, the halogens, and the noble gases; as you go up and down the group/column, the physical and chemical properties bear a strong resemblance.

 These resemblances are the rational basis – no, the heart and soul – of the periodic table’s structure. Of equal if not greater importance is the way that the groups repeat themselves to form the rows, or periods; notice that although the groups are numbered 1 to 18, only the fourth through sixth periods actually have eighteen members as drawn (period seven would, and will, have them once we synthesize all of its elements; they are too radioactive to exist in nature). Period one has only two members, hydrogen and helium, while periods two and three have eight. If you look at periods six and seven, you will notice a break after the first two groups, filled in by the detached “sub”-periods beneath them known as the lanthanides and actinides, each of which has fourteen members. Believe it or not, if the number of elements were extended far enough by artificial transmutation – such elements don’t exist in nature – the number and types of these sub-periods would continue to grow (as would their lengths – the next one would hold eighteen members). Indeed, theoretically there is no end to the table and how far it can be built; it goes on indefinitely. We should thank nature that there are only ninety naturally occurring and (as of this writing) around twenty man-made elements!

 It should go without saying that there is a good reason, founded in chemistry and physics, why the periodic table is built up this way, that it is not merely the way it is in order to baffle and befuddle poor students of chemistry. There is, and we shall get to it, but first we should note some other interesting aspects about the table. The one that should be staring you in the face is that, to demonstrate the family resemblances in groups/columns, I specifically singled out only the two left-most and two right-most ones. You might wonder why I was so persnickety about my choices, and you would be right to do so.

 The reason is that only in groups 1, 2, 17, and 18 do the resemblances of group members remain strong as you go all the way up and down the group. For the middle groups, 3 through 16, the top two series (He and Li through Ne) show distinct differences from their heavier brethren beneath. Specifically, boron, carbon, nitrogen, and oxygen, or B, C, N, and O, appear quite set apart in their properties from the elements beneath them, Al, Si, P, and S, or aluminum, silicon, phosphorus, and sulfur. As one example, carbon dioxide (CO2), which makes soda water fizzy and is a waste material we dispose of every time we exhale (as well as the main culprit behind global warming), is a colorless, essentially odorless gas at ordinary temperatures and pressures, while silicon dioxide (SiO2) is a hard, crystalline, more or less transparent solid under the same conditions. Likewise, water (H2O) is an almost colorless (it is actually slightly blue, as the color of the oceans attests), odorless, and tasteless liquid with a number of important and remarkable properties – life on this planet would not exist without copious amounts of it, in both liquid and gaseous form – while its sulfur analog, hydrogen sulfide (H2S), is a foul-smelling, highly toxic gas, as are H2Se and H2Te.

 Why the first two periods should display such differences from the periods beneath them is another topic we shall come to soon enough. First, however, let’s return to atoms.


The Idea of the Atom

 When Mendeleev created his first periodic table in 1869, atoms were not widely believed to exist, at least not as real physical entities, that is. Moreover, scientists had yet to discover the components of atoms with which we are so familiar today: protons, neutrons, and electrons. Given this ignorance, on what feature, or features, of the elements did Mendeleev and others base their tables?

 If I were to answer that the feature was their atomic masses, your first response should be to object that that number is also derived from an atomic view of nature: it is, just as I said earlier, simply the combined mass of the protons and neutrons and binding energy (this is what holds them together) that characterize each element, averaged out over the percentages of each of the element’s isotopes (different isotopes of an element have different numbers of neutrons in their nuclei).

 However, even though I told you this, it is not exactly true. There is another definition of atomic mass, one that doesn’t require any mention of sub-atomic particles. This definition is that it is the mass, in grams, of an Avogadro’s number of atoms – or elemental particles, if we do not know about atoms – of the element in question. Avogadro’s number is slap-in-the-face enormous, being approximately 6.022 × 10²³, although nobody knows its exact value. Note that it is just a number, or constant; one can have an Avogadro’s number of anything, from atoms to sand grains to basketballs to Ford model T’s to galaxies – anything you like. To give you a rough idea of just how large a number it is, if we are talking about sand grains, then by my estimate it is on the order of a hundred billion to a trillion beaches’ worth of sand or so – far, far more than all the grains on all the beaches and deserts on our Earth. Yet, large as it is, it is a very convenient number for dealing with things as small as atoms; an Avogadro’s number of atoms of any element is a quite manageable quantity of it, weighing from grams to hundreds of grams, depending on the element we are dealing with.
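
 My beach arithmetic is easy to reproduce; the grains-per-beach figure below is, I stress, a made-up round number, there only to show the method:

    # How many beaches' worth of sand grains make up one mole of grains?
    AVOGADRO = 6.022e23
    grains_per_beach = 1e12      # assumed: very roughly a trillion grains

    beaches = AVOGADRO / grains_per_beach
    print(f"about {beaches:.1e} beaches")   # -> about 6.0e+11 beaches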

 One problem, however, with talking about an Avogadro’s number of something is that it is a long, fumbling mouthful of syllables which would leave us needing a glass of water every time we invoked it. Fortunately, chemists have come up with a shorthand way of saying it: the word mole. A mole of something is simply an Avogadro’s number of that something, and again we can talk about a mole of atoms or sand grains or anything else. Whatever it is, I’m sure you’ll agree it is a lot easier on the tongue. More to the point, using this much easier word, the definition of the atomic mass of an element is simply the mass, in grams, of a mole of it. In the case of the element carbon this is 12.011 grams/mole; for the element gold it is 196.97 grams/mole.
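
 Such conversions take only a few lines of Python, using the two molar masses just quoted (the sample mass below is arbitrary):

    # Convert a mass of an element into a count of atoms via the mole.
    AVOGADRO = 6.022e23
    MOLAR_MASS = {"carbon": 12.011, "gold": 196.97}   # grams per mole

    def atoms_in(grams: float, element: str) -> float:
        """Number of atoms in the given mass of the element."""
        return grams / MOLAR_MASS[element] * AVOGADRO

    # One troy ounce (about 31.1 grams) of gold:
    print(f"{atoms_in(31.1, 'gold'):.2e} atoms")   # roughly 9.5e22 atoms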

 Mendeleev did not know about the reality of atoms or anything about their sub-atomic components, and so his initial periodic table could only use atomic mass as a guide to where to place the various elements – a fact that made his construction of a workable table that much more difficult and his success in doing so that much more remarkable. Today we not only know about the reality of atoms but also all of their constituents, down to electrons, protons, and neutrons, the latter two of which can be further subdivided into various quarks, as well as the various force particles which hold them together. This is important, because the true, modern, correct version of the table uses atomic numbers: the number of protons in the atomic nucleus (and the number of electrons swirling about that nucleus if it is an electrically neutral atom). None of this should be surprising, by the way; for as I keep emphasizing and re-emphasizing, science rarely if ever proceeds from zero knowledge to 100% understanding in one all-encompassing leap but largely from simpler, cruder models of reality to gradually more sophisticated, complete ones. The fact that we can make progress this way is one of the most fascinating features of science, not to mention one of the most curious features of reality; there is no reason, a priori, that we know of why this should be so. Why shouldn’t it be that to understand anything, you must understand everything first? Why should we be so fortunate that this is so? Feel free to speculate on that little philosophical conundrum.

 But first finish reading this book. As I noted, the modern periodic table is divided into columns or groups of similar elements, each group repeating itself in periods of ever-increasing size; except that, as I have said, the first two periods are really not all that similar to those beneath them. The first period has only two members, hydrogen and helium, the second and third periods have eight members, the fourth and fifth eighteen members, the sixth and seventh thirty-two members (if you add in the lanthanides and actinides, that is), and so on.

 I can’t resist talking about this in more detail, as it fascinated me as a child who didn’t understand the reasons why nature is organized this way. It takes a while to tease out the pattern to the increases, but it works out to be: (first period) = two protons/electrons; (second / third) = eight; (fourth / fifth) = eighteen; (sixth / seventh) = thirty-two. Putting it in tabular form, these increases go as the following:

2 = 2
2 + 6 = 8
2 + 6 + 10 = 18
2 + 6 + 10 + 14 = 32
2 + 6 + 10 + 14 + 18 = 50

 The pattern is, I hope, clear: each new row in this table adds an additional column, which is equal to the previous row’s last column entry plus four. A rather strange pattern, one must admit; but we must also be grateful for it, for patterns in nature are evidence of underlying structures or principles, and so are keys to understanding those structures/principles. The patterns in the periodic table are no different in this regard, as we shall come to see.
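
 In fact the whole table of sums can be generated in a few lines of code: row k holds the terms 2, 6, 10, … (each four more than the last), and its sum collapses to 2k², a formula whose quantum-mechanical origin we shall come to. A minimal sketch:

    # Rebuild the table of period-capacity sums shown above.
    # Row k has terms 2, 6, 10, ...; the row sum works out to 2 * k**2.
    for k in range(1, 6):
        terms = [4 * n + 2 for n in range(k)]    # 2, 6, 10, 14, 18, ...
        print(" + ".join(map(str, terms)), "=", sum(terms), "=", 2 * k * k)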

 The title of this book, The Third Row, refers to the third period in the periodic table, which has a total of eight elements. I have already mentioned, without explanation, that the first two periods possess substantially different properties from those beneath them (not to mention the first from the second); the third period is the first in which the strong similarities up and down the columns hold for all of the groups. For example, once again, hydrogen sulfide (H2S), hydrogen selenide (H2Se), and hydrogen telluride (H2Te) are much more alike one another than any of them is like hydrogen oxide, or water (H2O).

 I think a natural question which arises here is: just why are there so many elements – or, to be more precise, atomic nuclei – in nature, and how did they come to exist?

 Where do the Elements come from? Why are There so Many of Them?

 To answer this question, we must segue from chemistry to, first, nuclear physics, and then to astrophysics and cosmology. The first segue, nuclear physics, is necessary because the elements, or again more specifically their atomic nuclei, are created by the joining together, or fusing, of smaller nuclei. To use the most common example of this, four hydrogen nuclei or protons (1H, where the 1 superscript indicates the total number of protons and neutrons in the nucleus, one proton in the case of hydrogen) are fused together, in one of a number of pathways, to make a helium four nucleus, or 4He, containing two protons and two neutrons. The overall reaction can be written, with some simplification, as:

 1H + 1H + 1H + 1H = 4He + 2e+ + 2νe

 The last two particles in this reaction, e+ and νe, are called the positron, or anti-electron, and the electron neutrino (neutrinos are particles with very small mass and no charge, which travel very close to the speed of light). Their emission is needed to turn two of the 1H nuclei, which are protons, into the two neutrons in the 4He nucleus. Another example of a fusion reaction is the so-called “triple alpha” process, in which three 4He nuclei, which are also the alpha particles mentioned earlier, are fused together to make one 12C nucleus:

 4He + 4He + 4He = 12C
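
 Incidentally, the energy these reactions release can be estimated from the mass defect: the products weigh slightly less than the ingredients, and the difference emerges as energy via E = mc². A rough sketch in Python, using standard atomic masses and lumping some subtleties (neutrino losses, positron annihilation) into the totals:

    # Fusion energy from the mass defect. Masses in unified mass units (u);
    # one u of mass is equivalent to 931.494 MeV of energy.
    U_TO_MEV = 931.494
    M_H1, M_HE4, M_C12 = 1.007825, 4.002602, 12.000000

    q_pp = (4 * M_H1 - M_HE4) * U_TO_MEV     # 4 1H -> 4He
    q_3a = (3 * M_HE4 - M_C12) * U_TO_MEV    # 3 4He -> 12C (triple alpha)
    print(f"hydrogen burning: {q_pp:.1f} MeV per helium nucleus")  # ~26.7
    print(f"triple alpha:     {q_3a:.2f} MeV per carbon nucleus")  # ~7.27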

 These and many other fusion reactions are employed by nature to build up the complement of chemical elements she has so generously provided to us. However, even the simplest of these reactions, hydrogen to helium, can happen only under very specific, and hence uncommon, conditions. To see why, stand back and take a better look at what we are doing. Remember how in school you learned that unlike electric charges attract each other while like charges repel? Well, atomic nuclei are composed of protons and neutrons, and while the neutrons are electrically neutral, the protons carry a positive electric charge and so should, and in fact do, repel each other, even in atomic nuclei. What, then, holds them together in the nucleus, let alone allows them to fuse into even larger nuclei? Why don’t atomic nuclei go around exploding like miniature firecrackers from this mutual repulsion, leaving us with an atom-free universe?

 This turns out to be a very good question, and again one that took many years to answer, by scientists incessantly scratching their heads and trying innumerable experiments. You might attempt, as a first approach to the conundrum, to speculate that some other force in nature provides the solution. What about gravity, for example? We know a good deal about gravity; for example, that it causes all massive objects, regardless of their electric charge or any other factor, to be attracted to each other, via the relationship:

 F = G(m1m2)/r²

In this equation, F is the gravitational force, m1m2 the product of the objects’ masses, r² the square of the distance between the objects (and so the factor which shows how quickly the force between the objects diminishes with distance), and G the proportionality constant in the equation, being equal to 6.673 × 10⁻¹¹ N m² kg⁻² if you are interested. Gravity would, indeed, seem to be a good candidate for holding atomic nuclei together; after all, it is what holds our Earth, not to mention the sun and all the other planets and most of their moons, together, keeps us secure on the surface of our planet instead of being hurled out into space from the centrifugal force its spin generates, keeps the moon revolving about Earth, and Earth and all the other planets in our solar system in their orbits about the sun. In fact, we are much more aware of gravity than of the electric force, and so can be excused for thinking it to be the stronger of the two, and by a considerable ratio.

 Not only could we be excused for thinking this way, we would have to be excused, because reality is in the opposite direction, and by a very large factor at that. In truth, the electromagnetic force of attraction or repulsion is approximately one thousand trillion trillion trillion (10³⁹) times stronger than gravity! The equation of this force is:

F = ke(q1q2)/r²

 where now, instead of m1m2 we have q1q2, the product of the electric charges on the objects (whether attractive or repulsive), and ke as the proportionality constant. As with gravity, we also see that the force diminishes as the square of the distance between the objects.
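
 That enormous disparity is easy to check numerically, since the distance cancels when you take the ratio of the two forces. Note that the exact figure depends on which particles you compare; the sketch below uses two protons, with the electron-proton case (the source of the 10³⁹ figure) noted in a comment:

    # Ratio of electric to gravitational force between two protons.
    G   = 6.673e-11     # gravitational constant, N m^2 kg^-2 (as above)
    KE  = 8.988e9       # Coulomb constant, N m^2 C^-2
    M_P = 1.673e-27     # proton mass, kg
    Q_E = 1.602e-19     # elementary charge, C

    ratio = (KE * Q_E**2) / (G * M_P**2)
    print(f"{ratio:.1e}")   # ~1.2e36 for two protons
    # For an electron (9.109e-31 kg) and a proton, the same calculation
    # gives ~2.3e39 -- the "thousand trillion trillion trillion" above.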

 The fact that the electromagnetic force can be either attractive or repulsive, whereas gravity is always attractive, is the cause of our error in thinking gravity the stronger of the two. When matter is accumulated on the scales we are accustomed to, and larger, there are almost always as many negative as positive charges, and the net effect of this equality is to cancel the charges out, so that at most only a very, very small excess exists in either direction, if indeed there is any excess at all. Gravity, however, is always cumulative, so that massive objects can build up an appreciable attractive pull, the larger the accumulation the greater the pull. Build up enough mass in a small enough volume, in fact, and you will have yourself something called a black hole, an object whose gravity is so intense that not even light can escape its clutches.

 So much for gravity, then; it has no chance of solving our dilemma. What then does hold the protons together in the nucleus and, more to our point, allows them to be fused together into ever larger nuclei? The answer, as Hamlet says to Horatio, involves realizing that “There are more things in heaven and earth than are dreamt of in your philosophy.” Gravity and electricity (more correctly, electromagnetism) are merely the only fundamental forces in the universe we are directly aware of, thanks to their gentle squared-distance attenuation; there are other forces we are rarely cognizant of solely because they diminish over much shorter distances. Physicists call these forces nuclear forces precisely because they drop to virtually zero over distances even a small amount greater than an atomic nucleus. There are two such forces, named, perhaps unimaginatively, the strong nuclear force and the weak nuclear force. The weak nuclear force comes into play in certain kinds of nuclear decay and will not be discussed further here. The strong nuclear force is what catches our interest, because it is attractive only (though only between protons and neutrons and other particles collectively known as hadrons) and some one hundred times as strong as the electromagnetic force. Again, the reason we almost never directly encounter it is its extremely short range, approximately the width of several protons and/or neutrons, or an atomic nucleus at most; beyond that distance, it rapidly diminishes to essentially nothing.


How Does Nature Build Elements Beyond Hydrogen?

 Let us return to the simplest fusion reaction, that of hydrogen to helium:

1H + 1H + 1H + 1H = 4He + 2e+ + 2νe

 We can now see that what holds the two protons and two neutrons together in the resulting helium nucleus must be the strong nuclear force. This is how nature creates not just helium but all of the elements larger than hydrogen: by fusing together smaller nuclei. This presents us with a problem, however. The four hydrogen nuclei, or protons, start out at a distance from one another much larger than the strong force’s range, while at the same time they are close enough to feel the electromagnetic force keeping them apart, a force which is still immensely powerful. Somehow, some way, we must push the protons closer and closer together until they feel the strong force more strongly than their mutual electric repulsion and so stick together to form a nucleus of two or more particles. (What happens after this in the creation of helium and other small nuclei, if you are interested, is that the weak nuclear force causes one or more protons to decay into neutrons, in the process emitting positrons and neutrinos as we saw in the 4 1H → 4He reaction.)

 There are only two ways of forcing the protons close enough to overcome their repulsion: either by pressing them together at extremely high density, or by raising their temperature very high, generally into the millions of degrees, so that they move fast enough to overcome their repulsion and fuse. In practice, one usually has to do both. In nature, there are only two places and times where these conditions exist: one is the first few minutes of the Big Bang, the primordial beginning of our universe, while the other is the extremely hot, dense cores of stars like our sun, both today and in the past. The reason these conditions existed during the Big Bang is that our universe began as either a singularity (a single point in space-time) or a volume very close to one, so that near its beginning the density and temperature must have passed through such fantastic values. The reason they exist now in the cores of stars is the massive gravitational compression and heating there; the so-called “proton – proton nucleosynthesis” fusion reaction to helium is in fact the primary source of most stars’ prodigious energy outputs, including our own sun’s. However, although there have been many quadrillions of stars in our universe carrying out this reaction since stars first started to form some thirteen billion years ago, most of the helium in the cosmos today is in fact the result of Big Bang nucleosynthesis – this is actually one of the facts that have been used to confirm the Big Bang theory. The Big Bang is also responsible for most of the trace amounts of lithium, beryllium, and, I believe, boron, atomic numbers three through five, in the present cosmos.
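
 To get a feel for the numbers, here is a rough comparison of the electric barrier two protons must climb with their thermal energy at the sun’s core temperature. The one-femtometer “touching” distance is an assumption on my part, a conventional round figure for where the strong force takes over:

    # Coulomb barrier between two protons versus thermal energy at 15e6 K.
    KE  = 8.988e9       # Coulomb constant, N m^2 C^-2
    Q_E = 1.602e-19     # elementary charge, C
    K_B = 1.381e-23     # Boltzmann constant, J/K
    EV  = 1.602e-19     # joules per electron-volt

    barrier = KE * Q_E**2 / 1.0e-15 / EV    # at ~1 femtometer, in eV
    thermal = 1.5 * K_B * 1.5e7 / EV        # (3/2)kT at 15 million K, in eV

    print(f"Coulomb barrier: ~{barrier/1e6:.1f} MeV")   # ~1.4 MeV
    print(f"thermal energy:  ~{thermal/1e3:.1f} keV")   # ~1.9 keV
    # The shortfall (a factor of ~1000) is bridged by quantum tunneling
    # and by the fastest protons in the thermal distribution.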


Creation of Elements Beyond Helium

What about the other elements, including the ones in the third period we will be discussing? I’ve already shown one fusion reaction, the triple-alpha process, which yields carbon. This reaction requires much higher densities and temperatures than the proton-proton reaction, however, and by now it should be pretty obvious why. Helium nuclei contain twice the number of protons as hydrogen nuclei, and so the electromagnetic repulsion between two of them is proportionately higher – four times as high, since it is the product of the two charges that counts – while the fact that the strong force is one hundred times as strong does not help us here because of its very short range. Another reason is that now we are trying to fuse three nuclei into one, meaning you have to start by fusing two and hope the resulting nucleus lasts long enough to be struck by the third 4He. This intermediate nucleus, 8Be, however, is extraordinarily unstable, fissioning or breaking back down into two 4He in a fraction of a trillionth of a second.

 The triple-alpha process couldn’t happen during Big Bang nucleosynthesis because by the time enough helium had been created, the temperature and density of the expanding universe had dropped below what the reaction needs. The only place it can still happen, and still does happen, like the proton-proton process, is in the cores of stars; not just any stars, however, but only those significantly more massive than our sun. The reason for this is straightforward: hydrogen fusion in stars creates a helium “ash” which, as it is both heavier than hydrogen and has no energy source itself, collects in the center of the star. This core of helium grows throughout the star’s lifetime, raising the temperature of the core through its unchecked gravitational compression. As the core’s temperature rises, the hydrogen fusion surrounding it becomes more intense; this leads to more helium accumulation, still higher core temperatures from gravitational compression, higher rates of proton-proton fusion, and so on, in a positive feedback mechanism that causes even stars like the sun to grow steadily hotter and brighter throughout this part of their evolution. The end result of this positive feedback loop is a “red giant” phase, in which stars like our sun become hundreds of times brighter than during their “Main Sequence” phase, their outer regions expanding to some hundred times their current diameters or more, while their color drops from yellow/white to red as the expanded atmospheres cool.

 For a sun-like star, that is pretty much it (and it’s about five billion years in our future for our sun, so don’t worry about it). The greatly increased radiation pressure from the red giant’s core eventually blows most of its atmosphere and other outer regions away into interstellar space. Meanwhile the remnant central region, now exhausted of hydrogen fuel to fuse, shrinks until it is approximately the size of Earth or smaller. Its surface is still white-hot from the core, hence the name “white dwarf”, but it gradually cools over billions of years back down to red and then infrared invisibility. Finally it is as cold as space itself.

 This is the fate of most stars, but not, as I have said, of those significantly more massive than the sun. In massive stars the hydrogen fusion is much more profligate, as it must be to generate the enormous radiation pressure needed to hold the star up against gravitational collapse. This means the core temperature is much higher than our sun’s, and by the star’s red giant phase will be in the hundreds of millions to billions of degrees (instead of a “modest” fifteen million degrees C in the sun). At these temperatures the triple-alpha process can and does occur. This ignition of helium burning in the core will consume most of its helium (and quite quickly, by stellar standards), converting it not only into carbon but also turning some of that carbon further into oxygen, neon, and magnesium as additional 4He nuclei are fused in. Other elements are also created by the fusion of protons, neutrons, and other small nuclei.

 All these processes, in the most massive of stars, can continue to build heavier nuclei all the way up to iron and nickel. However, to create nuclei larger than iron and nickel requires an input of energy rather than its release, as the most stable nuclei (those having the highest binding energy per nucleon) end with these metals; all the heavier elements, up to uranium and beyond, are created mainly by neutron capture, a process that absorbs energy rather than releasing it (thereby hastening the end of the star’s life). Even using this process, however, the most massive stars can create nuclei only up to a certain number of protons, roughly between ninety and one hundred. The reason for this is that most of these larger nuclei (some of the thorium and uranium nuclei are exceptions) are intensely radioactive and decay to smaller ones in short periods of time.
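
 The claim that stability peaks at iron and nickel can be illustrated with the semi-empirical mass formula, a well-known approximation for nuclear binding energies; the coefficients below are one common textbook fit, and the formula is admittedly crude for the lightest nuclei:

    # Binding energy per nucleon (MeV) from the semi-empirical mass formula
    # (pairing term omitted). It peaks near iron, which is why building
    # nuclei beyond iron/nickel absorbs energy instead of releasing it.
    aV, aS, aC, aA = 15.75, 17.8, 0.711, 23.7

    def b_per_a(Z: int, A: int) -> float:
        B = (aV * A - aS * A**(2/3)
             - aC * Z * (Z - 1) / A**(1/3)
             - aA * (A - 2*Z)**2 / A)
        return B / A

    for name, Z, A in [("helium-4", 2, 4), ("carbon-12", 6, 12),
                       ("iron-56", 26, 56), ("uranium-238", 92, 238)]:
        print(f"{name:12s} {b_per_a(Z, A):5.2f} MeV per nucleon")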

 All this of course only says how the elements are created; it does not tell us how they then found their way into other stars and their planets, including Earth, of which they are the primary constituents. What completes the tale is also told by the most massive stars; for not only do they build these heavy elements, they then blast them into interstellar space via the supernova explosions which end their brief lives, leaving behind either neutron stars or black holes. Newer generations of stars and their planetary entourages then sweep these elements up during their formation. This also explains why the lighter elements, essentially the first two rows of the periodic table and to a lesser extent the third, make up the bulk of the matter in our universe; for, as we have seen, the heavier elements become increasingly challenging to create.


Summary

So. We have explained what the chemical elements are, as well as how they are organized, how they were created, and why there are as many of them as there are. Excuse me, I should say how their atomic nuclei were created; talking about the elements themselves means talking about their constituent atoms, which includes their electrons, how the electrons are organized about the nucleus, and how they behave. This finally is the subject of chemistry, and we will make our first inroads into it in the next chapter.

 
