In the standard scientific paradigm, observations lead to hypotheses (educated guesses to explain the observations), which lead to further observations and experiments designed to prove or disprove those hypotheses. If a hypothesis continues to pass the tests against it and be supported by the tests for it, at some point we call it a theory -- which, in scientific parlance, means something about as close to a fact as science gets. Of course, it must not contradict any other theories; otherwise we have to dig deeper and discover which is true. Or it might turn out, as with General Relativity and Quantum Mechanics, that two theories contradict each other yet are each true within their own respective realms. When this happens we suspect an even deeper theory, connecting the two and making sense in all realms. This is where we are in physics right now, though we are making progress.
I want to talk about a different subject: Darwinian evolution. After Darwin published The Origin of Species in 1859, it was swiftly accepted by most scientists as true, even though a number of details had yet to be worked out. For example, Darwin's ideas on inheritance were in conflict with his theory, mainly due to ignorance of Gregor Mendel's 1866 publication on particulate inheritance, an obscure article that Darwin (like almost everyone else) never read at the time. It was not until 1900 that Mendel's work was rediscovered and that chestnut laid to rest.
However, there was a more serious problem with Darwinism at the time, and it concerned the sun. You see, nobody then knew where the sun's prodigious energies came from; there were two prevailing hypotheses, neither of them adequate. One was that the energy came from chemical burning, as though the sun were a huge sphere of carbon or some other flammable material in space -- never mind that we knew by then there was no oxygen in space, and that this element is essential for combustion. The other hypothesis was that gravitational contraction of the sun provided the energy. This stretched the sun's lifetime out to some millions of years, but was still not enough to satisfy evolution, which required hundreds of millions to billions of years.
If Darwin was right, then an implied hypothesis followed automatically: there had to be a source of energy for the sun and stars which could power them for billions of years. Yet it was not until 1905 that Albert Einstein suggested one with his famous equation E = mc²: energy equals mass times the speed of light squared. Some three decades later Hans Bethe described the hydrogen fusion reactions (which create helium) in the core of the sun, reactions that would keep the sun alive for at least ten billion years (we are now about halfway through its lifetime). Gravity had already explained how stars form from interstellar dust and gas, putting the final piece of the puzzle in place.
Wednesday, September 25, 2013
The Idiot’s Guide to Making Atoms
Avogadro’s Number and Moles
Writing
this chapter has reminded me of the opening of a story by a
well-known science fiction author (whose name, needless to say, I
can’t recall): “This is a warning, the only one you’ll get so
don’t take it lightly.” Alice in Wonderland or “We’re
not in Kansas anymore” also pop into mind. What I mean by this is
that I could find no way of writing it without requiring the reader
to put his thinking (and imagining) cap on. So: be prepared.
A few things about science in general before I plunge headlong into the subject I’m going to cover. I have already mentioned the way science is a step-by-step, often even torturous, process of discovering facts, running experiments, making observations, thinking about them, and so on; a slow but steady accumulation of knowledge and theory which gradually reveals to us the way nature works, as well as why. But there is more to science than this. That something more has to do with the concept, or hope I might say, of trying to understand things like the universe as a whole, or things as tiny as atoms, or geological time, or events that happen over exceedingly short time scales, like billionths of a second. I say hope because in dealing with such things, we are far removed from reality as we deal with it every day, in the normal course of our lives.
The problem is that, when dealing with such extremes, we find that most of our normal ideas and expectations – our intuitive, “common sense”, felt grasp of reality – all too frequently start to break down. There is of course good reason why this should be, and is, so. Our intuitions and common sense reasoning have been sculpted by our evolution – I will resist the temptation to say designed, although it often feels that way, for, ironically, the same reasons – to grasp and deal with ordinary events over ordinary scales of time and space. Our minds are not well endowed with the ability to intuitively understand nature’s extremes, which is why these extremes so often seem counter-intuitive and even absurd to us.
Take, as one of the best examples I know of this, biological evolution, à la Darwin. As the English biologist and author Richard Dawkins has noted several times in his books, one of the reasons so many people have a hard time accepting Darwinian evolution is the extremely long time scale over which it occurs, time scales in the millions of years and more. None of us can intuitively grasp a million years; we can’t even grasp, for that matter, a thousand years, which is one-thousandth of a million. As a result, the claim that something like a mouse can evolve into something like an elephant feels “obviously” false. But that feeling is precisely what we should ignore in evaluating the possibility of such events, because we cannot have any such feeling for the exceedingly long time span they would take. Rather, we have to evaluate the likelihood using evidence and hard logic; common sense can seriously mislead us.
The same is true for nature on the scale of the extremely small. When we start poking around in this territory, with things like atoms and sub-atomic particles, we find ourselves in a world which bears little resemblance to the one we are used to. I am going to try various ways of giving you a sense of how the ultra-tiny works, but I know in advance that no matter what I do I am still going to be presenting concepts and ideas that seem, if anything, more outlandish than Darwinian evolution; ideas and concepts that might, no, probably will, leave your head spinning. If it is any comfort, they often leave my mind spinning as well. And again, the only reason to accept them is that they pass the scientific tests of requiring evidence and passing the muster of logic and reason; but they will often seem preposterous, nevertheless.
First, however, let’s try to grab hold of just how tiny the world we are about to enter is. Remember Avogadro’s number, the number of particles in a mole of anything, from the last chapter? The reason we need such an enormous number when dealing with atoms is that they are so mind-overwhelmingly small. When I say mind-overwhelmingly, I really mean it. A good illustration of just how small, one that I enjoy, is to compare the number of atoms in a glass of water to the number of glasses of water in all the oceans on our planet. As incredible as it sounds, the ratio of the former to the latter is around 10,000 to 1. This means that if you fill a glass with water, walk down to the seashore, pour the water into the ocean, and wait long enough for it to disperse evenly throughout all the oceans (if anyone has managed to calculate how long this would take, please let me know), then dip your now empty glass into the sea and re-fill it, you will have scooped up some ten thousand of the original atoms it contained. Another good way of stressing the smallness of atoms is to note that every time you breathe in, you are inhaling some of the atoms that some historical figure – say Benjamin Franklin or Muhammad – breathed in his lifetime. Or maybe just in one of their breaths; I can’t remember which – that’s how hard it is to grasp just how small atoms are.
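If you want to check the glass-and-ocean arithmetic yourself, here is a minimal back-of-the-envelope sketch in Python. The inputs are round-number assumptions of mine (a 250 mL glass, water at 18 grams per mole with three atoms per molecule, and roughly 1.3 × 10²¹ liters of ocean), not figures from any official source:

AVOGADRO = 6.022e23                      # molecules per mole

glass_ml = 250.0                         # assumed size of the glass
molecules_per_glass = (glass_ml / 18.0) * AVOGADRO  # water: ~18 g/mol, ~1 g/mL
atoms_per_glass = 3 * molecules_per_glass           # H2O: three atoms per molecule

ocean_liters = 1.3e21                    # rough total volume of the oceans
glasses_in_ocean = ocean_liters / (glass_ml / 1000.0)

print("atoms per glass:      %.1e" % atoms_per_glass)        # ~2.5e25
print("glasses in the ocean: %.1e" % glasses_in_ocean)       # ~5.2e21
print("ratio: %.0f" % (atoms_per_glass / glasses_in_ocean))  # a few thousand

The ratio comes out in the thousands – the same order of magnitude as the 10,000-to-1 figure quoted above.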
One
reason all this matters is that nature in general does not
demonstrate the property that physicists and mathematicians call
“scale invariance.” Scale invariance simply means that, if you
take an object or a system of objects, you can increase its size up
to as large as you want, or decrease it down, and its various
properties and behaviors will not change. Some interesting systems
that do possess scale invariance are found among the mathematical
entities called fractals: no matter how much you enlarge or shrink
these fractals, their patterns repeat themselves over and over ad
infinitum without change. A good example of this is the Koch
snowflake:
which
is just a set of repeating triangles, to as much depth as you want.
There are a number of physical systems that have scale invariance as well, but, as I just said, in general this is not true. For example, going back to the mouse and the elephant, you could not scale the former up to the size of the latter and let it out to frolic on the African savannah with the other animals; our supermouse’s proportionately tiny legs, for one thing, would not be strong enough to lift it from the ground. Making flies human sized, or vice-versa, runs into similar kinds of problems (a fly can walk on walls and ceilings because it is so small that electrostatic forces dominate its behavior far more than gravity does).
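The supermouse’s plight is just the square-cube law at work: strength grows with cross-sectional area (length squared) while weight grows with volume (length cubed). A minimal Python sketch, assuming an idealized animal whose legs scale exactly this way:

def strength_to_weight(scale):
    """Relative strength-to-weight ratio after scaling all lengths by `scale`."""
    strength = scale ** 2   # muscle and bone strength ~ cross-sectional area
    weight = scale ** 3     # body weight ~ volume
    return strength / weight

print(strength_to_weight(1))    # 1.0   -- the original mouse
print(strength_to_weight(100))  # 0.01  -- scaled up 100x in length, its legs
                                # are now relatively 100 times too weak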
Scale
Invariance – Why it Matters
One natural phenomenon that we know lacks scale invariance, and which we met in the last chapter, is matter itself. We know now that you cannot take a piece of matter, a nugget of gold for example, and keep cutting it into smaller and smaller pieces until the end of time. Eventually we reach the scale of individual gold atoms, and then even smaller, into the electrons, protons, and neutrons that comprise the atoms, all of which are very different things from the nugget we started out with. I hardly need to say that all elements, and all their varied combinations, up to stars and galaxies and larger, including even the entire universe, suffer the same fate. I should add, for the sake of completeness, that we cannot go in the opposite direction either; as we move toward increasingly more massive objects, their behavior is more and more dominated by the field equations of Einstein’s general relativity, which alter the space and time around and inside them to an ever more significant degree.
Why do I take the time to mention all this? Because we are en route to explaining how atoms, electrons and all, are built up and how they behave, and we need to understand that what goes on in nature at these scales is very different from what we are accustomed to. If we cannot adapt our thinking to these different behaviors, we are going to find the sledding very tough indeed, actually impossible.
In my previous book, Wondering About, I out of necessity gave a very rough picture of the world of atoms and electrons, and how that picture helped explain the various chemical and biological behaviors that a number of atoms (mostly carbon) displayed. I say “of necessity” because I didn’t, in that book, want to mire the reader in a morass of details and physics and equations which weren’t needed to explain the things I was trying to explain in a chapter or two. But here, in a book largely dedicated to chemistry, I think the sledding is worth it, even necessary, even if we do still have to make some dashes around trees and skirt the edges of ponds and creeks, and so forth.
Actually, it seems to me that there are two approaches to this field, the field of quantum mechanics, the world we are about to enter, and how it applies to chemistry. One is to simply present the details, as if out of a cookbook: we are presented our various dishes of, first, classical mechanics, then the Lagrangian equations of motion and Hamiltonian operators and so forth, followed by Schrödinger’s various equations and Heisenberg’s matrix approach, with eigenvectors and eigenvalues, and all sorts of stuff that one can bury one’s head in and never come up for air. Incidentally, if you do want to summon your courage and take the plunge, a very good book to start with is Melvin Hanna’s Quantum Mechanics in Chemistry, of which I possess the third edition, and which I peruse from time to time when I am in the mood for such fodder.
The problem with this approach is that, although it cuts straight to the chase, it leaves out the historical development of quantum mechanics, which, I believe, is needed if we are to understand why and how physicists came to present us with such a peculiar view of reality. They had very good reasons for doing so, and yet the development of modern quantum mechanical theory is something that took several decades to mature and is still in some respects an unfinished body of work. Again, this is largely because some of its premises and findings are at odds with what we would intuitively expect about the world (another reason is that the math can be very difficult). These are premises and findings such as the quantization of energy and other properties to discrete values in very small systems such as atoms. Then there is Heisenberg’s famous though still largely misunderstood uncertainty principle (and how the latter leads to the former).
Talking
About Light and its Nature
A good way of launching this discussion is to begin with light, or, more precisely, electromagnetic radiation. What do I mean by these polysyllabic words? Sticking with the historical approach, the phenomena of electricity and magnetism had been intensely studied in the 1800s by people like Faraday and Gauss and Ørsted, among others. The culmination of all this brilliant theoretical and experimental work was summarized by the Scottish physicist James Clerk Maxwell, who in 1865 published a set of equations describing the relationships between the two phenomena and all that had been discovered about them. These equations were then condensed down to four and placed in one of their modern forms in 1884 by Oliver Heaviside. One version of these equations is (if you are a fan of partial differential equations):
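∇ · E = ρ/ε₀
∇ · B = 0
∇ × E = −∂B/∂t
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t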
Don’t
worry if you don’t understand this symbolism (most of it I don’t).
The important part here is that the equations predict the existence
of electromagnetic waves propagating through free space at the speed
of light; waves rather like water waves on the open ocean albeit
different in important respects. Maxwell at once realized that light
must be just such a wave, but, more importantly, that there must be a
theoretically infinite number of such waves, each with different
wavelengths ranging from the very longest, what we now call radio
waves, to the shortest, or gamma rays. An example of such a wave is
illustrated below:
To
assist you in understanding this wave, look at just one component of
it, the oscillating electric field, or the part that is going up and
down. For those not familiar with the idea of an electric (or
magnetic) field, simply take a bar magnet, set it on a piece of
paper, and sprinkle iron filings around it. You will discover, to
your pleasure I’m certain, that the filings quickly align
themselves according to the following pattern:
The pattern literally traces out the – in this case, magnetic – field of the bar magnet, but we could have used an electrically charged source to produce a somewhat different pattern. The point is, the field makes the iron filings move into their respective positions; furthermore, if we were to move the magnet back and forth or side to side, the filings would continuously move with it to assume their desired places. This happens because the outermost electrons in the filings (which, in addition to carrying an electric charge, also behave as very tiny magnets) are basically free to orient themselves any way they want, so they respond to the bar’s field with gusto, in the same way a compass needle responds to Earth’s magnetic field. If we were using an electric dipole, it would be the electric properties of the filings’ electrons performing the trick, but the two phenomena are highly interrelated.
Go back to the previous figure, of the electromagnetic wave. The wave is a combination of oscillating electric and magnetic fields, at right angles (90°) to each other, propagating through space. Now, imagine this wave passing through a wire made of copper or any other metal. Hopefully you can perceive by now that, if the wave is within a certain frequency range, it will cause the electrons in the wire’s atoms to start moving and gyrating to accommodate the changing electric and magnetic fields, just as you saw with the iron filings and the bar magnet. Not only would they do that, but the resulting electron motions could be picked up by the right kinds of electronic gizmos, transistors and capacitors and resistors and the like – here, we have just explained the basic working principle of radio transmission and reception, assuming the wire is the antenna. Not bad for a few paragraphs of reading.
This all sounds very nice and neat, yet it is but our first foot in the door of what leads to modern quantum theory. The reason is that this pat, pretty perception of light as a wave just didn’t jibe with some other phenomena scientists were trying to explain at the end of the nineteenth century and the beginning of the twentieth. The main such phenomena, which quantum thinking eventually solved, were the puzzles of the so-called “blackbody” radiation spectrum and the photoelectric effect.
Blackbody
Radiation and the Photo-electric Effect
If you take an object, say, the tungsten filament of the familiar incandescent light bulb, and start pumping energy into it, not only will its temperature rise but at some point it will begin to emit visible light: first a dull red, then brighter red, then orange, then yellow – the filament eventually glows with a brilliant white light, meaning all the colors of the visible spectrum are present in more or less equal amounts, illuminating the room in which we switched the light on. Even before it starts to visibly glow, the filament emits infrared radiation, which consists of longer wavelengths than visible red and is outside our range of vision. It does so in progressively greater amounts and at shorter and shorter wavelengths, until the red light region and above is finally reached. At not much higher temperatures the filament melts, or at least breaks at one of its ends (which is why it is made from tungsten, the metal with the highest melting point), breaking the electric current and causing us to replace the bulb.
The filament is a blackbody in the sense that, to a first approximation, it completely absorbs all radiation poured onto it, and so its electromagnetic spectrum depends only on its temperature and not on any properties of its physical or chemical composition. Other objects which are blackbodies include the sun and stars, and even our own bodies – if you could see into the right region of the infrared range, we would all be glowing. A set of five blackbody electromagnetic spectra are illustrated below:
Examine these spectra, the colored curves, carefully. Each curve corresponds to a body at a particular temperature. They all start out at zero on the left, which is the short end of the wavelength (λ, a Greek letter pronounced lambda) scale; the height of each curve quickly rises to a maximum at a certain wavelength, followed by a gradual decline at progressively longer wavelengths until it is essentially back at zero again. What is pertinent to the discussion here is that, if we were living around 1900, all these spectra would be experimental; it was not possible then, using the physical laws and equations known at the end of the 1800s, to explain or predict them theoretically. Instead, from the laws of physics as then known, the predicted spectra would simply keep increasing as λ grew shorter, resulting in what was called “the ultraviolet catastrophe.”
Another, seemingly altogether different, phenomenon that could not be explained using classical physics principles was the so-called photoelectric effect. The general idea is simple enough: if you shine light of the right wavelength or shorter onto certain metals – the alkali metals, including sodium and potassium, show this effect the strongest – electrons will be ejected from the metal, which can then be easily detected:
This illustration not only shows the effect but also the problem 19th-century physicists had explaining it. There are three different light rays shown striking the potassium plate: red at a wavelength of 700 nanometers or nm (an nm is a billionth of a meter), green at 550 nm, and purple at 400 nm. Note that the red light fails to eject any electrons at all, while the green and purple rays each eject only one electron, with the purple-ejected electron escaping at a higher velocity, meaning higher energy, than the green.
The reason this is so difficult to explain with the physics of the 1800s is that physics then defined the energy of all waves using both the wave’s amplitude, which is the distance from crest (highest point) to trough (lowest point), and its wavelength (the shorter the wavelength, the more waves strike within a given time). This is something you can easily appreciate by walking into the ocean until the water is up to your chest; the higher the waves are and the faster they hit you, the harder it is to stay on your feet.
Why don’t the electrons in the potassium plate above react in the same way? If light behaved as a classical wave, it should be not only the wavelength but also the intensity or brightness (assuming this is the equivalent of amplitude) that determines how many electrons are ejected and with what velocity. But this is not what we see: no matter how much red light, of whatever intensity, we shine on the plate, no electrons are emitted at all, while for green and purple light the shortening of the wavelength in and of itself increases the energy of the ejected electrons, once again regardless of intensity. In fact, increasing the intensity only increases the number of escaping electrons, assuming any escape at all, not their velocity. All in all, a very strange situation, which, as I said, had physicists everywhere scratching their heads at the end of the 1800s.
The answers to these puzzles, and several others, come back to the point I made earlier about nature not being scale invariant. These conundrums were simply insoluble until scientists began to think of things like atoms and electrons and light waves as being quite unlike anything they were used to on the larger scale of human beings and the world as we perceive it. Using such an approach, the two men who cracked the blackbody spectrum problem and the photoelectric effect, Max Planck and Albert Einstein, did so by discarding the concept of light as a classical wave and instead, as Newton had insisted two hundred years earlier, thinking of it as a particle, a particle which came to be called a photon. But they did not treat the photon as a classical particle either, but as a particle with a wavelength; furthermore, the energy E of this particle was described, or quantized, by the equation

E = hc/λ

in which c is the speed of light, λ the photon’s wavelength, and h Planck’s constant, the latter equal to 6.626 × 10⁻³⁴ joule-seconds – please note the extremely small value of this number. In contrast to our earlier, classical description of waves, the amplitude is nowhere to be found in the equation; only the wavelength, or frequency, of the photon determines its energy.
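To get a feel for the sizes involved, here is a minimal Python sketch that evaluates E = hc/λ for the three rays from the potassium-plate illustration (the wavelengths are the ones quoted above; the constants are the standard values):

H = 6.626e-34        # Planck's constant, in joule-seconds
C = 2.998e8          # speed of light, in meters per second

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon, E = h*c/lambda."""
    return H * C / (wavelength_nm * 1e-9)

for nm in (700, 550, 400):
    print(nm, "nm ->", photon_energy_joules(nm), "J")
# 700 nm -> ~2.8e-19 J, 550 nm -> ~3.6e-19 J, 400 nm -> ~5.0e-19 J:
# the shorter the wavelength, the more energetic each photon,
# regardless of how bright the beam is.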
If you are starting to feel a little dizzy at this point in the story, don’t worry; you are in good company. A particle with a wavelength? Or, conversely, a wave that acts like a particle, even if only under certain circumstances? A wavicle? Trying to wrap your mind around such a concept is like awakening from a strange dream in which bizarre things, only vaguely remembered, happened. And the only justification for this dream world is that it made sense of what was being seen in the laboratories of those who studied these phenomena. Max Planck, for example, was able, using this definition, to develop an equation which correctly predicted the shapes of blackbody spectra at all possible temperatures. And Einstein elegantly showed how it solved the mystery of the photoelectric effect: it takes a minimum energy to eject an electron from a metal atom, an energy dictated by the wavelength of the incoming photon; the velocity, or kinetic energy, of the emitted electron comes solely from the residual energy of the photon after the ejection. The number of electrons freed this way is simply equal to the number of photons that shower down on the metal – that is, the light’s intensity. It all fit perfectly. The world of the quantum had made its first secure footprints in the field of physics.
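Einstein’s bookkeeping is easy to mimic. In the sketch below, the kinetic energy of an ejected electron is the photon’s energy minus the minimum energy needed to free it; the work function I use for potassium (about 2.2 electron-volts) is a round value I have assumed for illustration:

H = 6.626e-34            # Planck's constant, J*s
C = 2.998e8              # speed of light, m/s
EV = 1.602e-19           # joules per electron-volt
WORK_FUNCTION = 2.2      # assumed work function of potassium, in eV

def ejected_electron_ev(wavelength_nm):
    """Kinetic energy of the ejected electron, or None if none is ejected."""
    photon_ev = H * C / (wavelength_nm * 1e-9) / EV
    surplus = photon_ev - WORK_FUNCTION
    return surplus if surplus > 0 else None

for nm in (700, 550, 400):
    print(nm, "nm ->", ejected_electron_ev(nm))
# 700 nm -> None (red ejects nothing, however intense the beam)
# 550 nm -> ~0.05 eV (green: barely out)
# 400 nm -> ~0.9 eV (purple: out, and moving faster)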
There was much, much
more to come.
The
Quantum and the Atom
Another phenomenon that scientists couldn’t explain until the concept of the quantum came along around 1900–1905 was the atom itself. Part of the reason for this is that, as I have said, atoms were not widely accepted as real, physical entities until electrons and radioactivity were discovered by people like the Curies and J. J. Thomson, Rutherford performed his experiments with alpha particles, and Einstein did his work on Brownian motion and the photoelectric effect (the results of which he published in 1905, the same year he published his papers on special relativity and the E = mc² equivalence of mass and energy, all at the tender age of twenty-six!). Another part is that, even where atoms were accepted, physics through the end of the 1800s simply could not explain how they could be stable entities.
The problem with atomic structure became apparent in 1911, when Rutherford published his “solar system” model, in which a tiny, positively charged nucleus (again, neutrons were not discovered until 1932, so at the time physicists only knew about the atomic masses of elements) was surrounded by orbiting electrons, in much the same way as the planets orbit the sun. The snag with this rather intuitive model involved – here we go again, both with not trusting intuition and with nature not being scale invariant – something physicists had known for some time about charged particles.
When a charged particle changes direction, it emits electromagnetic radiation and thereby loses energy. Orbiting electrons are constantly changing direction and so, theoretically, should lose their energy and fall into the nucleus in a tiny fraction of a second (the same is true of planets orbiting a sun, though there it takes many trillions of years to happen). It appeared that the Rutherford model, although still commonly invoked today, suffered from a lethal flaw.
And yet this model was compelling enough that there ought to be some means of rescuing it from its fate. That means was published two years later, in 1913, by Niels Bohr, possibly, after Einstein, the most influential physicist of the twentieth century. Bohr’s insight was to take Planck’s and Einstein’s idea of the quantization of light and apply it to the electrons’ orbits. It was a magnificent synthesis of scientific thinking; I cannot resist inserting here Jacob Bronowski’s description of Bohr’s idea, from his book The Ascent of Man:
Now in a sense, of course, Bohr’s
task was easy. He had the Rutherford atom in one hand, he had the
quantum in the other. What was there so wonderful about a young man
of twenty-seven in 1913 putting the two together and making the
modern image of the atom? Nothing but the wonderful, visible
thought-process: nothing but the effort of synthesis. And the idea
of seeking support for it in the one place where it could be found:
the fingerprint of the atom, namely the spectrum in which its
behavior becomes visible to us, looking at it from outside.
Reading this reminds me of another feature of atoms I have yet to mention. Just as blackbodies emit a spectrum of radiation, one based purely on their temperature, so do the different atoms have their own spectra. But the latter come with a twist: instead of being continuous, they consist of a series of sharp lines, and they are not temperature dependent but are usually evoked by electric discharges into a mass of the atoms. The best known of these spectra, and the one shown below, is that of atomic hydrogen (atomic because hydrogen usually exists as diatomic molecules, H₂, but the electric discharge also dissociates the molecules into discrete atoms):
This is the visible part of the hydrogen atom spectrum, or so-called
Balmer series, in which there are four distinct lines: from right to
left, the red one at 656 nanometers (nm), the blue-green at 486 nm,
the blue-violet at 434 nm, and the violet at 410 nm.
Bohr’s dual challenge was to explain both why the atom – in this case hydrogen, the simplest of atoms – didn’t wind down like a spinning top as classical physics predicted, and why its spectrum consisted of these sharp lines instead of being continuous as the energy was lost. As said, he accomplished both tasks by invoking quantum ideas. His reasoning ran more or less like this: the planets in their paths around the sun can potentially occupy any orbit, in the same continuous fashion we have learned to expect from the world at large. As we now might begin to suspect, however, this is not true for the electrons “orbiting” (I put this in quotes because we shall see that this is not actually the case) the nucleus. Indeed, this is the key concept which solves the puzzle of atomic structure, and which allowed scientists and other people to finally breathe freely while accepting the reality of atoms.
Bohr
kept the basic solar system model, but modified it by saying that
there was not a continuous series of orbits the electrons could
occupy but instead a set of discrete ones, in-between which there was
a kind of no man’s land where electrons could never enter. Without
going into details you can see how, at one stroke, this solved the
riddle of the line spectra of atoms: each spectral line represented
the transition of an electron from a higher orbit (more energy) to a
lower one (less energy). For example, the 656 nm red line in the
Balmer spectrum of hydrogen is caused by an electron dropping from
orbit level three to orbit level two:
Here again we see the magical formula hν, the energy of the emitted photon, in this case equal to ΔE, the difference in energy between the two orbits. Incidentally, when the electron falls further inward, from orbit level two to orbit level one – a transition belonging to what is known as the Lyman series – the emitted photon is at 122 nm, well into the ultraviolet and invisible to our visual systems. Likewise, falls to level three from above, the so-called Paschen series, occur in the equally invisible infrared. There are also levels four, five, six … potentially out to infinity. It was the discovery of these and other series which confirmed Bohr’s model and in part earned him the Nobel Prize in physics in 1922.
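All of these series fall out of one small formula, 1/λ = R(1/n₁² − 1/n₂²), where R is the Rydberg constant for hydrogen and n₁ and n₂ are the lower and upper orbit levels. A minimal Python sketch reproducing the lines just discussed:

R = 1.0968e7   # Rydberg constant for hydrogen, per meter

def line_nm(n_lo, n_hi):
    """Wavelength (nm) of the photon emitted falling from n_hi to n_lo."""
    return 1e9 / (R * (1.0 / n_lo**2 - 1.0 / n_hi**2))

print([round(line_nm(2, n)) for n in (3, 4, 5, 6)])
# [656, 486, 434, 410] -- the four visible Balmer lines
print(round(line_nm(1, 2)))   # 122 nm: Lyman, ultraviolet
print(round(line_nm(3, 4)))   # 1876 nm: Paschen, infrared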
This is fundamentally the way science works. Inexplicable features of reality are solved, step by step, sweat drop by sweat drop, blood drop by blood drop, by the application of known physical laws; or, when needed, new laws and new ideas are summoned forth to explain them. Corks are popped, the bubbly flows, and awards are apportioned among the minds that made the breakthroughs. But then, as always, when the party is over and the guests start working off their hangovers, we realize that although, yes, progress has been made, there is still more territory to cover. Ironically, sometimes the new territory is a direct consequence of the conquests themselves.
Bohr’s triumph over atomic structure is perhaps the best known entrée in this genre of the story of scientific progress. Two problems in particular, one empirical and one theoretical, arose from it, problems which sobered up the scientific community. The empirical problem was that Bohr’s atomic model, while it perfectly explained the behavior of atomic hydrogen, could not be successfully applied to any other atom or molecule, not even seemingly simple helium or molecular hydrogen (H₂), the former of which sits just after hydrogen in the periodic table. The theoretical problem was that the quantization of orbits was done on a purely ad hoc basis, without any meaningful physical insight as to why it should be true.
And
so the great minds returned to their offices and chalkboards,
determined to answer these new questions.
Key Ideas in the Development of Quantum Mechanics
The key idea which came out of trying to solve these problems was that, if that which had been thought of as a wave – light – could also possess particle properties, then perhaps the reverse was also true: that which had been thought of as having a particle nature, such as the electron, could also have the characteristics of waves. Louis de Broglie, in his 1924 model of the hydrogen atom, introduced this concept, which came to be called wave-particle duality, explaining Bohr’s discrete orbits by recasting them as the distances from the nucleus where standing electron waves could exist only in whole numbers of wavelengths, as the mathematical theory behind waves demanded:
De Broglie’s model was supported in the late 1920s by experiments which showed that electrons did indeed display wave features, at least under the right conditions. Yet, though a critical step forward in the formulation of the quantum mechanical description of atoms, de Broglie still fell short. For one thing, like Bohr, he could only predict the properties of the simplest atom, hydrogen. Second, and more importantly, he still gave no fundamental insight as to how or why particles could behave as waves and vice-versa. Although I have said that reality on such small scales should not be expected to behave in the same manner as on the scales we are used to, there still has to be some kind of underlying theory, an intellectual glue, that allows us to make at least some sense of what is really going on. And scientists in the early 1920s still did not possess that glue.
That glue was first provided by people like Werner Heisenberg and Max Born, who, only a few years after de Broglie’s publication, created a revelation – or perhaps I should say revolution – with one of scientific, no, philosophic history’s most astonishing ideas. In 1925 Heisenberg, working with Born, introduced the technique of matrix mechanics, one of the modern ways of formulating quantum mechanical systems. Crucial to the technique was the concept that at the smallest levels of nature, such as with electrons in an atom, neither the positions nor the motions of particles can be defined exactly. Rather, these properties are “smeared out” in a way that leaves the particles with a defined uncertainty. This led, within two years, to Heisenberg’s famous Uncertainty Principle, which declared that certain pairs of properties of a particle in any system cannot be simultaneously known with perfect precision, but only within a region of uncertainty. One formulation of this principle is:

Δx × Δs ≥ h / (4π × m)

which states that the product of the uncertainty in a particle’s position (Δx) and the uncertainty in its speed (Δs) is always at least Planck’s constant (h) divided by 4π times the object’s mass (m). Now, there is something I must say
upfront. It is critical to understand that this uncertainty is not
due to deficiencies in our measuring instruments, but is built
directly into nature, at a fundamental level. When I say fundamental
I mean just that. One could say that, if God or Mother Nature really
exists, even He Himself (or Herself, or Itself) does not and cannot
know these properties with zero uncertainty. They simply do not have
a certainty to reveal to any observer, not even to a supernatural
one, should such an observer exist.
Yes, this is what I am
saying. Yes, nature is this strange.
The
Uncertainty Principle and Schrödinger’s Breakthrough
Another, more precise way of putting this idea is that you can specify the exact position of an object at a certain time, but then you can say nothing about its speed (or direction of motion); or the reverse: speed and direction can be perfectly specified, but then the position is a complete unknown. A critical point here is the reason we do not notice this bizarre behavior in our ordinary lives – and so never suspected it until the 20th century: the minimum product of these two uncertainties is inversely proportional to the object’s mass (that is, proportional to 1/m) as well as directly proportional to the tiny value of Planck’s constant h. The result is that large objects, such as grains of sand, are simply much too massive for this infinitesimally small uncertainty product to be measurable by any known or even imaginable technique.
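A quick calculation shows just how decisively mass kills the effect. This sketch evaluates the minimum uncertainty product h/(4πm) from the formulation above, for an electron and for an (assumed) one-milligram grain of sand:

import math

H = 6.626e-34   # Planck's constant, J*s

def min_uncertainty_product(mass_kg):
    """Smallest allowed value of (position uncertainty) x (speed uncertainty)."""
    return H / (4 * math.pi * mass_kg)

print(min_uncertainty_product(9.11e-31))  # electron: ~5.8e-5 m^2/s --
                                          # enormous on atomic scales
print(min_uncertainty_product(1e-6))      # sand grain: ~5.3e-29 m^2/s --
                                          # hopelessly unmeasurable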
Whew. I know. And just what does all this talk about uncertainty have to do with waves? Mainly it is that trigonometric wave functions, like sine and cosine, are closely related to probability functions, such as the well-known Gaussian, or bell-shaped, curve. Let’s start with the latter. This function starts off near (but never at) zero at very large negative x, rises to a maximum y = f(x) value at a certain point, say x = 0, and then, as though reflected through a mirror, trails off again at large positive x. A simple example should help make it clear. Take a large group of people. It could be the entire planet’s human population, though in practice that would make this exercise difficult. Record the heights of all these people, rounding the numbers off to a convenient unit, say, centimeters (cm). Now make sub-groups of these people, each sub-group consisting of all individuals of a certain height in cm. If you plot the number of people within each sub-group, the y value, against the height of that sub-group, the x value, you will get a graph looking rather (but not exactly) like this:

Here, the y or f(x) value is called dnorm(x). The value x = 0 represents the average height of the population, and each other x point (the points have been connected together into a continuous line) a greater or lesser height on either side of that average. You see the bell shape of this curve, hence its common name.
What about those trigonometric functions? As another example, a sine function, which is the typical shape of a wave, looks like this:

The resemblances, I assume, are obvious; this function looks a lot like a bunch of bell-shaped curves (both upright and upside-down) all strung together. In fact the relationship is so significant that a probability curve such as the Gaussian can be modeled using a series of sine (and cosine) curves, in what mathematicians call a Fourier transformation. So significant, in fact, that Erwin Schrödinger, following up de Broglie’s work, in 1926 produced what is now known as the Schrödinger wave equation – or equations, rather – which described the various properties of physical systems via one or more differential equations (if you know any calculus, these are equations which relate a function to one or more of its derivatives; if you don’t, don’t worry about it), whose solutions were a series of complex wave functions (a complex function or number is one that includes the imaginary number i, the square root of negative one), given the formal symbolic designation ψ. In addition to his work with Heisenberg, Max Born almost immediately followed Schrödinger‘s discovery with the interpretation of the so-called complex square of ψ, or ψ*ψ, as the probability distribution of the object – in this case, the electron in the atom.
It is possible to set up Schrödinger’s equation for any physical system, including any atom. Alas, for all atoms except hydrogen, the equation is unsolvable exactly, due to a stone wall in mathematical physics known as the three-body problem: any system with more than two interacting components, say the two electrons plus nucleus of helium, simply cannot be solved in closed form. Fortunately, for hydrogen, where there is only a single proton and a single electron, the proper form of the equation can be devised and then solved, albeit with some horrendous-looking mathematics, to yield a set of ψ, or wave functions – solutions, I should say, as there are an infinite number of them. The complex squares of these functions, as described above, give the probability distributions and other properties of the hydrogen atom’s electron.
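To make Born’s interpretation concrete, here is a minimal Python sketch using the simplest of those solutions, hydrogen’s ground state, whose wave function is ψ = e^(−r/a₀)/√(πa₀³), with a₀ the Bohr radius. The probability of finding the electron in a thin shell at radius r is 4πr²ψ², and a brute-force scan (the grid resolution is my own arbitrary choice) shows it peaks at the Bohr radius itself:

import math

A0 = 5.29e-11   # Bohr radius, in meters

def shell_probability_density(r):
    """4*pi*r^2 * psi^2 for hydrogen's 1s state."""
    psi_squared = math.exp(-2 * r / A0) / (math.pi * A0**3)
    return 4 * math.pi * r**2 * psi_squared

# Brute-force scan for the most probable radius:
radii = [i * 1e-13 for i in range(1, 3000)]
best = max(radii, key=shell_probability_density)
print(best)   # ~5.29e-11 m -- the Bohr radius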
The nut had at last
been (almost) cracked.
Solving
Other Atoms
So all of this brilliance and sweat and blood, from Planck to Born, came down to one bottom line: find the set of wave functions, or ψs, that solve the Schrödinger equation for hydrogen, and you have solved the riddle of how electrons behave in atoms.
Scientists, thanks to Robert Mulliken in 1932, even went so far as to propose a name for the squared functions, or probability distribution functions – a term I dislike because it still invokes the image of electrons orbiting the nucleus: the atomic orbital.
Despite
what I just said, actually, we haven’t completely solved the
riddle. As I said, the Schrödinger equation cannot be directly
solved for any other atom besides hydrogen. But nature can be kind
sometimes as well as capricious, and thus allows us to find side door
entrances into her secret realms. In the case of orbitals, it turns
out that their basic pattern holds for almost all the atoms, with a
little tweaking here, and some further (often computer intensive)
calculations there. For our purposes here, it is the basic pattern
that matters in cooking up atoms.
Orbitals.
Despite the name, again, the electrons do not circle the nucleus
(although most of them do have what is called angular momentum,
which is the physicists’ fancy term for moving in a curved path).
I’ve thought and thought about this, and decided that the only way
to begin describing them is to present the general solution (a wave
function, remember) to the Schrödinger equation for the hydrogen
atom in all its brain-overloading detail:
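One standard version of it (conventions vary slightly from textbook to textbook) is:

ψ_nℓm(r, θ, φ) = √[(2/(na₀))³ · (n−ℓ−1)!/(2n(n+ℓ)!)] · e^(−r/(na₀)) · (2r/(na₀))^ℓ · L^(2ℓ+1)_(n−ℓ−1)(2r/(na₀)) · Y^m_ℓ(θ, φ)

where a₀ is the Bohr radius, L is an associated Laguerre polynomial, and Y is a spherical harmonic (the piece that carries the m dependence).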
Don’t
panic: we are not going to muddle through all the symbols and
mathematics involved here. What I want you to do is focus on three
especially interesting symbols in the equation: n,
ℓ, and m. Each
appears in the ψ function in one or more
places (search carefully), and their numeric values determine the
exact form of the ψ we
are referring to. Excuse me, I mean the exact form of the ψ*
ψ,
or squared wave function, or orbital, that is.
The importance of n, ℓ, and m lies in the fact that they are not free to take on any values, and that the values they can have are interrelated. Collectively, they are called quantum numbers, and since n is dubbed the principal quantum number, we will start with it. It is also the easiest to understand: its potential values are all the positive integers (whole numbers), from one on up. Historically, it roughly corresponds to the orbit numbers in Bohr’s 1913 orbiting model of the hydrogen atom. Note that one is its lowest possible value; it cannot be zero, meaning that the electron cannot collapse into the nucleus. Also sprach Zarathustra!
The next entry in the quantum number menagerie is ℓ, the angular momentum quantum number. As with n it is restricted to integer values, but with the additional caveat that for every n it can only have values from zero to n − 1. So, for example, if n is one, then ℓ can only equal zero, while if n is two, then ℓ can be either zero or one, and so on. Another way of thinking about ℓ is that it describes the kind of orbital we are dealing with: a value of zero refers to what is called an s orbital, while a value of one means a so-called p orbital.
What about m, the magnetic quantum number? This can range in integer steps from −ℓ to ℓ, and its allowed values enumerate the orbitals of a given type, as designated by ℓ. Again, for an n of one, ℓ has just the one value of zero; furthermore, for ℓ equals zero m can only be zero (so there is only one s orbital), while for ℓ equals one m can be one of three integers: minus one, zero, and one. Seems complicated? Play around with this system for a while – or try the little exercise below – and you will get the hang of it. See? College chemistry isn’t so bad after all.
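If playing around on paper gets tedious, a few lines of Python can enumerate the allowed combinations exactly as the rules above dictate (ℓ runs from 0 to n − 1, and m from −ℓ to +ℓ):

ORBITAL_LETTER = {0: "s", 1: "p", 2: "d", 3: "f"}

for n in (1, 2, 3):
    for l in range(n):                  # l = 0 .. n-1
        ms = list(range(-l, l + 1))     # m = -l .. +l
        print("n=%d l=%d (%s): %d orbital(s), m = %s"
              % (n, l, ORBITAL_LETTER[l], len(ms), ms))
# n=1: one s orbital; n=2: one s + three p; n=3: one s + three p + five d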
* * *
Let’s summarize before moving on. I have mentioned two kinds of orbitals, or electron probability distribution functions, so far: s and p. When ℓ equals zero we are dealing with an s orbital, while for ℓ equals one the orbital is type p. Furthermore, when ℓ equals one, m can be either minus one, zero, or one, meaning that at each level (as determined by n) where they exist there are always three p orbitals, and only one s orbital.
What about when n equals three? Following our scheme, for this value of n there are three orbital types, as ℓ can go from zero to one to two. The orbital designation when ℓ equals two is d; and as m can now vary from minus two to plus two (−2, −1, 0, 1, 2), there are five of these d type orbitals. I could press onward to ever increasing ns and their orbital types (f, g, etc.), but once again nature is cooperative, and for all known elements we rarely get past f orbitals, at least at the ground energy level (even though n reaches seven in the most massive atoms, as we shall see).
Explaining the Periodic Table. Atomic Orbitals.
It hopefully is now beginning to make some sense. Each horizontal row, or period, in the table represents a specific principal quantum number, or n, starting with one at hydrogen (H) and going up to seven at francium (Fr) in the seventh row. As we move from left to right across a period, we are filling the elements in said period – by which I mean their various orbitals – with electrons. For n equals one there is only ℓ equals zero, and thus m equals zero, meaning we have only an s orbital to fill, in hydrogen and helium – each orbital can hold a maximum of only two electrons, for reasons we will get to. For the period just below hydrogen and helium, where n equals two, ℓ can equal either zero or one, meaning we have one s orbital and three p orbitals to fill, the latter with m values of minus one, zero, and one; this gives us a total of one + three = four orbitals at this level, each orbital containing a maximum of two electrons, to give us the eight elements in this row / period, Li through Ne.
The title of this book, The Third Row, refers to the period beginning with sodium (Na) and ending with argon (Ar). The first two columns, or groups, known as the alkali metals and the alkaline earths, represent s orbitals being filled, while the last six groups – of which the final two are called the halogens and the noble gases – involve p orbital filling. The central, sunken region, the transition metals, consists of d orbitals being occupied, while the two offset rows at the bottom are f orbitals being filled. We will get to the reasons why the d and f orbital periods are sunken / disconnected later. The first question you should ask is: how many electrons does it take to fill an orbital? From what I’ve said and you’ve just seen, the answer is two, but we can do a little better than that and explain why. It turns out I have been holding out on you.
Well, no, I haven’t really. We are following the historical development of quantum mechanics, and now is the time to include some important concepts I have been ignoring so far. It turns out that there is a fourth quantum number, known as s, or spin (do not confuse this with s orbitals!), which comes about when quantum mechanics is reformulated using Einstein’s special relativity. This turns out to be necessary because the electrons in an atom move at a significant fraction of the speed of light (they can’t move as fast as or faster than light, as relativity also says), and so relativistic effects cannot be ignored. This new quantum number s is of course also quantized, and so can have only one of two values: +½ħ or −½ħ, where ħ, or h-bar as it is called, is equal to h/2π. This work was done by a number of individuals, some of whom we have already met, but the main new name to enter here is Wolfgang Pauli.
Working within the quantum theory of his day, Pauli was able to show in 1925 that no two electrons in an atom can have the same four quantum numbers n, ℓ, m, and s. More broadly, the fundamental particles known then (and now) could be divided into two camps: fermions, which obey the Pauli Exclusion Principle, and bosons, which do not. Later it came to be realized that fermions are the particles which constitute the main mass of matter, such as electrons, protons, and neutrons, while bosons are force-carrying particles – the glue, if you like, holding the whole menagerie of particles together. Photons are a good example of bosons: they carry the electromagnetic force. And there are others, as we learned from the last chapter but did not elaborate much on, such as the carriers of the nuclear forces.
If
an orbital can hold only two electrons (same n,
ℓ, m, but with
different s’s) then we can see how the
atoms can be built up, step by step, filling in the lower level
orbitals and then expanding out to increasingly higher, or more
energetic, level ones. The entire periodic table finally snaps into
focus, and we find, to our astonishment, that we can grasp its
rationale, or at least some of it. And yet, yes, I still haven’t
covered one of the most important topics of all. What do these
orbitals look like and how do they behave? And why should we care?
The Shapes and Behaviors of the Atomic Orbitals
What
I am about to show you can be misleading, or at least confusing. I
will talk about the shapes and sizes of orbitals, and even show
pictures of them. The misleading part is in thinking that orbitals,
in and of themselves, are actual, concrete things, filling space like
any other material object. This image, or illusion I should say,
though easily fallen for, is something we have to resist if we are
truly to understand orbitals, their meanings, and the functions they
serve.
A good place to start is with the s orbital where n equals one, i.e., the 1s orbital – first because it is the simplest one in shape, and also because there is an s orbital for every value of n, that is, for every row in the periodic table. Let me begin by reminding you of the Gaussian distribution function, shown several pages earlier:
I do this because this function is essentially the shape of the 1s orbital. The only difference is that it is a two-dimensional figure, while of course orbitals are three-dimensional entities. We should redraw the 1s orbital more appropriately as something like this:
Can you see how this smeared-out sphere is the 3D equivalent of the Gaussian curve? It is densest / highest at the center, and then exponentially drops off from there; you get the picture if you follow the density of the dots, ever thinning but never quite reaching zero as you go out from the center (the dots do not represent electrons themselves, but the probability of one being found at a certain place). Incidentally, the same picture fundamentally applies to all s orbitals, not just those where n equals one; for successively higher values of n (2, 3, 4, etc.) these still have their highest density at the center, but the exponential decay falls off progressively more slowly, and spherical nodes of zero density start to form shells in the distribution.
What about p orbitals? Remembering that there are three of them, the best description is as dumbbell shapes, one dumbbell for each 3D axis (x, y, z), each having a node of zero probability at the center of the atomic nucleus:
Displayed here is a px orbital – more specifically the 2px, as we only start having these orbitals when the principal quantum number is two or higher. There are two other such orbitals, the 2py and the 2pz, which have the same shapes but are oriented along their respective axes. And again, as n goes up, these orbitals become larger and more spread out, and develop nodes.
Technically what comes next are the d orbitals, but again, a reminder before we proceed. These orbitals are not material entities in any definition of the word; they are more akin to the various states describing the electrons in an atom. If we were to propose an (admittedly strained) analogy with our own world of space and time, saying that an electron occupies a given orbital is rather like saying a car is going so many miles per hour down a highway. In this analogy, talking about an empty orbital is akin to talking about the state of going so many mph, although no vehicle may actually be in that state. I insert this caution precisely because we will at times speak of orbitals as though they really are physical, even solid, manifestations, for example when we combine them to make new orbitals; but this is just a convenient way of talking about them – don’t lose sight of what they really are, just the complex squares of wave functions found by solving the Schrödinger equation for the hydrogen atom.
With this warning in mind, on to d orbitals. These exist only for n equals three or higher, so they don’t appear at all until we reach the third row of the periodic table, which, as I have said, runs from sodium (Na) to argon (Ar). Here is also where we find ourselves faced with the puzzle of why the transition, or d-block, elements don’t begin here but only with the fourth period (n = 4). There are five such orbitals for every period that has them, but unfortunately they cannot easily be described in a few words, so the only thing to do is show them in their full splendor:
Here, because it makes them easier to visualize, instead of the fuzzy pictures made from dots that I used for s and p orbitals, I am using solid shapes (bearing in mind yet again my warning that orbitals themselves are not solid, material things) which enclose approximately ninety percent of each orbital’s probability distribution. This is as good a place as any to solve the mystery of the sunken d-block elements in the table, not to mention the offset f-block (lanthanides and actinides) as well.
In the beginning was the solution to the Schrödinger equation for
the hydrogen atom:
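Eₙ = −13.6 eV / n²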
in which the energies of the orbitals depend solely on n, the principal quantum number. Recall, however, that some tweaks and calculations are needed as we move upward through the elements, because they have multiple electrons and so we can’t solve the equation for them directly. One of those changes is that, as soon as we start filling the orbitals with electrons, the kinds of orbitals at each level – s, p, or d – begin to diverge in energy, with the higher ℓ orbitals rising above their lower siblings. Even by the time we get to n equals two in the table, the p orbitals have higher energies than the s, and for n equals three the d orbitals are higher still.
This is what accounts for the sunken transition elements. By the time n equals three, the 3d orbitals lie at higher energies than the 4s orbitals, which we naively expected to lie above them. Thus, we must wait for the 4s orbitals to be filled (which they are in the elements potassium, K, and calcium, Ca) before filling the 3d orbitals in the first row of transition metals, which runs from scandium (Sc) to zinc (Zn); only then can we move on to the 4p elements, from gallium (Ga) to krypton (Kr). A similar, even greater disparity in energy accounts for the f-block elements, the lanthanides and actinides, which is why they are set off below the main body of the table. It is a good thing nature doesn’t go as far as g orbitals, or our pretty table would become horrifically complicated!
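The energy ordering that results is captured by the well-known “n + ℓ” rule of thumb (often called the Madelung rule): orbitals fill in order of increasing n + ℓ, with ties broken by smaller n. It is an empirical guide rather than an exact law, but a few lines of Python reproduce the filling order, d-block delay and all:

LETTER = {0: "s", 1: "p", 2: "d", 3: "f"}

orbitals = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # n + l, then n

print(" -> ".join("%d%s" % (n, LETTER[l]) for n, l in orbitals))
# 1s -> 2s -> 2p -> 3s -> 3p -> 4s -> 3d -> 4p -> 5s -> ...
# 4s fills before 3d: exactly why the d-block sits one row "late".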
Let
us recapitulate before moving on. We began this chapter by noting
that, in general, reality is not scale invariant, meaning that the
appearances and behavior of objects, and even the underlying physical
laws for them, appear to change as we move either to the world of the
immensely large or infinitesimally small. For the latter, we
discovered that nature at this level obeys the laws of quantum
mechanics, a system of physics that was mainly developed between 1900
and the 1930s. Electrons are so tiny that they fall well within the
range of this new system of physics; for example, they can not move
in simple orbits about the atomic nucleus as planets do around the
sun, but rather, their behavior is determined by wave functions
derived by solving the Schrödinger equation for the hydrogen atom
(and then adding some extra tweaks and calculations). All of this
has been a considerable trek to understanding the whys and hows of
the periodic table of elements. And so, take a breather, and we
shall see where this will take us.
Word of Warning from Forty Years Ago by Jacob Bronowski in "Ascent of Man"
"Knowledge is not a loose-leaf notebook of facts. Above all, it is a responsibility for the integrity of what we are, primarily of what we are as ethical creatures. You cannot possibly maintain that informed integrity if you let other people run the world for you while you yourself continue to live out of a ragbag of morals that come from past beliefs. This is really crucial today. You can see it is pointless to advise people to learn differential equations, or do a course in electronics or computer programming. And yet, fifty years from now, if an understanding of man's origins, his evolution, his history, his progress, is not the commonplace of the schoolbooks, we shall not exist. The commonplace of the schoolbooks of tomorrow is the adventure of today, and that is what we are engaged in."
That was in 1973. It is now 2013, meaning that we have but ten years left to make this vision a reality or we are all in peril. I think Bronowski was pessimistic in his prophecy, but there must be some time period in which it is true. We may not have but a decade, but certainly only decades before it will be true, which means that we must start now if there is to be any realistic hope. Less than half of Americans accept Darwinian evolution. Many don't trust scientific and technological progress even though they themselves benefit from it.
Tuesday, April 30, 2013
A Modest Proposal
I've been following events since the Newtown massacre and have an idea which might both help gun control AND uphold the 2nd Amendment. Note first the entire wording of the Amendment:
"A well regulated militia being necessary to the security of a free state,
the right to keep and bear arms shall not be infringed."
The first phrase, usually overlooked, sounds a little puzzling. But it is easily understood in the context of the times. The fledgling United States had only a small federal government, and could not afford a standing army suitable to its needs. So in 1792 the government passed (one of a number of) militia acts: this one required all able-bodied, law-abiding males between 18 and 45 to join their state's militia, to provide their own guns and supplies, to be trained (well regulated), and to be subject to mobilization in time of war -- and, according to the Constitution, to be under the ultimate control of the federal government.
Some of this could be reinstituted. My first thought is that those who purchase or obtain guns (legally) be members of the NRA or a similar organization, that the NRA would be responsible for background checks on such individuals (perhaps with federal dollars) AND be held liable if a gun crime is due to inadequate checks. The same NRA or similar organization would also be responsible for holding gun training/safety programs, which would be mandatory for owners. Most important, gun owners (I would make exceptions here) would be available for call-up and mobilization in times of war -- which, however, would only be done as a last-resort measure, as we are not (for the most part) dealing with professionally trained and equipped soldiers. I would also drop the mandatory gun ownership from the 1792 act.
With rights come responsibilities, as we all know. This is just an initial proposal and I invite debate and discussion on it.
Tuesday, January 22, 2013
Chapter Five of Wondering About
Wondering About
Ourselves
One of the themes of
this book is that if we are to satisfy our curiosity about the
universe around and within us, we will need to use our imaginations
to the best of our abilities, because the universe as we perceive
it with our physical senses will only take us so far. We saw this first
in chapter two, in which our robotic exploration of the solar system
revealed worlds which we had not foreseen, at least partly because we
had not completely unleashed our imaginations on the possibilities.
In chapters three and four we were forced to use our imaginations
again to picture how the world of the ultra-tiny, or atoms and
electrons, works, by suspending our common-sense ideas and
perceptions so that such things could become real and not mere
philosophical concepts, tangible things we could get our minds around
and acquire a sense of their true nature. My point is that in these
journeys we have gained a certain intellectual satisfaction – real
questions leading to real answers – but again we are being warned
that clinging to the world as modeled by our eyes and visual cortexes
is a habit we are going to have to resist, one way or the other, if
we expect to keep making progress.
This chapter is on
biology, which is why I begin with this emphasis on imagination, for
with the possible exception of quantum mechanics, I believe nowhere
is imagination more required than on the subject of life. Living
things, their origins, their myriad shapes and actions combined with
their underlying foundations, and their marvelous,
interdependent, and beautiful adaptivity to all environments they
find themselves in, form a series of mysteries that will not yield to
the unimaginative mind, however much plodding thought is brought to
bear on them. Unconvinced? Then let’s start with the big
question, the question even the most renowned scientists have been
beating their heads against right up to today: What is life? We see
it practically everywhere we look (at least on this isolated, tiny
planet) and we generally find we have no difficulty in distinguishing
it from the world of the non-living.
I think at this
point most of us would stop and agree, perhaps after some careful
thought, that there is something essential about living things. The
impression of this essentialness, this intentionality
I will call it, is indeed overwhelming, and easily hits us on the
head as the prime divider between life and everything else. All
non-living things seem to follow the laws of physics in a dumb,
obvious way: a pebble thrown into the air traces out a perfect
mathematical parabola as it interacts with the law of gravity,
finally striking earth in a completely predictable place at a
completely predictable time, given that we know its initial speed and
angle with respect to the ground. A pebble thrown into the air …
but what about a butterfly? When we watch a butterfly we put away
our calculators and measuring instruments, and simply watch in
wonder. A butterfly doesn’t blindly follow a parabola, it –
well, it seems to do whatever it decides to do, which is why making
measurements and calculations are pointless. A butterfly flies away,
perhaps never to be seen again. Or maybe it alights on a flower and
gazes at us, seemingly as puzzled by us as we are by it. We
are just certain there’s something going on behind those tiny eyes.
Something inexplicable. Something essential. Something that gets
down to what makes a pebble just a pebble but a butterfly something
... at the risk of being misunderstood, miraculous.
We still have taken
only a few steps in our attempt to define life, however. For the
next question is, is our butterfly, and by implication ourselves,
truly miraculous?
* * *
Let us try a
different tack. Richard Dawkins, in The
Blind Watchmaker, proposes
a definition of biology that is unusual but which he claims to be
perfectly workable: biology is the study of complex things that
appear to have been designed for a purpose. I speak of
intentionality, but complexity plus the appearance of design provides
us with another way of describing it, perhaps an even better way to
make progress. Dawkins’s point when applied to our butterfly is
threefold; first, like all life forms it is far, far more complex
than the pebble, far more complex than our solar system even; second,
not only does it act as though it has a mind capable of intentions
and a body capable of carrying those intentions out, it gives every
appearance in the world of having been designed that way, designed to
fly (as well as many other things); and third, and most
significantly, there is an intimate relationship between complexity,
intentionality, and design. Flying is not a simple thing, not the
way butterflies do it at any rate, so complexity plus design, or as I
call it, intentionality, appears to improve our handle on what we
mean when we say something is alive.
But is it enough?
Or for that matter, is it really true? Living things do not always
appear to be complicated. Anyone dissecting a butterfly – no easy
task, admittedly – would marvel at its many interlocking intricate
parts, but what about the simple amoeba? Or a bacterium? At first
sight, such things do not appear to be particularly complex, but we
all agree that they are alive; that, like the butterfly, they appear
to move under the guidance of some internal intentions, some essence
which non-living things, even complex ones like computers, do not
possess.
Most of us, I
suspect, will find ourselves easily moving along some kind of
reasoning like this, perhaps without thinking about it very much. It
does seem to handle our common sense objection to calling complex
things like computers and airplanes biological while keeping “simple”
things like bacteria and amoebas in the same camp as butterflies and
human beings. Living things, from the simplest up to the most
complex, really do seem to have some special quality or essence that
ordinary matter lacks, whatever else that matter has. We almost can
feel it there, at the most basic levels, and we are certain that we
would never have any difficulty in distinguishing a living thing from
the non-living, based on that feeling. Wherever in the universe we
might find ourselves, the question of whether we were amongst life or
not would appear to be elementary.
* * *
Or would it? To our
astonishment, our common sense view of things biological begins to
disintegrate the moment we apply curiosity and imagination to it, to
dissect it and look into it at the finest levels science allows us to
probe. In doing so, try as we might, we never encounter this special
essence or quality which seems so obvious at first sight. Instead,
what we do find, when we break out our detectors and other scientific
instruments, is that living things are composed of atoms and
molecules like everything else, albeit not in the same elemental
proportions, yet acting according to the same laws of physics and
chemistry as everything else. The mechanical, Newtonian universe of
objects and forces, modified by quantum effects on the smallest
scales, appears to be all that is needed to explain why butterflies fly,
or mate, or find food, or stare at us with seemingly the same curiosity
that we feel gazing upon them. All our initial impressions, and all
the stories that have been told and retold aside, there appears no
miraculous special something that we can affix to or inject matter
with to make it come alive; no energy fields, no forces, no
protoplasm, no elixir of the living, nothing we can pump into Dr.
Frankenstein’s reassembled parts of corpses which will make it
groan and open its eyes and have thoughts and feelings and break its
bonds to move in accordance with them. There is nothing like that
whatsoever. No, whatever it is that characterizes life lies
elsewhere.
But the impression
of such a force is so strong, so deep, so instinctual that, try as we
might, we cannot simply abandon it without at least wondering why it
is there, where it comes from, and what it tells us. Something
is there, of that there can be no question.
Intentionality.
Complexity. Design. Try to put aside your ordinary impressions and
perceptions of things, and seed your mind, germinate in your mind,
take root and push out of the soil and put forth leaves and vines in
your mind, the theme that to satisfy our curiosity we must look at
the world from a different perspective, the one that imagination
unlocks. Very often, we find that when we look closely, what we
thought we were seeing fades away, yet is replaced by something just
as amazing – no, more so.
Let us start with
the simplest of things that could be called living. Consider the
virus. Here is something both considerably smaller and simpler than
the smallest, simplest bacterium, all biologists would agree. But on
the most microscopic of scales, that of individual atoms and
molecules, even the simplest virus turns out to be a machine of
remarkable complexity. At the very least it has to be able to
recognize a host cell it can parasitize, whether it is a cell in your
body or a bacterium (in which case it is called a bacteriophage),
somehow figure out the molecular locks and other gizmos which cells
use to protect themselves from invasion, penetrate the defenses, then
usurp the molecular machinery the cell uses to replicate itself,
perverting the cell into a factory for producing many more copies of
the virus, copies which then have to figure out how to break out of
the cell in order to repeat the cycle on other cells or bacteria, all
the while avoiding or distracting the many other layers of defenses
cells and bodies use to protect themselves from such invasions.
Biologists still
debate whether viruses can be legitimately counted among the various
kingdoms and domains of life, but there is no doubt that their hosts,
whether bacteria or other single celled organisms or multicellular
organisms, can be classified in the great Tree of Life, from which
all other living things, be they plants, animals, fungi, or you,
diverge. And what dominates this tree, right down to the most
primitive beginnings we have yet been able to detect, is a level of
complexity that we simply do not encounter among the great many more
things that don’t belong on this tree, from rocks to stars to solar
systems to galaxies.
So after all this,
have we cornered our quarry? We started with the at-first-sight idea
that life possessed some special quality or substance or essence,
then realized that we could not find that essence however hard we
looked. But what we did find was that living things, even the
simplest of them, showed a level of complex organization well beyond
the most complex of non-living things.
Life is
special. I don’t want to lose sight of that. We are fully
justified in our grand division of matter into the non-living –
things we explain only by the laws of physics and chemistry at a
simple level – and the living, all the things to which we must also
apply whatever biology has to teach us. What I have been trying to show is
that, whatever that specialness is, it isn’t as obvious as it
appears upon first sight. It is more subtle, involving a number of
characters and qualities, one of which is complexity and another the
appearance of design or purpose.
* * *
Again, I say that
life truly is special. It is early May, and I have just come home
from a walk through Pennypack Park, one of the many lovely natural
places which skirt the city where I live, Philadelphia, one of
several cities along the eastern edge of North America. I would
love one day to walk on the moon or on the red soil of the planet
Mars, but what I have just experienced would be utterly lacking in
those dead, albeit fascinating places. In the spring in this part of
the world, as in many other parts of our planet, every sense is
roused to life by the call of the wild. Not only are you surrounded
by the verdant green of new buds and flowers and grasses, but also by
a cacophony of whistles, chirps, tweets, and other rhythmic sounds
which reminds you that new life is all about, some of it
still rustling itself to full wakefulness after winter but much of it
already in the air and alit on the many twigs and branches. And even
without vision and sound, you can still smell the musty beginnings of
stirring things, the scents of enticing blossoms and irritating
pollens, and you can still feel the grass between your toes and the
softness of young leaves on your skin as you brush by the
undergrowth.
Here I have spoken
of complexity and the appearance of purpose and meaning, and perhaps
that is exactly what our scientific mission into the heart and soul
of biology requires, but this is one place where, I have to submit,
we will never really capture the essence of what we are studying.
Life is something that has to be experienced, and only living things
themselves have the capacity, as far as we know, to experience
anything. So, in a sense, our quest to satisfy our curiosity begins
with the admission that, at least for the world of the living, we
never can completely satisfy it.
Am I going to give
up, then? No, because, as I have maintained up to this point,
curiosity combined with imagination and the scientific method can
undo any knot, unlock any riddle, however baffling and impervious it
may seem. I have even suggested a starting place, this idea of
complexity combined with apparent purposefulness, an idea I hope to
build upon and demonstrate just how powerful it is. I think we can
agree that it is a good starting place. Biological things, even the
simplest of them, are highly complex, we now see, and there does seem
to be something to this notion of being imbued with purpose, however
that comes about. If we can make some progress on this front, then
perhaps in the end we will satisfy our intellects after all, as
impossible as that seems looking at things from their beginnings.
* * *
Actually, I would
like to strike out first on a different front than is typical in
tomes on biology. I would like to retreat back to simple matter, of
the kind we started to explore in chapter four, and work up to what I
see as an essential question: can the laws of physics and chemistry,
as we have come to know them, even provide a platform for the vast
complexity of living things? In other words, do atoms, those basic
building blocks of all things material, even allow for the enormous
intricacies let alone purposefulness of the biological world?
This is a very good
question for it turns out, at least for the great majority of atoms
that we investigate toward this end, the answer is a clear and
resounding No.
Try as hard as we can, we find that when we begin assembling most
atoms into more and more complicated molecules or other structures,
they aren’t very cooperative in this process. No, things fall
apart, often violently, even if we can figure out a way of putting
them together. For the great majority of the kinds of atoms to be
found in nature, constructing an edifice of complexity sufficient for
life is a hopeless task. They simply will not stay put and do as
they are told.
All that is with
one, yes really only one, fortuitous exception, and one that we began
to explore in the previous chapter. The carbon atom. Atomic number
six on our periodic chart, a chart which now runs to over a hundred
elements if we include the extremely short-lived ones humans have made in
laboratories, is truly special. Carbon is what makes it all
possible, to the point where we can confidently say that if this one
lone atom out of the dozens had proved impossible for the
universe to produce in any significant quantities, neither you nor I
nor any of the myriad millions of species of life we share this
planet – or perhaps any planet – with would have any chance at
existing. Carbon alone is not sufficient for life, but it is
absolutely necessary. Of all the other elements in the biological
stew, perhaps substitutes could have been found, but no element,
under any conditions imaginable, appears a likely alternative to
carbon. This is because no other element yet discovered or made
could take its place as the backbone of the sizes and varieties of
the molecular components needed to make life, even the simplest forms
of life, possible. I will even go so far as to say that if an
alternative form of life is ever found, if carbon isn’t at its
roots then neither is chemistry.
Carbon, indeed, is
so important that it is the only element whose existence was
predicted by the fact that living things do exist. All of the
naturally occurring elements in the universe today come from one of
two sources: either they were made in the first few minutes of the
universe’s existence, in the Big Bang which we will come to in a
later chapter, or they were made in the cores of the many trillions
of massive stars that have come and gone since the beginning. The
reason for both is the same: larger, more complex atomic nuclei –
the core of protons and neutrons which make up the center of atoms
and ultimately determine their respective element’s properties –
have to be made from the simpler ones, ultimately from the simplest
of them all: hydrogen, atomic number one, a single proton (sometimes
combined with one or even two neutrons). This is done by smashing
two smaller nuclei together to make the larger one, a process which
requires extremely high pressures and temperatures because all nuclei
are positively electrically charged and ordinarily repel each other
unless they can be brought close enough together to be captured by
something called the strong nuclear force. Such conditions existed
naturally only in the moments after the Big Bang and today in the
hearts of stars, particularly the larger, hotter stars. Essentially,
to create a carbon atomic nucleus of six protons and six neutrons,
what must happen is that three helium four nuclei, each consisting of
two protons and two neutrons apiece, must be welded together in
exceedingly short order, within a millionth of a millionth of a
second, and then held together until they can relax and become
stable. This so-called “triple-alpha” process (an alpha particle
is a helium four nucleus) would itself seem to be an insurmountable
barrier to carbon and all the elements beyond, but surprisingly that
turns out to be not so: the pressures and temperatures which come to
exist in large stars – stars large enough to explode or somehow
spew their core substances into interstellar space, making all those
large atomic nuclei available to new generations of stars and planets
such as our own, not to mention our own existence – are sufficient
to guarantee this process will happen enough to account for all the
carbon we are going to need.
With one problem.
This problem lies in the fact that our newborn carbon nucleus is
ringing and pulsing with so much energy that it should almost
instantly fragment into smaller pieces. What we need is some kind of
stable “resonance” at such high energies, which will allow the
newly born nucleus to hang together just long enough to relax by a
variety of processes into a lower, energetically stable state. But
when the details of Big Bang and stellar nucleosynthesis were being
worked out in the 1940s and ’50s, no such resonance state was
known, nor was there any theoretical reason – theoretical from the
standpoint of physics at that time that is – to think one should
exist.
The problem was
solved by what is, to this day and to my knowledge, the lone instance of
the so-called Anthropic Principle being used to successfully explain
an actual physical fact. If you are not familiar with it, the
Anthropic Principle, in its most basic, common-sense form, is simply
the statement that since we exist in this universe, the laws
governing it must be compatible with our existence (this seems
obvious, but there are other versions of the Anthropic Principle
which are more controversial). In this case, what the principle
insists is straightforward and simple: it insists that since the
element carbon does exist in sufficient quantities for our existence,
there must be a resonance energy level available for the newly bred
nuclei. The Anthropic Principle is not an argument physicists are
usually enamored of, but one group was sufficiently impressed by the
line of reasoning to take an actual look and see if the resonance
level really did exist. Lo and behold, they found that it did. In
fact the discovery not only explained the existence of carbon in
sufficient quantities in the universe, but also of the many elements
that are in turn built up from it: oxygen, neon, silicon, indeed
basically the entire periodic zoo of elements we find, to varying
degrees of magnitude, present in the universe today.
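To put rough numbers on all this (standard atomic masses, quoted from
memory, so treat them as a back-of-envelope check rather than gospel):
three helium-4 atoms outweigh one carbon-12 atom slightly, and that
excess mass appears as energy the newborn nucleus must shed:

$$3\,m({}^{4}\mathrm{He}) \approx 3 \times 4.0026\ \mathrm{u} = 12.0078\ \mathrm{u}, \qquad m({}^{12}\mathrm{C}) \equiv 12\ \mathrm{u}$$

$$\Delta m \approx 0.0078\ \mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 7.3\ \mathrm{MeV}$$

The resonance the argument demanded – now called the Hoyle state – sits
at roughly 7.65 MeV above the carbon-12 ground state, just above the
combined energy of the three alpha particles, which is precisely what
lets the freshly fused nucleus hang together long enough to relax.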
* * *
So, carbon exists.
It isn’t a very common element, and the fraction of it that resides
in our universe in conditions where life can form is
relatively small. But it is enough to account for, not just you and
me, but all of the manifestations of biology all about us, almost
anywhere you go on this planet, and probably for life on whatever other
worlds or places we may one day find it. The next question is, what is it
about carbon that gives it its uniqueness, its specialness, its
ability to construct the large and complex and seemingly purposeful
phenomena that we call living things? What does carbon have that no
other atom seems to possess, however hard we play with them and build
castles in the air from them? Why do these carbon-based organisms
which are found with such fecundity on Earth and hopefully on at
least some other planets or moons or asteroids or places we’ve yet
to think of, exist, continue to exist, and have existed for so long?
Yes, carbon is special, but special in what ways, so many ways that
no other atom has a prayer of filling its role?
The answer to this
question comes from a combination of chemistry and physics, some
of which I have already explored in the last chapter. It involves
two separate characteristics of carbon, chemical as well as physical
characteristics, characteristics which carbon and carbon alone
possesses, characteristics which we can never even mock up in any
other element, however hard we try.
One of those
characteristics is smallness. It is not coincidence that the great
majority of the atoms which constitute life are to be found at the
top of the periodic table, where the smallest and simplest of atoms
reside. One reason for that no doubt is that small atoms are, due
to the processes which forge them, simply more common than large
ones. But another reason, the key reason, is that smallness means
that these atoms can come much closer to each other in the
bond-forming process, resulting in bonds that are much stronger and
more stable than larger atoms can form. It is, in fact, well accepted
that the small sizes of the first and second rows of the periodic
table account for much of the uniqueness of their chemistry,
especially in the ways they differ from their heavier cousins, even
in the same column. To give an example, sulfur, selenium, and
tellurium are much more similar to each other than to the first member
of their column, oxygen. This is a statement which could be made for
nitrogen and boron as well, and even, although to a lesser extent,
lithium and fluorine. Small atoms make for short, strong bonds,
something necessary if we are to build up to the size and complexity
of living things and have them remain stable. Even a structure as
small as a bacterium demands a level of complexity which only carbon
and other small atoms can provide.
So smallness is
important, but it is not sufficient. The reason for this can be
found by examining the other elements in the first row, for example
hydrogen, which can bond with one and only one other atom; usually
another hydrogen atom, making the molecule H2,
which is almost entirely what we are dealing with when working with
hydrogen on the scale of pressures and temperatures we are accustomed
to. Likewise, nitrogen and oxygen, also essential to life, appear to
form stable diatomic molecules, N2
and O2,
which do not spontaneously join together into longer, more
complicated structures but make up the most common constituents of
this planet’s air we breathe, which is about 78% N2
and 21% O2.
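As a throwaway check on those percentages, here is a back-of-envelope
sketch in Python. The roughly 1% of argon filling out the remainder is my
addition (a standard figure), not something stated above; the molar
masses are the usual textbook values.

# Average molar mass of air from its composition: ~78% N2, ~21% O2,
# with roughly 1% argon making up the rest. Masses in g/mol.
composition = {"N2": (0.78, 28.0), "O2": (0.21, 32.0), "Ar": (0.01, 40.0)}

avg = sum(frac * mass for frac, mass in composition.values())
print(f"average molar mass of air ~ {avg:.1f} g/mol")  # ~ 29 g/mol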
Following this line
of argument, shouldn’t stable C2
molecules exist also, thereby undermining this theme of small atoms
making large, structurally stable molecules, and once again pulling
the rug out from underneath our feet in our quest to make the large,
complex yet stable molecules and molecular edifices that biological
things demand? Here is the interesting part, however; the neat trick
by which nature refuses to be obvious but instead manages to provide
us with exactly what we were looking for. For it turns out that C2
does not (or only rarely) exist, indeed is not normally stable, and
once again we are allowed to proceed in the directions biology calls
upon us to follow.
O2
and N2
are stable due to the smallness of oxygen and nitrogen atoms, but
there are limits to how large you can build these small, compact
molecules. These limits are inherent in the kinds of bonds that atoms
can form with one another. In chapter four, I introduced the idea of
the molecular orbital as the bond between two atoms created by the
combination of the atoms’ atomic orbitals. The strength of the
resulting bond depends on how well the atomic orbitals can overlap in
space. This is where smallness comes into play. The bond between
two hydrogen atoms is very strong because these atoms are very small
and can approach each other quite closely, allowing for maximum
overlap.
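Since this strength-versus-distance claim is quantitative at heart, here
is a small numerical sketch. The closed-form overlap of two hydrogen 1s
orbitals separated by a distance R (in atomic units) is the standard
result S(R) = e^(−R)(1 + R + R²/3); the code below simply evaluates it,
so take the framing, not the chemistry, as mine.

import math

def s_overlap(R):
    # Overlap integral of two hydrogen 1s orbitals a distance R apart
    # (R in bohr). Standard closed form: exp(-R) * (1 + R + R^2/3).
    return math.exp(-R) * (1.0 + R + R**2 / 3.0)

for R in [0.5, 1.0, 1.4, 2.0, 4.0, 6.0]:
    print(f"R = {R:3.1f} bohr  ->  S = {s_overlap(R):.3f}")

# The overlap -- and with it the bond strength -- dies off rapidly as the
# nuclei separate, which is why small atoms, able to approach closely,
# make the strongest bonds.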
A picture being
worth a thousand words, let me recapitulate some of the material from
the preceding chapter about molecular bonds. If you’ll remember
the case of the H2
molecule, we explained the bond as the overlap of s orbitals, in
which two new orbitals came into existence:
one bonding orbital
between the atoms, which strengthens the bond, and one antibonding
orbital, which weakens it:
The bonding MO in this case has a name: chemists call it a sigma, or σ, bond. This sigma bond, to reemphasize, is formed by the overlap of the s orbitals of the two hydrogen atoms, but more broadly, sigma orbitals / bonds have their highest density directly between the two nuclei. There is another type of σ bond, however, which can be formed by the overlap of p orbitals. If you’ll recall, these orbitals have the general shape
Here,
the lines drawn represent the y (vertical) and z (horizontal) axes,
while the x axis orbital would point at right angles through the
page. This is why I say that there are three p
orbitals, all perpendicular to each other. Now, if you imagine the
pz
orbitals of two atoms, lying along the horizontal line above, the z
axis, you can see how they too can overlap to form sigma orbitals,
just as the s
orbitals did. Very nice, and exactly what happens for many elements
in the first row of the periodic chart. But now, remembering that
there are three p
type orbitals, you can see that in the case of two atoms the px
and the py
don’t directly overlap, but lie parallel to each other.
Hang in here. For I think I can make this clear with a few more
diagrams and words. The orbital above, which I called py
because the lobes are oriented in the y direction along the axis,
cannot form σ
(sigma) molecular bonding orbitals, because they don’t directly
overlap between the atoms. But there are other kinds of bonds, bonds
in which the overlap of the atomic orbitals is not so direct and
obvious. The py
and px
type orbitals possessed by the above atom are a perfect example of
this. Still, using our imaginations, we can see that, although these
orbitals do not combine headlong, there is nevertheless an overlap,
an oblique or sideways overlap, between the lobes of the orbitals,
for both the px
and py
orbitals, if the atoms involved approach each other closely enough.
The overlap is not as strong as with the pz
orbitals, which directly overlap in the plane of the paper to form σ
bonds,
just like the s
orbitals do, but it is there nevertheless. It is strong enough that
we can construct new molecular orbitals, or bonding orbitals, using
these oblique or sideways oriented atomic orbitals. Chemists have a
name for these kinds of bonding orbitals as well; they are called π
orbitals or pi bonds, again applying our habit of using Greek
letters, in this case the letter π
which we call pi. An example of this sideways, pi type bond is given
below:
The
nucleus of each atom lying at the center of the two py
lobes is shown by the intersection of the x and z axes, while the py
orbitals are the areas “smeared out” above and around them. Can
you see that there can be sufficient overlap, and hence bond
formation, between the two atoms using their py
orbitals, shown by the grey regions, provided that they can be
brought close enough together? Also, is it clear to the eye that the
π
bonds are not as strong as sigma (or σ)
bonds, composed of either s
or pz
orbitals, which occupy the space directly between the atomic nuclei,
and that to have any strength at all the respective atoms must be
able to approach each other very closely, which in turn means that
only small atoms form stable π
bonds? I think yes, just by looking at them, we can see that π
bonds will be weaker and easier to break than σ
bonds, a disparity that can only increase as we look at larger and
larger atoms.
So.
What has all this got to do with living things and their chemical
makeup? As it turns out, plenty. The molecules N2
and O2
are stable only because of the smallness of their atomic sizes, and
so can have as many as two (in N2)
or one (in O2)
π
bonds, in addition to their σ
bond – this, by the way, is why we call them triply-bonded (N2) or
doubly-bonded (O2) molecules.
Still,
they would prefer for energetic reasons to exchange these π
bonds and create or join in with molecules where all the bonding is
of σ
character; that is why they combine so easily with, for
example, hydrogen atoms to make the simple molecules of water (H2O)
and ammonia (NH3).
Even carbon, which we are reserving as the basis of many
manifestations of life, is often found bound up with hydrogen too, in
this case to yield the simple molecule of methane (CH4)
as seen in the last chapter. I should also mention, to make this
clearer, that this is the same reason why third-row elements, even
those in the same families as nitrogen and oxygen – phosphorus and
sulfur – do not easily form π
bonds, as the larger size of these atoms does not allow them to
approach each other closely enough; thus, we do not see P2
or S2
molecules, but more complex structures (this is also because these
atoms have available 3d
orbitals for additional bond forming, but we will not go there).
As
we alluded to in chapter 4, carbon’s versatility comes forth even in
these most basic of molecules. Using sp3
(again, you may have to refer to the last chapter to refamiliarize
yourself with them) hybrid orbitals, carbon can form as many as four
strong sigma bonds with other atoms, a feat no other atom can boast
of. Since up to four of the atoms it combines with can themselves be
carbons, we can imagine a vast network of sigma-bonded carbon atoms, a
network that can grow virtually as large, and as complicated, as it
likes. Such networks in fact do exist, and as already mentioned we
call them diamonds, allegedly the hardest substance in the universe.
What is more important for this discussion is that if the simple atom
of carbon can yield the hardest of materials in the universe, then
the creation of living things would appear to be a natural outflow of
this process of bonding one carbon atom to another. Moreover, with
each carbon atom having four “hands” or valence electrons to
offer any other atoms it may encounter, we should be able to come up
with just about any large, complex, stable molecular structure we
can imagine.
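To give a taste of just how fast that combinatorial freedom grows, here
are the standard isomer counts for the simple chain hydrocarbons (the
alkanes, CnH2n+2). The numbers below are quoted reference values; the
snippet merely tabulates them, it does not compute them.

# Number of structurally distinct alkanes (C_n H_{2n+2}) for a few n.
# These counts are standard reference values, hardcoded for illustration.
alkane_isomers = {1: 1, 4: 2, 6: 5, 8: 18, 10: 75, 20: 366319}

for n, count in alkane_isomers.items():
    print(f"C{n}H{2 * n + 2}: {count:,} distinct skeleton(s)")

By twenty carbons the count has passed a third of a million, and real
biomolecules run to thousands of carbons; no other element offers
anything remotely like this.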
Indeed
we can, and have, and the subject even has been given a special name:
organic chemistry. That’s right; carbon is so unique in its
abilities to build complex structures and edifices – meaning large,
complicated molecules – that it and it alone is awarded the very
special prestige of having an entire branch of chemistry constructed
around it. As a matter of fact, go to any university’s web site
and start checking out the chemistry courses and you will see that
the very subject appears to have two major branches: organic and all
the rest, some of the rest actually being called inorganic chemistry.
No other element even comes close to commanding such respect. So we
finally see carbon’s commanding role being due to its unique
ability to form the backbone of an almost infinite variety of
molecular sculptures. And it is exactly that kind of versatility we
are going to need if we are to make any sense of the fantastically
complex, seemingly purposeful assemblages of atoms and molecules
which comprise the roots and foundations of the vast panoply of
biology spread before us. When it comes to satisfying our
curiosities, that is one nail we can pound in completely and begin
our explorations around. We can now say that basic, straightforward
physics and chemistry do allow for biology, though, again this must
be stressed, they are nowhere close to explaining it by themselves.
The explanation requires something else, something that we have been
edging toward, a grand idea or set of ideas that provide the
gratification we have been seeking. It is time to fully enmesh
ourselves in these ideas, and bask in the glory of what we have been
seeking.
* * *
So,
the chemistry of carbon, along with a handful of other atoms like
oxygen, nitrogen, hydrogen, sulfur, phosphorus, and a smattering of
other trace elements gives us all the building blocks we need to
create human beings, elm trees, barnacles, tyrannosaurs, and
paramecia, but that doesn’t explain how or why the blocks manage to
come together in the right ways. Brachiosaurs, which were very
large, plant eating dinosaurs, may be built from the same carbon and
nitrogen and all the other atoms in our own biological grab-bag of
goodies, but no amount of blending, whipping, hurling around, or
piling one thing on top of another will ever give us that Jurassic
eating machine, or anything else that could be even remotely
construed as alive in any sense of the word.
Here
we are in our quandary, because we know the answer provided by
thousands of years of folk-wisdom, occasionally dressed up in full
theological garb. God, or some pantheon of gods, or something
supernatural and miraculous, conjured up all the millions of species
that creep, run, swim, fly, fester, or patiently await the
comings and goings of seasons and suns, so goes the wisdom of the
ancients. Most people who have ever lived, and probably most of
those alive at this moment, find this answer satisfactory. But if we
are to truly gratify our curiosity, we have to accept that this is no
answer at all. It is just another waving of the wand of the
miraculous, with results unexplained and unexplainable. Or to put it
another, yet more devastating, way: if God or some set of gods
explains the complexity plus appearance of purpose we find in
biology, then what explains Its / their equally perplexing purposeful
complexities? It’s an infinite regress, which leads nowhere and
satisfies nothing in the end. The only possible way this
“explanation” can work is if we can come up with something that
is intelligent, intentional, creative, and yet somehow simple. My
suspicion is that this is exactly the kind of reasoning, at a largely
sub-conscious level, the theologically inclined are actually engaged in.
To which all I can say is, I cannot dismiss it completely out of
hand, because imagination might someday find just such a joker in the
deck. My suspicion, however, is that there really are limits on what
reality can present us with. Intelligence and intention must be
built upon an edifice of complexity, along with the laws of physics
and chemistry, any way we cut the cards. Five hundred years of
science, and five thousand of philosophy, have yet to sniff out any
alternatives, and seem unlikely ever to do so.
So
the supernatural is a non-starter, at least if we intend to stay true
to the themes of this book: curiosity, imagination, and the
scientific approach to explaining things. We have to find something,
or things, in what we already know, or can reasonably speculate
about, if life is to be laid out, dissected, elucidated in some
manner that satisfies us. What I experienced during my walk through
Pennypack Park begs for explanation as much as, if not more than,
anything else one might experience. But where does one begin this
journey towards enlightenment? How do we even start to think about
it?
* * *
Fortunately,
there is a place to start; not a place that makes everything that
follows easy or simple, but one that I believe at least parses the
subject of life into two separate, somewhat more manageable
sub-topics. One sub-topic is the question of the origin: how the
atoms and molecules that on the early Earth were in arrangements almost
entirely non- or pre-biological came to be re-assembled – or
super-assembled is perhaps the better term – into the most
primitive versions of our complexity plus (appearance of) purpose
life forms, which first appear on this planet between three and a half
and four billion years ago.
Is
this a separate question? Yes, it clearly is, and for the following
reasons: first, all things biological, however large or small, or
ephemeral or long-lived, or whatever their form or function, or
however they eke out their livings, rely upon a common basis of
biochemistry which can be clearly seen in all of them, if you examine
them at the level of atoms and molecules. That in itself suggests a
common origin, and provides the platform for the other reason: this
platform, this origin aside, is what accounts for the overwhelming
diversity in living things that we witness today, billions of years
after the beginnings, in the various shapes, colors, sizes, and
behaviors of the tens of millions of plants, animals, fungi, and
other species which have crawled, flown, walked, swum, or in whatever
manner reached practically every corner and niche of remotely
inhabitable space that can be found on Earth.
Here,
in the early twenty-first century, this cleavage of the problem of
life into these two daughter problems is supported by so much
evidence that there can be no doubt that it is the proper way to
initiate our quest. The chemical evidence from DNA, RNA, proteins,
carbohydrates, and other biomolecules has demonstrated their common
origin beyond any reasonable doubt. As an example, the viruses which
I have mentioned, so tiny that they cannot be seen by any optical
microscope however powerful, and whose relative simplicity makes
their place in the bower of biology still a disputed issue, can feast
upon hosts as disparate as bacteria and human beings and redwood
trees only because the molecular machinery underpinning all of these
things, including the viruses themselves, is almost identical. The
same could be said about the relationship between virulent bacteria
and their animal / plant / fungus hosts; for that matter, about the
plain, ordinary fact that most living things on this planet make
their living by somehow consuming other living things. This is
something that couldn’t happen if we weren’t all made up of the
same basic chemical stuff, underneath all our appearances of
diversity.
Of
course, it may still prove possible that the same kinds of processes
explain both phenomena, the origin of life, and its subsequent
diversification. But there is no reason to assume, a
priori,
that this is true, and in fact it is the position of almost all
scientists who tackle these two problems that it is almost certainly
not true, or at least not true for the most part, though there may be
some overlap in some places.
What
is true, however, is that we are given a choice, right here and now,
at the start of our trek toward understanding. As in Robert Frost’s
poem, we are presented a choice of two roads to walk upon, both of
which seem equally enticing:
Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
Unlike Frost’s poem, the two roads we are faced with look very unequal even before we take the first step on either of them. Again, from our vantage point at the start of the twenty-first century, we can say that one of these roads really is well-worn, although there do remain many thickets and tangles and vines and thorns to be waded through; while the other, superficially the more straightforward of the two, is actually much more mired in undergrowth and mystery, one on which many faltering first steps have been made or attempted with still no clear path in sight. That seemingly clearer road is the problem of the origins of life; a surprise only as long as we overlook the one real, overwhelming obstacle in our path: which is that, however it happened, it did so either billions of years ago on this planet, or trillions of miles away on other possible worlds as discussed in chapter two, with the results then transported here; either of which leaves us exceedingly short of useful data upon which we can build testable theories. Both of which leave us prey to the purveyors of miracles, of whom there is seemingly never a shortage; as long, that is, as we forget that if miracles were answers, then science would never have explained anything, and curiosity and imagination would be pointless.
Even if we never do solve some particular problem, this is no cause for capitulation; we are, after all, mere human beings with human abilities, and it shouldn't surprise anyone if some questions remain forever unanswered, no matter how much of those abilities are applied to them for however long. It is quite possible that the origin of life, or its different possible origins, remains a nut we never quite crack. Disappointing as that would be, it is no cause for dismay or futility or some kind of existential malaise; besides which, we will no doubt discover many amazing things in our endeavors to solve this problem. Indeed, this has already happened, with the discoveries of self-organizing complexity in various chemical systems being the most obvious examples. This is actually one of the most amazing things about science, at least as I have experienced it: that our attempts to hammer out a solution to one problem end up leading us down completely unexpected paths, stumbling upon unknown veins of gold.
* * *
The
problem of the origin(s) of life is a fascinating and of course
commanding one, one on which many books can be and have been written
and to which careers have been dedicated. However, I have
deliberately chosen to leave it out of this book, because meandering
down so long a path with so many thickets and brambles is likely to
leave us scratched all over, mending
and binding the many wounds which we will receive, with no clear end
in sight as our reward. Actually, even Darwin himself knew this. In
all his tomes on evolution, he persistently avoids and evades the
question of life’s origins, leaving it in backwaters to be waded through
by the minds that were to come after him. Wherever possible, he doesn’t
even mention or allude to it. He had the foresight and, in our
hindsight, the wisdom, to know that mucking around in those waters
would only muddy the tale he was bent on weaving, a tale with enough
problems of its own. Fittingly, it is a problem he only alights upon
to let us know that he too will have nothing of the supernatural in
solving it. Just as Newton was wise enough to know to let the cause
of the gravity he so deftly described be a problem left to his
successors, so Darwin also avoids this slippery trap and leaves the
question of origins to minds to come after him.
There
is one last point I would like to make here. It was well accepted by
the late eighteen hundreds that one of the most important
characteristics of living things today is that all of them had
parents, of one form or another. That fact, so obvious to us now,
was finally nailed down by Louis Pasteur in a series of famous
experiments, thereby separating the problem of biology into its two
great sub-problems, its origins and its subsequent evolution. What
Pasteur showed was that wherever even the simplest of living things
came from, whether they be mice or maggots, they didn’t just burst
into existence out of inorganic or simple organic beginnings. No,
all of them, without exception, were begat in some manner; moms and
dads, or at least a parent of some sort, were involved, even if no
one knew in any detail how the begetting was done. You could breed
billions of bacteria from one bacterium, but not a one from zero,
however hard you tried. That clear and indisputable truth was a
beginning into everything the twentieth century contributed about the
fundamentals of biology: Everything comes from something, nothing
comes from nothing. At least not on this planet, at this point in
its history.
* * *
It
would appear that we at least have a beginning here in our wonderings
about ourselves, about life, that we can summarize. A quartet of
beginnings, actually. First is that it displays levels of
complexity, organization, and seeming purpose which would appear to
defy explanation. Second, at its most fundamental level, life and
its origins are based on nothing more than physics and chemistry,
most crucially on the amazing properties of that amazing element
carbon, although a plethora of other elements play essential roles as
well. In addition, we and our ancestors all share a common
biochemistry, a biochemistry built on DNA, proteins, and so forth,
and have certainly done so going back a good three billion plus years
in Earth’s history.
The
third beginning is an inevitable consequence of the first two, that
of procreation being the only way nature has now of producing new
organisms, from bacteria to human beings, since living things are
simply too complicated and organized to assemble by chance. Not only
that, but offspring resemble their parent(s) (although, of course
this is not always immediately obvious, as we all know from the
example of a caterpillar hatching from a butterfly’s egg), a
resemblance which will be passed on to future generations, albeit
with occasional mutations.
As
for the fourth beginning, evolution, that it occurs and has been
occurring for a vastly long time, that it explains the many forms and
functions and niches life has found on our world, and that, most
importantly, we possess the fundamental understanding of how and why
it occurs, underlies biology just as physics underlies chemistry and
mathematics underlies physics. Furthermore, just as our third
beginning derived from its predecessors, the fourth emerges
inevitably from the third. It is the beginning given to us by two English
naturalists, Charles Darwin and Alfred Russel Wallace, whose
elegant and brilliant Victorian reasoning derived
from the observation of two natural phenomena: the inheritance of
physical and behavioral traits from parent to offspring, and
competition for scarce resources among those offspring to survive and
repeat the process: natural selection. What to me makes their
accomplishments all the more remarkable is that how heredity works
was something neither man had a clear concept of (even though this
was the same time that Gregor Mendel was doing his experiments with
peas which would have helped both of them immensely – experiments
which remained in obscurity until the early 1900s); indeed, some of
Darwin’s concepts in this field actually made his theory harder to
defend. Still, they convinced the scientific establishment of their
day within a short period of time.
* * *
It
is natural selection and random mutation that have conspired together
over millions of years to wire our brains into the relentlessly
curious, pattern hunting, story weaving machines I spoke of in
chapter one. This unconscious conspiracy has been so successful that
we imagine that we see people and animals among the stars and, if
like most of us who have ever lived, we do not know better, believe tales
of how they came to be there. It is also of course one of the main
wellsprings of all art and literature, from the Mona Lisa and War and
Peace to the Campbell’s soup label and idle gossip. It is,
ironically, the reason that I used the word conspiracy and all it
implies without a second thought, and probably the reason you may not
have questioned my doing so.
The
obvious downside to this marvelous, compelling faculty of our brains
is that the patterns and stories are often spurious products of
it. When this happens, they, like magic, only sidetrack and
mislead us, perhaps disastrously so. In fact, neither our brains
nor the rest of our bodies are the culmination of any kind of
conspiracy, but only one of many possible, logical outcomes of
nature’s blind laws.
So
we tread carefully when we look at the universe about and within us
and try to make sense of its workings and history. Each step has the
potential to take us either into deeper understanding or shallower
error. If we place too much trust in this part of what nature has
wired into us, we seriously risk the latter. We must always be
prepared to pull back to reexamine what we think we see, to be
skeptical, to consider other possibilities, and to use another gift
we have been given by those same blind laws, that of our ability to
reason. If we tread the path carefully enough, our prospects for
success, I believe, are promising.
Why
do I begin a discussion of evolution this way? The best answer I can
offer is to return to the beginning of this chapter: “One of the
themes of this book is that if we are to satisfy our curiosity about
the universe around us, we will need to use our imaginations, because
the universe as we perceive it simply doesn’t get us very far.”
Yet
imagination stripped of pattern seeking and story telling would be a
moribund faculty of our minds, if indeed our minds could have it at
all. It surely would be nowhere close to the task of fleshing out
and filling in our understanding of things. Not that it would
matter, though, for our curiosity would be severely crippled as
well, probably reduced to no more than an animal instinct serving few goals
greater than finding food and mates and avoiding predators.
Nowhere
is this shown better than in the work on the structure and workings
of the DNA molecule, the beating heart of heredity, a heart that,
perhaps more than anything else science has discovered before or
since, would never have been found without that combination of
imagination, pattern seeking and story telling, skepticism, and
reason which make us such unique organisms that we may indeed be
alone (although I hope not) in the universe.
As
with so many other parts of my scientific education, I was first
exposed to DNA and its workings in one of the Time-Life books (or maybe
it was one of Isaac Asimov’s many books on science). I was then
too young to understand it in much detail, but I do recall being
profoundly impressed with how important it was to all life on this
planet, and at least the rudiments of why. The deeper comprehension
was something that has taken a fair part of my life to even begin to
grasp, and even today I know that comprehension is nowhere near as
deep as it could be – not that I feel embarrassed or ashamed about
that, for even the most brilliant minds in the world have spent both
this century and a large part of the last on it, yet still have many
mysteries arrayed against them.
* * *
I
cannot resist a recapitulation here. It has been almost six months
since I took the stroll through Pennypack Park I described earlier in
this chapter, but right now, thinking of these issues, I find myself
irresistibly drawn back to that day. Doing so, I find that my senses
are as enthralled now as they were then. Once again I see and hear
and smell the many living things surrounding me, almost making me
feel as though I have been transported to some kind of paradise. For
here I am, surrounded by the oaks and the maples and the sycamores
and occasional pine trees, and admittedly many others I do not
recognize. The branches and twigs of bushes, both low and high,
brush against my body, and my shoes swish over the uncut grass.
Birds circle in the air, dart between the trees, then settle on their
branches and study the world around them. If I close my eyes, not
only do I hear their many languages, I am greeted by a cacophony of
other noises: insects of all kinds, the rustling of just opening
leaves in the spring breeze, the splashing of fish breaking the
surface of the still cold water, the dabbling and occasional quacking
of ducks, the distant, patient calls of bullfrogs toward potential
mates, the scratching of squirrels racing up and down the trees, and
others which I cannot with any certainty place or, to be honest,
remember now. I am also of course aware of the humans around me and
their myriad tongues with their myriad emotions and hopes, not to mention the clopping of the horses a fortunate few are riding.
Dogs bark from time to time, also reminding me of our presence.
Opening my eyes again, I look for the other, more silent or better
concealed creatures I know to be about, from mice and ground hogs and
snakes, to ones like skunks, raccoons, opossums, and others that only
come out at night. I see no deer, but don’t doubt they are about,
that it is only a matter of time and attention. Stroking my fingers
on a stone wall I feel the velvet of new moss against my fingertips.
It is too early for mushrooms and most other fungi, but they too hide
in dark places, waiting for warmer weather and longer days to coax
them out. The insects I heard swirl around me now, and spiders lurk
in cracks in the stone walls or hang from fresh webs, waiting for
victims. Taking it all in, it is difficult to imagine how nature
could have been more creative in her choice of forms and functions
for her productions. Humans have nowhere near such power, and
perhaps never will.
Yet
I have only just brushed up against the most amazing thing about all
this splendor. Which is that, were we to take samples of all of it, and place them under an instrument powerful enough to see that deeply into the structure of life, they would all reveal the spirals of DNA
at the very core of their beings, spirals which account for that
amazing creativity. In no case would the spirals be exactly the same
– they would differ in their lengths and, in most places, their
specific nucleotide sequences – but the similarities would vastly
outweigh the differences in even the most distantly related
organisms. Walking through the park, we are inescapably aware of the diversity which so infinitely impresses us, yet it is only when we look closer, much closer, that we see – probably the most profound paradox of life on this world – the foundation which is shared by all of it.
Which
is why of course I began by speaking of patterns and stories, and the
double-edged sword in our minds which compels us to see and create
them. If you will recall the beginning of this chapter, I dared the
reader to define what life actually is, and gave some examples of how
our forebears answered it. The important point about our forebears
is that the answers they did come up with, as persuasive as they were to
them, could not have been more mistaken. The patterns they perceived
in life, and the stories they told to explain them and their origins,
however compelling and reasonable they seemed at the time, have
turned out to be wrong, dead wrong, in retrospect absurdly wrong.
What accounts for all living things is the laws of physics and
chemistry, working within the forces of evolution by natural
selection.
But
if we stop there we fail to appreciate the power of the other edge of
the sword. The discovery of DNA and the other molecules of heredity,
the probing into and teasing out how they work, would not have been
possible without our ability and willingness to use this edge as well
as all the other facets of imagination, in combination with the
hardest of scientific acumen. For what pattern in nature could be
more arresting than the DNA spiral? And what story could be more
captivating than the story that led to its discovery and unraveling –
except, perhaps, the story that DNA, and the millions of years it has
been evolving in so many directions, itself tells?
* * *
We
take it as common knowledge today that DNA (or, in some cases, its
brother molecule RNA) forms the hereditary basis for almost all
living things on this planet, but Darwin and Wallace died long before it was discovered. Yet had they known of it, neither man could have failed to grasp the power of this one molecule to fulfill its dual responsibilities as the instruction set for both developing biological things and maintaining so many of their essential functions. They would no
doubt have been equally impressed – no, elated – with its
additional ability to create new information via mutation,
information to be tested in the living, breathing, real world of life
and death. Natural selection could not have a greater ally.
I
have been emphasizing the almost incomprehensible complexity of
living things, but in describing DNA we are surprisingly impressed,
at least at first sight, by its simplicity. The simplicity is such
that Crick and Watson, who revealed its structure to the world a
little over fifty years ago (without, alas, giving Rosalind Franklin
her due credit), were able to deduce how it replicates itself –
something it must do every time a cell divides – without a single
additional observation or experiment to back their deduction up
(although they were rather cagey in how they mentioned it in their
paper). And although there is still much research to be done, we
have since that time been able to elucidate what DNA does and how it
does it with impressive detail.
Simplicity
does not mean lack of sophistication, however. DNA, composed of two sugar-phosphate backbone strands, twirled together and held that way by pairs of small, interlocking “base” molecules, may
not sound promising as genetic material; it would probably not even
be the first choice of an engineer looking for an efficient molecule
for storing information. But remember the discussion earlier of the
power of the carbon atom to assemble stable molecules of very large
size. As large as, for example, the Hope diamond. The DNA contained
in chromosome one of the forty-six chromosomes of a single cell of
your body would, if teased out to its full length, be approximately
three inches long and contain over two hundred million base pairs (I
can’t resist the calculation that if all the DNA in all our cells
were laid end to end, they would stretch from here to the moon and
back some twelve thousand times!). Given that the four bases can
have any potential sequence, yielding 4^200,000,000, or over 10^100,000,000, possible arrangements in just that one chromosome, perhaps our engineer should take a second look. Incidentally, don’t try to picture it: that is a number no amount of imagination will make real in your mind; even all the atoms in the entire known universe sum to less than a paltry 10^80.
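If you would like to see just how lopsided this comparison is without straining your imagination, here is a quick back-of-the-envelope sketch in Python. The 200-million figure is simply the base-pair count cited above, rounded; the 10^80 atom count is the commonly quoted estimate:

```python
import math

# Base pairs in chromosome 1, as cited above (over two hundred
# million; 200,000,000 is used here as a round figure).
base_pairs = 200_000_000

# Each position holds one of 4 bases, so there are 4**base_pairs
# possible sequences. The number itself is far too large to print,
# but its base-10 logarithm tells us how many digits it has.
digits = base_pairs * math.log10(4)
print(f"4^{base_pairs:,} has about {digits:,.0f} digits")
# -> about 120,412,000 digits, comfortably over 10^100,000,000

# For comparison, the atoms in the observable universe are usually
# estimated at around 10^80 -- a mere 81 digits.
```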
Actually,
on third thought, if anything we seem to be dealing with such an
overkill of information storage capacity that we wonder why nature
chose to employ DNA at all. Would wonder, that is, if nature truly
were an intelligent engineer that could choose anything.
So
DNA, its deceptive initial simplicity aside, is easily – way easily
– more than up to the task of encoding all the information needed
to create and maintain not only ourselves, but also any living
organism we can conceive of, however strange and wondrous; more than
all the organisms that have ever lived on this planet, or might live
in the future. Or that might have or will live anywhere else in our
universe, assuming they use DNA as their genetic code. Or in a
billion billion universes (if they exist) spanning a billion billion
years.
Information
… but of what nature? And how is it encoded in the DNA spiral?
And how do our biological machinery and processes extract it, and
turn it into the raw material of our beings? And how has it allowed
the combination of random mutation and natural selection to drive
life from its simplest beginnings over three billion years ago to the
incredible diversity of much more complex forms, including ourselves,
that we see today – a diversity my walk through Pennypack Park revealed only the tiniest fraction of?
It
is time to talk about protein.
* * *
Here
is a subject we are all at least somewhat familiar with. Who doesn’t
remember as a child being cajoled, coaxed, and badgered into making
sure we ate enough protein to grow strong and tall? Go into any
health food store and you will find rows of large containers of
protein supplements, each promising to build stronger muscles in
absurdly short times.
Proteins
are large organic molecules (though nowhere near as large as DNA)
which, when we consume them, are broken down by digestive processes
into small molecular units called amino acids. There are some twenty
kinds of amino acids in living things, and different combinations and
numbers of them link together to make all the proteins nature
produces. Having broken down the proteins we eat, we then reassemble
the freed amino acids to construct the many new different proteins
our own bodies need. And our bodies need them for many different
purposes.
What
makes proteins so important and so versatile is the fact that they
are not merely random strings of amino acids, like glass beads on a
thread. Instead, because of the intramolecular forces in them, they
coil, wrap around each other, form plate-like structures, and then
fold up into specific, detailed shapes which are determined by their
specific sequences. That is why the glutinous, translucent “white”
of an egg becomes firm and truly white when we cook it, for heat, as
well as other physical and chemical assaults, unravels the globular
shape of the albumin proteins and makes them lie flat against each other.
The
myriad sizes and shapes of proteins are employed by bodies to perform
all kinds of functions. For example, proteins studding the surface
of a cell control the rate at which water and other molecules and
ions (electrically charged atoms and molecules) enter and leave.
They are employed in such diverse roles as construction material for
hair and nails and cartilage, and as essential components of
important biological molecules such as the hemoglobin in your blood,
which carries oxygen from your lungs to every cell in your body and
carries away the carbon dioxide waste to be exhaled. Numerous
different types are critical to cell metabolism, in particular those
that serve as enzymes, which catalyze chemical reactions in your
cells to produce other important molecules. The elasticity of proteins in muscle cells lets those cells expand and contract, allowing your heart to beat and you to move your arms and legs. They
are also important in cell signaling and the proper functioning of
your immune system. Your parents were indeed wise to exhort you to
get enough of them in your diet, even if they did not know why.
Curiosity
ought to be provoking a question in your mind right about now.
Digestion breaks down the proteins we eat into their component amino
acids. The amino acids are then transported by the blood to all the
cells in the body. It is in our cells that all the proteins we
need are constructed. Yet proteins contain from hundreds to tens of
thousands of amino acids, all joined together in the specified orders
they require to perform their functions. The greatest engineer in
the world would be running out of his factory screaming if handed a
task this monumental. How do our cells handle it with such aplomb?
The
molecular machinery which assembles proteins in the cell is a subject
which, if I were an expert on it, I could easily fill the rest of
this chapter and more describing. Fortunately, I’m not an expert
on that particular topic, which means I can segue back to DNA without
further ado. The point of this discussion of proteins is that DNA is the template which is used to build them. A gene is a section of DNA serving as the template for a specific protein. More specifically, the sequence of DNA bases determines the sequence of amino acids, the correspondence being three to one: each amino acid corresponds to, is encoded by, three nucleotide bases in succession. As there are four such bases, this gives us 4 × 4 × 4 = 64 possible triplets to encode amino acids with, well more than enough for the twenty that are actually used in nature.
Actually, this is worth elaborating on in some detail, by way of another digression I feel is worth making. If we represent the four
nucleotide bases in DNA, adenine, thymine, guanine, and cytosine, by
their initial letters, A, T, G, and C, we find that we have an
excellent “quaternary” coding system to work with. I use the
word quaternary here in the same way the word “binary” is used
when discussing computer code. When you run your favorite computer program, or even a much less than favorite one, the code your computer is executing is essentially nothing more than a series of (electronic) 0s and 1s. This series of 0s and 1s tells the computer’s
processing chip(s) and all the associated electronics and other
gizmos what to do (some of which you saw in chapter two); bear in
mind that with enough 0s and 1s we can create a computer program as
sophisticated as we like; if my understanding of computer science is
correct, given enough 0s and 1s we could create a program that
simulated the entire universe and its history, though whether this
universe includes the program and the computer it is running on is
still unclear to me.
The
three to one correspondence between bases and amino acids modifies
the coding system of DNA but does not alter the analogy with computer
programming code, an analogy I would like to continue with. It means
that if we were to read the amino acid sequence of a section of DNA
by “unzipping” it and looking at the base sequence, instead of
looking at it one base at a time we would have to read it in groups
of three: e.g., TTA, CAG, CTG, GCA, and so on, each group of three coding for one amino acid. As noted, sixty-four such triplets are well more than enough for the twenty amino acids nature uses in living things.
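For the programming-minded, here is a minimal sketch in Python of this triplet reading. The four codon assignments shown are genuine entries from the standard genetic code, but only a tiny illustrative subset of it; a real translator would need all 64 entries, including the three “stop” signals:

```python
# A tiny illustrative subset of the standard genetic code, mapping
# DNA triplets (codons) to the amino acids they encode.
CODON_TABLE = {
    "TTA": "Leucine",
    "CAG": "Glutamine",
    "CTG": "Leucine",
    "GCA": "Alanine",
}

def read_codons(dna):
    """Read a DNA sequence in groups of three bases, as the cell does."""
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3]
        yield codon, CODON_TABLE.get(codon, "?")

# The example sequence from the text: TTA CAG CTG GCA
for codon, amino_acid in read_codons("TTACAGCTGGCA"):
    print(codon, "->", amino_acid)
```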
Computer
programming. Like one of the individuals who have inspired this
book, I too have had considerable experience in the field and so too
am drawn to the comparison of DNA to programming code. It is a powerful and compelling comparison; the idea of DNA as digital information, to be molded in any direction by the blind but non-random forces of evolution, has, for me at least, as much appeal to imagination and useful insight as any other idea in biology over the last quarter century or so. Now, with the digital age fully upon us,
the comparison, or analogy, is even more forceful to the mind.
Personally, as a (very) part-time science fiction writer it conjures
up images of artificial living beings, of synthetic organs and
tissues to prolong our lives, perhaps indefinitely, of expanding the
already impressive capacities of our brains with biochips, and even
such cybernetic ideas as an Internet composed of human minds directly
connected to and communicating with each other and with sentient
computers. Given that I honestly expect to see at least some of this happen in my lifetime, not to mention my children’s, the digital view of biology is perhaps too
seductive.
* * *
After
everything I have said about imagination and our need to use it to
answer our questions about ourselves and our universe, the word
seductive alone ought to suggest I am about to pull back, at least
somewhat. So I am. Not that I don’t truly believe that many if
not all of the above mentioned wonders of coming technology will
happen someday. But the emphasis on the digital nature of DNA can
potentially mislead us as well as inform.
The reason for this is that our digital DNA codes for, serves as a template for, the highly analog proteins that are the actual machinery of our bodies. By analog I simply mean the opposite of digital: continuous
in change as opposed to changing in discrete steps. (A hopefully not
too outdated example of the difference would be the analog dial on
old radio sets which, as you turned it, changed the tuning of the
receiver continuously from one frequency to another, as opposed to
digital push-button radios today which jump instantly to a specific
frequency.) In calling proteins analog, I do not mean the sequence
of amino acids which comprise them; that is still as digital as DNA
in that an amino acid change in the sequence is discrete – you
can’t continuously change between one acid and another.
Hang
on, for I am getting to the reason for this digression. It is true
that the amino acid sequence in a protein is digital, but what
matters for proteins, what they do and how they work, is largely
their specific size and shape, qualities that usually can be varied
more or less continuously by changes in the amino acid sequence which
comprises them. That is, if we replace one amino acid in a protein
consisting of hundreds or thousands with another, the most likely
outcome is a small, perhaps even insignificant, change in its shape –
resulting in a proportionately tiny change in how the protein does
its job. For example, if the protein is an enzyme, a slight change
in its shape would cause the rate at which it catalyzes its specific
reaction to be somewhat faster or slower. Or if the protein controls
the rate at which a certain molecule or ion enters or leaves cells,
that could be modified slightly. Furthermore, additional amino acid
changes are likely to lead to similar small, cumulative changes in
the protein’s function.
Small,
cumulative changes. We are practically talking about the heart and
soul of Darwinian evolution. But what would cause these single amino
acid changes in a protein’s make-up? Recall that it is a
particular sequence of three consecutive nucleotide bases on DNA
which corresponds to the amino acid at a particular location in a
protein. Any number of agents have the potential to alter, or “mutate,” a base in DNA: radiation and various kinds of chemical assault. Such mutations (there are others) even have a name: point mutations. Point mutations are surprisingly common. Most are caught
and corrected by molecular machinery in the cell designed for the
purpose, but they occasionally slip through the defenses. In doing
so they can lead to amino acid changes in proteins which often (though not always: sickle-cell anemia is caused by just such a single change in one of the globin proteins of hemoglobin) alter those proteins’ functioning slightly, causing somewhat higher or lower production of another chemical, or modifying cell membrane permeability to a molecule or ion, leading to … well, for example, if the protein is involved in embryological development, a modest change in the physiology or behavior of the organism. The point is,
when talking about natural selection, modest changes, such as those
that lead to slightly longer or shorter legs, have a better chance of
being advantageous than large changes, which are almost certain to be
disastrous. And given that changes can accumulate over geological
time, constantly being molded and “directed” by natural
selection, I hope it is by now clear that the entire edifice of DNA /
proteins / form and function, though completely unknown in Darwin’s
and Wallace’s time, could hardly have been better tailored to the
revolutionary ideas they unleashed upon the world.
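Since I have leaned so hard on the programming analogy, one last minimal sketch in Python may help make the point-mutation idea concrete. It is a toy illustration, not a model of how cells actually work, but the substitution it shows is the real one behind sickle-cell anemia: a single A-to-T change turns the codon GAG (glutamic acid) into GTG (valine) in one of the globin proteins:

```python
# The two codons involved in the sickle-cell mutation, with the amino
# acids they encode in the standard genetic code.
CODON_TABLE = {"GAG": "Glutamic acid", "GTG": "Valine"}

def point_mutation(codon, position, new_base):
    """Replace a single base in a codon -- a point mutation."""
    bases = list(codon)
    bases[position] = new_base
    return "".join(bases)

original = "GAG"
mutated = point_mutation(original, 1, "T")  # A -> T at the middle base

print(original, "->", CODON_TABLE[original])  # GAG -> Glutamic acid
print(mutated, "->", CODON_TABLE[mutated])    # GTG -> Valine
# One altered base, one altered amino acid -- and, in this unusually
# dramatic case, a protein (hemoglobin) whose function is badly changed.
```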
* * *
Wondering
about ourselves is, of course, an endeavor that never ends, and no
such pretense will be made here. On the contrary, the territory
covered in this chapter is only a tiny fraction of the vast subject
of life, what it is and how it has come to be. Alas, curiosity demands more, far more, than I could hope to deliver even in an entire book, assuming I were well versed enough in the subject
for such an undertaking. But I do hope that certain basics about
life, in general, have been laid down: its utterly improbable
complexity, seeming design and purposefulness (what I have called
intentionality); the underlying chemistry, particularly of carbon,
that makes it possible (on our planet); the continuity, in that all
living organisms are in some way descended from a parent or parents,
going back to the beginnings of life on Earth some three and a half
or more billion years ago; the basics of Darwinian / Wallacean
evolution, which explains how life today came from its much simpler
beginnings; and the interworkings of the tapestry of DNA with the
working machinery of proteins which are essential to both life’s
functioning and its evolution. I hope you feel that we have not made
a bad start.
But
there is another aspect to our self exploration, one that can’t be,
and won’t be, ignored. That is our wondering about ourselves as
individuals. How is it, each of us asks at least from time to time,
that I
came to be; what and why am I; what is my place and destiny, if any,
in the scheme of things, whatever that scheme is, assuming there is one; what does
it mean to be human and what else could I have been? The reason I
have excluded this aspect from this chapter is that the sciences that
answer it, if any, are necessarily more speculative, to the point
where it is questionable in many cases to call them sciences at all.
But that doesn’t stop our asking the questions. It doesn’t
quench our curiosity, or make it go away. And we can still use our
imaginations – gingerly, for we tread on unknown territories – in
our quest to come up with answers that just might make some degree of
sense. Or so we hope.