Friday, January 17, 2014

Are There 'Laws' in Social Science?

by Ross Pomeroy in Think Big 
January 17, 2014, 12:29 PM
This post originally appeared in the Newton blog on RealClearScience.
You can read the original here.

Richard Feynman rarely shied away from debate. When asked for his opinions, he gave them, honestly and openly. In 1981, he put forth this one:

"Social science is an example of a science which is not a science... They follow the forms... but they don't get any laws."

Many modern social scientists will certainly say they've gotten somewhere. They can point to the law of supply and demand or Zipf's law as proof at first glance -- both have the word "law" in their names! The law of supply and demand, of course, states that the market price for a certain good will fluctuate based upon the quantity demanded by consumers and the quantity supplied by producers. Zipf's law statistically models the frequency of words in a given natural language.
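To see what Zipf's law claims in practice, here is a small sketch (the 50-word vocabulary and sample size are made up for illustration) that draws words with probability proportional to 1/rank, then checks that rank times frequency comes out roughly constant, as the law predicts:

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical vocabulary: word at rank r is drawn with probability
# proportional to 1/r, the ideal Zipf distribution.
V = 50
words = [f"w{r}" for r in range(1, V + 1)]
weights = [1.0 / r for r in range(1, V + 1)]

sample = random.choices(words, weights=weights, k=100_000)
counts = Counter(sample).most_common()

# Zipf's law predicts rank * frequency is roughly constant.
for rank, (word, freq) in enumerate(counts[:5], start=1):
    print(rank, word, freq, rank * freq)
```

Real corpora only approximate this, but the rank-frequency product stays remarkably flat across the top ranks.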

But are social science "laws" really laws? A scientific law is "a statement based on repeated experimental observations that describes some aspect of the world. A scientific law always applies under the same conditions, and implies that there is a causal relationship involving its elements." The natural and physical sciences are rife with laws. It is, for example, a law that non-linked genes assort independently, or that the total energy of an isolated system is conserved.

But what about the poster child of social science laws: supply and demand? Let's take it apart. Does it imply a causal relationship? Yes, argues MIT professor Harold Kincaid.

"A demand or supply curve graphs how much individuals are willing to produce or buy at any given price. When there is a shift in price, that causes corresponding changes in the amount produced and purchased. A shift in the supply or demand curve is a second causal process – when it gets cheaper to produce some commodity, for example, the amount supplied for each given price may increase."
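Kincaid's two causal processes can be made concrete with a toy model. The sketch below assumes simple linear demand and supply curves (all coefficients are made up for illustration) and shows that a supply shift, such as cheaper production, moves the equilibrium price and quantity:

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand Qd = a - b*p and supply Qs = c + d*p."""
    p = (a - c) / (b + d)   # price at which Qd == Qs
    q = a - b * p
    return p, q

p0, q0 = equilibrium(a=100, b=2, c=10, d=1)   # baseline market
p1, q1 = equilibrium(a=100, b=2, c=40, d=1)   # cheaper production shifts supply out

print(p0, q0)  # 30.0 40.0
print(p1, q1)  # 20.0 60.0: price falls, quantity traded rises
```

The first process (price changes moving quantities along a curve) is the `a - b*p` term; the second (a shift of the whole curve) is the change in `c`.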

Are there repeated experimental observations for it? Yes, again, says Kincaid.

"The observational evidence comes from many studies of diverse commodities – ranging from agricultural goods to education to managerial reputations – in different countries over the past 75 years. Changes in price, demand, and supply are followed over time. Study after study finds the proposed connections."

Does supply and demand occur under the same conditions? That is difficult to discern. In the real world, unseen factors lurk behind every observation. Economists can do their best to control variables, but how can we know if the conditions are precisely identical?

Still, supply and demand holds up very well. Has Mr. Feynman been proved wrong? Perhaps. And if social science can produce laws, is it, too, a science? By Feynman's definition, it seems so.

The reason why social science and its purveyors often get such a bad rap has less to do with the rigor of their methods and more to do with the perplexity of their subject matter. Humanity and its cultural constructs are more enigmatic than much of the natural world. Even Feynman recognized this.

"Social problems are very much harder than scientific ones," he noted. Social science itself may be an enterprise doomed, not necessarily to fail, just to never fully succeed. Utilizing science to study something inherently unscientific is a tricky business.

Of lice and men (and chimps): Study tracks pace of molecular evolution

Jan 07, 2014 from Phys.Org

Photo: study leader Kevin Johnson of the Illinois Natural History Survey (seated, at left), with (from left to right) entomology professor Barry Pittendrigh, animal biology professor Ken Paige and postdoctoral researcher Julie Allen.

A new study compares the relative rate of molecular evolution between humans and chimps with that of their lice. The researchers wanted to know whether evolution marches on at a steady pace in all creatures or if subtle changes in genes – substitutions of individual letters of the genetic code – occur more rapidly in some groups than in others.
 
A report of the study appears in the Proceedings of the Royal Society B.
The team chose its study subjects because humans, chimps and their lice share a common historical fate: When the ancestors of humans and chimps went their separate ways, evolutionarily speaking, so did their lice.

"Humans are chimps' closest relatives and chimps are humans' closest relatives – and their lice are each others' closest relatives," said study leader Kevin Johnson, an ornithologist with the Illinois Natural History Survey at the University of Illinois. "Once the hosts were no longer in contact with each other, the parasites were not in contact with each other because they spend their entire life cycle on their hosts."

This fact, a mutual divergence that began at the same point in time (roughly 5 million to 6 million years ago), allowed Johnson and his colleagues to determine whether molecular evolution occurs faster in primates or in their parasites.

Previous studies had looked at the rate of molecular changes between parasites and their hosts, but most focused on single genes in the mitochondria, tiny energy-generating structures outside the nucleus of the cell that are easier to study. The new analysis is the first to look at the pace of molecular change across the genomes of different groups. It compared a total of 1,534 genes shared by the primates and their parasites. To do this, the team had to first assemble a rough genome sequence for the louse of the chimpanzee (Pan troglodytes schweinfurthii), the only one of the four organisms for which a full genome sequence was unavailable.

The team also tracked whether changes in gene sequence altered the structure of the proteins for which the genes coded (they looked only at protein-coding genes). For every gene they analyzed, they determined whether sequence changes resulted in a different amino acid being added to a protein at a given location.
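That determination boils down to comparing codons before and after a substitution. A minimal sketch, using a tiny illustrative fragment of the standard codon table, shows the distinction between "silent" (synonymous) changes and amino-acid-changing (nonsynonymous) ones:

```python
# Illustrative fragment of the standard genetic code (codon -> amino acid)
CODON = {"TTA": "Leu", "TTG": "Leu", "TCA": "Ser",
         "GAA": "Glu", "GAG": "Glu", "GAT": "Asp"}

def classify(codon_a, codon_b):
    """A synonymous change leaves the amino acid intact; a nonsynonymous
    change alters the protein at that position."""
    return "synonymous" if CODON[codon_a] == CODON[codon_b] else "nonsynonymous"

print(classify("TTA", "TTG"))  # synonymous: both encode leucine
print(classify("GAA", "GAT"))  # nonsynonymous: Glu -> Asp
```

Tallying the two kinds of change across shared genes is, in outline, how one asks whether substitutions are silent or felt by natural selection.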

They found that – at the scale of random changes to gene sequence – the lice are winning the molecular evolutionary race. This confirmed what previous, more limited studies had hinted at.
"For every single gene we looked at, the lice had more differences (between them) than (were found) between humans and chimps. On average, the parasites had almost 15 times more changes," Johnson said. "Often in parasites you see these faster rates," he said. There have been several hypotheses as to why, he said.

Humans and chimps had a greater percentage of sequence changes that led to changes in protein structure, the researchers found. That means that even though the louse genes are changing at a faster rate, most of those changes are "silent," having no effect on the proteins for which they code. Since these changes make no difference to the life of the organism, they are tolerated, Johnson said. Those sequence changes that actually do change the structure of proteins in lice are likely to be harmful and are being eliminated by natural selection, he said.

In humans and chimps, the higher proportion of amino acid changes suggests that some of those genes are under the influence of "positive selection," meaning that the altered proteins give the primates some evolutionary advantage, Johnson said. Most of the genes that changed more quickly or slowly in primates followed the same pattern in their lice, Johnson said.

"The most likely explanation for this is that certain genes are more important for the function of the cell and can't tolerate change as much," Johnson said.

The new study begins to answer fundamental questions about changes at the molecular level that eventually shape the destinies of all organisms, Johnson said.

"Any difference that we see between species at the morphological level almost certainly has a genetic basis, so understanding how different genes are different from each other helps us understand why different species are different from each other," he said. "Fundamentally, we want to know which genetic differences matter, which don't, and why certain genes might change faster than others, leading to those differences."
More information: "Rates of Genomic Divergence in Humans, Chimpanzees and Their Lice," Proceedings of the Royal Society B, rspb.royalsocietypublishing.org/lookup/doi/10.1098/rspb.2013.2174

Read more at: http://phys.org/news/2014-01-lice-men-chimps-tracks-pace.html#jCp



New form of quantum matter: Natural 3D counterpart to graphene discovered

17 hours ago by Lynn Yarris in Phys.org
A topological Dirac semi-metal state is realized at the critical point in the phase transition from a normal insulator to a topological insulator. The + and - signs denote the even and odd parity of the energy bands. Credit: Yulin Chen, Oxford

The discovery of what is essentially a 3D version of graphene – the 2D sheets of carbon through which electrons race at many times the speed at which they move through silicon – promises exciting new things to come for the high-tech industry, including much faster transistors and far more compact hard drives. A collaboration of researchers at the DOE's Lawrence Berkeley National Laboratory (Berkeley Lab) has discovered that sodium bismuthate can exist as a form of quantum matter called a three-dimensional topological Dirac semi-metal (3DTDS). This is the first experimental confirmation of 3D Dirac fermions in the interior or bulk of a material, a novel state that was only recently proposed by theorists.
"A 3DTDS is a natural three-dimensional counterpart to graphene with similar or even better electron mobility and velocity," says Yulin Chen, a physicist who was with Berkeley Lab's Advanced Light Source (ALS) when he initiated the study that led to this discovery, and who is now with the University of Oxford. "Because of its 3D Dirac fermions in the bulk, a 3DTDS also features intriguing non-saturating linear magnetoresistance that can be orders of magnitude higher than the GMR materials now used in hard drives, and it opens the door to more efficient optical sensors."
Chen is the corresponding author of a paper in Science reporting the discovery. The paper is titled "Discovery of a Three-dimensional Topological Dirac Semimetal, Na3Bi." Co-authors were Zhongkai Liu, Bo Zhou, Yi Zhang, Zhijun Wang, Hongming Weng, Dharmalingam Prabhakaran, Sung-Kwan Mo, Zhi-Xun Shen, Zhong Fang, Xi Dai and Zahid Hussain.


Two of the most exciting new materials in the world of high technology today are graphene and topological insulators, crystalline materials that are electrically insulating in the bulk but conducting on the surface. Both feature 2D Dirac fermions (fermions that aren't their own antiparticle), which give rise to extraordinary and highly coveted physical properties. Topological insulators also possess a unique electronic structure, in which bulk electrons behave like those in an insulator while surface electrons behave like those in graphene.

Beamline 10.0.1 at Berkeley Lab's Advanced Light Source is optimized for the study of electronic structure and correlated electron systems. Credit: Roy Kaltschmidt, Berkeley Lab

"The swift development of graphene and topological insulators has raised questions as to whether there are 3D counterparts and other materials with unusual topology in their electronic structure," says Chen. "Our discovery answers both questions. In the sodium bismuthate we studied, the bulk conduction and valence bands touch only at discrete points and disperse linearly along all three momentum directions to form bulk 3D Dirac fermions. Furthermore, the topology of a 3DTDS electronic structure is also as unique as those of topological insulators."

Read more at: http://phys.org/news/2014-01-quantum-natural-3d-counterpart-graphene.html#jCp

Thursday, January 16, 2014

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

https://medium.com/the-physics-arxiv-blog/d5d3dc850933
 

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it


When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.
 
It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities, making it impossible to make sense of the results.
 
Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities—a huge advance.
 
But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.
 
This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the flesh of modern physicists, who have tried to ignore it but with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.
 
Entanglement is a deep and powerful link and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.
 
But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.
 
In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.
 
But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of entangled particles compared with everything else is an important measure of time.
 
This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.
 
Of course, without experimental verification, Page and Wootters' ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.
 
Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of Page and Wootters’ ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.
 
The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.
 
The experimental details are straightforward. The entangled photons each have a polarisation which can be changed by passing it through a birefringent plate. In the first set up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.
 
In the second set up, the photons again both pass through the birefringent plates which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.
 
In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.
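The two viewpoints can be reproduced in a few lines of linear algebra. The sketch below is a simplified model, not the actual experimental analysis: it prepares two photons in an entangled singlet state, uses a rotation matrix as a stand-in for the birefringent plate, and shows that rotating both polarisations together leaves the global state unchanged, while the state of the second photon relative to a measurement on the first does depend on the rotation:

```python
import numpy as np

def R(theta):
    """Polarisation rotation, a stand-in for the birefringent plate."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Toy universe: two photons in the entangled singlet state (|HV> - |VH>)/sqrt(2)
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

theta = 0.7  # an arbitrary 'tick' of internal time
evolved = np.kron(R(theta), R(theta)) @ singlet

# External (god-like) view: the global state is unchanged,
# so nothing appears to happen.
overlap = abs(singlet @ evolved)
print(round(overlap, 6))  # 1.0 -> a static universe

# Internal view: photon 2's state relative to finding photon 1 as H
# does depend on theta, so change (time) is visible from inside.
rotated_one = np.kron(R(theta), np.eye(2)) @ singlet
amp_H = rotated_one.reshape(2, 2)[0]   # photon-2 amplitudes given photon 1 = H
print(np.round(amp_H / np.linalg.norm(amp_H), 3))  # ~[sin 0.7, cos 0.7]
```

The singlet's invariance under identical rotations of both photons is what makes the external view static; only relative measurements reveal evolution.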
 
“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.
 
That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.
 
What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That's why Moreva and co's work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.
 
Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.
 
The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons, it’s quite another to show how it emerges for larger things such as humans and train timetables.
 
And therein lies another challenge.
Ref: arxiv.org/abs/1310.4691 :Time From Quantum Entanglement: An Experimental Illustration

Brain on Autopilot

Neuroscience News
How the architecture of the brain shapes its functioning.
The structure of the human brain is complex, reminiscent of a circuit diagram with countless connections. But what role does this architecture play in the functioning of the brain? To answer this question, researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and University Hospital Freiburg, have for the first time analysed 1.6 billion connections within the brain simultaneously. They found the highest agreement between structure and information flow in the “default mode network,” which is responsible for inward-focused thinking such as daydreaming.
A daydreaming brain: the yellow areas depict the default mode network from three different perspectives; the coloured fibres show the connections amongst each other and with the remainder of the brain. Credit: Max Planck Institute.

Everybody’s been there: You’re sitting at your desk, staring out the window, your thoughts wandering. Instead of getting on with what you’re supposed to be doing, you start mentally planning your next holiday or find yourself lost in a thought or a memory. It’s only later that you realize what has happened: Your brain has simply “changed channels”—and switched to autopilot.

For some time now, experts have been interested in the competition among different networks of the brain, which are able to suppress one another’s activity. If one of these approximately 20 networks is active, the others remain more or less silent. So if you’re thinking about your next holiday, it is almost impossible to follow the content of a text at the same time.

To find out how the anatomical structure of the brain impacts its functional networks, a team of researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and the University Hospital Freiburg, have analysed the connections between a total of 40,000 tiny areas of the brain. Using functional magnetic resonance imaging, they examined a total of 1.6 billion possible anatomical connections between these different regions in 19 participants aged between 21 and 31 years. The research team compared these connections with the brain signals actually generated by the nerve cells.
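The comparison can be illustrated with a toy version of that analysis. The sketch below uses made-up numbers (40 regions instead of 40,000, a random connectivity matrix, a simple mixing model for the signals) to show the idea: build a structural connectivity matrix, simulate signals that mix through those connections, and correlate the structural weights with the resulting functional connectivity:

```python
import numpy as np

rng = np.random.default_rng(1)

n, T = 40, 500   # toy parcellation and scan length, purely illustrative
# Hypothetical structural connectivity: symmetric, nonnegative, no self-loops
S = rng.random((n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0)

# Simulate region signals that mix through the structural connections,
# then take functional connectivity as the correlation of the time series.
E = rng.standard_normal((T, n))
X = E + 0.3 * E @ (S / S.max())
F = np.corrcoef(X.T)

# Structure-function agreement, measured across all region pairs
iu = np.triu_indices(n, k=1)
r = np.corrcoef(S[iu], F[iu])[0, 1]
print(r > 0.3)  # True: stronger anatomical links, stronger functional coupling
```

The study's finding, in these terms, is that this structure-function agreement is highest for the region pairs belonging to the default mode network.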

Their results showed the highest agreement between brain structure and brain function in areas forming part of the “default mode network,” which is associated with daydreaming, imagination, and self-referential thought.

“In comparison to other networks, the default mode network uses the most direct anatomical connections. We think that neuronal activity is automatically directed to level off at this network whenever there are no external influences on the brain,” says Andreas Horn, lead author of the study and researcher in the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.

Living up to its name, the default mode network seems to become active in the absence of external influences. In other words, the anatomical structure of the brain seems to have a built-in autopilot setting. It should not, however, be confused with an idle state. On the contrary, daydreaming, imagination, and self-referential thought are complex tasks for the brain.

“Our findings suggest that the structural architecture of the brain ensures that it automatically switches to something useful when it is not being used for other activities,” says Andreas Horn. “But the brain only stays on autopilot until an external stimulus causes activity in another network, putting an end to the daydreaming. A buzzing fly, a loud bang in the distance, or focused concentration on a text, for example.”

The researchers hope that their findings will contribute to a better understanding of brain functioning in healthy people, but also of neurodegenerative disorders such as Alzheimer’s disease and psychiatric conditions such as schizophrenia. In follow-up studies, the research team will compare the brain structures of patients with neurological disorders with those of healthy controls.

Notes about this neuroscience and neuroimaging research
Contact: Nicole Siller – Max Planck Gesellschaft
Source: Max Planck Gesellschaft press release
Image Source: The image is adapted from the Max Planck Gesellschaft press release.
Original Research: Abstract for “The structural–functional connectome and the default mode network of the human brain” by Andreas Horn, Dirk Ostwald, Marco Reisert, and Felix Blankenburg. in Neuroimage. Published online October 4 2013 doi:10.1016/j.neuroimage.2013.09.069

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas

 
DJS Comment:  I have been thinking for a while that consciousness/sentience was a fundamental property of reality, such that the right combination of physical/chemical/biological/? would turn it into an emergent property.

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.
 
That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.
 
Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.
 
Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.
 
He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.
 
Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.
In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words consciousness is essentially a phenomenon of information.
 
And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.
 
Both of these traits can be specified mathematically, allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that consciousness is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.
 
It must also be able to process this data, like a computer but one that is much more flexible and powerful than the silicon-based devices we are familiar with.
 
Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.
 
Clearly, there is more than enough room for improvement to accommodate the performance of conscious systems.
 
Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but in a way that forms a unified, indivisible whole. That also requires a certain amount of independence in which the information dynamics is determined from within rather than externally.
 
Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.
 
This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrodinger’s equation which describes how these things change with time.
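For a toy two-level system, those three ingredients fit in a few lines. This sketch (the numbers are illustrative, not tied to any physical system) evolves a density matrix under a Hamiltonian using the solution of Schrodinger's equation, U = exp(-iHt):

```python
import numpy as np

# Two-level toy system: a Hamiltonian and an initial density matrix
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])     # total energy, with a coupling between levels
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]])    # system starts entirely in state |0>

def evolve(rho, H, t):
    """rho(t) = U rho U^dagger with U = exp(-iHt), built by eigendecomposition."""
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
    return U @ rho @ U.conj().T

rho_t = evolve(rho, H, t=1.0)
print(round(np.trace(rho_t).real, 6))   # 1.0: total probability is conserved
print(rho_t[1, 1].real > 0)             # True: amplitude has leaked into |1>
```

Scaling this description up to the entire universe is where the infinite family of mathematical solutions, and the factorisation problem, appears.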
 
The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.
 
So the problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?
 
Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.
Take for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.
 
Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of the human brain, with 10^11 neurons, can store only about 37 bits of integrated information.

"This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?" asks Tegmark.
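The error-correcting behaviour Tegmark invokes is easy to demonstrate at toy scale. The sketch below (a 64-neuron net, nowhere near 10^11, with sizes chosen purely for illustration) stores one pattern in a Hopfield network with Hebbian weights, corrupts a quarter of its bits, and lets the dynamics restore it:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                   # toy network size
pattern = rng.choice([-1, 1], size=N)    # one stored memory of +/-1 states

# Hebbian weights for the single pattern (no self-connections)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# Corrupt a quarter of the bits, then let the network settle
noisy = pattern.copy()
flip = rng.choice(N, size=N // 4, replace=False)
noisy[flip] *= -1

state = noisy
for _ in range(5):                       # synchronous sign updates
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))    # True: the stored pattern is recovered
```

Recovering a whole memory from a badly corrupted fragment is exactly the error-correcting property, though, as Tegmark calculates, it comes at a steep cost in capacity.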
 
That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!
And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.
At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.
 
Could it be that a similar revolution is currently underway at the beginning of the 21st century?
Ref:arxiv.org/abs/1401.1219: Consciousness as a State of Matter

Wondering About The Sky

Starting at the beginning of our journey, I believe that the human child, looking at the world beneath his feet and the sky overhead, and wondering about it, is as good a place to begin as any. The child is not merely curious about the color of the daytime sky (see Appendix B), of course, but also the blackness of the night. But it is not just the blackness which attracts him but the things that shine within it and take that otherwise monotone of darkness away, revealing the full glory of the universe through the tapestry of night.


When the child gazes upwards at the night sky he sees not only the blackness but the myriad points of light sprinkled against that inky backdrop. How many and how bright the points are depends upon where he gazes from, but they are spectacular nonetheless. Some of the lights stand out as special, or so he notices if he is a keen enough observer over many nights. They are generally brighter than the rest and appear to move slowly, though stately, across the sky. Some appear only in the evening or the morning skies, while others traverse the entire arc of night, yet are not always there. They appear to ride, more or less, along a single line, which is known to astronomers as the ecliptic. The ecliptic, though our child does not know it, is simply the path the sun takes through the sky as a result of Earth orbiting her. The special lights seem to trail in her wake like fireflies.

These lights have special names. The ancient Greeks called them planetes, or wanderers, from which we derive the term planets. Of those visible to the naked eye, they include Mercury, Venus, Mars, Jupiter, and Saturn. One might add the moon and the sun to the list, and speak of the seven stars (although only one, the sun, is truly a star) as did the fool to the king in King Lear: (Fool: “The reason why the seven Starres are no mo then seuen, is a pretty reason”. King: “Because they are not eight.” Fool: “Yes indeed, thou wouldn’t make a good Foole.”).

What are these wandering “stars” that stand out so spectacularly in the night sky, and why do I begin my journey there? Certainly, they have piqued our curiosity as much as anything else in the natural world. The child eventually learns that the reason they wander against the seemingly fixed backdrop of stars is that they are members of our own solar system, planets in their own right like our own, and so are relatively close by compared to the stars. He – by which I mean I – also learns of two more planets, discovered over the last two hundred plus years, mighty in their own respects but too dim to be seen clearly from here on Earth: Uranus (which is actually just barely visible under optimum seeing conditions) and Neptune. And indeed, the Earth we stand on is a planet as well, one we would easily see from the sky of any of the others should any of us be fortunate enough (and I believe some of us will have that fortune) to stand on one of them some day. But from whence does this knowledge come?

To the ancient Greeks and Romans the planets were the gods who inhabited Mount Olympus, but when people began in earnest to explore the natural world within the last few centuries, men like Galileo pointed their primitive telescopes at them and discovered, lo and behold, that they were not the bright points of light the stars remained under even the highest magnifications, but showed clear discs which nobody, even the Catholic clergy of the day, could deny. Some of them even had small bright objects (which we call moons, or satellites) orbiting them, while others, such as Venus and Mercury, showed phases much like our own moon's. The result of these simple observations was that the Earth-centered universe of Ptolemy was forever and at last obliterated, and a new model of the heavens had to be found, one in which the Earth and the other planets circled the sun and not the other way around (though, as noted, some objects also orbited the planets themselves, such as our own moon or the four satellites of Jupiter that Galileo discovered).

For me, the planets, and their moons, and the myriad other bodies of rock and metal and ice which form our solar system are such a marvelous beginning along our quest into curiosity, if only because so much has been learned about them in my lifetime. Also, as a boy it seemed my clear mandate to become an astronomer when I grew up (instead of the chemist and general scientific dilettante I actually became), so the night sky held a special fascination, perhaps because more than anything else it made me realize just how inconceivably vast the very concept of everything is.

In the early 1960s, when I was a small child just learning how to read and write, very little was known about the other worlds which inhabit our solar system. What was known came largely from the blurry images of ground-based telescopes and the simple spectroscopic and photographic equipment which was all that was available then. We also had some information from microwave and radio astronomy. So we knew some basic things: for example, that Jupiter and Saturn were huge gas giant worlds; that Uranus and Neptune were more modest gaseous worlds (still considerably larger than the Earth); and that Mercury was almost certainly a sun-baked ball of rock, as tidally locked to the sun as our moon is to Earth – indeed, probably a slightly larger version of our moon. Of Pluto, discovered only in 1930 by Clyde Tombaugh, virtually nothing was known for certain, not even its mass and size. Finally, of all these worlds, only thirty-two natural satellites were known, with essentially nothing known about any of them except that Titan, Saturn's largest moon and probably the largest moon in the entire solar system, was the only one showing evidence of a substantial atmosphere, the nature of which was little more than speculation.

* * *

You could not fail to notice that I have overlooked two planets, the two of our most particular interest at that. Venus and Mars capture our imaginations and hopes precisely because they are the nearest worlds to our own, and thus, or so we thought and hoped, were also the nearest in their natures. Even without the benefit of interplanetary probes, and with only the crude, atmosphere-befogged instruments we possessed circa 1960, we could see how much promise they held. Rocky worlds like our own, with substantial atmospheres and possibly living conditions as good as ours if not better (albeit Mars probably on the cold side, and Venus a tad warm even for the hardiest frontiersmen), they invigorated our imaginations with tales of life and even intelligent beings which even the most skeptical could find believable. Percival Lowell could convincingly describe the “canals” he was certain he spied on Mars (a word which is, in fact, a mistranslation of canali, the Italian for channels, which is what their original, equally deluded discoverer Schiaparelli called them), and the civilization which built them to keep their dying world alive was so believable that when Orson Welles broadcast H. G. Wells's novel The War of the Worlds in 1938, thousands were panicked, convinced the invading Martians were all too real. Even well into the 1950s and 60s it was possible to populate the Red Planet with sentient beings with little in the way of scientific rebuke, as Ray Bradbury did in The Martian Chronicles.

Venus somehow inspired less creativity than Mars, perhaps because the dense foggy atmosphere that perpetually hid its surface from view made it seem less hospitable, at least to advanced, intelligent life such as our own. Still, and despite microwave measurements which suggested the planet was too hot to be amenable to any life, legions of minds had no problem envisioning all sorts of exotic scenarios for our “sister” world (unlike the smaller Mars, Venus is almost exactly the Earth's diameter and mass). From vast, swampy jungles, to an ocean-girdling world, to thick seas of hydrocarbons larger than anything on Earth, Venus was often envisioned as a planet as alive as our own.

* * *

All of these visions seemed plausible, even compelling, to imaginative minds right up until the 1960s, when the initial phase of the Great Age of Planetary Exploration blew them all into dust. We – or more precisely our robotic probes, launched into interplanetary space by Cold War ICBMs designed to drop nuclear bombs onto cities teeming with human life – learned that Venus was a searing carbon dioxide encased hell, hot enough to melt lead and with a surface atmospheric pressure equal to almost one kilometer beneath our oceans. Forget life: even our hardiest robots barely lasted an hour under such conditions. As for Mars, our smaller, brother world turned out to be positively welcoming in comparison to our sister, but Schiaparelli and Lowell were shown to be hopeless wishful thinkers; there were no civilizations, no canals, no sentient beings, no beings at all, not even simple plants.

Sometimes what curiosity discovers is that imagination has overreached itself. This is often considered to be curiosity's downside; I suspect that much of the antagonism towards science comes from just this fact, that in collecting data about the universe we are to some degree destroying our creativity. Thinking about this complaint, I have come to the conclusion that it is not an entirely unfair one. Why do I say this? Because it is true that in satisfying our curiosity we narrow the range of what “could have been” down into what is, and that is a real loss to real human beings in the real universe. There is no denying this.

At the same time, however, there is an opposite phenomenon which has to be added to the stew. In satisfying our curiosity, we just as often – indeed, perhaps more often – find that our imaginations have been in fact impoverished. It turns out that “there are more things in heaven and earth” than we ever came close to dreaming; that the ocean of actual realities extends far beyond the limiting horizons, out to lands and seas and possibilities we never suspected were out there. The reason I started this chapter with what the last forty years of planetary exploration has found is that nothing could be a better example of this discovery process in action.

Take Mars. The first Mariner photographs were crushing disappointments. Far from being a verdant world, the Red Planet looked more like our moon: crater-pocked, barren, lifeless. There were no signs of life or of any kind of intelligence. Even the atmosphere was less than we'd hoped: a bare one percent of Earth's surface pressure, and worse, composed almost entirely of carbon dioxide, with no free oxygen or water vapor.

But those were just the initial impressions. More Mariner missions, two Viking orbiters and landers, and a slew of other robots hurled at Mars over the last twenty years, not to forget images from the space-based Hubble telescope, have shown it to be a world even more remarkable than we had thought. For one thing, there are amazing geological structures, some of the largest in the solar system: Olympus Mons, the giant shield volcano, is larger than any mountain on Earth several times over, and Valles Marineris is a canyon like our own Grand Canyon but one which would stretch the entire breadth of North America. As for life, Mars now is probably (but not certainly) dead, but it clearly once had all the elements for life, if several billion years ago: a thicker atmosphere, warmer temperatures, flowing surface water, a likely abundance of organic or pre-organic molecules. The photographic and chemical evidence returned from our probes and telescopes has shown us this past and opened the door to our understanding of it. With some hard work and a little luck, in the coming decade or two we will finally have the answer to the question of whether life on Earth is unique or not, and, by implication, whether it is common in the universe or not. Or if not, why not. Either way, at the very least the ramifications for our own existence are staggering.

This alone could justify the time and energy, and money, spent to satisfy our curiosity about other worlds. But it turns out to be just the beginning. The solar system's biggest surprises have come in the exploration of the outer planets. It turns out that we knew pathetically little about these worlds and their moons, or the forces that have shaped their evolution. We had a few hints, but we mostly dwelled in ignorance and speculation. Starting in the early 1970s with the Pioneer 10 and 11 missions, then the Voyager and other probes, that ignorance was stripped away in the most spectacular fashion. Pioneer and Voyager returned pictures of worlds far more dynamic than what we had expected, in ways we had not foreseen.

Consider tides. Here on Earth the tidal effects of the moon and, to a lesser degree, the sun make our oceans rise and fall in gentle cycles. The reason for tides is a straightforward application of the inverse square law of gravity: the closer two objects are to each other, the more strongly they are pulled together, and so the faster they have to move in their respective orbits to avoid falling into each other. The net result of this dynamic is that the near sides of such objects are moving too slowly and try to fall together, while the far sides are moving too fast and thus want to pull away. On Earth, that means that the oceans on the side facing the moon fall toward it ever so slightly, while the oceans on the opposite side try to drift away. It is a very humble effect, just a few feet, or tens of feet, either way. Nothing to write home about.

Tides can do much more than rock the seas of a world, however. The rock comprising Jupiter's innermost large moon, Io, is largely molten, thanks to the heat generated by tidal forces from both the parent planet Jupiter and the other Galilean satellites. The result is the most volcanically active world in the solar system by far, not excluding Earth. Io's surface is liberally pockmarked by volcanic calderas of all different sizes, which spout sulfur and other molten minerals tens to hundreds of kilometers above and across its surface in a steady rain of debris; a surface so new that it contains not a single impact crater. If the tidal stresses in Io's guts were just a smidgeon stronger than they are, the world would literally be torn apart by them. That indeed might be Io's ultimate fate: to be fractured and rendered into a new ring for the giant planet.

The tides are cruelest to Io because it is closest to Jupiter, but they do not leave the other large moons at peace either. The next Galilean satellite, Europa, may prove to be the most intriguing place in the entire solar system outside of our own planet. I must make a brief digression to explain why. Most of the solar system's matter does not consist of rock and metal but of light elements, such as hydrogen, helium, carbon, nitrogen, and oxygen, and their various chemical combinations: water, ammonia, methane, and a variety of small hydrocarbons (chemicals composed of carbon and hydrogen). In the inner solar system these substances are largely in gaseous or liquid form, making it a challenge for the small worlds (including ours) inhabiting this region even to maintain a hold on them in the teeth of the sun's fierce radiations and her perhaps fiercer solar wind (a steady stream of electrons, protons, and other particles constantly being blown out by the sun, which can easily strip away weakly held atmospheres). But starting at the distance of Jupiter the sun's output is diluted enough to let these substances condense into their solid phases: ices. From Jupiter outward, ice is not merely a thin coating over rocky worlds and moons but comprises the bulk of these bodies. The most abundant of the ices is water ice, which at the temperatures prevalent in the outer solar system essentially is rock, albeit a low-density kind.

The cores of three of the Galilean satellites, Europa, Ganymede, and Callisto, are normal rock like the inner, “terrestrial” planets', but they are covered with mantles of liquid and solid water many tens to hundreds of kilometers deep. Europa in particular consists of a relatively thin skin of cue-ball-smooth water ice over an abyssal ocean far, far deeper than any sea on Earth. Again, it is the tidal kneading of Jupiter and its other moons which generates the internal warmth that keeps this ocean in a liquid phase.

Liquid water is one of the most important ingredients for life on Earth, so wherever else in the universe we encounter it we are also encountering the possibility of life. On Mars the presence of flowing water billions of years ago raises that possibility. What Pioneer and Voyager and later missions have done is show how parochial our thinking on this subject has been. The kilometers-thick oceans beneath Europa's and other satellites' icy surfaces no doubt contain their share of organic and other pre-biotic chemicals, as well as free oxygen, and over the eons of being warmed and mixed in this lightless abode who can say what might have assembled itself? We know little enough about life's origins here to make all kinds of speculation plausible, speculation that will be answered only by sending more and better probes to that world. By, in short, satisfying our curiosity.

Which leads me once again to the most important lesson: in satisfying our curiosity we often broaden our perspectives, not narrow them as critics claim. In reaching out, we find more than we ever thought we would, and our lives become immeasurably richer. This is what our science, our passion to know, has given us.

* * *

The fundamental premise, and primary lesson, of science is that there are no magic fountains of truth. There are no books with all the answers, no machines to solve every problem, no infallible authorities, no voices in our heads, no golden compasses or other devices waiting to be opened or spoken to in just the right way. All we have are our own limited senses, our own seemingly unlimited minds, our own hard work and perseverance. And this we find true whatever our questions or whatever mysteries the universe puts before us. Actually, there are no mysteries either: there is only what we have not yet understood, because we have not yet figured out how to explain it.

So we press on resolutely, our feet on the ground and heads down but our eyes always facing forward. And we take the pleasure of learning what we learn, in the steps and pieces that we learn it. It is a process that is, at times, grim. But what it yields is pure treasure.

As amazing as the moons of Jupiter have turned out to be, you have to go out still further to find the most amazing moon of them all. The somewhat smaller planet Saturn and its entourage of satellites orbit the sun at a distance nearly twice that of Jupiter's and almost ten times further out than Earth's. Still too fierce a glare to be gazed at directly, the sun provides only one percent of the warmth and light here that it shines down on us. Furthermore, tidal interaction between Saturn and its moons is not as potent a force as it is in the Jovian system: there are no raging volcanoes or vast underground oceans of liquid water (with one possible exception). If anything, compared to Jupiter, the Saturnian system would seem to be a quiet backwater where little of interest might be found. Yet something of the most enormous interest is found right here: Titan.

Titan was known to be unique long before we sent any robots to explore it. Unlike with any other moon in the solar system, a star passing behind Titan (an “occultation,” in astronomers' language) will fade and twinkle briefly before disappearing completely, similar to the way the stars twinkle when seen from Earth's surface. The reason for both phenomena is the same: atmospheres refract and scatter the light that passes through them. Titan is the only satellite in our solar system with a substantial atmosphere; one that is, in fact, considerably more substantial than our own.

This in itself would have made it an object worthy of our curiosity. Atmospheres are living things. They must continuously grow and regenerate themselves lest they escape into space, courtesy of the lightness of their molecules, the temperature, the strength of the solar wind, and other factors; left on their own, they eventually dissipate, though this may take billions of years. On Earth, for example, the nitrogen and oxygen which comprise ninety-nine percent of our atmosphere go through chemical and biological cycles which keep them ever fresh throughout geologic time.

Titan's atmosphere is not only substantial, it is several times as dense as our own. Also, like Earth's, it is largely nitrogen: ninety-eight point four percent of it is this gas, as compared to seventy-eight percent here. Even more interesting is the other one point six percent, which is largely hydrocarbons – simple organic molecules like methane and ethane. Thanks to the sun's ultraviolet rays, which are still potent at this far reach of the solar system, these hydrocarbons have given rise to even more complicated molecules, which comprise the orange smog that permanently hides Titan's surface from all outside eyes. They also form the basis for clouds and various kinds of precipitation which rain down on this moon's icy surface, forming the equivalent of terrestrial lakes and rivers.

As a possible womb for life, however, Titan has a problem. Its distance from the sun and its shielding cover of hydrocarbon smog mean that the surface temperature here is almost three hundred degrees below zero Fahrenheit. This is so cold that even the nitrogen comprising the bulk of its atmosphere is on the edge of liquefying. Not only is the water so crucial to life on Earth completely frozen into a thick mantle, as on the other outer moons, but other molecules important to life's beginnings here, such as ammonia and carbon dioxide, would be rock-hard solids at these temperatures as well. Moreover, any chemistry which could happen would occur at a pace that would make a snail look like a jack-rabbit on caffeine. Looking over all these factors, biology would seem to be a non-existent subject on Titan.

We shouldn't think so narrowly, however. Life does not require water so much as it needs some liquid medium, and as noted, compounds like methane and ethane, gases on Earth, do exist in liquid form both on Titan's surface and in its atmosphere. True, any biochemistry would proceed with agonizing slowness, but the solar system has been around for almost five billion years, and that might be just enough time for something to happen. We won't find anything resembling a … well, even a bacterium is probably pushing it … on Titan, but some kinds of self-replicating entities – the most basic definition of life – might exist there. Or whatever could lead up to such entities under more favorable conditions. Either way, when we do find out, we will certainly learn some lessons applicable to how life came to exist on Earth: what that requires, and what must be forbidden, for that grand event to occur. All of which makes the time and energy and resources necessary to do the finding out worth it.

* * *

Our robotic exploration of the solar system has rewarded us with much more than volcanoes and canyons and possible new niches for life. For one thing, knowing about a place is often the first step to going there; it is certainly a necessary step. I call the last forty-plus years the initial phase of the Great Age of Planetary Exploration, and there should be little doubt anymore that that is what it is. The twenty-first century will assuredly see us plant our footsteps on our neighboring worlds, the moon and Mars for certain, and the centuries to come will see their thorough colonization and exploitation.

What about beyond? We have come a long way in our travels in my lifetime, but at the same time, we have hardly begun to crack the door open. I loved astronomy as a child, but what excited me the most were not the planets but the stars. In reading about them, I learned that the stars were other suns like our own, possibly with their own worlds and God-knew-what on them, perhaps, one dared hope, some of them even people like ourselves: either way, it was and is an overwhelmingly staggering thought, especially when you contemplate how many stars there are.

Curiosity will eventually take us to the stars, but this is a journey that will take far longer and require far greater resources than exploring our own solar system, because the distances involved are so much vaster, by a factor of a million and more. So much greater that the journey will change what it means to be human in some ways – though our passion to know will hopefully remain intact. We cannot travel to the stars yet, but their light comes to us, rains down on us in fact from every direction we look. And light is a code which, when unlocked, reveals a universe more amazing than dreams.

The six-inch Newtonian reflector telescope I received for my eleventh birthday was a wondrous, magical device. With it, I could easily make out mountains and craters on the moon, view the planets as multi-colored discs along with their larger moons, resolve multiple star systems into their components, and in general enjoy many sights of the nighttime sky which the naked eye alone can never see. And yet the stars are so distant that they remained points of light in the blackness, brighter and more variously colored, yes, but points nevertheless. Even had that telescope been a more powerful device, the miles of air and dust and water vapor I would still have had to peer through would have smeared my vision with unending twinkling and wavering, rendering it of maddeningly limited use. Even the simple question of whether other stars besides our own possessed planetary systems – and so, possibly, life and intelligence – would have been forever beyond its capacities.

The most powerful telescopes humans have ever built can collect a thousand times and more as much light as my childhood toy. They are perhaps the ultimate monuments to our lust for knowledge and understanding, sitting on their mountaintops above much of our world's blurring atmosphere and now, in the form of the Hubble Space Telescope, even floating in space entirely beyond it. The ones on Earth wield corrective optics and sophisticated computer software to compensate for atmospheric disturbances. Not only do they gather much more light, but that light can be gathered over hours, even days, of viewing time and stored on sensitive electronics to be analyzed and manipulated using other ingenious software packages running on other powerful computers.

Light. It is a substance far more valuable than the most precious of metals (it is also far more mysterious, as Appendix A explains). Its greatest value is not merely in allowing us to see the universe around us, however. If you know how to decipher and decode it, and understand what comes out of doing so, light can tell you almost anything you could ever want to know about whatever you are gazing upon. I'm serious: it is that amazing a substance. For example, the science of spectroscopy, the analysis of light by wavelength, allows us to deduce the chemical composition of an object or substance simply from the light it creates, reflects, or transmits. This feature of light, discovered in the nineteenth century, has given us the elemental compositions of the stars and other astronomical objects, a gift we once thought we would never be granted. Light can also tell us the temperature of things and the ways their constituent atoms are chemically bonded together. Not a bad day's work for something we take so much for granted.

Human ingenuity and the laws of physics are a dynamic combination which seems to have no limits. The question of whether life and intelligence exist elsewhere in the universe hinges partly on whether planetary systems are common or a unique aberration of our own star. Unfortunately, merely looking through our telescopes, or even recording what comes from them with our most powerful technology, can't answer this most critical of questions: the light from even the dimmest star is so overpowering that it completely masks the feeble reflected glow of any planets it might own. It's like trying to pick out the tiny twinkle of a lit match sitting astride a lighthouse beacon's full fury.

Until the 1990s, that would have been the beginning and end of the quest. But light holds other secrets for the mind clever enough and determined enough to pry them out and exploit them. One of those secrets, which Edwin Hubble used in the 1920s to show that the universe is indeed expanding as Einstein's General Relativity (but not Einstein himself) predicted, is its ability to tell how fast an object is moving either toward or away from us. The so-called Doppler effect (see Appendix C for a fuller explanation) is more easily described using sound rather than light, but the principle is the same: when a sound-emitting object is approaching us, the distance between successive wave peaks is shortened because the object has moved part of that distance toward us in the meantime; when it is moving away from us, the distance is increased for the same reason. Thus, in the standard example, a train whistle's pitch drops suddenly as the train swoops by us.

The same modification of wavelength happens with light, although it is much smaller (because light travels so much faster). It is also trickier to use in an astronomical setting because, after all, we don’t know what the wavelength of the light is when the object is at rest! This is not a problem in planet-hunting, however, as we shall see. The other piece of cleverness in our scheme lies in the fact that, according to Newtonian physics, two gravitationally bound objects revolve around their common center of mass, a point not precisely at the center of either object; the common notion that the moon revolves about Earth, or Earth about the sun, arises because in these cases the larger object is so much more massive than the smaller that the center of gravity of the system is very close to the center of the larger object.
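Just how lopsided these centers of mass are is easy to check: the barycenter of two bodies sits at a distance from the larger body's center of d·m/(M+m), where d is their separation. A quick sketch with standard textbook masses and distances (my own illustration, not figures from the text):

```python
# Distance of a two-body center of mass from the larger body's
# center: r = d * m_small / (m_large + m_small)
def barycenter_offset(m_large, m_small, separation):
    return separation * m_small / (m_large + m_small)

# Earth-moon: the barycenter lies roughly 4,700 km from Earth's
# center -- well inside Earth's 6,371 km radius, which is why
# "the moon orbits Earth" is such a good approximation.
r_em = barycenter_offset(5.972e24, 7.35e22, 3.844e8)
print(f"Earth-moon barycenter: {r_em/1e3:.0f} km from Earth's center")

# Sun-Jupiter: the barycenter lies just *outside* the sun's
# ~696,000 km radius, so the sun itself traces a small circle
# around a point above its own surface.
r_sj = barycenter_offset(1.989e30, 1.898e27, 7.785e11)
print(f"Sun-Jupiter barycenter: {r_sj/1e3:.0f} km from the sun's center")
```

It is exactly this small circling of the star about the barycenter, driven by its unseen companions, that the planet-hunting technique sets out to catch.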

The basic picture starts to emerge: if a star has planets, then the star itself is revolving around the system's center of mass. This causes the star to wobble ever so slightly as its planet(s) revolve about it. We may or may not be able to detect this wobble; it depends on how large it is and, more importantly, on the angle of the wobble with respect to us. If the angle causes the star to alternately approach and recede from us, this will give rise to a, albeit very small, Doppler shift of its light from our vantage point. It is this regular, cyclic change in the shift we are interested in, which is why the rest wavelength is not important; from its size and other details, we can infer not only the existence of planets, but their masses and orbits. This, needless to say, is where the main difficulty of the technique comes into play: in the “ever so slightly” aspect of the wobble. Only the most resourceful analysis of a sufficiently large set of observational data has a prayer of picking this wobble out from all the other motions of a star and everything else in its vicinity.
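To put a number on “ever so slightly”: for a circular orbit, the star's reflex speed is roughly the planet's orbital speed scaled down by the planet-to-star mass ratio. A rough sketch for a Jupiter-like planet around a sun-like star, using round textbook values (my own illustration, not a calculation from the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
M_jup = 1.898e27     # kg
a = 7.785e11         # Jupiter's orbital radius, m

# Planet's circular orbital speed: v = sqrt(G M / a)
v_planet = math.sqrt(G * M_sun / a)

# Momentum balance about the barycenter: the star's reflex speed
# is the planet's speed scaled by the mass ratio.
v_star = v_planet * M_jup / M_sun
print(f"star's wobble speed: {v_star:.1f} m/s")   # ~12.5 m/s

# The resulting fractional Doppler shift is v/c --
# a few parts in a hundred million.
c = 2.998e8
print(f"fractional wavelength shift: {v_star/c:.1e}")
```

A walking-pace wobble in a star, smearing its spectral lines by parts in ten to the eighth: that is the needle the planet-hunters must pull out of the haystack of stellar motions.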

* * *

Resourcefulness is something Homo sapiens sapiens has never been in short supply of, and thanks to modern technology data can be almost as astronomical as the stars themselves. Assuming you can get enough time on the instruments, that is. Time on the most powerful telescopes in the world is difficult to get; curiosity combined with the size of the universe makes for far more research proposals than time will ever permit conducting. As a result, the powers that control access to them must be convinced that the time will be spent on something that is both worthwhile and possible to do, and convincing them is itself a challenge for the resourceful.

Whether our solar system, and by implication life and intelligence, is unique in the universe or not is a question that, at the end of the 1980s, appeared to be unanswerable in my lifetime.

Besides, ours was the only solar system we knew of. Straightforward physics suggests that the inner planets of a system should be terrestrial – composed of rock and metal, like Earth – and that the larger, gas and ice worlds will be found further out. Gas / ice worlds such as Jupiter, Saturn, Uranus, and Neptune are largely made from small molecules like hydrogen, helium, water, ammonia, and methane; these substances are volatile and are boiled off a newly forming world if it is too close to its sun, while further out they can condense in enormous quantities as they are by far the most common materials in the solar nebula.

So you expect Jovian worlds to be found only in stately orbits far from a star, if it has any. Nature, happily, has a way of not cooperating with our expectations – and of rewarding our willingness to test them. When the first extra-solar planet was discovered orbiting a sun-like star, 51 Pegasi, only some fifty light-years from our own solar system, it stunned the astronomical community by showing a mass approximately half of our Jupiter's while sitting in an orbit only some five million miles from its sun (as opposed to Earth's 93 million miles), with an orbital period of only some four and a quarter days. Similar systems were discovered in the ensuing years, also of gas giants in very close proximity to their stars.
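These numbers hang together, as Kepler's third law, T = 2π√(a³/GM), confirms: an orbit of roughly five million miles around a roughly solar-mass star does indeed take only a few days. A sketch with round values (51 Pegasi itself is slightly more massive than the sun, which shortens the period toward the observed four and a quarter days; assuming exactly one solar mass here for simplicity):

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30          # assume one solar mass, kg
a = 5e6 * 1609.34          # five million miles, converted to meters

# Kepler's third law for a small orbiting body:
# T = 2 * pi * sqrt(a^3 / (G M))
T = 2 * math.pi * math.sqrt(a**3 / (G * M_star))
print(f"orbital period: {T / 86400:.1f} days")   # ~4.6 days with these round numbers
```

Compare that with Earth's leisurely 365 days at 93 million miles: the third law's a³ dependence is what makes these "hot Jupiters" whip around their stars in under a week.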

In one sense, this should not have surprised us at all. Such planetary systems ought to be the first discovered because they are the easiest to detect: a large planet orbiting close to its sun produces the largest Doppler shift. It was just that no one had suspected such systems existed at all, or at most, expected them to be exceedingly rare. Gas giants, after all, could only form far from their parent stars; otherwise, as mentioned, the intense stellar radiation and stellar wind would blow the light elements away. Clearly, that was what had happened in Earth's solar system. So what had gone awry in systems such as 51 Pegasi?

The basic physics of planetary formation is likely to be correct. Therefore, 51 Pegasi b (the official designation of the planet) must have formed at more Jovian-like distances – a good one hundred or so times farther out than its present position – and various interactions with other bodies in the system, or even with other stars, must have since gradually spiraled it inward to its current orbit, very close to its sun. This hypothesis is not unreasonable; it was already known that planetary orbits could be highly unstable over time spans of billions of years. No doubt catastrophic interactions with other bodies in the 51 Pegasi system occurred in this time: smaller, closer, possibly terrestrial (even Earthlike) planets would have been bulldozed out of the system permanently, into cold interstellar space.

This just leads to the next question, however. Why has our own solar system apparently been so stable during its four and a half billion years of existence? If anything, the gas giants such as Jupiter and Saturn have done us a good turn, sweeping out smaller bodies that might otherwise have collided with us, or herding them into relatively stable asteroid belts. Have we been just incredibly fortunate in this regard? Why didn't Jupiter eject our own world – not to mention Mercury, Venus, Mars, and the Moon – into the interstellar abyss?

The number of additionally discovered systems similar to 51 Pegasi has made this question more than a trifle compelling. It suggests that systems harboring life-bearing worlds are rarer than we had supposed, their existence relying on a mixture of luck and physical laws of whose workings we still have but an inkling. It seems that once again, in our attempts to gratify our curiosity, we have only given it more fodder to feed on. One thing is for certain: time and again, our attempts to uncover the secret orderings of things humble us as to how little we still understand. We think we are taking the Russian dolls apart one by one, into ever deeper levels of understanding, only to find ourselves as baffled as when we had begun.

* * *

I am not trying to sound defeated. I do not believe that we are, or will be, defeated. Progress in knowledge, in science, does proceed. Little by little, our curiosity is satisfied. It is merely that it never proceeds in the nice, round, little steps we always expect it to. No, there are fits and starts, backtrackings where we seem worse off than when we had begun, strategic retreats here and there before we make the next jump forward. If anything, this makes the whole journey that much more exciting and fulfilling. At the end of each day, we can sit and watch the sunset, happy in what we have achieved and that much more edgy and restless for what tomorrow might bring. For we know that, like today, it will bring something. Not the nice, neat packages of knowledge that, actually, would have been quite boring to receive, but a mixture of new questions and mysteries with which we can set out for further explorations – with just enough genuine new understanding to leave us feeling satisfied. That is the way of knowledge, the path that curiosity invariably takes us down. Isn't it one filled with restless throbbing and hope? I believe that it is.

Furthermore, in the almost fifteen years since the discovery of the 51 Pegasi planet, astronomers have been aiming their instruments at the sky in the hope of finding more planetary systems – and not just any systems, but ones more like our own. They have been successful well beyond anyone's expectations. Over the last few years, systems have been found with planets more similar to our own; these include "super-Earths", rocky terrestrial worlds akin to our own but much larger, as well as giant planets smaller than our own gas giants. Some of these worlds have even revealed tantalizing traces of substances such as oxygen and water, absolutely essential to life as we know it. It seems quite likely now that over the next ten to twenty years we will discover Earth-like planets circling other stars in our galactic neighborhood. And where there is life, there is certainly the possibility of intelligence.

* * *

Well, I certainly hope I have whetted your appetite for what is to come. At this point, I myself must admit that it is uncertain just what ground I will cover, what areas will be explored, what mysteries will be unveiled. Perhaps that is as it should be. Curiosity is a passion that never lets you know for certain where it may lead. You only know it will go somewhere; that there will be a resting spot somewhere in the future you can perch upon and gaze at the territory covered, while the campfire dims and the last of the evening meal lingers on your palate.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...