
Wednesday, April 17, 2019

Logology (science of science)

From Wikipedia, the free encyclopedia


Logology ("the science of science") is the study of all aspects of science and of its practitioners—aspects philosophical, biological, psychological, societal, historical, political, institutional, financial. The term "logology" is used here as a synonym for the equivalent term "science of science" and the semi-equivalent term "sociology of science".

The term "logology" is back-formed from "-logy" (as in "geology", "anthropology", "sociology", etc.) in the sense of the "study of study" or the "science of science"—or, more plainly, the "study of science". The word "logology" provides grammatical variants not available with the earlier terms "science of science" and "sociology of science"—"logologist", "to logologize", "logological", "logologically".

Origins

The early 20th century brought calls, initially from sociologists, for the creation of a new, empirically based science that would study the scientific enterprise itself. The early proposals were put forward with some hesitancy. The new meta-science would be given a variety of names, including "science of knowledge", "science of science", "sociology of science", and "logology".

Florian Znaniecki, who is considered to be the founder of Polish academic sociology, and who in 1954 also served as the 44th president of the American Sociological Association, opened a 1923 article:
"[T]hough theoretical reflection on knowledge—which arose as early as Heraclitus and the Eleatics—stretches... unbroken... through the history of human thought to the present day... we are now witnessing the creation of a new science of knowledge [author's emphasis] whose relation to the old inquiries may be compared with the relation of modern physics and chemistry to the 'natural philosophy' that preceded them, or of contemporary sociology to the 'political philosophy' of antiquity and the Renaissance. [T]here is beginning to take shape a concept of a single, general theory of knowledge... permitting of empirical study.... This theory... is coming to be distinguished clearly from epistemology, from normative logic, and from a strictly descriptive history of knowledge."
A dozen years later, Polish husband-and-wife sociologists Stanisław Ossowski and Maria Ossowska (the Ossowscy) took up the same subject in an article on "The Science of Science" whose 1935 English-language version first introduced the term "science of science" to the world. The article postulated that the new discipline would subsume such earlier ones as epistemology, the philosophy of science, the "psychology of science", and the "sociology of science". Science of science would also concern itself with questions of a practical character such as social and state policy in relation to science; the organization of institutions of higher learning, of research institutes, and of scientific expeditions; the protection of scientific workers, etc. It would concern itself as well with historical questions: the history of the conception of science, of the scientist, of the various disciplines, and of learning in general.

In their 1935 paper, the Ossowscy mentioned the German philosopher Werner Schingnitz (1899–1953) who, in fragmentary 1931 remarks, had enumerated some possible types of research in the science of science and had proposed his own name for the new discipline: "scientiology". The Ossowscy took issue with the name: "Those who wish to replace the expression 'science of science' by a one-word term [that] sound[s] international, in the belief that only after receiving such a name [will] a given group of [questions be] officially dubbed an autonomous discipline, [might] be reminded of the name mathesiology, proposed long ago for similar purposes [by the French mathematician and physicist André-Marie Ampère (1775–1836)]."

Yet, before long, in Poland, the unwieldy three-word term "nauka o nauce" ("science of science") was replaced by the more versatile one-word term "naukoznawstwo" ("logology") and its natural variants: "naukoznawca" ("logologist"), "naukoznawczy" ("logological"), and "naukoznawczo" ("logologically"). And just after World War II, only 11 years after the Ossowscy's landmark 1935 paper, the year 1946 saw the founding of the Polish Academy of Sciences' quarterly Zagadnienia Naukoznawstwa (Logology) — long before similar journals in many other countries.

The new discipline also took root elsewhere—in English-speaking countries, without the benefit of a one-word name.

Science

The term

The term "science" (from the Latin "scientia", "knowledge") means somewhat different things in different languages. In the English language, "science", when unqualified, generally refers to the "natural", "exact", or "hard sciences". The corresponding terms in other languages, for example French, German, and Polish, refer to a broader domain that includes not only the exact sciences (logic and mathematics) and the natural sciences (physics, chemistry, biology, medicine, Earth sciences, geography, astronomy, etc.) but also the engineering sciences, social sciences (history, geography, psychology, physical anthropology, sociology, political science, economics, international relations, pedagogy, etc.), and humanities (philosophy, history, cultural anthropology, linguistics, etc.).

University of Amsterdam humanities professor Rens Bod points out that science—defined as a set of methods that describes and interprets observed or inferred phenomena, past or present, aimed at testing hypotheses and building theories—applies to such humanities fields as philology, art history, musicology, linguistics, archaeology, historiography, and literary studies.

Bod gives a historic example of scientific textual analysis: in 1440 the Italian philologist Lorenzo Valla exposed the Latin document Donatio Constantini (The Donation of Constantine)—which was used by the Catholic Church to legitimize its claim to lands in the Western Roman Empire—as a forgery. Valla used historical, linguistic, and philological evidence, including counterfactual reasoning, to rebut the document. Valla found words and constructions in the document that could not have been used by anyone in the time of Emperor Constantine I, at the beginning of the fourth century A.D. For example, the late Latin word feudum ("fief") referred to the feudal system, a medieval invention that did not exist before the seventh century A.D. Valla's methods were those of science, and inspired the later scientifically minded work of Dutch humanist Erasmus of Rotterdam (1466–1536), Leiden University professor Joseph Justus Scaliger (1540–1609), and philosopher Baruch Spinoza (1632–77).

Knowability

Science's search for the truth about various aspects of reality entails the question of the very knowability of reality. Philosopher Thomas Nagel writes: "[In t]he pursuit of scientific knowledge through the interaction between theory and observation... we test theories against their observational consequences, but we also question or reinterpret our observations in light of theory. (The choice between geocentric and heliocentric theories at the time of the Copernican revolution is a vivid example.) ... How things seem is the starting point for all knowledge, and its development through further correction, extension, and elaboration is inevitably the result of more seemings—considered judgments about the plausibility and consequences of different theoretical hypotheses. The only way to pursue the truth is to consider what seems true, after careful reflection of a kind appropriate to the subject matter, in light of all the relevant data, principles, and circumstances."

The question of knowability is approached from a different perspective by physicist-astronomer Marcelo Gleiser: "What we observe is not nature itself but nature as discerned through data we collect from machines. In consequence, the scientific worldview depends on the information we can acquire through our instruments. And given that our tools are limited, our view of the world is necessarily myopic. We can see only so far into the nature of things, and our ever shifting scientific worldview reflects this fundamental limitation on how we perceive reality." Gleiser cites the condition of biology before and after the invention of the microscope or gene sequencing; of astronomy before and after the telescope; of particle physics before and after colliders or fast electronics. "[T]he theories we build and the worldviews we construct change as our tools of exploration transform. This trend is the trademark of science."

Writes Gleiser: "There is nothing defeatist in understanding the limitations of the scientific approach to knowledge.... What should change is a sense of scientific triumphalism—the belief that no question is beyond the reach of scientific discourse.

"There are clear unknowables in science—reasonable questions that, unless currently accepted laws of nature are violated, we cannot find answers to. One example is the multiverse: the conjecture that our universe is but one among a multitude of others, each potentially with a different set of laws of nature. Other universes lie outside our causal horizon, meaning that we cannot receive or send signals to them. Any evidence for their existence would be circumstantial: for example, scars in the radiation permeating space because of a past collision with a neighboring universe."

Gleiser gives three further examples of unknowables, involving the origins of the universe; of life; and of mind:  "Scientific accounts of the origin of the universe are incomplete because they must rely on a conceptual framework to even begin to work: energy conservation, relativity, quantum physics, for instance. Why does the universe operate under these laws and not others?  Similarly, unless we can prove that only one or very few biochemical pathways exist from nonlife to life, we cannot know for sure how life originated on Earth. 

"For consciousness, the problem is the jump from the material to the subjective—for example, from firing neurons to the experience of pain or the color red. Perhaps some kind of rudimentary consciousness could emerge in a sufficiently complex machine. But how could we tell? How do we establish—as opposed to conjecture—that something is conscious?" Paradoxically, writes Gleiser, it is through our consciousness that we make sense of the world, even if imperfectly. "Can we fully understand something of which we are a part?"

Facts and theories

Theoretical physicist and mathematician Freeman Dyson explains that "[s]cience consists of facts and theories":  "Facts are supposed to be true or false. They are discovered by observers or experimenters. A scientist who claims to have discovered a fact that turns out to be wrong is judged harshly....

"Theories have an entirely different status. They are free creations of the human mind, intended to describe our understanding of nature. Since our understanding is incomplete, theories are provisional. Theories are tools of understanding, and a tool does not need to be precisely true in order to be useful. Theories are supposed to be more-or-less true... A scientist who invents a theory that turns out to be wrong is judged leniently."

Dyson cites a psychologist's description of how theories are born: "We can't live in a state of perpetual doubt, so we make up the best story possible and we live as if the story were true." Dyson writes: "The inventor of a brilliant idea cannot tell whether it is right or wrong." The passionate pursuit of wrong theories is a normal part of the development of science. Dyson cites, after Mario Livio, five famous scientists who made major contributions to the understanding of nature but also believed firmly in a theory that proved wrong.

Charles Darwin explained the evolution of life with his theory of natural selection of inherited variations, but he believed in a theory of blending inheritance that made the propagation of new variations impossible. He never read Gregor Mendel's studies that showed that the laws of inheritance would become simple when inheritance was considered as a random process. Though Darwin in 1866 did the same experiment that Mendel had, Darwin did not get comparable results because he failed to appreciate the statistical importance of using very large experimental samples. Eventually, Mendelian inheritance by random variation would, no thanks to Darwin, provide the raw material for Darwinian selection to work on.

William Thomson (Lord Kelvin) discovered basic laws of energy and heat, then used these laws to calculate an estimate of the age of the earth that was too short by a factor of fifty. He based his calculation on the belief that the earth's mantle was solid and could transfer heat from the interior to the surface only by conduction. It is now known that the mantle is partly fluid and transfers most of the heat by the far more efficient process of convection, which carries heat by a massive circulation of hot rock moving upward and cooler rock moving downward. Kelvin could see the eruptions of volcanoes bringing hot liquid from deep underground to the surface; but his skill in calculation blinded him to processes, such as volcanic eruptions, that could not be calculated.

Linus Pauling discovered the chemical structure of protein and proposed a completely wrong structure for DNA, which carries hereditary information from parent to offspring. Pauling guessed a wrong structure for DNA because he assumed that a pattern that worked for protein would also work for DNA. He overlooked the gross chemical differences between protein and DNA. Francis Crick and James Watson paid attention to the differences and found the correct structure for DNA that Pauling had missed a year earlier.

Astronomer Fred Hoyle discovered the process by which the heavier elements essential to life are created by nuclear reactions in the cores of massive stars. He then proposed a theory of the history of the universe known as steady-state cosmology, which has the universe existing forever without an initial Big Bang (as Hoyle derisively dubbed it). He held his belief in the steady state long after observations proved that the Big Bang had happened.

Albert Einstein discovered the theory of space, time, and gravitation known as general relativity, and then added a cosmological constant, later known as dark energy. Subsequently, Einstein withdrew his proposal of dark energy, believing it unnecessary. Long after his death, observations suggested that dark energy really exists, so that Einstein's addition to the theory may have been right; and his withdrawal, wrong.

To Mario Livio's five examples of scientists who blundered, Dyson adds a sixth: himself. Dyson had concluded, on theoretical principles, that what was to become known as the W-particle, a charged weak boson, could not exist. An experiment conducted at CERN, in Geneva, later proved him wrong. "With hindsight I could see several reasons why my stability argument would not apply to W-particles. [They] are too massive and too short-lived to be a constituent of anything that resembles ordinary matter."

Empiricism

Steven Weinberg, 1979 Nobel laureate in physics, and a historian of science, writes that the core goal of science has always been the same: "to explain the world"; and in reviewing earlier periods of scientific thought, he concludes that only since Isaac Newton has that goal been pursued more or less correctly. He decries the "intellectual snobbery" that Plato and Aristotle showed in their disdain for science's practical applications, and he holds Francis Bacon and René Descartes to have been the "most overrated" among the forerunners of modern science (they tried to prescribe rules for conducting science, which "never works").

Weinberg draws parallels between past and present science, as when a scientific theory is "fine-tuned" (adjusted) to make certain quantities equal, without any understanding of why they should be equal. Such adjusting vitiated the celestial models of Plato's followers, in which different spheres carrying the planets and stars were assumed, with no good reason, to rotate in exact unison. But, Weinberg writes, a similar fine-tuning also besets current efforts to understand the "dark energy" that is speeding up the expansion of the universe.

Ancient science has been described as having gotten off to a good start, then faltered. The doctrine of atomism, propounded by the pre-Socratic philosophers Leucippus and Democritus, was naturalistic, accounting for the workings of the world by impersonal processes, not by divine volitions. Nevertheless, these pre-Socratics come up short for Weinberg as proto-scientists, in that they apparently never tried to justify their speculations or to test them against evidence.

Weinberg believes that science faltered early on due to Plato's suggestion that scientific truth could be attained by reason alone, disregarding empirical observation, and due to Aristotle's attempt to explain nature teleologically—in terms of ends and purposes. Plato's ideal of attaining knowledge of the world by unaided intellect was "a false goal inspired by mathematics"—one that for centuries "stood in the way of progress that could be based only on careful analysis of careful observation." And it "never was fruitful" to ask, as Aristotle did, "what is the purpose of this or that physical phenomenon."

A scientific field in which the Greek and Hellenistic world did make progress was astronomy. This was partly for practical reasons: the sky had long served as compass, clock, and calendar. Also, the regularity of the movements of heavenly bodies made them simpler to describe than earthly phenomena. But not too simple: though the sun, moon and "fixed stars" seemed regular in their celestial circuits, the "wandering stars"—the planets—were puzzling; they seemed to move at variable speeds, and even to reverse direction. Writes Weinberg: "Much of the story of the emergence of modern science deals with the effort, extending over two millennia, to explain the peculiar motions of the planets."

The challenge was to make sense of the apparently irregular wanderings of the planets on the assumption that all heavenly motion is actually circular and uniform in speed. Circular, because Plato held the circle to be the most perfect and symmetrical form; and therefore circular motion, at uniform speed, was most fitting for celestial bodies. Aristotle agreed with Plato. In Aristotle's cosmos, everything had a "natural" tendency to motion that fulfilled its inner potential. For the cosmos' sublunary part (the region below the moon), the natural tendency was to move in a straight line: downward, for earthen things (such as rocks) and water; upward, for air and fiery things (such as sparks). But in the celestial realm things were not composed of earth, water, air, or fire, but of a "fifth element", or "quintessence," which was perfect and eternal. And its natural motion was uniformly circular. The stars, the sun, the moon, and the planets were carried in their orbits by a complicated arrangement of crystalline spheres, all centered around an immobile earth.

The Platonic-Aristotelian conviction that celestial motions must be circular persisted stubbornly. It was fundamental to the astronomer Ptolemy's system, which improved on Aristotle's in conforming to the astronomical data by allowing the planets to move in combinations of circles called "epicycles".

It even survived the Copernican revolution. Copernicus was conservative in his Platonic reverence for the circle as the heavenly pattern. According to Weinberg, Copernicus was motivated to dethrone the earth in favor of the sun as the immobile center of the cosmos largely by aesthetic considerations: he objected to the fact that Ptolemy, though faithful to Plato's requirement that heavenly motion be circular, had departed from Plato's other requirement that it be of uniform speed. By putting the sun at the center—actually, somewhat off-center—Copernicus sought to honor circularity while restoring uniformity. But to make his system fit the observations as well as Ptolemy's system, Copernicus had to introduce still more epicycles. That was a mistake that, writes Weinberg, illustrates a recurrent theme in the history of science: "A simple and beautiful theory that agrees pretty well with observation is often closer to the truth than a complicated ugly theory that agrees better with observation."

The planets, however, do not move in perfect circles but in ellipses. It was Johannes Kepler, about a century after Copernicus, who reluctantly (for he too had Platonic affinities) realized this. Thanks to his examination of the meticulous observations compiled by astronomer Tycho Brahe, Kepler "was the first to understand the nature of the departures from uniform circular motion that had puzzled astronomers since the time of Plato."

The replacement of circles by supposedly ugly ellipses overthrew Plato's notion of perfection as the celestial explanatory principle. It also destroyed Aristotle's model of the planets carried in their orbits by crystalline spheres; writes Weinberg, "there is no solid body whose rotation can produce an ellipse." Even if a planet were attached to an ellipsoid crystal, that crystal's rotation would still trace a circle. And if the planets were pursuing their elliptical motion through empty space, then what was holding them in their orbits?

Science had reached the threshold of explaining the world not geometrically, according to shape, but dynamically, according to force. It was Isaac Newton who finally crossed that threshold. He was the first to formulate, in his "laws of motion", the concept of force. He demonstrated that Kepler's ellipses were the very orbits the planets would take if they were attracted toward the sun by a force that decreased in inverse proportion to the square of the planet's distance from the sun. And by comparing the moon's motion in its orbit around the earth to the motion of, perhaps, an apple as it falls to the ground, Newton deduced that the forces governing them were quantitatively the same. "This," writes Weinberg, "was the climactic step in the unification of the celestial and terrestrial in science."
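In modern notation (an illustrative restatement, not Newton's own formulation; the constant G was introduced later), the inverse-square attraction described here is

    F = G \frac{M m}{r^{2}}

where F is the magnitude of the attractive force, G the gravitational constant, M and m the two masses (say, sun and planet), and r the distance between their centers. Under such a force, a bound orbit is an ellipse with the sun at one focus, which is Kepler's first law.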

By formulating a unified explanation of the behavior of planets, comets, moons, tides, and apples, writes Weinberg, Newton "provided an irresistible model for what a physical theory should be"—a model that fit no preexisting metaphysical criterion. In contrast to Aristotle, who claimed to explain the falling of a rock by appeal to its inner striving, Newton was unconcerned with finding a deeper cause for gravity. He declared in his Philosophiæ Naturalis Principia Mathematica: "I do not 'feign' hypotheses." What mattered were his mathematically stated principles describing this force, and their ability to account for a vast range of phenomena.

About two centuries later, in 1915, a deeper explanation for Newton's law of gravitation was found in Albert Einstein's general theory of relativity: gravity could be explained as a manifestation of the curvature in spacetime resulting from the presence of matter and energy. Successful theories like Newton's, writes Weinberg, may work for reasons that their creators do not understand—reasons that deeper theories will later reveal. Scientific progress is not a matter of building theories on a foundation of reason, but of unifying a greater range of phenomena under simpler and more general principles.

Artificial intelligence

Since 1950, when Alan Turing proposed what has come to be called the "Turing test," there has been speculation about whether machines such as computers can possess intelligence; and, if so, whether intelligent machines could become a threat to human intellectual and scientific ascendancy—or even an existential threat to humanity. John Searle points out common confusion about the correct interpretation of computation and information technology. "For example, one routinely reads that in exactly the same sense in which Garry Kasparov… beat Anatoly Karpov in chess, the computer called Deep Blue played and beat Kasparov.... [T]his claim is [obviously] suspect. In order for Kasparov to play and win, he has to be conscious that he is playing chess, and conscious of a thousand other things... Deep Blue is conscious of none of these things because it is not conscious of anything at all. Why is consciousness so important? You cannot literally play chess or do much of anything else cognitive if you are totally disassociated from consciousness."

Searle explains that, "in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer-independent, but the computation is observer-relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation.... There is no psychological reality at all to what is happening in the [computer]."

"[A] digital computer", writes Searle, "is a syntactical machine. It manipulates symbols and does nothing else. For this reason, the project of creating human intelligence by designing a computer program that will pass the Turing Test... is doomed from the start. The appropriately programmed computer has a syntax [rules for constructing or transforming the symbols and words of a language] but no semantics [comprehension of meaning].... Minds, on the other hand, have mental or semantic content."

Professor of psychology and neural science Gary Marcus points out a so far insuperable stumbling block to artificial intelligence: an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways. Our brain is so good at comprehending language that we do not usually notice." A prominent example is known as the "pronoun disambiguation problem" ("PDP"): a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.

Computer scientist Pedro Domingos writes: "AIs are like autistic savants and will remain so for the foreseeable future.... AIs lack common sense and can easily make errors that a human never would... They are also liable to take our instructions too literally, giving us precisely what we asked for instead of what we actually wanted."

Kai-Fu Lee, a Beijing-based venture capitalist and artificial-intelligence (AI) expert with a Ph.D. in computer science from Carnegie Mellon University, and author of the 2018 book AI Superpowers: China, Silicon Valley, and the New World Order, emphasized in a 2018 PBS Amanpour interview with Hari Sreenivasan that AI, with all its capabilities, will never be capable of creativity or empathy.

Discovery

Discoveries and inventions

Fifty years before Florian Znaniecki published his 1923 paper proposing the creation of an empirical field of study devoted to science itself, Aleksander Głowacki (better known by his pen name, Bolesław Prus) had made the same proposal. In an 1873 public lecture "On Discoveries and Inventions", Prus said:
Until now there has been no science that describes the means for making discoveries and inventions, and the generality of people, as well as many men of learning, believe that there never will be. This is an error. Someday a science of making discoveries and inventions will exist and will render services. It will arise not all at once; first only its general outline will appear, which subsequent researchers will emend and elaborate, and which still later researchers will apply to individual branches of knowledge.
Prus defines "discovery" as "the finding out of a thing that has existed and exists in nature, but which was previously unknown to people"; and "invention" as "the making of a thing that has not previously existed, and which nature itself cannot make."

He illustrates the concept of "discovery":
Until 400 years ago, people thought that the Earth comprised just three parts: Europe, Asia, and Africa; it was only in 1492 that the Genoese, Christopher Columbus, sailed out from Europe into the Atlantic Ocean and, proceeding ever westward, after [10 weeks] reached a part of the world that Europeans had never known. In that new land he found copper-colored people who went about naked, and he found plants and animals different from those in Europe; in short, he had discovered a new part of the world that others would later name "America." We say that Columbus had discovered America, because America had already long existed on Earth.
Prus illustrates the concept of "invention":
[As late as] 50 years ago, locomotives were unknown, and no one knew how to build one; it was only in 1828 that the English engineer [George] Stephenson built the first locomotive and set it in motion. So we say that Stephenson invented the locomotive, because this machine had not previously existed and could not by itself have come into being in nature; it could only have been made by man.
According to Prus, "inventions and discoveries are natural phenomena and, as such, are subject to certain laws." Those are the laws of "gradualness", "dependence", and "combination".
1. The law of gradualness. No discovery or invention arises at once perfected, but it is perfected gradually; likewise, no invention or discovery is the work of a single individual but of many individuals, each adding his little contribution. [...] Potatoes were first discovered; later they were found to make good cattle feed; then it was learned that potatoes could nourish people; and, later, potatoes began to be used for making vodka.

In regard to inventions, gradualness may be illustrated by the evolution of the stool. First people found that it was better to sit on a stump or a rock than on the ground. Then, noticing that a rock or a stump was too heavy to lug around, they built a stool consisting of a board and several legs. Next, to the stool they added a backrest, thus making a chair; to the chair, they added arm rests, making an armchair. Then they began painting and padding the armchairs and chairs, and so on.

2. The law of dependence. An invention or discovery is conditional on the prior existence of certain known discoveries and inventions. [...] If potatoes grew only in America, they could not have been discovered before America had been; if the black swan lives only in Australia, the black swan could not have been seen before Australia had been. If the rings of Saturn can be seen through telescopes, then the telescope had to have been invented before the rings could have been seen. [...]

3. The law of combination. Any new discovery or invention is a combination of earlier discoveries and inventions, or rests on them. When I study a new mineral, I inspect it, I smell it, I taste it, that is, I combine the mineral with my senses. Then I weigh it and heat it, which is to say, I combine the mineral with a balance and with fire. Then I place it into water, into sulfuric acid, and so forth, in short, I combine the mineral with everything that I have at hand and in this way I learn ever more of its properties. And as for inventions, who does not know that a clock is a combination of wheels, springs, dials, bells, etc.? Who does not know that gunpowder is a combination of sulfur, saltpeter and charcoal?
Each of Prus' three "laws" entails important corollaries. The law of gradualness implies the following:
a) Since every discovery and invention requires perfecting, let us not pride ourselves only on discovering or inventing something completely new, but let us also work to improve or get to know more exactly things that are already known and already exist.

b) The same law of gradualness demonstrates the necessity of expert training. Who can perfect a watch, if not a watchmaker with a good comprehensive knowledge of his métier? Who can discover new characteristics of an animal, if not a naturalist?
From the law of dependence flow the following corollaries:
a) No invention or discovery, even one seemingly without value, should be dismissed, because that particular trifle may later prove very useful. There would seem to be no simpler invention than the needle, yet the clothing of millions of people, and the livelihoods of millions of seamstresses, depend on the needle's existence. Even today's beautiful sewing machine would not exist, had the needle not long ago been invented.

b) The law of dependence teaches us that what cannot be done today, might be done later. People give much thought to the construction of a flying machine that could carry many persons and parcels. The inventing of such a machine will depend, among other things, on inventing a material that is, say, as light as paper and as sturdy and fire-resistant as steel.
Finally, Prus' corollaries to his law of combination:
a) Anyone who wants to be a successful inventor, needs to know a great many things—in the most diverse fields. For if a new invention is a combination of earlier inventions, then the inventor's mind is the ground on which, for the first time, various seemingly unrelated things combine. Example: The steam engine combines Rumford's double boiler, the pump, and the spinning wheel.

[…] What is the connection among zinc, copper, sulfuric acid, a magnet, a clock mechanism, and an urgent message? All these had to come together in the mind of the inventor of the telegraph…

The greater the number of inventions that come into being, the more things a new inventor must know; the first, earliest and simplest inventions were made by completely uneducated people—but today's inventions, particularly scientific ones, are products of the most highly educated minds.

b) A second corollary concerns societies that wish to have inventors. I said that a new invention is created by combining the most diverse objects; let us see where this takes us.

Suppose I want to make an invention, and someone tells me: Take 100 different objects and bring them into contact with one another, first two at a time, then three at a time, finally four at a time, and you will arrive at a new invention. Imagine that I take a burning candle, charcoal, water, paper, zinc, sugar, sulfuric acid, and so on, 100 objects in all, and combine them with one another, that is, bring into contact first two at a time: charcoal with flame, water with flame, sugar with flame, zinc with flame, sugar with water, etc. Each time, I shall see a phenomenon: thus, in fire, sugar will melt, charcoal will burn, zinc will heat up, and so on. Now I will bring into contact three objects at a time, for example, sugar, zinc and flame; charcoal, sugar and flame; sulfuric acid, zinc and water; etc., and again I shall experience phenomena. Finally I bring into contact four objects at a time, for example, sugar, zinc, charcoal, and sulfuric acid. Ostensibly this is a very simple method, because in this fashion I could make not merely one but a dozen inventions. But will such an effort not exceed my capability? It certainly will. A hundred objects, combined in twos, threes and fours, will make over 4 million combinations; so if I made 100 combinations a day, it would take me over 110 years to exhaust them all!

But if by myself I am not up to the task, a sizable group of people will be. If 1,000 of us came together to produce the combinations that I have described, then any one person would only have to carry out slightly more than 4,000 combinations. If each of us performed just 10 combinations a day, together we would finish them all in less than a year and a half: 1,000 people would make an invention which a single man would have to spend more than 110 years to make…

The conclusion is quite clear: a society that wants to win renown with its discoveries and inventions has to have a great many persons working in every branch of knowledge. One or a few men of learning and genius mean nothing today, or nearly nothing, because everything is now done by large numbers. I would like to offer the following simile: Inventions and discoveries are like a lottery; not every player wins, but from among the many players a few must win. The point is not that John or Paul, because they want to make an invention and because they work for it, shall make an invention; but where thousands want an invention and work for it, the invention must appear, as surely as an unsupported rock must fall to the ground.
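Prus' arithmetic can be checked directly (a minimal sketch in Python, added here for illustration; it is not part of the 1873 lecture):

    from math import comb

    objects = 100
    # pairs, triples, and quadruples drawn from 100 objects
    total = comb(objects, 2) + comb(objects, 3) + comb(objects, 4)
    print(total)                   # 4,087,875 -- "over 4 million combinations"
    print(total / 100 / 365)       # ~112 years at 100 combinations a day
    per_person = total / 1000      # ~4,088 -- "slightly more than 4,000" each
    print(per_person / 10 / 365)   # ~1.12 years at 10 a day -- under a year and a half

The counts match the figures in the lecture: a little over four million combinations, more than 110 years of work for one person, and well under a year and a half for a thousand people working together.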
But, asks Prus, "What force drives [the] toilsome, often frustrated efforts [of the investigators]? What thread will clew these people through hitherto unexplored fields of study?"
[T]he answer is very simple: man is driven to efforts, including those of making discoveries and inventions, by needs; and the thread that guides him is observation: observation of the works of nature and of man.

I have said that the mainspring of all discoveries and inventions is needs. In fact, is there any work of man that does not satisfy some need? We build railroads because we need rapid transportation; we build clocks because we need to measure time; we build sewing machines because the speed of [unaided] human hands is insufficient. We abandon home and family and depart for distant lands because we are drawn by curiosity to see what lies elsewhere. We forsake the society of people and we spend long hours in exhausting contemplation because we are driven by a hunger for knowledge, by a desire to solve the challenges that are constantly thrown up by the world and by life!

Needs never cease; on the contrary, they are always growing. While the pauper thinks about a piece of bread for lunch, the rich man thinks about wine after lunch. The foot traveler dreams of a rudimentary wagon; the railroad passenger demands a heater. The infant is cramped in its cradle; the mature man is cramped in the world. In short, everyone has his needs, and everyone desires to satisfy them, and that desire is an inexhaustible source of new discoveries, new inventions, in short, of all progress.

But needs are general, such as the needs for food, sleep and clothing; and special, such as needs for a new steam engine, a new telescope, a new hammer, a new wrench. To understand the former needs, it suffices to be a human being; to understand the latter needs, one must be a specialist—an expert worker. Who knows better than a tailor what it is that tailors need, and who better than a tailor knows how to find the right way to satisfy the need?

Now let us consider how observation can lead man to new ideas; and to that end, as an example, let us imagine how, more or less, clay products came to be invented.

Suppose that somewhere there lived on clayey soil a primitive people who already knew fire. When rain fell on the ground, the clay turned doughy; and if, shortly after the rain, a fire was set on top of the clay, the clay under the fire became fired and hardened. If such an event occurred several times, the people might observe and thereafter remember that fired clay becomes hard like stone and does not soften in water. One of the primitives might also, when walking on wet clay, have impressed deep tracks into it; after the sun had dried the ground and rain had fallen again, the primitives might have observed that water remains in those hollows longer than on the surface. Inspecting the wet clay, the people might have observed that this material can be easily kneaded in one's fingers and accepts various forms.

Some ingenious persons might have started shaping clay into various animal forms […] etc., including something shaped like a tortoise shell, which was in use at the time. Others, remembering that clay hardens in fire, might have fired the hollowed-out mass, thereby creating the first [clay] bowl.

After that, it was a relatively easy matter to perfect the new invention; someone else could discover clay more suitable for such manufactures; someone else could invent a glaze, and so on, with nature and observation at every step pointing out to man the way to invention.

[This example] illustrates how people arrive at various ideas: by closely observing all things and wondering about all things.
Take another example. [S]ometimes, in a pane of glass, we find disks and bubbles, looking through which we see objects more distinctly than with the naked eye. Suppose that an alert person, spotting such a bubble in a pane, took out a piece of glass and showed it to others as a toy. Possibly among them there was a man with weak vision who found that, through the bubble in the pane, he saw better than with the naked eye. Closer investigation showed that bilaterally convex glass strengthens weak vision, and in this way eyeglasses were invented. People may first have cut glass for eyeglasses from glass panes, but in time others began grinding smooth pieces of glass into convex lenses and producing proper eyeglasses.

The art of grinding eyeglasses was known almost 600 years ago. A couple of hundred years later, the children of a certain eyeglass grinder, while playing with lenses, placed one in front of another and found that they could see better through two lenses than through one. They informed their father about this curious occurrence, and he began producing tubes with two magnifying lenses and selling them as a toy. Galileo, the great Italian scientist, on learning of this toy, used it for a different purpose and built the first telescope.

This example, too, shows us that observation leads man by the hand to inventions. This example again demonstrates the truth of gradualness in the development of inventions, but above all also the fact that education amplifies man's inventiveness. A simple lens-grinder formed two magnifying glasses into a toy—while Galileo, one of the most learned men of his time, made a telescope. As Galileo's mind was superior to the craftsman's mind, so the invention of the telescope was superior to the invention of a toy.

The three laws [that have been discussed here] are immensely important and do not apply only to discoveries and inventions, but they pervade all of nature. An oak does not immediately become an oak but begins as an acorn, then becomes a seedling, later a little tree, and finally a mighty oak: we see here the law of gradualness. A seed that has been sown will not germinate until it finds sufficient heat, water, soil and air: here we see the law of dependence. Finally, no animal or plant, or even stone, is something homogeneous and single but is composed of various organs: here we see the law of combination.
Prus holds that, over time, the multiplication of discoveries and inventions has improved the quality of people's lives and has expanded their knowledge. "This gradual advance of civilized societies, this constant growth in knowledge of the objects that exist in nature, this constant increase in the number of tools and useful materials, is termed progress, or the growth of civilization." Conversely, Prus warns, "societies and people that do not make inventions or know how to use them, lead miserable lives and ultimately perish."

Reproducibility

A fundamental feature of the scientific enterprise is reproducibility of results. "For decades", writes Shannon Palus, "it has been... an open secret that a [considerable part] of the literature in some fields is plain wrong." This effectively sabotages the scientific enterprise and costs the world many billions of dollars annually in wasted resources. Militating against reproducibility is scientists' reluctance to share techniques, for fear of forfeiting their advantage to other scientists. Also, scientific journals and tenure committees tend to prize impressive new results over gradual advances that systematically build on the existing literature. Scientists who quietly fact-check others' work, or who spend extra time making their own protocols easy for other researchers to understand, gain little for themselves.

With a view to improving reproducibility of scientific results, it has been suggested that research-funding agencies finance only projects that include a plan for making their work transparent. In 2016 the U.S. National Institutes of Health introduced new application instructions and review questions to encourage scientists to improve reproducibility. The NIH requests more information on how the study builds on previous work, and a list of variables that could affect the study, such as the sex of animal subjects—a previously overlooked factor that led many studies to describe phenomena found in male animals as universal.

Likewise, the questions that a funder can ask in advance could be asked by journals and reviewers. One solution is "registered reports", a preregistration of studies whereby a scientist submits, for publication, research analysis and design plans before actually doing the study. Peer reviewers then evaluate the methodology, and the journal promises to print the results, no matter what they are. In order to prevent over-reliance on preregistered studies—which could encourage safer, less venturesome research, thus over-correcting the problem—the preregistered-studies model could be operated in tandem with the traditional results-focused model, which may sometimes be more friendly to serendipitous discoveries.

Rediscovery

A 2016 Scientific American report highlights the role of rediscovery in science. Indiana University Bloomington researchers combed through 22 million scientific papers published over the previous century and found dozens of "Sleeping Beauties"—studies that lay dormant for years before getting noticed. The top finds, which languished longest and later received the most intense attention from scientists, came from the fields of chemistry, physics, and statistics. The dormant findings were wakened by scientists from other disciplines, such as medicine, in search of fresh insights, and by the ability to test once-theoretical postulations. Sleeping Beauties will likely become even more common in the future because of increasing accessibility of scientific literature. The Scientific American report lists the top 15 Sleeping Beauties: 7 in chemistry, 5 in physics, 2 in statistics, and 1 in metallurgy. Examples include:

Herbert Freundlich's "Concerning Adsorption in Solutions" (1906), the first mathematical model of adsorption, the process by which atoms or molecules adhere to a surface. Today both environmental remediation and decontamination in industrial settings rely heavily on adsorption.
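In its commonly cited modern form (given here for illustration, not quoted from the 1906 paper), the Freundlich adsorption isotherm is the empirical relation

    q = K_F \, c^{1/n}

where q is the amount adsorbed per unit mass of adsorbent, c the equilibrium concentration of the adsorbate in solution, and K_F and n are empirically fitted constants.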

A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review, vol. 47 (May 15, 1935), pp. 777–780. This famous thought experiment in quantum physics—now known as the EPR paradox, after the authors' surname initials—was discussed theoretically when it first came out. It was not until the 1970s that physics had the experimental means to test quantum entanglement.

J[ohn] Turkevich, P. C. Stevenson, J. Hillier, "A Study of the Nucleation and Growth Processes in the Synthesis of Colloidal Gold", Discuss. Faraday Soc., 1951, 11, pp. 55–75, explains how to suspend gold nanoparticles in liquid. It owes its awakening to medicine, which now employs gold nanoparticles to detect tumors and deliver drugs.

William S. Hummers and Richard E. Offeman, "Preparation of Graphitic Oxide", Journal of the American Chemical Society, vol. 80, no. 6 (March 20, 1958), p. 1339, introduced Hummers' method, a technique for making graphite oxide. Recent interest in graphene's potential has brought the 1958 paper to attention. Graphite oxide could serve as a reliable intermediate for the 2-D material.

Multiple discovery

Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Sociologist Robert K. Merton defined such "multiples" as instances in which similar discoveries are made by scientists working independently of each other. "Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a new discovery which, unknown to him, somebody else has made years before." Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus by Isaac Newton, Gottfried Wilhelm Leibniz, and others; the 18th-century independent discovery of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier, and others; and the 19th-century independent formulation of the theory of evolution of species by Charles Darwin and Alfred Russel Wallace.

Merton contrasted a "multiple" with a "singleton" — a discovery that has been made uniquely by a single scientist or group of scientists working together. He believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science.

Multiple discoveries in the history of science provide evidence for evolutionary models of science and technology, such as memetics (the study of self-replicating units of culture), evolutionary epistemology (which applies the concepts of biological evolution to study of the growth of human knowledge), and cultural selection theory (which studies sociological and cultural evolution in a Darwinian manner). A recombinant-DNA-inspired "paradigm of paradigms", describing a mechanism of "recombinant conceptualization", predicates that a new concept arises through the crossing of pre-existing concepts and facts. This is what is meant when one says that a scientist, scholar, or artist has been "influenced by" another — etymologically, that a concept of the latter's has "flowed into" the mind of the former.

The phenomenon of multiple independent discoveries and inventions can be viewed as a consequence of Bolesław Prus' three laws of gradualness, dependence, and combination (see "Discoveries and inventions", above). The first two laws may, in turn, be seen as corollaries to the third, since the laws of gradualness and dependence imply that a given scientific or technological advance cannot occur until the theories, facts, or technologies that must be combined to produce it have become available.

Psychology of science

Nonconformance

A practical question concerns the traits that enable some individuals to achieve extraordinary results in their fields of work—and how such creativity can be fostered. Melissa Schilling, a student of innovation strategy, has identified some traits shared by eight major innovators in natural science or technology: Benjamin Franklin (1706–90), Thomas Edison (1847–1931), Nikola Tesla (1856–1943), Maria Skłodowska Curie (1867–1934), Dean Kamen (born 1951), Steve Jobs (1955–2011), Albert Einstein (1879–1955), and Elon Musk (born 1971).

Schilling chose innovators in natural science and technology rather than in other fields because she found much more consensus about important contributions to natural science and technology than, for example, to art or music. She further limited the set to individuals associated with multiple innovations. "When an individual is associated with only a single major invention, it is much harder to know whether the invention was caused by the inventor's personal characteristics or by simply being at the right place at the right time."

The eight individuals were all extremely intelligent, but "that is not enough to make someone a serial breakthrough innovator." Nearly all these innovators showed very high levels of social detachment, or separateness (a notable exception being Benjamin Franklin). "Their isolation meant that they were less exposed to dominant ideas and norms, and their sense of not belonging meant that even when exposed to dominant ideas and norms, they were often less inclined to adopt them." From an early age, they had all shown extreme faith in their ability to overcome obstacles—what psychology calls "self-efficacy".

"Most [of them, writes Schilling] were driven by idealism, a superordinate goal that was more important than their own comfort, reputation, or families. Nikola Tesla wanted to free mankind from labor through unlimited free energy and to achieve international peace through global communication. Elon Musk wants to solve the world's energy problems and colonize Mars. Benjamin Franklin was seeking greater social harmony and productivity through the ideals of egalitarianism, tolerance, industriousness, temperance, and charity. Marie Curie had been inspired by Polish Positivism's argument that Poland, which was under Tsarist Russian rule, could be preserved only through the pursuit of education and technological advance by all Poles—including women."

Most of the innovators also worked hard and tirelessly because they found work extremely rewarding. Some had an extremely high need for achievement. Many also appeared to find work autotelic—rewarding for its own sake. A surprisingly large portion of the breakthrough innovators have been autodidacts—self-taught persons—and excelled much more outside the classroom than inside.

"Almost all breakthrough innovation," writes Schilling, "starts with an unusual idea or with beliefs that break with conventional wisdom.... However, creative ideas alone are almost never enough. Many people have creative ideas, even brilliant ones. But usually we lack the time, knowledge, money, or motivation to act on those ideas." It is generally hard to get others' help in implementing original ideas because the ideas are often initially hard for others to understand and value. Thus each of Schilling's breakthrough innovators showed extraordinary effort and persistence. Even so, writes Schilling, "being at the right place at the right time still matter[ed]."

Lichenology

When Swiss botanist Simon Schwendener discovered in the 1860s that lichens were a symbiotic partnership between a fungus and an alga, his finding at first met with resistance from the scientific community. Schwendener found that the fungus, which cannot make its own food, provides the lichen's structure, while the alga contributes food through photosynthesis. It was later found that in some lichens a cyanobacterium, rather than an alga, provides the food, and that a handful of lichen species contain both an alga and a cyanobacterium, along with the fungus.

A self-taught naturalist, Trevor Goward, has helped create a paradigm shift in the study of lichens, and perhaps of all life-forms, by doing something that people did in pre-scientific times: going out into nature and closely observing. His essays about lichens were largely ignored by researchers because Goward has no scientific degrees and because some of his radical ideas are not supported by rigorous data.

When Goward told Toby Spribille, who at the time lacked a high-school education, about some of his lichenological ideas, Goward recalls, "He said I was delusional." Ultimately Spribille passed a high-school equivalency examination, obtained a Ph.D. in lichenology at the University of Graz in Austria, and became an assistant professor of the ecology and evolution of symbiosis at the University of Alberta. In July 2016 Spribille and his co-authors published a ground-breaking paper in Science revealing that many lichens contain a second fungus.

Spribille credits Goward with having "a huge influence on my thinking. [His essays] gave me license to think about lichens in [an unorthodox way] and freed me to see the patterns I worked out in Bryoria with my co-authors." Even so, "one of the most difficult things was allowing myself to have an open mind to the idea that 150 years of literature may have entirely missed the theoretical possibility that there would be more than one fungal partner in the lichen symbiosis." Spribille says that academia's emphasis on the canon of what others have established as important is inherently limiting.

Leadership

Contrary to previous studies indicating that higher intelligence makes for better leaders in various fields of endeavor, later research suggests that, at a certain point, a higher IQ can be viewed as harmful. Decades ago, psychologist Dean Simonton suggested that brilliant leaders' words may go over people's heads, their solutions could be more complicated to implement, and followers might find it harder to relate to them. Finally, in the July 2017 Journal of Applied Psychology, he and two colleagues published the results of actual tests of the hypothesis.

Studied were 379 men and women business leaders in 30 countries, including the fields of banking, retail, and technology. The managers took IQ tests—an imperfect but robust predictor of performance in many areas—and each was rated on leadership style and effectiveness by an average of 8 co-workers. IQ correlated positively with ratings of leadership effectiveness, strategy formation, vision, and several other characteristics—up to a point. The ratings peaked at an IQ of about 120, which is higher than some 80% of office workers. Beyond that, the ratings declined. The researchers suggested that the ideal IQ could be higher or lower in various fields, depending on whether technical or social skills are more valued in a given work culture.

Psychologist Paul Sackett, not involved in the research, comments: "To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers. The wrong interpretation would be, 'Don't hire high-IQ leaders.'" The study's lead author, psychologist John Antonakis, suggests that leaders should use their intelligence to generate creative metaphors that will persuade and inspire others. "I think the only way a smart person can signal their intelligence appropriately and still connect with the people," says Antonakis, "is to speak in charismatic ways."

Sociology of science

Specialization

Academic specialization produces great benefits for science and technology by focusing effort on discrete disciplines. But excessively narrow specialization can act as a roadblock to productive collaboration between traditional disciplines. 

In 2017, in Manhattan, James Harris Simons, a noted mathematician and retired founder of one of the world's largest hedge funds, inaugurated the Flatiron Institute, a nonprofit enterprise whose goal is to apply his hedge fund's analytical strategies to projects dedicated to expanding knowledge and helping humanity. He has established computational divisions for research in astrophysics, biology, and quantum physics, and an interdisciplinary division for climate modelling that interfaces geology, oceanography, atmospheric science, biology, and climatology.

The latter, fourth Flatiron Institute division was inspired by a 2017 presentation to the Institute's leadership by John Grotzinger, a "bio-geoscientist" from the California Institute of Technology, who explained the challenges of climate modelling. Grotzinger was a specialist in historical climate change—specifically, what had caused the great Permian extinction, during which virtually all species died. To properly assess this cataclysm, one had to understand both the rock record and the ocean's composition, but geologists did not interact much with physical oceanographers. Grotzinger's own best collaboration had resulted from a fortuitous lunch with an oceanographer. Climate modelling was an intrinsically difficult problem made worse by academia's structural divisions. "If you had it all under one umbrella... it could result [much sooner] in a major breakthrough." Simons and his team found Grotzinger's presentation compelling, and the Flatiron Institute decided to establish its fourth and final computational division.

Mentoring

Sociologist Harriet Zuckerman, in her 1977 study of natural-science Nobel laureates in the United States, was struck by the fact that more than half (48) of the 92 laureates who did their prize-winning research in the U.S. by 1972 had worked, as students, postdoctoral fellows, or junior collaborators, under older Nobel laureates. Furthermore, those 48 future laureates had worked under a total of 71 laureate masters.

Social viscosity ensures that not every qualified novice scientist attains access to the most productive centers of scientific thought. Nevertheless, writes Zuckerman, "To some extent, students of promise can choose masters with whom to work and masters can choose among the cohorts of students who present themselves for study. This process of bilateral assortative selection is conspicuously at work among the ultra-elite of science. Actual and prospective members of that elite select their scientist parents and therewith their scientist ancestors just as later they select their scientist progeny and therewith their scientist descendants."

Zuckerman writes: "[T]he lines of elite apprentices to elite masters who had themselves been elite apprentices, and so on indefinitely, often reach far back into the history of science, long before 1900, when [Alfred] Nobel's will inaugurated what now amounts to the International Academy of Sciences. As an example of the many long historical chains of elite masters and apprentices, consider the German-born English laureate Hans Krebs (1953), who traces his scientific lineage [...] back through his master, the 1931 laureate Otto Warburg. Warburg had studied with Emil Fis[c]her [1852–1919], recipient of a prize in 1902 at the age of 50, three years before it was awarded [in 1905] to his teacher, Adolf von Baeyer [1835–1917], at age 70. This lineage of four Nobel masters and apprentices has its own pre-Nobelian antecedents. Von Baeyer had been the apprentice of F[riedrich] A[ugust] Kekulé [1829–96], whose ideas of structural formulae revolutionized organic chemistry and who is perhaps best known for the often retold story about his having hit upon the ring structure of benzene in a dream (1865). Kekulé himself had been trained by the great organic chemist Justus von Liebig (1803–73), who had studied at the Sorbonne with the master J[oseph] L[ouis] Gay-Lussac (1778–1850), himself once apprenticed to Claude Louis Berthollet (1748–1822). Among his many institutional and cognitive accomplishments, Berthollet helped found the École Polytechnique, served as science advisor to Napoleon in Egypt, and, more significant for our purposes here, worked with [Antoine] Lavoisier [1743–94] to revise the standard system of chemical nomenclature."

Collaboration

Sociologist Michael P. Farrell has studied close creative groups and writes: "Most of the fragile insights that laid the foundation of a new vision emerged not when the whole group was together, and not when members worked alone, but when they collaborated and responded to one another in pairs." François Jacob, who, with Jacques Monod, pioneered the study of gene regulation, notes that by the mid-20th century most research in molecular biology was conducted by twosomes. "Two are better than one for dreaming up theories and constructing models," writes Jacob. "For with two minds working on a problem, ideas fly thicker and faster. They are bounced from partner to partner.... And in the process, illusions are sooner nipped in the bud." As of 2018, in the previous 35 years, about half of the Nobel Prizes in Physiology or Medicine had gone to scientific partnerships. James Somers describes a remarkable partnership between Google's top software engineers, Jeff Dean and Sanjay Ghemawat.

Twosome collaborations have also been prominent in creative endeavors outside the natural sciences and technology; examples are Monet's and Renoir's 1869 joint creation of Impressionism, Pablo Picasso's and Georges Braque's six-year collaborative creation of Cubism, and John Lennon's and Paul McCartney's collaborations on Beatles songs. "Everyone", writes James Somers, "falls into creative ruts, but two people rarely do so at the same time."

The same point was made by Francis Crick, a member of what may be history's most famous scientific duo: with James Watson, he discovered the structure of DNA, the genetic material. At the end of a PBS television documentary on Watson, a video clip shows Crick explaining to Watson that their collaboration had been crucial to the discovery: when one of them was wrong, the other would set him straight.

Politics

Big Science

What has been dubbed "Big Science" emerged from the United States' World War II Manhattan Project that produced the world's first nuclear weapons; and Big Science has since been associated with physics, which requires massive particle accelerators. In biology, Big Science debuted in 1990 with the Human Genome Project to sequence human DNA. In 2013 neuroscience became a Big Science domain when the U.S. announced a BRAIN Initiative and the European Union announced a Human Brain Project. Major new brain-research initiatives were also announced by Israel, Canada, Australia, New Zealand, Japan, and China.

Earlier successful Big Science projects had habituated politicians, mass media, and the public to view Big Science programs with sometimes uncritical favor.

The U.S.'s BRAIN Initiative was inspired by concern about the spread and cost of mental disorders and by excitement about new brain-manipulation technologies such as optogenetics. After some early false starts, the U.S. National Institute of Mental Health let the country's brain scientists define the BRAIN Initiative, and this led to an ambitious interdisciplinary program to develop new technological tools to better monitor, measure, and simulate the brain. Competition in research was ensured by the National Institute of Mental Health's peer-review process.

In the European Union, the European Commission's Human Brain Project got off to a rockier start, because political and economic considerations obscured questions about the feasibility of the Project's initial scientific program, which was based principally on computer modeling of neural circuits. In 2009, fearing that Europe would fall further behind the U.S. in computer and other technologies, the European Union had begun organizing a competition for Big Science projects, and the Human Brain Project's initial program seemed a good fit for a European effort to take the lead in advanced and emerging technologies. Only in 2015, after more than 800 European neuroscientists threatened to boycott the Europe-wide collaboration, were changes introduced into the Human Brain Project that supplanted many of the original political and economic considerations with scientific ones.

Funding

Government funding

Nathan Myhrvold, former Microsoft chief technology officer and founder of Microsoft Research, argues that the funding of basic science cannot be left to the private sector—that "without government resources, basic science will grind to a halt." He notes that Albert Einstein's general theory of relativity, published in 1915, did not spring full-blown from his brain in a eureka moment; he worked at it for years—finally driven to complete it by a rivalry with mathematician David Hilbert. The history of almost any iconic scientific discovery or technological invention—the lightbulb, the transistor, DNA, even the Internet—shows that the famous names credited with the breakthrough "were only a few steps ahead of a pack of competitors." Some writers and elected officials have used this phenomenon of "parallel innovation" to argue against public financing of basic research: government, they assert, should leave it to companies to finance the research they need.

Myhrvold writes that such arguments are dangerously wrong: without government support, most basic scientific research will never happen. "This is most clearly true for the kind of pure research that has delivered... great intellectual benefits but no profits, such as the work that brought us the Higgs boson, or the understanding that a supermassive black hole sits at the center of the Milky Way, or the discovery of methane seas on the surface of Saturn's moon Titan. Company research laboratories used to do this kind of work: experimental evidence for the big bang was discovered at AT&T's Bell Labs, resulting in a Nobel Prize. Now those days are gone."

Even in applied fields such as materials science and computer science, writes Myhrvold, "companies now understand that basic research is a form of charity—so they avoid it." Bell Labs scientists created the transistor, but that invention earned billions for Intel and Microsoft. Xerox PARC engineers invented the modern graphical user interface, but Apple and Microsoft profited most. IBM researchers pioneered the use of giant magnetoresistance to boost hard-disk capacity but soon lost the disk-drive business to Seagate and Western Digital.

Company researchers now have to focus narrowly on innovations that can quickly bring revenue; otherwise the research budget could not be justified to the company's investors. "Those who believe profit-driven companies will altruistically pay for basic science that has wide-ranging benefits—but mostly to others and not for a generation—are naive.... If government were to leave it to the private sector to pay for basic research, most science would come to a screeching halt. What research survived would be done largely in secret, for fear of handing the next big thing to a rival."

Private funding

A complementary perspective on the funding of scientific research is given by D.T. Max, writing about the Flatiron Institute, a computational center set up in 2017 in Manhattan to provide scientists with mathematical assistance. The Flatiron Institute was established by James Harris Simons, a mathematician who had used mathematical algorithms to make himself a Wall Street billionaire. The Institute has three computational divisions dedicated respectively to astrophysics, biology, and quantum physics, and is working on a fourth division for climate modeling that will involve interfaces of geology, oceanography, atmospheric science, biology, and climatology.

The Flatiron Institute is part of a trend in the sciences toward privately funded research. In the United States, basic science has traditionally been financed by universities or the government, but private institutes are often faster and more focused. Since the 1990s, when Silicon Valley began producing billionaires, private institutes have sprung up across the U.S. In 1997 Larry Ellison launched the Ellison Medical Foundation to study the biology of aging. In 2003 Paul Allen founded the Allen Institute for Brain Science. In 2010 Eric Schmidt founded the Schmidt Ocean Institute.

These institutes have done much good, partly by providing alternatives to more rigid systems. But private foundations also have liabilities. Wealthy benefactors tend to direct their funding toward their personal enthusiasms. And foundations are not taxed; much of the money that supports them would otherwise have gone to the government.

Funding biases

John P.A. Ioannidis, of Stanford University Medical School, writes that "There is increasing evidence that some of the ways we conduct, evaluate, report and disseminate research are miserably ineffective. A series of papers in 2014 in the Lancet... estimated that 85 percent of investment in biomedical research is wasted. Many other disciplines have similar problems." Ioannidis identifies some science-funding biases that undermine the efficiency of the scientific enterprise, and proposes solutions: 

Funding too few scientists: "[M]ajor success [in scientific research] is largely the result of luck, as well as hard work. The investigators currently enjoying huge funding are not necessarily genuine superstars; they may simply be the best connected." Solutions: "Use a lottery to decide which grant applications to fund (perhaps after they pass a basic review).... Shift... funds from senior people to younger researchers..." (A minimal sketch of such a grant lottery appears after this list of biases.)

No reward for transparency: "Many scientific protocols, analysis methods, computational processes and data are opaque. [M]any top findings cannot be reproduced. That is the case for two out of three top psychology papers, one out of three top papers in experimental economics and more than 75 percent of top papers identifying new cancer drug targets. [S]cientists are not rewarded for sharing their techniques." Solutions: "Create better infrastructure for enabling transparency, openness and sharing. Make transparency a prerequisite for funding. [P]referentially hire, promote or tenure... champions of transparency."

No encouragement for replication: Replication is indispensable to the scientific method. Yet, under pressure to produce new discoveries, researchers tend to have little incentive, and much counterincentive, to try replicating results of previous studies. Solutions: "Funding agencies must pay for replication studies. Scientists' advancement should be based not only on their discoveries but also on their replication track record."

No funding for young scientists: "Werner Heisenberg, Albert Einstein, Paul Dirac and Wolfgang Pauli made their top contributions in their mid-20s." But the average age of biomedical scientists receiving their first substantial grant is 46. The average age for a full professor in the U.S. is 55. Solutions: "A larger proportion of funding should be earmarked for young investigators. Universities should try to shift the aging distribution of their faculty by hiring more young investigators."

Biased funding sources: "Most funding for research and development in the U.S. comes not from the government but from private, for-profit sources, raising unavoidable conflicts of interest and pressure to deliver results favorable to the sponsor." Solutions: "Restrict or even ban funding that has overt conflicts of interest. Journals should not accept research with such conflicts. For less conspicuous conflicts, at a minimum ensure transparent and thorough disclosure."

Funding the wrong fields: "Well-funded fields attract more scientists to work for them, which increases their lobbying reach, fueling a vicious circle. Some entrenched fields absorb enormous funding even though they have clearly demonstrated limited yield or uncorrectable flaws." Solutions: "Independent, impartial assessment of output is necessary for lavishly funded fields. More funds should be earmarked for new fields and fields that are high risk. Researchers should be encouraged to switch fields, whereas currently they are incentivized to focus in one area."

Not spending enough: The U.S. military budget ($886 billion) is 24 times the budget of the National Institutes of Health ($37 billion). "Investment in science benefits society at large, yet attempts to convince the public often make matters worse when otherwise well-intentioned science leaders promise the impossible, such as promptly eliminating all cancer or Alzheimer's disease." Solutions: "We need to communicate how science funding is used by making the process of science clearer, including the number of scientists it takes to make major accomplishments.... We would also make a more convincing case for science if we could show that we do work hard on improving how we run it."

Rewarding big spenders: "Hiring, promotion and tenure decisions primarily rest on a researcher's ability to secure high levels of funding. But the expense of a project does not necessarily correlate with its importance. Such reward structures select mostly for politically savvy managers who know how to absorb money." Solutions: "We should reward scientists for high-quality work, reproducibility and social value rather than for securing funding. Excellent research can be done with little to no funding other than protected time. Institutions should provide this time and respect scientists who can do great work without wasting tons of money."

No funding for high-risk ideas: "The pressure that taxpayer money be 'well spent' leads government funders to back projects most likely to pay off with a positive result, even if riskier projects might lead to more important, but less assured, advances. Industry also avoids investing in high-risk projects... Innovation is extremely difficult, if not impossible, to predict..." Solutions: "Fund excellent scientists rather than projects and give them freedom to pursue research avenues as they see fit. Some institutions such as Howard Hughes Medical Institute already use this model with success." It must be communicated to the public and to policy-makers that science is a cumulative investment, that no one can know in advance which projects will succeed, and that success must be judged on the total agenda, not on a single experiment or result.

Lack of good data: "There is relatively limited evidence about which scientific practices work best. We need more research on research ('meta-research') to understand how to best perform, evaluate, review, disseminate and reward science." Solutions: "We should invest in studying how to get the best science and how to choose and reward the best scientists."
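The grant-lottery mechanism proposed above under "Funding too few scientists" could, in its simplest form, look like the following sketch. The review threshold, scores, and number of fundable slots are hypothetical, and a real funder would add many safeguards; the code only illustrates the idea of a random draw among applications that clear a basic review.

```python
# Minimal sketch of a "modified lottery" for grant funding: applications that pass
# a basic peer-review threshold enter a random draw for the available award slots.
# All names, scores, and thresholds here are hypothetical.
import random

review_scores = {                       # application id -> basic review score (0-10)
    "A": 7.8, "B": 6.1, "C": 8.9, "D": 5.2, "E": 7.0, "F": 6.6,
}
REVIEW_THRESHOLD = 6.0                  # must pass basic review to enter the lottery
FUNDABLE_SLOTS = 3                      # budget allows three awards

eligible = [app for app, score in review_scores.items() if score >= REVIEW_THRESHOLD]
random.seed(42)                         # fixed seed only so the example is reproducible
funded = random.sample(eligible, k=min(FUNDABLE_SLOTS, len(eligible)))
print("funded applications:", sorted(funded))
```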

Sexual bias

Claire Pomeroy, president of the Lasker Foundation, which is dedicated to advancing medical research, points out that women scientists continue to be subjected to discrimination in professional advancement.

Though the percentage of doctorates awarded to women in life sciences in the United States increased from 15 to 52 percent between 1969 and 2009, only a third of assistant professors and less than a fifth of full professors in biology-related fields in 2009 were women. Women make up only 15 percent of permanent department chairs in medical schools and barely 16 percent of medical-school deans.

The problem is a culture of unconscious bias that leaves many women feeling demoralized and marginalized. In one study, science faculty were given identical résumés in which the names and genders of two applicants were interchanged; both male and female faculty judged the male applicant to be more competent and offered him a higher salary.

Unconscious bias also appears as "microassaults" against women scientists: purportedly insignificant sexist jokes and insults that accumulate over the years and undermine confidence and ambition. Writes Claire Pomeroy: "Each time it is assumed that the only woman in the lab group will play the role of recording secretary, each time a research plan becomes finalized in the men's lavatory between conference sessions, each time a woman is not invited to go out for a beer after the plenary lecture to talk shop, the damage is reinforced."

"When I speak to groups of women scientists," writes Pomeroy, "I often ask them if they have ever been in a meeting where they made a recommendation, had it ignored, and then heard a man receive praise and support for making the same point a few minutes later. Each time the majority of women in the audience raise their hands. Microassaults are especially damaging when they come from a high-school science teacher, college mentor, university dean or a member of the scientific elite who has been awarded a prestigious prize—the very people who should be inspiring and supporting the next generation of scientists."

Sexual harassment

Sexual harassment is more prevalent in academia than in any other social sector except the military. A June 2018 report by the National Academies of Sciences, Engineering, and Medicine states that sexual harassment hurts individuals, diminishes the pool of scientific talent, and ultimately damages the integrity of science.

Paula Johnson, co-chair of the committee that drew up the report, describes some measures for preventing sexual harassment in science. One would be to replace trainees' individual mentoring with group mentoring, and to uncouple the mentoring relationship from the trainee's financial dependence on the mentor. Another way would be to prohibit the use of confidentiality agreements in connection with harassment cases.

Callisto, a novel approach to the reporting of sexual harassment that has been adopted by some institutions of higher education, lets aggrieved persons record date-stamped accounts of harassment without formally reporting them. The program lets users see whether others have recorded experiences involving the same individual, and share information anonymously.
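The matching behavior described above can be illustrated with a toy "information escrow": reports are held privately, and a reporter is only told that a match exists when someone else has named the same individual. The sketch below is a hypothetical illustration of that idea, not Callisto's actual design or code.

```python
# Toy "matching escrow" illustrating the idea behind Callisto-style reporting:
# records are held privately, and the system signals a match only when two or more
# reporters name the same individual. Hypothetical sketch, not Callisto's code.
from collections import defaultdict
from datetime import datetime, timezone

class MatchingEscrow:
    def __init__(self):
        self._reports = defaultdict(list)   # accused identifier -> list of stored entries

    def record(self, accused_id: str, reporter_id: str, note: str) -> bool:
        """Store a date-stamped report; return True if another reporter already named accused_id."""
        self._reports[accused_id].append({
            "reporter": reporter_id,
            "note": note,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
        other_reporters = {e["reporter"] for e in self._reports[accused_id]} - {reporter_id}
        return len(other_reporters) > 0

escrow = MatchingEscrow()
print(escrow.record("person_x", "reporter_1", "conference incident"))  # False: first report
print(escrow.record("person_x", "reporter_2", "lab incident"))         # True: a match now exists
```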

Deterrent stereotypes

Psychologist Andrei Cimpian and philosophy professor Sarah-Jane Leslie have proposed a theory to explain why American women and African-Americans are often subtly deterred from seeking to enter certain academic fields by a misplaced emphasis on genius. Cimpian and Leslie had noticed that their respective fields are similar in their substance but hold different views on what is important for success. Much more than psychologists, philosophers value a certain kind of person: the "brilliant superstar" with an exceptional mind. Psychologists are more likely to believe that the leading lights in psychology grew to achieve their positions through hard work and experience. In 2015, women accounted for less than 30% of doctorates granted in philosophy; African-Americans made up only 1% of philosophy Ph.D.s. Psychology, on the other hand, has been successful in attracting women (72% of 2015 psychology Ph.D.s) and African-Americans (6% of psychology Ph.D.s).

An early insight into these disparities was provided to Cimpian and Leslie by the work of psychologist Carol Dweck. She and her colleagues had shown that a person's beliefs about ability matter a great deal for that person's ultimate success. A person who sees talent as a stable trait is motivated to "show off this aptitude" and to avoid making mistakes. By contrast, a person who adopts a "growth mindset" sees his or her current capacity as a work in progress: for such a person, mistakes are not an indictment but a valuable signal highlighting which skills need work. Cimpian, Leslie, and their collaborators tested the hypothesis that a field's attitudes about "genius", and about the unacceptability of making mistakes, may account for that field's relative attractiveness to American women and African-Americans. They did so by contacting academic professionals from a wide range of disciplines and asking them whether they thought that some form of exceptional intellectual talent was required for success in their field. The answers received from almost 2,000 academics in 30 fields matched the distribution of Ph.D.s in the way that Cimpian and Leslie had expected: fields that placed more value on brilliance also conferred fewer Ph.D.s on women and African-Americans. The proportions of women and African-American Ph.D.s in psychology, for example, were higher than the corresponding proportions for philosophy, mathematics, or physics.

Further investigation showed that non-academics share similar ideas of which fields require brilliance. Exposure to these ideas at home or school could discourage young members of stereotyped groups from pursuing certain careers, such as those in the natural sciences or engineering. To explore this, Cimpian and Leslie asked hundreds of five-, six-, and seven-year-old boys and girls questions that measured whether they associated being "really, really smart" (i.e., "brilliant") with their sex. The results, published in January 2017 in Science, were consistent with scientific literature on the early acquisition of sex stereotypes. Five-year-old boys and girls showed no difference in their self-assessment; but by age six, girls were less likely to think that girls are "really, really smart." The authors next introduced another group of five-, six-, and seven-year-olds to unfamiliar gamelike activities that the authors described as being "for children who are really, really smart." Comparison of boys' and girls' interest in these activities at each age showed no sex difference at age five but significantly greater interest from boys at ages six and seven—exactly the ages when stereotypes emerge.

Cimpian and Leslie conclude that, "Given current societal stereotypes, messages that portray [genius or brilliance] as singularly necessary [for academic success] may needlessly discourage talented members of stereotyped groups."

Academic snobbery

Largely as a result of his growing popularity, astronomer and science popularizer Carl Sagan, creator of the 1980 PBS TV series Cosmos, came to be ridiculed by his scientific peers; he was denied tenure at Harvard University in the 1960s and membership in the National Academy of Sciences in the 1990s. The eponymous "Sagan effect" persists: as a group, scientists still discourage individual investigators from engaging with the public unless they are already well-established senior researchers.

The operation of the Sagan effect deprives society of the full range of expertise needed to make informed decisions about complex questions, including genetic engineering, climate change, and energy alternatives. Fewer scientific voices mean fewer arguments to counter antiscience or pseudoscientific discussion. The Sagan effect also creates the false impression that science is the domain of older white men (who dominate the senior ranks), thereby tending to discourage women and minorities from considering science careers.

A number of factors contribute to the Sagan effect's durability. At the height of the Scientific Revolution in the 17th century, many researchers emulated the example of Isaac Newton, who dedicated himself to physics and mathematics and never married. These scientists were viewed as pure seekers of truth who were not distracted by more mundane concerns. Similarly, today anything that takes scientists away from their research, such as having a hobby or taking part in public debates, can undermine their credibility as researchers.

Another, more prosaic factor in the Sagan effect's persistence may be professional jealousy.

However, there appear to be some signs that engaging with the rest of society is becoming less hazardous to a career in science. So many people have social-media accounts now that becoming a public figure is not as unusual for scientists as previously. Moreover, as traditional funding sources stagnate, going public sometimes leads to new, unconventional funding streams. A few institutions such as Emory University and the Massachusetts Institute of Technology may have begun to appreciate outreach as an area of academic activity, in addition to the traditional roles of research, teaching, and administration. Exceptional among federal funding agencies, the National Science Foundation now officially favors popularization.

Institutional snobbery

Like infectious diseases, ideas in academia are contagious. But why some ideas gain great currency while equally good ones remain in relative obscurity has been unclear. A team of computer scientists has used an epidemiological model to simulate how ideas move from one academic institution to another. The model-based findings, published in October 2018, show that ideas originating at prestigious institutions cause bigger "epidemics" than equally good ideas from less prominent places. The finding reveals a significant weakness in how science is done: many highly trained people with good ideas do not obtain posts at the most prestigious institutions, and much good work published from less prestigious places is overlooked simply because other scientists and scholars are not paying attention to it.
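The kind of model described above can be caricatured in a few lines of code: an idea spreads between institutions like an infection, with the chance of transmission scaled by the prestige of the institution it is coming from. The sketch below uses invented parameters and random prestige scores; it is a toy illustration of the mechanism, not the model from the 2018 study.

```python
# Toy susceptible-infected ("SI") spread of an idea across institutions, where the
# probability of transmission is scaled by the prestige of the transmitting institution.
# All parameters are invented for illustration; this is not the published model.
import random

random.seed(1)
N_INSTITUTIONS = 200
STEPS = 15
BASE_RATE = 0.02

prestige = [random.random() for _ in range(N_INSTITUTIONS)]   # 0 = obscure, 1 = elite

def epidemic_size(origin: int) -> int:
    """Return how many institutions have adopted the idea after STEPS rounds."""
    adopted = {origin}
    for _ in range(STEPS):
        new = set()
        for src in adopted:
            for dst in range(N_INSTITUTIONS):
                if dst not in adopted and random.random() < BASE_RATE * prestige[src]:
                    new.add(dst)
        adopted |= new
    return len(adopted)

elite_origin = max(range(N_INSTITUTIONS), key=lambda i: prestige[i])
obscure_origin = min(range(N_INSTITUTIONS), key=lambda i: prestige[i])
print("spread from an elite origin:   ", epidemic_size(elite_origin))
print("spread from an obscure origin: ", epidemic_size(obscure_origin))
```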

Harold Urey

From Wikipedia, the free encyclopedia

Harold Urey

Born: Harold Clayton Urey, April 29, 1893
Died: January 5, 1981 (aged 87)
Nationality: United States
Fields: Physical chemistry
Doctoral advisor: Gilbert N. Lewis

Harold Clayton Urey (April 29, 1893 – January 5, 1981) was an American physical chemist whose pioneering work on isotopes earned him the Nobel Prize in Chemistry in 1934 for the discovery of deuterium. He played a significant role in the development of the atom bomb, as well as contributing to theories on the development of organic life from non-living matter.

Born in Walkerton, Indiana, Urey studied thermodynamics under Gilbert N. Lewis at the University of California. After he received his PhD in 1923, he was awarded a fellowship by the American-Scandinavian Foundation to study at the Niels Bohr Institute in Copenhagen. He was a research associate at Johns Hopkins University before becoming an associate professor of chemistry at Columbia University. In 1931, he began work on the separation of isotopes that resulted in the discovery of deuterium.

During World War II, Urey turned his knowledge of isotope separation to the problem of uranium enrichment. He headed the group at Columbia University that developed isotope separation by gaseous diffusion, which became the sole enrichment method used in the early post-war period. After the war, Urey became professor of chemistry at the Institute for Nuclear Studies, and later Ryerson professor of chemistry, at the University of Chicago.

Urey speculated that the early terrestrial atmosphere was composed of ammonia, methane, and hydrogen. One of his Chicago graduate students was Stanley L. Miller, who showed in the Miller–Urey experiment that, if such a mixture were exposed to electric sparks and water, it could interact to produce amino acids, commonly considered the building blocks of life. Work with isotopes of oxygen led Urey to pioneer the new field of paleoclimatic research. In 1958, he accepted a post as a professor at large at the new University of California, San Diego (UCSD), where he helped create the science faculty. He was one of the founding members of UCSD's school of chemistry, which was created in 1960. He became increasingly interested in space science, and when Apollo 11 returned samples of moon rock, Urey examined them at the Lunar Receiving Laboratory. Lunar astronaut Harrison Schmitt said that Urey approached him as a volunteer for a one-way mission to the Moon, stating "I will go, and I don't care if I don't come back."

Early life

Harold Clayton Urey was born on April 29, 1893, in Walkerton, Indiana, the son of Samuel Clayton Urey, a school teacher and a minister in the Church of the Brethren, and his wife Cora Rebecca née Reinoehl. He had a younger brother, Clarence, and a younger sister, Martha. The family moved to Glendora, California, but moved back to Indiana to live with Cora's widowed mother when Samuel became seriously ill with tuberculosis. He died when Urey was six years old.
Urey was educated in an Amish grade school, from which he graduated at the age of 14. He then attended high school in Kendallville, Indiana. After graduating in 1911, he obtained a teacher's certificate from Earlham College and taught in a small school house in Indiana. He later moved to Montana, where his mother was then living, and continued to teach there. Urey entered the University of Montana in Missoula in the autumn of 1914 and earned a Bachelor of Science (BS) degree in zoology in 1917. After the United States' entry into World War I that year, Urey took a wartime job with the Barrett Chemical Company in Philadelphia, making TNT. After the war, he returned to the University of Montana as an instructor in chemistry.
An academic career required a doctorate, so in 1921 Urey enrolled in a PhD program at the University of California, Berkeley, where he studied thermodynamics under Gilbert N. Lewis. His initial attempt at a thesis was on the ionization of cesium vapor. He ran into difficulties, and Meghnad Saha published a better paper on the same subject. Urey then wrote his thesis on the ionization states of an ideal gas, which was subsequently published in the Astrophysical Journal. After he received his PhD in 1923, Urey was awarded a fellowship by the American-Scandinavian Foundation to study at the Niels Bohr Institute in Copenhagen, where he met Werner Heisenberg, Hans Kramers, Wolfgang Pauli, Georg von Hevesy, and John Slater. At the conclusion of his stay, he traveled to Germany, where he met Albert Einstein and James Franck.
On returning to the United States, Urey received an offer of a National Research Council fellowship to Harvard University, and also received an offer to be a research associate at Johns Hopkins University. He chose the latter. Before taking up the job, he traveled to Seattle, Washington, to visit his mother. On the way, he stopped by Everett, Washington, where he knew a woman called Kate Daum. Kate introduced Urey to her sister, Frieda. Urey and Frieda soon became engaged. They were married at her father's house in Lawrence, Kansas, in 1926. The couple had four children: Gertrude Bessie (Elisabeth), born in 1927; Frieda Rebecca, born in 1929; Mary Alice, born in 1934; and John Clayton Urey, born in 1939.
At Johns Hopkins, Urey and Arthur Ruark wrote Atoms, Quanta and Molecules (1930), one of the first English texts on quantum mechanics and its applications to atomic and molecular systems. In 1929, Urey became an associate professor of Chemistry at Columbia University, where his colleagues included Rudolph Schoenheimer, David Rittenberg, and T. I. Taylor.

Deuterium

In the 1920s, William Giauque and Herrick L. Johnston discovered the stable isotopes of oxygen. Isotopes were not well understood at the time; James Chadwick would not discover the neutron until 1932. Two systems were in use for classifying isotopes, one based on chemical properties and the other on physical properties, the latter determined using the mass spectrograph. Since it was known that the atomic weight of oxygen was almost exactly 16 times that of hydrogen, Raymond Birge and Donald Menzel hypothesized that hydrogen, too, had more than one isotope. Based on the difference between the results of the two methods, they predicted that only one hydrogen atom in 4,500 was of the heavy isotope.
In 1931, Urey set out to find it. Urey and George Murphy calculated from the Balmer series that the heavy isotope's lines should be shifted by 1.1 to 1.8 ångströms (1.1×10⁻¹⁰ to 1.8×10⁻¹⁰ metres). Urey had access to a 21-foot (6.4 m) grating spectrograph, a sensitive device that had recently been installed at Columbia and was capable of resolving the Balmer series. With a dispersion of 1 Å per millimetre, the expected shift would appear on the photographic plate as a separation of roughly 1 to 2 millimetres. However, since only one atom in 4,500 was heavy, the heavy isotope's line on the plate was very faint. Urey therefore decided to delay publishing their results until he had more conclusive evidence that it was heavy hydrogen.
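The size of the expected shift follows from the dependence of the Rydberg constant on the nuclear mass. The back-of-the-envelope calculation below is added for illustration, using only standard constants; it is not Urey and Murphy's original working. For a nucleus of mass $M$, the Rydberg constant is $R_M = R_\infty/(1 + m_e/M)$, so the fractional wavelength difference between corresponding hydrogen and deuterium lines is approximately

$$\frac{\Delta\lambda}{\lambda} \approx m_e\left(\frac{1}{M_\mathrm{H}} - \frac{1}{M_\mathrm{D}}\right) \approx \frac{m_e}{2m_p} \approx 2.7\times10^{-4}.$$

Applied to the Balmer lines, this gives shifts of roughly $6563\,\text{Å} \times 2.7\times10^{-4} \approx 1.8\,\text{Å}$ for H$\alpha$ and $4102\,\text{Å} \times 2.7\times10^{-4} \approx 1.1\,\text{Å}$ for H$\delta$, with the deuterium lines lying at slightly shorter wavelengths than those of ordinary hydrogen.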
Urey and Murphy calculated from the Debye model that the heavy isotope would have a slightly higher boiling point than the light one. By carefully evaporating liquid hydrogen, 5 litres could be distilled down to 1 millilitre that would be enriched in the heavy isotope by 100 to 200 times. To obtain five litres of liquid hydrogen, they traveled to the cryogenics laboratory at the National Bureau of Standards in Washington, D.C., where they obtained the help of Ferdinand Brickwedde, whom Urey had known at Johns Hopkins.
The first sample that Brickwedde sent was evaporated at 20 K (−253.2 °C; −423.7 °F) at a pressure of 1 standard atmosphere (100 kPa). To their surprise, this showed no evidence of enrichment. Brickwedde then prepared a second sample evaporated at 14 K (−259.1 °C; −434.5 °F) at a pressure of 53 mmHg (7.1 kPa). On this sample, the Balmer lines for heavy hydrogen were seven times as intense. The paper announcing the discovery of what we now call deuterium was jointly published by Urey, Murphy, and Brickwedde in 1932. Urey was awarded the Nobel Prize in Chemistry in 1934 "for his discovery of heavy hydrogen". He declined to attend the ceremony in Stockholm, so that he could be present at the birth of his daughter Mary Alice.
Working with Edward W. Washburn of the Bureau of Standards, Urey subsequently discovered the reason for the anomalous first sample: Brickwedde's hydrogen had been separated from water by electrolysis, which left it depleted in the heavy isotope. Moreover, Francis William Aston now reported that his earlier value for the atomic weight of hydrogen had been wrong, thereby invalidating Birge and Menzel's original reasoning. The discovery of deuterium stood, however.
Urey and Washburn attempted to use electrolysis to create pure heavy water. Their technique was sound, but they were beaten to it in 1933 by Lewis, who had the resources of the University of California at his disposal. Using the Born–Oppenheimer approximation, Urey and David Rittenberg calculated the properties of gases containing hydrogen and deuterium. They extended this to enriching compounds of carbon, nitrogen, and oxygen. These could be used as tracers in biochemistry, resulting in a whole new way of examining chemical reactions. He founded the Journal of Chemical Physics in 1932, and was its first editor, serving in that capacity until 1940.
At Columbia, Urey chaired the University Federation for Democracy and Intellectual Freedom. He supported Atlanticist Clarence Streit's proposal for a federal union of the world's major democracies, and the republican cause during the Spanish Civil War. He was an early opponent of German Nazism and assisted refugee scientists, including Enrico Fermi, by helping them find work in the United States, and to adjust to life in a new country.

Manhattan Project

By the time World War II broke out in Europe in 1939, Urey was recognized as a world expert on isotope separation. Thus far, separation had involved only the light elements. In 1939 and 1940, Urey published two papers on the separation of heavier isotopes in which he proposed centrifugal separation. This work assumed great importance because of Niels Bohr's speculation that uranium-235 was fissile. Because it was considered "very doubtful whether a chain reaction can be established without separating 235 from the rest of the uranium," Urey began intensive studies of how uranium enrichment might be achieved. Apart from centrifugal separation, George Kistiakowsky suggested that gaseous diffusion might be a possible method. A third possibility was thermal diffusion. Urey coordinated all isotope-separation research efforts, including the effort to produce heavy water, which could be used as a neutron moderator in nuclear reactors.
In May 1941, Urey was appointed to the Committee on Uranium, which oversaw the uranium project as part of the National Defense Research Committee (NDRC). In 1941, Urey and George B. Pegram led a diplomatic mission to England to establish co-operation on development of the atomic bomb. The British were optimistic about gaseous diffusion, but it was clear that both the gaseous and centrifugal methods faced formidable technical obstacles. In May 1943, as the Manhattan Project gained momentum, Urey became head of the wartime Substitute Alloy Materials Laboratories (SAM Laboratories) at Columbia, which was responsible for heavy water and for all the isotope-enrichment processes except Ernest Lawrence's electromagnetic process.
Early reports on the centrifugal method indicated that it was not as efficient as predicted. Urey suggested that a more efficient but technically more complicated countercurrent system be used instead of the previous flow-through method. By November 1941, technical obstacles seemed formidable enough for the process to be abandoned. Countercurrent centrifuges were developed after the war, and today are the favored method in many countries.
The gaseous diffusion process remained more encouraging, although it too had technical obstacles to overcome. By the end of 1943, Urey had over 700 people working for him on gaseous diffusion. The process involved cascades of hundreds of stages, in which corrosive uranium hexafluoride diffused through porous barriers, becoming progressively more enriched at every stage. A major problem was finding proper seals for the pumps, but by far the greatest difficulty lay in constructing an appropriate diffusion barrier. Construction of the huge K-25 gaseous diffusion plant was well under way before a suitable barrier became available in quantity in 1944. As a backup, Urey championed thermal diffusion.
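The scale of the plant follows directly from how weak each diffusion stage is. The illustrative figures below are standard textbook values added for context, not taken from the article. For effusion of $^{235}$UF$_6$ (molecular mass 349) against $^{238}$UF$_6$ (molecular mass 352), the ideal single-stage separation factor is

$$\alpha = \sqrt{\frac{352}{349}} \approx 1.0043.$$

Because each stage multiplies the $^{235}$U/$^{238}$U abundance ratio by at most $\alpha$, raising that ratio from its natural value (about 0.7 % $^{235}$U) to highly enriched levels requires on the order of

$$N \approx \frac{\ln\!\left(R_{\text{product}}/R_{\text{feed}}\right)}{\ln \alpha} \sim 10^{3}\ \text{stages},$$

which is why the gaseous-diffusion plant had to be built as an enormous cascade rather than a single pass.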
Worn out by the effort, Urey left the project in February 1945, handing over his responsibilities to R. H. Crist. The K-25 plant commenced operation in March 1945, and as the bugs were worked out, the plant operated with remarkable efficiency and economy. For a time, uranium was fed first into the S-50 liquid thermal diffusion plant, then into the K-25 gaseous diffusion plant, and finally into the Y-12 electromagnetic separation plant; but soon after the war ended, the thermal and electromagnetic plants were closed down, and separation was performed by K-25 alone. Along with its twin, K-27, constructed in 1946, it became the principal isotope separation plant of the early post-war period. For his work on the Manhattan Project, Urey was awarded the Medal for Merit by the Project's director, Major General Leslie R. Groves, Jr.

Post-war years

After the war, Urey became professor of chemistry at the Institute for Nuclear Studies, and then became Ryerson professor of chemistry at the University of Chicago in 1952. He did not continue his pre-war research with isotopes. However, applying the knowledge gained with hydrogen to oxygen, he realized that the fractionation between carbonate and water for oxygen-18 and oxygen-16 would decrease by a factor of 1.04 between 0 and 25 °C (32 and 77 °F). The ratio of the isotopes could then be used to determine average temperatures, assuming that the measurement equipment was sufficiently sensitive. The team included his colleague Ralph Buchsbaum. Examination of a 100-million-year-old belemnite then indicated the summer and winter temperatures that it had lived through over a period of four years. For this pioneering paleoclimatic research, Urey was awarded the Arthur L. Day Medal by the Geological Society of America, and the Goldschmidt Medal of the Geochemical Society.
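The oxygen-isotope thermometer works by comparing the ¹⁸O/¹⁶O ratio locked into a carbonate shell with that of the water in which it grew. As a sketch of the standard notation, added here for illustration, isotope ratios are reported as

$$\delta^{18}\mathrm{O} = \left(\frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{standard}}} - 1\right)\times 1000\ \text{‰},$$

and paleotemperature scales derived from Urey's idea take roughly the form

$$T(^{\circ}\mathrm{C}) \approx 16.5 - 4.3\,(\delta_c - \delta_w) + 0.14\,(\delta_c - \delta_w)^2,$$

where $\delta_c$ is measured on the carbonate (for example, a belemnite shell) and $\delta_w$ on the water; the coefficients shown are approximate values from one commonly quoted calibration, not figures given in this article. Because the carbonate–water fractionation shrinks as temperature rises, a lower $\delta_c$ records a warmer growth season.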
Urey actively campaigned against the 1946 May-Johnson bill because he feared that it would lead to military control of nuclear energy, but he supported and fought for the McMahon bill that replaced it and ultimately created the Atomic Energy Commission. Urey's commitment to the ideal of world government dated from before the war, but the possibility of nuclear war only made it more urgent in his mind. He went on lecture tours against war and became involved in Congressional debates on nuclear issues. He argued publicly on behalf of Ethel and Julius Rosenberg, and was even called before the House Un-American Activities Committee.

Cosmochemistry and the Miller–Urey experiment

In later life, Urey helped develop the field of cosmochemistry and is credited with coining the term. His work on oxygen-18 led him to develop theories about the abundance of the chemical elements on Earth, and about their abundance and evolution in the stars. Urey summarized his work in The Planets: Their Origin and Development (1952). Urey speculated that the early terrestrial atmosphere was composed of ammonia, methane, and hydrogen. One of his Chicago graduate students, Stanley L. Miller, showed in the Miller–Urey experiment that, if such a mixture were exposed to electric sparks and water, it could interact to produce amino acids, commonly considered the building blocks of life.
Urey spent a year (1956–57) as a visiting professor at Oxford University in England. In 1958, he reached the University of Chicago's retirement age of 65, but he accepted a post as a professor at large at the new University of California, San Diego (UCSD) and moved to La Jolla, California, where he helped build up the science faculty. He was later a professor emeritus there, from 1970 to 1981. Along with Stanley Miller, Hans Suess, and Jim Arnold, he was one of the founding members of UCSD's school of chemistry, created in 1960.
In the late 1950s and early 1960s, space science became a topic of research in the wake of the launch of Sputnik 1. Urey helped persuade NASA to make unmanned probes to the Moon a priority. When Apollo 11 returned samples of moon rock, Urey examined them at the Lunar Receiving Laboratory. The samples supported Urey's contention that the Moon and the Earth shared a common origin. While at UCSD, Urey published 105 scientific papers, 47 of them on lunar topics. When asked why he continued to work so hard, he joked, "Well, you know I'm not on tenure anymore."

Death and legacy

Urey enjoyed gardening and raising cattleya, cymbidium, and other orchids. He died at La Jolla, California, and is buried in the Fairfield Cemetery in DeKalb County, Indiana.
Apart from his Nobel Prize, he also won the Franklin Medal in 1943, the J. Lawrence Smith Medal in 1962, the Gold Medal of the Royal Astronomical Society in 1966, and the Priestley Medal of the American Chemical Society in 1973. In 1964 he received the National Medal of Science. He became a Fellow of the Royal Society in 1947. Named after him are lunar impact crater Urey, asteroid 4716 Urey, and the H. C. Urey Prize, awarded for achievement in planetary sciences by the American Astronomical Society. The Harold C. Urey Middle School in Walkerton, Indiana, is also named for him, as is Urey Hall, the chemistry building at Revelle College, UCSD, in La Jolla. UCSD has also established a Harold C. Urey chair whose first holder is Jim Arnold.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...