Friday, February 22, 2019

Elementary charge

From Wikipedia, the free encyclopedia

Elementary electric charge
Definition: charge of a proton
Symbol: e or sometimes qe
Value in coulombs: 1.6021766208(98)×10⁻¹⁹ C

The elementary charge, usually denoted by e or sometimes qe, is the electric charge carried by a single proton or, equivalently, the magnitude of the electric charge carried by a single electron, which has charge −e. This elementary charge is a fundamental physical constant. To avoid confusion over its sign, e is sometimes called the elementary positive charge. This charge has a measured value of approximately 1.6021766208(98)×10⁻¹⁹ C (coulombs). When the 2019 redefinition of SI base units takes effect on 20 May 2019, its value will be exactly 1.602176634×10⁻¹⁹ C by definition of the coulomb. In the centimeter–gram–second system of units (CGS), it is 4.80320425(10)×10⁻¹⁰ statcoulombs.

Robert A. Millikan's oil drop experiment first measured the magnitude of the elementary charge in 1909.

As a unit

Elementary charge (as a unit of charge)
Unit system: atomic units
Unit of: electric charge
Symbol: e or q
Conversions: 1 e or q is equal to ...
   coulombs: 1.6021766208(98)×10⁻¹⁹
   statcoulombs: 4.80320425(10)×10⁻¹⁰
   √(ħc) (HEP): 0.30282212088
   MeV·fm: 1.4399764

In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge; that is, e is equal to 1 e in those unit systems. The use of the elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron had not yet been discovered, and the distinction between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy, the electronvolt, reminds us that the elementary charge was once called the electron.

The maximum capacity of each pixel in a charge-coupled device image sensor, known as the well depth, is typically given in units of electrons, commonly around 10⁵ e per pixel.

In high-energy physics (HEP), Lorentz–Heaviside units are used, and the charge unit is a dependent one, √(ħc), so that e = √(4πα) √(ħc) ≈ 0.30282212088 √(ħc).
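
As a quick numerical check (not part of the original article), the relation above can be evaluated directly from the fine-structure constant. The Python sketch below assumes the CODATA 2014 value of α; it is only an illustration of the arithmetic.

```python
import math

# Fine-structure constant (CODATA 2014 value, assumed here for illustration)
alpha = 7.2973525664e-3

# In Lorentz-Heaviside units the elementary charge is e = sqrt(4*pi*alpha),
# expressed in units of sqrt(hbar*c)
e_natural = math.sqrt(4 * math.pi * alpha)

print(f"e ≈ {e_natural:.7f} sqrt(ħc)")  # ≈ 0.3028221, matching the value quoted above
```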

Quantization

Charge quantization is the principle that the charge of any object is an integer multiple of the elementary charge. Thus, an object's charge can be exactly 0 e, or exactly 1 e, −1 e, 2 e, etc., but not, say, 1/2 e or −3.8 e, etc. (There may be exceptions to this statement, depending on how "object" is defined; see below.)

This is the reason for the terminology "elementary charge": it is meant to imply that it is an indivisible unit of charge.

Charges less than an elementary charge

There are two known sorts of exceptions to the indivisibility of the elementary charge: quarks and quasiparticles.
  • Quarks, first posited in the 1960s, have quantized charge, but the charge is quantized into multiples of 1/3 e. However, quarks cannot be seen as isolated particles; they exist only in groupings, and stable groupings of quarks (such as a proton, which consists of three quarks) all have charges that are integer multiples of e. For this reason, either 1 e or 1/3 e can be justifiably considered to be "the quantum of charge", depending on the context. This charge commensurability, "charge quantization", has partially motivated Grand Unified Theories.
  • Quasiparticles are not particles as such, but rather an emergent entity in a complex material system that behaves like a particle. In 1982 Robert Laughlin explained the fractional quantum Hall effect by postulating the existence of fractionally-charged quasiparticles. This theory is now widely accepted, but this is not considered to be a violation of the principle of charge quantization, since quasiparticles are not elementary particles.

What is the quantum of charge?

All known elementary particles, including quarks, have charges that are integer multiples of 1/3 e. Therefore, one can say that the "quantum of charge" is 1/3 e. In this case, one says that the "elementary charge" is three times as large as the "quantum of charge". 

On the other hand, all isolatable particles have charges that are integer multiples of e. (Quarks cannot be isolated: they only exist in collective states like protons that have total charges that are integer multiples of e.) Therefore, one can say that the "quantum of charge" is e, with the proviso that quarks are not to be included. In this case, "elementary charge" would be synonymous with the "quantum of charge". 

In fact, both terminologies are used. For this reason, phrases like "the quantum of charge" or "the indivisible unit of charge" can be ambiguous, unless further specification is given. On the other hand, the term "elementary charge" is unambiguous: it refers to a quantity of charge equal to that of a proton.

Experimental measurements of the elementary charge

In terms of the Avogadro constant and Faraday constant

If the Avogadro constant NA and the Faraday constant F are independently known, the value of the elementary charge can be deduced using the formula

   e = F / NA

(In other words, the charge of one mole of electrons, divided by the number of electrons in a mole, equals the charge of a single electron.) 
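
As an illustration of this relation (the constant values below are assumed CODATA 2014 figures, not taken from the text above), a short Python sketch:

```python
# Illustrative CODATA 2014 values; treat them as assumptions of this sketch
F = 96485.33289        # Faraday constant, C/mol
N_A = 6.022140857e23   # Avogadro constant, 1/mol

e = F / N_A            # charge of one mole of electrons / electrons per mole
print(f"e ≈ {e:.7e} C")  # ≈ 1.6021766e-19 C
```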

This is not how the most accurate values are measured today. Nevertheless, it is a legitimate and still quite accurate method, and the experimental methodologies are described below. 

The value of the Avogadro constant NA was first approximated by Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas. Today the value of NA can be measured at very high accuracy by taking an extremely pure crystal (often silicon), measuring how far apart the atoms are spaced using X-ray diffraction or another method, and accurately measuring the density of the crystal. From this information, one can deduce the mass (m) of a single atom; and since the molar mass (M) is known, the number of atoms in a mole can be calculated: NA = M/m.
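
The silicon route can be sketched numerically. The values below are approximate textbook figures for natural silicon (lattice parameter, density, molar mass) and are assumptions of this illustration, not measurements cited in the article.

```python
# Rough sketch of the X-ray crystal density route to N_A for natural silicon
a   = 5.431e-10      # lattice parameter, m (~5.431 Å)
rho = 2329.0         # density, kg/m^3
M   = 28.0855e-3     # molar mass, kg/mol
n   = 8              # atoms per cubic unit cell in the diamond lattice

m_atom = rho * a**3 / n         # mass of a single silicon atom
N_A = M / m_atom                # Avogadro constant = molar mass / atomic mass
print(f"N_A ≈ {N_A:.4e} /mol")  # ≈ 6.02e23 /mol
```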

The value of F can be measured directly using Faraday's laws of electrolysis. Faraday's laws of electrolysis are quantitative relationships based on the electrochemical researches published by Michael Faraday in 1834. In an electrolysis experiment, there is a one-to-one correspondence between the electrons passing through the anode-to-cathode wire and the ions that plate onto or off of the anode or cathode. Measuring the mass change of the anode or cathode, and the total charge passing through the wire (which can be measured as the time-integral of electric current), and also taking into account the molar mass of the ions, one can deduce F.
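
A toy electrolysis calculation along these lines might look as follows; the charge and mass figures are invented for illustration (silver deposition, one electron per ion).

```python
# Hypothetical silver-electrolysis measurement (numbers invented for illustration)
Q_total = 1930.0       # total charge passed, C (time-integral of the current)
delta_m = 2.1577       # mass of silver deposited, g
M_silver = 107.8682    # molar mass of silver, g/mol
z = 1                  # electrons transferred per silver ion

moles = delta_m / M_silver    # moles of silver deposited
F = Q_total / (z * moles)     # charge per mole of electrons
print(f"F ≈ {F:.0f} C/mol")   # ≈ 9.65e4 C/mol
```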

The limit to the precision of the method is the measurement of F: the best experimental value has a relative uncertainty of 1.6 ppm, about thirty times higher than other modern methods of measuring or calculating the elementary charge.

Oil-drop experiment

A famous method for measuring e is Millikan's oil-drop experiment. A small drop of oil in an electric field would move at a rate that balanced the forces of gravity, viscosity (of traveling through the air), and electric force. The forces due to gravity and viscosity could be calculated based on the size and velocity of the oil drop, so the electric force could be deduced. Since the electric force, in turn, is the product of the electric charge and the known electric field, the electric charge of the oil drop could be accurately computed. By measuring the charges of many different oil drops, it can be seen that the charges are all integer multiples of a single small charge, namely e.

The necessity of measuring the size of the oil droplets can be eliminated by using tiny plastic spheres of a uniform size. The force due to viscosity can be eliminated by adjusting the strength of the electric field so that the sphere hovers motionless.
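
The analysis step can be illustrated with a toy calculation: given a set of measured drop charges (the values below are invented, not Millikan's data), check that they are integer multiples of a common small charge and estimate that charge.

```python
# Invented oil-drop charges in coulombs, for illustration only
charges = [3.2044e-19, 8.0109e-19, 4.8065e-19, 1.6022e-19, 6.4087e-19]

e_guess = min(charges)                      # assume the smallest drop carries 1 e
multiples = [round(q / e_guess) for q in charges]
e_estimate = sum(charges) / sum(multiples)  # average charge per inferred unit
print(multiples)                            # [2, 5, 3, 1, 4]
print(f"e ≈ {e_estimate:.4e} C")            # ≈ 1.6022e-19 C
```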

Shot noise

Any electric current will be associated with noise from a variety of sources, one of which is shot noise. Shot noise exists because a current is not a smooth continual flow; instead, a current is made up of discrete electrons that pass by one at a time. By carefully analyzing the noise of a current, the charge of an electron can be calculated. This method, first proposed by Walter H. Schottky, can determine a value of e whose accuracy is limited to a few percent. However, it was used in the first direct observation of Laughlin quasiparticles, implicated in the fractional quantum Hall effect.
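
Schottky's relation states that the white-noise power spectral density of a Poissonian current is SI = 2eI, so e = SI / (2I). The numbers in the sketch below are invented for illustration.

```python
# Shot-noise route to e: S_I = 2 e I  =>  e = S_I / (2 I)
I = 1.0e-6          # mean current, A (assumed)
S_I = 3.2e-25       # measured white-noise level, A^2/Hz (assumed)

e = S_I / (2 * I)
print(f"e ≈ {e:.2e} C")   # ≈ 1.6e-19 C
```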

From the Josephson and von Klitzing constants

Another accurate method for measuring the elementary charge is by inferring it from measurements of two effects in quantum mechanics: the Josephson effect, voltage oscillations that arise in certain superconducting structures, and the quantum Hall effect, a quantum effect of electrons at low temperatures, strong magnetic fields, and confinement into two dimensions. The Josephson constant is

   KJ = 2e/h

(where h is the Planck constant). It can be measured directly using the Josephson effect.

The von Klitzing constant is

   RK = h/e²

It can be measured directly using the quantum Hall effect.

From these two constants, the elementary charge can be deduced:

   e = 2 / (RK KJ)
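
As a numerical illustration (the conventional 1990 values KJ-90 and RK-90 are assumed here, purely to show the arithmetic):

```python
# Deducing e from the Josephson and von Klitzing constants: e = 2 / (R_K * K_J)
K_J = 483597.9e9     # Josephson constant K_J-90, Hz/V (conventional value, assumed)
R_K = 25812.807      # von Klitzing constant R_K-90, ohm (conventional value, assumed)

e = 2.0 / (R_K * K_J)
print(f"e ≈ {e:.5e} C")   # ≈ 1.60218e-19 C
```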

CODATA method

In the most recent CODATA adjustments, the elementary charge is not an independently defined quantity. Instead, a value is derived from the relation

   e = √(2hα / (μ0c)) = √(2hα ε0 c)

where h is the Planck constant, α is the fine-structure constant, μ0 is the magnetic constant, ε0 is the electric constant, and c is the speed of light. The uncertainty in the value of e is currently determined almost entirely by the uncertainty in the Planck constant.
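
Evaluating this relation with CODATA 2014 values (assumed here only to illustrate the arithmetic) reproduces the measured elementary charge:

```python
import math

# e = sqrt(2 h alpha / (mu_0 c)), evaluated with CODATA 2014 values (assumed)
h = 6.626070040e-34        # Planck constant, J s
alpha = 7.2973525664e-3    # fine-structure constant
mu_0 = 4 * math.pi * 1e-7  # magnetic constant, N/A^2 (pre-2019 exact value)
c = 299792458.0            # speed of light, m/s (exact)

e = math.sqrt(2 * h * alpha / (mu_0 * c))
print(f"e ≈ {e:.5e} C")    # ≈ 1.60218e-19 C
```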

The most precise values of the Planck constant come from Kibble balance experiments, which are used to measure the product KJ²RK. The most precise values of the fine-structure constant come from comparisons of the measured and calculated value of the gyromagnetic ratio of the electron.

Can We Drill A Hole Deep Enough For Our Nuclear Waste?

Workers position the Deep Isolation prototype nuclear waste canister over the test drill hole at a commercial oil and gas testing facility in Texas. The canister was lowered using a wireline and tractor assembly, commonly used to position tools and equipment in horizontal drill holes. Deep Isolation

Yes we can! And it was just demonstrated. And it seems to have some bipartisan support.

The technology used was actually developed to frack natural gas and oil wells, but Elizabeth Muller realized that it could be used to dispose of nuclear waste as well. The Chief Executive Officer and Co-Founder of Deep Isolation knows this is a great way to dispose of this small, but bizarre, waste stream.

Deep Isolation is a recent start-up founded in Berkeley by Muller and her father, Richard, that seeks to dispose of nuclear waste safely and at a much lower cost than existing strategies. The idea of deep borehole disposal for nuclear waste is not new, but Deep Isolation is the first to consider horizontal wells and the first to actually demonstrate the concept.

The technology takes advantage of recently developed fracking technologies to place nuclear waste in a series of two-mile-long tunnels, a mile below the Earth’s surface, where it will be surrounded by a very tight rock known as shale. Shale is so tight that it takes fracking technology to get any oil or gas out of it at all.

As geologists, we know how many millions of years it takes for anything to get up from that depth in the Earth’s crust.

The Deep Isolation strategy begins with a one-mile vertical access drill hole that curves into a two-mile horizontal section where the waste is stored. The horizontal repository portion has a slight upward tilt that provides additional isolation, guarding against any mechanisms that could move radioactive constituents upward. They would have to move down first, then up, something that cannot occur by natural processes. Deep Isolation

So what better way to use this technology than to put something back in that you want to stay there for geologic time? The demonstration occurred on January 16th, when Deep Isolation placed and retrieved a waste canister from thousands of feet underground.

This first-of-its-kind demonstration was witnessed by Department of Energy officials, nuclear scientists and industry professionals, investors, environmentalists, local citizens, and even oil & gas professionals, since this uses their new drilling technologies. No radioactive material was used in the test, and the location was not one where actual waste would be disposed.

Over 40 observers from multiple countries looked on as a prototype nuclear waste canister, designed to hold highly radioactive nuclear waste but filled with steel to simulate the weight of actual waste, was lowered over 2000 feet deep in an existing drill hole using a wire line cable, and then pushed using an underground tractor into a long horizontal storage section.

The canister was released and the tractor and cable withdrawn. Several hours later, the tractor was placed back in the hole, where it latched and retrieved the canister, bringing it back to the surface.

This is not just an exercise for the student. The cost of our nuclear and radioactive waste programs keeps rising astronomically. The Department of Energy recently projected the cost for their cleanup to be almost $500 billion, up over $100 billion from its estimate just a year earlier. Most of that cost is for the Hanford Site in Washington State where weapons waste that used to be high-level is no longer high-level.

The Government Accountability Office considers even that amount to be low-balled, as do I. Just look at the highly-fractured, variably saturated, dual-porosity volcanic tuff at Yucca Mountain with highly oxidizing groundwater which sits on the edge of the Las Vegas Shear Zone. Yucca Mountain was supposed to hold all of our high-level weapons waste and our commercial spent fuel.

The original estimate for that project was only $30 billion, but ever since we found out that we picked the wrong rock in 1987, the cost has skyrocketed beyond $200 billion. This is twice as high as could ever be covered by the money being set aside for this purpose, in the Nuclear Waste Fund, and it is unlikely Congress will ever appropriate the extra money to complete it.

The primary reasons for the increasing costs are outdated plans that use technologies that are overly complicated and untested, and strategies that are overkill for the actual risks, especially since the waste itself has changed its radioactivity dramatically through radioactive decay in the 70 years since these waste tanks began to be filled.

The hottest components of radioactive waste have half-lives of 30 years or less. Most of this stuff is only a fraction as hot as it was when it was formed.

“We’re using a technique that’s been made cheap over the last 20 years,” says the elder Muller, who is also a physicist and climate change expert at UC Berkeley. “We could begin putting this waste underground right away.”

Like all leading climate scientists, the Mullers now argue that the world must increase its use of nuclear energy to slow climate change and realize that solving the nuclear waste problem would help a lot.

When it comes to finding a permanent home for nuclear waste, the two biggest hurdles Deep Isolation, and everyone else, has observed are public consent and bipartisan agreement. The bipartisan nature of this particular effort is reflected in the company’s advisory board and public support from experts on both sides of the aisle.

Deep Isolation’s Advisory Board has a variety of industry leaders in nuclear and other fields, including Robert Budnitz and David Lochbaum, generally considered anti-nuclear watchdogs of the industry.

Furthermore, two Nobel laureates, Steven Chu and Arno Penzias, an Emmy award winner, David Hoffman, and professionals from both sides of the aisle like Ed Feulner of the Heritage Foundation and Daniel Metlay from the Carter Administration, sit on their Board.

Public consent just takes time and lots of meetings with state and local officials and the public wherever you think the project would work. And we have lots and lots of deep tight shales in America way below any drinking water aquifers.

Elizabeth Muller emphasized that “Stakeholder engagement is where our solution began. To prepare for this public demonstration, we met with national environmental groups, as well as local leaders, to listen to concerns, incorporate suggestions, and build our solution around their needs and our customers’.”

In 2019, Deep Isolation is focused on both the U.S. and the international markets for nuclear waste disposal. According to the International Atomic Energy Agency, there are about 400 thousand tons of highly radioactive spent nuclear fuel waste temporarily stored in pools and dry casks at hundreds of sites around the world.

No country has an operational geological repository for spent fuel disposal, although France, Sweden and Finland are well along on their plan to open one. The United States does have an operating deep geological repository for transuranic weapons waste at the WIPP Site near Carlsbad, New Mexico, which was actually designed and built to hold all of our nuclear waste of any type.

Dr. James Conca is an expert on energy, nuclear and dirty bombs, a planetary geologist, and a professional speaker. Follow him on Twitter @jimconca and see his book at Amazon.com

Lee Smolin

From Wikipedia, the free encyclopedia

Lee Smolin
Lee Smolin at Harvard
Born: June 6, 1955 (age 63)
Nationality: American
Alma mater: Hampshire College (B.A., 1975); Harvard University (A.M., 1978; Ph.D., 1979)
Awards: Majorana Prize (2007), Klopsteg Memorial Award (2009), Queen Elizabeth II Diamond Jubilee Medal (2013)
Scientific career
Fields: Physics, Cosmology
Institutions: Perimeter Institute, University of Waterloo
Doctoral advisors: Sidney Coleman, Stanley Deser

Lee Smolin is an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo and a member of the graduate faculty of the philosophy department at the University of Toronto. Smolin's 2006 book The Trouble with Physics criticized string theory as a viable scientific theory. He has made contributions to quantum gravity theory, in particular the approach known as loop quantum gravity. He advocates that the two primary approaches to quantum gravity, loop quantum gravity and string theory, can be reconciled as different aspects of the same underlying theory. His research interests also include cosmology, elementary particle theory, the foundations of quantum mechanics, and theoretical biology.

Early life

Smolin was born in New York City. His brother, David M. Smolin, became a professor in the Cumberland School of Law in Birmingham, Alabama.

Education and career

Smolin dropped out of Walnut Hills High School in Cincinnati, Ohio, and was educated at Hampshire College. He received his Ph.D. in theoretical physics from Harvard University in 1979. He held postdoctoral research positions at the Institute for Advanced Study in Princeton, New Jersey, the Kavli Institute for Theoretical Physics in Santa Barbara, and the University of Chicago, before becoming a faculty member at Yale, Syracuse and Pennsylvania State Universities. He was a visiting scholar at the Institute for Advanced Study in 1995 and a visiting professor at Imperial College London (1999–2001) before becoming one of the founding faculty members at the Perimeter Institute in 2001.

Theories and work

Loop quantum gravity

Smolin contributed to the theory of loop quantum gravity (LQG) in collaborative work with Ted Jacobson, Carlo Rovelli, Louis Crane, Abhay Ashtekar and others. LQG is an approach to the unification of quantum mechanics with general relativity which utilizes a reformulation of general relativity in the language of gauge field theories, which allows the use of techniques from particle physics, particularly the expression of fields in terms of the dynamics of loops. (See main page loop quantum gravity.) With Rovelli he discovered the discreteness of areas and volumes and found their natural expression in terms of a discrete description of quantum geometry in terms of spin networks. In recent years he has focused on connecting LQG to phenomenology by developing implications for experimental tests of spacetime symmetries as well as investigating ways elementary particles and their interactions could emerge from spacetime geometry.

Background independent approaches to string theory

Between 1999 and 2002, Smolin made several proposals to provide a fundamental formulation of string theory that does not depend on approximate descriptions involving classical background spacetime models.

Experimental tests of quantum gravity

Smolin is among those theorists who have proposed that the effects of quantum gravity can be experimentally probed by searching for modifications in special relativity detected in observations of high energy astrophysical phenomena. These include very high energy cosmic rays and photons and neutrinos from gamma ray bursts. Among Smolin's contributions are the coinvention of doubly special relativity (with João Magueijo, independently of work by Giovanni Amelino-Camelia) and of relative locality (with Amelino-Camelia, Laurent Freidel and Jerzy Kowalski-Glikman).

Foundations of quantum mechanics

Smolin has worked since the early 1980s on a series of proposals for hidden variables theories, which would be non-local deterministic theories which would give a precise description of individual quantum phenomena. In recent years, he has pioneered two new approaches to the interpretation of quantum mechanics suggested by his work on the reality of time, called the real ensemble interpretation and the principle of precedence.

Cosmological natural selection

Smolin's hypothesis of cosmological natural selection, also called the fecund universes theory, suggests that a process analogous to biological natural selection applies at the grandest of scales. Smolin published the idea in 1992 and summarized it in a book aimed at a lay audience called The Life of the Cosmos.

Black holes have a role in natural selection. In the fecund universes theory, a collapsing black hole causes the emergence of a new universe on the "other side", whose fundamental constant parameters (masses of elementary particles, Planck constant, elementary charge, and so forth) may differ slightly from those of the universe where the black hole collapsed. Each universe thus gives rise to as many new universes as it has black holes. The theory contains the evolutionary ideas of "reproduction" and "mutation" of universes, and so is formally analogous to models of population biology.

Alternatively, black holes play a role in cosmological natural selection by reshuffling only some matter affecting the distribution of elementary quark universes. The resulting population of universes can be represented as a distribution of a landscape of parameters where the height of the landscape is proportional to the numbers of black holes that a universe with those parameters will have. Applying reasoning borrowed from the study of fitness landscapes in population biology, one can conclude that the population is dominated by universes whose parameters drive the production of black holes to a local peak in the landscape. This was the first use of the notion of a landscape of parameters in physics. 

Leonard Susskind, who later promoted a similar string theory landscape, stated:
I'm not sure why Smolin's idea didn't attract much attention. I actually think it deserved far more than it got.
However, Susskind also argued that, since Smolin's theory relies on information transfer from the parent universe to the baby universe through a black hole, it ultimately makes no sense as a theory of cosmological natural selection. According to Susskind and many other physicists, the last decade of black hole physics has shown us that no information that goes into a black hole can be lost. Even Stephen Hawking, who was the foremost proponent of the idea that information is lost in a black hole, later reversed his position. The implication is that information transfer from the parent universe into the baby universe through a black hole is not conceivable.

Smolin has noted that the string theory landscape is not Popper-falsifiable if other universes are not observable. This is the subject of the Smolin–Susskind debate concerning Smolin's argument: "[The] Anthropic Principle cannot yield any falsifiable predictions, and therefore cannot be a part of science." There are then only two ways out: traversable wormholes connecting the different parallel universes, and "signal nonlocality", as described by Antony Valentini, a scientist at the Perimeter Institute.

In a critical review of The Life of the Cosmos, astrophysicist Joe Silk suggested that our universe falls short by about four orders of magnitude from being maximal for the production of black holes. In his book Questions of Truth, particle physicist John Polkinghorne puts forward another difficulty with Smolin's thesis: one cannot impose the consistent multiversal time required to make the evolutionary dynamics work, since short-lived universes with few descendants would then dominate long-lived universes with many descendants. Smolin responded to these criticisms in The Life of the Cosmos and later scientific papers. 

When Smolin published the theory in 1992, he proposed as a prediction of his theory that no neutron star should exist with a mass of more than 1.6 times the mass of the sun. Later this figure was raised to two solar masses following more precise modeling of neutron star interiors by nuclear astrophysicists. If a more massive neutron star were ever observed, it would show that our universe's natural laws were not tuned for maximal black hole production, because the mass of the strange quark could be retuned to lower the mass threshold for production of a black hole. A 1.97-solar-mass pulsar was discovered in 2010.

In 1992 Smolin also predicted that inflation, if true, must only be in its simplest form, governed by a single field and parameter. Both predictions have held up, and they demonstrate Smolin's main thesis: that the theory of cosmological natural selection is Popper falsifiable.

Contributions to the philosophy of physics

Smolin has contributed to the philosophy of physics through a series of papers and books that advocate the relational, or Leibnizian, view of space and time. Since 2006, he has collaborated with the Brazilian philosopher and Harvard Law School professor Roberto Mangabeira Unger on the issues of the reality of time and the evolution of laws; in 2014 they published a book, its two parts being written separately.

A book-length exposition of Smolin's philosophical views appeared in April 2013. Time Reborn argues that physical science has made time unreal while, as Smolin insists, it is the most fundamental feature of reality: "Space may be an illusion, but time must be real" (p. 179). An adequate description, according to him, would give a Leibnizian universe: indiscernibles would not be admitted, and every difference should correspond to some other difference, as the principle of sufficient reason would have it. A few months later a more concise text was made available in a paper with the title Temporal Naturalism.

The Trouble with Physics

Smolin's 2006 book The Trouble with Physics explored the role of controversy and disagreement in the progress of science. It argued that science progresses fastest if the scientific community encourages the widest possible disagreement among trained and accredited professionals prior to the formation of consensus brought about by experimental confirmation of predictions of falsifiable theories. He proposed that this meant the fostering of diverse competing research programs, and that premature formation of paradigms not forced by experimental facts can slow the progress of science.

As a case study, The Trouble with Physics focused on the issue of the falsifiability of string theory due to the proposals that the anthropic principle be used to explain the properties of our universe in the context of the string landscape. The book was criticized by the physicists Joseph Polchinski and other string theorists. 

In his earlier book Three Roads to Quantum Gravity (2002), Smolin stated that loop quantum gravity and string theory were essentially the same concept seen from different perspectives. In that book, he also favored the holographic principle. The Trouble with Physics, on the other hand, was strongly critical of the prominence of string theory in contemporary theoretical physics, which he believes has suppressed research in other promising approaches. Smolin suggests that string theory suffers from serious deficiencies and has an unhealthy near-monopoly in the particle theory community. He called for a diversity of approaches to quantum gravity, and argued that more attention should be paid to loop quantum gravity, an approach Smolin has devised. Finally, The Trouble with Physics is also broadly concerned with the role of controversy and the value of diverse approaches in the ethics and process of science. 

In the same year that The Trouble with Physics was published, Peter Woit published a book for nonspecialists whose conclusion was similar to Smolin's, namely that string theory was a fundamentally flawed research program.

Views

Smolin's view on the nature of time:
More and more, I have the feeling that quantum theory and general relativity are both deeply wrong about the nature of time. It is not enough to combine them. There is a deeper problem, perhaps going back to the beginning of physics.
Smolin does not believe that quantum mechanics is a "final theory":
I am convinced that quantum mechanics is not a final theory. I believe this because I have never encountered an interpretation of the present formulation of quantum mechanics that makes sense to me. I have studied most of them in depth and thought hard about them, and in the end I still can't make real sense of quantum theory as it stands.
In a 2009 article, Smolin has articulated the following philosophical views (the sentences in italics are quotations):
  • There is only one universe. There are no others, nor is there anything isomorphic to it. Smolin denies the existence of a "timeless" multiverse. Neither other universes nor copies of our universe — within or outside — exist. No copies can exist within the universe, because no subsystem can model precisely the larger system it is a part of. No copies can exist outside the universe, because the universe is by definition all there is. This principle also rules out the notion of a mathematical object isomorphic in every respect to the history of the entire universe, a notion more metaphysical than scientific.
  • All that is real is real in a moment, which is a succession of moments. Anything that is true is true of the present moment. Not only is time real, but everything that is real is situated in time. Nothing exists timelessly.
  • Everything that is real in a moment is a process of change leading to the next or future moments. Anything that is true is then a feature of a process in this process causing or implying future moments. This principle incorporates the notion that time is an aspect of causal relations. A reason for asserting it, is that anything that existed for just one moment, without causing or implying some aspect of the world at a future moment, would be gone in the next moment. Things that persist must be thought of as processes leading to newly changed processes. An atom at one moment is a process leading to a different or a changed atom at the next moment.
  • Mathematics is derived from experience as a generalization of observed regularities, when time and particularity are removed. Under this heading, Smolin distances himself from mathematical platonism, and gives his reaction to Eugene Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
Smolin views rejecting the idea of a creator as essential to cosmology on similar grounds to his objections against the multiverse. He does not definitively exclude or reject religion or mysticism but rather believes that science should deal only with that which is observable. He also opposes the anthropic principle, which he claims "cannot help us to do science."

He also advocates "principles for an open future" which he claims underlie the work of both healthy scientific communities and democratic societies: "(1) When rational argument from public evidence suffices to decide a question, it must be considered to be so decided. (2) When rational argument from public evidence does not suffice to decide a question, the community must encourage a diverse range of viewpoints and hypotheses consistent with a good-faith attempt to develop convincing public evidence." (Time Reborn p 265.)

Awards and honors

Smolin was named as #21 on Foreign Policy Magazine's list of Top 100 Public Intellectuals. He is also one of many physicists dubbed the "New Einstein" by the media. The Trouble with Physics was named by Newsweek magazine as number 17 on a list of 50 "Books for our Time", June 27, 2009. In 2007 he was awarded the Majorana Prize from the Electronic Journal of Theoretical Physics, and in 2009 the Klopsteg Memorial Award from the American Association of Physics Teachers (AAPT) for "extraordinary accomplishments in communicating the excitement of physics to the general public." He is a fellow of the Royal Society of Canada and the American Physical Society. In 2014 he was awarded the Buchalter Cosmology Prize for a work published in collaboration with Marina Cortês.

Personal life

Smolin was born in New York City. His father is Michael Smolin, an environmental and process engineer, and his mother is the playwright Pauline Smolin. Lee Smolin has stayed involved with theatre, becoming a scientific consultant for such plays as A Walk in the Woods by Lee Blessing, Background Interference by Drucilla Cornell and Infinity by Hannah Moscovitch. He is married to Dina Graser, a lawyer and public servant in Toronto, Ontario. His brother is law professor David M. Smolin.

Publications

The following books are non-technical, and can be appreciated by those who are not physicists.

Developmental systems theory

From Wikipedia, the free encyclopedia

Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors to developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue for the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes.

Overview

All versions of developmental systems theory espouse the view that:
  • All biological processes (including both evolution and development) operate by continually assembling new structures.
  • Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
  • Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
  • Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, developmental systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any particular entity and thereby maintains an explanatory openness on all empirical fronts. For example, there is vigorous resistance to the widespread assumptions that one can legitimately speak of genes ‘for’ specific phenotypic characters or that adaptation consists of evolution ‘shaping’ the more or less passive species, as opposed to adaptation consisting of organisms actively selecting, defining, shaping and often creating their niches.

Developmental systems theory: Topics

Six Themes of DST 

1. Joint Determination by Multiple Causes

Development is a product of multiple interacting sources.

2. Context Sensitivity and Contingency

Development depends on the current state of the organism.

3. Extended Inheritance

An organism inherits resources from the environment in addition to genes.

4. Development as a process of construction

The organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge.

5. Distributed Control

Idea that no single source of influence has central control over an organism's development.

6. Evolution As Construction

The evolution of an entire developmental system, including whole ecosystems of which given organisms are parts, not just the changes of a particular being or population.

A computing metaphor

To adopt a computing metaphor, the reductionists whom developmental systems theory opposes assume that causal factors can be divided into ‘processes’ and ‘data’, as in the Harvard computer architecture. Data (inputs, resources, content, and so on) is required by all processes, and must often fall within certain limits if the process in question is to have its ‘normal’ outcome. However, the data alone is helpless to create this outcome, while the process may be ‘satisfied’ with a considerable range of alternative data. Developmental systems theory, by contrast, assumes that the process/data distinction is at best misleading and at worst completely false, and that while it may be helpful for very specific pragmatic or theoretical reasons to treat a structure now as a process and now as a datum, there is always a risk (to which reductionists routinely succumb) that this methodological convenience will be promoted into an ontological conclusion. In fact, for the proponents of DST, either all structures are both process and data, depending on context, or even more radically, no structure is either.

Fundamental asymmetry

For reductionists there is a fundamental asymmetry between different causal factors, whereas for DST such asymmetries can only be justified by specific purposes, and its proponents argue that many of the (generally unspoken) purposes to which such (generally exaggerated) asymmetries have been put are scientifically illegitimate. Thus, for developmental systems theory, many of the most widely applied, asymmetric and entirely legitimate distinctions biologists draw (between, say, genetic factors that create potential and environmental factors that select outcomes, or genetic factors of determination and environmental factors of realization) obtain their legitimacy from the conceptual clarity and specificity with which they are applied, not from their having tapped a profound and irreducible ontological truth about biological causation. One problem might be solved by reversing the direction of causation correctly identified in another. This parity of treatment is especially important when comparing the evolutionary and developmental explanations for one and the same character of an organism.

DST approach

One upshot of this approach is that developmental systems theory also argues that what is inherited from generation to generation is a good deal more than simply genes (or even the other items, such as the fertilized zygote, that are also sometimes conceded). As a result, much of the conceptual framework that justifies ‘selfish gene’ models is regarded by developmental systems theory as not merely weak but actually false. Not only are major elements of the environment built and inherited as materially as any gene, but active modifications to the environment by the organism (for example, a termite mound or a beaver’s dam) demonstrably become major environmental factors to which future adaptation is addressed. Thus, once termites have begun to build their monumental nests, it is the demands of living in those very nests to which future generations of termites must adapt.

This inheritance may take many forms and operate on many scales, with a multiplicity of systems of inheritance complementing the genes. From position and maternal effects on gene expression to epigenetic inheritance to the active construction and intergenerational transmission of enduring niches, development systems theory argues that not only inheritance but evolution as a whole can be understood only by taking into account a far wider range of ‘reproducers’ or ‘inheritance systems’ – genetic, epigenetic, behavioral and symbolic  – than neo-Darwinism’s ‘atomic’ genes and gene-like ‘replicators’. DST regards every level of biological structure as susceptible to influence from all the structures by which they are surrounded, be it from above, below, or any other direction – a proposition that throws into question some of (popular and professional) biology’s most central and celebrated claims, not least the ‘central dogma’ of Mendelian genetics, any direct determination of phenotype by genotype, and the very notion that any aspect of biological (or psychological, or any other higher form) activity or experience is capable of direct or exhaustive genetic or evolutionary ‘explanation’.

Developmental systems theory is plainly radically incompatible with both neo-Darwinism and information processing theory. Whereas neo-Darwinism defines evolution in terms of changes in gene distribution, the possibility that an evolutionarily significant change may arise and be sustained without any directly corresponding change in gene frequencies is an elementary assumption of developmental systems theory, just as neo-Darwinism’s ‘explanation’ of phenomena in terms of reproductive fitness is regarded as fundamentally shallow. Even the widespread mechanistic equation of ‘gene’ with a specific DNA sequence has been thrown into question, as have the analogous interpretations of evolution and adaptation.

Likewise, the wholly generic, functional and anti-developmental models offered by information processing theory are comprehensively challenged by DST’s evidence that nothing is explained without an explicit structural and developmental analysis on the appropriate levels. As a result, what qualifies as ‘information’ depends wholly on the content and context out of which that information arises, within which it is translated and to which it is applied.

Related theories

Developmental systems theory is by no means a narrowly defined collection of ideas, and the boundaries with neighboring models are very porous. Notable related ideas (with key texts) include:

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...