Monday, February 16, 2015

The Argument For Nuclear Energy In Australia: Scientist, Energy Consultant

By Barry Brook and Ben Heard
Original link:  https://newmatilda.com/2015/02/16/argument-nuclear-energy-australia-scientist-energy-consultant
 
 
In New Matilda's ongoing debate around nuclear energy, Barry Brook and Ben Heard make their case for nuclear power's role in tackling climate change.

By now, most of you would have heard that the Premier of South Australia, Labor’s Jay Weatherill, has announced a Royal Commission into an expanded future role for the state in nuclear energy. For people like us, who are both strongly focused on tackling climate change by eliminating Australia’s dependence on fossil fuels, and who consider nuclear to be an essential tool, this is real progress.

In a recent article on The Conversation, we explained the types of issues we think the Royal Commission might consider. These obviously only represent our opinions and perspectives, albeit well-informed and researched.

We cover most of the well-trodden ground on radioactive waste management and energy generation. We also explain a number of reasons, ranging from political to economic to geological, why we think South Australia is a particularly good place to kick-start any deeper foray by our nation into the nuclear fuel cycle.

One thing that particularly frustrated us was the immediate condemnation of the news by the SA Greens Party, and disappointingly, also by the Australian Youth Climate Coalition.

The whole point of Royal Commissions is the rigorous uncovering of facts, based on solid research and deep consultation with experts, government and public representatives. So why the objection?

Well, the arguments are well rehearsed and endlessly debated. Nuclear is too costly, unsafe, produces dangerous and intractable waste, is connected with weapons proliferation, is unsustainable, and besides, is unneeded.

Such a ‘laundry list’ of objections is superficially convincing, and the last one in particular appeals to most people’s sensibilities. Australia is a large, sunny and sparsely populated country with long, windswept coastlines. Surely, then, we can (and should) do it all with wind and solar, and forget about dirty and technically complex alternatives like nuclear fission?

The thing is, with an issue as serious and immediate as climate change, we can’t afford to be carried away by wishful thinking, nor get trapped into thinking that ‘hope’ is a plan. We owe it to the future to be ruthlessly pragmatic about solutions, and accept that trade-offs are inevitable.

So, in as brief a summary as we can put it, here is the state of play as we see it.

Nuclear is expensive, at least compared to coal. But when coal pays its environmental costs (especially for air pollution and greenhouse gas emissions) nuclear is not expensive at all.

Electricity from some renewables is now comparatively cheap. But when renewables pay their full system costs to overcome variability, a renewable system is very expensive indeed.

In this context, a ‘nuclear intensive’ strategy is still likely to underpin the most viable, scalable and cost-effective pathway to replace coal.

Nuclear is the safest form of large-scale energy production, when evaluated on the basis of deaths per unit of generation.

Nuclear accidents like Chernobyl and Fukushima, although awful, are actually far less environmentally hazardous than many claim, and become ever less probable with newer, inherently safer designs.
Chernobyl Reactor 4, which melted down in April 1986. The other three reactors at Chernobyl continued to generate power, with the last reactor decommissioned in 2000.

Nuclear produces radioactive waste, but this is captured almost completely and isolated, and it can be recycled many times.

When fully recycled, its half-life is 30 years, and its already tiny volume is reduced by 50 times.
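The practical force of a 30-year half-life can be shown with the standard radioactive decay law (the 30-year figure corresponds to the longest-lived major fission products, caesium-137 and strontium-90):

```latex
N(t) = N_0 \, 2^{-t/T_{1/2}}, \qquad T_{1/2} = 30\ \text{yr}
\quad\Longrightarrow\quad
\frac{N(300\ \text{yr})}{N_0} = 2^{-10} \approx \frac{1}{1024}
```

That is, after ten half-lives (about three centuries) less than a tenth of a percent of the original activity remains: a containment problem measured in centuries rather than millennia.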

Nuclear power and nuclear weapons both work by using technology to split atoms, but beyond that the relationship is complex. The international safeguards set up to constrain proliferation are extensive and one has to draw a very long bow to link weapons acquisition to commercial power generation.

Nuclear fuel is not in short supply today, and long before it does become scarce, we will be recycling the waste to produce over 100 times more zero-carbon energy that will last millennia.

To set the goal of a “100 per cent renewables grid” is, at best, logistically and economically ‘courageous’, and at worst, a foolhardy strategy that is doomed to fail.

Either way, it is detached from what we consider the actual goal. If we really want to guarantee that we can rid ourselves of fossil fuels, then renewables belong in a combined package with nuclear fission.

That last paragraph contains a whole lot of assertions. Yet we stand by all of them, because we have looked deeply at each of those statements. We have probed them critically for flaws, tested them in consultation with experts, exposed them repeatedly to the peer-reviewed energy literature, and debated them with opponents endlessly.

It’s almost all on the public record, in our scientific publications, lectures, blogs (Brave New Climate and Decarbonise SA), books, articles and videos.

Why have we done this? If you could roll back the calendar enough years, you would find one of us (Brook) was perfectly ambivalent regarding nuclear power, and the other (Heard) was an outright opponent.

Change did not come easily, and it did not come without challenge. The biggest challenge always came from within, to make sure we were moving beyond just having opinions, and moving towards having informed and reasoned positions.

The thing is, we still are. We make mistakes and get things wrong. Our positions continue to evolve and, we hope, improve, with greater nuance, understanding and balance.

We keep learning from each other, our “opponents”, our colleagues, our students, our research, from other experts in a variety of fields and, of course, when the facts change. Our position is being tested constantly. Learning does not end.

Moreover, we reckon we’ve heard all of the counter-arguments (slanted from a variety of viewpoints!), and thought carefully about them.

Some we’ve taken onboard, some disputed, some rejected. We understand the failings of nuclear energy, and we acknowledge that it is hardly a ‘perfect solution’. But we still hold that, when balanced against the alternatives, nuclear fission is a real winner.

The biggest win will be found by using everything to get an important job done.

This short essay is definitely not the place for us to try and convince the doubters. We’ve put briefly what we consider to be the ‘key facts’ and we’ve drawn what we think are robust conclusions from them. But you should all be skeptical of our claims — and those of anyone else — until you’ve looked hard at the evidence yourselves, and ideally, tried hard to disprove your cherished beliefs, rather than comfortably prop up the world-view that you already think you ‘know’ to be right.

It’s a fun intellectual exercise to try and show yourself why you’re wrong (on any number of things), and it’s the kind of strategy that scientists use every day to learn about how things work. If you do this and still disagree with us, then that’s fine — we place great value on rigorous challenges and evidence-based rebuttals. For dealing with climate change, the bottom line is, we need a plan that will work!

To conclude, below are some sources of information that we think are particularly valuable if you want to really understand nuclear energy and the plausibility of alternative options.

There is obviously plenty more out there, but please apply critical judgment when you consider the robustness of your source material.

The new Royal Commission will follow a similar process of judicious knowledge acquisition, albeit a far more exhaustive one. Relish the journey.

Barry Brook is an Australian scientist. He is a professor and Chair of Environmental Sustainability at the University of Tasmania in the Faculty of Science, Engineering & Technology. He was formerly an ARC Future Fellow in the School of Earth and Environmental Sciences at the University of Adelaide, Australia, where he held the Sir Hubert Wilkins Chair of Climate Change from 2007 to 2014. He was also Director of Climate Science at the Environment Institute and co-ran the Global Ecology Lab.

Ben Heard is an independent environmental consultant. He holds a Master's of Corporate Environmental Sustainability Management from Monash University. He is currently undertaking doctoral studies at the University of Adelaide, examining pathways for optimal decarbonisation of Australian electricity using both nuclear and renewable sources.

Sunday, February 15, 2015

Astrochemistry



From Wikipedia, the free encyclopedia

Astrochemistry is the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form.

Spectroscopy

One particularly important experimental tool in astrochemistry is spectroscopy, the use of telescopes to measure the absorption and emission of light from molecules and atoms in various environments. By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. This is possible because ions, atoms, and molecules have characteristic spectra: that is, the absorption and emission of certain wavelengths (colors) of light, often not visible to the human eye. However, these measurements have limitations, with various types of radiation (radio, infrared, visible, ultraviolet etc.) able to detect only certain types of species, depending on the chemical properties of the molecules. Interstellar formaldehyde was the first organic molecule detected in the interstellar medium.
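As a minimal illustration of what "characteristic spectra" means, the Rydberg formula predicts the visible (Balmer) emission lines of hydrogen. This is a sketch using standard physics constants and line names, not values taken from the article:

```python
# Wavelengths of the hydrogen Balmer series (visible lines) from the
# Rydberg formula: 1/lambda = R * (1/n1^2 - 1/n2^2), with n1 = 2.

R = 1.0973731568e7  # Rydberg constant, m^-1 (standard CODATA-style value)

def balmer_wavelength_nm(n2):
    """Wavelength (nm) of the hydrogen transition n2 -> 2."""
    inv_wavelength = R * (1 / 2**2 - 1 / n2**2)  # m^-1
    return 1e9 / inv_wavelength                  # convert metres to nm

for n2, name in [(3, "H-alpha"), (4, "H-beta"), (5, "H-gamma")]:
    print(f"{name}: {balmer_wavelength_nm(n2):.1f} nm")
```

Matching observed lines against fingerprints like these is how astrochemists identify a species, and its abundance, in a stellar or interstellar spectrum.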

Perhaps the most powerful technique for detection of individual chemical species is radio astronomy, which has resulted in the detection of over a hundred interstellar species, including radicals and ions, and organic (i.e. carbon-based) compounds, such as alcohols, acids, aldehydes, and ketones. One of the most abundant interstellar molecules, and among the easiest to detect with radio waves (due to its strong electric dipole moment), is CO (carbon monoxide). In fact, CO is such a common interstellar molecule that it is used to map out molecular regions.[1] The radio observation of perhaps greatest human interest is the claim of interstellar glycine,[2] the simplest amino acid, but with considerable accompanying controversy.[3] One of the reasons why this detection was controversial is that although radio (and some other methods like rotational spectroscopy) are good for the identification of simple species with large dipole moments, they are less sensitive to more complex molecules, even something relatively small like amino acids.

Moreover, such methods are completely blind to molecules that have no dipole. For example, by far the most common molecule in the universe is H2 (hydrogen gas), but it does not have a dipole moment, so it is invisible to radio telescopes. Moreover, such methods cannot detect species that are not in the gas-phase. Since dense molecular clouds are very cold (10-50 K = -263 to -223 C = -440 to -370 F), most molecules in them (other than hydrogen) are frozen, i.e. solid. Instead, hydrogen and these other molecules are detected using other wavelengths of light. Hydrogen is easily detected in the ultraviolet (UV) and visible ranges from its absorption and emission of light (the hydrogen line). Moreover, most organic compounds absorb and emit light in the infrared (IR) so, for example, the detection of methane in the atmosphere of Mars[4] was achieved using an IR ground-based telescope, NASA's 3-meter Infrared Telescope Facility atop Mauna Kea, Hawaii. NASA also has an airborne IR telescope called SOFIA and an IR space telescope called Spitzer. Somewhat related to the recent detection of methane in the atmosphere of Mars, scientists reported, in June 2012, that measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars.[5][6] According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active."[5] Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres.[7][8]
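The choice of observing wavelength follows directly from the line being targeted. For example, the rest wavelength of the neutral-hydrogen hyperfine transition (the 21 cm line used by radio telescopes to map atomic hydrogen) is just a frequency-to-wavelength conversion; the frequency below is the standard textbook value, not a figure from this article:

```python
# Wavelength of the H I hyperfine ("hydrogen") line from its rest frequency.

c = 299_792_458.0        # speed of light, m/s
nu_HI = 1.420405751e9    # rest frequency of the H I hyperfine line, Hz

wavelength_cm = c / nu_HI * 100  # lambda = c / nu, converted to cm
print(f"H I line wavelength: {wavelength_cm:.2f} cm")
```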

Infrared astronomy has also revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polyaromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium and isotopes of carbon, nitrogen, and oxygen that are very rare on earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red giant stars).

Infrared astronomy has also been used to assess the composition of solid materials in the interstellar medium, including silicates, kerogen-like carbon-rich solids, and ices. This is because unlike visible light, which is scattered or absorbed by solid particles, the IR radiation can pass through the microscopic interstellar particles, but in the process there are absorptions at certain wavelengths that are characteristic of the composition of the grains.[9] As above with radio astronomy, there are certain limitations, e.g. N2 is difficult to detect by either IR or radio astronomy.

Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Since hydrogen is by far the most abundant molecule in the universe, the initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic, then the H atoms react with available O, C and N atoms, producing "reduced" species like H2O, CH4, and NH3. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react or remain bonded together, producing CO, CO2, CN, etc. These mixed-molecular ices are exposed to ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry.[9] Lab experiments on the photochemistry of simple interstellar ices have produced amino acids.[10] The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula.

Research

Research is progressing on the way in which interstellar and circumstellar molecules form and interact, and this research could have a profound impact on our understanding of the suite of molecules that were present in the molecular cloud when our solar system formed, which contributed to the rich carbon chemistry of comets and asteroids and hence the meteorites and interstellar dust particles which fall to the Earth by the ton every day.

The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H3+ ion. Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, the consequences for stellar evolution, as well as stellar 'generations'. Indeed, the nuclear reactions in stars produce every naturally occurring chemical element. As the stellar 'generations' advance, the mass of the newly formed elements increases. A first-generation star uses elemental hydrogen (H) as a fuel source and produces helium (He). Hydrogen is the most abundant element, and it is the basic building block for all other elements as its nucleus has only one proton. Gravitational pull toward the center of a star creates massive amounts of heat and pressure, which cause nuclear fusion. Through this process of merging nuclear mass, heavier elements are formed. Carbon, oxygen and silicon are examples of elements that form in stellar fusion. After many stellar generations, very heavy elements are formed (e.g. iron and lead).
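The energy bookkeeping of the hydrogen-to-helium step described above can be sketched from the mass defect alone. The atomic masses are standard tabulated values, not figures from this article, and the net reaction shown glosses over the intermediate steps of real stellar fusion chains:

```python
# Energy released when four hydrogen atoms fuse (net) into one helium-4
# atom, computed from the mass defect via E = m c^2.

M_H1 = 1.00782503    # atomic mass of hydrogen-1, u
M_HE4 = 4.00260325   # atomic mass of helium-4, u
U_TO_MEV = 931.494   # energy equivalent of 1 u, MeV

mass_defect = 4 * M_H1 - M_HE4       # u lost per He-4 formed
energy_mev = mass_defect * U_TO_MEV  # MeV released per He-4 formed
frac = mass_defect / (4 * M_H1)      # fraction of rest mass converted

print(f"Energy released: {energy_mev:.1f} MeV ({frac:.2%} of initial mass)")
```

Roughly 0.7% of the hydrogen's rest mass is converted to energy, which is the ultimate power source behind the stellar generations the text describes.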

In October 2011, scientists reported that cosmic dust contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars.[11][12][13]

On August 29, 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth.[14][15] Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.[16]

In September, 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics - "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".[17][18] Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."[17][18]

In February 2014, NASA announced the creation of an improved spectral database [19] for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.[20]

On August 11, 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).[21][22]

To study the resources of chemical elements and molecules in the universe, Professor M. Yu. Dolomatov developed a mathematical model of the distribution of molecular composition in the interstellar medium based on thermodynamic potentials, using methods from probability theory, mathematical and physical statistics, and equilibrium thermodynamics.[23][24][25] Based on this model, the resources of life-related molecules, amino acids and nitrogenous bases in the interstellar medium have been estimated, and the possibility of the formation of oil hydrocarbon molecules has been demonstrated. These calculations support Sokolov's and Hoyle's hypotheses about the possibility of oil hydrocarbon formation in space, and the results are consistent with astrophysical observations and space-research data.

Astrophysics


From Wikipedia, the free encyclopedia


NGC 4414, a typical spiral galaxy in the constellation Coma Berenices, is about 56,000 light-years in diameter and approximately 60 million light-years distant.

Astrophysics (from Greek astron, ἄστρον "star", and physis, φύσις "nature") is the branch of astronomy that deals with the physics of the universe, especially with "the nature of the heavenly bodies, rather than their positions or motions in space".[1][2] Among the objects studied are galaxies, stars, extrasolar planets, the interstellar medium and the cosmic microwave background.[3][4] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.

In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Highly elusive areas of study for astrophysicists, which are of immense interest to the public, include their attempts to determine: the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.[5] Topics also studied by theoretical astrophysicists include: Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.

Astrophysics can be studied at the bachelor's, master's, and Ph.D. levels in physics or astronomy departments at many universities.

History

Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.[6][7]

In the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws.

Early in the 19th century, William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum.[8] By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements.[9] Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere.[10] In this way it was proved that the chemical elements found in the Sun and stars (chiefly hydrogen) were also found on Earth.

Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected bright, as well as dark, lines in solar spectra. Working with the chemist, Edward Frankland, to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the lines represented a new element, which was called helium, after the Greek Helios, the Sun personified.[11][12] During the 20th century, spectroscopy (the study of these spectral lines) advanced, particularly as a result of the advent of quantum physics that was necessary to understand the astronomical and experimental observations.[13]

Observational astrophysics


Early 20th-century comparison of elemental, solar, and stellar spectra

Observational astronomy is a division of the astronomical science that is concerned with recording data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.

The majority of astrophysical observations are made using the electromagnetic spectrum.
Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study our Sun. Cosmic rays consisting of very high energy particles can be observed hitting the Earth's atmosphere.

Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.

The study of our very own Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Our understanding of our own Sun serves as a guide to our understanding of other stars.

The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction. The material composition of astronomical objects can often be examined using spectroscopy.

Theoretical astrophysics


The stream lines on this simulation of a supernova show the flow of matter behind the shock wave giving clues as to the origin of pulsars

Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.[14][15]

Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.

Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.

Topics studied by theoretical astrophysicists include: stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.

Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics. Wormholes are an example of a hypothesis that is yet to be proven (or disproven).

Notable astrophysicists

Astrophysics has become better known as an important and notable science through the efforts of educators such as prominent professors Carl Sagan, Stephen Hawking and Neil deGrasse Tyson.

Evolution as Fact and Theory

STEPHEN JAY GOULD
Gould’s career as a scientist at Harvard from 1967 until his death in 2002 ended with the publication of his magnum opus, The Structure of Evolutionary Theory. In addition to his treasured teaching and remarkable research, he wrote each month for 25 years a column, “This View of Life,” for Natural History, the magazine of the American Museum of Natural History. The wisdom of this leading participant in major scientific controversies was evident whenever he spoke at Cambridge Forum events in the same Unitarian Meeting House in Harvard Square where earlier Ralph Waldo Emerson delivered his epochal address on “The American Scholar.”

This article is abridged from Speak Out Against the New Right edited by Herbert F. Vetter (Boston: Beacon Press, 1982)
Kirtley Mather, who died last year at age 89, was a pillar of both science and the Christian religion in America and one of my dearest friends. The difference of half a century in our ages evaporated before our common interests. The most curious thing we shared was a battle we each fought at the same age. For Kirtley had gone to Tennessee with Clarence Darrow to testify for evolution at the Scopes trial of 1925. When I think that we are enmeshed again in the same struggle for one of the best documented, most compelling and exciting concepts in all of science, I don't know whether to laugh or cry.

According to idealized principles of scientific discourse, the arousal of dormant issues should reflect fresh data that give renewed life to abandoned notions. Those outside the current debate may therefore be excused for suspecting that creationists have come up with something new, or that evolutionists have generated some serious internal trouble. But nothing has changed; the creationists have not a single new fact or argument. Darrow and Bryan were at least more entertaining than we lesser antagonists today. The rise of creationism is politics, pure and simple; it represents one issue (and by no means the major concern) of the resurgent evangelical right. Arguments that seemed kooky just a decade ago have reentered the mainstream.

Creationism Is Not Science

The basic attack of the creationists falls apart on two general counts before we even reach the supposed factual details of their complaints against evolution. First, they play upon a vernacular misunderstanding of the word "theory" to convey the false impression that we evolutionists are covering up the rotten core of our edifice. Second, they misuse a popular philosophy of science to argue that they are behaving scientifically in attacking evolution. Yet the same philosophy demonstrates that their own belief is not science, and that "scientific creationism" is therefore meaningless and self-contradictory, a superb example of what Orwell called "newspeak."

In the American vernacular, "theory" often means "imperfect fact" —part of a hierarchy of confidence running downhill from fact to theory to hypothesis to guess. Thus the power of the creationist argument: evolution is "only" a theory, and intense debate now rages about many aspects of the theory. If evolution is less than a fact, and scientists can't even make up their minds about the theory, then what confidence can we have in it? Indeed, President Reagan echoed this argument before an evangelical group in Dallas when he said (in what I devoutly hope was campaign rhetoric): "Well, it is a theory. It is a scientific theory only, and it has in recent years been challenged in the world of science—that is, not believed in the scientific community to be as infallible as it once was."

Well, evolution is a theory. It is also a fact. And facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts. Facts do not go away when scientists debate rival theories to explain them. Einstein's theory of gravitation replaced Newton's, but apples did not suspend themselves in mid-air pending the outcome. And human beings evolved from apelike ancestors whether they did so by Darwin's proposed mechanism or by some other, yet to be discovered.

Moreover, "fact" does not mean "absolute certainty." The final proofs of logic and mathematics flow deductively from stated premises and achieve certainty only because they are not about the empirical world. Evolutionists make no claim for perpetual truth, though creationists often do (and then attack us for a style of argument that they themselves favor). In science, "fact" can only mean "confirmed to such a degree that it would be perverse to withhold provisional assent." I suppose that apples might start to rise tomorrow, but the possibility does not merit equal time in physics classrooms.

Evolutionists have been clear about this distinction between fact and theory from the very beginning, if only because we have always acknowledged how far we are from completely understanding the mechanisms (theory) by which evolution (fact) occurred. Darwin continually emphasized the difference between his two great and separate accomplishments: establishing the fact of evolution, and proposing a theory—natural selection—to explain the mechanism of evolution. He wrote in The Descent of Man: "I had two distinct objects in view; firstly, to show that species had not been separately created, and secondly, that natural selection had been the chief agent of change . . . Hence if I have erred in . . . having exaggerated its [natural selection's] power . . . I have at least, as I hope, done good service in aiding to overthrow the dogma of separate creations."

Thus Darwin acknowledged the provisional nature of natural selection while affirming the fact of evolution. The fruitful theoretical debate that Darwin initiated has never ceased. From the 1940s through the 1960s, Darwin's own theory of natural selection did achieve a temporary hegemony that it never enjoyed in his lifetime. But renewed debate characterizes our decade, and, while no biologist questions the importance of natural selection, many now doubt its ubiquity. In particular, many evolutionists argue that substantial amounts of genetic change may not be subject to natural selection and may spread through populations at random. Others are challenging Darwin's linking of natural selection with gradual, imperceptible change through all intermediary degrees; they are arguing that most evolutionary events may occur far more rapidly than Darwin envisioned.

Scientists regard debates on fundamental issues of theory as a sign of intellectual health and a source of excitement. Science is—and how else can I say it?—most fun when it plays with interesting ideas, examines their implications, and recognizes that old information may be explained in surprisingly new ways. Evolutionary theory is now enjoying this uncommon vigor. Yet amidst all this turmoil no biologist has been led to doubt the fact that evolution occurred; we are debating how it happened. We are all trying to explain the same thing: the tree of evolutionary descent linking all organisms by ties of genealogy. Creationists pervert and caricature this debate by conveniently neglecting the common conviction that underlies it, and by falsely suggesting that we now doubt the very phenomenon we are struggling to understand.

Using another invalid argument, creationists claim that "the dogma of separate creations," as Darwin characterized it a century ago, is a scientific theory meriting equal time with evolution in high school biology curricula. But a prevailing viewpoint among philosophers of science belies this creationist argument. Philosopher Karl Popper has argued for decades that the primary criterion of science is the falsifiability of its theories. We can never prove absolutely, but we can falsify. A set of ideas that cannot, in principle, be falsified is not science.

The entire creationist argument involves little more than a rhetorical attempt to falsify evolution by presenting supposed contradictions among its supporters. Their brand of creationism, they claim, is "scientific" because it follows the Popperian model in trying to demolish evolution. Yet Popper's argument must apply in both directions. One does not become a scientist by the simple act of trying to falsify another scientific system; one has to present an alternative system that also meets Popper's criterion—it too must be falsifiable in principle.

"Scientific creationism" is a self-contradictory, nonsense phrase precisely because it cannot be falsified. I can envision observations and experiments that would disprove any evolutionary theory I know, but I cannot imagine what potential data could lead creationists to abandon their beliefs. Unbeatable systems are dogma, not science. Lest I seem harsh or rhetorical, I quote creationism's leading intellectual, Duane Gish, Ph.D., from his recent (1978) book Evolution? The Fossils Say No! "By creation we mean the bringing into being by a supernatural Creator of the basic kinds of plants and animals by the process of sudden, or fiat, creation. We do not know how the Creator created, what processes He used, for He used processes which are not now operating anywhere in the natural universe [Gish's italics]. This is why we refer to creation as special creation. We cannot discover by scientific investigations anything about the creative processes used by the Creator."

Pray tell, Dr. Gish, in the light of your last sentence, what then is "scientific" creationism?

Telerobotics


From Wikipedia, the free encyclopedia


Justus security robot patrolling in Kraków

Telerobotics is the area of robotics concerned with the control of semi-autonomous robots from a distance, chiefly using wireless networks (such as Wi-Fi, Bluetooth, and the Deep Space Network) or tethered connections. It is a combination of two major subfields, teleoperation and telepresence.

Teleoperation

Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance.[1]
Teleoperation is the standard term in both the research and technical communities, and is by far the most common way of referring to operation at a distance. By contrast, "telepresence" is a less standard term and may refer to a whole range of existence or interaction that carries a remote connotation.

A telemanipulator (or teleoperator) is a device that is controlled remotely by a human operator. If such a device has the ability to perform autonomous work, it is called a telerobot. If the device is completely autonomous, it is called a robot. In simple cases the controlling operator's command actions correspond directly to actions in the device controlled, as for example in a radio controlled model aircraft or a tethered deep submergence vehicle. Where communications delays make direct control impractical (such as a remote planetary rover), or it is desired to reduce operator workload (as in a remotely controlled spy or attack aircraft), the device will not be controlled directly, instead being commanded to follow a specified path. At increasing levels of sophistication the device may operate somewhat independently in matters such as obstacle avoidance, also commonly employed in planetary rovers.
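The spectrum described above, from direct control to commanded-path operation, can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names and the unit-speed, straight-line velocity model are assumptions for the sketch, not any real rover or aircraft API.

```python
# Direct teleoperation: the operator's command maps 1:1 to actuation,
# as in a radio-controlled model aircraft or a tethered submersible.
def direct_control(operator_input):
    return operator_input  # e.g. stick deflection passed straight through

# Supervisory (commanded-path) control: the operator specifies waypoints,
# and the device steers itself toward the next one. This is the mode used
# when communication delays make direct control impractical.
def supervisory_control(waypoints, position, speed=1.0):
    if not waypoints:
        return (0.0, 0.0)  # path complete: stop
    tx, ty = waypoints[0]
    px, py = position
    dx, dy = tx - px, ty - py
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 1e-6:
        return (0.0, 0.0)
    # Velocity command of magnitude `speed`, pointed at the next waypoint.
    return (speed * dx / dist, speed * dy / dist)
```

More sophisticated devices would add obstacle avoidance between the waypoint command and the actuators, as the article notes for planetary rovers.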

Devices designed to allow the operator to control a robot at a distance are sometimes grouped under the term telecheric robotics.

Two major components of telerobotics and telepresence are the visual and control applications. A remote camera provides a visual representation of the view from the robot. Placing the robot's camera in a perspective that allows intuitive control is an old idea, anticipated in science fiction (Robert A. Heinlein's Waldo, 1942), but it became practical only recently, as speed, resolution and bandwidth have only recently been adequate to controlling the robot's camera in a meaningful way. Using a head-mounted display, control of the camera can be facilitated by tracking the operator's head, as shown in the figure below.

This only works if the user is comfortable with the latency of the system, the lag in the response to movements, and the visual representation. Issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of the movement and response, and optical distortion from the camera lens and head-mounted display lenses can cause 'simulator sickness' in the user, which is exacerbated by the lack of vestibular stimulation accompanying the visual representation of motion.

Mismatches between the user's motions and the system's response, such as registration errors, lag in movement response due to over-filtering, inadequate resolution for small movements, and slow speed, can all contribute to these problems.
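The over-filtering trade-off mentioned above can be shown with a minimal sketch: a one-pole exponential (low-pass) filter applied to tracked head motion. A small smoothing factor suppresses sensor jitter but makes the output lag behind a sudden head turn; the factor and step input below are illustrative assumptions, not values from any real tracker.

```python
# One-pole low-pass filter: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].
# alpha near 1 -> raw, responsive input; alpha near 0 -> smooth but laggy.
def smooth(samples, alpha):
    y = samples[0]
    out = [y]
    for x in samples[1:]:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

# A step input models a sudden head turn from 0 to 1 (normalized angle).
step = [0.0] * 3 + [1.0] * 7

light = smooth(step, 0.9)  # follows the movement almost immediately
heavy = smooth(step, 0.2)  # jitter-free, but still far from target at the end
```

The heavily filtered output is what the article calls "lag in movement response due to overfiltering": the display keeps moving after the head has stopped.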

The same technology can control the robot itself, but then eye–hand coordination issues propagate through the whole system, and user tension or frustration can make the system difficult to use.

Ironically, the tendency in building robots has been to minimize their degrees of freedom, because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions.

Interfaces

A telerobotic interface can be as simple as a common MMK (monitor-mouse-keyboard) interface. While this is not immersive, it is inexpensive. Telerobotic systems driven by internet connections are often of this type. A valuable addition to MMK is a joystick, which provides a more intuitive navigation scheme for planar robot movement.
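A minimal sketch of the joystick navigation scheme noted above, assuming normalized axis values in [-1, 1] and a simple mobile base that takes a linear and an angular velocity; the velocity limits and deadzone are illustrative assumptions.

```python
V_MAX = 0.5  # m/s, assumed linear speed limit of the base
W_MAX = 1.0  # rad/s, assumed turning rate limit

def joystick_to_twist(axis_y, axis_x, deadzone=0.1):
    """Map stick deflection to a (linear, angular) velocity command.

    Forward/back (axis_y) drives linear speed; left/right (axis_x)
    drives turning. A deadzone ignores small accidental deflections
    around the stick's center position.
    """
    def shape(a):
        return 0.0 if abs(a) < deadzone else a
    v = shape(axis_y) * V_MAX
    w = -shape(axis_x) * W_MAX  # push right -> clockwise turn
    return (v, w)
```

This is why a joystick is more intuitive than mouse and keyboard for planar movement: the stick position corresponds continuously to the robot's velocity rather than to discrete key presses.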

Dedicated telepresence setups utilize a head mounted display with either single or dual eye display, and an ergonomically matched interface with joystick and related button, slider, trigger controls.

Future interfaces will merge fully immersive virtual reality interfaces and port real-time video instead of computer-generated images. Another example would be to use an omnidirectional treadmill with an immersive display system so that the robot is driven by the person walking or running. Additional modifications may include merged data displays such as Infrared thermal imaging, real-time threat assessment, or device schematics.

Applications

Telerobotics for Space


NASA HERRO (Human Exploration using Real-time Robotic Operations) telerobotic exploration concept[2]

With the exception of the Apollo program, most space exploration has been conducted with telerobotic space probes. Most space-based astronomy, for example, has been conducted with telerobotic telescopes. The Russian Lunokhod-1 mission put a remotely driven rover on the Moon, which was driven in real time (with a 2.5-second lightspeed time delay) by human operators on the ground. Robotic planetary exploration programs use spacecraft that are programmed by humans at ground stations, essentially achieving a long-time-delay form of telerobotic operation. Recent noteworthy examples include the Mars Exploration Rovers (MER) and the Curiosity rover. In the case of the MER mission, the spacecraft and the rover operated on stored programs, with the rover drivers on the ground programming each day's operation. The International Space Station (ISS) uses a two-armed telemanipulator called Dextre. More recently, the humanoid robot Robonaut[3] has been added to the space station for telerobotic experiments.
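The light-time delays that separate real-time lunar driving from long-time-delay Mars operation are easy to check from first principles. The distances below are approximate mean values (actual delays vary with orbital geometry).

```python
C = 299_792.458  # speed of light in km/s

def round_trip_delay_s(distance_km):
    """A command must travel out and telemetry must travel back,
    so the control loop sees twice the one-way light time."""
    return 2 * distance_km / C

# Earth-Moon mean distance: ~384,400 km -> roughly 2.6 s round trip,
# consistent with the delay Lunokhod-1 operators drove through.
moon = round_trip_delay_s(384_400)

# Mars at a close approach: ~54,600,000 km -> about 6 minutes round trip,
# which is why Mars rovers run stored daily programs instead.
mars_close = round_trip_delay_s(54_600_000)
```

The contrast explains the two operating regimes in the paragraph above: a few seconds of delay permits awkward but workable real-time driving, while minutes of delay forces supervisory, program-and-wait operation.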

NASA has proposed use of highly capable telerobotic systems[4] for future planetary exploration using human exploration from orbit. In a concept for Mars Exploration proposed by Landis, a precursor mission to Mars could be done in which the human vehicle brings a crew to Mars, but remains in orbit rather than landing on the surface, while a highly capable remote robot is operated in real time on the surface.[5] Such a system would go beyond the simple long time delay robotics and move to a regime of virtual telepresence on the planet. One study of this concept, the Human Exploration using Real-time Robotic Operations (HERRO) concept, suggested that such a mission could be used to explore a wide variety of planetary destinations.[2]

Telepresence/Videoconferencing


iRobot Ava 500, an autonomous roaming telepresence robot.

The prevalence of high quality video conferencing using mobile devices, tablets and portable computers has enabled a drastic growth in Telepresence Robots to help give a better sense of remote physical presence for communication and collaboration in the office, home, school, etc. when one cannot be there in person. The robot avatar can move or look around at the command of the remote person.[6]

There have been two primary approaches, both of which utilize videoconferencing on a display: 1) desktop telepresence robots, which typically mount a phone or tablet on a motorized desktop stand so the remote person can look around a remote environment by panning and tilting the display; and 2) drivable telepresence robots, which typically contain a display (integrated or a separate phone or tablet) mounted on a roaming base. Some examples of desktop telepresence robots include Kubi by Revolve Robotics, Galileo by Motrr, and Swivl. Some examples of roaming telepresence robots include Beam by Suitable Technologies, Double by Double Robotics, RP-Vita by iRobot, Anybots, Vgo, TeleMe by Mantarobot, and Romo by Romotive. More modern roaming telepresence robots may include an ability to operate autonomously: the robots can map out the space and avoid obstacles while driving themselves between rooms and their docking stations.[7]

For over 20 years, telepresence robots, sometimes also referred to as remote-presence devices, have been a vision of the tech industry. Until recently, engineers did not have the processors, the miniature microphones, cameras and sensors, or the cheap, fast broadband necessary to support them. But in the last five years, a number of companies have introduced functional devices. As the value of skilled labor rises, these companies are beginning to see a way to eliminate the barrier of geography between offices.[8] Traditional videoconferencing systems and telepresence rooms generally offer pan/tilt/zoom cameras with far-end control. The ability of the remote user to turn the device's head and look around naturally during a meeting is often seen as the strongest feature of a telepresence robot. For this reason, developers have created a new category of desktop telepresence robots that concentrate on this feature at a much lower cost. These desktop telepresence robots, also called head-and-neck robots, allow users to look around during a meeting and are small enough to be carried from location to location, eliminating the need for remote navigation.[9]

Marine Applications

Marine remotely operated vehicles (ROVs) are widely used to work in water too deep or too dangerous for divers. They repair offshore oil platforms and attach cables to sunken ships to hoist them. They are usually attached by a tether to a control center on a surface ship. The wreck of the Titanic was explored by an ROV, as well as by a crew-operated vessel.

Telemedicine

In addition, a great deal of telerobotic research is being done in the field of medical devices and minimally invasive surgical systems. With a robotic surgery system, a surgeon can work inside the body through tiny holes just big enough for the manipulator, with no need to open the chest cavity to allow hands inside.

Other Telerobotic Applications

Remote manipulators are used to handle radioactive materials.

Telerobotics has been used in installation art pieces; Telegarden is an example of a project where a robot was operated by users through the Web.
