
Monday, September 1, 2014

Van Allen radiation belt


From Wikipedia, the free encyclopedia
 
Van Allen radiation belts (cross section)
 
A radiation belt is a layer of energetic charged particles that is held in place around a magnetized planet, such as the Earth, by the planet's magnetic field. The Earth has two such belts, and others may sometimes be temporarily created. The discovery of the belts is credited to James Van Allen, and as a result the Earth's belts bear his name. The main belts extend from an altitude of about 1,000 to 60,000 kilometers above the surface, a region in which radiation levels vary. Most of the particles that form the belts are thought to come from the solar wind, while others are produced by cosmic rays.[1] The belts are located in the inner region of the Earth's magnetosphere. The outer belt is formed of energetic electrons, while the inner belt contains a combination of protons and electrons. The radiation belts additionally contain lesser amounts of other nuclei, such as alpha particles. The belts endanger satellites, which must protect their sensitive components with adequate shielding if their orbits spend significant time in the radiation belts. In 2013, NASA reported that the Van Allen Probes had discovered a transient third radiation belt, which was observed for four weeks until it was destroyed by a powerful interplanetary shock wave from the Sun.[2]

Discovery

Kristian Birkeland, Carl Størmer, and Nicholas Christofilos had investigated the possibility of trapped charged particles before the Space Age.[3] Explorer 1 and Explorer 3 confirmed the existence of the belt in early 1958 under James Van Allen at the University of Iowa. The trapped radiation was first mapped out by Explorer 4, Pioneer 3 and Luna 1.

The term Van Allen belts refers specifically to the radiation belts surrounding Earth; however, similar radiation belts have been discovered around other planets. The Sun itself does not support long-term radiation belts, as it lacks a stable, global dipole field. The Earth's atmosphere limits the belts' particles to regions above 200–1,000 km,[4] while the belts do not extend beyond 7 Earth radii (RE).[4] The belts are confined to a volume which extends about 65°[4] from the celestial equator.

Research

Jupiter's variable radiation belts

The NASA Van Allen Probes mission aims to gain scientific understanding (to the point of predictability) of how populations of relativistic electrons and ions in space form or change in response to changes in solar activity and the solar wind. NASA Institute for Advanced Concepts–funded studies have proposed magnetic scoops to collect antimatter that naturally occurs in the Van Allen belts of Earth, although only about 10 micrograms of antiprotons are estimated to exist in the entire belt.[5]

The Van Allen Probes mission successfully launched on August 30, 2012.[6] The primary mission is scheduled to last two years with expendables expected to last four. NASA's Goddard Space Flight Center manages the overall Living With a Star program of which the Van Allen Probes is a project, along with Solar Dynamics Observatory (SDO). The Applied Physics Laboratory is responsible for the overall implementation and instrument management for the Van Allen Probes.[7]

Van Allen radiation belts also exist around other planets in the solar system whose magnetic fields are powerful enough to sustain them. To date, however, many of these radiation belts have been poorly mapped. The Voyager Program (namely Voyager 2) only nominally confirmed the existence of similar belts around Uranus and Neptune.

Outer belt

Laboratory simulation of the Van Allen belt's influence on the Solar Wind; these aurora-like Birkeland currents were created by the scientist Kristian Birkeland in his terrella, a magnetized anode globe in an evacuated chamber

The large outer radiation belt is almost toroidal in shape, extending from an altitude of about three to ten Earth radii (RE), or 13,000 to 60,000 kilometres (8,100 to 37,300 mi), above the Earth's surface. Its greatest intensity is usually around 4–5 RE. The outer electron radiation belt is mostly produced by inward radial diffusion[8][9] and local acceleration[10] due to transfer of energy from whistler-mode plasma waves to radiation belt electrons. Radiation belt electrons are also constantly removed by collisions with atmospheric neutrals,[10] losses to the magnetopause, and outward radial diffusion. The outer belt consists mainly of high-energy (0.1–10 MeV) electrons trapped by the Earth's magnetosphere. The gyroradii of energetic protons would be large enough to bring them into contact with the Earth's atmosphere. The electrons here have a high flux; at the outer edge (close to the magnetopause), where geomagnetic field lines open into the geomagnetic "tail", the flux of energetic electrons can drop to low interplanetary levels within about 100 km (62 mi), a decrease by a factor of 1,000.
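To see why the outer belt can hold energetic electrons but not energetic protons, it helps to compare gyroradii. The sketch below is only a back-of-the-envelope illustration, not part of the original article: it assumes an equatorial field strength of roughly 250 nT (a typical value near 4–5 RE) and picks a 5 MeV electron and a 100 MeV proton as representative particles.

```python
import math

# Back-of-the-envelope gyroradius comparison, r = p / (qB), for assumed
# outer-belt conditions (B ~ 250 nT near 4-5 Earth radii).

E_CHARGE = 1.602e-19   # elementary charge, C
C_LIGHT = 2.998e8      # speed of light, m/s
B_FIELD = 250e-9       # assumed equatorial field strength, T

def gyroradius_km(kinetic_mev, rest_mev):
    """Relativistic gyroradius (km) of a singly charged particle."""
    total_mev = kinetic_mev + rest_mev                 # total energy, MeV
    pc_mev = math.sqrt(total_mev**2 - rest_mev**2)     # momentum times c, MeV
    p_si = pc_mev * 1e6 * E_CHARGE / C_LIGHT           # momentum, kg m/s
    return p_si / (E_CHARGE * B_FIELD) / 1e3           # r = p / (qB), km

print(f"5 MeV electron:  ~{gyroradius_km(5.0, 0.511):,.0f} km")    # roughly 70 km
print(f"100 MeV proton:  ~{gyroradius_km(100.0, 938.3):,.0f} km")  # roughly 6,000 km
# The proton's orbit comes out comparable to an Earth radius, so at outer-belt
# distances such protons cannot stay trapped without reaching the atmosphere,
# while the electron's orbit is only tens of kilometres across.
```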

The trapped particle population of the outer belt is varied, containing electrons and various ions. Most of the ions are in the form of energetic protons, but a certain percentage are alpha particles and O+ oxygen ions, similar to those in the ionosphere but much more energetic. This mixture of ions suggests that ring current particles probably come from more than one source.

The outer belt is larger than the inner belt and its particle population fluctuates widely. Energetic (radiation) particle fluxes can increase and decrease dramatically as a consequence of geomagnetic storms, which are themselves triggered by magnetic field and plasma disturbances produced by the Sun. The increases are due to storm-related injections and acceleration of particles from the tail of the magnetosphere.

On February 28, 2013, the discovery of a third radiation belt, consisting of high-energy ultrarelativistic charged particles, was reported. In a news conference, NASA's Van Allen Probes team stated that this third belt is generated when a coronal mass ejection is created by the Sun. It has been represented as a separate structure that splits the outer belt, like a knife, on its outer side, and exists separately as a storage container of particles for about a month before merging once again with the outer belt.[11]

The unusual stability of this third, transient belt has been explained as being due to a 'trapping' by the Earth's magnetic field of ultrarelativistic particles as they are lost from the second, traditional outer belt. While the outer zone, which forms and disappears over the course of a day, is highly variable owing to interactions with the atmosphere, the ultrarelativistic particles of the third belt are thought not to scatter into the atmosphere, as they are too energetic to interact with atmospheric waves at low latitudes.[12] This absence of scattering, together with the trapping, allowed them to persist for a long time, until they were finally destroyed by an unusual event, such as the shock wave from the Sun that eventually removed the belt.

Inner belt

Two giant belts of radiation surround Earth. The inner belt is dominated by protons and the outer one by electrons. Image credit: NASA
"Zebra stripes" in the inner radiation belt: an example of energetic electron spectra, measured on June 18, 2013 by NASA's twin Van Allen Probes in the inner radiation belt during a quiet period of low solar activity. The striped, banded pattern is caused by the rotation of the Earth, previously thought to have no effect on the highly energetic particles of the radiation belt. Credit: A. Ukhorskiy/JHUAPL

While protons form one radiation belt, trapped electrons present two distinct structures, the inner and outer belt. The inner electron Van Allen Belt extends typically from an altitude of 0.2 to 2 Earth radii (L values of 1 to 3) or 600 miles (1,000 km) to 3,700 miles (6,000 km) above the Earth.[1][13] In certain cases when solar activity is stronger or in geographical areas such as the South Atlantic Anomaly (SAA), the inner boundary may go down to roughly 200 kilometers[14] above the Earth's surface. The inner belt contains high concentrations of electrons in the range of hundreds of keV and energetic protons with energies exceeding 100 MeV, trapped by the strong (relative to the outer belts) magnetic fields in the region.[15]

It is believed that proton energies exceeding 50 MeV in the lower belts at lower altitudes are the result of the beta decay of neutrons created by cosmic ray collisions with nuclei of the upper atmosphere. The source of lower energy protons is believed to be proton diffusion due to changes in the magnetic field during geomagnetic storms.[16]

Due to the slight offset of the belts from Earth's geometric center, the inner Van Allen belt makes its closest approach to the surface at the South Atlantic Anomaly.[17][18]

In March 2014, a pattern resembling 'zebra stripes' was discovered in the radiation belts by NASA's energetic particle experiment, RBSPICE. The reported explanation was that, because of the tilt of Earth's magnetic field axis, the planet's rotation generates an oscillating, weak electric field that permeates the entire inner radiation belt. This field affects the electrons, which behave much like a fluid.[19]

The global oscillations slowly stretch and fold this fluid, resulting in the striped pattern observed across the entire inner belt, which extends from above Earth's atmosphere, about 800 km above the planet's surface, up to roughly 13,000 km.[20]

Flux values

In the belts, at a given point, the flux of particles of a given energy decreases sharply with energy.
At the magnetic equator, electrons with energies exceeding 500 keV have omnidirectional fluxes ranging from 1.2×10⁶ up to 9.4×10⁹ particles per square centimeter per second, while electrons exceeding 5 MeV range from 3.7×10⁴ up to 2×10⁷.

The proton belts contain protons with kinetic energies ranging from about 100 keV (which can penetrate 0.6 µm of lead) to over 400 MeV (which can penetrate 143 mm of lead).[21]

Most published flux values for the inner and outer belts may not show the maximum probable flux densities that are possible in the belts. There is a reason for this discrepancy: the flux density and the location of the peak flux are variable (depending primarily on solar activity), and the number of spacecraft with instruments observing the belts in real time has been limited. The Earth has not experienced a solar storm of Carrington-event intensity and duration while spacecraft with the proper instruments have been available to observe the event.

Regardless of the differences in the flux levels of the inner and outer Van Allen belts, the beta radiation levels would be dangerous to humans exposed for an extended period of time.[17][22]

Antimatter confinement

In 2011, a study confirmed earlier speculation that the Van Allen belt could confine antiparticles. While passing through the SAA, the PAMELA experiment detected antiproton levels orders of magnitude higher than are expected from normal particle decays. This suggests the Van Allen belts confine a significant flux of antiprotons produced by the interaction of the Earth's upper atmosphere with cosmic rays.[23] The energy of the antiprotons has been measured in the range of 60–750 MeV.

Implications for space travel

Missions beyond low Earth orbit leave the protection of the geomagnetic field and transit the Van Allen belts. Thus they may need to be shielded against exposure to cosmic rays, Van Allen radiation, or solar flares. The region between two and four Earth radii lies between the two radiation belts and is sometimes referred to as the "safe zone".[24][25]

Solar cells, integrated circuits, and sensors can be damaged by radiation. Geomagnetic storms occasionally damage electronic components on spacecraft. Miniaturization and digitization of electronics and logic circuits have made satellites more vulnerable to radiation, as the total electric charge in these circuits is now small enough so as to be comparable with the charge of incoming ions. Electronics on satellites must be hardened against radiation to operate reliably. The Hubble Space Telescope, among other satellites, often has its sensors turned off when passing through regions of intense radiation.[26] A satellite shielded by 3 mm of aluminium in an elliptic orbit (200 by 20,000 miles (320 by 32,190 km)) passing the radiation belts will receive about 2,500 rem (25 Sv) per year. Almost all radiation will be received while passing the inner belt.[27]
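To put the quoted annual figure in per-orbit terms, the short sketch below spreads the roughly 2,500 rem per year evenly over the orbits of the 320 × 32,190 km ellipse mentioned above. This is a simplification for illustration only; as the text notes, almost all of the dose is actually picked up during the inner-belt passes.

```python
import math

# Rough per-orbit dose estimate from the ~2,500 rem/year figure quoted above
# for a 3 mm aluminium-shielded satellite in a 320 x 32,190 km elliptical orbit.
# Evenly spreading the dose over all orbits is a simplification.

MU_EARTH = 398_600.0   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0      # mean Earth radius, km

perigee_alt, apogee_alt = 320.0, 32_190.0                 # km
a = R_EARTH + (perigee_alt + apogee_alt) / 2.0            # semi-major axis, km
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)       # Kepler's third law

orbits_per_year = 365.25 * 86_400 / period_s
dose_rem_per_orbit = 2_500.0 / orbits_per_year

print(f"Orbital period:      ~{period_s / 3600:.1f} hours")
print(f"Orbits per year:     ~{orbits_per_year:.0f}")
print(f"Average dose/orbit:  ~{dose_rem_per_orbit:.1f} rem "
      f"(~{dose_rem_per_orbit / 100:.3f} Sv)")
```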

The Apollo missions marked the first time humans traveled through the Van Allen belts, which were one of several radiation hazards known to mission planners.[28] The astronauts had low exposure in the Van Allen belts due to the short period of time spent flying through them.[29] The command module's inner structure was an aluminium "sandwich" consisting of a welded aluminium inner skin, a thermally bonded honeycomb core, and a thin aluminium "face sheet". The steel honeycomb core and outer face sheets were thermally bonded to the inner skin.

In fact, the astronauts' overall exposure was dominated by solar particles once outside Earth's magnetic field. The total radiation received by the astronauts varied from mission to mission but was measured to be between 0.16 and 1.14 rads (1.6 and 11.4 mGy), much less than the standard of 5 rem (50 mSv) per year set by the United States Atomic Energy Commission for people who work with radioactivity.[28]

Causes

Simulated Van Allen Belts generated by a plasma thruster in tank #5 at the Electric Propulsion Laboratory located at the then-called Lewis Research Center, Cleveland, Ohio

It is generally understood that the inner and outer Van Allen belts result from different processes. The inner belt, consisting mainly of energetic protons, is the product of the decay of so-called "albedo" neutrons which are themselves the result of cosmic ray collisions in the upper atmosphere. The outer belt consists mainly of electrons. They are injected from the geomagnetic tail following geomagnetic storms, and are subsequently energized through wave-particle interactions.

In the inner belt, particles that originate from the Sun are trapped in the Earth's nonlinear magnetic field. Particles gyrate around and move along field lines. As particles encounter regions of greater magnetic field line density, their "longitudinal" velocity is slowed and can be reversed, reflecting the particle. This causes the particles to bounce back and forth between the Earth's poles.[30] Globally, the motion of these trapped particles is chaotic.[31]
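The trapping and bouncing described above can be stated compactly with the standard adiabatic-invariant argument from plasma physics (a textbook result, not something derived in this article):

```latex
% The magnetic moment of a gyrating particle is conserved:
\mu = \frac{m v_\perp^2}{2B} = \text{const.}
% A particle with equatorial pitch angle \alpha_0, where the field is B_0,
% is reflected ("mirrored") where the field strength reaches
B_m = \frac{B_0}{\sin^2 \alpha_0}.
% If that mirror point lies below the top of the atmosphere, the particle is
% inside the "loss cone" and is removed; otherwise it bounces between hemispheres.
```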

A gap between the inner and outer Van Allen belts, sometimes called the safe zone or safe slot, is caused by Very Low Frequency (VLF) waves, which scatter particles in pitch angle and so cause them to be lost to the atmosphere. Solar outbursts can pump particles into the gap, but they drain again in a matter of days. The radio waves were originally thought to be generated by turbulence in the radiation belts, but recent work by James L. Green of the Goddard Space Flight Center, comparing maps of lightning activity collected by the Microlab 1 spacecraft with data on radio waves in the radiation-belt gap from the IMAGE spacecraft, suggests that they are actually generated by lightning within Earth's atmosphere. The radio waves strike the ionosphere at the right angle to pass through it only at high latitudes, where the lower ends of the gap approach the upper atmosphere. These results are still under scientific debate.

There have been nuclear tests in space that have caused artificial radiation belts. Starfish Prime, a high altitude nuclear test, created an artificial radiation belt that damaged or destroyed as many as one third of the satellites in low Earth orbit at the time.

Proposed removal

The belts are a hazard for artificial satellites and are dangerous for human beings, and are difficult and expensive to shield against.

High Voltage Orbiting Long Tether, or HiVOLT, is a concept proposed by Russian physicist V.V. Danilov and further refined by Robert P. Hoyt and Robert L. Forward for draining the charged particles from the Van Allen radiation belts[32] that surround the Earth.[33] A proposed configuration consists of a system of five 100 km long conducting tethers deployed from satellites and charged to a large voltage. Charged particles that encounter the tethers would have their pitch angle changed, over time dissolving the inner belts. Hoyt and Forward's company, Tethers Unlimited, performed a preliminary analysis simulation and produced a chart depicting a theoretical radiation flux reduction[34] to less than 1% of current levels within two months for the inner belts that threaten LEO objects.[35]

Cambridge Study Reveals How Life Could Have Started From Nothing


Image Credit: Getty
 
One of the most challenging questions in basic biology and the history of evolution and life stems from the unknown origin of the first cells billions of years ago. Though many pieces of the puzzle have been put together, this origin story remains somewhat murky. But a team of researchers from the University of Cambridge believe they've accidentally stumbled on an answer, and a very compelling one at that.

The discovery: Through routine quality control testing, a researcher working with Markus Ralser, who would eventually become the lead researcher for the project, stumbled upon signs of metabolic processes where, for all intents and purposes, there shouldn't have been any. Until now, much of the scientific community has generally agreed that ribonucleic acid, or RNA, was the first building block of life because it produces enzymes that can catalyze complex sequences of reactions such as metabolic action. However, Ralser's lab found the end products of the metabolic process without any RNA present. Instead, the findings indicate that complex, life-forming reactions like these could occur spontaneously given the right, but surprisingly simple, conditions.

"People have said that these pathways look so complex they couldn't form by environmental chemistry alone," Ralser told NewScientist. "This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA."

Testing: Because Ralser's team basically stumbled upon their initial findings, they repeated the process several times and were pleasantly surprised by repeated successful outcomes. So, taking things to the next level, Ralser began working with Cambridge's Earth sciences department to determine whether these processes could have occurred in the Archean Ocean, the oxygen-free world predating photosynthesis that covered the planet almost 4 billion years ago.

"In the beginning we had hoped to find one reaction or two maybe, but the results were amazing," said Ralser. "We could reconstruct two metabolic pathways almost entirely."

If these metabolic pathways were occurring in the absence of RNA in conditions rich with iron and other metals and phosphate, it seems increasingly likely that life could have literally started from nothing and spontaneously formed in ways until now believed impossible.

So what? "I think this paper has really interesting connotations for the origins of life," says Matthew Powner at University College London. "For origins of life, it is important to understand where the source molecules come from."

Ralser's team is the first to show that life could literally come from nothing. Of course, in the scientific community this could be a major advancement, albeit one that is still only a part of an overall picture that is still forming through years of continuing research. However, these findings could also potentially play into the creationism versus evolution debate. One of the holes often poked by creationists is the complex and hard-to-explain idea of life starting from nothing at all, and for the most part scientific explanations have been somewhat lacking. However, these findings indicate that something from nothing might not be as far-fetched an idea as it seems.

Science as Salvation?



Marcelo Gleiser

The Island of Knowledge
The Limits of Science and the Search for Meaning.
By Marcelo Gleiser.

Whether or not scientists are from Mars and humanists from Venus, the “two cultures” debate about the arts and sciences has never been down to earth. For decades we’ve endured schematic sparring between straw men: humanists claim that scientists are reductive, scientists find humanists reactionary. (A recent bout between the cognitive scientist Steven Pinker and the literary critic Leon Wieseltier in the pages of The New Republic ran true to form.) Marcelo Gleiser, a physicist with strong ties to the humanities, is alarmed by the hubristic stance of his discipline and the backlash it is liable to provoke. He has written The Island of Knowledge as “a much needed self-analysis in a time when scientific speculation and arrogance are rampant…. I am attempting to protect science from attacks on its intellectual integrity.”

Perhaps this well-meant intervention is unnecessary, given the many signs of interdisciplinary concord today. These include the growth of science studies, technocultural studies and the digital humanities within the liberal arts; successful popularizations of science in the media—the new Cosmos had the largest debut of any series in television history; and the ongoing enthusiasm for science fiction in mass culture. (True, the genre is often light-years away from genuine science, but at its best it’s an exemplary merger of the two cultures.) From such portents alone, we seem poised to embrace the ideal of “one culture, many methods.” But might this be a pious platitude, if not a colossal category mistake? Are the arts and sciences actually fated to be an estranged couple, burdening their offspring with crippling complexes?

Gleiser hopes to heal the rift between the two cultures by denying the scientific dream of establishing final truths. He insists that while the arts and sciences have different methods, they are fundamentally united in their search for humanity’s roots and purposes; they also share the human limitation of finding only provisional and incomplete answers. He traces Western science’s misguided aspiration to omniscience, and its consequent devaluing of human fallibility, to its beginnings in classical Greece. This is certainly an appropriate place to start for a history of science’s Platonic aspirations. However, the origin of the “two cultures” debate that Gleiser implicitly addresses is more recent, and thus less entrenched, than his own chronology implies. The unhappy couple stands a good chance of being reconciled through judicious interventions such as his.

Their current disaffection commenced in the early nineteenth century, when the “natural philosopher,” a man of parts, began to be replaced by the specialized “scientist,” a term coined in the 1830s. A new division of labor emerged. Scientists claimed to establish objective facts and laws about the natural world by stifling their imagination and relying on empirical observation, testing and prediction; humanists embraced the Romantic imagination, interpreting the ambiguous nature of human experience through empathy as well as analysis. At the dawn of the twentieth century, reconciliation beckoned within the new domain of the “social sciences.” Economists, anthropologists, sociologists, psychologists and historians combined rational inquiry with intuitive insight—the sort of “scientific use of the imagination” proposed by the scientist John Tyndall and exemplified by the fictional icon Sherlock Holmes. Nevertheless, methods clashed and philosophies jostled. Should social scientists seek simple, encompassing laws like the natural sciences, or should they highlight particularity and uniqueness, like the humanities? The debate revolved around approaches deemed “nomothetic” (generalizing) or “idiographic” (individualizing)—terms so ugly they assured public disinterest.
* * *
The battle lines became firmly drawn in the years following World War II. In Science and Human Values (1956), Jacob Bronowski attempted to overcome the sullen suspicions between humanists and scientists, each now condemning the other for the horrifying misuse of technology during the conflict:
Those whose education and perhaps tastes have confined them to the humanities protest that the scientists alone are to blame, for plainly no mandarin ever made a bomb or an industry. The scientists say, with equal contempt, that the Greek scholars and the earnest explorers of cave paintings do well to wash their hands of blame; but what in fact are they doing to help direct the society whose ills grow more often from inaction than from error?
Bronowski was a published poet and biographer of William Blake as well as a mathematician; he knew that artists and scientists had different aims and methods. Yet he also attested that both engaged in imaginative explorations of the unities underlying the human and natural worlds.

If Bronowski’s stress on the imagination as the foundation of both the arts and sciences had prevailed, Gleiser would not need to remind his readers that Newton and Einstein shared a similar “belief in the creative process.” However, while Bronowski meant to heal the breach by exposing it, he inadvertently encouraged others to expand it into an unbridgeable gulf, a quagmire of stalemate and trench warfare. His friend C.P. Snow battened on the division in lectures that were subsequently published under the meme-friendly title The Two Cultures and the Scientific Revolution (1959). Snow acknowledged that scientists could be philistine about the humanities, but his ire was directed at the humanists: they composed the governing establishment, their willful ignorance about science impeding policies that could help millions worldwide. As the historian Guy Ortolano has shown in The Two Cultures Controversy (2009), Snow tactlessly insinuated that the literary intelligentsia’s delight in irrational modernism rather than rational science was partly responsible for the Holocaust: “Didn’t the influence of all they represent bring Auschwitz that much closer?” Such ad hominem attacks raised the hackles of the literary critic F.R. Leavis, himself a master of the art. His response, Two Cultures? The Significance of C.P. Snow (1962), proved only that humanists could be just as intemperate as Snow implied. (One critic, appalled by Leavis’s vituperation, dubbed him “the Himmler of Literature.”)

The “two cultures” debate has continued for decades, often rehashing the same issues and generating more heat than light—a metaphor that reminds us of how entwined the arts and sciences are in everyday life. In recent years, however, the tone and substance of the debate have changed. There is a revived tenor of nineteenth-century scientific triumphalism, owing in part to the amazing successes of the natural sciences, from the standard model in physics to DNA sequencing and the Human Genome Project. Numerous physicists are convinced that they will discover a final “theory of everything” proving the unity of nature’s laws and defining its constituent elements. Not all scientists share this reductionist outlook, but the wider culture unintentionally reinforces it, thanks to information technology’s colonization of everyday life. We’re more primed than ever before to think in terms of keyword searches, algorithmic sequences and Big Data.

No wonder that science, for many, has become a secular holy writ, goading its believers to denounce all forms of religion as empty superstition while converting the humanistic disciplines into mere disciples of science. The new priesthood even performs last rites, as Stephen Hawking did in 2011: “Philosophy is dead,” he pronounced, because “[p]hilosophers have not kept up with modern developments in science. Particularly physics.” Gleiser is troubled by the fatuous preening of some prominent scientists, who risk alienating a public otherwise predisposed to appreciate the marvels of scientific discovery and the mysteries of scientific exploration: “To claim to know the ‘truth’ is too heavy a burden for scientists to carry. We learn from what we can measure and should be humbled by how much we can’t. It’s what we don’t know that matters.”

In this polarized atmosphere, offers of a truce in the manner of Bronowski simply inflame mutual mistrust. The recent dust-up in The New Republic began when Pinker extended to the humanities an olive branch of sorts in the name of “consilience” with science. Wieseltier identified it as a cudgel, and in some ways he was right: Pinker began by transubstantiating eighteenth-century philosophers like Hume and Rousseau into scientists manqué, and then added insult to injury by suggesting that the humanities become more like the sciences by adopting a “progressive agenda.” Wieseltier agreed with him that the boundaries between the two cultures were porous, but demanded they be buttressed against science’s imperialistic agenda: “Unified field theories may turn scientists on, but they turn humanists off: it has taken a very long time to establish the epistemological humility, the pluralistic largeness of mind, that those borders represent, and no revolution in any science has the power to repeal it.” (To be fair, the humanities have had their share of unified theories, including Marxism, Freudianism and structuralism. The two cultures are true to human nature in craving essences and totalities; even some postmodernists have been heard to proclaim that there are absolutely no absolutes.)

If such well-intentioned partisans can’t negotiate a cease-fire, perhaps each side needs to conduct an internal audit about what it has in common with its opponent prior to future armistice talks. Philosophers and historians of science have laid the groundwork, but they tend to be humanists and thus easier for hard scientists to dismiss. Steven Weinberg, a Nobel laureate in physics, patronized the philosophy of science as providing a “pleasing gloss” on scientific achievements, but little more: “We should not expect it to provide today’s scientists with any useful guidance about how to go about their work or about what they are likely to find.”

This situation is what makes Gleiser’s intervention in the debates so timely and interesting. He started his career in theoretical physics believing in the holy grail of his field, a final theory unifying quantum mechanics with general relativity. In his autobiographical A Tear at the Edge of Creation (2010), he confessed that he had been attracted to science initially by his own psychological need for order in an apparently meaningless universe. The death of his mother when he was 6 led him to search for sources of transcendence, from religion to fantasy fiction. He finally became a convert to the secular “magic” of physics as a teenager: “Science was a rational connection to a reality beyond our senses. There was a bridge to the mysterious, and it did not have to cross over supernatural lands. This was the greatest realization of my life.”

Gleiser has never lost his sense of wonder about existence or about the importance of science in conveying it. But his own experiences as a professional have led him to abandon the dream of attaining any final theory—in fact, he views the goal itself as a form of “intellectual vanity” and “monotheistic science.” Part of his disillusion has to do with the failure to find possible tests or empirical evidence for the extravagant claims of superstring theory, rendering it closer to metaphysics than physics. Gleiser also immersed himself in the history of science and was reminded that Western science has dreamed of discovering ultimate truth since the discipline’s inception. This faith has never been substantiated at the empirical level, situating it alongside mythic and religious yearnings to attain “oneness.” “There are faith-based myths running deep in science’s canon,” he maintains. “Scientists, even the great ones, may confuse their expectations of reality with reality itself.”

None of these heartfelt observations would surprise philosophers of science; Mary Midgley’s wonderful Science as Salvation (1992)—not included in Gleiser’s bibliography—makes the same points. But Gleiser speaks as a scientist and is thus more likely to be heard by his peers—provided he doesn’t scare them off with his anti-realist stance. He can sound positively postmodern when he defines science as “a human construction, a narrative we create to make sense of the world around us.” But if he opposes the naïve realist belief that science accesses a mind-independent reality, he doesn’t make the equally naïve claim that science is merely a social construction. It does attain verifiable knowledge of reality, its evolving instruments yielding increasingly precise data: but the resultant explanations are inevitably partial and always subject to change. There are no final answers, for new knowledge yields new mysteries to be solved. Science is a limited, interpretive practice and will only be “humanized” if it adopts the epistemological humility that Wieseltier claimed was the purview of the humanities.

These conclusions, and some of the same historical examples, reappear in Gleiser’s The Island of Knowledge. In this work, he underscores the many limits, even “insurmountable barriers,” to scientific knowledge. He likens science to an island situated within a wider sea of the unknown: “As the Island of Knowledge grows, so do the shores of our ignorance.” In thirty-two brief chapters, he provides a stimulating overview of Western science’s shifting interpretations of reality from classical Greece to the present, including informative discussions of atomism, alchemy, classical physics, quantum mechanics, quantum entanglement, the Big Bang, the multiverse, superstring theory, mathematics, information theory, computers and consciousness.
* * *
Gleiser is a brilliant expositor of difficult concepts, and his raw enthusiasm is transporting. He is equally fervent about the uncertainties of science, having once been a believer in its unalloyed truth: “I find myself in the difficult role of being a romantic having to kill the dreams of other romantics.”
However, as with many disillusioned votaries of absolutist creeds, his new stance can be as fundamentalist as the one he rejects. As he argued in his previous book—and continues to argue in this one—science’s “essential limitations” include the imprecision of its instruments and the cultural contingency of its concepts. In The Island of Knowledge, he eagerly gathers other objections to any final theory as kindling for a bonfire of the vanities. He contends that nature itself posits absolute limits to what we can know empirically, such as the initial conditions that generated the Big Bang or the existence of multiple universes implied by current theories of cosmic inflation. In addition, the quantum world is impervious to deterministic explanations. And mathematics is likely not mind-independent but rather a human invention—one whose formal structures cannot be both consistent and complete.

These assertions may be valid—only time will tell, if that—but Gleiser’s temperamental absolutism sometimes subverts his pragmatic faith in an unfinished universe. He insists that “there are aspects of reality that are permanently beyond our reach,” and also that “we can never know for certain…. We should build solid arguments based on current scientific knowledge but keep our minds open for surprises.” He notes that some mysteries will always remain mysteries—“there is an essential difference between ‘we don’t know’ and ‘we can’t know’”—but also admits that “‘Never’ is a hard word to use in science.” He inadvertently becomes his own best example of how hard it is to practice epistemological humility even when one is committed to it. Attaining that outlook, rather than certainty, is the true noble dream.

It is this lesson, above all, that makes Gleiser’s intervention in the “two cultures” debate so valuable. As scientists, both he and Bronowski have established underlying unities: not in the forces of nature, but in the humanities and the sciences. Bronowski stressed their common reliance on imagination, which subtends “numbers and pictures, the lever and The Iliad, the shapes of atoms and the great plays and the Socratic dialogues.” Gleiser emphasizes science’s inherent limitations, which make it “more beautiful and powerful, not less.” Despite its commitment to establishing verifiable knowledge of reality, science remains an interpretive and contingent practice—indeed, a humanistic enterprise. In the “two cultures” debate, one hopes that Gleiser’s words are among the last, especially his claim that science aligns “with the rest of the human creative output—impressive, multifaceted, and imperfect as we are.”

Sunday, August 31, 2014

"My name is Paul Weston and I am a racist"



Hello. My name is Paul Weston and I am a racist.

I know I'm a racist because I'm told I'm a racist by a great deal of people. The hard Left think I'm a racist, the Labour Party thinks I'm a racist, Conservatives think I'm a racist, Liberal Democrats think I'm a racist, the BBC thinks I'm a racist. So I must therefore be a racist.

Why am I a racist? It's very simple: I wish to preserve the culture of my country, I wish to preserve the people of my country, and in doing so that makes me a designated racist in today's society.

Now this is something that's been moved by the Left – the goalposts have been moved by the Left a considerable distance on this. In order to be termed a racist thirty or forty years ago, you had to actively dislike foreign people. I don't dislike foreign people. What I do like, what I love, is my country, my culture and my people, and I see them under a terrible threat at the moment.

Britain is a very small country that's opened its doors to the mass immigrants of the Third World, and we are simply being overwhelmed. Our schools can't cope, our hospitals can't cope, very little can cope any more. Our welfare system is on the verge of buckling as well. So if I want to defend what I grew up in, what I was born into – my country, my British culture, my heritage and my history – I am apparently, according to absolutely everybody today, a racist.

But I don't think that's the case. Not the case that I'm not actually a racist – I'm going to admit that full out, right now, because clearly I am. I've been told by so many people I am, it simply must be true. I'm probably also an Islamophobe.

A phobia is an irrational fear of something. Now I don't have an irrational fear of Islam. I look around the world today – at Syria at the moment, where almost a hundred thousand people have been killed in the last two years, where Shia Muslims are slaughtering Sunni Muslims and vice versa – I look at places like Indonesia, and Egypt, and China and the Philippines – everywhere you look you see problems with Islam. And they're violent. They are – dare I say it, to really reinforce my racist credentials – a thoroughly savage political and religious ideology.

Now many people will disagree with that. The far Left of course will say, you cannot criticise Islam because Islam is a religion, and rules have now been put into place in this country that say if you criticise it, you are guilty of inciting religious hatred. But Islam is not just a religion, Islam is a political ideology as well and we need to call it out on the fact that it is also political. It is a culture that is both political and religious.

I would like to know if I'm able to say certain things about it. Do I think for example that stoning adulteresses to death is something we should welcome in this country? Well I don't think it is.

Therefore, I'm guilty of religious hatred by saying it. Do I think homosexuals should be hanged from cranes? No I don't, I think it's backward, I think it's savage, and I think the people that do it are beyond the pale, quite frankly.

I'm not allowed to say these things because of course I'm again inciting religious hatred. So not only am I a racist, I'm also a religionist, apparently.

But I'm not. We have a huge problem in this country, that is not going to go away, it is going to get worse and worse and worse. We as a people are declining, as a demographic, and the Islamic population is growing nine times faster than any other; and when I look to the future I see a full-blown religious civil war occurring in this country. The unthinkable things that are going on in somewhere like Syria today will happen in this country before 2040, certainly before 2050. I don't want Britain to turn into a country like that. So I'm going to denounce Islam as a backward, savage political and religious ideology, and to hell with what anybody thinks about that – because if we don't do something about it, we are going to be involved in something that most people can barely even begin to imagine in Britain.

Babies are beheaded in towns in Syria. The idea this could happen in somewhere like Surbiton, or even Eaton Square, is simply impossible to think for most people but it is going to happen, it really is going to happen. So we need to denounce it for what it is. And we need to start mounting some sort of defence against it.

But the trouble with mounting a defence against it is that you get hit with the 'racist' accusation: "I'm not a racist, but …". So here's the thing: I am a racist. If I want to avoid a civil war happening in my country, I am prepared to accept being called a racist; and you should be prepared to accept being called a racist as well. Let's all just say, "Yes, we're dreadful, dreadful racists", and let's start denouncing an ideology that is the most primitive, backward, savage ideology that we've wilfully imported into this country – by the Left, by people like Tony Blair, who did it deliberately in order to undermine our culture, our people, our country, my country. They did it deliberately – and then they said you're not allowed to actually argue with us about this.

Well I'm arguing with you about this Mr Blair. And I'll tell you something: you ... repealed the treason laws shortly after you came into power. I think you committed treason, Mr Blair. I think you committed treason when you said, we are going to import the Third World in order to "rub the noses of the right in diversity". To me, that's treason.

Your principal duty was to uphold the best interests of the people of this country. The idea that you deliberately set out to undermine us and to subvert us is an act that's criminal. It doesn't matter that you repealed the laws, those laws can be brought back. And one day Mr Blair, you will be tried for treason, along with the rest of your Cabinet and every single high-ranking Labour politician that allowed this criminal act to happen.

I'm going to tell you this. It doesn't matter that you can perhaps prosecute me for 'racism' or inciting religious hatred. I don't believe in that. I believe only in one thing: the defence of my country, the defence of my people, the defence of my culture. And everything else can just go to hell.

I am a racist.

Ionosphere


From Wikipedia, the free encyclopedia
 
The ionosphere /aɪˈɒnɵˌsfɪər/ is a region of the upper atmosphere, from about 85 km (53 mi) to 600 km (370 mi) altitude, and includes the thermosphere and parts of the mesosphere and exosphere. It is distinguished because it is ionized by solar radiation. It plays an important part in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth.[1]
Relationship of the atmosphere and ionosphere


The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about 50 km (31 mi) to more than 1,000 km (620 mi). It owes its existence primarily to ultraviolet radiation from the Sun.

The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about 10 km (6.2 mi). Above 10 km (6.2 mi) is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights above 80 km (50 mi), in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is ionized and contains a plasma which is referred to as the ionosphere. In a plasma, the negative free electrons and the positive ions are attracted to each other by the electrostatic force, but they are too energetic to stay fixed together in an electrically neutral molecule.

Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron acquires a high velocity, so that the temperature of the resulting electron gas is much higher (of the order of a thousand kelvin) than that of the ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present.
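A common way to express this balance (a standard simplified continuity equation, assuming recombination with positive ions is the only loss process) is:

```latex
% Electron continuity with production rate q and recombination coefficient \alpha:
\frac{dN_e}{dt} = q - \alpha N_e^2
% At equilibrium (dN_e/dt = 0) the electron density settles at
N_e = \sqrt{\frac{q}{\alpha}},
% so the ionization present reflects the competition between solar production
% and collisional recombination, which is faster in the denser, lower atmosphere.
```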

Ionization depends primarily on the Sun and its activity. The amount of ionization in the ionosphere varies greatly with the amount of radiation received from the Sun. Thus there is a diurnal (time of day) effect and a seasonal effect. The local winter hemisphere is tipped away from the Sun, thus there is less received solar radiation. The activity of the Sun is associated with the sunspot cycle, with more radiation occurring with more sunspots. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization. There are disturbances such as solar flares and the associated release of charged particles into the solar wind which reaches the Earth and interacts with its geomagnetic field.

The ionospheric layers

Ionospheric layers.

At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionization known as the F1 layer. The F2 layer persists by day and night and is the region mainly responsible for the refraction of radio waves.

D layer

The D layer is the innermost layer, 60 km (37 mi) to 90 km (56 mi) above the surface of the Earth. Ionization here is due to Lyman-alpha hydrogen radiation, at a wavelength of 121.5 nanometres (nm), ionizing nitric oxide (NO). In addition, during high solar activity hard X-rays (wavelength < 1 nm) may ionize N₂ and O₂. During the night cosmic rays produce a residual amount of ionization. Recombination is high in the D layer, so the net ionization effect is low, but loss of wave energy is great due to frequent collisions of the electrons (about ten collisions every millisecond). As a result, high-frequency (HF) radio waves are not reflected by the D layer but suffer loss of energy therein. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively smaller absorption as the frequency gets higher. The absorption is small at night and greatest about midday. The layer weakens greatly after sunset; a small part remains due to galactic cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime.

During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours.

E layer

The E layer is the middle layer, 90 km (56 mi) to 120 km (75 mi) above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionizing molecular oxygen (O₂). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz and may contribute a little to absorption at higher frequencies. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer rapidly disappears because the primary source of ionization is no longer present.
After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer.

This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). However, it was not until 1924 that its existence was detected by Edward V. Appleton and Miles Barnett.

Es

The Es layer (sporadic E layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, rarely up to 225 MHz. Sporadic-E events may last for just a few minutes to several hours. Sporadic E propagation is of great interest to radio amateurs, as propagation paths that are generally unreachable can open up. There are multiple causes of sporadic E that are still being investigated by researchers. This propagation occurs most frequently during the summer months, when high signal levels may be reached. The skip distances are generally around 1,640 km (1,020 mi). Distances for one-hop propagation can be as close as 900 km (560 mi) or up to 2,500 km (1,600 mi).
Double-hop reception over 3,500 km (2,200 mi) is possible.

F layer

The F layer or region, also known as the Appleton–Barnett layer, extends from about 200 km (120 mi) to more than 500 km (310 mi) above the surface of Earth. It is the densest part of the ionosphere, which implies that signals penetrating this layer will escape into space. At higher altitudes, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant; this region is called the topside ionosphere. Here, extreme ultraviolet (UV, 10–100 nm) solar radiation ionizes atomic oxygen. The F layer consists of one layer at night, but during the day a deformation often forms in the profile, labeled F₁. The F₂ layer remains present by day and night and is responsible for most skywave propagation of radio waves, facilitating high frequency (HF, or shortwave) radio communications over long distances.

From 1972 to 1975 NASA launched the AEROS and AEROS B satellites to study the F region.[2]

Ionospheric model

An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity.
Geophysically, the state of the ionospheric plasma may be described by four parameters: electron density, electron and ion temperature and, since several species of ions are present, ionic composition. Radio propagation depends uniquely on electron density.

Models are usually expressed as computer programs. The model may be based on the basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations, or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI)[3] (IRI 2007), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI).[4] The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St. Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from the bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model has been the "International Standard" for the terrestrial ionosphere (standard TS16457).

Persistent anomalies to the idealized model

Ionograms allow the true shape of the different layers to be deduced by computation. The nonhomogeneous structure of the electron/ion plasma produces rough echo traces, seen predominantly at night, at higher latitudes, and during disturbed conditions.

Winter anomaly

At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity.

Equatorial anomaly

Electric currents created in sunward ionosphere.

Within approximately ±20 degrees of the magnetic equator is the equatorial anomaly: a trough in the ionization of the F2 layer at the equator, with crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ±20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain.
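The upward motion that drives the fountain is the classic E×B drift (standard plasma physics, not spelled out in the article): at the dip equator an eastward electric field combined with the horizontal, northward magnetic field pushes the plasma vertically upward.

```latex
% E x B drift velocity of the ionospheric plasma:
\vec{v}_d = \frac{\vec{E} \times \vec{B}}{B^2}
% At the magnetic dip equator, an eastward E and a horizontal northward B give an
% upward v_d; the lifted plasma then settles back down along the field lines,
% producing the ionization crests on either side of the magnetic equator.
```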

Equatorial electrojet

The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) (100 km (62 mi) – 130 km (81 mi) altitude). Resulting from this current is an electrostatic field directed E-W (dawn-dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ± 3 degrees of the magnetic equator, known as the equatorial electrojet.

Ephemeral ionospheric perturbations

X-rays: sudden ionospheric disturbances (SID)

When the Sun is active, strong solar flares can occur that hit the sunlit side of Earth with hard X-rays. The X-rays penetrate to the D-region, releasing electrons that rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio blackout ends as the electrons in the D-region recombine rapidly and signal strengths return to normal.

Protons: polar cap absorption (PCA)

Associated with solar flares is the release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles, increasing the ionization of the D and E layers. PCAs typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours.

Geomagnetic storms

A geomagnetic storm is a temporary intense disturbance of the Earth's magnetosphere.
  • During a geomagnetic storm the F2 layer will become unstable, fragment, and may even disappear completely.
  • In the polar regions of the Northern and Southern Hemispheres, aurorae will be observable in the sky.

Lightning

Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (Very Low Frequency) radio waves launched into the magnetosphere.
These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events.

Additional ionization can also occur from direct heating/ionization as a result of the huge motions of charge in lightning strikes. These disturbances are called "Early/Fast" events.

In 1925, C. T. R. Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focussed on the mechanism by which this process can occur.

Applications

Radio communication

DX communication, popular among amateur radio enthusiasts, is a term for communication over great distances. Because ionized atmospheric gases refract high frequency (HF, or shortwave) radio waves, the ionosphere can be used to "bounce" a transmitted signal back down to the ground.
Transcontinental HF connections may rely on up to five such bounces, or hops (a rough estimate of the distance covered per hop is sketched below). Such communications played an important role during World War II. Karl Rawer's most sophisticated prediction method[1] took account of several (zig-zag) paths and of attenuation in the D-region, and predicted the 11-year solar cycle by a method due to Wolfgang Gleißberg.
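As a rough geometric illustration of why several hops are needed for transcontinental paths, the following Python sketch estimates the maximum ground distance of a single hop. It assumes a simple mirror-like reflection at a fixed height and a grazing take-off angle; the layer heights used are only typical assumed values, and real propagation varies with time of day, season and solar activity.

import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def max_single_hop_km(reflection_height_km: float) -> float:
    """Maximum ground range of one hop, treating the ionospheric layer as a
    mirror at a fixed height and the take-off angle as grazing (0 degrees)."""
    r = EARTH_RADIUS_KM
    # Angle at the Earth's centre subtended by half a hop (tangent-ray geometry).
    half_angle = math.acos(r / (r + reflection_height_km))
    return 2.0 * r * half_angle

# Illustrative reflection heights (assumed typical values).
for layer, height_km in [("E layer (~110 km)", 110.0), ("F2 layer (~300 km)", 300.0)]:
    d = max_single_hop_km(height_km)
    print(f"{layer}: single hop ≈ {d:,.0f} km, five hops ≈ {5 * d:,.0f} km")

For an assumed F2 reflection height of about 300 km this gives a maximum single hop of roughly 3,800 km, so five hops are indeed enough to span intercontinental distances.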

Mechanism of refraction

When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy.
Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough.

The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as shown below:
f_\text{critical} = 9 \sqrt{N}
where N is the electron density in electrons per m³ and f_critical is in Hz.
The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time.
f_\text{MUF} = \frac{f_\text{critical}}{\sin \alpha}
where α is the angle of attack, i.e. the angle of the wave relative to the horizon.
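As a numerical illustration of the two formulas above, the Python sketch below computes the critical frequency for an assumed daytime F2 peak electron density and the corresponding MUF for an oblique path; the density and elevation angle are example values only.

import math

def critical_frequency_hz(electron_density_per_m3: float) -> float:
    """f_critical = 9 * sqrt(N), with N in electrons per cubic metre, result in Hz."""
    return 9.0 * math.sqrt(electron_density_per_m3)

def maximum_usable_frequency_hz(f_critical_hz: float, elevation_deg: float) -> float:
    """MUF = f_critical / sin(alpha), alpha being the wave's angle above the horizon."""
    return f_critical_hz / math.sin(math.radians(elevation_deg))

N = 1.0e12                                    # assumed F2 peak density, electrons per m^3
f_c = critical_frequency_hz(N)                # ≈ 9 MHz
muf = maximum_usable_frequency_hz(f_c, 15.0)  # oblique path at 15° elevation
print(f"f_critical ≈ {f_c / 1e6:.1f} MHz, MUF at 15° ≈ {muf / 1e6:.1f} MHz")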

The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer.

Other applications

The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction.

Measurements

Overview

Scientists explore the structure of the ionosphere by a wide variety of methods: passive observations of optical and radio emissions generated in the ionosphere; bouncing radio waves of different frequencies off it; incoherent scatter radars such as the EISCAT, Sondre Stromfjord, Millstone Hill, Arecibo, and Jicamarca radars; coherent scatter radars such as the Super Dual Auroral Radar Network (SuperDARN) radars; and special receivers that detect how the reflected waves have changed from the transmitted waves.

A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska.

The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 different countries and multiple radars in both hemispheres.

Scientists also examine the ionosphere through the changes it imposes on radio waves from satellites and stars that pass through it. The Arecibo radio telescope, located in Puerto Rico, was originally intended to study the Earth's ionosphere.

Ionograms

Ionograms show the virtual heights and critical frequencies of the ionospheric layers, as measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier, Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available).
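The virtual height plotted on an ionogram follows from the round-trip delay of each echo, on the assumption that the pulse travelled at the speed of light for the whole path. A minimal Python sketch of that conversion, using an assumed echo delay purely as an example:

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def virtual_height_km(echo_delay_s: float) -> float:
    """Virtual height h' = c * t / 2: the reflection height the echo would imply
    if the pulse had travelled at the vacuum speed of light the entire way."""
    return SPEED_OF_LIGHT_M_PER_S * echo_delay_s / 2.0 / 1000.0

print(f"{virtual_height_km(2.0e-3):.0f} km")  # a 2 ms echo delay ≈ 300 km virtual height

Because the pulse actually slows down inside the ionization, the virtual height always exceeds the true height, which is why the true layer shapes must be recovered by computation.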

Incoherent scatter radars

Incoherent scatter radars operate above the critical frequencies. The technique therefore allows the ionosphere to be probed above the electron density peaks as well, unlike ionosondes. The thermal fluctuations of the electron density that scatter the transmitted signals lack coherence, which gives the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities.

Solar flux

Solar flux is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz, made using a radio telescope at the Dominion Radio Astrophysical Observatory in Penticton, British Columbia, Canada.[5] Known also as the 10.7 cm flux (the wavelength of radio signals at 2800 MHz, as the short calculation below illustrates), this solar radio emission has been shown to be proportional to sunspot activity. However, the level of the Sun's ultraviolet and X-ray emissions is primarily responsible for causing ionization in the Earth's upper atmosphere. Data are now available from the GOES spacecraft, which measures the background X-ray flux from the Sun, a parameter more closely related to ionization levels in the ionosphere.
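The quoted wavelength follows directly from λ = c / f; a minimal Python check:

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def wavelength_cm(frequency_hz: float) -> float:
    """Free-space wavelength λ = c / f, returned in centimetres."""
    return SPEED_OF_LIGHT_M_PER_S / frequency_hz * 100.0

print(f"{wavelength_cm(2.8e9):.1f} cm")  # 2800 MHz ≈ 10.7 cm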
  • The A and K indices are measurements of the behavior of the horizontal component of the geomagnetic field. The K index uses a scale from 0 to 9 to measure the change in the horizontal component of the geomagnetic field. A new K index is determined at the Boulder Geomagnetic Observatory (40°08′15″N 105°14′16″W).
  • The geomagnetic activity levels of the Earth are measured by the fluctuation of the Earth's magnetic field in SI units called teslas (or in non-SI gauss, especially in older literature). The Earth's magnetic field is measured around the planet by many observatories. The data retrieved is processed and turned into measurement indices. Daily measurements for the entire planet are made available through an estimate of the ap index, called the planetary A-index (PAI).

Ionospheres on other planets and Titan

The atmosphere of Titan includes an ionosphere that ranges from about 1,100 km (680 mi) to 1,300 km (810 mi) in altitude and contains carbon compounds.[6]

Planets with ionospheres (incomplete list): Venus, Uranus.

History

As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada), using a 152.4 m (500 ft) kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power roughly 100 times greater than that of any radio signal previously produced.
The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have had to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work.[7] Marconi did, in any case, achieve transatlantic wireless communication at Glace Bay, Nova Scotia, one year later.

In 1902, Oliver Heaviside proposed the existence of the Kennelly-Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Heaviside's proposal, coupled with Planck's law of black body radiation, may have hampered the growth of radio astronomy for the detection of electromagnetic waves from celestial bodies until 1932 (and the development of high-frequency radio transceivers). Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties.

In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923.

In 1926, Scottish physicist Robert Watson-Watt introduced the term ionosphere in a letter published only in 1969 in Nature:
We have in quite recent years seen the universal adoption of the term ‘stratosphere’ … and … the companion term ‘troposphere’ … The term ‘ionosphere’, for the region in which the main characteristic is large scale ionisation with considerable mean free paths, appears appropriate as an addition to this series.
Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere.
Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg has developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere.

In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. Following its success were Alouette 2 in 1965, the two ISIS satellites in 1969 and 1971, and AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere.

Mesosphere

Mesosphere

From Wikipedia, the free encyclopedia
Space Shuttle Endeavour appears to straddle the stratosphere and mesosphere in this photo. "The orange layer is the troposphere, where all of the weather and clouds which we typically watch and experience are generated and contained. This orange layer gives way to the whitish Stratosphere and then into the Mesosphere."[1]
Earth atmosphere diagram showing the exosphere and other layers. The layers are to scale. From Earth's surface to the top of the stratosphere (50 km or 31 mi) is just under 1.2% of Earth's radius.

The mesosphere (/ˈmɛsoʊsfɪər/; from Greek mesos "middle" and sphaira "ball") is the layer of the Earth's atmosphere directly above the stratopause and directly below the mesopause. In the mesosphere, temperature decreases with increasing height. The upper boundary of the mesosphere is the mesopause, which can be the coldest naturally occurring place on Earth, with temperatures below 130 K (−226 °F; −143 °C). The exact upper and lower boundaries of the mesosphere vary with latitude and with season, but the lower boundary is usually located at heights of about 50 kilometres (160,000 ft; 31 mi) above the Earth's surface, and the mesopause is usually at heights near 100 kilometres (62 mi), except at middle and high latitudes in summer, where it descends to heights of about 85 kilometres (53 mi).

The stratosphere, mesosphere and lowest part of the thermosphere are collectively referred to as the "middle atmosphere", which spans heights from approximately 10 kilometres (33,000 ft) to 100 kilometres (62 mi). The mesopause, at an altitude of 80–90 km (50–56 mi), separates the mesosphere from the thermosphere, the second-outermost layer of the Earth's atmosphere. This is also around the same altitude as the turbopause, below which different chemical species are well mixed by turbulent eddies. Above this level the atmosphere becomes non-uniform; the scale heights of different chemical species differ according to their molecular masses (a small illustrative calculation follows).
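The dependence of scale height on molecular mass can be made concrete with a short Python sketch that evaluates H = kT / (mg) for a few major species; the temperature and gravity values are assumptions chosen only for illustration.

BOLTZMANN_J_PER_K = 1.380649e-23
AMU_KG = 1.66053906660e-27
G_NEAR_100KM_M_PER_S2 = 9.5   # assumed gravitational acceleration near 100 km

def scale_height_km(molecular_mass_amu: float, temperature_k: float) -> float:
    """Scale height H = k*T / (m*g): the altitude step over which a species'
    partial pressure falls by a factor of e once diffusive separation sets in."""
    m = molecular_mass_amu * AMU_KG
    return BOLTZMANN_J_PER_K * temperature_k / (m * G_NEAR_100KM_M_PER_S2) / 1000.0

# Assumed temperature of ~200 K near the turbopause.
for species, mass_amu in [("O", 16.0), ("N2", 28.0), ("O2", 32.0)]:
    print(f"{species}: H ≈ {scale_height_km(mass_amu, 200.0):.1f} km")

Atomic oxygen, being lighter, ends up with roughly twice the scale height of O2, which is why the composition becomes increasingly stratified above the turbopause.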

Temperature

Within the mesosphere, temperature decreases with increasing altitude. This is due to decreasing solar heating and increasing cooling by CO2 radiative emission. The top of the mesosphere, called the mesopause, is the coldest part of Earth's atmosphere.[2] Temperatures in the upper mesosphere fall as low as −100 °C (173 K; −148 °F),[3] varying according to latitude and season.

Dynamic features

The main dynamic features in this region are strong zonal (East-West) winds, atmospheric tides, internal atmospheric gravity waves (commonly called "gravity waves") and planetary waves. Most of these tides and waves are excited in the troposphere and lower stratosphere, and propagate upward to the mesosphere. In the mesosphere, gravity-wave amplitudes can become so large that the waves become unstable and dissipate. This dissipation deposits momentum into the mesosphere and largely drives global circulation.

Noctilucent clouds are located in the mesosphere. The upper mesosphere is also the region of the ionosphere known as the D layer, which is only present during the day, when nitric oxide is ionized by hydrogen Lyman-alpha radiation. The ionization is so weak that when night falls and the source of ionization is removed, the free electrons and ions recombine into neutral molecules. The mesosphere is sometimes jokingly called the "Ignorosphere" because it is so difficult to study.[4]

A sodium layer about 5 km (3.1 mi) deep is located between 80 and 105 km (50–65 mi). Made of unbound, non-ionized sodium atoms, it radiates weakly and contributes to the airglow.

Uncertainties

The mesosphere lies above the maximum altitude for aircraft and below the minimum altitude for orbital spacecraft. It has only been accessed through the use of sounding rockets. As a result, it is the most poorly understood part of the atmosphere. The presence of red sprites and blue jets (electrical discharges or lightning within the lower mesosphere), noctilucent clouds and density shears within the poorly understood layer are of current scientific interest.

Meteors

Millions of meteors enter the atmosphere, an average of 40 tons per year.[5] (The daily mass influx into the atmosphere is in fact poorly known; estimates from different studies range from roughly 1 to 200 tons per day.[citation needed]) Within the mesosphere, most of them melt or vaporize as a result of collisions with the gas particles there. This results in a higher concentration of iron and other refractory materials reaching the surface.[citation needed]

Computer-aided software engineering

From Wikipedia, the free encyclopedia ...