Sunday, August 3, 2014

Inflation (cosmology)

From Wikipedia, the free encyclopedia:  http://en.wikipedia.org/wiki/Inflation_(cosmology)
 
Evidence of gravitational waves in the infant universe may have been uncovered by the BICEP2 radio telescope.[1][2][3][4]
 
In physical cosmology, cosmic inflation, cosmological inflation, or just inflation is the exponential expansion of space in the early universe. The inflationary epoch lasted from 10^−36 seconds after the Big Bang to sometime between 10^−33 and 10^−32 seconds. Following the inflationary period, the universe continues to expand, but at a much slower rate.
 
The inflationary hypothesis was developed in the 1980s by physicists Alan Guth and Andrei Linde.[5]
Inflation explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the universe (see galaxy formation and evolution and structure formation).[6] Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
 
While the detailed particle physics mechanism responsible for inflation is not known, the basic picture makes a number of predictions that have been confirmed by observation.[7][8] The hypothetical field thought to be responsible for inflation is called the inflaton.[9]
 
On 17 March 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation.[1][2][3][4][10][11] However, on 19 June 2014, the collaboration reported lowered confidence in the findings.[10][12][13]

Overview

An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of the Earth's surface, marks the boundary of the part of the universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon never reaches the observer, because the space in between the observer and the object is expanding too rapidly.
 
History of the Universe - gravitational waves are hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang (17 March 2014).[1][2][3]
 
The observable universe is one causal patch of a much larger unobservable universe; there are parts of the universe that cannot communicate with us yet. These parts of the universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees these regions for the first time, they look no different from any other region of space the local observer has already seen: they have a background radiation that is at nearly exactly the same temperature as the background radiation of other regions, and their space-time curvature is evolving lock-step with ours. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not in communication with our past light cone before.[14][15]
 
Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous very quickly.
 
As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero, and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are necessarily at nearly the same temperature and curvature, because they come from the same little patch of space.
 
The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter, and residual vacuum energy in the universe have to add up to the critical density, and the evidence strongly supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.[16][17]

Space expands

To say that space expands exponentially means that two inertial observers are moving farther apart with accelerating velocity. In stationary coordinates for one observer, a patch of an inflating universe has the following polar metric:[18][19]

ds^2 = - (1- \Lambda r^2) \, dt^2 + {1\over 1-\Lambda r^2} \, dr^2 + r^2 \, d\Omega^2.
This is just like an inside-out black hole metric—it has a zero in the dt component on a fixed radius sphere called the cosmological horizon. Objects are drawn away from the observer at r=0 towards the cosmological horizon, which they cross in a finite proper time. This means that any inhomogeneities are smoothed out, just as any bumps or matter on the surface of a black hole horizon are swallowed and disappear.

Since the space–time metric has no explicit time dependence, once an observer has crossed the cosmological horizon, observers closer in take its place. This process of falling outward, with points closer in steadily replacing points farther out, constitutes an exponential expansion of space–time.

This steady-state exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy proportional to \Lambda everywhere. In this case, the equation of state is p = -\rho. The physical conditions from one moment to the next are stable: the rate of expansion, called the Hubble parameter, is nearly constant, and the scale factor of the universe is proportional to e^{Ht}. Inflation is often called a period of accelerated expansion because the distance between two fixed observers is increasing exponentially (i.e. at an accelerating rate as they move apart), while \Lambda can stay approximately constant (see deceleration parameter).
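To make the e^{Ht} behavior concrete, here is a minimal Python sketch (not from the article; the value of H is an arbitrary placeholder, and only the exponential form matters) showing how quickly the proper separation of two comoving observers overtakes the fixed horizon distance c/H:

import math

# de Sitter expansion: a(t) = a0 * exp(H * t), with H constant.
# H below is a hypothetical illustrative value (units: 1/s), not a measured number.
H = 1.0e36            # assumed inflationary Hubble rate, 1/s
c = 3.0e8             # speed of light, m/s
horizon = c / H       # cosmological horizon distance, constant in de Sitter space

def scale_factor(t, a0=1.0):
    """Scale factor of exponentially expanding (de Sitter) space."""
    return a0 * math.exp(H * t)

# Two observers initially separated by one billionth of the horizon distance.
d0 = 1.0e-9 * horizon
for n in range(0, 60, 10):          # n = number of e-folds elapsed
    t = n / H
    d = d0 * scale_factor(t)        # proper separation grows with a(t)
    print(f"{n:2d} e-folds: separation / horizon = {d / horizon:.3e}")

After roughly 21 e-folds the separation in this toy setup exceeds the horizon distance, illustrating how quickly nearby observers lose contact.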

Few inhomogeneities remain

Cosmological inflation has the important effect of smoothing out inhomogeneities, anisotropies and the curvature of space. This pushes the universe into a very simple state, in which it is completely dominated by the inflaton field, the source of the cosmological constant, and the only significant inhomogeneities are the tiny quantum fluctuations in the inflaton. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem"[20] by analogy with the no hair theorem for black holes.

The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for philosophical disagreements about what is on the other side. The interpretation of the no-hair theorem is that the universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.[21]

Key requirement

A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the universe expanded by a factor of at least 10^26 during inflation.[22]
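In the e-folds language used above, a linear expansion factor of 10^26 corresponds to N = ln(10^26), roughly 60 e-folds; the snippet below is just that arithmetic:

import math

expansion_factor = 1e26                 # minimum linear expansion during inflation
N = math.log(expansion_factor)          # number of e-folds, since a_final/a_initial = e^N
print(f"N = {N:.1f} e-folds")           # ~59.9, the commonly quoted "about 60 e-folds"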

Motivations

Inflation resolves several problems in the Big Bang cosmology that were discovered in the 1970s.[26] Inflation was first proposed by Guth while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the universe would have to have started from very finely tuned, or "special", initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.

Horizon problem

The horizon problem is the problem of determining why the universe appears statistically homogeneous and isotropic in accordance with the cosmological principle.[27][28][29] For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light—thus have never come into causal contact: in the history of the universe, back to the earliest times, it has not been possible to send a light signal between the two regions. Because they have no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). The puzzle arises because the Hubble radius in a radiation- or matter-dominated universe expands much more quickly than physical lengths, so points that were once out of communication are continually coming into communication. Historically, two proposed solutions were the Phoenix universe of Georges Lemaître[30] and the related oscillatory universe of Richard Chace Tolman,[31] and the Mixmaster universe of Charles Misner.[28][32] Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the universe more chaotic, could lead to statistical homogeneity and isotropy.

Flatness problem

Another problem is the flatness problem (which is sometimes called one of the Dicke coincidences, with the other being the cosmological constant problem).[33][34] It had been known in the 1960s that the density of matter in the universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).[35]:61

Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the universe could not be much greater than the contribution of matter. But as the universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at big bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the universe is flat to the accuracy of a few percent.[36]
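A rough way to see the size of this fine-tuning is the sketch below, which uses only the standard scaling of the curvature term and assumed round redshifts for nucleosynthesis and matter-radiation equality (none of these numbers come from the text above):

# How flat the universe must already be at big bang nucleosynthesis (BBN)
# for it to still look flat today. Rough illustrative redshifts, not precise values.
z_bbn = 4.0e8        # assumed redshift of BBN
z_eq  = 3.4e3        # assumed redshift of matter-radiation equality
omega_k_today = 0.01 # curvature contribution today, of order the observed few-percent bound

# |Omega - 1| grows roughly like a^2 during radiation domination and like a during matter domination.
growth_matter    = (1 + z_eq)                       # from equality to today
growth_radiation = ((1 + z_bbn) / (1 + z_eq))**2    # from BBN to equality
total_growth = growth_matter * growth_radiation

omega_k_bbn = omega_k_today / total_growth
print(f"total growth factor ~ {total_growth:.1e}")
print(f"|Omega - 1| at BBN must be < ~{omega_k_bbn:.1e}")

The result is a curvature contribution of order 10^-16 at nucleosynthesis, consistent with the "sixteen orders of magnitude" quoted above.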

Magnetic-monopole problem

The magnetic monopole problem (sometimes called the exotic-relics problem) says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would be produced. This is a problem with Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory.[37] These theories predict a number of heavy, stable particles that have not yet been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "knot" in the magnetic field.[38][39] Monopoles are expected to be copiously produced in Grand Unified Theories at high temperature,[40][41] and they should have persisted to the present day, to such an extent that they would become the primary constituent of the universe.[42][43] Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the universe.[44] A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: monopoles would be separated from each other as the universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written, "Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!"[45]

Reheating

Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model dependent, but in the first models it was typically from 10^27 K down to 10^22 K.[23]) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.[24][25]

Spin-based electronics: New material successfully tested

Jul 30, 2014
From:http://phys.org/news/2014-07-spin-based-electronics-material-successfully.html

Spintronics is an emerging field of electronics, where devices work by manipulating the spin of electrons rather than the current generated by their motion. This field can offer significant advantages to computer technology. Controlling electron spin can be achieved with materials called 'topological insulators', which conduct electrons only across their surface but not through their interior. One such material, samarium hexaboride (SmB6), has long been theorized to be an ideal and robust topological insulator, but this has never been shown practically. Publishing in Nature Communications, scientists from the Paul Scherrer Institute, the IOP (Chinese Academy of Science) and Hugo Dil's team at EPFL, have demonstrated experimentally, for the first time, that SmB6 is indeed a topological insulator.

Electronic technologies in the future could utilize an intrinsic property of electrons called spin, which is what gives them their magnetism. Spin can take either of two possible states: "up" or "down", which can be pictured respectively as clockwise or counter-clockwise rotation of the electron around its axis.

Spin control can be achieved with materials called topological insulators, which can conduct spin-polarized electrons across their surface with 100% efficiency while the interior acts as an insulator.
However, topological insulators are still in the experimental phase. One particular insulator, samarium hexaboride (SmB6), has been of great interest. Unlike other topological insulators, SmB6's insulating properties are based on a special phenomenon called the 'Kondo effect'. The Kondo effect prevents the flow of electrons from being destroyed by irregularities in the material's structure, making SmB6 a very robust and efficient topological 'Kondo' insulator.

Scientists from the Paul Scherrer Institute (PSI), the Institute of Physics (Chinese Academy of Science) and Hugo Dil's team at EPFL have now shown experimentally that samarium hexaboride (SmB6) is the first topological Kondo insulator. In experiments carried out at the PSI, the researchers illuminated samples of SmB6 with a special type of light called 'synchrotron radiation'. The energy of this light was transferred to electrons in SmB6, causing them to be ejected from it. The properties of ejected electrons (including spin) were measured with a detector, which gave clues about how the electrons behaved while they were still on the surface of SmB6. The data showed consistent agreement with the predictions for a topological insulator.

"The only real verification that SmB6 is a topological Kondo insulator comes from directly measuring the and how it's affected in a Kondo insulator", says Hugo Dil. Although SmB6 shows insulating behavior only at very low temperatures the experiments provide a proof of principle, and more importantly, that Kondo topological insulators actually exist, offering an exciting stepping-stone into a new era of technology.


More information: Nature Communications, 30 Jul 2014 DOI: 10.1038/ncomms5566
Journal reference: Nature Communications


Chemists demonstrate 'bricks-and-mortar' assembly of new molecular structures

Jul 31, 2014
From:  http://phys.org/news/2014-07-chemists-bricks-and-mortar-molecular.html
This artwork will appear on the cover of Chemical Communications. It depicts the cyanostar molecules moving in solution, ordering on the surface, and stacking by anion binding. Imaging of the surface structure is performed by scanning tunneling microscopy.
Chemists at Indiana University Bloomington have described the self-assembly of large, symmetrical molecules in bricks-and-mortar fashion, a development with potential value for the field of organic electronic devices such as field-effect transistors and photovoltaic cells.

Their paper, "Anion-Induced Dimerization of 5-fold Symmetric Cyanostars in 3D Crystalline Solids and 2D Self-Assembled Crystals," has been published online by Chemical Communications, a journal of the Royal Society of Chemistry. It is the first collaboration by Amar Flood, the James F. Jackson Associate Professor of Chemistry, and Steven L. Tait, assistant professor of chemistry. Both are in the materials chemistry program in the IU Bloomington Department of Chemistry, part of the College of Arts and Sciences.

The article will appear as the cover article of an upcoming issue of the journal. The cover illustration was created by Albert William, a lecturer in the media arts and science program of the School of Informatics and Computing at Indiana University-Purdue University Indianapolis. William specializes in using advanced graphics and animation to convey complex scientific concepts.

Lead author of the paper is Brandon Hirsch, who earned the cover by winning a poster contest at the fall 2013 meeting of the International Symposium on Macrocyclic and Supramolecular Chemistry. Co-authors, along with Flood and Tait, include doctoral students Semin Lee, Bo Qiao and Kevin P. McDonald and research scientist Chun-Hsing Chen.

The researchers demonstrate the self-assembly and packing of a five-sided, symmetrical molecule, called cyanostar, that was developed by Flood's IU research team. While researchers have created many such large, cyclic molecules, or macrocycles, cyanostar is unusual in that it can be readily synthesized in a "one pot" process. It also has an unprecedented ability to bind with large, negatively charged anions such as perchlorate.

"This great piece of work, with state-of-the-art studies of the assembly of some beautiful compounds pioneered by the group in Indiana, shows how anions can help organize molecules that could have very interesting properties," said David Amabilino, nanomaterials group leader at the Institute of Materials Science of Barcelona. "Symmetry is all important when molecules pack together, and here the supramolecular aspects of these compounds with a very particular shape present tantalizing possibilities. This research is conceptually extremely novel and really interdisciplinary: It has really unveiled how anions could help pull molecules together to behave in completely new ways."
The paper describes how cyanostar molecules bind with anions in 2-to-1 sandwich-like complexes, with anions sandwiched between two saucer-shaped cyanostars. The study shows the packing of the molecules in repeating patterns reminiscent of the two-dimensional packing of pentagons shown by artist Albrecht Dürer in 1525. It further shows the packing to take place not only at the surface of materials but also away from it.

The future of organic electronics will rely upon packing molecules onto electrode surfaces, yet it has been challenging to get packing of the molecules away from the surface, Tait and Flood said. With this paper, they present a collaborative effort, combining their backgrounds in traditionally distinct fields, as a new foray to achieve this goal using a bricks-and-mortar approach.

The paper relies on two complementary technologies that provide high-resolution images of molecules:
  • X-ray crystallography, which is being celebrated worldwide for its invention 100 years ago, can provide images of molecules from analysis of the three-dimensional crystalline solids.
  • Scanning tunneling microscopy, or STM, developed in 1981, shows two-dimensional packing of molecules immobilized on a surface.
The results are distinct, with submolecular views of the star-shaped molecules that are a few nanometers in diameter. (A human hair is about 100,000 nanometers thick).




Nanostructured metal-oxide catalyst efficiently converts CO2 to methanol


Jul 31, 2014

Scanning tunneling microscope image of a cerium-oxide and copper catalyst (CeOx-Cu) used in the transformation of carbon dioxide (CO2) and hydrogen (H2) gases to methanol (CH3OH) and water (H2O).
Scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have discovered a new catalytic system for converting carbon dioxide (CO2) to methanol, a key commodity used to create a wide range of industrial chemicals and fuels. With significantly higher activity than other catalysts now in use, the new system could make it easier to get normally unreactive CO2 to participate in these reactions.

"Developing an effective for synthesizing methanol from CO2 could greatly expand the use of this abundant gas as an economical feedstock," said Brookhaven chemist Jose Rodriguez, who led the research. It's even possible to imagine a future in which such catalysts help mitigate the accumulation of this greenhouse gas, by capturing CO2 emitted from methanol-powered combustion engines and fuel cells, and recycling it to synthesize new fuel.

That future, of course, will be determined by a variety of factors, including economics. "Our basic research studies are focused on the science: the discovery of how such catalysts work, and the use of this knowledge to improve their activity and selectivity," Rodriguez emphasized.
The research team, which included scientists from Brookhaven, the University of Seville in Spain, and Central University of Venezuela, describes their results in the August 1, 2014, issue of the journal Science.
New tools for discovery

Because CO2 is normally such a reluctant participant in chemical reactions, interacting weakly with most catalysts, it's also rather difficult to study. These studies required the use of newly developed in-situ (or on-site, meaning under reaction conditions) imaging and chemical "fingerprinting" techniques. These techniques allowed the scientists to peer into the dynamic evolution of a variety of catalysts as they operated in real time. The scientists also used computational modeling at the University of Seville and the Barcelona Supercomputing Center to provide a molecular description of the methanol synthesis mechanism.

The team was particularly interested in exploring a catalyst composed of copper and ceria (cerium-oxide) nanoparticles, sometimes also mixed with titania. The scientists' previous studies with such metal-oxide nanoparticle catalysts have demonstrated their exceptional reactivity in a variety of reactions. In those studies, the interfaces of the two types of nanoparticles turned out to be critical to the reactivity of the catalysts, with highly reactive sites forming at regions where the two phases meet.

To explore the reactivity of such dual particle catalytic systems in converting CO2 to methanol, the scientists used spectroscopic techniques to investigate the interaction of CO2 with plain copper, plain cerium-oxide, and cerium-oxide/copper surfaces at a range of reaction temperatures and pressures. Chemical fingerprinting was combined with computational modeling to reveal the most probable progression of intermediates as the reaction from CO2 to methanol proceeded.

These studies revealed that the metal component of the catalysts alone could not carry out all the chemical steps necessary for the production of methanol. The most effective binding and activation of CO2 occurred at the interfaces between metal and oxide nanoparticles in the cerium-oxide/copper catalytic system.

"The key active sites for the chemical transformations involved atoms from the metal [copper] and oxide [ceria or ceria/titania] phases," said Jesus Graciani, a chemist from the University of Seville and first author on the paper. The resulting catalyst converts CO2 to methanol more than a thousand times faster than plain copper particles, and almost 90 times faster than a common copper/zinc-oxide catalyst currently in industrial use.

This study illustrates the substantial benefits that can be obtained by properly tuning the properties of a metal-oxide interface in catalysts for methanol synthesis.

"It is a very interesting step, and appears to create a new strategy for the design of highly active catalysts for the synthesis of alcohols and related molecules," said Brookhaven Lab Chemistry Department Chair Alex Harris.

More information: www.sciencemag.org/lookup/doi/… 1126/science.1253057

Journal reference: Science

Read more at: http://phys.org/news/2014-07-nanostructured-metal-oxide-catalyst-efficiently-co2.html#jCp

Scientists develop pioneering new spray-on solar cells

Aug 01, 2014 by Hannah Postles
Link:  http://phys.org/news/2014-08-scientists-spray-on-solar-cells.html
An artist's impression of spray-coating glass with the polymer to create a solar cell
(Phys.org) —A team of scientists at the University of Sheffield are the first to fabricate perovskite solar cells using a spray-painting process – a discovery that could help cut the cost of solar electricity.


Experts from the University's Department of Physics and Astronomy and Department of Chemical and Biological Engineering have previously used the spray-painting method to produce solar cells using organic semiconductors - but using perovskite is a major step forward.
Efficient organometal halide perovskite based photovoltaics were first demonstrated in 2012. They are now a very promising new material for solar cells as they combine high efficiency with low materials costs.
The spray-painting process wastes very little of the perovskite material and can be scaled to high volume manufacturing – similar to applying paint to cars and graphic printing.
Lead researcher Professor David Lidzey said: "There is a lot of excitement around perovskite based photovoltaics.
"Remarkably, this class of material offers the potential to combine the high performance of mature solar cell technologies with the low embedded energy costs of production of organic photovoltaics."
While most solar cells are manufactured using energy intensive materials like silicon, perovskites, by comparison, require much less energy to make. By spray-painting the perovskite layer in air the team hope the overall energy used to make a solar cell can be reduced further.
 
Professor Lidzey said: "The best certified efficiencies from are around 10 per cent.
"Perovskite cells now have efficiencies of up to 19 per cent. This is not so far behind that of silicon at 25 per cent - the material that dominates the world-wide solar market."
He added: "The perovskite devices we have created still use similar structures to organic cells. What we have done is replace the key light absorbing layer - the organic layer - with a spray-painted perovskite.
"Using a perovskite absorber instead of an organic absorber gives a significant boost in terms of efficiency."
The Sheffield team found that by spray-painting the perovskite they could make prototype solar cells with efficiencies of up to 11 per cent.

Professor Lidzey said: "This study advances existing work where the perovskite layer has been deposited from solution using laboratory scale techniques. It's a significant step towards efficient, low-cost solar cell devices made using high volume roll-to-roll processing methods."
Solar power is becoming an increasingly important component of the worldwide renewable energy market and continues to grow at a remarkable rate despite the difficult economic environment.
Professor Lidzey said: "I believe that new thin-film photovoltaic technologies are going to have an important role to play in driving the uptake of solar-, and that perovskite based cells are emerging as likely thin-film candidates. "


Big data confirms climate extremes are here to stay

Jul 30, 2014
Original Link:  http://phys.org/news/2014-07-big-climate-extremes.html

In a paper published online today in the journal Scientific Reports, published by Nature, Northeastern researchers Evan Kodra and Auroop Ganguly found that while global temperature is indeed increasing, so too is the variability in temperature extremes. For instance, while each year's average hottest and coldest temperatures will likely rise, those averages will also tend to fall within a wider range of potential high and low temperature extremes than are currently being observed. This means that even as overall temperatures rise, we may still continue to experience extreme cold snaps, said Kodra.

"Just because you have a year that's colder than the usual over the last decade isn't a rejection of the hypothesis," Kodra explained.

With funding from a $10-million multi-university Expeditions in Computing grant, the duo used computational tools from big data science for the first time in order to extract nuanced insights about climate extremes.

The research also opens new areas of interest for future work, both in climate and data science. It suggests that the natural processes that drive weather anomalies today could continue to do so in a warming future. For instance, the team speculates that ice melt in hotter years may cause colder subsequent winters, but these hypotheses can only be confirmed in physics-based studies.

The study used simulations from the most recent climate models developed by groups around the world for the Intergovernmental Panel on Climate Change and "reanalysis data sets," which are generated by blending the best available weather observations with numerical weather models. The team combined a suite of methods in a relatively new way to characterize extremes and explain how their variability is influenced by things like the seasons, geographical region, and the land-sea interface. The analysis of multiple climate model runs and reanalysis data sets was necessary to account for uncertainties in the physics and model imperfections.

The new results provide important scientific as well as societal implications, Ganguly noted. For one thing, knowing that models project a wider range of extreme temperature behavior will allow sectors like agriculture, public health, and insurance planning to better prepare for the future. For example, Kodra said, "an agriculture insurance company wants to know next year what is the coldest snap we could see and hedge against that. So, if the range gets wider they have a broader array of policies to consider."



Nuclear Fission and Fusion

Condensed From Wikipedia, the free encyclopedia
________________________________________

Nuclear fission

 

An induced fission reaction. A neutron is absorbed by a uranium-235 nucleus, turning it briefly into an excited uranium-236 nucleus, with the excitation energy provided by the kinetic energy of the neutron plus the forces that bind the neutron. The uranium-236, in turn, splits into fast-moving lighter elements (fission products) and releases three free neutrons. At the same time, one or more "prompt gamma rays" (not shown) are produced, as well.
 
In nuclear physics and nuclear chemistry, nuclear fission is either a nuclear reaction or a radioactive decay process in which the nucleus of an atom splits into smaller parts (lighter nuclei). The fission process often produces free neutrons and photons (in the form of gamma rays), and releases a very large amount of energy even by the energetic standards of radioactive decay.
 
Nuclear fission of heavy elements was discovered on December 17, 1938 by Otto Hahn and his assistant Fritz Strassmann, and explained theoretically in January 1939 by Lise Meitner and her nephew Otto Robert Frisch. Frisch named the process by analogy with biological fission of living cells. It is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place).
In order for fission to produce energy, the total binding energy of the resulting elements must be greater than that of the starting element.
 
Fission is a form of nuclear transmutation because the resulting fragments are not the same element as the original atom. The two nuclei produced are most often of comparable but slightly different sizes, typically with a mass ratio of products of about 3 to 2, for common fissile isotopes.[1][2] Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission. The smallest of these fragments in ternary processes ranges in size from a proton to an argon nucleus.
 
Fission as encountered in the modern world is usually a deliberately produced man-made nuclear reaction induced by a neutron. It is less commonly encountered as a natural form of spontaneous radioactive decay (not requiring a neutron), occurring especially in very high-mass-number isotopes.
The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner) distinguishes fission from purely quantum-tunnelling processes such as proton emission, alpha decay and cluster decay, which give the same products each time. Nuclear fission produces energy for nuclear power and drives the explosion of nuclear weapons. Both uses are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons, and in turn emit neutrons when they break apart. This makes possible a self-sustaining nuclear chain reaction that releases energy at a controlled rate in a nuclear reactor or at a very rapid uncontrolled rate in a nuclear weapon.
 
The amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a similar mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. Concerns over nuclear waste accumulation and over the destructive potential of nuclear weapons may counterbalance the desirable qualities of fission as an energy source, and give rise to ongoing political debate over nuclear power.
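As a back-of-the-envelope check on the "millions of times" comparison, the sketch below uses two standard round numbers that are not taken from the text: roughly 200 MeV released per U-235 fission and about 46 MJ/kg for gasoline.

# Energy density of nuclear fuel vs. a chemical fuel, order-of-magnitude estimate.
AVOGADRO   = 6.022e23        # atoms per mole
MEV_TO_J   = 1.602e-13       # joules per MeV
E_FISSION  = 200.0           # ~MeV released per U-235 fission (typical round value)
M_U235     = 0.235           # kg per mole of U-235
E_GASOLINE = 46e6            # ~J per kg of gasoline (typical heating value)

atoms_per_kg = AVOGADRO / M_U235
e_per_kg = atoms_per_kg * E_FISSION * MEV_TO_J         # ~8e13 J/kg
print(f"U-235 fission : {e_per_kg:.2e} J/kg")
print(f"gasoline      : {E_GASOLINE:.2e} J/kg")
print(f"ratio         : {e_per_kg / E_GASOLINE:.1e}")  # ~2e6, i.e. 'millions of times'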

Mechanism


A visual representation of an induced nuclear fission event where a slow-moving neutron is absorbed by the nucleus of a uranium-235 atom, which fissions into two fast-moving lighter elements (fission products) and additional neutrons. Most of the energy released is in the form of the kinetic energy of the fission products and the neutrons.
 

Fission product yields by mass for thermal neutron fission of U-235, Pu-239, a combination of the two typical of current nuclear power reactors, and U-233 used in the thorium cycle.
 
 
Nuclear fission can occur without neutron bombardment, as a type of radioactive decay. This type of fission (called spontaneous fission) is rare except in a few heavy isotopes. In engineered nuclear devices, essentially all nuclear fission occurs as a "nuclear reaction" — a bombardment-driven process that results from the collision of two subatomic particles. In nuclear reactions, a subatomic particle collides with an atomic nucleus and causes changes to it. Nuclear reactions are thus driven by the mechanics of bombardment, not by the relatively constant exponential decay and half-life characteristic of spontaneous radioactive processes.
 
Many types of nuclear reactions are currently known. Nuclear fission differs importantly from other types of nuclear reactions, in that it can be amplified and sometimes controlled via a nuclear chain reaction (one type of general chain reaction). In such a reaction, free neutrons released by each fission event can trigger yet more events, which in turn release more neutrons and cause more fissions.
 
The chemical element isotopes that can sustain a fission chain reaction are called nuclear fuels, and are said to be fissile. The most common nuclear fuels are 235U (the isotope of uranium with an atomic mass of 235 and of use in nuclear reactors) and 239Pu (the isotope of plutonium with an atomic mass of 239). These fuels break apart into a bimodal range of chemical elements with atomic masses centering near 95 and 135 u (fission products). Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha/beta decay chain over periods of millennia to eons.
In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events.
 
Nuclear fissions in fissile fuels are the result of the nuclear excitation energy produced when a fissile nucleus captures a neutron. This energy, resulting from the neutron capture, is a result of the attractive nuclear force acting between the neutron and nucleus. It is enough to deform the nucleus into a double-lobed "drop," to the point that nuclear fragments exceed the distances at which the nuclear force can hold two groups of charged nucleons together, and when this happens, the two fragments complete their separation and then are driven further apart by their mutually repulsive charges, in a process which becomes irreversible with greater and greater distance. A similar process occurs in fissionable isotopes (such as uranium-238), but in order to fission, these isotopes require additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons).
 
The liquid drop model of the atomic nucleus predicts equal-sized fission products as an outcome of nuclear deformation. The more sophisticated nuclear shell model is needed to mechanistically explain the route to the more energetically favorable outcome, in which one fission product is slightly smaller than the other. A theory of fission based on the shell model was formulated by Maria Goeppert Mayer.
 
The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and 135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z=1), to as large a fragment as argon (Z=18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~ 16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). The ternary process is less common, but still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.[3]
__________________________________________

Nuclear fusion


The Sun is a main-sequence star, and thus generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen each second.

In nuclear physics, nuclear fusion is a nuclear reaction in which two or more atomic nuclei collide at a very high speed and join to form a new type of atomic nucleus. During this process, matter is not conserved because some of the matter of the fusing nuclei is converted to photons (energy). Fusion is the process that powers active or "main sequence" stars.

The fusion of two nuclei with lower masses than iron (which, along with nickel, has the largest binding energy per nucleon) generally releases energy, while the fusion of nuclei heavier than iron absorbs energy. The opposite is true for the reverse process, nuclear fission. This means that fusion generally occurs for lighter elements only, and likewise, that fission normally occurs only for heavier elements. There are extreme astrophysical events that can lead to short periods of fusion with heavier nuclei. This is the process that gives rise to nucleosynthesis, the creation of the heavy elements during events such as supernovae. Following the discovery of quantum tunneling by Friedrich Hund, in 1929 Robert Atkinson and Fritz Houtermans used the measured masses of light elements to predict that large amounts of energy could be released by fusing small nuclei. Building upon the nuclear transmutation experiments by Ernest Rutherford, carried out several years earlier, the laboratory fusion of hydrogen isotopes was first accomplished by Mark Oliphant in 1932. During the remainder of that decade the steps of the main cycle of nuclear fusion in stars were worked out by Hans Bethe.
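To connect the 620 million tonnes per second quoted in the caption above with the Sun's power output, here is a quick estimate; the ~0.7% mass-defect figure for hydrogen-to-helium fusion is a standard round value and is not stated in the text.

# Mass-energy released by the Sun's hydrogen fusion, order-of-magnitude estimate.
C = 3.0e8                      # speed of light, m/s
H_BURNED = 6.2e11              # kg of hydrogen fused per second (620 million metric tons)
MASS_DEFECT = 0.007            # ~0.7% of the fused mass is converted to energy (4H -> He)

dm = H_BURNED * MASS_DEFECT    # mass converted to energy each second, kg
power = dm * C**2              # E = m c^2, in watts
print(f"mass converted: {dm:.2e} kg/s")
print(f"power output  : {power:.2e} W")   # ~4e26 W, close to the measured solar luminosity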

Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. Fusion was accomplished in 1951 with the Greenhouse Item nuclear test. Nuclear fusion on a large scale in an explosion was first carried out on November 1, 1952, in the Ivy Mike hydrogen bomb test.

Research into developing controlled thermonuclear fusion for civil purposes also began in earnest in the 1950s, and it continues to this day. Two projects, the National Ignition Facility and ITER, have the goal of high gains, that is, producing more energy than required to ignite the reaction, after 60 years of design improvements developed from previous experiments. While these ICF and Tokamak designs became popular in recent times, experiments with Stellarators are gaining international scientific attention again, like Wendelstein 7-X in Greifswald, Germany.
 
Requirements

The reaction cross section σ is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross section and velocity. This average is called the 'reactivity', denoted <σv>. The reaction rate (fusions per volume per time) is <σv> times the product of the reactant number densities:
f = n_1 n_2 \langle \sigma v \rangle.
If a species of nuclei is reacting with itself, such as the DD reaction, then the product n_1n_2 must be replaced by (1/2)n^2.
\langle \sigma v \rangle increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of \langle \sigma v \rangle as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current high state of technical prowess.[10]
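Below is a minimal sketch of how the rate formula above is used in practice; the plasma density and the D-T reactivity near 10 keV are typical textbook-order figures chosen only for illustration, not numbers from the text.

# Volumetric fusion rate f = n1 * n2 * <sigma v>, as in the formula above.
# Illustrative numbers for a 50/50 deuterium-tritium plasma; the reactivity is a rough
# figure for T ~ 10 keV, used here only to show the bookkeeping.
SIGMA_V = 1.1e-22                # <sigma v> for D-T near 10 keV, m^3/s (approximate)
E_DT    = 17.6e6 * 1.602e-19     # energy per D-T fusion, joules (17.6 MeV)

n_d = 0.5e20                     # deuterium number density, m^-3
n_t = 0.5e20                     # tritium number density,  m^-3

f = n_d * n_t * SIGMA_V          # fusions per m^3 per second (distinct species: no 1/2 factor)
power_density = f * E_DT
print(f"fusion rate   : {f:.2e} m^-3 s^-1")
print(f"power density : {power_density:.2e} W/m^3")

# For a single species reacting with itself (e.g. the DD reaction), n1*n2 becomes (1/2)*n**2.
# The D-D reactivity is actually much lower; the same value is reused only to show the 1/2 factor.
n = 1.0e20
f_dd = 0.5 * n**2 * SIGMA_V
print(f"self-reaction rate (bookkeeping illustration only): {f_dd:.2e} m^-3 s^-1")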

Methods for achieving fusion

Thermonuclear fusion

Main article: Thermonuclear fusion
If the matter is sufficiently heated (hence being a plasma), the fusion reaction may occur due to collisions with extreme thermal kinetic energies of the particles. In the form of thermonuclear weapons, thermonuclear fusion is the only fusion technique so far to yield undeniably large amounts of useful fusion energy. Usable amounts of thermonuclear fusion energy released in a controlled manner have yet to be achieved.

Inertial confinement fusion

Inertial confinement fusion (ICF) is a type of fusion energy research that attempts to initiate nuclear fusion reactions by heating and compressing a fuel target, typically in the form of a pellet that most often contains a mixture of deuterium and tritium.

Beam-beam or beam-target fusion

If the energy to initiate the reaction comes from accelerating one of the nuclei, the process is called beam-target fusion; if both nuclei are accelerated, it is beam-beam fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—all it takes is a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between electrodes. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross sections. Therefore the vast majority of ions end up expending their energy on bremsstrahlung and ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of these nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves.

Muon-catalyzed fusion

Muon-catalyzed fusion is a well-established and reproducible fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction cannot occur because of the high energy required to create muons, their short 2.2 µs lifetime, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.[11]

Other principles


The Tokamak à configuration variable, research fusion reactor, at the École Polytechnique Fédérale de Lausanne (Switzerland).

Some other confinement principles have been investigated; some of them have been confirmed to run nuclear fusion but with little expectation of eventually being able to produce net power, while others have not yet been shown to produce fusion.

Sonofusion or bubble fusion, a controversial variation on the sonoluminescence theme, suggests that acoustic shock waves, creating temporary bubbles (cavitation) that expand and collapse shortly after creation, can produce temperatures and pressures sufficient for nuclear fusion.[12]

The Farnsworth–Hirsch fusor is a tabletop device in which fusion occurs. This fusion comes from high effective temperatures produced by electrostatic acceleration of ions.

The Polywell is a non-thermodynamic equilibrium machine that uses electrostatic confinement to accelerate ions into a center where they fuse together.

Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion, and pure fusion bombs feasible. This is not near becoming a practical power source, due to the cost of manufacturing antimatter alone.

Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels,[13] the D-D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces.[14][15][16][17]

Hybrid nuclear fusion-fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion.[18] Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However it would also require a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable.

Right-to-work law

From Wikipedia, the free encyclopedia ...