
Sunday, December 23, 2018

Overconfidence effect

From Wikipedia, the free encyclopedia

The overconfidence effect is a well-established bias in which a person's subjective confidence in his or her judgements is reliably greater than the objective accuracy of those judgements, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
 
The most common way in which overconfidence has been studied is by asking people how confident they are of specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy, implying people are more sure that they are correct than they deserve to be. If human confidence had perfect calibration, judgments with 100% confidence would be correct 100% of the time, 90% confidence correct 90% of the time, and so on for the other levels of confidence. By contrast, the key finding is that confidence exceeds accuracy so long as the subject is answering hard questions about an unfamiliar topic. For example, in a spelling task, subjects were correct about 80% of the time, whereas they claimed to be 100% certain. Put another way, the error rate was 20% when subjects expected it to be 0%. In a series where subjects made true-or-false responses to general knowledge statements, they were overconfident at all levels. When they were 100% certain of their answer to a question, they were wrong 20% of the time.
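The calibration idea above lends itself to a quick computation: group judgments by stated confidence and compare each group's stated confidence with its observed accuracy. A minimal sketch in Python, using invented data that mirrors the typical finding:

```python
from collections import defaultdict

def calibration_table(judgments):
    """Map each stated confidence level to its observed accuracy,
    given an iterable of (confidence, correct) pairs."""
    buckets = defaultdict(list)
    for confidence, correct in judgments:
        buckets[confidence].append(correct)
    return {conf: sum(results) / len(results)
            for conf, results in sorted(buckets.items())}

# Invented data echoing the finding above: at 100% stated confidence,
# subjects are right only about 80% of the time.
data = ([(1.0, True)] * 8 + [(1.0, False)] * 2 +
        [(0.9, True)] * 7 + [(0.9, False)] * 3)
print(calibration_table(data))  # {0.9: 0.7, 1.0: 0.8}
```

Perfect calibration would make each observed accuracy equal its stated confidence; accuracies below the stated level, as here, indicate overconfidence.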

Overconfidence distinctions

Overestimation

One manifestation of the overconfidence effect is the tendency to overestimate one's standing on a dimension of judgment or performance. This subsection of overconfidence focuses on the certainty people feel in their own ability, performance, level of control, or chance of success. The phenomenon is most likely to occur on hard tasks and hard items, when failure is likely, or when the individual making the estimate is not especially skilled. Overestimation also occurs in domains beyond one's own performance, including the illusion of control and the planning fallacy.

Illusion of control

Illusion of control describes the tendency for people to behave as if they might have some control when in fact they have none. However, evidence does not support the notion that people systematically overestimate how much control they have; when they have a great deal of control, people actually tend to underestimate it.

Planning fallacy

The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how long it will take them to get things done. It is strongest for long and complicated tasks, and disappears or reverses for simple tasks that are quick to complete.

Contrary evidence

Wishful-thinking effects, in which people overestimate the likelihood of an event because of its desirability, are relatively rare. This may be in part because people engage in more defensive pessimism in advance of important outcomes, in an attempt to reduce the disappointment that follows overly optimistic predictions.

Overprecision

Overprecision is the excessive confidence that one knows the truth. For reviews, see Harvey (1997) or Hoffrage (2004). Much of the evidence for overprecision comes from studies in which participants are asked about their confidence that individual items are correct. This paradigm, while useful, cannot distinguish overestimation from overprecision; they are one and the same in these item-confidence judgments. After making a series of item-confidence judgments, people who try to estimate the number of items they got right do not tend to systematically overestimate their scores. Yet the average of their item-confidence judgments exceeds the count of items they claim to have gotten right. One possible explanation is that the item-confidence judgments were inflated by overprecision, even though the aggregate score estimates show no systematic overestimation.

Confidence intervals

The strongest evidence of overprecision comes from studies in which participants are asked to indicate how precise their knowledge is by specifying a 90% confidence interval around estimates of specific quantities. If people were perfectly calibrated, their 90% confidence intervals would include the correct answer 90% of the time. In fact, hit rates are often as low as 50%, suggesting people have drawn their confidence intervals too narrowly, implying that they think their knowledge is more accurate than it actually is.
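The hit-rate check described above is straightforward to express in code. A small sketch, with the intervals and true values invented purely for illustration:

```python
def hit_rate(intervals, truths):
    """Fraction of (low, high) intervals containing the true value.
    Well-calibrated 90% intervals should score about 0.9; empirical
    hit rates are often closer to 0.5."""
    hits = sum(low <= truth <= high
               for (low, high), truth in zip(intervals, truths))
    return hits / len(truths)

# Invented 90% intervals for four quantities; only two contain the truth.
intervals = [(100, 200), (5, 10), (1900, 1950), (30, 40)]
truths = [150, 12, 1969, 35]
print(hit_rate(intervals, truths))  # 0.5
```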

Overplacement

Overplacement is perhaps the most prominent manifestation of the overconfidence effect. It is a judgment of one's performance compared to that of another. This subsection of overconfidence occurs when people believe themselves to be better than others, or "better-than-average", rating themselves above their peers. Overplacement occurs more often on simple tasks, ones we believe are easy to accomplish successfully. One proposed explanation for the effect is the motive for self-enhancement.

Manifestations

Better-than-average effects
Perhaps the most celebrated better-than-average finding is Svenson's (1981) finding that 93% of American drivers rate themselves as better than the median. The frequency with which school systems claim their students outperform national averages has been dubbed the "Lake Wobegon" effect, after Garrison Keillor's apocryphal town in which "all the children are above average." Overplacement has likewise been documented in a wide variety of other circumstances. Kruger (1999), however, showed that this effect is limited to "easy" tasks in which success is common or in which people feel competent. For difficult tasks, the effect reverses itself and people believe they are worse than others.
Comparative-optimism effects
Some researchers have claimed that people think good things are more likely to happen to them than to others, whereas bad events are less likely to happen to them than to others. But others have pointed out that prior work tended to examine good outcomes that happened to be common (such as owning one's own home) and bad outcomes that happened to be rare (such as being struck by lightning). Event frequency accounts for a proportion of prior findings of comparative optimism. People think common events (such as living past 70) are more likely to happen to them than to others, and rare events (such as living past 100) are less likely to happen to them than to others.
Positive illusions
Taylor and Brown (1988) have argued that people cling to overly positive beliefs about themselves, illusions of control, and beliefs in false superiority, because it helps them cope and thrive. Although there is some evidence that optimistic beliefs are correlated with better life outcomes, most of the research documenting such links is vulnerable to the alternative explanation that the optimists' forecasts were simply accurate.

Practical implications

"Overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion."
— Daniel Kahneman
Overconfidence has been called the most "pervasive and potentially catastrophic" of all the cognitive biases to which human beings fall victim. It has been blamed for lawsuits, strikes, wars, and stock market bubbles and crashes. 

Strikes, lawsuits, and wars could arise from overplacement. If plaintiffs and defendants were prone to believe that they were more deserving, fair, and righteous than their legal opponents, that could help account for the persistence of inefficient enduring legal disputes. If corporations and unions were prone to believe that they were stronger and more justified than the other side, that could contribute to their willingness to endure labor strikes. If nations were prone to believe that their militaries were stronger than were those of other nations, that could explain their willingness to go to war.

Overprecision could have important implications for investing behavior and stock market trading. Because Bayesians cannot agree to disagree, classical finance theory has trouble explaining why, if stock market traders are fully rational Bayesians, there is so much trading in the stock market. Overprecision might be one answer. If market actors are too sure that their estimates of an asset's value are correct, they will be too willing to trade with others who have different information.

Oskamp (1965) tested groups of clinical psychologists and psychology students on a multiple-choice task in which they drew conclusions from a case study. Along with their answers, subjects gave a confidence rating in the form of a percentage likelihood of being correct, allowing confidence to be compared against accuracy. As the subjects were given more information about the case study, their confidence increased from 33% to 53%. However, their accuracy did not significantly improve, staying under 30%. The experiment thus demonstrated overconfidence that grew as subjects were given more information on which to base their judgment.
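Oskamp's pattern can be summarized as an overconfidence score: the gap between stated confidence and observed accuracy at each stage. A minimal sketch, where the accuracy values are invented (the study reports only that accuracy stayed under 30%):

```python
# Oskamp-style pattern: stated confidence climbs with more case
# information while accuracy stays roughly flat.
stages = [
    ("initial",   0.33, 0.26),  # (stage, stated confidence, accuracy)
    ("more info", 0.53, 0.28),
]
for name, confidence, accuracy in stages:
    # Overconfidence = stated confidence minus observed accuracy.
    print(f"{name}: overconfidence = {confidence - accuracy:+.2f}")
```

The gap widens with extra information, which is the study's central result.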

Even if there is no general tendency toward overconfidence, social dynamics and adverse selection could conceivably promote it. For instance, those most likely to have the courage to start a new business are those who most overplace their abilities relative to those of other potential entrants. And if voters find confident leaders more credible, then contenders for leadership learn that they should express more confidence than their opponents in order to win election.

Overconfidence can benefit individual self-esteem and give a person the will to persist toward a desired goal. Simply believing in oneself may carry one's endeavours further than those of people who lack such belief.

Individual differences

Very high levels of core self-evaluations, a stable personality trait composed of locus of control, neuroticism, self-efficacy, and self-esteem, may lead to the overconfidence effect. People who have high core self-evaluations will think positively of themselves and be confident in their own abilities, although extremely high levels of core self-evaluations may cause an individual to be more confident than is warranted.

String Theory Meets Loop Quantum Gravity

by Sabine Hossenfelder

January 12, 2016


Eight decades have passed since physicists realized that the theories of quantum mechanics and gravity don’t fit together, and the puzzle of how to combine the two remains unsolved. In the last few decades, researchers have pursued the problem in two separate programs — string theory and loop quantum gravity — that are widely considered incompatible by their practitioners. But now some scientists argue that joining forces is the way forward.
Among the attempts to unify quantum theory and gravity, string theory has attracted the most attention. Its premise is simple: Everything is made of tiny strings. The strings may be closed unto themselves or have loose ends; they can vibrate, stretch, join or split. And in these manifold appearances lie the explanations for all phenomena we observe, both matter and space-time included.

Loop quantum gravity, by contrast, is concerned less with the matter that inhabits space-time than with the quantum properties of space-time itself. In loop quantum gravity, or LQG, space-time is a network. The smooth background of Einstein’s theory of gravity is replaced by nodes and links to which quantum properties are assigned. In this way, space is built up of discrete chunks. LQG is in large part a study of these chunks.

This approach has long been thought incompatible with string theory. Indeed, the conceptual differences are obvious and profound. For starters, LQG studies bits of space-time, whereas string theory investigates the behavior of objects within space-time. Specific technical problems separate the fields. String theory requires that space-time have 10 dimensions; LQG doesn’t work in higher dimensions. String theory also implies the existence of supersymmetry, in which all known particles have yet-undiscovered partners. Supersymmetry isn’t a feature of LQG.

These and other differences have split the theoretical physics community into deeply divergent camps. “Conferences have segregated,” said Jorge Pullin, a physicist at Louisiana State University and co-author of an LQG textbook. “Loopy people go to loopy conferences. Stringy people go to stringy conferences. They don’t even go to ‘physics’ conferences anymore. I think it’s unfortunate that it developed this way.”

But a number of factors may be pushing the camps closer together. New theoretical findings have revealed potential similarities between LQG and string theory. A young generation of string theorists has begun to look outside string theory for methods and tools that might be useful in the quest to understand how to create a “theory of everything.” And a still-raw paradox involving black holes and information loss has given everyone a fresh dose of humility.

Moreover, in the absence of experimental evidence for either string theory or LQG, mathematical proof that the two are in fact opposite sides of the same coin would bolster the argument that physicists are progressing toward the correct theory of everything. Combining LQG and string theory would truly make it the only game in town.

An Unexpected Link

An effort to solve some of LQG’s own internal problems has led to the first surprising link with string theory. Physicists who study LQG lack a clear understanding of how to zoom out from their network of space-time chunks and arrive at a large-scale description of space-time that dovetails with Einstein’s general theory of relativity — our best theory of gravity. More worrying still, their theory can’t reproduce the special case in which gravity can be neglected. It’s a malaise that befalls any approach that relies on chunking up space-time: in Einstein’s theory of special relativity, an object appears to contract depending on how fast an observer is moving relative to it. This contraction also affects the size of space-time chunks, which are then perceived differently by observers with different velocities. The discrepancy leads to problems with the central tenet of Einstein’s theory — that the laws of physics should be the same no matter what the observer’s velocity.

“It’s difficult to introduce discrete structures without running into difficulties with special relativity,” said Pullin. In a brief paper he wrote in 2014 with frequent collaborator Rodolfo Gambini, a physicist at the University of the Republic in Montevideo, Uruguay, Pullin argued that making LQG compatible with special relativity necessitates interactions that are similar to those found in string theory.

That the two approaches have something in common seemed likely to Pullin since a seminal discovery in the late 1990s by Juan Maldacena, a physicist at the Institute for Advanced Study in Princeton, N.J. Maldacena matched up a gravitational theory in a so-called anti-de Sitter (AdS) space-time with a field theory (CFT — the “C” is for “conformal”) on the boundary of the space-time. By using this AdS/CFT identification, the gravitational theory can be described by the better-understood field theory.

The full version of the duality is a conjecture, but it has a well-understood limiting case in which string theory plays no role. Because strings don’t matter in this limiting case, it should be shared by any theory of quantum gravity. Pullin sees this as a contact point.

Herman Verlinde, a theoretical physicist at Princeton University who frequently works on string theory, finds it plausible that methods from LQG can help illuminate the gravity side of the duality. In a recent paper, Verlinde looked at AdS/CFT in a simplified model with only two dimensions of space and one of time, or “2+1” as physicists say. He found that the AdS space can be described by a network like those used in LQG. Even though the construction presently only works in 2+1, it offers a new way to think about gravity. Verlinde hopes to generalize the model to higher dimensions. “Loop quantum gravity has been seen too narrowly. My approach is to be inclusive. It’s much more intellectually forward-looking,” he said.
 
But even having successfully combined LQG methods with string theory to make headway in anti-de Sitter space, the question remains: How useful is that combination? Anti-de Sitter space-times have a negative cosmological constant (a number that describes the large-scale geometry of the universe); our universe has a positive one. We just don’t inhabit the mathematical construct that is AdS space.
Verlinde is pragmatic. “One idea is that [for a positive cosmological constant] one needs a totally new theory,” he said. “Then the question is how different that theory is going to look. AdS is at the moment the best hint for the structure we are looking for, and then we have to find the twist to get a positive cosmological constant.” He thinks it’s time well spent: “Though [AdS] doesn’t describe our world, it will teach us some lessons that will guide us where to go.”

Coming Together in a Black Hole

Verlinde and Pullin both point to another chance for the string theory and loop quantum gravity communities to come together: the mysterious fate of information that falls into a black hole. In 2012, four researchers based at the University of California, Santa Barbara, highlighted an internal contradiction in the prevailing theory. They argued that requiring a black hole to let information escape would destroy the delicate structure of empty space around the black hole’s horizon, thereby creating a highly energetic barrier — a black hole “firewall.” This firewall, however, is incompatible with the equivalence principle that underlies general relativity, which holds that observers can’t tell whether they’ve crossed the horizon. The incompatibility roiled string theorists, who thought they understood black hole information and now must revisit their notebooks.

But this isn’t a conundrum only for string theorists. “This whole discussion about the black hole firewalls took place mostly within the string theory community, which I don’t understand,” Verlinde said. “These questions about quantum information, and entanglement, and how to construct a [mathematical] Hilbert space – that’s exactly what people in loop quantum gravity have been working on for a long time.”

Meanwhile, in a development that went unnoted by much of the string community, the barrier once posed by supersymmetry and extra dimensions has fallen as well. A group around Thomas Thiemann at Friedrich-Alexander University in Erlangen, Germany, has extended LQG to higher dimensions and included supersymmetry, both of which were formerly the territory of string theory.

More recently, Norbert Bodendorfer, a former student of Thiemann’s who is now at the University of Warsaw, has applied methods of LQG’s loop quantization to anti-de Sitter space. He argues that LQG can be useful for the AdS/CFT duality in situations where string theorists don’t know how to perform gravitational computations. Bodendorfer feels that the former chasm between string theory and LQG is fading away. “On some occasions I’ve had the impression that string theorists knew very little about LQG and didn’t want to talk about it,” he said. “But [the] younger people in string theory, they are very open-minded. They are very interested what is going on at the interface.”

“The biggest difference is in how we define our questions,” said Verlinde. “It’s more sociological than scientific, unfortunately.” He doesn’t think the two approaches are in conflict: “I’ve always viewed [string theory and loop quantum gravity] as parts of the same description. LQG is a method, it’s not a theory. It’s a method to think of quantum mechanics and geometry. It’s a method that string theorists can use and are actually using. These things are not incompatible.”

Not everyone is so convinced. Moshe Rozali, a string theorist at the University of British Columbia, remains skeptical of LQG: “The reason why I personally don’t work on LQG is the issue with special relativity,” he said. “If your approach does not respect the symmetries of special relativity from the outset, then you basically need a miracle to happen at one of your intermediate steps.” Still, Rozali said, some of the mathematical tools developed in LQG might come in handy. “I don’t think that there is any likelihood that string theory and LQG are going to converge to some middle ground,” he said. “But the methods are what people normally care about, and these are similar enough; the mathematical methods could have some overlap.”

Not everyone on the LQG side expects the two will merge either. Carlo Rovelli, a physicist at the University of Marseille and a founding father of LQG, believes his field is ascendant. “The string planet is infinitely less arrogant than ten years ago, especially after the bitter disappointment of the non-appearance of supersymmetric particles,” he said. “It is possible that the two theories could be parts of a common solution … but I myself think it is unlikely. String theory seems to me to have failed to deliver what it had promised in the ’80s, and is one of the many ‘nice-idea-but-nature-is-not-like-that’ that dot the history of science. I do not really understand how can people still have hope in it.”

For Pullin, declaring victory seems premature: “There are LQG people now saying, ‘We are the only game in town.’ I don’t subscribe to this way of arguing. I think both theories are vastly incomplete.”

This article was reprinted on Wired.com and BusinessInsider.com.

Thorium-based nuclear power (updated)

From Wikipedia, the free encyclopedia

A sample of thorium

Thorium-based nuclear power generation is fueled primarily by the nuclear fission of the isotope uranium-233 produced from the fertile element thorium. According to proponents, a thorium fuel cycle offers several potential advantages over a uranium fuel cycle—including much greater abundance of thorium on Earth, superior physical and nuclear fuel properties, and reduced nuclear waste production. However, development of thorium power has significant start-up costs. Proponents also cite the lack of weaponization potential as an advantage of thorium, while critics say that development of breeder reactors in general (including thorium reactors, which are breeders by nature) increases proliferation concerns. Since about 2008, nuclear energy experts have become more interested in thorium to supply nuclear fuel in place of uranium to generate nuclear power. This renewed interest has been highlighted in a number of scientific conferences, the latest of which, ThEC13, was held at CERN by iThEC and attracted over 200 scientists from 32 countries.
A nuclear reactor consumes certain specific fissile isotopes to produce energy. The three most practical types of nuclear reactor fuel are:
  1. Uranium-235, purified (i.e. "enriched") by reducing the amount of uranium-238 in natural mined uranium. Most nuclear power has been generated using low-enriched uranium (LEU), whereas high-enriched uranium (HEU) is necessary for weapons.
  2. Plutonium-239, transmuted from uranium-238 obtained from natural mined uranium.
  3. Uranium-233, transmuted from thorium-232, derived from natural mined thorium, which is the subject of this article.
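The transmutation in item 3 proceeds in steps: Th-232 captures a neutron to become Th-233, which beta-decays (half-life about 22 minutes) to protactinium-233, which in turn beta-decays, with a half-life of roughly 27 days, to fissile U-233. That last, slow step governs how quickly bred fuel becomes available; a small sketch of the exponential decay involved (the 27-day half-life is the standard figure, the sample times are arbitrary):

```python
def fraction_remaining(t_days, half_life_days=27.0):
    """Fraction of Pa-233 not yet decayed to U-233 after t days,
    from the standard exponential decay law."""
    return 0.5 ** (t_days / half_life_days)

for t in (27, 54, 270):
    print(f"after {t:3d} days: {1 - fraction_remaining(t):.1%} converted to U-233")
```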
Some believe thorium is key to developing a new generation of cleaner, safer nuclear power. According to a 2011 opinion piece by a group of scientists at the Georgia Institute of Technology, considering its overall potential, thorium-based power "can mean a 1000+ year solution or a quality low-carbon bridge to truly sustainable energy sources solving a huge portion of mankind’s negative environmental impact."
After studying the feasibility of using thorium, nuclear scientists Ralph W. Moir and Edward Teller suggested that thorium nuclear research should be restarted after a three-decade shutdown and that a small prototype plant should be built.

Background and brief history

Early thorium-based (MSR) nuclear reactor at Oak Ridge National Laboratory in the 1960s

After World War II, uranium-based nuclear reactors were built to produce electricity. These were similar to the reactor designs that produced material for nuclear weapons. During that period, the government of the United States also built an experimental molten salt reactor using U-233 fuel, the fissile material created by bombarding thorium with neutrons. The MSRE reactor, built at Oak Ridge National Laboratory, operated critical for roughly 15,000 hours from 1965 to 1969. In 1968, Nobel laureate and discoverer of plutonium, Glenn Seaborg, publicly announced to the Atomic Energy Commission, of which he was chairman, that the thorium-based reactor had been successfully developed and tested.
In 1973, however, the US government settled on uranium technology and largely discontinued thorium-related nuclear research. The reasons were that uranium-fueled reactors were more efficient, the research was proven, and thorium's breeding ratio was thought insufficient to produce enough fuel to support development of a commercial nuclear industry. As Moir and Teller later wrote, "The competition came down to a liquid metal fast breeder reactor (LMFBR) on the uranium-plutonium cycle and a thermal reactor on the thorium-233U cycle, the molten salt breeder reactor. The LMFBR had a larger breeding rate ... and won the competition." In their opinion, the decision to stop development of thorium reactors, at least as a backup option, “was an excusable mistake.”
Science writer Richard Martin states that nuclear physicist Alvin Weinberg, who was director at Oak Ridge and primarily responsible for the new reactor, lost his job as director because he championed development of the safer thorium reactors. Weinberg himself recalls this period:
[Congressman] Chet Holifield was clearly exasperated with me, and he finally blurted out, "Alvin, if you are concerned about the safety of reactors, then I think it may be time for you to leave nuclear energy." I was speechless. But it was apparent to me that my style, my attitude, and my perception of the future were no longer in tune with the powers within the AEC.
Martin explains that Weinberg's unwillingness to sacrifice potentially safe nuclear power for the benefit of military uses forced him to retire:
Weinberg realized that you could use thorium in an entirely new kind of reactor, one that would have zero risk of meltdown. . . . his team built a working reactor . . . . and he spent the rest of his 18-year tenure trying to make thorium the heart of the nation’s atomic power effort. He failed. Uranium reactors had already been established, and Hyman Rickover, de facto head of the US nuclear program, wanted the plutonium from uranium-powered nuclear plants to make bombs. Increasingly shunted aside, Weinberg was finally forced out in 1973.
Despite the documented history of thorium nuclear power, many of today’s nuclear experts were unaware of it. According to Chemical & Engineering News, "most people—including scientists—have hardly heard of the heavy-metal element and know little about it...," noting a comment by a conference attendee that "it's possible to have a Ph.D. in nuclear reactor technology and not know about thorium energy." Nuclear physicist Victor J. Stenger, for one, first learned of it in 2012:
It came as a surprise to me to learn recently that such an alternative has been available to us since World War II, but not pursued because it lacked weapons applications.
Others, including former NASA scientist and thorium expert Kirk Sorensen, agree that "thorium was the alternative path that was not taken … " In a documentary interview, Sorensen stated that if the US had not discontinued its research in 1974 it could have "probably achieved energy independence by around 2000."

Possible benefits

The World Nuclear Association explains some of the possible benefits:
The thorium fuel cycle offers enormous energy security benefits in the long-term – due to its potential for being a self-sustaining fuel without the need for fast neutron reactors. It is therefore an important and potentially viable technology that seems able to contribute to building credible, long-term nuclear energy scenarios.
Moir and Teller agree, noting that the possible advantages of thorium include "utilization of an abundant fuel, inaccessibility of that fuel to terrorists or for diversion to weapons use, together with good economics and safety features … " Thorium is considered the "most abundant, most readily available, cleanest, and safest energy source on Earth," adds science writer Richard Martin.
  • Thorium is three times as abundant as uranium and nearly as abundant as lead and gallium in the Earth's crust. The Thorium Energy Alliance estimates "there is enough thorium in the United States alone to power the country at its current energy level for over 1,000 years." "America has buried tons as a by-product of rare earth metals mining," notes Evans-Pritchard. Almost all thorium is fertile Th-232, compared to uranium that is composed of 99.3% fertile U-238 and 0.7% more valuable fissile U-235.
  • It is difficult to make a practical nuclear bomb from a thorium reactor's byproducts. According to Alvin Radkowsky, designer of the world's first full-scale atomic electric power plant, "a thorium reactor's plutonium production rate would be less than 2 percent of that of a standard reactor, and the plutonium's isotopic content would make it unsuitable for a nuclear detonation." Several uranium-233 bombs have been tested, but the presence of uranium-232 tended to "poison" the uranium-233 in two ways: intense radiation from the uranium-232 made the material difficult to handle, and the uranium-232 led to possible pre-detonation. Separating the uranium-232 from the uranium-233 proved very difficult, although newer laser techniques could facilitate that process.
  • There is much less nuclear waste—up to two orders of magnitude less, state Moir and Teller, eliminating the need for large-scale or long-term storage; "Chinese scientists claim that hazardous waste will be a thousand times less than with uranium." The radioactivity of the resulting waste also drops to safe levels after just one to a few hundred years, compared to the tens of thousands of years needed for current nuclear waste to cool off.
  • According to Moir and Teller, "once started up [it] needs no other fuel except thorium because it makes most or all of its own fuel." This applies only to breeder reactors, which produce at least as much fissile material as they consume. Other reactors require additional fissile material, such as uranium-235 or plutonium.
  • The thorium fuel cycle is a potential way to produce long-term nuclear energy with low-radiotoxicity waste. In addition, the transition to thorium could be accomplished through the incineration of weapons-grade plutonium (WPu) or civilian plutonium.
  • Since all natural thorium can be used as fuel, no expensive fuel enrichment is needed. However, the same is true for U-238 as fertile fuel in the uranium-plutonium cycle.
  • Comparing the amount of thorium needed with coal, Nobel laureate Carlo Rubbia of CERN (the European Organization for Nuclear Research) estimates that one ton of thorium can produce as much energy as 200 tons of uranium, or 3,500,000 tons of coal.
  • Liquid fluoride thorium reactors are designed to be meltdown-proof. A plug at the bottom of the reactor melts in the event of a power failure or if temperatures exceed a set limit, draining the fuel into an underground tank for safe storage.
  • Mining thorium is safer and more efficient than mining uranium. Thorium's ore, monazite, generally contains higher concentrations of thorium than the percentage of uranium found in its respective ore. This makes thorium a more cost-efficient and less environmentally damaging fuel source. Thorium mining is also easier and less dangerous than uranium mining, as the mine is an open pit which requires no ventilation, unlike underground uranium mines, where radon levels can be potentially harmful.
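Taking Rubbia's equivalence figures above at face value, the implied ratios can be sanity-checked with a few lines of arithmetic (a rough sketch; the 200:1 and 3,500,000:1 figures are the source's claims, not derived here):

```python
# Rubbia's claimed equivalences, as quoted above:
# 1 ton of thorium ~ 200 tons of uranium ~ 3,500,000 tons of coal.
thorium_tons = 1
uranium_tons = 200
coal_tons = 3_500_000

# Implied coal-equivalent of one ton of uranium under these figures:
coal_per_uranium_ton = coal_tons / uranium_tons
print(coal_per_uranium_ton)  # 17500.0 tons of coal per ton of uranium
```

In other words, the quoted figures imply a 17,500:1 energy advantage of uranium over coal, with thorium a further factor of 200 beyond that.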
Summarizing some of the potential benefits, Martin offers his general opinion: "Thorium could provide a clean and effectively limitless source of power while allaying all public concern—weapons proliferation, radioactive pollution, toxic waste, and fuel that is both costly and complicated to process." From an economics viewpoint, UK business editor Ambrose Evans-Pritchard has suggested that "Obama could kill fossil fuels overnight with a nuclear dash for thorium," proposing a "new Manhattan Project" and adding, "If it works, Manhattan II could restore American optimism and strategic leadership at a stroke …" Moir and Teller estimated in 2004 that the cost for their recommended prototype would be "well under $1 billion with operation costs likely on the order of $100 million per year," and that as a result a "large-scale nuclear power plan" usable by many countries could be set up within a decade.
A report by the Bellona Foundation in 2013 concluded that the economics are quite speculative: thorium nuclear reactors are unlikely to produce cheaper energy, but the management of spent fuel is likely to be cheaper than for uranium nuclear reactors.

Possible disadvantages

Some experts note possible specific disadvantages of thorium nuclear power:
  • Breeding in a thermal neutron spectrum is slow and requires extensive reprocessing, the feasibility of which remains an open question.
  • Significant and expensive testing, analysis and licensing work is first required, requiring business and government support. In a 2012 report on the use of thorium fuel with existing water-cooled reactors, the Bulletin of the Atomic Scientists suggested that it would "require too great an investment and provide no clear payoff", and that "from the utilities’ point of view, the only legitimate driver capable of motivating pursuit of thorium is economics".
  • There is a higher cost of fuel fabrication and reprocessing than in plants using traditional solid fuel rods.
  • Irradiating thorium in a reactor produces uranium-232, which is very dangerous to handle because of the intense gamma rays it emits. The irradiation process can, however, be altered by removing protactinium-233 as it forms; the result is then relatively pure uranium-233 in lieu of uranium-232, which can be used in nuclear weapons, making thorium a potential dual-purpose fuel.

Thorium-based nuclear power projects

Research and development of thorium-based nuclear reactors, primarily the liquid fluoride thorium reactor (LFTR), a molten salt reactor (MSR) design, has been or is now being done in the United States, United Kingdom, Germany, Brazil, India, China, France, the Czech Republic, Japan, Russia, Canada, Israel, and the Netherlands. Conferences with experts from as many as 32 countries are held, including one by the European Organization for Nuclear Research (CERN) in 2013, which focused on thorium as an alternative nuclear technology that does not require production of nuclear waste. Recognized experts, such as Hans Blix, former head of the International Atomic Energy Agency, have called for expanded support of new nuclear power technology; Blix states that "the thorium option offers the world not only a new sustainable supply of fuel for nuclear power but also one that makes better use of the fuel's energy content."

Canada

CANDU reactors are capable of using thorium, and Thorium Power Canada has, in 2013, planned and proposed developing thorium power projects for Chile and Indonesia.
The proposed 10 MW demonstration reactor in Chile could be used to power a 20 million litre/day desalination plant. All land and regulatory approvals are currently in process.
Thorium Power Canada's proposal for the development of a 25 MW thorium reactor in Indonesia is meant to be a "demonstration power project" which could provide electrical power to the country’s power grid.
In 2018, the New Brunswick Energy Solutions Corporation announced the participation of Moltex Energy in the nuclear research cluster that will work on research and development on small modular reactor technology.

China

At the 2011 annual conference of the Chinese Academy of Sciences, it was announced that "China has initiated a research and development project in thorium MSR technology." In addition, Dr. Jiang Mianheng, son of China's former leader Jiang Zemin, led a thorium delegation in non-disclosure talks at Oak Ridge National Laboratory, Tennessee, and by late 2013 China had officially partnered with Oak Ridge to aid China in its own development. The World Nuclear Association notes that the China Academy of Sciences in January 2011 announced its R&D program, "claiming to have the world's largest national effort on it, hoping to obtain full intellectual property rights on the technology." According to Martin, "China has made clear its intention to go it alone," adding that China already has a monopoly over most of the world's rare earth minerals.
In March 2014, with its reliance on coal-fired power having become a major cause of China's "smog crisis," the government reduced its original goal of creating a working reactor from 25 years down to 10. "In the past, the government was interested in nuclear power because of the energy shortage. Now they are more interested because of smog," said Professor Li Zhong, a scientist working on the project. "This is definitely a race," he added.
In early 2012, it was reported that China, using components produced by the West and Russia, planned to build two prototype thorium MSRs by 2015, and had budgeted the project at $400 million, requiring 400 workers. China also finalized an agreement with a Canadian nuclear technology company to develop improved CANDU reactors using thorium and uranium as a fuel.

Germany, 1980s

The German THTR-300 was a prototype commercial power station using thorium as fertile and highly enriched U-235 as fissile fuel. Though named a thorium high-temperature reactor, it mostly fissioned U-235. The THTR-300 was a helium-cooled high-temperature reactor with a pebble-bed core consisting of approximately 670,000 spherical fuel compacts, each 6 centimetres (2.4 in) in diameter, with particles of uranium-235 and thorium-232 fuel embedded in a graphite matrix. It fed power to Germany's grid for 432 days in the late 1980s before it was shut down for cost, mechanical and other reasons.

India

India has one of the largest supplies of thorium in the world, but comparatively little uranium. India has projected meeting as much as 30% of its electrical demand through thorium by 2050.
In February 2014, Bhabha Atomic Research Centre (BARC), in Mumbai, India, presented their latest design for a "next-generation nuclear reactor" that will burn thorium as its fuel, calling it the Advanced Heavy Water Reactor (AHWR). They estimated the reactor could function without an operator for 120 days. Validation of its core reactor physics was underway by late 2017.
According to Dr R K Sinha, chairman of their Atomic Energy Commission, "This will reduce our dependence on fossil fuels, mostly imported, and will be a major contribution to global efforts to combat climate change." Because of its inherent safety, they expect that similar designs could be set up "within" populated cities, like Mumbai or Delhi.
India's government is also developing up to 62 reactors, most of them thorium-based, which it expects to be operational by 2025. It is the "only country in the world with a detailed, funded, government-approved plan" to focus on thorium-based nuclear power. The country currently gets under 2% of its electricity from nuclear power, with the rest coming from coal (60%), hydroelectricity (16%), other renewable sources (12%) and natural gas (9%). It expects to eventually produce around 25% of its electricity from nuclear power. In 2009 the chairman of the Indian Atomic Energy Commission said that India has a "long-term objective goal of becoming energy-independent based on its vast thorium resources."
In late June 2012, India announced that their "first commercial fast reactor" was near completion, making India "the most advanced country in thorium research". "We have huge reserves of thorium. The challenge is to develop technology for converting this to fissile material," stated the former chairman of India's Atomic Energy Commission. That vision of using thorium in place of uranium was set out in the 1950s by physicist Homi Bhabha. India's first commercial fast breeder reactor — the 500 MWe Prototype Fast Breeder Reactor (PFBR) — is approaching completion at the Indira Gandhi Centre for Atomic Research, Kalpakkam, Tamil Nadu.
As of July 2013 the major equipment of the PFBR had been erected and the loading of "dummy" fuels in peripheral locations was in progress. The reactor was expected to go critical by September 2014. The Centre had sanctioned Rs. 5,677 crore for building the PFBR and “we will definitely build the reactor within that amount,” Mr. Kumar asserted. The original cost of the project was Rs. 3,492 crore, revised to Rs. 5,677 crore. Electricity generated from the PFBR would be sold to the State Electricity Boards at Rs. 4.44 a unit. BHAVINI builds breeder reactors in India.
In 2013 India's 300 MWe AHWR (advanced heavy water reactor) was slated to be built at an undisclosed location. The design envisages a start-up with reactor-grade plutonium that will breed U-233 from Th-232; thereafter thorium is to be the only fuel. As of 2017, the design is in the final stages of validation.
By November 2015 the PFBR was built, but delays have since postponed its commissioning to September 2016. India's commitment to long-term nuclear energy production is nevertheless underscored by the approval in 2015 of ten new sites for reactors of unspecified types, though procurement of primary fissile material – preferably plutonium – may be problematic due to India's low uranium reserves and production capacity.

Israel

In May 2010, researchers from Ben-Gurion University of the Negev in Israel and Brookhaven National Laboratory in New York began to collaborate on the development of thorium reactors, aimed at being self-sustaining, "meaning one that will produce and consume about the same amounts of fuel," which is not possible with uranium in a light water reactor.

Japan

In June 2012, Japanese utility Chubu Electric Power wrote that it regards thorium as "one of future possible energy resources."

Norway

In late 2012, Norway's privately owned Thor Energy, in collaboration with the government and Westinghouse, announced a four-year trial using thorium in an existing nuclear reactor. In 2013, Aker Solutions purchased patents from Nobel Prize-winning physicist Carlo Rubbia for the design of a proton accelerator-based thorium nuclear power plant.

United Kingdom

In Britain, one organisation promoting or examining research on thorium-based nuclear plants is The Alvin Weinberg Foundation. House of Lords member Bryony Worthington is promoting thorium, calling it “the forgotten fuel” that could alter Britain’s energy plans. However, in 2010, the UK’s National Nuclear Laboratory (NNL) concluded that for the short to medium term, "...the thorium fuel cycle does not currently have a role to play," in that it is "technically immature, and would require a significant financial investment and risk without clear benefits," and concluded that the benefits have been "overstated." Friends of the Earth UK considers research into it as "useful" as a fallback option.

United States

In its January 2012 report to the United States Secretary of Energy, the Blue Ribbon Commission on America's Future notes that a "molten-salt reactor using thorium [has] also been proposed." That same month it was reported that the US Department of Energy is "quietly collaborating with China" on thorium-based nuclear power designs using an MSR.
Some experts and politicians want thorium to be "the pillar of the U.S. nuclear future." Senators Harry Reid and Orrin Hatch have supported using $250 million in federal research funds to revive ORNL research. In 2009, Congressman Joe Sestak unsuccessfully attempted to secure funding for research and development of a destroyer-sized reactor (one sized to power a destroyer) using thorium-based liquid fuel.
Alvin Radkowsky, chief designer of the world’s second full-scale atomic electric power plant in Shippingport, Pennsylvania, founded a joint US and Russian project in 1997 to create a thorium-based reactor, considered a "creative breakthrough." In 1992, while a resident professor in Tel Aviv, Israel, he founded the US company, Thorium Power Ltd., near Washington, D.C., to build thorium reactors.
The primary fuel of the proposed HT3R research project near Odessa, Texas, United States, will be ceramic-coated thorium beads. The earliest the reactor would become operational was 2015.
On the research potential of thorium-based nuclear power, Richard L. Garwin, winner of the Presidential Medal of Freedom, and Georges Charpak advise further study of the Energy amplifier in their book Megawatts and Megatons (2001), pages 153-163.

World sources of thorium

World thorium reserves (2007)
Country Tons %
Australia 489,000 18.7%
USA 400,000 15.3%
Turkey 344,000 13.2%
India 319,000 12.2%
Brazil 302,000 11.6%
Venezuela 300,000 11.5%
Norway 132,000 5.1%
Egypt 100,000 3.8%
Russia 75,000 2.9%
Greenland (Denmark) 54,000 2.1%
Canada 44,000 1.7%
South Africa 18,000 0.7%
Other countries 33,000 1.2%
World Total 2,610,000 100.0%
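As a consistency check, the percentage column of the table above can be recomputed from the listed tonnages (a minimal sketch; the figures are transcribed directly from the 2007 table):

```python
# World thorium reserves (2007), in tons, as listed above.
reserves = {
    "Australia": 489_000, "USA": 400_000, "Turkey": 344_000,
    "India": 319_000, "Brazil": 302_000, "Venezuela": 300_000,
    "Norway": 132_000, "Egypt": 100_000, "Russia": 75_000,
    "Greenland (Denmark)": 54_000, "Canada": 44_000,
    "South Africa": 18_000, "Other countries": 33_000,
}

total = sum(reserves.values())
print(total)  # 2610000, matching the stated world total

# Each country's share of the world total, to one decimal place:
for country, tons in reserves.items():
    print(f"{country}: {100 * tons / total:.1f}%")
```

The recomputed shares agree with the table's percentage column to within rounding.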

Thorium is mostly found with the rare earth phosphate mineral monazite, which contains up to about 12% thorium phosphate, but 6-7% on average. World monazite resources are estimated to be about 12 million tons, two-thirds of which are in heavy mineral sands deposits on the south and east coasts of India. There are substantial deposits in several other countries (see table "World thorium reserves"). Monazite is a good source of rare earth elements (REEs), but monazites are currently not economical to process because the radioactive thorium produced as a byproduct would have to be stored indefinitely. However, if thorium-based power plants were adopted on a large scale, virtually all the world's thorium requirements could be supplied simply by refining monazites for their more valuable REEs.
Another estimate of reasonably assured reserves (RAR) and estimated additional reserves (EAR) of thorium comes from OECD/NEA, Nuclear Energy, "Trends in Nuclear Fuel Cycle", Paris, France (2001).
IAEA Estimates in tons (2005)
Country Tons %
India 519,000 21%
Australia 489,000 19%
USA 400,000 13%
Turkey 344,000 11%
Venezuela 302,000 10%
Brazil 302,000 10%
Norway 132,000 4%
Egypt 100,000 3%
Russia 75,000 2%
Greenland 54,000 2%
Canada 44,000 2%
South Africa 18,000 1%
Other countries 33,000 2%
World Total 2,810,000 100%

The preceding figures are reserves, and as such refer to the amount of thorium in high-concentration deposits inventoried so far and estimated to be extractable at current market prices; millions of times more exists in Earth's 3×10^19-tonne crust (around 120 trillion tons of thorium), and lesser but still vast quantities of thorium exist at intermediate concentrations. Proved reserves are therefore a poor indicator of the total future supply of a mineral resource.
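The two crustal figures quoted above imply an average concentration that is easy to check (a rough sketch using only the numbers given in the text):

```python
crust_mass_tonnes = 3e19        # mass of Earth's crust, as quoted above
thorium_in_crust_tons = 120e12  # ~120 trillion tons of thorium, as quoted

# Implied average concentration, in parts per million by mass:
ppm = 1e6 * thorium_in_crust_tons / crust_mass_tonnes
print(ppm)  # ~4 ppm, the right order of magnitude for thorium's crustal abundance
```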

Types of thorium-based reactors

According to the World Nuclear Association, there are seven types of reactors that can be designed to use thorium as a nuclear fuel. Six of these have entered operational service at some point; the seventh is still conceptual, although it is currently in development by many countries:

Saturday, December 22, 2018

Stephen Hawking's Final Theory About Our Universe Will Melt Your Brain

Groundbreaking physicist Stephen Hawking left us one last shimmering piece of brilliance before he died: his final paper, detailing his last theory on the origin of the Universe, co-authored with Thomas Hertog from KU Leuven.

The paper, published in the Journal of High Energy Physics in May, puts forward that the Universe is far less complex than current multiverse theories suggest.

It's based around a concept called eternal inflation, first introduced in 1979 and published in 1981.

After the Big Bang, the Universe experienced a period of exponential inflation. Then it slowed down, and the energy converted into matter and radiation.

However, according to the theory of eternal inflation, some bubbles of space stopped inflating or slowed on a stopping trajectory, creating a small fractal dead-end of static space.

Meanwhile, in other bubbles of space, because of quantum effects, inflation never stops - leading to an infinite number of multiverses.

Everything we see in our observable Universe, according to this theory, is contained in just one of these bubbles - in which inflation has stopped, allowing for the formation of stars and galaxies.

multiverse inflating 
Visualisation of the inflating multiverse. (A. Linde/Stanford University)

"The usual theory of eternal inflation predicts that globally our universe is like an infinite fractal, with a mosaic of different pocket universes, separated by an inflating ocean," Hawking explained.

"The local laws of physics and chemistry can differ from one pocket universe to another, which together would form a multiverse. But I have never been a fan of the multiverse. If the scale of different universes in the multiverse is large or infinite the theory can't be tested."

Even one of the original architects of the eternal inflation model has disavowed it in recent years.

Paul Steinhardt, physicist at Princeton University, has gone on record saying that the theory took the problem it was meant to solve - to make the Universe, well, universally consistent with our observations - and just shifted it onto a new model.

Hawking and Hertog are now saying that the eternal inflation model is wrong. This is because Einstein's theory of general relativity breaks down on quantum scales.

"The problem with the usual account of eternal inflation is that it assumes an existing background universe that evolves according to Einstein's theory of general relativity and treats the quantum effects as small fluctuations around this," Hertog explained.

"However, the dynamics of eternal inflation wipes out the separation between classical and quantum physics. As a consequence, Einstein's theory breaks down in eternal inflation."

Hawking's last theory is based on string theory, one of the frameworks that attempts to reconcile general relativity with quantum theory by replacing the point-like particles in particle physics with tiny, vibrating one-dimensional strings.

In string theory, the holographic principle proposes that a volume of space can be described on a lower-dimensional boundary; so the universe is like a hologram, in which physical reality in 3D spaces can be mathematically reduced to 2D projections on their surfaces.

The researchers developed a variation of the holographic principle that projects out the time dimension in eternal inflation, which allowed them to describe the concept without having to rely on general relativity.

This then allowed them to mathematically reduce eternal inflation to a timeless state on a spatial surface at the beginning of the Universe - a hologram of eternal inflation.

"When we trace the evolution of our universe backwards in time, at some point we arrive at the threshold of eternal inflation, where our familiar notion of time ceases to have any meaning," said Hertog.

In 1983, Hawking and another researcher, physicist James Hartle, proposed what is known as the 'no boundary theory' or the 'Hartle-Hawking state'. They proposed that, prior to the Big Bang, there was space, but no time. So the Universe, when it began, expanded from a single point, but doesn't have a boundary.

According to the new theory, the early Universe did have a boundary, and that's allowed Hawking and Hertog to derive more reliable predictions about the structure of the Universe.

"We predict that our universe, on the largest scales, is reasonably smooth and globally finite. So it is not a fractal structure," Hawking said.

It's a result that doesn't disprove multiverses, but reduces them to a much smaller range - which means that multiverse theory may be easier to test in the future, if the work can be replicated and confirmed by other physicists.

Hertog plans to test it by looking for gravitational waves that could have been generated by eternal inflation.

These waves are too large to be detected by LIGO, but future gravitational wave interferometers such as space-based LISA, and future studies of the cosmic microwave background, may reveal them.

The team's research was published in the Journal of High Energy Physics, and can be read in full on arXiv. Good luck.
