
Sunday, December 23, 2018

False consensus effect

From Wikipedia, the free encyclopedia

In psychology, the false-consensus effect or false-consensus bias is an attributional type of cognitive bias whereby people tend to overestimate the extent to which their opinions, beliefs, preferences, values, and habits are normal and typical of those of others (i.e., that others also think the same way that they do). This cognitive bias tends to lead to the perception of a consensus that does not exist, a "false consensus". 
 
This false consensus is significant because it increases self-esteem (overconfidence effect). It can be derived from a desire to conform and be liked by others in a social environment. The bias is especially prevalent in group settings where one thinks the collective opinion of one's own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way. The false-consensus effect is not restricted to cases where people believe that their values are shared by the majority: even when people know their view is a minority one, they still overestimate how widely it is shared.

Additionally, when confronted with evidence that a consensus does not exist, people often assume that those who do not agree with them are defective in some way. There is no single cause for this cognitive bias; the availability heuristic, self-serving bias, and naïve realism have been suggested as at least partial underlying factors. Maintenance of this cognitive bias may be related to the tendency to make decisions with relatively little information. When faced with uncertainty and a limited sample from which to make decisions, people often "project" themselves onto the situation. When this personal knowledge is used as input to make generalizations, it often results in the false sense of being part of the majority.

The false-consensus effect can be contrasted with pluralistic ignorance, an error in which people privately disapprove but publicly support what seems to be the majority view (see below).

Contrasted with pluralistic ignorance

The false-consensus effect can be contrasted with pluralistic ignorance, an error in which people privately disapprove but publicly support what seems to be the majority view (regarding a norm or belief), when the majority in fact shares their (private) disapproval. While the false-consensus effect leads people to wrongly believe that the majority agrees with them (when the majority, in fact, openly disagrees with them), the pluralistic ignorance effect leads people to wrongly believe that they disagree with the majority (when the majority, in fact, covertly agrees with them). Pluralistic ignorance might, for example, lead a student to engage in binge drinking because of the mistaken belief that most other students approve of it, while in reality most other students disapprove, but behave in the same way because they share the same mistaken (but collectively self-sustaining) belief. In a parallel example of the false-consensus effect, a student who likes binge drinking would believe that a majority also likes it, while in reality most others dislike it and openly say so.

Major theoretical approaches

The false-consensus effect can be traced back to two parallel theories of social perception, "the study of how we form impressions of and make inferences about other people". The first is the idea of social comparison. The principal claim of Leon Festinger's (1954) social comparison theory was that individuals evaluate their thoughts and attitudes based on other people. This may be motivated by a desire for confirmation and the need to feel good about oneself. As an extension of this theory, people may use others as sources of information to define social reality and guide behavior. This is called informational social influence. The problem, though, is that people are often unable to accurately perceive the social norm and the actual attitudes of others. In other words, research has shown that people are surprisingly poor "intuitive psychologists" and that our social judgments are often inaccurate. This finding helped to lay the groundwork for an understanding of biased processing and inaccurate social perception. The false-consensus effect is just one example of such an inaccuracy.

The second influential theory is projection, the idea that people project their own attitudes and beliefs onto others. This idea of projection is not a new concept. In fact, it can be found in Sigmund Freud's work on the defense mechanism of projection, D.S. Holmes' work on "attributive projection" (1968), and Gustav Ichheisser's work on social perception (1970). D.S. Holmes, for example, described social projection as the process by which people "attempt to validate their beliefs by projecting their own characteristics onto other individuals".

Here a connection can be made between the two stated theories of social comparison and projection. First, as social comparison theory explains, individuals constantly look to peers as a reference group and are motivated to do so in order to seek confirmation for their own attitudes and beliefs. To secure that confirmation and protect their self-esteem, however, individuals might unconsciously project their own beliefs onto others (the targets of their comparisons). The final outcome is the false-consensus effect. To summarize, the false-consensus effect can be seen as stemming from both social comparison theory and the concept of projection.

The false-consensus effect, as defined by Ross, Greene, and House in 1977, came to be the culmination of the many related theories that preceded it. In their well-known series of four studies, Ross and associates hypothesized and then demonstrated that people tend to overestimate the popularity of their own beliefs and preferences. In each of the studies, subjects or "raters" were asked to choose one of a few mutually-exclusive responses. They would then predict the popularity of each of their choices among other participants, referred to as "actors". To take this a step further, Ross and associates also proposed and tested a related bias in social inferences: they found that raters in an experiment estimated their own response to be not only common, but also not very revealing of the actors' "distinguishing personal dispositions". On the other hand, alternative or opposite responses were perceived as much more revealing of the actors as people. In general, the raters made more "extreme predictions" about the personalities of the actors that did not share the raters' own preference. In fact, the raters may have even thought that there was something wrong with the people expressing the alternative response.

In the ten years after the influential Ross et al. study, close to 50 papers were published with data on the false-consensus effect. Theoretical approaches were also expanded. The theoretical perspectives of this era can be divided into four categories: (a) selective exposure and cognitive availability, (b) salience and focus of attention, (c) logical information processing, and (d) motivational processes. In general, the researchers and designers of these theories believe that there is not a single right answer. Instead, they admit that there is overlap among the theories and that the false-consensus effect is most likely due to a combination of these factors.

Selective exposure and cognitive availability

This theory is closely tied to the availability heuristic, which suggests that perceptions of similarity (or difference) are affected by how easily those characteristics can be recalled from memory. And as one might expect, similarities between oneself and others are more easily recalled than differences. This is in part because people usually associate with those who are similar to themselves. This selective exposure to similar people may bias or restrict the "sample of information about the true diversity of opinion in the larger social environment". As a result of selective exposure and the availability heuristic, it is natural for similarities to prevail in one's thoughts.

Botvin et al. (1992) conducted a widely cited study of the false-consensus effect in a specific adolescent community, seeking to determine whether students show a higher level of false consensus among their direct peers than with society at large. The participants were 203 college students ranging in age from 18 to 25 (with an average age of 18.5). They were given a questionnaire asking how they felt about a variety of social topics and, for each topic, asked to estimate the percentage of their peers who would agree with them. The results showed that the false-consensus effect was extremely prevalent when participants were describing the rest of their college community; of the twenty topics considered, sixteen prominently demonstrated the effect. The high levels of false consensus seen in this study can be attributed to the group studied: because the participants were asked to compare themselves to a group of peers that they are constantly around (and view as very similar to themselves), the level of false consensus increased.

Salience and focus of attention

This theory suggests that when an individual focuses solely on their own preferred position, they are more likely to overestimate its popularity, thus falling victim to the false-consensus effect. This is because that position is the only one in their immediate consciousness. Performing an action that promotes the position will make it more salient and may increase the false-consensus effect. If, however, more positions are presented to the individual, the degree of the false-consensus effect might decrease significantly.

Logical information processing

This theory assumes that active and seemingly rational thinking underlies an individual's estimates of similarity among others. This is manifested in one's causal attributions. For instance, if an individual makes an external attribution for a belief, the individual will likely view his or her experience of the thing in question as a matter of objective fact. For example, a few movie-goers may falsely assume that the quality of a film is a purely objective property. To explain their dissatisfaction with it, the viewers may say that it was simply a bad movie (an external attribution). Based on this (perhaps erroneous) assumption of objectivity, it seems rational or "logical" to assume that everyone else will have the same experience; consensus should be high. On the other hand, someone in the same situation who makes an internal attribution (perhaps a film aficionado who is well aware of his or her especially high standards) will recognize the subjectivity of the experience and be drawn to the opposite conclusion; their estimate of consensus with their experience will be much lower. Although they lead to two opposite outcomes, both paths of attribution rely on an initial assumption which then leads to a "logical" conclusion. By this logic, then, the false-consensus effect can be seen as a reflection of the fundamental attribution error (specifically the actor-observer bias), in which people prefer external/situational attributions over internal/dispositional ones to justify their own behaviors.

In a study by Fox, Yinon, and Mayraz, researchers attempted to determine whether the level of the false-consensus effect changes across age groups. The researchers split two hundred participants into four age groups; gender was not considered a factor. As in the previous study mentioned, this study used a questionnaire as its main source of information. The results showed that the false-consensus effect was extremely prevalent in all groups, but most prevalent in the oldest group (the participants labeled "old-age home residents"), who showed the effect in all 12 areas about which they were questioned. The increase in false consensus seen in the oldest group can be attributed to their high level of "logical" reasoning behind their decisions: having lived the longest, they feel they can project their beliefs onto all age groups on the strength of their (seemingly objective) past experience and wisdom. The younger groups cannot relate in this way to those older than themselves, because they have not had that experience and do not claim to know such objective truths. These results demonstrate a tendency for older people to rely more heavily on situational attributions (life experience) than on internal attributions.

Motivational processes

This theory stresses the benefits of the false-consensus effect: namely, the perception of increased social validation, social support, and self-esteem. It may also be useful to exaggerate similarities in social situations in order to increase liking. It is possible that these benefits serve as positive reinforcement for false-consensus thinking.

Relation to personality psychology

Within personality psychology, the false-consensus effect plays a comparatively small role, because the effect depends heavily on the social environment and on how a person interprets that environment. Where social psychology looks at situational attributions, personality psychology evaluates people in terms of dispositional attributions, which makes the false-consensus effect largely situational rather than a trait in itself. A person's personality could therefore affect the degree to which that person shows the false-consensus effect, but not whether the effect exists at all. This should not, however, be read as saying that an individual is the sole product of the social environment. For a trait to "exist" in an organism's mind, there must be a biological structure that underpins it: for an organism to see ultraviolet light, it must have genes (which give rise to the relevant biological structure) that allow it to perceive that part of the environment. Since the brain is a biological system, there must likewise be an underlying biological disposition that allows an individual to register and interpret the social environment, thereby generating the false-consensus effect. The brain's purpose is, after all, to extract information from the environment and accordingly generate behaviour and regulate physiology. There is thus no sharp line between "innate" and "learned", or "nature" and "nurture"; the two interact and both are needed. Social and personality psychology are not separate fields but complementary ones, as the person-situation debate demonstrates.

Belief in a favorable future

The concept of false consensus effect can also be extended to predictions about future others. Belief in a favorable future is the belief that future others will change their preferences and beliefs in alignment with one's own. Belief in a favorable future suggests that people overestimate the extent to which other people will come to agree with their preferences and beliefs over time.

Rogers, Moore, and Norton (2017) find that belief in a favorable future is greater in magnitude than the false-consensus effect for two reasons:
  1. It is based on future others, whose beliefs are not directly observable, and
  2. It is focused on future beliefs, which gives these future others time to “discover” the truth and change their beliefs.

Applications

The false-consensus effect is an important attribution bias to take into consideration when conducting business and in everyday social interactions. Essentially, people are inclined to believe that the general population agrees with their opinions and judgments. Whether or not this belief is accurate, it gives them a feeling of greater assurance and security in their decisions. This could be an important phenomenon to either exploit or avoid in business dealings.

For example, if a man doubted whether he wanted to buy a new tool, breaking down his notion that others share his doubt would be an important step in persuading him to purchase it. By convincing the customer that other people in fact do want to buy the tool, the seller could perhaps make a sale that would not have happened otherwise. In this way, the false-consensus effect is closely related to conformity, the effect in which an individual is influenced to match the beliefs or behaviors of a group. The key difference is that conformity means matching the behaviors, beliefs, or attitudes of a real group, while the false-consensus effect means perceiving that others share one's behaviors, beliefs, or attitudes, whether or not they really do. Making the customer feel that the opinion of others (society) favors buying the tool will make him more confident about his purchase and lead him to believe that other people would have made the same decision.

Similarly, any elements of society affected by public opinion—e.g., elections, advertising, publicity—are very much influenced by the false-consensus effect. This is partially because the way in which people develop their perceptions involves "differential processes of awareness". That is to say, while some people are motivated to reach correct conclusions, others may be motivated to reach preferred conclusions. Members of the latter category will more often experience the false-consensus effect, because the subject is likely to search actively for like-minded supporters and may discount or ignore the opposition.

Uncertainties

There is ambiguity about several facets of the false-consensus effect and of its study. First of all, it is unclear exactly which factors play the largest role in the strength and prevalence of the false-consensus effect in individuals. For example, two individuals in the same group and with very similar social standing could have very different levels of false-consensus effect, but it is unclear what social, personality, or perceptual differences between them play the largest role in causing this disparity.

Additionally, it can be difficult to obtain accurate survey data about the false-consensus effect (as well as other psychological biases) because the search for consistent, reliable groups to be surveyed (often over an extended period of time) often leads to groups that might have dynamics slightly different from those of the "real world". For example, many of the referenced studies in this article examined college students, who might have an especially high level of false-consensus effect both because they are surrounded by their peers (and perhaps experience the availability heuristic) and because they often assume that they are similar to their peers. This may result in distorted data from some studies of the false-consensus effect.

Overconfidence effect

From Wikipedia, the free encyclopedia

The overconfidence effect is a well-established bias in which a person's subjective confidence in his or her judgements is reliably greater than the objective accuracy of those judgements, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
 
The most common way in which overconfidence has been studied is by asking people how confident they are of specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy, implying people are more sure that they are correct than they deserve to be. If human confidence had perfect calibration, judgments with 100% confidence would be correct 100% of the time, 90% confidence correct 90% of the time, and so on for the other levels of confidence. By contrast, the key finding is that confidence exceeds accuracy so long as the subject is answering hard questions about an unfamiliar topic. For example, in a spelling task, subjects were correct about 80% of the time, whereas they claimed to be 100% certain. Put another way, the error rate was 20% when subjects expected it to be 0%. In a series where subjects made true-or-false responses to general knowledge statements, they were overconfident at all levels. When they were 100% certain of their answer to a question, they were wrong 20% of the time.
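As a concrete illustration of calibration, consider the following minimal sketch (the numbers are hypothetical, not data from the studies above): answers are grouped by stated confidence, and each group's stated confidence is compared with its observed accuracy.

```python
# Minimal calibration check. The (confidence, correct) pairs below are
# hypothetical illustrations, not data from any cited study.
from collections import defaultdict

trials = [(1.00, True), (1.00, False), (1.00, True), (1.00, True), (1.00, False),
          (0.90, True), (0.90, True), (0.90, False), (0.80, True), (0.80, False)]

by_conf = defaultdict(list)
for conf, correct in trials:
    by_conf[conf].append(correct)

for conf in sorted(by_conf, reverse=True):
    outcomes = by_conf[conf]
    accuracy = sum(outcomes) / len(outcomes)
    # Overconfidence at this level = stated confidence minus observed accuracy.
    print(f"confidence {conf:.0%}: accuracy {accuracy:.0%}, gap {conf - accuracy:+.0%}")
```

Perfect calibration would make every gap zero; the overconfidence literature reports systematically positive gaps, largest on hard, unfamiliar questions.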

Overconfidence distinctions

Overestimation

One manifestation of the overconfidence effect is the tendency to overestimate one's standing on a dimension of judgment or performance. This subsection of overconfidence focuses on the certainty one feels in one's own ability, performance, level of control, or chance of success. This phenomenon is most likely to occur on hard tasks and hard items, when failure is likely, or when the individual making the estimate is not especially skilled. Overestimation has also been observed in domains other than one's own performance, including the illusion of control and the planning fallacy, discussed below.

Illusion of control

Illusion of control describes the tendency for people to behave as if they might have some control when in fact they have none. However, evidence does not support the notion that people systematically overestimate how much control they have; when they have a great deal of control, people tend to underestimate how much control they have.

Planning fallacy

The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how long it will take them to get things done. It is strongest for long and complicated tasks, and disappears or reverses for simple tasks that are quick to complete.

Contrary evidence

Wishful-thinking effects, in which people overestimate the likelihood of an event because of its desirability, are relatively rare. This may be in part because people engage in more defensive pessimism in advance of important outcomes, in an attempt to reduce the disappointment that follows overly optimistic predictions.

Overprecision

Overprecision is the excessive confidence that one knows the truth. For reviews, see Harvey (1997) or Hoffrage (2004). Much of the evidence for overprecision comes from studies in which participants are asked about their confidence that individual items are correct. This paradigm, while useful, cannot distinguish overestimation from overprecision: they are one and the same in these item-confidence judgments. After making a series of item-confidence judgments, however, when people try to estimate the number of items they got right, they do not tend to systematically overestimate their scores, even though the average of their item-confidence judgments exceeds the count of items they claim to have gotten right. One possible explanation is that the item-confidence judgments were inflated by overprecision, while the global score estimates show no systematic overestimation.

Confidence intervals

The strongest evidence of overprecision comes from studies in which participants are asked to indicate how precise their knowledge is by specifying a 90% confidence interval around estimates of specific quantities. If people were perfectly calibrated, their 90% confidence intervals would include the correct answer 90% of the time. In fact, hit rates are often as low as 50%, suggesting people have drawn their confidence intervals too narrowly, implying that they think their knowledge is more accurate than it actually is.
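The scoring behind such studies is simple to sketch (the intervals below are hypothetical examples, not data from any cited study): each stated 90% interval either contains the true value or it does not, and the overall hit rate is compared with the nominal 90%.

```python
# Score stated 90%-confidence intervals against the true quantities.
# The (lower, upper, truth) triples are hypothetical illustrations.
intervals = [
    (1200, 1500, 1969),  # interval misses the true value
    (30, 60, 42),        # interval contains it
    (5, 10, 25),         # misses
    (100, 300, 180),     # contains
]

hits = sum(lo <= truth <= hi for lo, hi, truth in intervals)
print(f"hit rate: {hits / len(intervals):.0%} (perfect calibration would approach 90%)")
# Hit rates near 50% mean the intervals were drawn far too narrowly: overprecision.
```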

Overplacement

Overplacement is perhaps the most prominent manifestation of the overconfidence effect. Overplacement is a judgment of one's own performance compared to that of another. This subsection of overconfidence occurs when people believe themselves to be better than others, or "better than average": placing or rating oneself above others. Overplacement occurs more often on simple tasks, ones we believe are easy to accomplish successfully. One proposed explanation is the motive to self-enhance.

Manifestations

Better-than-average effects
Perhaps the most celebrated better-than-average finding is Svenson's (1981) result that 93% of American drivers rate themselves as better than the median (a quick arithmetic check of why this signals bias appears after this list). The frequency with which school systems claim their students outperform national averages has been dubbed the "Lake Wobegon" effect, after Garrison Keillor's apocryphal town in which "all the children are above average." Overplacement has likewise been documented in a wide variety of other circumstances. Kruger (1999), however, showed that this effect is limited to "easy" tasks in which success is common or in which people feel competent. For difficult tasks, the effect reverses itself and people believe they are worse than others.
Comparative-optimism effects
Some researchers have claimed that people think good things are more likely to happen to them than to others, whereas bad events are less likely to happen to them than to others. But others have pointed out that prior work tended to examine good outcomes that happened to be common (such as owning one's own home) and bad outcomes that happened to be rare (such as being struck by lightning). Event frequency accounts for a proportion of prior findings of comparative optimism. People think common events (such as living past 70) are more likely to happen to them than to others, and rare events (such as living past 100) are less likely to happen to them than to others.
Positive illusions
Taylor and Brown (1988) have argued that people cling to overly positive beliefs about themselves, illusions of control, and beliefs in false superiority, because it helps them cope and thrive. Although there is some evidence that optimistic beliefs are correlated with better life outcomes, most of the research documenting such links is vulnerable to the alternative explanation that their forecasts are accurate.
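To see why Svenson's 93% figure above implies bias, note that by definition at most half of any group can be above its own median. A minimal sketch with simulated (hypothetical) skill scores makes the point:

```python
# At most ~50% of any group can exceed the group median, by definition.
import random

random.seed(0)
skills = [random.gauss(0, 1) for _ in range(10_000)]  # hypothetical driving-skill scores
median = sorted(skills)[len(skills) // 2]
above = sum(s > median for s in skills) / len(skills)
print(f"fraction truly above the median: {above:.0%}")  # ~50% by construction
print("fraction rating themselves above the median (Svenson, 1981): 93%")
```

The roughly 43-point gap between self-ratings and the logical ceiling is the overplacement.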

Practical implications

"Overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion."
— Daniel Kahneman
Overconfidence has been called the most "pervasive and potentially catastrophic" of all the cognitive biases to which human beings fall victim. It has been blamed for lawsuits, strikes, wars, and stock market bubbles and crashes. 

Strikes, lawsuits, and wars could arise from overplacement. If plaintiffs and defendants were prone to believe that they were more deserving, fair, and righteous than their legal opponents, that could help account for the persistence of inefficient enduring legal disputes. If corporations and unions were prone to believe that they were stronger and more justified than the other side, that could contribute to their willingness to endure labor strikes. If nations were prone to believe that their militaries were stronger than were those of other nations, that could explain their willingness to go to war.

Overprecision could have important implications for investing behavior and stock market trading. Because Bayesians cannot agree to disagree, classical finance theory has trouble explaining why, if stock market traders are fully rational Bayesians, there is so much trading in the stock market. Overprecision might be one answer. If market actors are too sure that their estimates of an asset's value are correct, they will be too willing to trade with others who have different information than they do.

Oskamp (1965) tested groups of clinical psychologists and psychology students on a multiple-choice task in which they drew conclusions from a case study. Along with their answers, subjects gave a confidence rating in the form of a percentage likelihood of being correct, allowing confidence to be compared against accuracy. As the subjects were given more information about the case study, their confidence increased from 33% to 53%, yet their accuracy did not significantly improve, staying under 30%. This experiment thus demonstrated overconfidence that increased as the subjects were given more information on which to base their judgment.

Even if there is no general tendency toward overconfidence, social dynamics and adverse selection could conceivably promote it. For instance, those most likely to have the courage to start a new business are those who most overplace their abilities relative to those of other potential entrants. And if voters find confident leaders more credible, then contenders for leadership learn that they should express more confidence than their opponents in order to win election.

Overconfidence can be beneficial to individual self-esteem as well as giving an individual the will to succeed in their desired goal. Just believing in oneself may give one the will to take one's endeavours further than those who do not.

Individual differences

Very high levels of core self-evaluations, a stable personality trait composed of locus of control, neuroticism, self-efficacy, and self-esteem, may lead to the overconfidence effect. People who have high core self-evaluations will think positively of themselves and be confident in their own abilities, although extremely high levels of core self-evaluations may cause an individual to be more confident than is warranted.

String Theory Meets Loop Quantum Gravity

by Sabine Hossenfelder

January 12, 2016

Eight decades have passed since physicists realized that the theories of quantum mechanics and gravity don’t fit together, and the puzzle of how to combine the two remains unsolved. In the last few decades, researchers have pursued the problem in two separate programs — string theory and loop quantum gravity — that are widely considered incompatible by their practitioners. But now some scientists argue that joining forces is the way forward.
Among the attempts to unify quantum theory and gravity, string theory has attracted the most attention. Its premise is simple: Everything is made of tiny strings. The strings may be closed unto themselves or have loose ends; they can vibrate, stretch, join or split. And in these manifold appearances lie the explanations for all phenomena we observe, both matter and space-time included.

Loop quantum gravity, by contrast, is concerned less with the matter that inhabits space-time than with the quantum properties of space-time itself. In loop quantum gravity, or LQG, space-time is a network. The smooth background of Einstein’s theory of gravity is replaced by nodes and links to which quantum properties are assigned. In this way, space is built up of discrete chunks. LQG is in large part a study of these chunks.

This approach has long been thought incompatible with string theory. Indeed, the conceptual differences are obvious and profound. For starters, LQG studies bits of space-time, whereas string theory investigates the behavior of objects within space-time. Specific technical problems separate the fields. String theory requires that space-time have 10 dimensions; LQG doesn’t work in higher dimensions. String theory also implies the existence of supersymmetry, in which all known particles have yet-undiscovered partners. Supersymmetry isn’t a feature of LQG.

These and other differences have split the theoretical physics community into deeply divergent camps. “Conferences have segregated,” said Jorge Pullin, a physicist at Louisiana State University and co-author of an LQG textbook. “Loopy people go to loopy conferences. Stringy people go to stringy conferences. They don’t even go to ‘physics’ conferences anymore. I think it’s unfortunate that it developed this way.”

But a number of factors may be pushing the camps closer together. New theoretical findings have revealed potential similarities between LQG and string theory. A young generation of string theorists has begun to look outside string theory for methods and tools that might be useful in the quest to understand how to create a “theory of everything.” And a still-raw paradox involving black holes and information loss has given everyone a fresh dose of humility.

Moreover, in the absence of experimental evidence for either string theory or LQG, a mathematical proof that the two are in fact opposite sides of the same coin would bolster the argument that physicists are progressing toward the correct theory of everything. Combining LQG and string theory would truly make the merged theory the only game in town.

An Unexpected Link

An effort to solve some of LQG's own internal problems has led to the first surprising link with string theory. Physicists who study LQG lack a clear understanding of how to zoom out from their network of space-time chunks and arrive at a large-scale description of space-time that dovetails with Einstein's general theory of relativity — our best theory of gravity. More worrying still, their theory can't recover the special case in which gravity can be neglected. It's a malaise that befalls any approach reliant on chunking up space-time: In Einstein's theory of special relativity, an object will appear to contract depending on how fast an observer is moving relative to it. This contraction also affects the size of space-time chunks, which are then perceived differently by observers with different velocities. The discrepancy leads to problems with the central tenet of Einstein's theory — that the laws of physics should be the same no matter what the observer's velocity.

“It’s difficult to introduce discrete structures without running into difficulties with special relativity,” said Pullin. In a brief paper he wrote in 2014 with frequent collaborator Rodolfo Gambini, a physicist at the University of the Republic in Montevideo, Uruguay, Pullin argued that making LQG compatible with special relativity necessitates interactions that are similar to those found in string theory.

That the two approaches have something in common seemed likely to Pullin since a seminal discovery in the late 1990s by Juan Maldacena, a physicist at the Institute for Advanced Study in Princeton, N.J. Maldacena matched up a gravitational theory in a so-called anti-de Sitter (AdS) space-time with a field theory (CFT — the “C” is for “conformal”) on the boundary of the space-time. By using this AdS/CFT identification, the gravitational theory can be described by the better-understood field theory.

The full version of the duality is a conjecture, but it has a well-understood limiting case that string theory plays no role in. Because strings don’t matter in this limiting case, it should be shared by any theory of quantum gravity. Pullin sees this as a contact point.

Herman Verlinde, a theoretical physicist at Princeton University who frequently works on string theory, finds it plausible that methods from LQG can help illuminate the gravity side of the duality. In a recent paper, Verlinde looked at AdS/CFT in a simplified model with only two dimensions of space and one of time, or “2+1” as physicists say. He found that the AdS space can be described by a network like those used in LQG. Even though the construction presently only works in 2+1, it offers a new way to think about gravity. Verlinde hopes to generalize the model to higher dimensions. “Loop quantum gravity has been seen too narrowly. My approach is to be inclusive. It’s much more intellectually forward-looking,” he said.
 
But even if LQG methods and string theory can be successfully combined to make headway in anti-de Sitter space, a question remains: How useful is that combination? Anti-de Sitter space-times have a negative cosmological constant (a number that describes the large-scale geometry of the universe); our universe has a positive one. We simply don't inhabit the mathematical construct that is AdS space.
Verlinde is pragmatic. “One idea is that [for a positive cosmological constant] one needs a totally new theory,” he said. “Then the question is how different that theory is going to look. AdS is at the moment the best hint for the structure we are looking for, and then we have to find the twist to get a positive cosmological constant.” He thinks it’s time well spent: “Though [AdS] doesn’t describe our world, it will teach us some lessons that will guide us where to go.”

Coming Together in a Black Hole

Verlinde and Pullin both point to another chance for the string theory and loop quantum gravity communities to come together: the mysterious fate of information that falls into a black hole. In 2012, four researchers based at the University of California, Santa Barbara, highlighted an internal contradiction in the prevailing theory. They argued that requiring a black hole to let information escape would destroy the delicate structure of empty space around the black hole’s horizon, thereby creating a highly energetic barrier — a black hole “firewall.” This firewall, however, is incompatible with the equivalence principle that underlies general relativity, which holds that observers can’t tell whether they’ve crossed the horizon. The incompatibility roiled string theorists, who thought they understood black hole information and now must revisit their notebooks.

But this isn’t a conundrum only for string theorists. “This whole discussion about the black hole firewalls took place mostly within the string theory community, which I don’t understand,” Verlinde said. “These questions about quantum information, and entanglement, and how to construct a [mathematical] Hilbert space – that’s exactly what people in loop quantum gravity have been working on for a long time.”

Meanwhile, in a development that went unnoted by much of the string community, the barrier once posed by supersymmetry and extra dimensions has fallen as well. A group around Thomas Thiemann at Friedrich-Alexander University in Erlangen, Germany, has extended LQG to higher dimensions and included supersymmetry, both of which were formerly the territory of string theory.

More recently, Norbert Bodendorfer, a former student of Thiemann's who is now at the University of Warsaw, has applied methods of LQG's loop quantization to anti-de Sitter space. He argues that LQG can be useful for the AdS/CFT duality in situations where string theorists don't know how to perform gravitational computations. Bodendorfer feels that the former chasm between string theory and LQG is fading away. "On some occasions I've had the impression that string theorists knew very little about LQG and didn't want to talk about it," he said. "But [the] younger people in string theory, they are very interested in what is going on at the interface."

“The biggest difference is in how we define our questions,” said Verlinde. “It’s more sociological than scientific, unfortunately.” He doesn’t think the two approaches are in conflict: “I’ve always viewed [string theory and loop quantum gravity] as parts of the same description. LQG is a method, it’s not a theory. It’s a method to think of quantum mechanics and geometry. It’s a method that string theorists can use and are actually using. These things are not incompatible.”

Not everyone is so convinced. Moshe Rozali, a string theorist at the University of British Columbia, remains skeptical of LQG: “The reason why I personally don’t work on LQG is the issue with special relativity,” he said. “If your approach does not respect the symmetries of special relativity from the outset, then you basically need a miracle to happen at one of your intermediate steps.” Still, Rozali said, some of the mathematical tools developed in LQG might come in handy. “I don’t think that there is any likelihood that string theory and LQG are going to converge to some middle ground,” he said. “But the methods are what people normally care about, and these are similar enough; the mathematical methods could have some overlap.”

Not everyone on the LQG side expects the two will merge either. Carlo Rovelli, a physicist at the University of Marseille and a founding father of LQG, believes his field ascendant. “The string planet is infinitely less arrogant than ten years ago, especially after the bitter disappointment of the non-appearance of supersymmetric particles,” he said. “It is possible that the two theories could be parts of a common solution … but I myself think it is unlikely. String theory seems to me to have failed to deliver what it had promised in the ’80s, and is one of the many ‘nice-idea-but-nature-is-not-like-that’ that dot the history of science. I do not really understand how can people still have hope in it.”

For Pullin, declaring victory seems premature: “There are LQG people now saying, ‘We are the only game in town.’ I don’t subscribe to this way of arguing. I think both theories are vastly incomplete.”


Thorium-based nuclear power (updated)

From Wikipedia, the free encyclopedia


Thorium-based nuclear power generation is fueled primarily by the nuclear fission of the isotope uranium-233, which is produced from the fertile element thorium. According to proponents, a thorium fuel cycle offers several potential advantages over a uranium fuel cycle—including much greater abundance of thorium on Earth, superior physical and nuclear fuel properties, and reduced nuclear waste production. However, development of thorium power has significant start-up costs. Proponents also cite the lack of weaponization potential as an advantage of thorium, while critics say that development of breeder reactors in general (including thorium reactors, which are breeders by nature) increases proliferation concerns. Since about 2008, nuclear energy experts have become more interested in thorium to supply nuclear fuel in place of uranium to generate nuclear power. This renewed interest has been highlighted in a number of scientific conferences, the latest of which, ThEC13, was held at CERN by iThEC and attracted over 200 scientists from 32 countries.

A nuclear reactor consumes certain specific fissile isotopes to produce energy. The three most practical types of nuclear reactor fuel are:
  1. Uranium-235, purified (i.e. "enriched") by reducing the amount of uranium-238 in natural mined uranium. Most nuclear power has been generated using low-enriched uranium (LEU), whereas highly enriched uranium (HEU) is necessary for weapons.
  2. Plutonium-239, transmuted from uranium-238 obtained from natural mined uranium.
  3. Uranium-233, transmuted from thorium-232, which is derived from natural mined thorium; this cycle is the subject of this article (the breeding chain is sketched below).
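For concreteness, the breeding chain behind option 3 is a neutron capture followed by two beta decays (standard nuclear data, with half-lives rounded):

$$ {}^{232}\mathrm{Th} + n \;\rightarrow\; {}^{233}\mathrm{Th} \;\xrightarrow{\beta^-,\ \sim 22\ \text{min}}\; {}^{233}\mathrm{Pa} \;\xrightarrow{\beta^-,\ \sim 27\ \text{days}}\; {}^{233}\mathrm{U} $$

The intermediate protactinium-233 step matters for reactor design: Pa-233 absorbs neutrons, so some molten-salt designs separate it from the core and let it decay to uranium-233 outside the neutron flux.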
Some believe thorium is key to developing a new generation of cleaner, safer nuclear power. According to a 2011 opinion piece by a group of scientists at the Georgia Institute of Technology, considering its overall potential, thorium-based power "can mean a 1000+ year solution or a quality low-carbon bridge to truly sustainable energy sources solving a huge portion of mankind’s negative environmental impact."
After studying the feasibility of using thorium, nuclear scientists Ralph W. Moir and Edward Teller suggested that thorium nuclear research should be restarted after a three-decade shutdown and that a small prototype plant should be built.

Background and brief history

Early thorium-based (MSR) nuclear reactor at Oak Ridge National Laboratory in the 1960s

After World War II, uranium-based nuclear reactors were built to produce electricity. These were similar to the reactor designs that produced material for nuclear weapons. During that period, the government of the United States also built an experimental molten salt reactor using U-233 fuel, the fissile material created by bombarding thorium with neutrons. The MSRE reactor, built at Oak Ridge National Laboratory, operated critical for roughly 15,000 hours from 1965 to 1969. In 1968, Nobel laureate and discoverer of plutonium, Glenn Seaborg, publicly announced to the Atomic Energy Commission, of which he was chairman, that the thorium-based reactor had been successfully developed and tested.
In 1973, however, the US government settled on uranium technology and largely discontinued thorium-related nuclear research. The reasons were that uranium-fueled reactors were more efficient, the research was proven, and thorium's breeding ratio was thought insufficient to produce enough fuel to support development of a commercial nuclear industry. As Moir and Teller later wrote, "The competition came down to a liquid metal fast breeder reactor (LMFBR) on the uranium-plutonium cycle and a thermal reactor on the thorium-233U cycle, the molten salt breeder reactor. The LMFBR had a larger breeding rate ... and won the competition." In their opinion, the decision to stop development of thorium reactors, at least as a backup option, “was an excusable mistake.”
Science writer Richard Martin states that nuclear physicist Alvin Weinberg, who was director at Oak Ridge and primarily responsible for the new reactor, lost his job as director because he championed development of the safer thorium reactors. Weinberg himself recalls this period:
[Congressman] Chet Holifield was clearly exasperated with me, and he finally blurted out, "Alvin, if you are concerned about the safety of reactors, then I think it may be time for you to leave nuclear energy." I was speechless. But it was apparent to me that my style, my attitude, and my perception of the future were no longer in tune with the powers within the AEC.
Martin explains that Weinberg's unwillingness to sacrifice potentially safe nuclear power for the benefit of military uses forced him to retire:
Weinberg realized that you could use thorium in an entirely new kind of reactor, one that would have zero risk of meltdown. . . . his team built a working reactor . . . . and he spent the rest of his 18-year tenure trying to make thorium the heart of the nation’s atomic power effort. He failed. Uranium reactors had already been established, and Hyman Rickover, de facto head of the US nuclear program, wanted the plutonium from uranium-powered nuclear plants to make bombs. Increasingly shunted aside, Weinberg was finally forced out in 1973.
Despite the documented history of thorium nuclear power, many of today's nuclear experts were unaware of it. According to Chemical & Engineering News, "most people—including scientists—have hardly heard of the heavy-metal element and know little about it...," noting a comment by a conference attendee that "it's possible to have a Ph.D. in nuclear reactor technology and not know about thorium energy." Nuclear physicist Victor J. Stenger, for one, first learned of it in 2012:
It came as a surprise to me to learn recently that such an alternative has been available to us since World War II, but not pursued because it lacked weapons applications.
Others, including former NASA scientist and thorium expert Kirk Sorensen, agree that "thorium was the alternative path that was not taken … " In a documentary interview, Sorensen stated that if the US had not discontinued its research in 1974 it could have "probably achieved energy independence by around 2000."

Possible benefits

The World Nuclear Association explains some of the possible benefits:
The thorium fuel cycle offers enormous energy security benefits in the long-term – due to its potential for being a self-sustaining fuel without the need for fast neutron reactors. It is therefore an important and potentially viable technology that seems able to contribute to building credible, long-term nuclear energy scenarios.
Moir and Teller agree, noting that the possible advantages of thorium include "utilization of an abundant fuel, inaccessibility of that fuel to terrorists or for diversion to weapons use, together with good economics and safety features … " Thorium is considered the "most abundant, most readily available, cleanest, and safest energy source on Earth," adds science writer Richard Martin.
  • Thorium is three times as abundant as uranium and nearly as abundant as lead and gallium in the Earth's crust. The Thorium Energy Alliance estimates "there is enough thorium in the United States alone to power the country at its current energy level for over 1,000 years." "America has buried tons as a by-product of rare earth metals mining," notes Evans-Pritchard. Almost all thorium is fertile Th-232, compared to uranium that is composed of 99.3% fertile U-238 and 0.7% more valuable fissile U-235.
  • It is difficult to make a practical nuclear bomb from a thorium reactor's byproducts. According to Alvin Radkowsky, designer of the world's first full-scale atomic electric power plant, "a thorium reactor's plutonium production rate would be less than 2 percent of that of a standard reactor, and the plutonium's isotopic content would make it unsuitable for a nuclear detonation." Several uranium-233 bombs have been tested, but the presence of uranium-232 tended to "poison" the uranium-233 in two ways: intense radiation from the uranium-232 made the material difficult to handle, and the uranium-232 led to possible pre-detonation. Separating the uranium-232 from the uranium-233 proved very difficult, although newer laser techniques could facilitate that process.
  • There is much less nuclear waste—up to two orders of magnitude less, state Moir and Teller, eliminating the need for large-scale or long-term storage; "Chinese scientists claim that hazardous waste will be a thousand times less than with uranium." The radioactivity of the resulting waste also drops to safe levels after just one or a few hundred years, compared to the tens of thousands of years needed for current nuclear waste to cool off.
  • According to Moir and Teller, "once started up [it] needs no other fuel except thorium because it makes most or all of its own fuel." This applies only to breeder reactors, which produce at least as much fissile material as they consume. Other reactors require additional fissile material, such as uranium-235 or plutonium.
  • The thorium fuel cycle is a potential way to produce long-term nuclear energy with low-radiotoxicity waste. In addition, the transition to thorium could be accomplished through the incineration of weapons-grade plutonium (WPu) or civilian plutonium.
  • Since all natural thorium can be used as fuel, no expensive fuel enrichment is needed. However, the same is true of U-238 as fertile fuel in the uranium-plutonium cycle.
  • Comparing the amount of thorium needed with coal, Nobel laureate Carlo Rubbia of CERN (the European Organization for Nuclear Research) estimates that one ton of thorium can produce as much energy as 200 tons of uranium, or 3,500,000 tons of coal (a rough order-of-magnitude check of this figure follows this list).
  • Liquid fluoride thorium reactors are designed to be meltdown proof. A plug at the bottom of the reactor melts in the event of a power failure or if temperatures exceed a set limit, draining the fuel into an underground tank for safe storage.
  • Mining thorium is safer and more efficient than mining uranium. Thorium's ore monazite generally contains higher concentrations of thorium than the percentage of uranium found in its respective ore. This makes thorium a more cost efficient and less environmentally damaging fuel source. Thorium mining is also easier and less dangerous than uranium mining, as the mine is an open pit which requires no ventilation, unlike underground uranium mines, where radon levels can be potentially harmful.
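Rubbia's coal equivalence is easy to sanity-check with a back-of-the-envelope estimate (assuming, for illustration, essentially complete fission of the bred fuel at roughly 200 MeV per nucleus and coal at about 29 GJ per tonne):

$$ N \approx \frac{10^{6}\,\text{g}}{232\,\text{g/mol}} \times 6.02\times10^{23}\,\text{mol}^{-1} \approx 2.6\times10^{27}\ \text{nuclei per tonne of thorium} $$

$$ E \approx N \times 200\,\text{MeV} \approx 2.6\times10^{27} \times 3.2\times10^{-11}\,\text{J} \approx 8\times10^{16}\,\text{J} $$

$$ \frac{8\times10^{16}\,\text{J}}{2.9\times10^{10}\,\text{J/tonne of coal}} \approx 3\times10^{6}\ \text{tonnes of coal} $$

This is in order-of-magnitude agreement with the quoted 3,500,000 tons; the exact ratio depends on the assumed coal energy content and on how completely the thorium is bred and fissioned.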
Summarizing some of the potential benefits, Martin offers his general opinion: "Thorium could provide a clean and effectively limitless source of power while allaying all public concern—weapons proliferation, radioactive pollution, toxic waste, and fuel that is both costly and complicated to process." From an economics viewpoint, UK business editor Ambrose Evans-Pritchard has suggested that "Obama could kill fossil fuels overnight with a nuclear dash for thorium," suggesting a "new Manhattan Project," and adding, "If it works, Manhattan II could restore American optimism and strategic leadership at a stroke …" Moir and Teller estimated in 2004 that the cost for their recommended prototype would be "well under $1 billion with operation costs likely on the order of $100 million per year," and as a result a "large-scale nuclear power plan" usable by many countries could be set up within a decade.
A report by the Bellona Foundation in 2013 concluded that the economics are quite speculative. Thorium nuclear reactors are unlikely to produce cheaper energy, but the management of spent fuel is likely to be cheaper than for uranium nuclear reactors.

Possible disadvantages

Some experts note possible specific disadvantages of thorium nuclear power:
  • Breeding in a thermal neutron spectrum is slow and requires extensive reprocessing, and the feasibility of that reprocessing remains an open question.
  • Significant and expensive testing, analysis and licensing work is first required, requiring business and government support. In a 2012 report on the use of thorium fuel with existing water-cooled reactors, the Bulletin of the Atomic Scientists suggested that it would "require too great an investment and provide no clear payoff", and that "from the utilities’ point of view, the only legitimate driver capable of motivating pursuit of thorium is economics".
  • There is a higher cost of fuel fabrication and reprocessing than in plants using traditional solid fuel rods.
  • Thorium, when irradiated for use in reactors, produces uranium-232, which is very dangerous to handle because of the gamma rays emitted in its decay chain. The irradiation process may be altered slightly by removing protactinium-233; the irradiation then yields relatively pure uranium-233, which can be used in nuclear weapons, making thorium a potential dual-purpose fuel.

Thorium-based nuclear power projects

Research and development of thorium-based nuclear reactors, primarily the liquid fluoride thorium reactor (LFTR), an MSR design, has been or is now being done in the United States, United Kingdom, Germany, Brazil, India, China, France, the Czech Republic, Japan, Russia, Canada, Israel, and the Netherlands. Conferences with experts from as many as 32 countries have been held, including one by the European Organization for Nuclear Research (CERN) in 2013, which focused on thorium as an alternative nuclear technology that does not require the production of nuclear waste. Recognized experts such as Hans Blix, former head of the International Atomic Energy Agency, call for expanded support of new nuclear power technology; Blix states that "the thorium option offers the world not only a new sustainable supply of fuel for nuclear power but also one that makes better use of the fuel's energy content."

Canada

CANDU reactors are capable of using thorium, and in 2013 Thorium Power Canada planned and proposed developing thorium power projects for Chile and Indonesia.
The proposed 10 MW demonstration reactor in Chile could be used to power a 20 million litre/day desalination plant. All land and regulatory approvals are currently in process.
Thorium Power Canada's proposal for the development of a 25 MW thorium reactor in Indonesia is meant to be a "demonstration power project" which could provide electrical power to the country’s power grid.
In 2018, the New Brunswick Energy Solutions Corporation announced the participation of Moltex Energy in the nuclear research cluster that will work on research and development on small modular reactor technology.

China

At the 2011 annual conference of the Chinese Academy of Sciences, it was announced that "China has initiated a research and development project in thorium MSR technology." In addition, Dr. Jiang Mianheng, son of China's former leader Jiang Zemin, led a thorium delegation in non-disclosure talks at Oak Ridge National Laboratory, Tennessee, and by late 2013 China had officially partnered with Oak Ridge to aid China in its own development. The World Nuclear Association notes that the China Academy of Sciences in January 2011 announced its R&D program, "claiming to have the world's largest national effort on it, hoping to obtain full intellectual property rights on the technology." According to Martin, "China has made clear its intention to go it alone," adding that China already has a monopoly over most of the world's rare earth minerals.
In March 2014, with reliance on coal-fired power having become a major cause of China's "smog crisis," the original goal of creating a working reactor within 25 years was cut to 10. "In the past, the government was interested in nuclear power because of the energy shortage. Now they are more interested because of smog," said Professor Li Zhong, a scientist working on the project. "This is definitely a race," he added.
In early 2012, it was reported that China, using components produced by the West and Russia, planned to build two prototype thorium MSRs by 2015, and had budgeted the project at $400 million and 400 workers. China also finalized an agreement with a Canadian nuclear technology company to develop improved CANDU reactors using thorium and uranium as fuel.

Germany, 1980s

The German THTR-300 was a prototype commercial power station using thorium as the fertile material and highly enriched U-235 as the fissile fuel. Though named the thorium high-temperature reactor, it mostly fissioned U-235. The THTR-300 was a helium-cooled high-temperature reactor with a pebble-bed core consisting of approximately 670,000 spherical fuel compacts, each 6 centimetres (2.4 in) in diameter, with particles of uranium-235 and thorium-232 fuel embedded in a graphite matrix. It fed power to Germany's grid for 432 days in the late 1980s before being shut down for cost, mechanical and other reasons.
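For a sense of the core's scale, a quick calculation from the figures quoted above (pebble count and diameter are from the source; the packing fraction is a standard value for randomly packed spheres, not from the source):

    import math

    NUM_PEBBLES = 670_000  # spherical fuel compacts in the THTR-300 core
    DIAMETER_M = 0.06      # 6 cm pebble diameter

    pebble_volume = (4.0 / 3.0) * math.pi * (DIAMETER_M / 2) ** 3
    total_solid = NUM_PEBBLES * pebble_volume
    print(f"solid pebble volume: ~{total_solid:.0f} m^3")  # ~76 m^3
    # Randomly packed spheres fill only ~60-64% of the bed, so the core
    # itself is larger, with the gaps carrying the helium coolant.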

India

India has one of the largest supplies of thorium in the world, but comparatively small quantities of uranium. India has projected meeting as much as 30% of its electrical demand with thorium by 2050.
In February 2014, the Bhabha Atomic Research Centre (BARC) in Mumbai, India, presented its latest design for a "next-generation nuclear reactor" that will burn thorium as its fuel, the Advanced Heavy Water Reactor (AHWR). BARC estimated that the reactor could function without an operator for 120 days. Validation of its core reactor physics was underway by late 2017.
According to Dr R K Sinha, chairman of their Atomic Energy Commission, "This will reduce our dependence on fossil fuels, mostly imported, and will be a major contribution to global efforts to combat climate change." Because of its inherent safety, they expect that similar designs could be set up "within" populated cities, like Mumbai or Delhi.
India's government is also developing up to 62 reactors, mostly thorium-fueled, which it expects to be operational by 2025. India is the "only country in the world with a detailed, funded, government-approved plan" to focus on thorium-based nuclear power. The country currently gets under 2% of its electricity from nuclear power, with the rest coming from coal (60%), hydroelectricity (16%), other renewable sources (12%) and natural gas (9%). It expects to eventually produce around 25% of its electricity from nuclear power. In 2009 the chairman of the Indian Atomic Energy Commission said that India has a "long-term goal of becoming energy-independent based on its vast thorium resources."
In late June 2012, India announced that its "first commercial fast reactor" was near completion, making India the most advanced country in thorium research. "We have huge reserves of thorium. The challenge is to develop technology for converting this to fissile material," stated the former chairman of India's Atomic Energy Commission. That vision of using thorium in place of uranium was set out in the 1950s by physicist Homi Bhabha. India's first commercial fast breeder reactor, the 500 MWe Prototype Fast Breeder Reactor (PFBR), was approaching completion at the Indira Gandhi Centre for Atomic Research, Kalpakkam, Tamil Nadu.
As of July 2013 the major equipment of the PFBR had been erected and the loading of "dummy" fuel in peripheral locations was in progress. The reactor was expected to go critical by September 2014. The original cost of the project was Rs. 3,492 crore, later revised to Rs. 5,677 crore; the Centre sanctioned that amount for building the PFBR, and "we will definitely build the reactor within that amount," Mr. Kumar asserted. Electricity generated from the PFBR would be sold to the State Electricity Boards at Rs. 4.44 a unit. BHAVINI, a state-owned enterprise, builds breeder reactors in India.
In 2013 India's 300 MWe AHWR (Advanced Heavy Water Reactor) was slated to be built at an undisclosed location. The design envisages a start-up with reactor-grade plutonium that will breed U-233 from Th-232; thereafter thorium is to be the only fuel. As of 2017, the design was in the final stages of validation.
By November 2015 the PFBR had been built and was expected to be commissioned, but delays have since postponed its commissioning to September 2016. India's commitment to long-term nuclear energy production is nonetheless underscored by the approval in 2015 of ten new sites for reactors of unspecified types, though procurement of primary fissile material, preferably plutonium, may be problematic because of India's low uranium reserves and production capacity.

Israel

In May 2010, researchers from Ben-Gurion University of the Negev in Israel and Brookhaven National Laboratory in New York began collaborating on the development of a self-sustaining thorium reactor, "meaning one that will produce and consume about the same amounts of fuel," something that is not possible with uranium in a light water reactor.

Japan

In June 2012, the Japanese utility Chubu Electric Power wrote that it regards thorium as "one of future possible energy resources."

Norway

In late 2012, Norway's privately owned Thor Energy, in collaboration with the government and Westinghouse, announced a four-year trial using thorium in an existing nuclear reactor. In 2013, Aker Solutions purchased patents from Nobel Prize-winning physicist Carlo Rubbia for the design of a proton accelerator-based thorium nuclear power plant.

United Kingdom

In Britain, one organisation promoting and examining research on thorium-based nuclear plants is the Alvin Weinberg Foundation. House of Lords member Bryony Worthington promotes thorium, calling it “the forgotten fuel” that could alter Britain’s energy plans. However, in 2010, the UK’s National Nuclear Laboratory (NNL) concluded that for the short to medium term "the thorium fuel cycle does not currently have a role to play," in that it is "technically immature, and would require a significant financial investment and risk without clear benefits," and that the benefits have been "overstated." Friends of the Earth UK considers research into it "useful" as a fallback option.

United States

In its January 2012 report to the United States Secretary of Energy, the Blue Ribbon Commission on America's Nuclear Future noted that a "molten-salt reactor using thorium [has] also been proposed." That same month it was reported that the US Department of Energy was "quietly collaborating with China" on thorium-based nuclear power designs using an MSR.
Some experts and politicians want thorium to be "the pillar of the U.S. nuclear future." Senators Harry Reid and Orrin Hatch have supported using $250 million in federal research funds to revive ORNL research. In 2009, Congressman Joe Sestak unsuccessfully attempted to secure funding for research and development of a reactor sized to power a destroyer, using thorium-based liquid fuel.
Alvin Radkowsky, chief designer of the world’s second full-scale atomic electric power plant in Shippingport, Pennsylvania, founded a joint US and Russian project in 1997 to create a thorium-based reactor, considered a "creative breakthrough." In 1992, while a resident professor in Tel Aviv, Israel, he founded the US company, Thorium Power Ltd., near Washington, D.C., to build thorium reactors.
The primary fuel of the proposed HT3R research project near Odessa, Texas, United States, would be ceramic-coated thorium beads. The earliest the reactor was expected to become operational was 2015.
On the research potential of thorium-based nuclear power, Richard L. Garwin, winner of the Presidential Medal of Freedom, and Georges Charpak advise further study of the Energy amplifier in their book Megawatts and Megatons (2001), pages 153-163.

World sources of thorium

World thorium reserves (2007)
Country Tons %
Australia 489,000 18.7%
USA 400,000 15.3%
Turkey 344,000 13.2%
India 319,000 12.2%
Brazil 302,000 11.6%
Venezuela 300,000 11.5%
Norway 132,000 5.1%
Egypt 100,000 3.8%
Russia 75,000 2.9%
Greenland (Denmark) 54,000 2.1%
Canada 44,000 1.7%
South Africa 18,000 0.7%
Other countries 33,000 1.2%
World Total 2,610,000 100.0%

Thorium is mostly found in the rare-earth phosphate mineral monazite, which contains up to about 12% thorium phosphate, but 6-7% on average. World monazite resources are estimated at about 12 million tons, two-thirds of which are in heavy-mineral sands deposits on the south and east coasts of India. There are substantial deposits in several other countries (see table "World thorium reserves"). Monazite is a good source of rare-earth elements (REEs), but it is currently not economical to mine, because the radioactive thorium produced as a byproduct would have to be stored indefinitely. However, if thorium-based power plants were adopted on a large scale, virtually all the world's thorium requirements could be supplied simply by refining monazite for its more valuable REEs.
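A back-of-the-envelope calculation puts those monazite figures in perspective. It uses only the resource estimate and average grade quoted above; converting thorium phosphate to elemental thorium would reduce the figure somewhat, as noted in the comments.

    # Rough scale estimate from the figures quoted above.
    MONAZITE_RESOURCES_TONS = 12e6      # estimated world monazite resources
    THORIUM_PHOSPHATE_FRACTION = 0.065  # ~6-7% thorium phosphate on average

    thorium_phosphate_tons = MONAZITE_RESOURCES_TONS * THORIUM_PHOSPHATE_FRACTION
    print(f"~{thorium_phosphate_tons:,.0f} tons of thorium phosphate")
    # ~780,000 tons; the elemental-thorium content is lower, since the
    # phosphate group carries part of the mass, but the order of magnitude
    # is comparable to several national entries in the reserves table above.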
Another estimate of reasonably assured reserves (RAR) and estimated additional reserves (EAR) of thorium comes from OECD/NEA, Nuclear Energy, "Trends in Nuclear Fuel Cycle", Paris, France (2001).
IAEA Estimates in tons (2005)
Country Tons %
India 519,000 21%
Australia 489,000 19%
USA 400,000 13%
Turkey 344,000 11%
Venezuela 302,000 10%
Brazil 302,000 10%
Norway 132,000 4%
Egypt 100,000 3%
Russia 75,000 2%
Greenland 54,000 2%
Canada 44,000 2%
South Africa 18,000 1%
Other countries 33,000 2%
World Total 2,810,000 100%

The preceding figures are reserves, and as such refer to the amount of thorium in high-concentration deposits inventoried so far and estimated to be extractable at current market prices; millions of times more exists in Earth's 3×10¹⁹-tonne crust, around 120 trillion tons of thorium, and lesser but still vast quantities of thorium exist at intermediate concentrations. Proved reserves are therefore a poor indicator of the total future supply of a mineral resource.
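As a consistency check, the numbers quoted above can be combined directly; the short sketch below uses only the crust mass, the crustal thorium estimate, and the world total from the 2007 reserves table, all from this document.

    CRUST_MASS_TONNES = 3e19         # mass of Earth's crust, as quoted above
    CRUSTAL_THORIUM_TONNES = 120e12  # ~120 trillion tonnes of thorium

    avg_ppm = CRUSTAL_THORIUM_TONNES / CRUST_MASS_TONNES * 1e6
    print(f"implied average concentration: ~{avg_ppm:.0f} ppm")  # ~4 ppm

    RESERVES_TONNES = 2.61e6  # world total from the 2007 reserves table
    ratio = CRUSTAL_THORIUM_TONNES / RESERVES_TONNES
    print(f"crustal thorium / inventoried reserves: ~{ratio:.0e}")  # ~5e+07
    # Tens of millions of times the inventoried reserves, which is why
    # proved reserves say little about the ultimate supply.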

Types of thorium-based reactors

According to the World Nuclear Association, there are seven types of reactors that can be designed to use thorium as a nuclear fuel. Six of these have entered operational service at some point; the seventh is still conceptual, although currently in development by several countries.
