
Wednesday, February 19, 2020

Pareto principle

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Pareto_principle
 
Pareto principle applied to community fundraising

The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes. Management consultant Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who noted the 80/20 connection while at the University of Lausanne in 1896, as published in his first work: Cours d'économie politique; in it, Pareto showed that approximately 80% of the land in Italy was owned by 20% of the population. 

It is an axiom of business management that "80% of sales come from 20% of clients".

Mathematically, the 80/20 rule is roughly followed by a power law distribution (also known as a Pareto distribution) for a particular set of parameters, and many natural phenomena have been shown empirically to exhibit such a distribution.

The Pareto principle is only tangentially related to Pareto efficiency. Pareto developed both concepts in the context of the distribution of income and wealth among the population.

In economics

The original observation was in connection with population and wealth. Pareto noticed that approximately 80% of Italy's land was owned by 20% of the population. He then carried out surveys on a variety of other countries and found to his surprise that a similar distribution applied.

A chart that gave the inequality a very visible and comprehensible form, the so-called "champagne glass" effect, was contained in the 1992 United Nations Development Program Report, which showed that distribution of global income is very uneven, with the richest 20% of the world's population controlling 82.7% of the world's income. Still, the Gini index of the world shows that nations have wealth distributions that vary greatly.


Distribution of world GDP, 1989
Quintile of population Income
Richest 20% 82.70%
Second 20% 11.75%
Third 20% 2.30%
Fourth 20% 1.85%
Poorest 20% 1.40%

The Pareto principle can also be seen as applying to taxation. In the US, the top 20% of earners paid roughly 80–90% of Federal income taxes in 2000 and 2006, and again in 2018.

However, it is important to note that while the principle has been associated with meritocracy, it should not be taken to carry further-reaching implications. As Alessandro Pluchino at the University of Catania in Italy points out, other attributes do not necessarily correlate with outcomes. Using talent as an example, he and other researchers state, “The maximum success never coincides with the maximum talent, and vice-versa,” attributing such outcomes largely to chance.

In computing

In computer science, the Pareto principle can be applied to optimization efforts. For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated. Lowell Arthur expressed that "20 percent of the code has 80 percent of the errors. Find them, fix them!" It has also been observed that, in general, 80% of a piece of software can be written in 20% of the total allocated time; conversely, the hardest 20% of the code takes 80% of the time. This factor is usually a part of COCOMO estimating for software coding.
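
As a rough illustration of this kind of defect analysis, the sketch below ranks hypothetical modules by reported bug count and finds the smallest "vital few" set covering about 80% of all reports. The module names and counts are invented for the example; only the cumulative-share logic reflects the Pareto-style triage described above.

```python
# Hypothetical Pareto-style bug triage: rank modules by reported defects and
# find the smallest set that accounts for ~80% of all reports.
from collections import Counter

bug_counts = Counter({
    "parser": 120, "renderer": 95, "network": 60, "cache": 15,
    "logging": 8, "config": 5, "cli": 4, "docs": 2, "i18n": 1,
})

total = sum(bug_counts.values())
cumulative = 0
vital_few = []
for module, count in bug_counts.most_common():   # most bug-ridden modules first
    vital_few.append(module)
    cumulative += count
    if cumulative / total >= 0.80:
        break

print(f"{len(vital_few)} of {len(bug_counts)} modules "
      f"({len(vital_few) / len(bug_counts):.0%}) account for "
      f"{cumulative / total:.0%} of {total} reported bugs: {vital_few}")
```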

In sports

It has been inferred that the Pareto principle applies to athletic training, where roughly 20% of the exercises and habits have 80% of the impact, so the trainee should not focus too heavily on a wide variety of training. This does not necessarily mean that having a healthy diet or going to the gym is unimportant, only that these are not as significant as the key activities. It is also important to note that this 80/20 rule has yet to be scientifically tested in controlled studies of athletic training.

In baseball, the Pareto principle has been perceived in Wins Above Replacement (an attempt to combine multiple statistics to determine a player's overall importance to a team). "15% of all the players last year produced 85% of the total wins with the other 85% of the players creating 15% of the wins. The Pareto principle holds up pretty soundly when it is applied to baseball."

Occupational health and safety

Occupational health and safety professionals use the Pareto principle to underline the importance of hazard prioritization. Assuming that 20% of the hazards account for 80% of the injuries, safety professionals can categorize hazards and then target the 20% of hazards that cause 80% of the injuries or accidents. Alternatively, if hazards are addressed in random order, a safety professional is more likely to fix one of the 80% of hazards that account for only a fraction of the remaining 20% of injuries.

Aside from ensuring efficient accident prevention practices, the Pareto principle also ensures hazards are addressed in an economical order, because the technique ensures the utilized resources are best used to prevent the most accidents.

Other applications

In engineering control theory, such as for electromechanical energy converters, the 80/20 principle applies to optimization efforts.

The law of the few can also be seen in betting, where it is said that with 20% of the effort you can match the accuracy of 80% of the bettors.

In the systems science discipline, Joshua M. Epstein and Robert Axtell created an agent-based simulation model called Sugarscape, from a decentralized modeling approach, based on individual behavior rules defined for each agent in the economy. Wealth distribution and Pareto's 80/20 principle became emergent in their results, which suggests the principle is a collective consequence of these individual rules.
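
A toy agent-based sketch can make the point about emergence concrete. The model below is not Sugarscape (Epstein and Axtell's agents move on a grid and harvest resources under heterogeneous vision and metabolism rules); it is a much simpler random pairwise-exchange economy, included only to show how a skewed, Pareto-like wealth distribution can emerge from identical agents following a purely local rule.

```python
# Toy exchange economy (not Sugarscape's rules): identical agents repeatedly
# wager a fraction of the poorer partner's wealth in random pairwise trades.
# Wealth concentration emerges even though no agent is privileged a priori.
import random

random.seed(1)
agents = [100.0] * 1000                  # 1,000 agents with equal starting wealth

for _ in range(200_000):                 # many random pairwise exchanges
    a, b = random.sample(range(len(agents)), 2)
    stake = 0.1 * min(agents[a], agents[b])
    if random.random() < 0.5:
        agents[a] += stake; agents[b] -= stake
    else:
        agents[a] -= stake; agents[b] += stake

agents.sort(reverse=True)
top_20_share = sum(agents[:len(agents) // 5]) / sum(agents)
print(f"Top 20% of agents hold {top_20_share:.0%} of total wealth")
```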
The Pareto principle has many applications in quality control. It is the basis for the Pareto chart, one of the key tools used in total quality control and Six Sigma techniques. The Pareto principle serves as a baseline for ABC-analysis and XYZ-analysis, widely used in logistics and procurement for the purpose of optimizing stock of goods, as well as costs of keeping and replenishing that stock.
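
For the ABC-analysis mentioned above, a minimal sketch is given below: items are ranked by annual consumption value and assigned to class A (roughly the first 80% of value), B (the next ~15%), or C (the remainder). The SKU names and values are invented, and the 80/15/5 cut-offs are a common convention rather than a fixed standard.

```python
# Sketch of ABC-analysis: classify items by cumulative share of total value.
def abc_classify(values, a_cut=0.80, b_cut=0.95):
    total = sum(values.values())
    classes, cumulative = {}, 0.0
    for item, value in sorted(values.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += value / total
        classes[item] = "A" if cumulative <= a_cut else ("B" if cumulative <= b_cut else "C")
    return classes

print(abc_classify({"SKU-1": 50_000, "SKU-2": 30_000, "SKU-3": 12_000,
                    "SKU-4": 5_000, "SKU-5": 2_000, "SKU-6": 1_000}))
# -> {'SKU-1': 'A', 'SKU-2': 'A', 'SKU-3': 'B', 'SKU-4': 'C', 'SKU-5': 'C', 'SKU-6': 'C'}
```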

In health care in the United States, in one instance 20% of patients have been found to use 80% of health care resources.

Some cases of super-spreading conform to the 20/80 rule, where approximately 20% of infected individuals are responsible for 80% of transmissions, although super-spreading can still be said to occur when super-spreaders account for a higher or lower percentage of transmissions. In epidemics with super-spreading, the majority of individuals infect relatively few secondary contacts.

The Dunedin Study has found 80% of crimes are committed by 20% of criminals. This statistic has been used to support both stop-and-frisk policies and broken windows policing, as catching those criminals committing minor crimes will supposedly net many criminals wanted for (or who would normally commit) larger ones.

Many video rental shops reported in 1988 that 80% of revenue came from 20% of videotapes. A video-chain executive discussed the "Gone with the Wind syndrome", however, in which every store had to offer classics like Gone with the Wind, Casablanca, or The African Queen to appear to have a large inventory, even if customers very rarely rented them.

Mathematical notes

The idea has a rule-of-thumb application in many places, but it is commonly misused. For example, it is a misuse to state that a solution to a problem "fits the 80/20 rule" just because it fits 80% of the cases; it must also be that the solution requires only 20% of the resources that would be needed to solve all cases. Additionally, it is a misuse of the 80/20 rule to draw conclusions from only a small number of categories or observations.

This is a special case of the wider phenomenon of Pareto distributions. If the Pareto index α, which is one of the parameters characterizing a Pareto distribution, is chosen as α = log₄ 5 ≈ 1.16, then one has 80% of effects coming from 20% of causes.
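
This can be checked numerically. For a Pareto distribution with tail index α > 1, the standard Lorenz-curve result is that the top fraction p of causes produces a share p^(1 − 1/α) of the effects; the short sketch below evaluates that expression at α = log₄ 5 and recovers the 80/20 split, along with the 64/4 and 51.2/0.8 iterates discussed below.

```python
# Verify that alpha = log_4(5) yields the 80/20 rule and its iterates.
import math

alpha = math.log(5, 4)                        # ≈ 1.16

def top_share(p):
    """Share of total effects produced by the top fraction p of causes."""
    return p ** (1 - 1 / alpha)

print(f"alpha = {alpha:.4f}")
print(f"top 20%  -> {top_share(0.20):.3f}")   # ≈ 0.800
print(f"top 4%   -> {top_share(0.04):.3f}")   # ≈ 0.640
print(f"top 0.8% -> {top_share(0.008):.3f}")  # ≈ 0.512
```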

It follows that one also has 80% of that top 80% of effects coming from 20% of that top 20% of causes, and so on. Eighty percent of 80% is 64%; 20% of 20% is 4%, so this implies a "64/4" law; and similarly implies a "51.2/0.8" law. Similarly for the bottom 80% of causes and bottom 20% of effects, the bottom 80% of the bottom 80% only cause 20% of the remaining 20%. This is broadly in line with the world population/wealth table above, where the bottom 60% of the people own 5.5% of the wealth, approximating to a 64/4 connection. 

The 64/4 correlation also implies a 32% 'fair' area between the 4% and 64%, where the lower 80% of the top 20% (16%) and upper 20% of the bottom 80% (also 16%) relates to the corresponding lower top and upper bottom of effects (32%). This is also broadly in line with the world population table above, where the second 20% control 12% of the wealth, and the bottom of the top 20% (presumably) control 16% of the wealth.

The term 80/20 is only a shorthand for the general principle at work. In individual cases, the distribution could just as well be, say, nearer to 90/10 or 70/30. There is no need for the two numbers to add up to the number 100, as they are measures of different things, (e.g., 'number of customers' vs 'amount spent'). However, each case in which they do not add up to 100%, is equivalent to one in which they do. For example, as noted above, the "64/4 law" (in which the two numbers do not add up to 100%) is equivalent to the "80/20 law" (in which they do add up to 100%). Thus, specifying two percentages independently does not lead to a broader class of distributions than what one gets by specifying the larger one and letting the smaller one be its complement relative to 100%. Thus, there is only one degree of freedom in the choice of that parameter.

Adding up to 100 leads to a nice symmetry. For example, if 80% of effects come from the top 20% of sources, then the remaining 20% of effects come from the lower 80% of sources. This is called the "joint ratio", and can be used to measure the degree of imbalance: a joint ratio of 96:4 is very imbalanced, 80:20 is significantly imbalanced (Gini index: 60%), 70:30 is moderately imbalanced (Gini index: 40%), and 55:45 is just slightly imbalanced (Gini index: 10%).

The Pareto principle is an illustration of a "power law" relationship, which also occurs in phenomena such as brush fires and earthquakes. Because it is self-similar over a wide range of magnitudes, it produces outcomes completely different from Normal or Gaussian distribution phenomena. This fact explains the frequent breakdowns of sophisticated financial instruments, which are modeled on the assumption that a Gaussian relationship is appropriate to something like stock price movements.

Equality measures


Gini coefficient and Hoover index

Using the "A : B" notation (for example, 0.8:0.2) and with A + B = 1, inequality measures like the Gini index (G) and the Hoover index (H) can be computed. In this case both are the same.

Tuesday, February 18, 2020

Only child

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Only_child
 
An only child is a person with no siblings, either biological or adopted.

Children who have half-siblings or step-siblings, whether living in the same house or a different one – especially those born considerably later – may have a family environment similar to that of only children, as may children whose full siblings are considerably younger than they are (generally by ten or more years).
 

Overview

Throughout history, only children were relatively uncommon. From around the middle of the 20th century, birth rates and average family sizes fell sharply, for a number of reasons including the increasing cost of raising children and more women having their first child later in life. The proportion of families in the U.S. with an only child increased during the Great Depression but fell during the post–World War II baby boom. After the Korean War ended in 1953, the South Korean government suggested that citizens each have one or two children to boost economic prosperity, which resulted in significantly lower birth rates and a larger number of only children in the country.

From 1979 to 2015, the one-child policy in the People's Republic of China restricted most parents to having only one child, although it was subject to local relaxations and individual circumstances (for instance when twins were conceived).

Families may have an only child for a variety of reasons, including personal preference, family planning, financial and emotional or physical health issues, desire to travel, stress in the family, educational advantages, late marriage, stability, focus, time constraints, fears over pregnancy, advanced age, illegitimate birth, infertility, divorce, and the death of a sibling or parent. Until around the mid-20th century, the premature death of one parent also contributed to a small percentage of marriages producing just one child, as did the then-rare occurrence of divorce.

Only children are sometimes said to be more likely to develop precocious interests (from spending more time with adults) and to feel lonely. They sometimes compensate for the aloneness by developing a stronger relationship with themselves or an active fantasy life that includes imaginary friends. Children whose only siblings are much older than them sometimes report feeling like an only child. Cited advantages of having an only child include the decreased financial burden, the absence of sibling rivalry, and the fact that the child can be taken to an event suitable for their age without having to bring along an uninterested sibling. A disadvantage is that it can be harder for an only child to single-handedly look after their aging parents.

Stereotypes

In Western countries, only children can be the subject of a stereotype that equates them with "spoiled brats". G. Stanley Hall was one of the first commentators to give only children a bad reputation when he referred to their situation as "a disease in itself". Even today, only children are commonly stereotyped as "spoiled, selfish, and bratty". While many only children receive a lot of attention and resources for their development, it is not clear that as a class they are overindulged or differ significantly from children with siblings. Susan Newman, a social psychologist at Rutgers University and the author of Parenting an Only Child, says that this is a myth. "People articulate that only children are spoiled, they're aggressive, they're bossy, they're lonely, they're maladjusted", she said. "There have been hundreds and hundreds of research studies that show that only children are no different from their peers." However, differences have been found. Research involving teacher ratings of U.S. children's social and interpersonal skills has scored only children lower in self-control and interpersonal skills. While a later study failed to find evidence this continued through middle and high school, a further study showed that deficits persisted until at least the fifth grade.

In China, perceived behavioral problems in only children have been called the Little Emperor Syndrome, and the lack of siblings has been blamed for a number of social ills such as materialism and crime. However, recent studies do not support these claims, and show no significant differences in personality between only children and children in larger families. The one-child policy has also been speculated to be the underlying cause of forced abortions, female infanticide, and the underreporting of female births, and has been suggested as a possible cause behind China's increasing number of crimes and gender imbalance. Regardless, a 2008 survey conducted by the Pew Research Center reports that 76% of the Chinese population supports the policy.

The popular media often posit that it is more difficult for only children to cooperate in a conventional family environment, as they have no competitors for the attention of their parents and other relatives. It is suggested that only children can become confused about the norms of ages and roles, and that a similar effect carries over into their relationships with peers and other young people throughout life. Furthermore, it is suggested that many feel that their parents place extra pressure and expectations on the only child, and that only children are often perfectionists. Only children are also noted to have a tendency to mature faster.

Scientific research

A 1987 quantitative review of 141 studies on 16 different personality traits failed to support the opinion, held by theorists including Alfred Adler, that only children are more likely to be maladjusted due to pampering. The study found no evidence of any greater prevalence of maladjustment in only children. The only statistically significant difference discovered was that only children possessed a higher achievement motivation, which Denise Polit and Toni Falbo attributed to their greater share of parental resources, expectations, and scrutiny exposing them to a greater degree of reward, and greater likelihood of punishment for falling short. A second analysis by the authors revealed that only children, children with only one sibling, and first-borns in general, score higher on tests of verbal ability than later-borns and children with multiple siblings.

According to the Resource Dilution Model, parental resources (e.g. time to read to the child) are important in development. Because these resources are finite, children with many siblings are thought to receive fewer resources. However, the Confluence Model suggests there is an opposing effect from the benefit that non-youngest children gain by tutoring younger siblings, though being tutored does not make up for the reduced share of parental resources. This provides one explanation for the poorer performance on tests of ability of only children compared to first-borns, commonly seen in the literature, though other explanations have also been suggested, such as the increased and earlier likelihood of experiencing parental separation or loss for last-born and only children, which may itself be the cause of their status.

In his book Maybe One, the environmental campaigner Bill McKibben argues in favor of a voluntary one child policy on the grounds of climate change and overpopulation. He reassures the reader with a narrative constructed from interviews with researchers and writers on only children, combined with snippets from the research literature, that this would not be harmful to child development. He argues that most cultural stereotypes are false, that there are not many differences between only children and other children, and where there are differences, they are favorable to the only child.

Most research on only children has been quantitative and focused on the behaviour of only children and on how others, for example teachers, assess that behaviour. Bernice Sorensen, in contrast, used qualitative methods in order to elicit meaning and to discover what only children themselves understand, feel or sense about their lives lived without siblings. Her research showed that over their life span only children often become more aware of their only-child status and are very much affected by society's stereotype of the only child, whether the stereotype is true or false. She argues in her book, Only Child Experience and Adulthood, that growing up in a predominantly sibling society affects only children, and that their lack of sibling relationships can have an important effect both on the way they see themselves and others and on how they interact with the world.

Later research by Cameron et al. (2011) controls for the endogeneity associated with being an only child. Parents who choose to have only one child could differ systematically in their characteristics from parents who choose to have more than one. The paper concludes that "those who grew up as only children as a consequence of the (one-child) policy (in China) are found to be less trusting, less trustworthy, less likely to take risks, and less competitive than if they had had siblings. They are also less optimistic, less conscientious, and more prone to neuroticism". Furthermore, according to Professor Cameron, it was found that "greater exposure to other children in childhood – for example, frequent interactions with cousins and/or attending childcare – was not a substitute for having siblings."

In his book Born to Rebel, Frank Sulloway provides evidence that birth order influences the development of the "big five personality traits" (also known as the Five Factor Model). Sulloway suggests that firstborns and only children are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to laterborns. However, his conclusions have been challenged by other researchers, who argue that birth order effects are weak and inconsistent. In one of the largest studies conducted on the effect of birth order on the Big Five, data from a national sample of 9,664 subjects found no association between birth order and scores on the NEO PI-R personality test.

Marriage gap

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Marriage_gap

The marriage gap describes observed economic and political disparities in the United States between those who are married and those who are single. The marriage gap can be compared to, but should not be confused with, the gender gap. As noted by Dr. W. Bradford Wilcox, American sociologist and director of the National Marriage Project at the University of Virginia, and Wendy Wang, director of research at the Institute for Family Studies, "College-educated and more affluent Americans enjoy relatively strong and stable marriages and the economic and social benefits that flow from such marriages. By contrast, not just poor but also working-class Americans face rising rates of family instability, single parenthood, and lifelong singleness."

Politics and marriage

As part of the marriage gap, unmarried people are "considerably more liberal" than married people.[1] With little variation among professed moderates, married people are about 9 percent more likely to identify as conservative, and single people about 10 percent more likely to identify as liberal. Married people tend to hold political opinions that differ from those of people who have never married.

Party affiliation in the United States

In the U.S., being a married woman is correlated with a higher level of support for the Republican Party, and being single with the Democratic Party. Marriage seems to have a moderate effect on party affiliation among single people. As of 2004, 32 percent of married people called themselves Republicans while 31 percent said they were Democrats. Among single people, 19 percent were Republicans and 38 percent Democrats. The difference is most striking between married and single women: married women are 15 percent more likely to report being Republicans, and single women are 11 percent more likely to report being Democrats.

Political issues

The marriage gap is evident on a range of political issues in the United States:
  • same-sex marriage, 11% more married people favour Constitutional amendments disallowing it
  • abortion, 14% more married people favour completely banning it
  • school vouchers, 3% more married people favour them

Marriage and cohabitation

It is not clear that legally or religiously formalized marriages are associated with better outcomes than long-term cohabitation. Part of the issue is that in many western countries, married couples will have cohabited before marrying so that the stability of the resulting marriage might be attributable to the cohabitation having worked.

A chief executive of an organisation that studies relationships is quoted as having said:
"Because we now have the acceptance of long-term cohabitation, people who go into marriage and stay in marriage are a more homogenous group. They are people who believe in certain things that contribute to stability. So the selection effect is really important. Yes, it's true that married couples on average stay together longer than cohabiting couples. But cohabitation is such an unhelpful word because it covers a whole ragbag of relationships, so it's not really comparable. We're better off talking about formal and informal marriages: those that have legal certificates, and those that don't. Is there any difference between a formal and informal marriage? If we really compare like with like, I'm not sure you'd see much difference." – Penny Mansfield

Interpreting the data

American marriage and family life are more divided today than they have ever been. "Less than half of poor Americans age 18 to 55 (just 26 percent) and 39 percent of working-class Americans are currently married, compared to more than half (56 percent) of middle- and upper-class Americans." And when it comes to coupling, poor and working-class Americans are more likely to substitute cohabitation for marriage: poor Americans are almost three times more likely to cohabit (13%), and working-class Americans are twice as likely to cohabit (10%), compared with their middle- and upper-class peers age 18–55 (5%). These findings suggest that lower income and less-educated Americans are more likely to be living outside of a partnership. Specifically, about six in 10 poor Americans are single, about five in 10 working-class Americans are single, and about four in 10 middle- and upper-class Americans are single.

And when it comes to childbearing, working-class and especially poor women are more likely to have children than their middle- and upper-class peers, and the children of poor women have a significantly higher chance of being born out of wedlock. Estimates derived from the 2013–15 National Survey of Family Growth indicate that poor women currently have about 2.4 children, compared with 1.8 children for working-class women and 1.7 children for middle- and upper-class women. According to the 2015 American Community Survey, 64 percent of children born to poor women are born out of wedlock, compared with 36 percent of children born to working-class women and 13 percent of children born to middle- and upper-class women.

With respect to divorce, working-class and poor adults aged 18–55 are more likely to divorce than their middle- and upper-class counterparts: 46 percent of poor Americans aged 18–55 are divorced, compared with 41 percent of working-class adults and 30 percent of middle- and upper-class adults.

The marriage gap is susceptible to multiple interpretations because it is not clear to what extent it is attributable to causation and what to correlation. It may be that people who already have a number of positive indicators of future wellbeing in terms of wealth and education are more likely to get married. "The distinction between correlation and causation cuts to the heart of the debate about marriage. The evidence is unequivocal; children raised by married couples are healthier, do better at school, commit fewer crimes, go further in education, report higher levels of wellbeing. It is easy for politicians to deduce - and assert - that married couples, therefore, produce superior children. But the children do not necessarily do better because their parents are married and there is actually very little evidence that marriage alone, in the absence of anything else, benefits children." – Penny Mansfield

Why the marriage divide?

As noted by W. Bradford Wilcox and Wendy Wang,
A series of interlocking economic, policy, civic, and cultural changes since the 1960s in America combined to create a perfect family storm for poor and working-class Americans. On the economic front, the move to a postindustrial economy in the 1970s made it more difficult for poor and working-class men to find and hold stable, decent-paying jobs. See, for example, the increase in unemployment for less-educated but not college-educated men depicted in Figure 9. The losses that less-educated men have experienced since the 1970s in job stability and real income have rendered them less “marriageable,” that is, less attractive as husbands—and more vulnerable to divorce.
Wilcox and Wang continue, however, and contend that it is not only economics. Citing Cornell sociologist Daniel Lichter and colleagues, they note that "shifts in state-level employment trends and macroeconomic performance do not explain the majority of the decline of marriage in this period; indeed, the retreat from marriage continued in the 1990s even as the economy boomed across much of the country in this decade." In the words of Lichter and colleagues, "Our results call into question the appropriateness of monocausal economic explanations of declining marriage." In fact, "The decline of marriage and rise of single parenthood in the late 1960s preceded the economic changes that undercut men’s wages and job stability in the 1970s."

There exist several possible reasons for the emergence of the Marriage Divide. First, as posited by W. Bradford Wilcox, Wendy Wang, and Nicholas Wolfinger, "because working-class and poor Americans have less of a social and economic stake in stable marriage, they depend more on cultural supports for marriage than do their middle- and upper-class peers."

Second, "Working-class and poor Americans have fewer cultural and educational resources to successfully navigate the increasingly deinstitutionalized character of dating, childbearing, and marriage. The legal scholar Amy Wax argues that the “moral deregulation” of matters related to sex, parenthood, marriage, and divorce proved more difficult for poor and working-class Americans to navigate than for more educated and affluent Americans because the latter group was and remains more likely to approach these matters with a disciplined, long-term perspective." "Today’s ethos of freedom and choice when it comes to dating, childbearing, and marriage is more difficult for working-class and poor Americans to navigate."

Third, "in recent years, middle- and upper-class Americans have rejected the most permissive dimensions of the counterculture for themselves and their children, even as poor and working-class Americans have adopted a more permissive orientation toward matters such as divorce and premarital sex. The end result has been that key norms, values, and virtues— from fidelity to attitudes about teen pregnancy—that sustain a strong marriage culture are now generally weaker in poor and working-class communities."

Metallicity

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Metallicity
 
The globular cluster M80. Stars in globular clusters are mainly older metal-poor members of Population II.

In astronomy, metallicity is the abundance of elements present in an object that are heavier than hydrogen or helium. Most of the physical matter in the Universe is in the form of hydrogen and helium, so astronomers use the word "metals" as a convenient short term for "all elements except hydrogen and helium". This usage is distinct from the usual physical definition of a solid metal. For example, stars and nebulae with relatively high abundances of carbon, nitrogen, oxygen, and neon are called "metal-rich" in astrophysical terms, even though those elements are non-metals in chemistry.

The presence of heavier elements hails from stellar nucleosynthesis, the theory that the majority of elements heavier than hydrogen and helium in the Universe ("metals", hereafter) are formed in the cores of stars as they evolve. Over time, stellar winds and supernovae deposit the metals into the surrounding environment, enriching the interstellar medium and providing recycling materials for the birth of new stars. It follows that older generations of stars, which formed in the metal-poor early Universe, generally have lower metallicities than those of younger generations, which formed in a more metal-rich Universe.

Observed changes in the chemical abundances of different types of stars, based on the spectral peculiarities that were later attributed to metallicity, led astronomer Walter Baade in 1944 to propose the existence of two different populations of stars. These became commonly known as Population I (metal-rich) and Population II (metal-poor) stars. A third stellar population was introduced in 1978, known as Population III stars. These extremely metal-poor stars were theorised to have been the "first-born" stars created in the Universe.

Common methods of calculation

Astronomers use several different methods to describe and approximate metal abundances, depending on the available tools and the object of interest. Some methods include determining the fraction of mass that is attributed to gas versus metals, or measuring the ratios of the number of atoms of two different elements as compared to the ratios found in the Sun.

Mass fraction

Stellar composition is often simply defined by the parameters X, Y and Z. Here X is the mass fraction of hydrogen, Y is the mass fraction of helium, and Z is the mass fraction of all the remaining chemical elements, so that X + Y + Z = 1.
In most stars, nebulae, H II regions, and other astronomical sources, hydrogen and helium are the two dominant elements. The hydrogen mass fraction is generally expressed as X = m_H / M, where M is the total mass of the system and m_H is the mass of the hydrogen it contains. Similarly, the helium mass fraction is denoted as Y = m_He / M. The remainder of the elements are collectively referred to as "metals", and the metallicity (the mass fraction of elements heavier than helium) can be calculated as Z = 1 − X − Y.
For the surface of the Sun, these parameters are measured to have the following values:

Description                Solar value
Hydrogen mass fraction     X ≈ 0.7381
Helium mass fraction       Y ≈ 0.2485
Metallicity                Z ≈ 0.0134

Due to the effects of stellar evolution, neither the initial composition nor the present day bulk composition of the Sun is the same as its present-day surface composition.

Chemical abundance ratios

The overall stellar metallicity is often defined using the total iron content of the star, as iron is among the easiest to measure with spectral observations in the visible spectrum (even though oxygen is the most abundant heavy element – see metallicities in HII regions below). The abundance ratio is defined as the logarithm of the ratio of a star's iron abundance compared to that of the Sun and is expressed thus:
[Fe/H] = log10(N_Fe / N_H)_star − log10(N_Fe / N_H)_Sun
where N_Fe and N_H are the number of iron and hydrogen atoms per unit of volume respectively. The unit often used for metallicity is the dex, a contraction of "decimal exponent". By this formulation, stars with a higher metallicity than the Sun have a positive logarithmic value, whereas those with a lower metallicity than the Sun have a negative value. For example, stars with a [Fe/H] value of +1 have 10 times the metallicity of the Sun (10^1); conversely, those with a [Fe/H] value of −1 have 1/10 (10^−1), while those with a [Fe/H] value of 0 have the same metallicity as the Sun, and so on. Young Population I stars have significantly higher iron-to-hydrogen ratios than older Population II stars. Primordial Population III stars are estimated to have a metallicity of less than −6.0, that is, less than a millionth of the abundance of iron in the Sun.
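
Since [Fe/H] is just a base-10 logarithm of a star's iron-to-hydrogen number ratio relative to the Sun's, converting between dex values and linear abundance ratios is a one-line calculation. The sketch below encodes the definition given above; the numerical inputs are placeholders, not measured abundances.

```python
# Iron abundance in dex, per [Fe/H] = log10((N_Fe/N_H)_star) - log10((N_Fe/N_H)_Sun).
import math

def fe_h(n_fe_star, n_h_star, n_fe_sun, n_h_sun):
    return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

# A star with ten times the solar iron-to-hydrogen ratio sits at +1 dex:
print(fe_h(10.0, 1.0e6, 1.0, 1.0e6))                  # -> 1.0

# Converting catalogue values back to linear ratios relative to the Sun:
for dex in (+1.0, 0.0, -1.0, -6.0):
    print(f"[Fe/H] = {dex:+.1f} -> {10 ** dex:g} x solar")
```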

The same notation is used to express variations in the abundances of other individual elements as compared to solar proportions. For example, the notation "[O/Fe]" represents the difference in the logarithm of the star's oxygen abundance versus its iron content compared to that of the Sun. In general, a given stellar nucleosynthetic process alters the proportions of only a few elements or isotopes, so a star or gas sample with nonzero [X/Fe] values may be showing the signature of particular nuclear processes.

Photometric colors

Astronomers can estimate metallicities through measured and calibrated systems that correlate photometric measurements and spectroscopic measurements. For example, the Johnson UBV filters can be used to detect an ultraviolet (UV) excess in stars, where a smaller UV excess indicates a larger presence of metals that absorb the UV radiation, thereby making the star appear "redder". The UV excess, δ(U−B), is defined as the difference between a star's U and B band magnitudes, compared to the difference between the U and B band magnitudes of metal-rich stars in the Hyades cluster. Unfortunately, δ(U−B) is sensitive to both metallicity and temperature: if two stars are equally metal-rich, but one is cooler than the other, they will likely have different δ(U−B) values. To help mitigate this degeneracy, a star's B−V color can be used as an indicator for temperature. Furthermore, the UV excess and B−V color can be corrected to relate the δ(U−B) value to iron abundances.

Other photometric systems that can be used to determine metallicities of certain astrophysical objects include the Strömgren system, the Geneva system, the Washington system, and the DDO system.

Metallicities in various astrophysical objects

Stars

At a given mass and age, a metal-poor star will be slightly warmer. Population II stars' metallicities are roughly 1/1000 to 1/10 of the Sun's ([Z/H] = −3.0 to −1.0), but the group appears cooler than Population I overall, as heavy Population II stars have long since died. Above 40 solar masses, metallicity influences how a star will die: outside the pair-instability window, lower metallicity stars will collapse directly to a black hole, while higher metallicity stars undergo a Type Ib/c supernova and may leave a neutron star.

Relationship between stellar metallicity and planets

A star's metallicity measurement is one parameter that helps determine whether a star has planets and the type of planets it has, as there is a direct correlation between metallicity and the type of planets a star may have. Measurements have demonstrated the connection between a star's metallicity and gas giant planets, like Jupiter and Saturn. The more metals in a star, and thus in its planetary system and proplyd, the more likely the system is to have gas giant and rocky planets. Current models show that the metallicity, along with the correct planetary system temperature and distance from the star, is key to planet and planetesimal formation. For two stars that have equal age and mass but different metallicity, the less metallic star is bluer. Among stars of the same color, less metallic stars emit more ultraviolet radiation. The Sun, with 8 planets and 5 known dwarf planets, is used as the reference, with a [Fe/H] of 0.00.

HII regions

Young, massive and hot stars (typically of spectral types O and B) in H II regions emit UV photons that ionize ground-state hydrogen atoms, knocking electrons and protons free; this process is known as photoionization. The free electrons can strike other atoms nearby, exciting bound metallic electrons into a metastable state, which eventually decay back into a ground state, emitting photons with energies that correspond to forbidden lines. Through these transitions, astronomers have developed several observational methods to estimate metal abundances in HII regions, where the stronger the forbidden lines in spectroscopic observations, the higher the metallicity. These methods are dependent on one or more of the following: the variety of asymmetrical densities inside HII regions, the varied temperatures of the embedded stars, and/or the electron density within the ionized region.

Theoretically, to determine the total abundance of a single element in an HII region, all transition lines should be observed and summed. However, this can be observationally difficult due to variation in line strength. Some of the most common forbidden lines used to determine metal abundances in HII regions are from oxygen (e.g. [O II] λ = (3727, 7318, 7324) Å, and [O III] λ = (4363, 4959, 5007) Å), nitrogen (e.g. [NII] λ = (5755, 6548, 6584) Å), and sulfur (e.g. [SII] λ = (6717,6731) Å and [SIII] λ = (6312, 9069, 9531) Å) in the optical spectrum, and the [OIII] λ = (52, 88) μm and [NIII] λ = 57 μm lines in the infrared spectrum. Oxygen has some of the stronger, more abundant lines in HII regions, making it a main target for metallicity estimates within these objects. To calculate metal abundances in HII regions using oxygen flux measurements, astronomers often use the R23 method, in which
R23 = ( [O II] λ3727 + [O III] λ4959 + [O III] λ5007 ) / Hβ
where the numerator is the sum of the fluxes from the oxygen emission lines measured at the rest-frame λ = (3727, 4959 and 5007) Å wavelengths, and the denominator is the flux from the Hβ emission line at the rest-frame λ = 4861 Å wavelength. This ratio is well defined through models and observational studies, but caution should be taken, as the ratio is often degenerate, providing both a low and a high metallicity solution, which can be broken with additional line measurements. Similarly, other strong forbidden-line ratios can be used, e.g. for sulfur, where
S23 = ( [S II] λλ6717, 6731 + [S III] λλ9069, 9531 ) / Hβ.
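
Forming R23 itself is a simple flux ratio, as the sketch below shows with placeholder line fluxes (any consistent flux units cancel out). Turning R23 into an oxygen abundance requires a separate empirical or model calibration and, as noted above, an additional line measurement to break the low/high-metallicity degeneracy.

```python
# R23 strong-line ratio from measured emission-line fluxes (placeholder values).
def r23(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    return (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta

print(r23(f_oii_3727=2.1, f_oiii_4959=1.2, f_oiii_5007=3.6, f_hbeta=1.0))   # -> 6.9
```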
Metal abundances within HII regions are typically less than 1%, with the percentage decreasing on average with distance from the Galactic Center.

Natural satellite

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Natural_satellite
 
Most of the 194 known natural satellites of the planets are irregular moons. Ganymede, followed by Titan, Callisto, Io and Earth's Moon are the largest natural satellites in the Solar System. Venus has no moons, while Neptune has 14.
 
A natural satellite, or moon, is, in the most common usage, an astronomical body that orbits a planet or minor planet (or sometimes another small Solar System body).

In the Solar System there are six planetary satellite systems containing 205 known natural satellites. Four IAU-listed dwarf planets are also known to have natural satellites: Pluto, Haumea, Makemake, and Eris. As of September 2018, there are 334 other minor planets known to have moons.

The Earth–Moon system is unique among planetary systems in that the ratio of the mass of the Moon to the mass of Earth is much greater than that of any other natural-satellite-to-planet ratio in the Solar System. At 3,474 km (2,158 miles) across, the Moon is 0.273 times the diameter of Earth. This is five times greater than the next largest moon-to-planet diameter ratio (with Neptune's largest moon at 0.055, Saturn's at 0.044, Jupiter's at 0.038 and Uranus's at 0.031). For the category of planetoids, among the five that are known in the Solar System, Charon has the largest ratio, being half (0.52) the diameter of Pluto.

Terminology

The first known natural satellite was the Moon, but it was considered a "planet" until Copernicus' introduction of De revolutionibus orbium coelestium in 1543. Until the discovery of the Galilean satellites in 1610 there was no opportunity for referring to such objects as a class. Galileo chose to refer to his discoveries as Planetæ ("planets"), but later discoverers chose other terms to distinguish them from the objects they orbited.

The first to use the term satellite to describe orbiting bodies was the German astronomer Johannes Kepler in his pamphlet Narratio de Observatis a se quatuor Iouis satellitibus erronibus ("Narration About Four Satellites of Jupiter Observed") in 1610. He derived the term from the Latin word satelles, meaning "guard", "attendant", or "companion", because the satellites accompanied their primary planet in their journey through the heavens.

The term satellite thus became the normal one for referring to an object orbiting a planet, as it avoided the ambiguity of "moon". In 1957, however, the launching of the artificial object Sputnik created a need for new terminology. The terms man-made satellite and artificial moon were very quickly abandoned in favor of the simpler satellite, and as a consequence, the term has become linked primarily with artificial objects flown in space – including, sometimes, even those not in orbit around a planet.

Because of this shift in meaning, the term moon, which had continued to be used in a generic sense in works of popular science and in fiction, has regained respectability and is now used interchangeably with natural satellite, even in scientific articles. When it is necessary to avoid both the ambiguity of confusion with Earth's natural satellite the Moon and the natural satellites of the other planets on the one hand, and artificial satellites on the other, the term natural satellite (using "natural" in a sense opposed to "artificial") is used. To further avoid ambiguity, the convention is to capitalize the word Moon when referring to Earth's natural satellite, but not when referring to other natural satellites.

Many authors define "satellite" or "natural satellite" as orbiting some planet or minor planet, synonymous with "moon" – by such a definition all natural satellites are moons, but Earth and other planets are not satellites. A few recent authors define "moon" as "a satellite of a planet or minor planet", and "planet" as "a satellite of a star" – such authors consider Earth as a "natural satellite of the Sun".

Definition of a moon

Size comparison of Earth and the Moon
 
There is no established lower limit on what is considered a "moon". Every natural celestial body with an identified orbit around a planet of the Solar System, some as small as a kilometer across, has been considered a moon, though objects a tenth that size within Saturn's rings, which have not been directly observed, have been called moonlets. Small asteroid moons (natural satellites of asteroids), such as Dactyl, have also been called moonlets.

The upper limit is also vague. Two orbiting bodies are sometimes described as a double planet rather than primary and satellite. Asteroids such as 90 Antiope are considered double asteroids, but they have not forced a clear definition of what constitutes a moon. Some authors consider the Pluto–Charon system to be a double (dwarf) planet. The most common dividing line on what is considered a moon rests upon whether the barycentre is below the surface of the larger body, though this is somewhat arbitrary, because it depends on distance as well as relative mass. 
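
The barycentre test mentioned above is easy to state quantitatively: for two bodies of masses m1 and m2 separated by a distance d, the centre of mass lies a distance d·m2/(m1 + m2) from the centre of the primary, and the question is whether that offset is smaller than the primary's radius. The sketch below applies this to rounded textbook values for the Earth–Moon and Pluto–Charon systems.

```python
# Does the barycentre lie inside the primary?  Uses rounded published values.
def barycentre_offset_km(m1_kg, m2_kg, separation_km):
    return separation_km * m2_kg / (m1_kg + m2_kg)

# Earth-Moon: offset ~4,700 km, well inside Earth's ~6,371 km radius.
print(f"{barycentre_offset_km(5.97e24, 7.35e22, 384_400):,.0f} km from Earth's centre (Earth radius ~6,371 km)")

# Pluto-Charon: offset ~2,100 km, outside Pluto's ~1,188 km radius.
print(f"{barycentre_offset_km(1.30e22, 1.59e21, 19_600):,.0f} km from Pluto's centre (Pluto radius ~1,188 km)")
```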

Origin and orbital characteristics

Two moons: Saturn's natural satellite Dione occults Enceladus
 
The natural satellites orbiting relatively close to the planet on prograde, uninclined circular orbits (regular satellites) are generally thought to have been formed out of the same collapsing region of the protoplanetary disk that created its primary. In contrast, irregular satellites (generally orbiting on distant, inclined, eccentric and/or retrograde orbits) are thought to be captured asteroids possibly further fragmented by collisions. Most of the major natural satellites of the Solar System have regular orbits, while most of the small natural satellites have irregular orbits. The Moon and possibly Charon are exceptions among large bodies in that they are thought to have originated by the collision of two large proto-planetary objects. The material that would have been placed in orbit around the central body is predicted to have reaccreted to form one or more orbiting natural satellites. As opposed to planetary-sized bodies, asteroid moons are thought to commonly form by this process. Triton is another exception; although large and in a close, circular orbit, its motion is retrograde and it is thought to be a captured dwarf planet.

Temporary satellites

The capture of an asteroid from a heliocentric orbit is not always permanent. According to simulations, temporary satellites should be a common phenomenon. The only observed example is 2006 RH120, which was a temporary satellite of Earth for nine months in 2006 and 2007.

Tidal locking

Most regular moons (natural satellites following relatively close and prograde orbits with small orbital inclination and eccentricity) in the Solar System are tidally locked to their respective primaries, meaning that the same side of the natural satellite always faces its planet. The only known exception is Saturn's natural satellite Hyperion, which rotates chaotically because of the gravitational influence of Titan.

In contrast, the outer natural satellites of the giant planets (irregular satellites) are too far away to have become locked. For example, Jupiter's Himalia, Saturn's Phoebe, and Neptune's Nereid have rotation periods in the range of ten hours, whereas their orbital periods are hundreds of days. 

Satellites of satellites

Artist impression of Rhea's proposed rings
 
No "moons of moons" or subsatellites (natural satellites that orbit a natural satellite of a planet) are currently known as of 2020. In most cases, the tidal effects of the planet would make such a system unstable. 

However, calculations performed after the recent detection of a possible ring system around Saturn's moon Rhea indicate that satellites orbiting Rhea could have stable orbits. Furthermore, the suspected rings are thought to be narrow, a phenomenon normally associated with shepherd moons. However, targeted images taken by the Cassini spacecraft failed to detect rings around Rhea.

It has also been proposed that Saturn's moon Iapetus had a satellite in the past; this is one of several hypotheses that have been put forward to account for its equatorial ridge.

Trojan satellites

Two natural satellites are known to have small companions at both their L4 and L5 Lagrangian points, sixty degrees ahead of and behind the body in its orbit. These companions are called trojan moons, as their orbits are analogous to the trojan asteroids of Jupiter. The trojan moons are Telesto and Calypso, which are the leading and following companions, respectively, of the Saturnian moon Tethys; and Helene and Polydeuces, the leading and following companions of the Saturnian moon Dione.

Asteroid satellites

The discovery of 243 Ida's natural satellite Dactyl in the early 1990s confirmed that some asteroids have natural satellites; indeed, 87 Sylvia has two. Some, such as 90 Antiope, are double asteroids with two comparably sized components. 

Shape

The relative masses of the natural satellites of the Solar System. Mimas, Enceladus, and Miranda are too small to be visible at this scale. All the irregularly shaped natural satellites, even added together, would also be too small to be visible.

Neptune's moon Proteus is the largest irregularly shaped natural satellite. All other known natural satellites that are at least the size of Uranus's Miranda have lapsed into rounded ellipsoids under hydrostatic equilibrium, i.e. are "round/rounded satellites". The larger natural satellites, being tidally locked, tend toward ovoid (egg-like) shapes: squat at their poles and with longer equatorial axes in the direction of their primaries (their planets) than in the direction of their motion. Saturn's moon Mimas, for example, has a major axis 9% greater than its polar axis and 5% greater than its other equatorial axis. Methone, another of Saturn's moons, is only around 3 km in diameter and visibly egg-shaped. The effect is smaller on the largest natural satellites, where their own gravity is greater relative to the effects of tidal distortion, especially those that orbit less massive planets or, as in the case of the Moon, at greater distances. 

Name       Satellite of   Difference in axes (km)   % of mean diameter
Mimas      Saturn         33.4 (20.4 / 13.0)        8.4 (5.1 / 3.3)
Enceladus  Saturn         16.6                      3.3
Miranda    Uranus         14.2                      3.0
Tethys     Saturn         25.8                      2.4
Io         Jupiter        29.4                      0.8
The Moon   Earth          4.3                       0.1

Geological activity

Of the nineteen known natural satellites in the Solar System that are large enough to have lapsed into hydrostatic equilibrium, several remain geologically active today. Io is the most volcanically active body in the Solar System, while Europa, Enceladus, Titan and Triton display evidence of ongoing tectonic activity and cryovolcanism. In the first three cases, the geological activity is powered by the tidal heating resulting from having eccentric orbits close to their giant-planet primaries. (This mechanism would have also operated on Triton in the past, before its orbit was circularized.) Many other natural satellites, such as Earth's Moon, Ganymede, Tethys and Miranda, show evidence of past geological activity, resulting from energy sources such as the decay of their primordial radioisotopes, greater past orbital eccentricities (due in some cases to past orbital resonances), or the differentiation or freezing of their interiors. Enceladus and Triton both have active features resembling geysers, although in the case of Triton solar heating appears to provide the energy. Titan and Triton have significant atmospheres; Titan also has hydrocarbon lakes. Io and Callisto also have atmospheres, although they are extremely thin. Four of the largest natural satellites, Europa, Ganymede, Callisto, and Titan, are thought to have subsurface oceans of liquid water, while smaller Enceladus may have localized subsurface liquid water.

Natural satellites of the Solar System

Euler diagram showing the types of bodies in the Solar System.
 
Of the objects within our Solar System known to have natural satellites, there are 76 in the asteroid belt (five with two each), four Jupiter trojans, 39 near-Earth objects (two with two satellites each), and 14 Mars-crossers. There are also 84 known natural satellites of trans-Neptunian objects. Some 150 additional small bodies have been observed within the rings of Saturn, but only a few were tracked long enough to establish orbits. Planets around other stars are likely to have satellites as well, and although numerous candidates have been detected to date, none have yet been confirmed.

Of the inner planets, Mercury and Venus have no natural satellites; Earth has one large natural satellite, known as the Moon; and Mars has two tiny natural satellites, Phobos and Deimos. The giant planets have extensive systems of natural satellites, including half a dozen comparable in size to Earth's Moon: the four Galilean moons, Saturn's Titan, and Neptune's Triton. Saturn has an additional six mid-sized natural satellites massive enough to have achieved hydrostatic equilibrium, and Uranus has five. It has been suggested that some satellites may potentially harbour life.

Among the identified dwarf planets, Ceres has no known natural satellites. Pluto has the relatively large natural satellite Charon and four smaller natural satellites: Styx, Nix, Kerberos, and Hydra. Haumea has two natural satellites, and Eris and Makemake have one each. The Pluto–Charon system is unusual in that the center of mass lies in open space between the two, a characteristic sometimes associated with a double-planet system.

The seven largest natural satellites in the Solar System (those bigger than 2,500 km across) are Jupiter's Galilean moons (Ganymede, Callisto, Io, and Europa), Saturn's moon Titan, Earth's Moon, and Neptune's captured natural satellite Triton. Triton, the smallest of these, has more mass than all smaller natural satellites together. Similarly, in the next size group of nine mid-sized natural satellites between 1,000 km and 1,600 km across (Titania, Oberon, Rhea, Iapetus, Charon, Ariel, Umbriel, Dione, and Tethys), the smallest, Tethys, has more mass than all smaller natural satellites together. As well as the natural satellites of the various planets, there are also over 80 known natural satellites of the dwarf planets, minor planets and other small Solar System bodies. Some studies estimate that up to 15% of all trans-Neptunian objects could have satellites.

Computer-aided software engineering

From Wikipedia, the free encyclopedia ...