
Monday, February 19, 2024

Law of large numbers

From Wikipedia, the free encyclopedia
An illustration of the law of large numbers using a particular run of rolls of a single die. As the number of rolls in this run increases, the average of the values of all the results approaches 3.5. Although each run would show a distinctive shape over a small number of throws (at the left), over a large number of rolls (to the right) the shapes would be extremely similar.

In probability theory, the law of large numbers (LLN) is a mathematical theorem that states that the average of the results obtained from a large number of independent and identical random samples converges to the true value, if it exists. More formally, the LLN states that given a sample of independent and identically distributed values, the sample mean converges to the true mean.

The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when a large number of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).

The LLN only applies to the average of the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that the sum of n results gets close to the expected value times n as n increases.

Throughout its history, many mathematicians have refined this law. Today, the LLN is used in many fields including statistics, probability theory, economics, and insurance.

Examples

For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of the average of the rolls is:

$$\frac{1+2+3+4+5+6}{6} = 3.5.$$
According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled.
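As a quick numerical sketch of this convergence (the sample sizes and seed below are arbitrary illustrative choices, not part of the article):

```python
import random

# Roll a fair six-sided die n times and report the sample mean, which the
# LLN says should approach the expected value (1+2+...+6)/6 = 3.5.
def sample_mean_of_rolls(n, seed=0):
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n)) / n

for n in (10, 100, 10_000, 1_000_000):
    print(n, sample_mean_of_rolls(n))
```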

It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and the average of n such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency.

A graphical example of the law of large numbers used for two dice rolls. The sum of the two dice fluctuates in the first few rolls, but as the number of rolls increases, the average of the sum of the two dice approaches the expected value of 7.

For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to 1/2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly 1/2. In particular, the proportion of heads after n flips will almost surely converge to 1/2 as n approaches infinity.

Although the proportion of heads (and tails) approaches 1/2, almost surely the absolute difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected difference grows, but at a slower rate than the number of flips.
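A short simulation sketch of both effects (the checkpoints and seed are illustrative choices; a flip is modeled as a uniform draw):

```python
import random

# Track the proportion of heads (which converges to 1/2) alongside the
# absolute difference |#heads - #tails| (which typically keeps growing).
rng = random.Random(1)
heads = 0
for n in range(1, 1_000_001):
    heads += rng.random() < 0.5
    if n in (100, 10_000, 1_000_000):
        tails = n - heads
        print(n, heads / n, abs(heads - tails))
```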

Another good example of the LLN is the Monte Carlo method: a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The larger the number of repetitions, the better the approximation tends to be. The method is important mainly because, sometimes, it is difficult or impossible to use other approaches.

Limitation

The average of the results obtained from a large number of trials may fail to converge in some cases. For instance, the average of n results taken from the Cauchy distribution or some Pareto distributions (α<1) will not converge as n becomes larger; the reason is heavy tails. The Cauchy distribution and the Pareto distribution represent two cases: the Cauchy distribution does not have an expectation, whereas the expectation of the Pareto distribution (α<1) is infinite. One way to generate the Cauchy-distributed example is where the random numbers equal the tangent of an angle uniformly distributed between −90° and +90°. The median is zero, but the expected value does not exist, and indeed the average of n such variables has the same distribution as one such variable. It does not converge in probability toward zero (or any other value) as n goes to infinity.
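This failure is easy to observe numerically. The following sketch uses the tangent construction just described (the sample sizes and seed are arbitrary choices):

```python
import math
import random

# tan of an angle uniform on (-pi/2, pi/2) is standard-Cauchy; the running
# average does not settle down as n grows, since no expected value exists.
rng = random.Random(2)
total = 0.0
for n in range(1, 100_001):
    total += math.tan(math.pi * (rng.random() - 0.5))
    if n in (10, 1_000, 100_000):
        print(n, total / n)
```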

And if the trials embed a selection bias, typical in human economic/rational behaviour, the law of large numbers does not help in resolving the bias: even if the number of trials is increased, the selection bias remains.

History

Diffusion is an example of the law of large numbers. Initially, there are solute molecules on the left side of a barrier (magenta line) and none on the right. The barrier is removed, and the solute diffuses to fill the whole container.
  • Top: With a single molecule, the motion appears to be quite random.
  • Middle: With more molecules, there is clearly a trend where the solute fills the container more and more uniformly, but there are also random fluctuations.
  • Bottom: With an enormous number of solute molecules (too many to see), the randomness is essentially gone: The solute appears to move smoothly and systematically from high-concentration areas to low-concentration areas. In realistic situations, chemists can describe diffusion as a deterministic macroscopic phenomenon (see Fick's laws), despite its underlying random nature.

The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S. D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers"). Thereafter, it was known under both names, but the "law of large numbers" is most frequently used.

After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin. Markov showed that the law can apply to a random variable that does not have a finite variance under some other weaker assumption, and Khinchin showed in 1929 that if the series consists of independent identically distributed random variables, it suffices that the expected value exists for the weak law of large numbers to be true. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak.

Forms

There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X1, X2, ... is an infinite sequence of independent and identically distributed (i.i.d.) Lebesgue integrable random variables with expected value E(X1) = E(X2) = ... = µ, both versions of the law state that the sample average

$$\bar{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)$$

converges to the expected value:

$$\bar{X}_n \to \mu \quad \text{as } n \to \infty. \tag{1}$$

(Lebesgue integrability of Xj means that the expected value E(Xj) exists according to Lebesgue integration and is finite. It does not mean that the associated probability measure is absolutely continuous with respect to Lebesgue measure.)

Introductory probability texts often additionally assume identical finite variance $\operatorname{Var}(X_i) = \sigma^2$ (for all $i$) and no correlation between the random variables. In that case, the variance of the average of n random variables is

$$\operatorname{Var}(\bar{X}_n) = \frac{1}{n^2}\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n},$$

which can be used to shorten and simplify the proofs. This assumption of finite variance is not necessary: large or infinite variance will make the convergence slower, but the LLN holds anyway.
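A quick numeric check of this identity (using Uniform(0,1) draws, for which σ² = 1/12; the sample sizes and seed are illustrative):

```python
import random
import statistics

# Estimate Var(mean of n draws) from 20,000 repeated samples and compare
# against the theoretical value sigma^2 / n.
rng = random.Random(3)
n, reps = 50, 20_000
means = [sum(rng.random() for _ in range(n)) / n for _ in range(reps)]
print("empirical:", statistics.variance(means))
print("theory   :", (1 / 12) / n)
```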

Mutual independence of the random variables can be replaced by pairwise independence or exchangeability in both versions of the law.

The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables.

Weak law

Simulation illustrating the law of large numbers. Each frame, a coin that is red on one side and blue on the other is flipped, and a dot is added in the corresponding column. A pie chart shows the proportion of red and blue so far. Notice that while the proportion varies significantly at first, it approaches 50% as the number of trials increases.

The weak law of large numbers (also called Khinchin's law) states that given a collection of i.i.d. samples from a random variable with finite mean, the sample mean converges in probability to the expected value:

$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty. \tag{2}$$

That is, for any positive number ε,

$$\lim_{n\to\infty} \Pr\left(\left|\bar{X}_n - \mu\right| < \varepsilon\right) = 1.$$

Interpreting this result, the weak law states that for any nonzero margin ε, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin.

As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of the average of the first n values goes to zero as n goes to infinity. As an example, assume that each random variable in the series follows a Gaussian distribution (normal distribution) with mean zero, but with variance equal to $2n/\log(n+1)$, which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to $n^2/\log n$. The variance of the average is therefore asymptotic to $1/\log n$ and goes to zero.
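Since the variance of the average here can be computed exactly from the stated variances, a small sketch can confirm the $1/\log n$ decay (the cutoffs below are arbitrary choices):

```python
import math

# Var(average of X_1..X_n) = (sum of variances) / n^2 for independent
# variables; with Var(X_k) = 2k/log(k+1) this tracks 1/log n toward zero.
for n in (10**2, 10**4, 10**6):
    var_sum = sum(2 * k / math.log(k + 1) for k in range(1, n + 1))
    print(n, var_sum / n**2, 1 / math.log(n))  # the two columns track each other
```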

There are also examples of the weak law applying even though the expected value does not exist.

Strong law

The strong law of large numbers (also called Kolmogorov's law) states that the sample average converges almost surely to the expected value:

$$\bar{X}_n \xrightarrow{\text{a.s.}} \mu \quad \text{as } n \to \infty. \tag{3}$$

That is,

$$\Pr\left(\lim_{n\to\infty} \bar{X}_n = \mu\right) = 1.$$
What this means is that the probability that, as the number of trials n goes to infinity, the average of the observations converges to the expected value, is equal to one. The modern proof of the strong law is more complex than that of the weak law, and relies on passing to an appropriate subsequence.

The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem. This view justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average".

The law in (3) is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). However, the weak law is known to hold in certain conditions where the strong law does not hold, and then the convergence is only weak (in probability). See differences between the weak law and the strong law.

The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for the average to converge almost surely on something (this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that).

If the summands are independent but not identically distributed, then

$$\bar{X}_n - \operatorname{E}\left[\bar{X}_n\right] \xrightarrow{\text{a.s.}} 0,$$

provided that each $X_k$ has a finite second moment and

$$\sum_{k=1}^{\infty} \frac{1}{k^2}\operatorname{Var}[X_k] < \infty.$$
This statement is known as Kolmogorov's strong law, see e.g. Sen & Singer (1993, Theorem 2.3.10).

Differences between the weak law and the strong law

The weak law states that for a specified large n, the average $\bar{X}_n$ is likely to be near μ. Thus, it leaves open the possibility that $|\bar{X}_n - \mu| > \varepsilon$ happens an infinite number of times, although at infrequent intervals. (Not necessarily $|\bar{X}_n - \mu| \ne 0$ for all n.)

The strong law shows that this almost surely will not occur. It does not imply that with probability 1, we have that for any ε > 0 the inequality $|\bar{X}_n - \mu| < \varepsilon$ holds for all large enough n, since the convergence is not necessarily uniform on the set where it holds.

The strong law does not hold in the following cases, but the weak law does.

  1. Let X be an exponentially distributed random variable with parameter 1. The random variable $\frac{\sin(X)e^X}{X}$ has no expected value according to Lebesgue integration, but using conditional convergence and interpreting the integral as a Dirichlet integral, which is an improper Riemann integral, we can say: $E\left[\frac{\sin(X)e^X}{X}\right] = \int_0^\infty \frac{\sin(x)e^x}{x} e^{-x}\,dx = \frac{\pi}{2}.$
  2. Let X be a geometrically distributed random variable with probability 0.5. The random variable $\frac{2^X(-1)^X}{X}$ does not have an expected value in the conventional sense because the infinite series is not absolutely convergent, but using conditional convergence, we can say: $E\left[\frac{2^X(-1)^X}{X}\right] = \sum_{x=1}^{\infty} \frac{2^x(-1)^x}{x}\,2^{-x} = -\ln 2.$
  3. If the cumulative distribution function of a random variable is $1 - F(x) = \frac{e}{2x\ln x}$ for $x \ge e$ and $F(x) = \frac{e}{-2x\ln(-x)}$ for $x \le -e$, then it has no expected value, but the weak law is true.
  4. Let $X_k$ be plus or minus $\sqrt{k/\log\log\log k}$ (starting at sufficiently large k so that the denominator is positive) with probability 1/2 for each. The variance of $X_k$ is then $k/\log\log\log k$. Kolmogorov's strong law does not apply because the partial sum in his criterion up to k = n is asymptotic to $\log n/\log\log\log n$, and this is unbounded. If we replace the random variables with Gaussian variables having the same variances, namely $k/\log\log\log k$, then the average at any point will also be normally distributed. The width of the distribution of the average will tend toward zero (standard deviation asymptotic to $1/\sqrt{2\log\log\log n}$), but for a given ε, there is probability which does not go to zero with n that the average sometime after the nth trial will come back up to ε. Since the width of the distribution of the average is not zero, it must have a positive lower bound p(ε), which means there is a probability of at least p(ε) that the average will attain ε after n trials. It will happen with probability p(ε)/2 before some m which depends on n. But even after m, there is still a probability of at least p(ε) that it will happen. (This seems to indicate that p(ε) = 1 and the average will attain ε an infinite number of times.)

Uniform laws of large numbers

There are extensions of the law of large numbers to collections of estimators, where the convergence is uniform over the collection; thus the name uniform law of large numbers.

Suppose f(x,θ) is some function defined for θ ∈ Θ, and continuous in θ. Then for any fixed θ, the sequence {f(X1,θ), f(X2,θ), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is the pointwise (in θ) convergence.

A particular example of a uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If

  1. Θ is compact,
  2. f(x,θ) is continuous at each θ ∈ Θ for almost all x, and a measurable function of x at each θ,
  3. there exists a dominating function d(x) such that E[d(X)] < ∞ and $\|f(x,\theta)\| \le d(x)$ for all θ ∈ Θ,

then E[f(X,θ)] is continuous in θ, and

$$\sup_{\theta \in \Theta} \left\| \frac{1}{n}\sum_{i=1}^{n} f(X_i,\theta) - \operatorname{E}[f(X,\theta)] \right\| \xrightarrow{P} 0.$$
This result is useful to derive consistency of a large class of estimators (see Extremum estimator).
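A small illustrative sketch (the function f(x,θ) = cos(θx), the compact set Θ = [0.1, 5], and the grid are choices of ours, not from the statement above): for X ~ Uniform(0,1), E[f(X,θ)] = sin(θ)/θ, and the supremum gap shrinks with n.

```python
import math
import random

rng = random.Random(4)
thetas = [0.1 + (5.0 - 0.1) * i / 19 for i in range(20)]  # grid on [0.1, 5]
for n in (100, 10_000, 100_000):
    xs = [rng.random() for _ in range(n)]
    # sup over the theta-grid of |sample mean - E[f(X, theta)]|
    sup_gap = max(
        abs(sum(math.cos(t * x) for x in xs) / n - math.sin(t) / t)
        for t in thetas
    )
    print(n, sup_gap)
```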

Borel's law of large numbers

Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event is expected to occur approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one,

$$\frac{N_n(E)}{n} \to p \quad \text{as } n \to \infty.$$

This theorem makes rigorous the intuitive notion of probability as the expected long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory.
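For a concrete sketch, take E to be the event that a fair die shows a six, so p = 1/6 (an illustrative choice; the checkpoints and seed are arbitrary):

```python
import random

# N_n(E)/n should approach p = 1/6 as the number of trials n grows.
rng = random.Random(5)
count = 0
for n in range(1, 600_001):
    count += rng.randint(1, 6) == 6
    if n in (600, 60_000, 600_000):
        print(n, count / n, "target:", 1 / 6)
```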

Chebyshev's inequality. Let X be a random variable with finite expected value μ and finite non-zero variance σ². Then for any real number k > 0,

$$\Pr\left(|X - \mu| \ge k\sigma\right) \le \frac{1}{k^2}.$$
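A quick empirical check of the bound (here X is a sum of ten Uniform(0,1) draws, an illustrative choice with μ = 5 and σ² = 10/12):

```python
import random

# Compare the empirical tail probability Pr(|X - mu| >= k*sigma) with
# Chebyshev's bound 1/k^2.
rng = random.Random(6)
mu, sigma, k = 5.0, (10 / 12) ** 0.5, 2.0
samples = [sum(rng.random() for _ in range(10)) for _ in range(100_000)]
tail = sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)
print("empirical tail:", tail, "<= bound:", 1 / k**2)
```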

Proof of the weak law

Given X1, X2, ... an infinite sequence of i.i.d. random variables with finite expected value $E(X_1) = E(X_2) = \cdots = \mu < \infty$, we are interested in the convergence of the sample average

$$\bar{X}_n = \frac{1}{n}(X_1 + \cdots + X_n).$$

The weak law of large numbers states:

$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty. \tag{2}$$

Proof using Chebyshev's inequality assuming finite variance

This proof uses the assumption of finite variance $\operatorname{Var}(X_i) = \sigma^2$ (for all $i$). The independence of the random variables implies no correlation between them, and we have that

$$\operatorname{Var}(\bar{X}_n) = \operatorname{Var}\!\left(\frac{1}{n}(X_1 + \cdots + X_n)\right) = \frac{1}{n^2}\operatorname{Var}(X_1 + \cdots + X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$

The common mean μ of the sequence is the mean of the sample average:

$$\operatorname{E}(\bar{X}_n) = \mu.$$

Using Chebyshev's inequality on $\bar{X}_n$ results in

$$\operatorname{P}\left(\left|\bar{X}_n - \mu\right| \ge \varepsilon\right) \le \frac{\sigma^2}{n\varepsilon^2}.$$

This may be used to obtain the following:

$$\operatorname{P}\left(\left|\bar{X}_n - \mu\right| < \varepsilon\right) = 1 - \operatorname{P}\left(\left|\bar{X}_n - \mu\right| \ge \varepsilon\right) \ge 1 - \frac{\sigma^2}{n\varepsilon^2}.$$

As n approaches infinity, the expression approaches 1. And by definition of convergence in probability, we have obtained

$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty. \tag{2}$$

Proof using convergence of characteristic functions

By Taylor's theorem for complex functions, the characteristic function of any random variable, X, with finite mean μ, can be written as

$$\varphi_X(t) = 1 + it\mu + o(t), \quad t \to 0.$$

All X1, X2, ... have the same characteristic function, so we will simply denote this φX.

Among the basic properties of characteristic functions there are

$$\varphi_{\frac{1}{n}X}(t) = \varphi_X\!\left(\frac{t}{n}\right) \quad \text{and} \quad \varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) \quad \text{if } X \text{ and } Y \text{ are independent.}$$

These rules can be used to calculate the characteristic function of $\bar{X}_n$ in terms of $\varphi_X$:

$$\varphi_{\bar{X}_n}(t) = \left[\varphi_X\!\left(\frac{t}{n}\right)\right]^n = \left[1 + \frac{i\mu t}{n} + o\!\left(\frac{t}{n}\right)\right]^n \to e^{it\mu} \quad \text{as } n \to \infty.$$

The limit $e^{it\mu}$ is the characteristic function of the constant random variable μ, and hence by the Lévy continuity theorem, $\bar{X}_n$ converges in distribution to μ:

$$\bar{X}_n \xrightarrow{D} \mu \quad \text{as } n \to \infty.$$

μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (see Convergence of random variables). Therefore,

$$\bar{X}_n \xrightarrow{P} \mu \quad \text{as } n \to \infty. \tag{2}$$

This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists.
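The limit can be watched numerically. The sketch below uses X ~ Exponential(1) (an illustrative choice), whose characteristic function φ(t) = 1/(1 − it) and mean μ = 1 are standard facts:

```python
import cmath

# phi(t/n)^n should approach e^{i*t*mu} as n grows.
t, mu = 1.0, 1.0
target = cmath.exp(1j * t * mu)
for n in (10, 1_000, 100_000):
    phi_mean = (1 / (1 - 1j * t / n)) ** n  # characteristic fn of the mean
    print(n, abs(phi_mean - target))        # gap shrinks toward zero
```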

Proof of the strong law

We give a relatively simple proof of the strong law under the assumptions that the $X_i$ are iid, $\operatorname{E}[X_i] =: \mu < \infty$, $\operatorname{Var}(X_i) = \sigma^2 < \infty$, and $\operatorname{E}[X_i^4] =: \tau < \infty$.

Let us first note that without loss of generality we can assume that $\mu = 0$ by centering. In this case, the strong law says that

$$\Pr\left(\lim_{n\to\infty} \bar{X}_n = 0\right) = 1,$$

or

$$\Pr\left(\omega : \lim_{n\to\infty} \bar{X}_n(\omega) = 0\right) = 1.$$

It is equivalent to show that

$$\Pr\left(\omega : \lim_{n\to\infty} \bar{X}_n(\omega) \ne 0\right) = 0.$$

Note that

$$\left\{\omega : \lim_{n\to\infty} \bar{X}_n(\omega) \ne 0\right\} = \bigcup_{\varepsilon > 0} \left\{\omega : \left|\bar{X}_n(\omega)\right| \ge \varepsilon \text{ infinitely often}\right\},$$

and thus to prove the strong law we need to show that for every $\varepsilon > 0$, we have

$$\Pr\left(\left|\bar{X}_n\right| \ge \varepsilon \text{ infinitely often}\right) = 0.$$

Define the events $A_n = \{\omega : |\bar{X}_n| \ge \varepsilon\}$; if we can show that

$$\sum_{n=1}^{\infty} \Pr(A_n) < \infty,$$

then the Borel-Cantelli lemma implies the result. So let us estimate $\Pr(A_n)$.

We compute

$$\operatorname{E}\left[\bar{X}_n^4\right] = \frac{1}{n^4}\operatorname{E}\!\left[\left(\sum_{i=1}^{n} X_i\right)^{4}\right] = \frac{1}{n^4}\sum_{i,j,k,l} \operatorname{E}[X_i X_j X_k X_l].$$

We first claim that every term of the form $\operatorname{E}[X_i^3 X_j]$, $\operatorname{E}[X_i^2 X_j X_k]$, or $\operatorname{E}[X_i X_j X_k X_l]$, where all subscripts are distinct, must have zero expectation. This is because $\operatorname{E}[X_i^3 X_j] = \operatorname{E}[X_i^3]\operatorname{E}[X_j]$ by independence, and the last factor is zero, and similarly for the other terms. Therefore the only terms in the sum with nonzero expectation are $\operatorname{E}[X_i^4]$ and $\operatorname{E}[X_i^2 X_j^2]$. Since the $X_i$ are identically distributed, all of these are the same, and moreover $\operatorname{E}[X_i^2 X_j^2] = \left(\operatorname{E}[X_i^2]\right)^2 = \sigma^4$.

There are $n$ terms of the form $\operatorname{E}[X_i^4]$ and $3n(n-1)$ terms of the form $\operatorname{E}[X_i^2 X_j^2]$, and so

$$\operatorname{E}\left[\bar{X}_n^4\right] = \frac{1}{n^4}\left[n\tau + 3n(n-1)\sigma^4\right].$$

Note that the right-hand side is a quadratic polynomial in $n$ divided by $n^4$, and as such there exists a constant $C > 0$ such that $\operatorname{E}[\bar{X}_n^4] \le C/n^2$ for sufficiently large $n$. By Markov's inequality,

$$\Pr\left(\left|\bar{X}_n\right| \ge \varepsilon\right) \le \frac{\operatorname{E}[\bar{X}_n^4]}{\varepsilon^4} \le \frac{C}{\varepsilon^4 n^2}$$

for sufficiently large $n$, and therefore this series is summable. Since this holds for any $\varepsilon > 0$, we have established the strong LLN.
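The $1/n^2$ decay of $\operatorname{E}[\bar{X}_n^4]$ that drives the argument can be checked by simulation (centered Uniform(−1,1) variables, the sample sizes, and the seed are illustrative choices):

```python
import random

# Estimate E[mean(X_1..X_n)^4] by Monte Carlo; n^2 * estimate should be
# roughly constant, consistent with the C/n^2 bound above.
rng = random.Random(7)
reps = 20_000
for n in (10, 40, 160):
    m4 = sum(
        (sum(rng.uniform(-1, 1) for _ in range(n)) / n) ** 4
        for _ in range(reps)
    ) / reps
    print(n, m4, "n^2 * m4 =", n * n * m4)
```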


Other proofs exist, including ones that do not require the added assumption of a finite fourth moment; the view of the strong law as a special case of the pointwise ergodic theorem, mentioned above, is one such route.

Consequences

The law of large numbers provides an expectation of an unknown distribution from a realization of the sequence, and indeed any feature of the probability distribution. By applying Borel's law of large numbers, one could easily obtain the probability mass function: for each event in the objective probability mass function, one approximates the probability of the event's occurrence with the proportion of times that the event occurs, and the larger the number of repetitions, the better the approximation. As for the continuous case, take $C = (a-h, a+h]$ for small positive h. Thus, for large n:

$$\frac{N_n(C)}{n} \approx \Pr(X \in C) \approx 2h\,f(a), \qquad \text{so} \qquad f(a) \approx \frac{N_n(C)}{2hn}.$$

With this method, one can cover the whole x-axis with a grid (with grid size 2h) and obtain a bar graph which is called a histogram.
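A sketch of this density estimate at a single grid point (the standard normal sample and the window width are illustrative choices, so the true value 1/√(2π) is known for comparison):

```python
import math
import random

# Estimate a density f at a point a by the proportion of samples falling
# in (a-h, a+h], divided by the window width 2h.
rng = random.Random(8)
xs = [rng.gauss(0, 1) for _ in range(200_000)]
h, a = 0.05, 0.0
est = sum(a - h < x <= a + h for x in xs) / (2 * h * len(xs))
print("estimate:", est, "true:", 1 / math.sqrt(2 * math.pi))
```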

Applications

One application of the LLN is the use of an important method of approximation, the Monte Carlo Method. This method uses a random sampling of numbers to approximate numerical results. The algorithm to compute an integral of f(x) on an interval [a,b] is as follows:

  1. Simulate uniform random variables X1, X2, ..., Xn. This can be done with software, using a random number generator (or a random number table) that gives U1, U2, ..., Un independent and identically distributed (i.i.d.) random variables on [0,1]; then let Xi = a + (b − a)Ui for i = 1, 2, ..., n. Then X1, X2, ..., Xn are independent and identically distributed uniform random variables on [a, b].
  2. Evaluate f(X1), f(X2), ..., f(Xn).
  3. Take the average of f(X1), f(X2), ..., f(Xn) by computing $\frac{1}{n}\sum_{i=1}^{n} f(X_i)$; then, by the strong law of large numbers, this converges to $\operatorname{E}[f(X)] = \frac{1}{b-a}\int_a^b f(x)\,dx$, so that $(b-a)\cdot\frac{1}{n}\sum_{i=1}^{n} f(X_i)$ approximates $\int_a^b f(x)\,dx$.

We can find the integral of $f(x) = \cos^2(x)\sqrt{x^3+1}$ on [−1, 2]. Using traditional methods to compute this integral is very difficult, so the Monte Carlo method can be used here. Using the above algorithm, we get

$$\int_{-1}^{2} f(x)\,dx \approx 0.905 \quad \text{when } n = 25$$

and

$$\int_{-1}^{2} f(x)\,dx \approx 1.028 \quad \text{when } n = 250.$$

We observe that as n increases, the estimate moves closer to the actual value of the integral,

$$\int_{-1}^{2} f(x)\,dx \approx 1.000194.$$

By using the LLN, the approximation of the integral was more accurate and closer to its true value.
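A direct transcription of the three steps into Python (the seed is arbitrary, so the small-n estimates will differ from the 0.905 and 1.028 quoted above, which depend on the particular random draws):

```python
import math
import random

# Monte Carlo integration of f on [a, b]: (b - a) times the sample
# average of f at uniform points estimates the integral.
def mc_integral(f, a, b, n, seed=9):
    rng = random.Random(seed)
    xs = [a + (b - a) * rng.random() for _ in range(n)]  # step 1
    fs = [f(x) for x in xs]                              # step 2
    return (b - a) * sum(fs) / n                         # step 3, rescaled

def f(x):
    return math.cos(x) ** 2 * math.sqrt(x**3 + 1)

for n in (25, 250, 250_000):
    print(n, mc_integral(f, a=-1, b=2, n=n))  # approaches ~1.000194
```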

Another example is the integration of $f(x) = \frac{e^x - 1}{e - 1}$ on [0,1]. Using the Monte Carlo method and the LLN, we can see that as the number of samples increases, the numerical value gets closer to the true value $\frac{e-2}{e-1} \approx 0.4180233$.

Sunday, February 18, 2024

Social ecological model

From Wikipedia, the free encyclopedia

Socio-ecological models were developed to further the understanding of the dynamic interrelations among various personal and environmental factors. Socioecological models were introduced to urban studies by sociologists associated with the Chicago School after the First World War as a reaction to the narrow scope of most research conducted by developmental psychologists. These models bridge the gap between behavioral theories that focus on small settings and anthropological theories.

Introduced as a conceptual model in the 1970s, formalized as a theory in the 1980s, and continually revised by Bronfenbrenner until his death in 2005, Urie Bronfenbrenner's Ecological Framework for Human Development applies socioecological models to human development. In his initial theory, Bronfenbrenner postulated that in order to understand human development, the entire ecological system in which growth occurs needs to be taken into account. In subsequent revisions, Bronfenbrenner acknowledged the relevance of biological and genetic aspects of the person in human development.

At the core of Bronfenbrenner’s ecological model is the child’s biological and psychological makeup, based on individual and genetic developmental history. This makeup continues to be affected and modified by the child’s immediate physical and social environment (microsystem) as well as interactions among the systems within the environment (mesosystems). Other broader social, political and economic conditions (exosystem) influence the structure and availability of microsystems and the manner in which they affect the child. Finally, social, political, and economic conditions are themselves influenced by the general beliefs and attitudes (macrosystems) shared by members of the society. (Bukatko & Daehler, 1998)

In its simplest terms, systems theory is the idea that one thing affects another: events and existence do not occur in a vacuum but in relation to changing circumstances. Systems are dynamic and, paradoxically, retain their own integrity while adapting to the inevitable changes going on around them. Our individual and collective behaviour is influenced by everything from our genes to the political environment. It is not possible to fully understand our development and behaviour without taking into account all of these elements, and indeed this is what some social work theories insist that we do if we are to make effective interventions. Lying behind these models is the idea that everything is connected and everything can affect everything else. Complex systems are made up of many parts. It is not possible to understand the whole without recognizing how the component parts interact, affect, and change each other. As the parts interact, they create the character and function of the whole.

From systems thinking to socioecological models

A system can be defined as a comparatively bounded structure consisting of interacting, interrelated, or interdependent elements that form a whole. Systems thinking argues that the only way to fully understand something or an occurrence is to understand the parts in relation to the whole. Thus, systems thinking, which is the process of understanding how things influence one another within a whole, is central to ecological models. Generally, a system is a community situated within an environment. Examples of systems are health systems, education systems, food systems, and economic systems.

Drawing from natural ecosystems which are defined as the network of interactions among organisms and between organisms and their environment, social ecology is a framework or set of theoretical principles for understanding the dynamic interrelations among various personal and environmental factors. Social ecology pays explicit attention to the social, institutional, and cultural contexts of people-environment relations. This perspective emphasizes the multiple dimensions (example: physical environment, social and cultural environment, personal attributes), multiple levels (example: individuals, groups, organizations), and complexity of human situations (example: cumulative impact of events over time). Social ecology also incorporates concepts such as interdependence and homeostasis from systems theory to characterize reciprocal and dynamic person-environment transactions.

Individuals are key agents in ecological systems. From an ecological perspective, the individual is both a postulate (a basic entity whose existence is taken for granted) and a unit of measurement. As a postulate, an individual has several characteristics. First, an individual requires access to an environment, upon which they are dependent for knowledge. Second, they are interdependent with other humans; that is, an individual is always part of a population and cannot exist otherwise. Third, an individual is time bound, or has a finite life cycle. Fourth, they have an innate tendency to preserve and expand life. Fifth, they have capacity for behavioral variability. Social ecological models are thus applicable to the processes and conditions that govern the lifelong course of human development in the actual environment in which human beings live. Urie Bronfenbrenner's Ecological Framework for Human Development is considered to be the most recognized and utilized social ecological model (as applied to human development). Ecological systems theory considers a child's development within the context of the systems of relationship that form his or her environment.

Bronfenbrenner's ecological framework for human development

Illustration of Bronfenbrenner's ecological framework for human development. The individual's environment is influenced by each of the nested but interconnected structures.

Bronfenbrenner's ecological framework for human development was first introduced in the 1970s as a conceptual model and became a theoretical model in the 1980s. Two distinct phases of the theory can be identified. Bronfenbrenner stated that "it is useful to distinguish two periods: the first ending with the publication of the Ecology of Human Development (1979), and the second characterized by a series of papers that called the original model into question." Bronfenbrenner's initial theory illustrated the importance of place to aspects of the context, and in the revision, he engaged in self-criticism for discounting the role a person plays in his or her own development while focusing too much on the context. Although revised, altered, and extended, the heart of Bronfenbrenner's theory remains ecological, stressing person-context interrelatedness.

The Bronfenbrenner ecological model examines human development by studying how human beings create the specific environments in which they live. In other words, human beings develop according to their environment; this can include society as a whole and the period in which they live, which will impact behavior and development. This views behavior and development as a symbiotic relationship, which is why this is also known as the “bioecological” model.

Ecological systems theory

Bronfenbrenner developed his ecological systems theory to explain how everything in a child and the child's environment affects how the child grows and develops. In his original theory, Bronfenbrenner postulated that in order to understand human development, the entire ecological system in which growth occurs needs to be taken into account. This system is composed of five socially organized subsystems that support and guide human development. Each system depends on the contextual nature of the person's life and offers an ever-growing diversity of options and sources of growth. Furthermore, within and between each system are bi-directional influences. These bi-directional influences imply that relationships have impact in two directions, both away from the individual and towards the individual.

Because we potentially have access to these subsystems we are able to have more social knowledge, an increased set of possibilities for learning problem solving, and access to new dimensions of self-exploration.

Microsystem

The microsystem is the layer closest to the child and contains the structures with which the child has direct contact. The microsystem encompasses the relationships and interactions a child has with his or her immediate surroundings such as family, school, neighborhood, or childcare environments. At the microsystem level, bi-directional influences are strongest and have the greatest impact on the child. However, interactions at outer levels can still impact the inner structures. This core environment stands as the child's venue for initially learning about the world. As the child's most intimate learning setting, it offers him or her a reference point for the world. The microsystem may provide the nurturing centerpiece for the child or become a haunting set of memories. The real power in this initial set of interrelations with family for the child is what they experience in terms of developing trust and mutuality with their significant people. The family is the child's early microsystem for learning how to live. The caring relations between child and parents (or other caregivers) can help to influence a healthy personality. For example, the attachment behaviors of parents offer children their first trust-building experience.

Mesosystem

The mesosystem moves us beyond the dyad or two-party relation. Mesosystems connect two or more systems in which child, parent and family live. Mesosystems provide the connection between the structures of the child's microsystem. For example, the connection between the child's teacher and his parents, between his church and his neighborhood, each represent mesosystems.

Exosystem

The exosystem defines the larger social system in which the child does not directly function. The structures in this layer impact the child's development by interacting with some structure in his/her microsystem. Parent workplace schedules or community-based family resources are examples. The child may not be directly involved at this level, but they do feel the positive or negative force involved with the interaction with their own system. The main exosystems that indirectly influence youth through their family include: school and peers, parents' workplace, family social networks and neighborhood community contexts, local politics and industry. Exosystems can be empowering (example: a high quality child-care program that benefits the entire family) or they can be degrading (example: excessive stress at work impacts the entire family). Furthermore, absence from a system makes it no less powerful in a life. For example, many children feel the stress of their parents' workplaces without ever physically being in these places.

Macrosystem

The macrosystem is the larger cultural context, such as attitudes and social conditions within the culture where the child is located. Macrosystems can be used to describe the cultural or social context of various societal groups such as social classes, ethnic groups, or religious affiliates. This layer is the outermost layer in the child's environment. The effects of larger principles defined by the macrosystem have a cascading influence throughout the interactions of all other layers. The macrosystem influences what, how, when and where we carry out our relations. For example, a program like Women, Infants, and Children (WIC) may positively impact a young mother through health care, vitamins, and other educational resources. It may empower her life so that she, in turn, is more effective and caring with her newborn. In this example, without an umbrella of beliefs, services, and support for families, children and their parents are open to great harm and deterioration. In a sense, the macrosystem that surrounds us helps us to hold together the many threads of our lives.

Chronosystem

The chronosystem encompasses the dimension of time as it relates to a child's environment. Elements within this system can be either external, such as the timing of a parent's death, or internal, such as the physiological changes that occur with the aging of a child. Family dynamics need to be framed in the historical context as they occur within each system. Specifically, historical influences in the macrosystem powerfully shape how families can respond to different stressors. Bronfenbrenner suggests that, in many cases, families respond to different stressors within the societal parameters existent in their lives.

Process person context time model

Bronfenbrenner's most significant departure from his original theory is the inclusion of processes of human development. Processes, per Bronfenbrenner, explain the connection between some aspect of the context or some aspect of the individual and an outcome of interest. The full, revised theory deals with the interaction among processes, person, context and time, and is labeled the Process–Person–Context–Time model (PPCT). Two interdependent propositions define the properties of the model. Furthermore, contrary to the original model, the Process–Person–Context–Time model is more suitable for scientific investigation. Per Bronfenbrenner:

"Proposition 1: In its early phase and throughout the lifecourse, human development takes place through processes of progressively more complex reciprocal interactions between an active, evolving biopsychological human organism and the persons, objects and symbols in its immediate environment. To be effective, the interaction must occur on a fairly regular basis over extended periods of time. These forms of interaction in the immediate environment are referred to as proximal processes.
Proposition 2: The form, power, content, and direction of the proximal processes affecting development vary systematically as a joint function of the characteristics of the developing person, of the environment (immediate and more remote) in which the processes are taking place, and the nature of the developmental outcome under consideration."

Processes play a crucial role in development. Proximal processes are fundamental to the theory. They constitute the engines of development because it is by engaging in activities and interactions that individuals come to make sense of their world, understand their place in it, and both play their part in changing the prevailing order while fitting into the existing one. The nature of proximal processes varies according to aspects of the individual and of the context, both spatially and temporally. As explained in the second of the two central propositions, the social continuities and changes occur over time through the life course and the historical period during which the person lives. Effects of proximal processes are thus more powerful than those of the environmental contexts in which they occur.

Person. Bronfenbrenner acknowledges here the relevance of biological and genetic aspects of the person. However, he devoted more attention to the personal characteristics that individuals bring with them into any social situation. He divided these characteristics into three types: demand, resource, and force characteristics. Demand characteristics are those that act as an immediate stimulus to another person, such as age, gender, skin color, and physical appearance. These types of characteristics may influence initial interactions because of the expectations formed immediately. Resource characteristics are those that relate partly to mental and emotional resources such as past experiences, skills, and intelligence, and also to social and material resources (access to good food, housing, caring parents, and educational opportunities appropriate to the needs of the particular society). Finally, force characteristics are those that have to do with differences of temperament, motivation, and persistence. According to Bronfenbrenner, two children may have equal resource characteristics, but their developmental trajectories will be quite different if one is motivated to succeed and persists in tasks and the other is not motivated and does not persist. As such, Bronfenbrenner provided a clearer view of individuals' roles in changing their context. The change can be relatively passive (a person changes the environment simply by being in it), to more active (the ways in which the person changes the environment are linked to his or her resource characteristics, whether physical, mental, or emotional), to most active (the extent to which the person changes the environment is linked, in part, to the desire and drive to do so, or force characteristics).

The context, or environment, involves four of the five interrelated systems of the original theory: the microsystem, the mesosystem, the exosystem, and the macrosystem.

The final element of the PPCT model is time. Time plays a crucial role in human development. In the same way that both context and individual factors are divided into sub-factors or sub-systems, Bronfenbrenner and Morris wrote about time as constituting micro-time (what is occurring during the course of some specific activity or interaction), meso-time (the extent to which activities and interactions occur with some consistency in the developing person's environment), and macro-time (the chronosystem). Time and timing are equally important because all aspects of the PPCT model can be thought of in terms of relative constancy and change.

Applications

The application of social ecological theories and models focus on several goals: to explain the person-environment interaction, to improve people-environment transactions, to nurture human growth and development in particular environments, and to improve environments so they support expression of individual's system's dispositions. Some examples are:

  • Political and economic policies that support the importance of parent's roles in their children's development such as Head Start or Women Infants and Children programs.
  • Fostering of societal attitudes that value work done on behalf of children at all levels: parents, teachers, extended family, mentors, work supervisors, legislators.
  • In community health promotion: identifying high impact leverage points and intermediaries within organizations that can facilitate the successful implementation of health promoting interventions, combining person focused and environmentally based components within comprehensive health promotion programs, and measuring the scope and sustainability of intervention outcomes over prolonged periods. Basis of intervention programs to address issues such as bullying, obesity, overeating and physical activity.
  • Interventions that use the social ecological model as a framework include mass media campaigns, social marketing, and skills development.
  • In economics: economies, human habits, and cultural characteristics are shaped by geography. An output is a function of natural resources, human resources, capital resources, and technology. The environment (macrosystem) shapes a considerable amount of the lifestyle of the individual and the economy of the country. For instance, if the region is mountainous or arid and there is little land for agriculture, the country typically will not prosper as much as another country that has greater resources.
  • In risk communication: used to assist the researcher to analyze the timing of when information is received and identify the receivers and stakeholders. This situation is an environmental influence that may be very far reaching. The individual's education level, understanding, and affluence may dictate what information he or she receives and processes and through which medium.
  • In personal health: to prevent illnesses, a person should avoid an environment in which they may be more susceptible to contracting a virus or where their immune system would be weakened. This also includes possibly removing oneself from a potentially dangerous environment or avoiding a sick coworker. On the other hand, some environments are particularly conducive to health benefits. Surrounding oneself with physically fit people will potentially act as a motivator to become more active, diet, or work out at the gym. The government banning trans fat may have a positive top-down effect on the health of all individuals in that state or country.
  • In human nutrition: used as a model for nutrition research and interventions. The social ecological model looks at multiple levels of influence on specific health behaviors. Levels include intrapersonal (individual's knowledge, demographics, attitudes, values, skills, behavior, self-concept, self-esteem), interpersonal (social networks, social supports, families, work groups, peers, friends, neighbors), organizational (norms, incentives, organizational culture, management styles, organizational structure, communication networks), community (community resources, neighborhood organizations, folk practices, non-profit organizations, informal and formal leadership practices), and public policy level (legislation, policies, taxes, regulatory agencies, laws) Multi-level interventions are thought to be most effective in changing behavior.
  • In public health: drawing upon this model to address the health of a nation's population is viewed as critically important to the strategic alignment of policy and services across the continuum of population health needs, including the design of effective health promotion and disease prevention and control strategies. Thus also, in the development of universal health care systems, it is appropriate to recognize "Health in All Policies" as the overarching policy framework, with public health, primary health care and community services as the cross-cutting framework for all health and health-related services operating across the spectrum from primary prevention to long term care and end-stage conditions. Although this perspective is both logical and well grounded, the reality is different in most settings, and there is room for improvement everywhere.
  • In politics: the act of politics is making decisions. A decision may be required of an individual, organization, community, or country. A decision a congressman makes affects anyone in his or her jurisdiction. If one makes the decision not to vote for the President of the United States, one has given oneself no voice in the election. If many other individuals choose not to voice their opinion and/or vote, they have inadvertently allowed a majority of others to make the decision for them. On the international level, if the leadership of the U.S. decides to occupy a foreign country it not only affects the leadership; it also affects U.S. service members, their families, and the communities they come from. There are multiple cross-level and interactive effects of such a decision.

Criticism

Although generally well received, Urie Bronfenbrenner's models have encountered some criticism over the years. Most criticism centers on the difficulty of empirically testing the theory and model, and on the breadth of the theory, which makes it challenging to intervene at any given level. One main critique of Bronfenbrenner's bioecological model is that it "...focuses too much on the biological and cognitive aspects of human development, but not much on socioemotional aspect of human development". Some examples of critiques of the theory are:

  • Challenging to evaluate all components empirically.
  • Difficult explanatory model to apply because it requires extensive ecological detail with which to build up meaning, since everything in someone's environment needs to be taken into account.
  • Failure to acknowledge that children positively cross boundaries to develop complex identities.
  • Inability to recognize that children's own constructions of family are more complex than traditional theories account for.
  • The systems around children are not always linear.
  • Preoccupation with achieving "normal" childhood without a common understanding of "normal".
  • Fails to see that the variables of social life are in constant interplay and that small variables can change a system.
  • Misses the tension between control and self-realization in child-adult relationships; children can shape culture.
  • Underplays abilities, overlooks rights/feelings/complexity.
  • Gives too little attention to biological and cognitive factors in children's development.
  • Does not address developmental stages that are the focus of theories like Piaget's and Erikson's.


Mathematical diagram

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mathematical_diagram
Euclid's Elements, ms. from Lüneburg, A.D. 1200

Mathematical diagrams, such as charts and graphs, are mainly designed to convey mathematical relationships—for example, comparisons over time.

Specific types of mathematical diagrams

Argand diagram

Argand diagram.

A complex number can be visually represented as a pair of numbers forming a vector on a diagram called an Argand diagram. The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by Norwegian-Danish land surveyor and mathematician Caspar Wessel (1745–1818). Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane.

The concept of the complex plane allows a geometric interpretation of complex numbers. Under addition, they add like vectors. The multiplication of two complex numbers can be expressed most easily in polar coordinates — the magnitude or modulus of the product is the product of the two absolute values, or moduli, and the angle or argument of the product is the sum of the two angles, or arguments. In particular, multiplication by a complex number of modulus 1 acts as a rotation.
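A two-line numeric check of this rule (the particular values are illustrative choices):

```python
import cmath
import math

# Multiplying by a modulus-1 complex number leaves |z| unchanged and adds
# its argument: a pure rotation in the Argand plane.
z = 3 + 4j
w = cmath.exp(1j * math.pi / 2)             # modulus 1, argument 90 degrees
print(abs(z * w), abs(z))                   # equal moduli: rotation only
print(cmath.phase(z * w) - cmath.phase(z))  # approximately pi/2
```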

Butterfly diagram

Butterfly diagram

In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.

The butterfly diagram shows a data-flow diagram connecting the inputs x (left) to the outputs y that depend on them (right) for a "butterfly" step of a radix-2 Cooley–Tukey FFT algorithm. This diagram resembles a butterfly, as in the Morpho butterfly shown for comparison, hence the name.
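A minimal sketch of the butterfly computation itself (the function name and example values are ours):

```python
import cmath
import math

# A radix-2 decimation-in-time butterfly: two inputs and one twiddle
# factor w combine into two outputs.
def butterfly(x0, x1, w):
    t = w * x1
    return x0 + t, x0 - t

# The 2-point DFT is a single butterfly with twiddle factor w = 1.
print(butterfly(1 + 0j, 2 + 0j, 1 + 0j))  # -> ((3+0j), (-1+0j))
# A stage of an N-point Cooley-Tukey FFT uses w = exp(-2j*pi*k/N).
print(butterfly(1 + 0j, 2 + 0j, cmath.exp(-2j * math.pi / 8)))
```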

A commutative diagram depicting the five lemma

Commutative diagram

In mathematics, and especially in category theory, a commutative diagram is a diagram of objects, also known as vertices, and morphisms, also known as arrows or edges, such that when selecting two objects any directed path through the diagram leads to the same result by composition.

Commutative diagrams play the role in category theory that equations play in algebra.

Hasse diagram.

Hasse diagrams

A Hasse diagram is a simple picture of a finite partially ordered set, forming a drawing of the partial order's transitive reduction. Concretely, one represents each element of the set as a vertex on the page and draws a line segment or curve that goes upward from x to y precisely when x < y and there is no z such that x < z < y. In this case, we say y covers x, or y is an immediate successor of x. In a Hasse diagram, it is required that the curves be drawn so that each meets exactly two vertices: its two endpoints. Any such diagram (given that the vertices are labeled) uniquely determines a partial order, and any partial order has a unique transitive reduction, but there are many possible placements of elements in the plane, resulting in different Hasse diagrams for a given order that may have widely varying appearances.
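As a sketch, the covering pairs that a Hasse diagram draws can be computed directly from this definition; here for the divisibility order on the divisors of 12 (an illustrative choice):

```python
# y covers x when x < y in the order and no z lies strictly between them.
elements = [1, 2, 3, 4, 6, 12]

def less(x, y):  # the partial order: x strictly divides y
    return x != y and y % x == 0

covers = [(x, y) for x in elements for y in elements
          if less(x, y) and not any(less(x, z) and less(z, y) for z in elements)]
print(covers)  # [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```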

Knot diagram.

Knot diagrams

In knot theory, a useful way to visualise and manipulate knots is to project the knot onto a plane: think of the knot casting a shadow on the wall. A small perturbation in the choice of projection will ensure that it is one-to-one except at the double points, called crossings, where the "shadow" of the knot crosses itself once transversely.

At each crossing we must indicate which section is "over" and which is "under", so as to be able to recreate the original knot. This is often done by creating a break in the strand going underneath. If by following the diagram the knot alternately crosses itself "over" and "under", then the diagram represents a particularly well-studied class of knot, alternating knots.

Venn diagram.

Venn diagram

A Venn diagram is a representation of mathematical sets: a mathematical diagram representing sets as circles, with their relationships to each other expressed through their overlapping positions, so that all possible relationships between the sets are shown.

The Venn diagram is constructed with a collection of simple closed curves drawn in the plane. The principle of these diagrams is that classes be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not null.

Voronoi centerlines.

Voronoi diagram

A Voronoi diagram is a special kind of decomposition of a metric space determined by distances to a specified discrete set of objects in the space, e.g., by a discrete set of points. The diagram is named after Georgy Voronoi and is also called a Voronoi tessellation, a Voronoi decomposition, or a Dirichlet tessellation (after Peter Gustav Lejeune Dirichlet).

In the simplest case, we are given a set of points S in the plane, which are the Voronoi sites. Each site s has a Voronoi cell V(s) consisting of all points closer to s than to any other site. The segments of the Voronoi diagram are all the points in the plane that are equidistant to two sites. The Voronoi nodes are the points equidistant to three (or more) sites.

Wallpaper group diagram.

Wallpaper group diagrams

A wallpaper group or plane symmetry group or plane crystallographic group is a mathematical classification of a two-dimensional repetitive pattern, based on the symmetries in the pattern. Such patterns occur frequently in architecture and decorative art. There are 17 possible distinct groups.

Wallpaper groups are two-dimensional symmetry groups, intermediate in complexity between the simpler frieze groups and the three-dimensional crystallographic groups, also called space groups. Wallpaper groups categorize patterns by their symmetries. Subtle differences may place similar patterns in different groups, while patterns which are very different in style, color, scale or orientation may belong to the same group.

Young diagram

A Young diagram or Young tableau, also called Ferrers diagram, is a finite collection of boxes, or cells, arranged in left-justified rows, with the row sizes weakly decreasing (each row has the same or shorter length than its predecessor).

Young diagram.

Listing the number of boxes in each row gives a partition λ of a positive integer n, the total number of boxes of the diagram. The Young diagram is said to be of shape λ, and it carries the same information as that partition. Listing the number of boxes in each column gives another partition, the conjugate or transpose partition of λ; one obtains a Young diagram of that shape by reflecting the original diagram along its main diagonal.
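A small sketch of the conjugation rule (the partition (4, 2, 1) is an illustrative choice):

```python
# The conjugate (transpose) of a partition counts, for each column, the
# rows long enough to reach it, reflecting the diagram in its diagonal.
def conjugate(partition):
    # partition: weakly decreasing row lengths, e.g. (4, 2, 1)
    return tuple(sum(row >= col for row in partition)
                 for col in range(1, partition[0] + 1))

print(conjugate((4, 2, 1)))             # (3, 2, 1, 1)
print(conjugate(conjugate((4, 2, 1))))  # back to (4, 2, 1)
```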

Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians.

Other mathematical diagrams
