Friday, November 8, 2024

Righteousness

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Righteousness

Righteousness, or rectitude, is the quality or state of being morally correct and justifiable. It can be considered synonymous with "rightness" or being "upright". It can be found in Indian, Chinese and Abrahamic religions and traditions, among others, as a theological concept. For example, from various perspectives in Zoroastrianism, Hinduism, Buddhism, Islam, Christianity, Confucianism, Taoism and Judaism it is considered an attribute that implies that a person's actions are justified, and can have the connotation that the person has been "judged" or "reckoned" as leading a life that is pleasing to God.

William Tyndale (translator of the Bible into English in 1526) remodelled the word after an earlier word rihtwis, which would have yielded modern English *rightwise or *rightways. He used it to translate the Hebrew root צדק tzedek, which appears over five hundred times in the Hebrew Bible, and the Greek word δίκαιος (dikaios), which appears more than two hundred times in the New Testament.

Etymologically, it comes from Old English rihtwīs, from riht 'right' + wīs 'manner, state, condition' (as opposed to wrangwīs, "wrongful"). The change in the ending in the 16th century was due to association with words such as bounteous.

Ethics or moral philosophy

Ethics is a major branch of philosophy which encompasses right conduct and good living. Rushworth Kidder states that "standard definitions of ethics have typically included such phrases as 'the science of the ideal human character' or 'the science of moral duty'". Richard William Paul and Linda Elder define ethics as "a set of concepts and principles that guide us in determining what behavior helps or harms sentient creatures".  The Cambridge Dictionary of Philosophy states that the word ethics is "commonly used interchangeably with 'morality' ... and sometimes it is used more narrowly to mean the moral principles of a particular tradition, group or individual".

Abrahamic and Abrahamic-inspired religions

Christianity

In the New Testament, the word righteousness, a translation for the Greek word dikaiosunē, is used in the sense of 'being righteous before others' (e.g. Matthew 5:20) or 'being righteous before God' (e.g. Romans 1:17). William Lane Craig argues that we should think of God as the "paradigm, the locus, the source of all moral value and standards". In Matthew's account of the Baptism of Jesus, Jesus tells John the Baptist "it is fitting for us to fulfill all righteousness" as he requests that John perform the rite for him. The Sermon on the Mount contains the memorable commandment "Seek ye first the kingdom of God and His righteousness".

A secondary meaning of the Greek word is 'justice', which is used to render it in a few places by a few Bible translations, e.g. in Matthew 6:33 in the New English Bible.

Jesus asserts the importance of righteousness by saying in Matthew 5:20, "For I tell you that unless your righteousness surpasses that of the Pharisees and the teachers of the law, you will certainly not enter the kingdom of heaven".

Paul the Apostle speaks of two ways, at least in theory, to achieve righteousness: through the Law of Moses (or Torah), and through faith in the atonement made possible through the death and resurrection of Jesus Christ (Romans 10:3–13). However, he repeatedly emphasizes that faith is the effective way. For example, just a few verses earlier, he states that the Jews did not attain the law of righteousness because they sought it not by faith, but by works. The New Testament speaks of a salvation founded on God's righteousness, as exemplified throughout the history of salvation narrated in the Old Testament (Romans 9–11). Paul writes to the Romans that righteousness comes by faith: "... a righteousness that is by faith from first to last, just as it is written: 'The righteous will live by faith'" (Romans 1:17).

In 2 Corinthians 9:9 the New Revised Standard Version has a footnote that the original word has the meaning of 'benevolence', and the Messianic Jewish commentary of David Stern affirms the Jewish practice of 'doing tzedakah' as charity, in referring to the Matthew 6:33 and 2 Corinthians 9:9 passages.

James 2:14–26 speaks of the relationship between works of righteousness and faith, saying that "faith without works is dead". Righteous acts according to James include works of charity (James 2:15–16) as well as avoiding sins against the Law of Moses (James 2:11–12).

2 Peter 2:7–8 describes Lot as a righteous man.

Type of saint

In the Eastern Orthodox Church, "Righteous" is a type of saint who is regarded as a holy person under the Old Covenant (Old Testament Israel). The word is also sometimes used for married saints of the New Covenant (the Church). According to Orthodox theology, the Righteous saints of the Old Covenant were not able to enter into heaven until after the death of Jesus on the cross (Hebrews 11:40), but had to await salvation in the Bosom of Abraham (see: Harrowing of Hell).

Islam

Righteousness is mentioned several times in the Quran. The Quran says that a life of righteousness is the only way to go to Heaven.

We will give the home of the Hereafter to those who do not want arrogance or mischief on earth; and the end is best for the righteous.

O mankind! We created you from a single (pair) of a male and a female and made you into nations and tribes that ye may know each other (not that ye may despise each other). Verily the most honored of you in the sight of Allah is (he who is) the most righteous of you. And Allah has full knowledge and is well acquainted (with all things).

Righteousness is not that you turn your faces to the east and the west [in prayer]. But righteous is the one who believes in God, the Last Day, the Angels, the Scripture and the Prophets; who gives his wealth in spite of love for it to kinsfolk, orphans, the poor, the wayfarer, to those who ask and to set slaves free. And (righteous are) those who pray, pay alms, honor their agreements, and are patient in (times of) poverty, ailment and during conflict. Such are the people of truth. And they are the God-Fearing.

Judaism

Righteousness is one of the chief attributes of God as portrayed in the Hebrew Bible. Its chief meaning concerns ethical conduct (for example, Leviticus 19:36; Deuteronomy 25:1; Psalms 1:6; Proverbs 8:20). In the Book of Job, the title character is introduced as "a good and righteous man". The Book of Wisdom calls on rulers of the world to embrace righteousness.

Mandaeism

An early self-appellation for Mandaeans is bhiri zidqa meaning 'elect of righteousness' or 'the chosen righteous', a term found in the Book of Enoch and Genesis Apocryphon II, 4. In addition to righteousness, zidqa also refers to alms or almsgiving.

East Asian religions

Yi (Confucianism)

Yi (Chinese: 義; simplified Chinese: 义; traditional Chinese: 義; pinyin: yì; Jyutping: ji6; Zhuyin Fuhao: ㄧˋ), literally "justice, or justness, righteousness or rightness, meaning", is an important concept in Confucianism. It involves a moral disposition for the good in life, with the sustainable intuition, purpose, and sensibility to do good competently with no expectation of reward.

Yi resonates with Confucian philosophy's orientation towards the cultivation of reverence or benevolence (ren) and skillful practice (li).

Yi represents moral acumen that goes beyond simple rule-following: it is based on empathy, it involves a balanced understanding of a situation, and it incorporates the "creative insights" and grounding necessary to apply virtues through deduction (Yin and Yang) and reason, "with no loss of purpose and direction for the total good of fidelity. Yi represents this ideal of totality as well as a decision-generating ability to apply a virtue properly and appropriately in a situation."

In application, yi is a "complex principle" that includes:

  1. skill in crafting actions which have moral fitness according to a given concrete situation
  2. the wise recognition of such fitness
  3. the intrinsic satisfaction that comes from that recognition.

Indian religions

There might not be a single-word translation for dharma in English, but it can be translated as righteousness, religion, faith, duty, law, and virtue. Connotations of dharma include rightness, good, natural, morality, righteousness, and virtue. In common parlance, dharma means "right way of living" and "path of rightness". It encompasses ideas such as duty, rights, character, vocation, religion, customs and all behaviour considered appropriate, correct or "morally upright". It is explained as a law of righteousness and equated to satya (truth): "...when a man speaks the Truth, they say, 'He speaks the Dharma'; and if he speaks Dharma, they say, 'He speaks the Truth!' For both are one."

The wheel in the centre of India's flag symbolises Dharma.

The importance of dharma to Indian sentiments is illustrated by the government of India's decision in 1947 to include the Ashoka Chakra, a depiction of the dharmachakra (the "wheel of dharma"), as the central motif on its flag.

Hinduism

In Hindu philosophy and religion, major emphasis is placed on individual practical morality. In the Sanskrit epics, this concern is omnipresent, encompassing duties, rights, laws, conduct, virtues and the "right way of living". The Sanskrit epics contain themes and examples where right prevails over wrong, good over evil.

In an inscription attributed to the Indian Emperor Ashoka from the year 258 BCE, in Sanskrit, Aramaic, and Greek text, a Greek rendering for the Sanskrit word dharma appears: the word eusebeia. This suggests dharma was a central concept in India at that time, and meant not only religious ideas, but ideas of right, of good, and of one's duty.

The Ramayana is one of the two great Indian epics. It tells of life in India around 1000 BCE and offers models in dharma. The hero, Rama, lived his whole life by the rules of dharma; this is why he is considered heroic. When Rama was a young boy, he was the perfect son. Later he was an ideal husband to his faithful wife, Sita, and a responsible ruler of Ayodhya. Each episode of the Ramayana presents life situations and ethical questions in symbolic terms. The situation is debated by the characters, and finally right prevails over wrong, good over evil. For this reason, in the Hindu Epics, the good, morally upright, law-abiding king is referred to as dharmaraja.

In the Mahabharata, the other major Indian epic, dharma is similarly central, and it is presented through symbolism and metaphor. Near the end of the epic, the god Yama, referred to as dharma in the text, takes the form of a dog to test the compassion of Yudhishthira, who is told he may not enter paradise with such an animal, but who refuses to abandon his companion, a decision for which he is then praised by dharma.

The value and appeal of the Mahabharata, claims Daniel H. H. Ingalls, lie not so much in its complex and rushed presentation of metaphysics in the 12th book, since Indian metaphysics is more eloquently presented in other Sanskrit scriptures. The appeal of the Mahabharata, like the Ramayana, is in its presentation of a series of moral problems and life situations, to which there are usually three answers given, according to Ingalls: one answer is of Bhima, the answer of brute force, an individual angle representing materialism, egoism, and self; the second is of Yudhishthira, always an appeal to piety and gods, to social virtue and tradition; the third is of the introspective Arjuna, which falls between the two extremes and which, claims Ingalls, symbolically reveals the finest moral qualities of man.

The Epics of Hinduism are a symbolic treatise about life, virtues, customs, morals, ethics, law, and other aspects of dharma. There is extensive discussion of dharma at the individual level, observes Ingalls: for example, on free will versus destiny, and on when and why human beings believe in either, ultimately concluding that the strong and prosperous naturally uphold free will, while those facing grief or frustration naturally lean towards destiny. The Epics thus illustrate various aspects of dharma; they are a means of communicating dharma through metaphor.

In Hinduism, dharma signifies behaviors that are considered to be in accord with Ṛta, the order that makes life and the universe possible, and includes duties, rights, laws, conduct, virtues, and the "right way of living". The concept of dharma was already in use in the historical Vedic religion, and its meaning and conceptual scope have evolved over several millennia. The ancient Tamil moral text of Tirukkural is solely based on aṟam, the Tamil term for dharma. The antonym of dharma is adharma.

Buddhism

In Buddhism dharma means cosmic law and order, but it is also applied to the teachings of the Buddha. In Buddhist philosophy, dhamma/dharma is also the term for "phenomena". Dharma refers not only to the sayings of the Buddha, but also to the later traditions of interpretation and addition that the various schools of Buddhism have developed to help explain and expand upon the Buddha's teachings. Still others see the dharma as referring to the "truth", or the ultimate reality of "the way that things really are" (Tibetan: ཆོས, THL: chö).

Jainism

The Tattvartha Sutra mentions Das-dharma, the ten righteous virtues: forbearance, modesty, straightforwardness, purity, truthfulness, self-restraint, austerity, renunciation, non-attachment, and celibacy.

A right believer should constantly meditate on virtues of dharma, like supreme modesty, in order to protect the soul from all contrary dispositions. He should also cover up the shortcomings of others.

— Puruṣārthasiddhyupāya (27)

Sikhism

For Sikhs, the word dharam (Punjabi: ਧਰਮ) means the path of righteousness and proper religious practice. Guru Granth Sahib in hymn 1353 connotes dharam as duty. The 3HO movement in Western culture, which has incorporated certain Sikh beliefs, defines Sikh dharam broadly as all that constitutes religion, moral duty, and way of life.

Persian religions

Zoroastrianism

Asha is an important tenet of Zoroastrianism, with a complex and nuanced range of meaning. It is commonly summarized, in accord with its contextual implications, as 'truth' and 'right(eousness)', 'order' and 'right working'.

From an early age, Zoroastrians are taught to pursue righteousness by following the Threefold Path of asha: humata, huxta, huvarshta (Good Thoughts, Good Words, Good Deeds).

One of the most sacred mantras in the religion is the Ashem Vohu, which has been translated as an "Ode to Righteousness". There are many translations that differ due to the complexity of Avestan and the concepts involved (for other translations, see: Ashem Vohu).

"Righteousness is the best good and it is happiness. Happiness is to her/him who is righteous, for the sake of the best righteousness".

Imputation (statistics)

From Wikipedia, the free encyclopedia

In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation". Missing data causes three main problems: it can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and reduce efficiency.

Because missing data can create problems for analyzing data, imputation is seen as a way to avoid the pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data.

Many approaches have been embraced by scientists to account for missing data, but the majority of them introduce bias. A few of the well-known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; non-negative matrix factorization; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.

Listwise (complete case) deletion

By far the most common means of dealing with missing data is listwise deletion (also known as complete case analysis), in which all cases with a missing value are deleted. If the data are missing completely at random, then listwise deletion does not add any bias, but it does decrease the power of the analysis by decreasing the effective sample size. For example, if 1000 cases are collected but 80 have missing values, the effective sample size after listwise deletion is 920. If the cases are not missing completely at random, then listwise deletion will introduce bias because the sub-sample of cases represented by the missing data is not representative of the original sample (and if the original sample was itself a representative sample of a population, the complete cases are not representative of that population either). While listwise deletion is unbiased when the missing data are missing completely at random, this is rarely the case in practice.

Pairwise deletion (or "available case analysis") involves deleting a case only from analyses that require a variable it is missing, while including that case in analyses for which all required variables are present. When pairwise deletion is used, the total N for the analysis will not be consistent across parameter estimations. Because different parameters are estimated from different subsets of cases, pairwise deletion can introduce impossible mathematical situations, such as correlation matrices that imply correlations over 100%.
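As a concrete illustration, here is a minimal sketch in Python with pandas (the dataset and column names are invented): dropna() performs listwise deletion, while DataFrame.corr() computes each correlation from the pairwise-complete cases, so the effective N differs from entry to entry.

    import numpy as np
    import pandas as pd

    # Toy dataset with missing values; the column names are invented.
    df = pd.DataFrame({
        "age":    [23, 35, np.nan, 41, 52, 29],
        "income": [31, np.nan, 45, 50, np.nan, 38],
        "score":  [2.1, 3.4, 2.9, np.nan, 4.0, 3.1],
    })

    # Listwise (complete case) deletion: any row with a missing value is
    # dropped, so every analysis runs on the same, smaller set of cases.
    complete = df.dropna()
    print("effective N after listwise deletion:", len(complete))

    # Pairwise deletion: pandas computes each correlation from the cases
    # observed for that particular pair, so the effective N varies by entry.
    print(df.corr())
    obs = df.notna().astype(int)
    print(obs.T @ obs)   # pairwise N underlying each correlation entry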

The one advantage complete case deletion has over other methods is that it is straightforward and easy to implement. This is largely why it remains the most popular method of handling missing data in spite of its many disadvantages.

Single imputation

Hot-deck

A once-common method of imputation was hot-deck imputation where a missing value was imputed from a randomly selected similar record. The term "hot deck" dates back to the storage of data on punched cards, and indicates that the information donors come from the same dataset as the recipients. The stack of cards was "hot" because it was currently being processed.

One form of hot-deck imputation is called "last observation carried forward" (or LOCF for short), which involves sorting a dataset according to any of a number of variables, thus creating an ordered dataset. The technique then finds the first missing value and uses the cell value immediately prior to the missing data to impute the missing value. The process is repeated for the next cell with a missing value until all missing values have been imputed. In the common scenario in which the cases are repeated measurements of a variable for a person or other entity, this represents the belief that if a measurement is missing, the best guess is that it has not changed since it was last measured. This method is known to increase the risk of bias and potentially false conclusions; for this reason LOCF is not recommended for use.
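A minimal sketch of LOCF on a single time-ordered series (the data are invented): each missing entry is replaced by the most recent observed value, and leading missing values stay missing.

    import numpy as np

    def locf(values):
        """Last observation carried forward on a 1-D array in time order.

        Leading missing values (before any observation) are left as NaN."""
        out = np.asarray(values, dtype=float).copy()
        last = np.nan
        for i in range(out.size):
            if np.isnan(out[i]):
                out[i] = last          # carry the previous observation forward
            else:
                last = out[i]          # remember the most recent observation
        return out

    print(locf([1.0, np.nan, np.nan, 2.5, np.nan, 3.0]))
    # -> [1.  1.  1.  2.5 2.5 3. ]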

Cold-deck

Cold-deck imputation, by contrast, selects donors from another dataset: a missing value is replaced with the response value of a similar item from a past survey, so the method is applicable to surveys that are repeated over time. Due to advances in computer power, more sophisticated methods of imputation have generally superseded the original random and sorted hot-deck imputation techniques.

Mean substitution

Another imputation technique involves replacing any missing value with the mean of that variable for all other cases, which has the benefit of not changing the sample mean for that variable. However, mean imputation attenuates any correlations involving the variable(s) that are imputed. This is because, in cases with imputation, there is guaranteed to be no relationship between the imputed variable and any other measured variables. Thus, mean imputation has some attractive properties for univariate analysis but becomes problematic for multivariate analysis.
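The attenuation is easy to demonstrate numerically. The following sketch (simulated data; all parameter values are invented) imputes the mean for 30% of one variable and compares the correlation before and after:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)
    y = 0.8 * x + rng.normal(scale=0.6, size=n)   # y is correlated with x

    # Knock out 30% of y completely at random.
    y_obs = y.copy()
    y_obs[rng.random(n) < 0.3] = np.nan

    # Mean substitution: every missing y gets the mean of the observed y's,
    # which leaves the sample mean of y unchanged.
    y_imp = np.where(np.isnan(y_obs), np.nanmean(y_obs), y_obs)

    print("correlation, full data:   ", np.corrcoef(x, y)[0, 1])
    print("correlation, mean-imputed:", np.corrcoef(x, y_imp)[0, 1])
    # The imputed version is attenuated: imputed cases contribute zero
    # covariance with x while still counting toward the sample size.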

Mean imputation can be carried out within classes (i.e. categories such as gender), and can be expressed as $\hat{y}_i = \bar{y}_h$, where $\hat{y}_i$ is the imputed value for record $i$ and $\bar{y}_h$ is the sample mean of respondent data within some class $h$. This is a special case of generalized regression imputation:

$$\hat{y}_{mi} = b_{r0} + \sum_{j} b_{rj} z_{mij} + \hat{e}_{mi}.$$

Here the values $b_{r0}$ and $b_{rj}$ are estimated from regressing $y$ on $z$ in non-imputed data, $z$ is a dummy variable for class membership, and the data are split into respondent ($r$) and missing ($m$) parts.

Non-negative matrix factorization

Non-negative matrix factorization (NMF) can take missing data into account while minimizing its cost function, rather than treating the missing entries as zeros, which could introduce bias. Its treatment of missing data is mathematically well grounded: NMF can simply ignore the missing entries in its cost function, and the impact of the missing data can be as small as a second-order effect.
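As a rough illustration of the idea (a sketch, not a production implementation), the following applies multiplicative updates to a masked squared-error cost, so missing entries simply do not contribute to the factorization; the rank and iteration count are arbitrary choices.

    import numpy as np

    def nmf_impute(V, rank=2, n_iter=500, eps=1e-9, seed=0):
        """Impute missing entries of a non-negative matrix V (NaN = missing)
        by factorizing only over the observed entries.

        Multiplicative updates on the masked cost ||M * (V - W H)||^2,
        where M is 1 for observed entries and 0 otherwise."""
        V = np.asarray(V, dtype=float)
        M = ~np.isnan(V)                      # observation mask
        Vf = np.where(M, V, 0.0)              # masked entries excluded from cost
        rng = np.random.default_rng(seed)
        W = rng.random((V.shape[0], rank))
        H = rng.random((rank, V.shape[1]))
        for _ in range(n_iter):
            WH = W @ H
            H *= (W.T @ (M * Vf)) / (W.T @ (M * WH) + eps)
            WH = W @ H
            W *= ((M * Vf) @ H.T) / ((M * WH) @ H.T + eps)
        # Missing cells are read off the low-rank reconstruction.
        return np.where(M, V, W @ H)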

Regression

Regression imputation has the opposite problem of mean imputation. A regression model is estimated to predict observed values of a variable based on other variables, and that model is then used to impute values in cases where the value of that variable is missing. In other words, available information for complete and incomplete cases is used to predict the value of a specific variable, and fitted values from the regression model are then used to impute the missing values. The problem is that the imputed data do not include an error term in their estimation, so the imputed values fit perfectly along the regression line without any residual variance. This causes relationships to be over-identified and suggests greater precision in the imputed values than is warranted: the regression model predicts the most likely value of the missing data but supplies no uncertainty about that value.

Stochastic regression was a fairly successful attempt to correct the lack of an error term in regression imputation by adding the average regression variance to the regression imputations to introduce error. Stochastic regression shows much less bias than the above-mentioned techniques, but it still misses one thing: if data are imputed, then intuitively more noise should be introduced to the problem than the simple residual variance.
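The contrast between the two variants is shown in this sketch (simulated data; the variable names and parameters are invented): the deterministic version places every imputed value exactly on the regression line, while the stochastic version adds a draw from the estimated residual distribution.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    y = 2.0 + 0.8 * x + rng.normal(scale=0.5, size=n)
    miss = rng.random(n) < 0.3                 # 30% of y missing at random

    # Fit the regression on complete cases only.
    X = np.column_stack([np.ones((~miss).sum()), x[~miss]])
    beta, *_ = np.linalg.lstsq(X, y[~miss], rcond=None)
    resid_var = np.var(y[~miss] - X @ beta, ddof=2)

    fitted = beta[0] + beta[1] * x[miss]

    y_det = y.copy()
    y_det[miss] = fitted                       # deterministic: zero residual variance

    y_stoch = y.copy()
    y_stoch[miss] = fitted + rng.normal(scale=np.sqrt(resid_var),
                                        size=miss.sum())
    # stochastic: draws restore variability around the regression line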

Multiple imputation

To deal with the problem of increased noise due to imputation, Rubin (1987) developed a method for averaging the outcomes across multiple imputed data sets. All multiple imputation methods follow three steps (a minimal sketch follows the list below).

  1. Imputation – Similar to single imputation, missing values are imputed. However, the imputed values are drawn m times from a distribution rather than just once. At the end of this step, there should be m completed datasets.
  2. Analysis – Each of the m datasets is analyzed. At the end of this step there should be m analyses.
  3. Pooling – The m results are consolidated into one result by calculating the mean, variance, and confidence interval of the variable of concern or by combining simulations from each separate model.
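A toy end-to-end sketch of the three steps, using stochastic regression as the imputation model and Rubin's rules for pooling; the estimand here is the mean of y, and all names and parameters are illustrative.

    import numpy as np

    def multiply_impute_mean(x_full, y_missing, m=20, seed=0):
        """Toy multiple imputation of the mean of y: impute y from x with a
        stochastic regression model m times, then pool with Rubin's rules."""
        rng = np.random.default_rng(seed)
        miss = np.isnan(y_missing)
        X = np.column_stack([np.ones((~miss).sum()), x_full[~miss]])
        beta, *_ = np.linalg.lstsq(X, y_missing[~miss], rcond=None)
        sd = np.sqrt(np.var(y_missing[~miss] - X @ beta, ddof=2))
        n = y_missing.size
        estimates, variances = [], []
        for _ in range(m):
            y_i = y_missing.copy()          # step 1: impute, m times, with noise
            y_i[miss] = (beta[0] + beta[1] * x_full[miss]
                         + rng.normal(scale=sd, size=miss.sum()))
            estimates.append(y_i.mean())    # step 2: analyze each completed set
            variances.append(y_i.var(ddof=1) / n)
        # Step 3: pool with Rubin's rules.
        qbar = np.mean(estimates)                   # pooled point estimate
        within = np.mean(variances)                 # mean within-imputation variance
        between = np.var(estimates, ddof=1)         # between-imputation variance
        total_var = within + (1 + 1 / m) * between
        return qbar, total_var

The between-imputation term (1 + 1/m)B is what restores the uncertainty that single imputation discards.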

Multiple imputation can be used in cases where the data are missing completely at random, missing at random, and missing not at random, though it can be biased in the latter case. One approach is multiple imputation by chained equations (MICE), also known as "fully conditional specification" and "sequential regression multiple imputation". MICE is designed for missing at random data, though there is simulation evidence to suggest that with a sufficient number of auxiliary variables it can also work on data that are missing not at random. However, MICE can suffer from performance problems when the number of observations is large and the data have complex features, such as nonlinearities and high dimensionality.
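A stripped-down sketch of the chained-equations idea for purely numeric data; this uses linear conditional models only, whereas real MICE implementations support many model types and diagnostics, and it produces a single completed dataset (run it m times with different seeds for proper multiple imputation).

    import numpy as np

    def mice_numeric(data, n_cycles=10, seed=0):
        """Chained equations for an all-numeric 2-D array (NaN = missing):
        each incomplete column is repeatedly regressed on the other columns
        and re-imputed with a stochastic draw."""
        rng = np.random.default_rng(seed)
        X = np.asarray(data, dtype=float).copy()
        miss = np.isnan(X)
        col_means = np.nanmean(X, axis=0)
        X[miss] = np.take(col_means, np.where(miss)[1])   # crude starting fill
        for _ in range(n_cycles):
            for j in range(X.shape[1]):
                mj = miss[:, j]
                if not mj.any():
                    continue
                # Regress column j on all other columns, using rows where
                # column j was actually observed.
                A = np.column_stack([np.ones(X.shape[0]),
                                     np.delete(X, j, axis=1)])
                beta, *_ = np.linalg.lstsq(A[~mj], X[~mj, j], rcond=None)
                sd = (X[~mj, j] - A[~mj] @ beta).std(ddof=1)
                X[mj, j] = A[mj] @ beta + rng.normal(scale=sd, size=mj.sum())
        return X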

More recent approaches to multiple imputation use machine learning techniques to improve its performance. MIDAS (Multiple Imputation with Denoising Autoencoders), for instance, uses denoising autoencoders, a type of unsupervised neural network, to learn fine-grained latent representations of the observed data. MIDAS has been shown to provide accuracy and efficiency advantages over traditional multiple imputation strategies.

As alluded to above, single imputation does not take into account the uncertainty in the imputations: after imputation, the data are treated as if they were the actual real values. Neglecting this uncertainty can lead to overly precise results and errors in any conclusions drawn. By imputing multiple times, multiple imputation accounts for the uncertainty and the range of values that the true value could have taken. As expected, the combination of uncertainty estimation and deep learning for imputation is among the best strategies and has been used to model heterogeneous drug discovery data.

Additionally, while single imputation and complete case analysis are easier to implement, multiple imputation is not very difficult to implement either. A wide range of packages in different statistical software readily perform multiple imputation. For example, the MICE package allows users in R to perform multiple imputation using the MICE method. MIDAS can be implemented in R with the rMIDAS package and in Python with the MIDASpy package.

Generalized least squares

From Wikipedia, the free encyclopedia

Generalized least squares (GLS) estimates the unknown parameters of a linear regression model when the residuals are correlated or have unequal variances. It requires knowledge of the covariance matrix of the residuals. If this is unknown, estimating the covariance matrix gives the method of feasible generalized least squares (FGLS). However, FGLS provides fewer guarantees of improvement.

Method

In standard linear regression models, one observes data on n statistical units with k − 1 predictor values and one response value each.

The response values are placed in a vector, $\mathbf{y} = (y_1, \dots, y_n)^{\mathsf T}$, and the predictor values are placed in the design matrix $\mathbf{X}$, where each row $\mathbf{x}_i^{\mathsf T}$ is a vector of the predictor variables (including a constant) for the $i$th data point.

The model assumes that the conditional mean of $\mathbf{y}$ given $\mathbf{X}$ is a linear function of $\mathbf{X}$ and that the conditional variance of the error term given $\mathbf{X}$ is a known non-singular covariance matrix, $\boldsymbol\Omega$. That is,

$$\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon, \qquad \operatorname{E}[\boldsymbol\varepsilon \mid \mathbf{X}] = 0, \qquad \operatorname{Cov}[\boldsymbol\varepsilon \mid \mathbf{X}] = \boldsymbol\Omega,$$

where $\boldsymbol\beta \in \mathbb{R}^k$ is a vector of unknown constants, called "regression coefficients", which are estimated from the data.

If $\mathbf{b}$ is a candidate estimate for $\boldsymbol\beta$, then the residual vector for $\mathbf{b}$ is $\mathbf{y} - \mathbf{X}\mathbf{b}$. The generalized least squares method estimates $\boldsymbol\beta$ by minimizing the squared Mahalanobis length of this residual vector:

$$\hat{\boldsymbol\beta} = \operatorname*{arg\,min}_{\mathbf{b}} \, (\mathbf{y} - \mathbf{X}\mathbf{b})^{\mathsf T} \boldsymbol\Omega^{-1} (\mathbf{y} - \mathbf{X}\mathbf{b}),$$

which is equivalent to

$$\hat{\boldsymbol\beta} = \operatorname*{arg\,min}_{\mathbf{b}} \, \mathbf{y}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{y} - 2\mathbf{b}^{\mathsf T}\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{y} + \mathbf{b}^{\mathsf T}\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{X}\mathbf{b},$$

which is a quadratic programming problem. The stationary point of the objective function occurs when

$$2\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{X}\mathbf{b} - 2\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{y} = 0,$$

so the estimator is

$$\hat{\boldsymbol\beta} = (\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{y}.$$

The quantity $\boldsymbol\Omega^{-1}$ is known as the precision matrix (or dispersion matrix), a generalization of the diagonal weight matrix.
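In code, the estimator is a direct transcription of this formula. A minimal sketch in Python with NumPy, solving linear systems rather than forming explicit inverses for numerical stability:

    import numpy as np

    def gls(X, y, Omega):
        """GLS estimator: (X' Omega^-1 X)^-1 X' Omega^-1 y."""
        Oi_X = np.linalg.solve(Omega, X)   # Omega^{-1} X
        Oi_y = np.linalg.solve(Omega, y)   # Omega^{-1} y
        return np.linalg.solve(X.T @ Oi_X, X.T @ Oi_y)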

Properties

The GLS estimator is unbiased, consistent, efficient, and asymptotically normal, with

$$\operatorname{E}[\hat{\boldsymbol\beta} \mid \mathbf{X}] = \boldsymbol\beta \qquad \text{and} \qquad \operatorname{Cov}[\hat{\boldsymbol\beta} \mid \mathbf{X}] = (\mathbf{X}^{\mathsf T}\boldsymbol\Omega^{-1}\mathbf{X})^{-1}.$$

GLS is equivalent to applying ordinary least squares (OLS) to a linearly transformed version of the data. This can be seen by factoring $\boldsymbol\Omega = \mathbf{C}\mathbf{C}^{\mathsf T}$ using a method such as Cholesky decomposition. Left-multiplying both sides of $\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon$ by $\mathbf{C}^{-1}$ yields an equivalent linear model:

$$\mathbf{C}^{-1}\mathbf{y} = \mathbf{C}^{-1}\mathbf{X}\boldsymbol\beta + \mathbf{C}^{-1}\boldsymbol\varepsilon.$$

In this model, $\operatorname{Var}[\mathbf{C}^{-1}\boldsymbol\varepsilon \mid \mathbf{X}] = \mathbf{C}^{-1}\boldsymbol\Omega(\mathbf{C}^{-1})^{\mathsf T} = \mathbf{I}$, where $\mathbf{I}$ is the identity matrix. Then, $\boldsymbol\beta$ can be efficiently estimated by applying OLS to the transformed data, which requires minimizing the objective

$$\left\| \mathbf{C}^{-1}\mathbf{y} - \mathbf{C}^{-1}\mathbf{X}\mathbf{b} \right\|^2.$$

This transformation effectively standardizes the scale of the errors and de-correlates them. When OLS is used on data with homoscedastic errors, the Gauss–Markov theorem applies, so the GLS estimate is the best linear unbiased estimator for $\boldsymbol\beta$.
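The equivalence can be checked numerically. In this sketch (simulated data with an AR(1)-style covariance; all parameter values are invented), OLS on the whitened data reproduces the closed-form GLS estimate:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    beta_true = np.array([1.0, -2.0, 0.5])

    # Example Omega: AR(1)-style correlation, Omega_ij = rho^|i-j|.
    rho = 0.6
    Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    C = np.linalg.cholesky(Omega)               # Omega = C C'
    y = X @ beta_true + C @ rng.normal(size=n)  # errors with covariance Omega

    # Closed-form GLS.
    Oi_X = np.linalg.solve(Omega, X)
    Oi_y = np.linalg.solve(Omega, y)
    beta_gls = np.linalg.solve(X.T @ Oi_X, X.T @ Oi_y)

    # OLS on the whitened model C^{-1} y = C^{-1} X b + C^{-1} eps.
    beta_white, *_ = np.linalg.lstsq(np.linalg.solve(C, X),
                                     np.linalg.solve(C, y), rcond=None)

    assert np.allclose(beta_gls, beta_white)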

Weighted least squares

A special case of GLS, called weighted least squares (WLS), occurs when all the off-diagonal entries of $\boldsymbol\Omega$ are 0. This situation arises when the variances of the observed values are unequal (i.e. heteroscedasticity is present), but no correlations exist among the observations. The weight for unit $i$ is proportional to the reciprocal of the variance of the response for unit $i$.
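Because $\boldsymbol\Omega$ is diagonal here, the whitening step reduces to scaling each observation by $1/\sigma_i$. A minimal sketch:

    import numpy as np

    def wls(X, y, variances):
        """Weighted least squares: GLS with diagonal Omega = diag(variances).

        Each row is scaled by 1/sigma_i, i.e. weight_i = 1/variance_i."""
        w = 1.0 / np.sqrt(np.asarray(variances, dtype=float))
        beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        return beta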

Derivation by maximum likelihood estimation

Ordinary least squares can be interpreted as maximum likelihood estimation with the prior that the errors are independent and normally distributed with zero mean and common variance. In GLS, the prior is generalized to the case where errors may not be independent and may have differing variances. For given fit parameters $\mathbf{b}$, the conditional probability density function of the errors is assumed to be

$$p(\boldsymbol\varepsilon \mid \mathbf{b}) = \frac{1}{\sqrt{(2\pi)^n \det\boldsymbol\Omega}} \exp\!\left(-\tfrac{1}{2}\,\boldsymbol\varepsilon^{\mathsf T}\boldsymbol\Omega^{-1}\boldsymbol\varepsilon\right).$$

By Bayes' theorem,

$$p(\mathbf{b} \mid \boldsymbol\varepsilon) = \frac{p(\boldsymbol\varepsilon \mid \mathbf{b})\, p(\mathbf{b})}{p(\boldsymbol\varepsilon)}.$$

In GLS, a uniform (improper) prior is taken for $p(\mathbf{b})$, and as $p(\boldsymbol\varepsilon)$ is a marginal distribution, it does not depend on $\mathbf{b}$. Therefore the log-probability is

$$\log p(\mathbf{b} \mid \boldsymbol\varepsilon) = \log p(\boldsymbol\varepsilon \mid \mathbf{b}) + \cdots = -\tfrac{1}{2}\,\boldsymbol\varepsilon^{\mathsf T}\boldsymbol\Omega^{-1}\boldsymbol\varepsilon + \cdots,$$

where the hidden terms are those that do not depend on $\mathbf{b}$, and $\log p(\boldsymbol\varepsilon \mid \mathbf{b})$ is the log-likelihood. The maximum a posteriori (MAP) estimate is then the maximum likelihood estimate (MLE), which is equivalent to the optimization problem from above,

$$\hat{\boldsymbol\beta} = \operatorname*{arg\,max}_{\mathbf{b}}\, \log p(\mathbf{b} \mid \boldsymbol\varepsilon) = \operatorname*{arg\,min}_{\mathbf{b}}\, \boldsymbol\varepsilon^{\mathsf T}\boldsymbol\Omega^{-1}\boldsymbol\varepsilon,$$

where the optimization problem has been re-written using the fact that the logarithm is a strictly increasing function and that the minimizer of an objective is unaffected by terms that do not involve the optimization variable. Substituting $\boldsymbol\varepsilon = \mathbf{y} - \mathbf{X}\mathbf{b}$ gives

$$\hat{\boldsymbol\beta} = \operatorname*{arg\,min}_{\mathbf{b}}\, (\mathbf{y} - \mathbf{X}\mathbf{b})^{\mathsf T}\boldsymbol\Omega^{-1}(\mathbf{y} - \mathbf{X}\mathbf{b}).$$

Feasible generalized least squares

If the covariance of the errors $\boldsymbol\Omega$ is unknown, one can get a consistent estimate of $\boldsymbol\Omega$, say $\widehat{\boldsymbol\Omega}$, using an implementable version of GLS known as the feasible generalized least squares (FGLS) estimator.

In FGLS, modeling proceeds in two stages:

  1. The model is estimated by OLS or another consistent (but inefficient) estimator, and the residuals are used to build a consistent estimator of the errors covariance matrix (to do so, one often needs to examine the model adding additional constraints; for example, if the errors follow a time series process, a statistician generally needs some theoretical assumptions on this process to ensure that a consistent estimator is available).
  2. Then, using the consistent estimator of the covariance matrix of the errors, one can implement GLS ideas.

Whereas GLS is more efficient than OLS under heteroscedasticity (also spelled heteroskedasticity) or autocorrelation, this is not true for FGLS. The feasible estimator is asymptotically more efficient (provided the errors covariance matrix is consistently estimated), but for a small to medium-sized sample, it can actually be less efficient than OLS. This is why some authors prefer to use OLS and reformulate their inferences by simply considering an alternative estimator for the variance of the estimator that is robust to heteroscedasticity or serial autocorrelation. However, for large samples, FGLS is preferred over OLS under heteroskedasticity or serial correlation. A cautionary note is that the FGLS estimator is not always consistent; one case in which FGLS might be inconsistent is if there are individual-specific fixed effects.

In general, this estimator has different properties than GLS. For large samples (i.e., asymptotically), FGLS shares (under appropriate conditions) all the properties of GLS, but for finite samples, the properties of FGLS estimators are unknown: they vary dramatically with each particular model, and as a general rule their exact distributions cannot be derived analytically. For finite samples, FGLS may be less efficient than OLS in some cases. Thus, while GLS can be made feasible, it is not always wise to apply this method when the sample is small. A method used to improve the accuracy of the estimators in finite samples is to iterate: take the residuals from FGLS to update the errors' covariance estimator, then update the FGLS estimation, applying the same idea iteratively until the estimators vary by less than some tolerance. However, this method does not necessarily improve the efficiency of the estimator very much if the original sample was small.

A reasonable option when samples are not too large is to apply OLS but discard the classical variance estimator $\sigma^2 (\mathbf{X}^{\mathsf T}\mathbf{X})^{-1}$ (which is inconsistent in this framework) and instead use a HAC (Heteroskedasticity and Autocorrelation Consistent) estimator. In the context of autocorrelation, the Newey–West estimator can be used, and in heteroscedastic contexts, the Eicker–White estimator can be used instead. This approach is much safer, and it is the appropriate path to take unless the sample is large, where "large" is sometimes a slippery issue (e.g., if the error distribution is asymmetric the required sample will be much larger).

The ordinary least squares (OLS) estimator is calculated by

$$\hat{\boldsymbol\beta}_{\text{OLS}} = (\mathbf{X}^{\mathsf T}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\mathbf{y},$$

and estimates of the residuals $\hat{u}_j = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}_{\text{OLS}})_j$ are constructed.

For simplicity, consider the model for heteroscedastic and non-autocorrelated errors. Assume that the variance-covariance matrix $\boldsymbol\Omega$ of the error vector is diagonal, or equivalently that errors from distinct observations are uncorrelated. Then each diagonal entry may be estimated by the fitted residuals $\hat{u}_j$, so $\widehat{\boldsymbol\Omega}_{\text{OLS}}$ may be constructed by

$$\widehat{\boldsymbol\Omega}_{\text{OLS}} = \operatorname{diag}(\hat{\sigma}_1^2, \hat{\sigma}_2^2, \dots, \hat{\sigma}_n^2).$$

It is important to notice that the squared residuals cannot be used directly in the previous expression; an estimator of the errors' variances is needed. To do so, a parametric heteroskedasticity model or a nonparametric estimator can be used.

Estimate $\boldsymbol\beta_{\text{FGLS1}}$ using $\widehat{\boldsymbol\Omega}_{\text{OLS}}$ with weighted least squares:

$$\hat{\boldsymbol\beta}_{\text{FGLS1}} = (\mathbf{X}^{\mathsf T}\widehat{\boldsymbol\Omega}^{-1}_{\text{OLS}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\widehat{\boldsymbol\Omega}^{-1}_{\text{OLS}}\mathbf{y}.$$

The procedure can be iterated. The first iteration is given by

$$\hat{\mathbf{u}}_{\text{FGLS1}} = \mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}_{\text{FGLS1}},$$
$$\widehat{\boldsymbol\Omega}_{\text{FGLS1}} = \operatorname{diag}(\hat{\sigma}_{\text{FGLS1},1}^2, \dots, \hat{\sigma}_{\text{FGLS1},n}^2),$$
$$\hat{\boldsymbol\beta}_{\text{FGLS2}} = (\mathbf{X}^{\mathsf T}\widehat{\boldsymbol\Omega}^{-1}_{\text{FGLS1}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\widehat{\boldsymbol\Omega}^{-1}_{\text{FGLS1}}\mathbf{y}.$$

This estimation of $\widehat{\boldsymbol\Omega}$ can be iterated to convergence.
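A compact sketch of this iteration for the heteroscedastic, uncorrelated case. The variance model used here (regressing the log squared residuals on the regressors) is just one simple parametric choice; any consistent variance estimator could be substituted:

    import numpy as np

    def iterated_fgls(X, y, n_iter=5):
        """Iterated FGLS for heteroscedastic, uncorrelated errors.

        Per-observation variances are estimated by regressing log(e^2) on X
        (raw squared residuals are too noisy to be used directly)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # stage 1: OLS
        for _ in range(n_iter):
            resid = y - X @ beta
            # Stage 2: parametric variance model log(e^2) ~ X.
            gamma, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12),
                                        rcond=None)
            sigma2 = np.exp(X @ gamma)                      # fitted variances
            w = 1.0 / np.sqrt(sigma2)
            # Weighted least squares with the estimated diagonal Omega.
            beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        return beta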

Under regularity conditions, the FGLS estimator (or the estimator of its iterations, if a finite number of iterations are conducted) is asymptotically distributed as

$$\sqrt{n}\,(\hat{\boldsymbol\beta}_{\text{FGLS}} - \boldsymbol\beta) \ \xrightarrow{d}\ \mathcal{N}(0, V),$$

where $n$ is the sample size, and

$$V = \operatorname*{plim}\left(\frac{\mathbf{X}^{\mathsf T}\widehat{\boldsymbol\Omega}^{-1}\mathbf{X}}{n}\right)^{-1},$$

where plim denotes the limit in probability.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...