Friday, December 15, 2023

Self-efficacy

From Wikipedia, the free encyclopedia

In psychology, self-efficacy is an individual's belief in their capacity to act in the ways necessary to reach specific goals. The concept was originally proposed by the psychologist Albert Bandura.

Self-efficacy affects every area of human endeavor. By determining the beliefs a person holds regarding their power to affect situations, self-efficacy strongly influences both the power a person actually has to face challenges competently and the choices a person is most likely to make. These effects are particularly apparent, and compelling, with regard to investment behaviors in areas such as health, education, and agriculture.

A strong sense of self-efficacy promotes human accomplishment and personal well-being. A person with high self-efficacy views challenges as tasks to be mastered rather than threats to be avoided. Such people recover from failure faster and are more likely to attribute failure to a lack of effort. They approach threatening situations with the belief that they can control them. These traits have been linked to lower levels of stress and a lower vulnerability to depression.

In contrast, people with a low sense of self-efficacy view difficult tasks as personal threats and shy away from them. Difficult tasks lead them to look at the skills they lack rather than the ones they have. It is easy for them to lose faith in their own abilities after a failure. Low self-efficacy can be linked to higher levels of stress and depression.

Theoretical approaches

Social cognitive theory

Psychologist Albert Bandura has defined self-efficacy as one's belief in one's ability to succeed in specific situations or accomplish a task. One's sense of self-efficacy can play a major role in how one approaches goals, tasks, and challenges. The theory of self-efficacy lies at the center of Bandura's social cognitive theory, which emphasizes the role of observational learning and social experience in the development of personality. The main concept in social cognitive theory is that an individual's actions and reactions, including social behaviors and cognitive processes, in almost every situation are influenced by the actions that individual has observed in others. Because self-efficacy is developed from external experiences and self-perception and is influential in determining the outcome of many events, it is an important aspect of social cognitive theory. Self-efficacy represents the personal perception of external social factors. According to Bandura's theory, people with high self-efficacy—that is, those who believe they can perform well—are more likely to view difficult tasks as something to be mastered rather than something to be avoided.

Social learning theory

Social learning theory describes the acquisition of skills that are developed exclusively or primarily within a social group. Social learning depends on how individuals either succeed or fail at dynamic interactions within groups, and promotes the development of individual emotional and practical skills as well as accurate perception of self and acceptance of others. According to this theory, people learn from one another through observation, imitation, and modeling. Self-efficacy reflects an individual's understanding of what skills he/she can offer in a group setting.

Self-concept theory

Self-concept theory seeks to explain how people perceive and interpret their own existence from clues they receive from external sources, focusing on how these impressions are organized and how they are active throughout life. Successes and failures are closely related to the ways in which people have learned to view themselves and their relationships with others. This theory describes self-concept as learned (i.e., not present at birth); organized (in the way it is applied to the self); and dynamic (i.e., ever-changing, and not fixed at a certain age).

Attribution theory

Attribution theory focuses on how people attribute events and how those beliefs interact with self-perception. Attribution theory defines three major elements of cause:

  • Locus is the location of the perceived cause. If the locus is internal (dispositional), feelings of self-esteem and self-efficacy will be enhanced by success and diminished by failure.
  • Stability describes whether the cause is perceived as static or dynamic over time. It is closely related to expectations and goals, in that when people attribute their failures to stable factors such as the difficulty of a task, they will expect to fail in that task in the future.
  • Controllability describes whether a person feels actively in control of the cause. Failing at a task one thinks one cannot control can lead to feelings of humiliation, shame, and/or anger.

Sources of self-efficacy

Mastery experiences

According to Bandura, the most effective way to build self-efficacy is to engage in mastery experiences. A mastery experience can be defined as a personal experience of success. Achieving difficult goals in the face of adversity helps build confidence and strengthen perseverance.

Vicarious experiences of social models

Another source of self-efficacy is vicarious experience of social models. Seeing someone you view as similar to yourself succeed at something difficult can lead you to believe that you have the skills necessary to achieve a similar goal. However, the inverse is also true: seeing someone fail at a task can lead to doubt in your own skills and abilities. "The greater the assumed similarity, the more persuasive are the models' successes and failures."

Belief in success

A third source of self-efficacy is strengthening the belief that one has the ability to succeed. Those who are positively persuaded that they have the ability to complete a given task show greater and more sustained effort in completing it, and such persuasion also lessens the effect of self-doubt. However, it is important that those doing the encouraging place the person in situations where success is likely to be attained; if the person is placed prematurely in a situation with no hope of success, their self-efficacy can be undermined.

Physiological and psychological states

A person's emotional and physiological state can also influence their beliefs about their ability to perform in a given situation. When judging their own capabilities, people often take in information from their bodies, and how they interpret that information affects self-efficacy. For example, in activities that require physical strength, someone may take fatigue or pain as an indicator either of inability or of effort.

How it affects human function

Choices regarding behavior

People generally avoid tasks where self-efficacy is low, but undertake tasks where self-efficacy is high. When self-efficacy is significantly beyond actual ability, it leads to an overestimation of the ability to complete tasks. On the other hand, when self-efficacy is significantly lower than actual ability, it discourages growth and skill development. Research shows that the optimum level of self-efficacy is slightly above ability; in this situation, people are most encouraged to tackle challenging tasks and gain experience. Self-efficacy comprises dimensions such as magnitude, strength, and generality, which describe how one believes one will perform on a specific task.

Motivation

High self-efficacy can affect motivation in both positive and negative ways. In general, people with high self-efficacy are more likely to make efforts to complete a task, and to persist longer in those efforts, than those with low self-efficacy. The stronger the self-efficacy or mastery expectations, the more active the efforts.

A negative effect of low self-efficacy is that it can lead to a state of learned helplessness. Learned helplessness was studied by Martin Seligman in an experiment in which shocks were applied to animals. Through the experiment, it was discovered that the animals placed in a cage where they could escape shocks by moving to a different part of the cage did not attempt to move if they had formerly been placed in a cage in which escape from the shocks was not possible. Low self-efficacy can lead to this state in which it is believed that no amount of effort will make a difference in the success of the task at hand.

Work performance

Self-efficacy theory has been embraced by management scholars and practitioners because of its applicability in the workplace. Overall, self-efficacy is positively and strongly related to work-related performance, as measured by the weighted average correlation across 114 selected studies. The strength of the relationship, though, is moderated by both task complexity and environmental context. For more complex tasks, the relationship between self-efficacy and work performance is weaker than for easier work-related tasks. In actual work environments, which are characterized by performance constraints, ambiguous demands, deficient performance feedback, and other complicating factors, the relationship appears weaker than in controlled laboratory settings. The implication of this research is that managers should provide accurate descriptions of tasks and give clear and concise instructions. Moreover, they should provide the necessary supporting elements, including training employees in developing their self-efficacy in addition to task-related skills, for employees to be successful. It has also been suggested that managers should factor in self-efficacy when selecting candidates for developmental or training programs: those who are high in self-efficacy have been found to learn more, which leads to higher job performance.
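
For readers unfamiliar with how such pooled estimates are formed, a "weighted average correlation" is essentially a sample-size-weighted mean of the correlations reported by individual studies. The sketch below shows that calculation only; the study values are invented for illustration and are not the 114 studies from the meta-analysis.

    # Illustrative only: sample-size-weighted mean correlation across studies.
    # The (n, r) pairs below are invented; the cited meta-analysis pooled 114 studies.

    studies = [
        (120, 0.45),  # (sample size, correlation between self-efficacy and performance)
        (300, 0.32),
        (85, 0.51),
        (210, 0.38),
    ]

    total_n = sum(n for n, _ in studies)
    weighted_r = sum(n * r for n, r in studies) / total_n
    print(f"Pooled (sample-size-weighted) correlation: {weighted_r:.3f}")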

Social cognitive theory explains that employees use five basic capabilities to influence themselves in order to initiate, regulate, and sustain their behavior: symbolizing, forethought, observational, self-regulatory, and self-reflective capabilities.

One study presents a new questionnaire called Work Agentic Capabilities (WAC), which measures four agentic capabilities in the organizational context: forethought, self-regulation, self-reflection, and vicarious capability. The WAC questionnaire was validated through exploratory and confirmatory factor analyses and was found to be positively correlated with psychological capital, positive job attitudes, proactive organizational behaviors, perceived job performance, and promotion prospects. The study concludes that the WAC questionnaire can reliably measure agentic capabilities and can be useful in understanding sociodemographic and organizational differences in mean levels of agentic capabilities.

Thought patterns and responses

Self-efficacy has several effects on thought patterns and responses:

  • Low self-efficacy can lead people to believe tasks to be harder than they actually are, while high self-efficacy can lead people to believe tasks to be easier than they are. This often results in poor task planning, as well as increased stress.
  • People become erratic and unpredictable when engaging in a task in which they have low self-efficacy.
  • People with high self-efficacy tend to take a wider view of a task in order to determine the best plan.
  • Obstacles often stimulate people with high self-efficacy to greater efforts, where someone with low self-efficacy will tend toward discouragement and giving up.
  • A person with high self-efficacy will attribute failure to external factors, where a person with low self-efficacy will blame low ability. For example, someone with high self-efficacy in regards to mathematics may attribute a poor test grade to a harder-than-usual test, illness, lack of effort, or insufficient preparation. A person with a low self-efficacy will attribute the result to poor mathematical ability.

Health behaviors

A number of studies on the adoption of health practices have measured self-efficacy to assess its potential to initiate behavior change. With increased self-efficacy, individuals have greater confidence in their ability and are thus more likely to engage in healthy behaviors. Greater engagement in healthy behaviors, in turn, results in positive patient health outcomes such as improved quality of life. Choices affecting health (such as smoking, physical exercise, dieting, condom use, dental hygiene, seat belt use, and breast self-examination) are dependent on self-efficacy. Self-efficacy beliefs are cognitions that determine whether health behavior change will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and failures. Self-efficacy influences how high people set their health goals (e.g., "I intend to reduce my smoking", or "I intend to quit smoking altogether").

Relationship to locus of control

Bandura showed that difference in self-efficacy correlates to fundamentally different world views. People with high self-efficacy generally believe that they are in control of their own lives, that their own actions and decisions shape their lives, while people with low self-efficacy may see their lives as outside their control. For example, a student with high self-efficacy who does poorly on an exam will likely attribute the failure to the fact that they did not study enough. However, a student with low self-efficacy who does poorly on an exam is likely to believe the cause of that failure was due to the test being too difficult or challenging, which the student does not control.

Factors affecting self-efficacy

Bandura identifies four factors affecting self-efficacy.

  1. Experience, or "enactive attainment" – The experience of mastery is the most important factor determining a person's self-efficacy. Success raises self-efficacy, while failure lowers it. According to psychologist Erik Erikson: "Children cannot be fooled by empty praise and condescending encouragement. They may have to accept artificial bolstering of their self-esteem in lieu of something better, but what I call their accruing ego identity gains real strength only from wholehearted and consistent recognition of real accomplishment, that is, achievement that has meaning in their culture."
  2. Modeling, or "vicarious experience" – Modeling is experienced as, "If they can do it, I can do it as well". When we see someone succeeding, our own self-efficacy increases; when we see people failing, our self-efficacy decreases. This process is most effectual when we see ourselves as similar to the model. Although not as influential as direct experience, modeling is particularly useful for people who are unsure of themselves.
  3. Social persuasion – Social persuasion generally manifests as direct encouragement or discouragement from another person. Discouragement is generally more effective at decreasing a person's self-efficacy than encouragement is at increasing it.
  4. Physiological factors – In stressful situations, people commonly exhibit signs of distress: shakes, aches and pains, fatigue, fear, nausea, etc. Perceptions of these responses in oneself can markedly alter self-efficacy. Getting "butterflies in the stomach" before public speaking will be interpreted by someone with low self-efficacy as a sign of inability, thus decreasing self-efficacy further, whereas someone with high self-efficacy would interpret such physiological signs as normal and unrelated to ability. It is one's belief in the implications of the physiological response, rather than the response itself, that alters self-efficacy.

Genetic and environmental determinants

In a Norwegian twin study, the heritability of self-efficacy in adolescents was estimated at 75 percent. The remaining variance, 25 percent, was due to environmental influences not shared between family members. The shared family environment did not contribute to individual differences in self-efficacy. The twins reared-together design may overestimate the effect of genetic influences and underestimate shared environmental influences because variables measured on the family level are modeled to be equal for both twins and thus cannot be separated into genetic and environmental components. Employing an alternative design, namely that of adoptive siblings, Buchanan et al. found significant shared environmental effects.
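
As a rough illustration of how a classical twin design apportions variance, Falconer's formulas estimate genetic (A), shared-environment (C), and non-shared-environment (E) components from monozygotic and dizygotic twin correlations. The correlations below are invented so that the split roughly matches the reported 75%/25% figures; the actual study used formal biometric modeling rather than this textbook shortcut.

    # Illustrative only: Falconer's approximation for an ACE variance decomposition.
    # r_mz and r_dz are invented; the Norwegian twin study used biometric
    # (structural equation) modeling, not this shortcut.

    r_mz = 0.75   # hypothetical correlation between monozygotic twins
    r_dz = 0.375  # hypothetical correlation between dizygotic twins

    A = 2 * (r_mz - r_dz)  # additive genetic variance (heritability)
    C = 2 * r_dz - r_mz    # shared (family) environment
    E = 1 - r_mz           # non-shared environment plus measurement error

    print(f"A (heritability): {A:.2f}")   # -> 0.75
    print(f"C (shared env):   {C:.2f}")   # -> 0.00
    print(f"E (non-shared):   {E:.2f}")   # -> 0.25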

Self-efficacy was also found to be influenced by environmental factors such as cultural context, home environment, and educational environment. For example, parents provide their children with sets of aspirations, role models, and expectations, and form beliefs about their children's abilities. Parents' beliefs are communicated to their children and affect the children's own ability beliefs. The classroom environment can also influence students' self-efficacy through the amount and type of teacher attention, social comparisons, the tasks, the grading system, and more. These are often influenced by the school environment, including its culture and its educational philosophy. Studies showed that school environment influences the way the four sources of self-efficacy shape students' academic self-efficacy. For example, in different school systems (Democratic schools, Waldorf schools, and mainstream public schools), there were differences in the way academic self-efficacy changed across grade levels, as well as variations in the roles of the various sources of self-efficacy. Both parental and educational environments are embedded in wider cultural contexts, which influence the way self-efficacy is formed. For example, the mathematics self-efficacy of students from collectivist cultures was found to be more influenced by vicarious experiences and social persuasion than the self-efficacy of students from individualist cultures.

Theoretical models of behavior

A theoretical model of the effect of self-efficacy on transgressive behavior was developed and verified in research with school children.

Prosociality and moral disengagement

Prosocial behavior (such as helping others, sharing, and being kind and cooperative) and moral disengagement (manifesting in behaviors such as making excuses for bad behavior, avoiding responsibility for consequences, and blaming the victim) are negatively correlated. Academic, social, and self-regulatory self-efficacy encourages prosocial behavior, and thus helps prevent moral disengagement.

Over-efficaciousness in learning

In low-performing students, self-efficacy is not a self-fulfilling prophecy. Over-efficaciousness, or 'illusional' efficacy, discourages the critical examination of one's practices, thereby inhibiting professional learning. One study, which included 101 lower-division Portuguese students at U.T. Austin, examined the students' beliefs about learning, goal attainment, and motivation to continue with language study. It concluded that over-efficaciousness negatively affected student motivation: students who believed they were "good at languages" had less motivation to study.

Health behavior change

Social-cognitive models of health behavior change cast self-efficacy as a predictor, mediator, or moderator. As a predictor, self-efficacy is supposed to facilitate the forming of behavioral intentions, the development of action plans, and the initiation of action. As a mediator, self-efficacy can help prevent relapse to unhealthy behavior. As a moderator, self-efficacy can support the translation of intentions into action. See Health action process approach.
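
In statistical terms, a "moderator" is usually modeled as an interaction: the strength of the intention-to-behavior link depends on the level of self-efficacy. The sketch below shows how such an interaction term is commonly coded; the data and effect sizes are synthetic and not drawn from any particular health action process approach study.

    # Synthetic illustration of moderation as an interaction term:
    # intention predicts behavior more strongly when self-efficacy is high.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    intention = rng.normal(size=n)
    self_efficacy = rng.normal(size=n)

    # Invented data-generating process with a true interaction effect.
    behavior = (0.2 * intention + 0.1 * self_efficacy
                + 0.4 * intention * self_efficacy + rng.normal(size=n))

    X = sm.add_constant(np.column_stack([intention, self_efficacy,
                                         intention * self_efficacy]))
    fit = sm.OLS(behavior, X).fit()
    print(fit.params)  # the last coefficient estimates the moderation (interaction) effect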

Possible applications

Academic contexts

Parents' sense of academic efficacy for their child is linked to their children's scholastic achievement. If parents have higher perceived academic capabilities and aspirations for their child, the child will come to share those beliefs. This promotes academic self-efficacy for the child, which in turn leads to scholastic achievement. It also leads to prosocial behavior and reduces vulnerability to feelings of futility and depression. There is a relationship between low self-efficacy and depression.

In one study, the majority of a group of students questioned felt they had difficulty with listening in class situations. Instructors then helped strengthen their listening skills by making them aware of how the use of different strategies could produce better outcomes. In this way, their levels of self-efficacy improved as they continued to figure out which strategies worked for them.

STEM

Self-efficacy has proven especially useful for helping undergraduate students to gain insights into their career development in STEM fields. Researchers have reported that mathematics self-efficacy is more predictive of mathematics interest, choice of math-related courses, and math majors than past achievements in math or outcome expectations.

Self-efficacy theory has been applied to the career area to examine why women are underrepresented in male-dominated STEM fields such as mathematics, engineering, and science. It was found that gender differences in self-efficacy expectancies importantly influence the career-related behaviors and career choices of young women.

Technical self-efficacy was found to be a crucial factor for teaching computer programming to school students, as students with higher levels of technological self-efficacy achieve higher learning outcomes. The effect of technical self-efficacy was found to be even stronger than the effect of gender.

Writing

Writing studies research indicates a strong relationship linking perceived self-efficacy to motivation and performance outcomes. Students' academic accomplishments are closely connected to their beliefs about their own efficacy and the motivation they construct within their contexts. The resilient effort that highly self-efficacious individuals exert usually enables them to face challenges and produce high-performance achievements. In addition, individuals place more value on the academic activities through which they have achieved success. Recent writing research has accentuated this connection between writers' self-efficacy, their motivation and effort, and their success in writing. In other words, writers with a high level of confidence in their writing capabilities and processes are more willing to work persistently toward satisfying and effective writing. In contrast, those with a weaker sense of efficacy are less able to withstand failure and tend to avoid what they perceive as a painful experience: writing. There is a causal relationship between the self-efficacy beliefs that writers hold and the accomplishments they can achieve in their writing. Accordingly, scholars have emphasized that writing self-efficacy beliefs are instrumental in predicting writing outcomes.

Empirically, a study of introductory composition courses found that poor writing is driven more strongly by writers' self-doubt about producing effective writing than by their actual writing capabilities. Self-referent thought is a powerful mediator linking one's knowledge and actions; therefore, even when individuals have the required skills and knowledge, self-referent thought may continue to hinder their optimal performance. A 1997 study looked at how self-efficacy could influence the writing ability of 5th graders in the United States. Researchers found a direct correlation between students' self-efficacy and their writing apprehension, essay performance, and perceived usefulness of writing. As the researchers suggest, this study is important because it showed how important it is for teachers both to teach skills and to build confidence in their students. A more recent study replicated these findings, showing that students' beliefs about their own writing affected their self-efficacy, apprehension, and performance. This is also evident in a study of collegiate students that reported a change in knowledge seeking as an outcome of promoting their self-efficacy. Thus, students' self-efficacy is predictive of their production of effective writing, and increasing their positive beliefs about writing results in better writing performance. Nurturing participants' perceived self-efficacy elevated the goals they set in writing courses, which in turn improved the quality of their writing and increased their sense of self-satisfaction. Self-regulatory writing is another key determinant associated with writing efficacy and has a great influence on writing development. Self-regulation encompasses managing the complexities of writing, structuring time, choosing strategies, and monitoring one's deficiencies and capabilities. Through self-regulatory efficacy, writers build greater self-efficacy, which in turn improves their writing attainments.

Motivation

One of the factors most commonly associated with self-efficacy in writing studies is motivation. Motivation is often divided into two categories: extrinsic and intrinsic. McLeod suggests that intrinsic motivators tend to be more effective than extrinsic motivators because students then perceive the given task as inherently valuable. Additionally, McCarthy, Meier, and Rinderer explain that writers who are intrinsically motivated tend to be more self-directed, take active control of their writing, and see themselves as more capable of setting and accomplishing goals. Furthermore, writing studies research indicates that self-efficacy influences student choices, effort, persistence, perseverance, thought patterns, and emotional reactions when completing a writing assignment. Students with a high self-efficacy are more likely to attempt and persist in unfamiliar writing tasks.

Performance outcomes

Self-efficacy has often been linked to students' writing performance outcomes. More so than any other element within the cognitive-affective domain, self-efficacy beliefs have proven to be predictive of performance outcomes in writing. In order to assess the relationship between self-efficacy and writing capabilities, several studies have constructed scales to measure students' self-efficacy beliefs. The results of these scales are then compared to student writing samples. The studies included other variables, such as writing anxiety, grade goals, depth of processing, and expected outcomes. However, self-efficacy was the only variable that was a statistically significant predictor of writing performance.
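
Studies of this kind typically correlate each scale score with a rated writing sample and check which relationships are statistically significant. A hedged sketch with synthetic data follows; the variables and effect sizes are invented and do not reproduce any of the studies cited.

    # Synthetic illustration: correlating candidate predictors with writing scores.
    # Data are invented; only self-efficacy is given a real effect here.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 150

    self_efficacy = rng.normal(size=n)
    writing_anxiety = rng.normal(size=n)
    grade_goals = rng.normal(size=n)
    writing_score = 0.5 * self_efficacy + rng.normal(size=n)

    for name, predictor in [("self-efficacy", self_efficacy),
                            ("writing anxiety", writing_anxiety),
                            ("grade goals", grade_goals)]:
        r, p = pearsonr(predictor, writing_score)
        print(f"{name:16s} r = {r:+.2f}, p = {p:.4f}")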

Public speaking

A strong negative relationship has been suggested between levels of speech anxiety and self-efficacy.

Healthcare

As the focus of healthcare continues to transition from the medical model to health promotion and preventive healthcare, the role of self-efficacy as a potent influence on health behavior and self-care has come under review. According to Luszczynska and Schwarzer, self-efficacy plays a role in influencing the adoption, initiation, and maintenance of healthy behaviors, as well as curbing unhealthy practices.

Healthcare providers can integrate self-efficacy interventions into patient education. One method is to provide examples of other people acting on a health promotion behavior and then work with the patient to encourage belief in their own ability to change. Furthermore, when nurses followed up by telephone after hospital discharge, individuals with chronic obstructive pulmonary disease (COPD) were found to have increased self-efficacy in managing breathing difficulties. In this study, the nurses helped reinforce education and reassured patients regarding their self-care management techniques in their home environment.

Other contexts

At the National Kaohsiung First University of Science and Technology in Taiwan, researchers investigated the correlations between general Internet self-efficacy (GISE), Web-specific self-efficacy (WSE), and e-service usage. Researchers concluded that GISE directly affects the WSE of a consumer, which in turn shows a strong correlation with e-service usage. These findings are significant for future consumer targeting and marketing.

Furthermore, self-efficacy has been included as one of the four factors of core self-evaluation, one's fundamental appraisal of oneself, along with locus of control, neuroticism, and self-esteem. Core self-evaluation has been shown to predict job satisfaction and job performance.

Researchers have also examined self-efficacy in the context of the work–life interface. Chan et al. (2016) developed and validated a measure "self-efficacy to regulate work and life" and defined it as "the belief one has in one's own ability to achieve a balance between work and non-work responsibilities, and to persist and cope with challenges posed by work and non-work demands" (p. 1758). Specifically, Chan et al. (2016) found that "self-efficacy to regulate work and life" helped to explain the relationship between work–family enrichment, work–life balance, and job satisfaction and family satisfaction. Chan et al. (2017) also found that "self-efficacy to regulate work and life" assists individuals to achieve work–life balance and work engagement despite the presence of family and work demands.

Subclassifications

While self-efficacy is sometimes measured as a whole, as with the General Self-Efficacy Scale, it is also measured in particular functional situations.
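
Instruments such as these are typically Likert-type questionnaires whose item responses are summed, after reverse-scoring negatively worded items, into a single score. The sketch below is a generic scoring routine; the item responses, response range, and reverse-keyed positions are invented and are not the actual General Self-Efficacy Scale.

    # Generic Likert-scale scoring sketch. Responses, scale range, and reverse-keyed
    # item positions are invented; this is not the actual General Self-Efficacy Scale.

    responses = [4, 3, 5, 2, 4, 4, 3, 5]  # one respondent's answers, each rated 1-5
    reverse_keyed = {3, 7}                # zero-based indices of negatively worded items
    scale_max = 5

    def scale_score(responses, reverse_keyed, scale_max):
        total = 0
        for i, r in enumerate(responses):
            total += (scale_max + 1 - r) if i in reverse_keyed else r
        return total

    print("Self-efficacy scale score:", scale_score(responses, reverse_keyed, scale_max))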

Social self-efficacy

Social self-efficacy has been variably defined and measured. According to Smith and Betz, social self-efficacy is "an individual's confidence in her/his ability to engage in the social interactional tasks necessary to initiate and maintain interpersonal relationships." They measured social self-efficacy using an instrument of their own devising called the Scale of Perceived Social Self-Efficacy, which measured six domains: (1) making friends, (2) pursuing romantic relationships, (3) social assertiveness, (4) performance in public situations, (5) groups or parties, and (6) giving or receiving help. More recently, it has been suggested that social self-efficacy can also be operationalised in terms of cognitive (confidence in knowing what to do in social situations) and behavioral (confidence in performing in social situations) social self-efficacy.

Matsushima and Shiomi measured self-efficacy by focusing on self-confidence about social skills in personal relationships, trust in friends, and being trusted by friends.

Researchers suggest that social self-efficacy is strongly correlated with shyness and social anxiety.

Academic self-efficacy

Academic self-efficacy refers to the belief that one can successfully engage in and complete course-specific academic tasks, such as accomplishing course aims, satisfactorily completing assignments, achieving a passing grade, and meeting the requirements to continue to pursue one's major course of study.[78] Various empirical inquiries have been aimed at measuring academic self-efficacy.

Positive academic emotions, such as pride, enthusiasm, and enjoyment, are likely to be influenced by the level of self-efficacy an individual holds. This is because self-efficacy has been linked to an individual's belief in their ability to successfully complete tasks. Therefore, as an individual's self-efficacy increases, they may be more likely to experience positive academic emotions.

Eating self-efficacy

Eating self-efficacy refers to an individual's perceived belief that they can resist the impulse to eat.

Other

Other areas of self-efficacy that have been identified for study include teacher self-efficacy and technological self-efficacy.

Clarifications and distinctions

Self-efficacy versus Efficacy
Unlike efficacy, which is the power to produce an effect—in essence, competence—the term self-efficacy is used, by convention, to refer to the belief (accurate or not) that one has the power to produce that effect by completing a given task or activity related to that competency. Self-efficacy is the belief in one's efficacy.
Self-efficacy versus Self-esteem
Self-efficacy is the perception of one's own ability to reach a goal; self-esteem is the sense of self-worth. For example, a person who is a terrible rock climber would probably have poor self-efficacy with regard to rock climbing, but this will not affect self-esteem if the person does not rely on rock climbing to determine self-worth. On the other hand, one might have enormous confidence with regard to rock climbing, yet set such a high standard, and base enough of self-worth on rock-climbing skill, that self-esteem is low. Someone who has high self-efficacy in general but is poor at rock climbing might have misplaced confidence, or believe that improvement is possible.
Self-efficacy versus Confidence
Canadian-American psychologist Albert Bandura describes the difference between self-efficacy and confidence as such:

the construct of self-efficacy differs from the colloquial term 'confidence.' Confidence is a nonspecific term that refers to strength of belief but does not necessarily specify what the certainty is about. I can be supremely confident that I will fail at an endeavor. Perceived self-efficacy refers to belief in one's agentive capabilities, that one can produce given levels of attainment. A self-efficacy belief, therefore, includes both an affirmation of a capability level and the strength of that belief.

Self-efficacy versus Self-concept
Self-efficacy comprises beliefs of personal capability to perform specific actions. Self-concept is measured more generally and includes the evaluation of such competence and the feelings of self-worth associated with the behaviors in question. In an academic situation, a student's confidence in their ability to write an essay is self-efficacy. Self-concept, on the other hand, could be how a student's level of intelligence affects their beliefs regarding their worth as a person.
Self-efficacy as part of core self-evaluations
Timothy A. Judge et al. (2002) have argued that the concepts of locus of control, neuroticism, generalized self-efficacy (which differs from Bandura's theory of self-efficacy), and self-esteem are so strongly correlated and exhibit such a high degree of theoretical overlap that they are actually aspects of the same higher-order construct, which Judge calls core self-evaluations.

Thursday, December 14, 2023

Cognitive test

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cognitive_test

Cognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence.

Modern cognitive tests originated through the work of James McKeen Cattell who coined the term "mental tests". They followed Francis Galton's development of physical and physiological tests. For example, Galton measured strength of grip and height and weight. He established an "Anthropometric Laboratory" in the 1880s where patrons paid to have physical and physiological attributes measured. Galton's measurements had an enormous influence on psychology. Cattell continued the measurement approach with simple measurements of perception. Cattell's tests were eventually abandoned in favor of the battery test approach developed by Alfred Binet.

List of human tests

  • Inductive reasoning tests
    • Inductive reasoning aptitude: Also known as abstract reasoning tests and diagrammatic-style tests, these examine a person's problem-solving skills. They are used to "measure the ability to work flexibly with unfamiliar information to find solutions." These tests typically present a set of patterns or sequences, with the test-taker determining what does or does not belong.
  • Intelligence quotient
    • Situational judgement test: A situational judgement test is used to examine how an individual responds to certain situations. Often these tests present a scenario with multiple responses, with the user selecting which response they feel is the most appropriate given the situation. This is used to assess how the user would respond to situations that may arise in the future. Companies that use situational judgement tests during their hiring process include Sony, Walmart, and Herbert Smith, among others.
    • Intelligence tests
      • Kohs block design test: "The Kohs Block Design Test is a non-verbal assessment of executive functioning, useful with the language and hearing impaired"
      • Mental age
      • Miller Analogies Test: According to Pearson Assessments, the Miller Analogies Test is used to determine a student's ability to think analytically. The test is 60 minutes long and is used by schools to distinguish those who are able to think analytically from those who are only "memorizing and repeating information"
      • Otis–Lennon School Ability Test: The OLSAT is a multiple-choice exam administered to students anywhere from Pre-K to 12th grade, used to identify which students are intellectually gifted. Students need to be able to: "Follow directions, detect likenesses and differences, recall words and numbers, classify items, establish sequences, solve arithmetic problems, and complete analogies." The test consists of a mixture of verbal and non-verbal sections, helping inform schools of a student's "verbal, nonverbal, and quantitative ability"
      • Raven's Progressive Matrices: The Raven's Progressive Matrices is a nonverbal test consisting of 60 multiple choice questions. This test is used to measure the individual's abstract reasoning, and is considered a nonverbal way to test an individual's "fluid intelligence."
      • Stanford–Binet Intelligence Scales: By measuring the memory, reasoning, knowledge, and processing power of the user, this test is able to determine "an individual's overall intelligence, cognitive ability, and detect any cognitive impairment or learning disabilities." This test measures five factors of cognitive ability, which are as follows: "fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing and working memory."
      • Wechsler Adult Intelligence Scale: The Wechsler Adult Intelligence Scale (WAIS) is used to determine and assess the intelligence of the participant. It is one of the more common tests used to measure an individual's intelligence quotient. The test has been revised multiple times since its creation: the original WAIS in 1955, the WAIS-R in 1981, the WAIS-III in 1996, and most recently the WAIS-IV in 2008. The test helps assess an individual's verbal comprehension, perceptual reasoning, working memory, and processing speed.
      • Wechsler Intelligence Scale for Children: The Wechsler Intelligence Scale for Children (WISC) is for children within the age range of six to sixteen years old. While this test can be used to help determine a child's intelligence quotient, it is often used to assess a child's cognitive abilities. First introduced in 1949, the WISC is now on its fifth edition (WISC-V), most recently updated in 2014. Similar to the WAIS (Wechsler Adult Intelligence Scale), this test helps assess an individual's verbal comprehension, perceptual reasoning, working memory, and processing speed.
      • Wechsler Preschool and Primary Scale of Intelligence: The Wechsler Preschool and Primary Scale of Intelligence (WPPSI) is used to assess the cognitive ability of children from two years and six months old to seven years and seven months old. The current version of the test is the fourth edition (WPPSI-IV). Children between two years and six months old and three years and eleven months old are tested on the following: "block design, information, object assembly, picture naming, and receptive vocabulary". Children between four years old and seven years and seven months old are tested on the following: "coding, comprehension, matrix reasoning, picture completion, picture concepts, similarities, symbol search, vocabulary, and word reasoning."
      • Wonderlic test: The Wonderlic test is a multiple-choice test consisting of 50 questions within a 12-minute time frame. Throughout the test, the questions become progressively more difficult. The test is used to determine not only the individual's intelligence quotient but also the individual's strengths and weaknesses. The questions range across "English, reading, math, and logic problems". The Wonderlic test is famously used by NFL teams to gain a better understanding of college prospects during the NFL combine.
  • Cognitive development tests
    • Cambridge Neuropsychological Test Automated Battery: The Cambridge Neuropsychological Test Automated Battery (CANTAB) is a test used to assess the "neuro-cognitive dysfunctions associated with neurologic disorders, pharmacologic manipulations, and neuro-cognitive syndromes." CANTAB is a computer-based program from Cambridge Cognition and can test for "working memory, learning and executive function; visual, verbal and episodic memory; attention, information processing and reaction time; social and emotion recognition, decision making and response control."
    • CAT4: The Cognitive Ability Test was developed by GL Education and is used to predict student success through the evaluation of verbal, non-verbal, mathematical, and spatial reasoning. It is being used by many international schools as part of their admissions process.
    • CDR computerized assessment system: The Cognitive Drug Research computerized assessment system is used to help determine if a drug has "cognitive-impairing properties". It is also used to "ensure that unwanted interactions with alcohol and other medications do not occur, or, if they do, to put them in context."
    • Cognitive bias (see also Emotion in animals § Cognitive bias test)
    • Cognitive pretesting: Cognitive pretests are used to evaluate the "comprehensibility of questions", usually given on a survey. This gives the surveyors a better understanding of how their questions are being perceived, and the "quality of the data" that is gained from the survey.
    • Draw-a-Person test: The Draw-a-Person test can be used on children, adolescents, and adults. It is most commonly used as a test for children and adolescents to assess their cognitive and intellectual ability by scoring their ability to draw human figures.
    • Knox Cubes: The Knox Cube Imitation Test (KCIT) is a nonverbal test used to assess intelligence. The creator of the KCIT, Howard A. Knox, described the test as: "Four 1-inch [black] cubes, 4 inches apart, are fastened to a piece of thin boarding. The movements and tapping are done with a smaller cube. The operator moves the cube from left to right facing the subject, and after completing each movement, the latter is asked to do likewise. Line a is tried first, then b, and so on to e. Three trials are given if necessary on lines a, b, c, and d, and five trials if needed on line e. To obtain the correct perspective the subject should be two feet from the cubes. The movements of the operator should be slow and deliberate."
    • Modern Language Aptitude Test
    • Multiple choice: The style of multiple choice examination was expanded upon in 1934 when IBM introduced a "test scoring machine" that electronically sensed the location of lead pencil marks on a scanning sheet. This further increased the efficiency of scoring multiple-choice items and created a large-scale educational testing method.
    • Pimsleur Language Aptitude Battery:
      • Grade levels: 6, 7, 8, 9, 10, 11, 12
      • Proficiency level: Beginner
      • Intended test use: placement, admission, fulfilling a requirement, aptitude
      • Skills tested: listening, grammar, vocabulary
      • Test length: 50–60 minutes
      • Test materials: reusable test booklet, consumable answer sheet, consumable performance chart and report to parents, test administrator manual, audio CD, scoring stencil for test administrator
      • Test format: multiple choice
      • Scoring method: number correct
      • Results reported: percentile, raw score
      • Administered by: trained testers, classroom teachers, school administrators
      • Administration time period: prior to foreign language study, at discretion of guidance counselor, school psychologist, or other administration
    • Porteus Maze test:
      • a supplement to the Stanford-Binet Intelligence Test.
      • PMT performance seems to be a valid indicator of planning and behavioral disinhibition across socioeconomic status and culture; it can be administered without the use of language and is inexpensive. The PMT also has a relatively short administration time of 10–15 minutes.
  • Consensus based assessment
    • Knowledge organization: Features Ranganathan's PMEST formula (Personality, Matter, Energy, Space, and Time), consisting of five fundamental categories, the arrangement of which is used to establish the facet order.
    • Knowledge hierarchies
  • Memory
  • Self
  • Thought
  • Mental chronometry
  • Neuropsychological tests: These are standardized tests that are given in the same manner to all examinees and are scored in a uniform fashion. Examinees' scores are interpreted by comparing them to the scores of healthy individuals from a similar demographic background and to expected levels of functioning.

List of animal tests

Life expectancy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Life_expectancy

Figures (captions only):
  • Life expectancy and healthy life expectancy in various countries of the world in 2019, according to WHO
  • Life expectancy and healthy life expectancy by sex
  • Life expectancy development in some large countries of the world since 1960
  • Life expectancy at birth, measured by region, between 1950 and 2050
  • Life expectancy by world region, from 1770 to 2018
  • "Gender Die Gap": global female life expectancy gap at birth for countries and territories as defined by WHO for 2019; bubble area is proportional to country population, based on UN estimates

Life expectancy is a statistical measure of the estimate of the span of a life. The most commonly used measure is life expectancy at birth (LEB), which can be defined in two ways. Cohort LEB is the mean length of life of a birth cohort (in this case, all individuals born in a given year) and can be computed only for cohorts born so long ago that all their members have died. Period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth through death, to the mortality rates observed at a given year. National LEB figures reported by national agencies and international organizations for human populations are estimates of period LEB.

In the Bronze Age and the Iron Age, human LEB was 26 years; in 2010, world LEB was 67.2 years. In recent years, LEB has been 49 in Eswatini (formerly Swaziland) and 83 in Japan. The combination of high infant mortality and deaths in young adulthood from accidents, epidemics, plagues, wars, and childbirth, before modern medicine was widely available, significantly lowers LEB. For example, a society with a LEB of 40 would have relatively few people dying at exactly 40: most will die before 30 or after 55. In populations with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity, LEB can be grossly misinterpreted, leading to the belief that a population with a low LEB would have a small proportion of older people. A different measure, such as life expectancy at age 5 (e5), can be used to exclude the effect of infant mortality and provide a simple measure of overall mortality rates other than in early childhood. For instance, in a society with a life expectancy of 30, it may nevertheless be common to have a 40-year remaining timespan at age 5 (but perhaps not a 60-year one).

Until the middle of the 20th century, infant mortality was approximately 40–60% of the total mortality. Excluding child mortality, the average life expectancy during the 12th–19th centuries was approximately 55 years. If a person survived childhood, they had about a 50% chance of living 50–55 years, instead of only 25–40 years. As of 2016, the overall worldwide life expectancy had reached the highest level that has been measured in modern times.

Aggregate population measures—such as the proportion of the population in various age groups—are also used alongside individual-based measures—such as formal life expectancy—when analyzing population structure and dynamics. Pre-modern societies had universally higher mortality rates and lower life expectancies at every age for both males and females. This example is relatively rare.

Life expectancy, longevity, and maximum lifespan are not synonymous. Longevity refers to the relatively long lifespan of some members of a population. Maximum lifespan is the age at death for the longest-lived individual of a species. Mathematically, life expectancy is denoted e_x and is the mean number of years of life remaining at a given age x, under a particular set of mortality rates. Because life expectancy is an average, a particular person may die many years before or after the expected survival.
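
To make the period measure concrete, e_x can be computed from a table of age-specific death probabilities q_x for a single year: survivors are carried forward age by age and the years they live are accumulated. The sketch below uses invented mortality rates purely for illustration; real life tables use observed q_x values and finer conventions.

    # Minimal period life-table sketch. q[x] is an invented probability of dying
    # between ages x and x+1; real tables use observed rates and finer conventions.

    q = {0: 0.03, 1: 0.002}
    q.update({x: 0.004 for x in range(2, 60)})
    q.update({x: 0.04 for x in range(60, 100)})
    q[100] = 1.0  # close the table at age 100

    def life_expectancy(q, start_age=0):
        """Expected further years of life at start_age (e_x), deaths assumed mid-year."""
        survival = 1.0       # proportion still alive, conditional on reaching start_age
        expected_years = 0.0
        for age in range(start_age, max(q) + 1):
            deaths = survival * q[age]
            expected_years += (survival - deaths) + 0.5 * deaths
            survival -= deaths
        return expected_years

    print(f"e_0 = {life_expectancy(q, 0):.1f} years")  # period life expectancy at birth
    print(f"e_5 = {life_expectancy(q, 5):.1f} years")  # excludes infant/early-child mortality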

Life expectancy is also used in plant or animal ecology, and in life tables (also known as actuarial tables). The concept of life expectancy may also be used in the context of manufactured objects, though the related term shelf life is commonly used for consumer products, and the terms "mean time to breakdown" and "mean time between failures" are used in engineering.

History

The earliest documented work on life expectancy was done in the 1660s by John Graunt, Christiaan Huygens, and Lodewijck Huygens.

Human patterns

Maximum

The longest verified lifespan for any human is that of Frenchwoman Jeanne Calment, who is verified as having lived to age 122 years, 164 days, between 21 February 1875 and 4 August 1997. This is referred to as the "maximum life span", which is the upper boundary of life, the maximum number of years any human is known to have lived. A theoretical study shows that the maximum life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years. According to a study by biologists Bryan G. Hughes and Siegfried Hekimi, there is no evidence for a limit on human lifespan. However, this view has been questioned on the basis of error patterns.

Records of human lifespan above age 100 are highly susceptible to errors. For example, the case of the previous world-record holder for human lifespan, Carrie C. White, was uncovered as a simple typographic error after more than two decades.

Variation over time

The following information is derived from the 1961 Encyclopædia Britannica and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender.

Life expectancy at birth takes account of infant mortality and child mortality but not prenatal mortality.

Era Life expectancy at birth in years Notes
Paleolithic 22–33 Based on data from modern hunter-gatherer populations, it is estimated that at age 15, life expectancy was another 39 years (54 years total). There was a 60% probability of surviving until age 15.
Neolithic 20–33 Based on Early Neolithic data, total life expectancy at 15 would be 28–33 years.
Bronze Age and Iron Age 26 Based on Early and Middle Bronze Age data, total life expectancy at 15 would be 28–36 years.
Classical Greece 25–28 Based on Athens Agora and Corinth data, total life expectancy at 15 would be 37–41 years. Most Greeks and Romans died young. About half of all children died before adolescence. Those who survived to the age of 30 had a reasonable chance of reaching 50 or 60. The truly elderly, however, were rare. Because so many died in childhood, life expectancy at birth was probably between 20 and 30 years.
Ancient Rome 20–33 Data is lacking, but computer models provide the estimate. If a person survived to age 20, they could expect to live around 30 years more. Life expectancy was probably slightly longer for women than for men. When infant mortality is factored out (i.e., counting only the 67–75% who survived the first year), life expectancy is around 34–41 more years (i.e., expected to live to 35–42). When child mortality is factored out (i.e., counting only the 55–65% who survived to age 5), life expectancy is around 40–45 (i.e., age 45–50). The ~50% who reached age 10 could also expect to reach ~45–50; at 15, ~48–54; at 40, ~60; at 50, ~64–68; at 60, ~70–72; at 70, ~76–77.

Wang clan of China, 1st c. AD – 1749 35 For the 60% that survived the first year (i.e. excluding infant mortalities), life expectancy rose to ~35.
Early Middle Ages (Europe, from the late 5th or early 6th century to the 10th century) 30–35 Life expectancy for those of both sexes who survived birth averaged about 30–35 years. However, if a Gaulish boy made it past age 20, he might expect to live 25 more years, while a woman at age 20 could normally expect about 17 more years. Anyone who survived until 40 had a good chance at another 15 to 20 years.
Pre-Columbian Mesoamerica >40 The average Aztec life expectancy was 41.2 years for men and 42.1 for women.
Late medieval English peerage 30–33 In Europe, around one-third of infants died in their first year. Once children reached the age of 10, their life expectancy was 32.2 years, and for those who survived to 25, the remaining life expectancy was 23.3 years. Such estimates reflected the life expectancy of adult males from the higher ranks of English society in the Middle Ages, and were similar to that computed for monks of the Christ Church in Canterbury during the 15th century. At age 21, life expectancy of an aristocrat was an additional 43 years (total age 64).
Early modern Britain (16th – 18th century) 33–40 For males in the 18th century it was 34 years. For 15-year-old girls: around the 15th – 16th century it was ~33 years (48 total), and in the 18th century it was ~42 (57 total).
18th-century England 25–40 For most of the century it ranged from 35 to 40; however, in the 1720s it dipped as low as 25. For 15-year-old girls, it was ~42 (57 total). During the second half of the century it was ~37, while for the elite it passed 40 and approached 50.
Pre-Champlain Canadian Maritimes 60 Samuel de Champlain wrote that in his visits to Mi'kmaq and Huron communities, he met people over 100 years old. Daniel Paul attributes the incredible lifespan in the region to low stress and a healthy diet of lean meats, diverse vegetables, and legumes.
18th-century Prussia 24.7 For males.
18th-century France 27.5–30 For males: 24.8 years in 1740–1749, 27.9 years in 1750–1759, 33.9 years in 1800–1809.
18th-century American colonies 28 Massachusetts colonists who reached the age of 50 could expect to live until 71, and those who were still alive at 60 could expect to reach 75.
Beginning of the 19th century ~29 Demographic research suggests that at the beginning of the 19th century, no country in the world had a life expectancy longer than 40 years. India was ~25 years, while Belgium was ~40 years. For Europe as a whole, it was ~33 years.
Early 19th-century England 40 For the 84% who survived the first year (i.e. excluding infant mortality), the average age was ~46–48. If they reached 20, then it was ~60; if 50, then ~70; if 70, then ~80. For a 15-year-old girl it was ~60–65. For the upper-class, LEB rose from ~45 to 50.

Less than half of the people born in the mid-19th century made it past their 50th birthday. In contrast, 97% of the people born in 21st century England and Wales can expect to live longer than 50 years.

19th-century British India 25.4
19th-century world average 28.5–32 Over the course of the century: Europe rose from ~33 to 43, the Americas from ~35 to 41, Oceania ~35 to 48, Asia ~28, Africa 26. In 1820s France, LEB was ~38, and for the 80% that survived, it rose to ~47. For Moscow serfs, LEB was ~34, and for the 66% that survived, it rose to ~36. Western Europe in 1830 was ~33 years, while for the people of Hau-Lou in China, it was ~40. The LEB for a 10-year-old in Sweden rose from ~44 to ~54.
1900 world average 31–32 Around 48 years in Oceania, 43 in Europe, and 41 in the Americas. Around 47 in the U.S. and around 48 for 15-year-old girls in England.
1950 world average 45.7–48 Around 60 years in Europe, North America, Oceania, Japan, and parts of South America; but only 41 in Asia and 36 in Africa. Norway led with 72, while in Mali it was merely 26.
2019–2020 world average 72.6–73.2

  • Females: 75.6 years
  • Males: 70.8 years
  • Range: ~54 (Central African Republic) – 85.3 (Hong Kong)
Life expectancy increases with age already achieved.

Life expectancy increases with age as the individual survives the higher mortality rates associated with childhood. For instance, the table above gives the life expectancy at birth among 13th-century English nobles as 30. Having survived to the age of 21, a male member of the English aristocracy in this period could expect to live:

  • 1200–1300: to age 64
  • 1300–1400: to age 45 (because of the bubonic plague)
  • 1400–1500: to age 69
  • 1500–1550: to age 71

17th-century English life expectancy was only about 35 years, largely because infant and child mortality remained high. Life expectancy was under 25 years in the early Colony of Virginia, and in seventeenth-century New England, about 40% died before reaching adulthood. During the Industrial Revolution, the life expectancy of children increased dramatically. The under-5 mortality rate in London decreased from 74.5% (in 1730–1749) to 31.8% (in 1810–1829).

Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic, the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health.

Regional variations

Life expectancy in 1800, 1950, and 2015 – visualization by Our World in Data

Human beings are expected to live on average 30–40 years in Eswatini and 82.6 years in Japan. An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities, public health, and diet.

Plot of life expectancy vs. GDP per capita in 2009. This phenomenon is known as the Preston curve.
Graphs of life expectancy at birth for some sub-Saharan countries showing the fall in the 1990s primarily due to the HIV pandemic

There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health, medical care, and diet. The impact of AIDS on life expectancy is particularly notable in many African countries. According to projections made by the United Nations in 2002, the life expectancy at birth for 2010–2015 (if HIV/AIDS did not exist) would have been:

  • 70.7 years instead of 31.6 years, Botswana
  • 69.9 years instead of 41.5 years, South Africa
  • 70.5 years instead of 31.8 years, Zimbabwe

Actual life expectancy in Botswana declined from 65 in 1990 to 49 in 2000 before increasing to 66 in 2011. In South Africa, life expectancy was 63 in 1990, 57 in 2000, and 58 in 2011. And in Zimbabwe, life expectancy was 60 in 1990, 43 in 2000, and 54 in 2011.

During the last 200 years, African countries have generally not had the same improvements in mortality rates that have been enjoyed by countries in Asia, Latin America, and Europe.

In the United States, African-American people have shorter life expectancies than their European-American counterparts. For example, white Americans born in 2010 were expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, was the lowest it had been since at least 1975; the greatest difference was 7.1 years, in 1993. In contrast, Asian-American women live the longest of all ethnic groups in the United States, with a life expectancy of 85.8 years, while the life expectancy of Hispanic-Americans is 81.2 years. According to recent US government reports, life expectancy in the country has dropped again because of rising suicide and drug overdose rates: the Centers for Disease Control and Prevention (CDC) found that nearly 70,000 more Americans died in 2017 than in 2016, with rising death rates among 25- to 44-year-olds.

Cities also show a wide range of life expectancy when broken down by neighborhood, largely because economic conditions and poverty tend to cluster geographically. Multi-generational poverty in struggling neighborhoods also contributes. In United States cities such as Cincinnati, the life expectancy gap between low-income and high-income neighborhoods reaches 20 years.

Economic circumstances

Life expectancy is higher in rich countries with low economic inequality.
Life expectancy vs healthcare spending of rich OECD countries. US average of $10,447 in 2018.

Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years higher than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas. In Glasgow, the disparity is amongst the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie, only 8 km (5.0 mi) away.

A 2013 study found a pronounced relationship between economic inequality and life expectancy. However, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression, and during recessions and depressions in general. The authors suggest that when people work longer and harder during prosperous economic times, they experience more stress, greater exposure to pollution, and a higher likelihood of injury, among other longevity-limiting factors.

Life expectancy is also likely to be affected by exposure to high levels of highway air pollution or industrial air pollution. This is one way that occupation can have a major effect on life expectancy. Coal miners (and in prior generations, asbestos cutters) often have lower life expectancies than average. Other factors affecting an individual's life expectancy are genetic disorders, drug use, tobacco smoking, excessive alcohol consumption, obesity, access to health care, diet, and exercise.

Sex differences

Pink: Countries where female life expectancy at birth is higher than males. Blue: A few countries in southern Africa where females have shorter lives due to AIDS.

In the present, female human life expectancy is greater than that of males, despite females having higher morbidity rates (see Health survival paradox). There are many potential reasons for this. Traditional arguments tend to favor social and environmental factors: historically, men have generally consumed more tobacco, alcohol, and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer, tuberculosis, and cirrhosis of the liver. Men are also more likely to die from injuries, whether unintentional (such as occupational accidents, war, or car wrecks) or intentional (suicide), and from most of the leading causes of death (some already stated above). In the United States these include cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease. These far outweigh the female mortality rate from breast cancer and cervical cancer. In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age.

A paper from 2015 found that female foetuses have a higher mortality rate than male foetuses. This finding contradicts papers dating from 2002 and earlier that attributed higher in-utero mortality rates to male foetuses. Among the smallest premature babies (those under 2 pounds, or about 910 grams), females have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005. Data from the United Kingdom show the gap in life expectancy between men and women decreasing in later life. This may be attributable to the effects of infant mortality and young adult death rates.

Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger-sized individuals within a species tend, on average, to have shorter lives. This biological difference occurs because women have more resistance to infections and degenerative diseases.

In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men has been observed at least as far back as 1750 and that, with relatively equal treatment, males in all parts of the world today experience greater mortality than females. However, Kalben's study was restricted to data from Western Europe alone, where the demographic transition occurred relatively early. United Nations statistics from the mid-twentieth century onward show that, in all parts of the world, females have a higher life expectancy at age 60 than males. Of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998 in the United States. Except for birds, in almost all of the animal species studied, males have higher mortality than females. Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors.

One recent suggestion is that mitochondrial mutations that shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival, so such mitochondria are less likely to be passed on to the next generation. The authors propose this as a partial explanation of why females tend to live longer than males.

Another explanation is the unguarded X hypothesis. According to this hypothesis, one reason the average lifespan of males is shorter than that of females (by 18% on average, according to the study) is that males have a Y chromosome, which cannot protect an individual from harmful genes expressed on the X chromosome, whereas the duplicate X chromosome present in females can ensure that harmful genes are not expressed.

In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females. Before 1880, death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline. Most of the difference in life expectancy between the sexes is accounted for by differences in the rate of death by cardiovascular diseases among persons aged 50–70.

Genetics

The heritability of lifespan is estimated to be less than 10%, meaning the majority of variation in lifespan is attributable to differences in environment rather than genetic variation. However, researchers have identified regions of the genome which can influence the length of life and the number of years lived in good health. For example, a genome-wide association study of 1 million lifespans found 12 genetic loci which influenced lifespan by modifying susceptibility to cardiovascular and smoking-related disease. The locus with the largest effect is APOE: carriers of the APOE ε4 allele live approximately one year less than average (per copy of the ε4 allele), mainly due to increased risk of Alzheimer's disease.

"Healthspan, parental lifespan, and longevity are highly genetically correlated."

In July 2020, scientists identified 10 genomic loci with consistent effects across multiple lifespan-related traits, including healthspan, lifespan, and longevity. The genes affected by variation in these loci highlighted haem metabolism as a promising candidate for further research within the field. The study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans.

A follow-up study which investigated the genetics of frailty and self-rated health in addition to healthspan, lifespan, and longevity also highlighted haem metabolism as an important pathway, and found genetic variants which lower blood protein levels of LPA and VCAM1 were associated with increased healthy lifespan.

Centenarians

In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means doubling the centenarian population every 13 years, pushing it from some 455,000 in 2009 to 4.1 million in 2050. Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010). Shimane Prefecture had an estimated 743 centenarians per million inhabitants.

In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants).

Mental illness

Mental illness is reported to occur in approximately 18% of the average American population.

Life expectancy among the seriously mentally ill is much shorter than in the general population.

People with mental illness have been shown to have a 10- to 25-year reduction in life expectancy, and this reduction relative to the general population has been widely studied and documented.

The greater mortality of people with mental disorders may be due to death from injury, from co-morbid conditions, or from medication side effects. For instance, psychiatric medications can increase the risk of developing diabetes, and the psychiatric medication olanzapine has been shown to increase the risk of developing agranulocytosis, among other comorbidities. Psychiatric medicines also affect the gastrointestinal tract: people with mental illness have roughly four times the risk of gastrointestinal disease.

As of 2020 and the COVID-19 pandemic, researchers have found an increased risk of death in the mentally ill.

Other illnesses

Post-COVID life expectancy in the US, UK, Netherlands, and Austria

The life expectancy of people with diabetes, who make up 9.3% of the U.S. population, is reduced by roughly 10–20 years. People over 60 years old with Alzheimer's disease have about a 50% chance of living another 3–10 years. Other groups that tend to have a lower life expectancy than average include transplant recipients and the obese.

Education

Education on all levels has been shown to be strongly associated with increased life expectancy. This association may be due partly to higher income, which can lead to increased life expectancy. Despite the association, among identical twin pairs with different education levels, there is only weak evidence of a relationship between educational attainment and adult mortality.

According to a paper from 2015, the mortality rate for the Caucasian population in the United States from 1993 to 2001 was four times higher for those who did not complete high school than for those with at least 16 years of education. In fact, within the U.S. adult population, people with less than a high school education have the shortest life expectancies.

Preschool education also plays a large role in life expectancy. It was found that high-quality early-stage childhood education had positive effects on health. Researchers discovered this by analyzing the results of the Carolina Abecedarian Project, finding that the disadvantaged children who were randomly assigned to treatment had lower instances of risk factors for cardiovascular and metabolic diseases in their mid-30s.

Evolution and aging rate

Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms which—by virtue of their defenses or lifestyle—live for long periods and avoid accidents, disease, predation, etc. are likely to have genes that code for slow aging, which often translates to good cellular repair. One theory is that if predation or accidental deaths prevent most individuals from living to an old age, there will be less natural selection to increase the intrinsic life span. That finding was supported in a classic study of opossums by Austad; however, the opposite relationship was found in an equally prominent study of guppies by Reznick.

One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy called caloric restriction. Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited calorific intake. Support for the theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy. That is the key to why animals like giant tortoises can live so long. Studies of humans with life spans of at least 100 have shown a link to decreased thyroid activity, resulting in their lowered metabolic rate.

The ability of skin fibroblasts to perform DNA repair after UV irradiation was measured in shrews, mice, rats, hamsters, cows, elephants, and humans. It was found that DNA repair capability increased systematically with species lifespan. Since this original study in 1974, at least 14 additional studies have been performed on mammals to test this correlation. In all but two of these studies, lifespan correlated with DNA repair levels, suggesting that DNA repair capability contributes to life expectancy.

In a broad survey of zoo animals, no relationship was found between investment of the animal in reproduction and its life span.

Calculation

A survival tree to explain the calculations of life-expectancy. Red numbers indicate a chance of survival at a specific age, and blue ones indicate age-specific death rates.

In actuarial notation, the probability of surviving from age $x$ to age $x+n$ is denoted ${}_np_x$, and the probability of dying during age $x$ (i.e. between ages $x$ and $x+1$) is denoted $q_x$. For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, the age-specific death probability at 90 would be 10%. This probability describes the likelihood of dying at that age, and is not the rate at which people of that age die. It can be shown that

$${}_kp_x = \prod_{i=0}^{k-1} \left(1 - q_{x+i}\right) \qquad (1)$$

The curtate future lifetime, denoted $K(x)$, is a discrete random variable representing the remaining lifetime at age $x$, rounded down to whole years. Life expectancy, more technically called the curtate expected lifetime and denoted $e_x$, is the mean of $K(x)$, that is to say, the expected number of whole years of life remaining, assuming survival to age $x$. So,

$$e_x = \operatorname{E}[K(x)] = \sum_{k=1}^{\infty} k \, \Pr\big(K(x)=k\big) = \sum_{k=1}^{\infty} k \; {}_kp_x \, q_{x+k} \qquad (2)$$

Substituting (1) into the sum and simplifying gives the final result

$$e_x = \sum_{k=1}^{\infty} {}_kp_x \qquad (3)$$

If the assumption is made that, on average, people live half a year in the year of their death, the complete life expectancy at age $x$ would be $e_x + \tfrac{1}{2}$.
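The relationship in equations (1)–(3) can be sketched in a few lines of code. The function below and the toy mortality schedule it is applied to are illustrative assumptions, not data or code from the article; a real calculation would use published life-table death probabilities.

```python
# Minimal sketch of equations (1)-(3): curtate life expectancy e_x from
# age-specific death probabilities q[x]. The schedule below is invented.

def curtate_life_expectancy(q, x=0):
    """e_x = sum over k >= 1 of kp_x, where kp_x = prod_{i=0}^{k-1} (1 - q[x+i])."""
    e, survival = 0.0, 1.0
    for age in range(x, len(q)):
        survival *= 1.0 - q[age]   # now equals kp_x for k = age - x + 1
        e += survival              # add Pr(still alive k whole years later)
    return e

# Toy death probabilities for ages 0..99 (purely hypothetical):
q = [0.03] + [0.001] * 14 + [0.005] * 45 + [0.05] * 40
print(round(curtate_life_expectancy(q, 0), 1))    # life expectancy at birth
print(round(curtate_life_expectancy(q, 15), 1))   # remaining years at age 15
# Adding 0.5 gives the 'complete' life expectancy under the half-year assumption.
```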

By definition, life expectancy is an arithmetic mean. It can also be calculated by integrating the survival curve from 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called 'omega'). For an extinct or completed cohort (all people born in the year 1850, for example), it can of course simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years. The estimates are called period cohort life expectancies.

The starting point for calculating life expectancy is the age-specific death rates of the population members. If a large amount of data is available, a statistical population can be created that allows the age-specific death rates to be simply taken as the mortality rates actually experienced at each age (the number of deaths divided by the number of years "exposed to risk" in each data cell). However, it is customary to apply smoothing to remove (as much as possible) the random statistical fluctuations from one year of age to the next. In the past, a very simple model used for this purpose was the Gompertz function, but more sophisticated methods are now used. The most common modern methods include:

  • fitting a mathematical formula (such as the Gompertz function, or an extension of it) to the data, as sketched in the example following this list.
  • looking at an established mortality table derived from a larger population and making a simple adjustment to it (such as multiplying by a constant factor) to fit the data. (In cases of relatively small amounts of data.)
  • looking at the mortality rates actually experienced at each age and applying a piecewise model (such as by cubic splines) to fit the data. (In cases of relatively large amounts of data.)
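
As a rough illustration of the first method listed above, the following sketch fits a Gompertz curve m(x) = a·exp(b·x) to noisy, invented death rates by ordinary least squares on the log scale. All numbers are assumptions chosen only to make the example run; real work would fit to observed rates and often use an extended model.

```python
# Sketch: smoothing age-specific death rates with a Gompertz fit (invented data).
import numpy as np

ages = np.arange(40, 90)
rng = np.random.default_rng(0)
# Hypothetical "observed" central death rates with multiplicative noise:
observed = 0.0005 * np.exp(0.09 * ages) * rng.lognormal(0.0, 0.05, ages.size)

# log m(x) = log a + b*x, so a straight-line fit recovers the parameters.
b, log_a = np.polyfit(ages, np.log(observed), deg=1)
smoothed = np.exp(log_a + b * ages)

print(f"fitted a = {np.exp(log_a):.6f}, b = {b:.4f}")
# 'smoothed' would replace the noisy rates when building the life table.
```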

The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (such as males and females, or smokers and non-smokers) and are then used to calculate a life table from which one can calculate the probability of surviving to each age. While the data required are easily identified in the case of humans, the computation of life expectancy of industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking, and recapturing them. The life of a product, more often termed shelf life, is also computed using similar methods. In the case of long-lived components, such as those used in critical applications (e.g. aircraft), methods like accelerated aging are used to model the life expectancy of a component.

It is important to note that the life expectancy statistic is usually based on past mortality experience and assumes that the same age-specific mortality rates will continue. Thus, such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population. However, for some purposes, such as pensions calculations, it is usual to adjust the life table used by assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. That is often done by simply extrapolating past trends, but some models exist to account for the evolution of mortality, like the Lee–Carter model.

As discussed above, on an individual basis, some factors correlate with longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use (including smoking and alcohol consumption), disposition, education, environment, sleep, climate, and health care.

Healthy life expectancy

To assess the quality of these additional years of life, 'healthy life expectancy' has been calculated for the last 30 years. Since 2001, the World Health Organization has published statistics called Healthy life expectancy (HALE), defined as the average number of years that a person can expect to live in "full health", excluding the years lived in less than full health due to disease and/or injury. Since 2004, Eurostat has published annual statistics called Healthy Life Years (HLY), based on reported activity limitations. The United States uses similar indicators in the framework of its national health promotion and disease prevention plan "Healthy People 2010". More and more countries are using health expectancy indicators to monitor the health of their populations.

Healthy Life Expectancy (HALE) vs GDP per Capita in different countries

The long-standing quest for longer life led in the 2010s to a more promising focus on increasing HALE, also known as a person's "healthspan". Besides the benefits of keeping people healthier longer, a goal is to reduce health-care expenses on the many diseases associated with cellular senescence. Approaches being explored include fasting, exercise, and senolytic drugs.

Forecasting

Forecasting life expectancy and mortality forms an important subdivision of demography. Future trends in life expectancy have huge implications for old-age support programs (like U.S. Social Security and pensions), since the cash flow in these systems depends on the number of recipients who are still living (along with the rate of return on the investments, or the tax rate in pay-as-you-go systems). With longer life expectancies, the systems see increased cash outflow; if the systems underestimate increases in life expectancy, they will be unprepared for the large payments that will occur as humans live longer and longer.

Life expectancy forecasting is usually based on one of two different approaches:

  1. Forecasting the life expectancy directly, generally using ARIMA or other time-series extrapolation procedures. This has the advantage of simplicity, but it cannot account for changes in mortality at specific ages, and the forecast number cannot be used to derive other life table results. Analyses and forecasts using this approach can be done with any common statistical/mathematical software package, like EViews, R, SAS, Stata, Matlab, or SPSS.
  2. Forecasting age-specific death rates and computing the life expectancy from the results with life table methods. This is usually more complex than simply forecasting life expectancy because the analyst must deal with correlated age-specific mortality rates, but it seems to be more robust than simple one-dimensional time series approaches. It also yields a set of age-specific rates that may be used to derive other measures, such as survival curves or life expectancies at different ages. The most important approach in this group is the Lee-Carter model, which uses the singular value decomposition on a set of transformed age-specific mortality rates to reduce their dimensionality to a single time series, forecasts that time series, and then recovers a full set of age-specific mortality rates from that forecasted value. The software includes Professor Rob J. Hyndman's R package called 'demography' and UC Berkeley's LCFIT system.
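
A minimal, illustrative sketch of the Lee–Carter idea in the second approach is given below: centre the log death rates by their age-specific averages, take a singular value decomposition to obtain an age pattern b_x and a time index k_t, forecast k_t as a random walk with drift, and rebuild projected rates. The synthetic mortality matrix and every variable name are assumptions for demonstration; real forecasts would use tools such as the 'demography' package mentioned above.

```python
# Sketch of a bare-bones Lee-Carter decomposition and forecast (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
ages = np.arange(0, 100)
years = np.arange(1950, 2020)
# Invented historical log death rates: age gradient, downward time trend, noise.
log_m = (np.log(2e-4) + 0.085 * ages[:, None]
         - 0.010 * (years - years[0])[None, :]
         + rng.normal(0.0, 0.02, (ages.size, years.size)))

a_x = log_m.mean(axis=1)                          # average age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                     # normalised age response
k_t = s[0] * Vt[0, :] * U[:, 0].sum()             # mortality time index

drift = (k_t[-1] - k_t[0]) / (len(k_t) - 1)       # random walk with drift
k_future = k_t[-1] + drift * np.arange(1, 31)     # 30-year forecast of k_t

projected = np.exp(a_x[:, None] + b_x[:, None] * k_future[None, :])
print("projected death rate at age 65, 30 years ahead:", projected[65, -1])
```

From the projected age-specific rates one could then build life tables and read off forecast life expectancies, as described in the calculation section above.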

Policy uses

Life expectancy is one of the factors in measuring the Human Development Index (HDI) of each nation along with adult literacy, education, and standard of living.

Life expectancy is used in describing the physical quality of life of an area. It is also used for an individual when determining the value of a life settlement, in which a life insurance policy is sold for a cash asset.

Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality. For the top 21 industrialized countries, if each person is counted equally, life expectancy is lower in more unequal countries (r = −0.907). There is a similar relationship among states in the U.S. (r = −0.620).

Life expectancy vs. other measures of longevity

"Remaining" life expectancy—expected number of remaining years of life as a function of current age—is used in retirement income planning.

Life expectancy at birth is commonly confused with the average age to which an adult can expect to live. This confusion may create the expectation that an adult would be unlikely to exceed the life expectancy at birth, even though an adult who has already survived the many causes of childhood and adolescent mortality should, with all statistical probability, be expected to outlive it. To estimate how long an adult will live, one must instead look at life expectancy conditional on having survived childhood.

Life expectancy can change dramatically after childhood. In the table above, life expectancy at birth in the Paleolithic is 22–33 years, but life expectancy at age 15 is 54 years. Other studies similarly show a dramatic increase in life expectancy once adulthood is reached.
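
A tiny invented cohort makes this concrete: heavy childhood mortality pulls the at-birth average down, while those who survive childhood die much later on average. The three-group numbers below are purely hypothetical.

```python
# Hypothetical extinct cohort: (age at death, number of people).
deaths = [(1, 300), (45, 200), (72, 500)]

total = sum(n for _, n in deaths)
e_birth = sum(age * n for age, n in deaths) / total           # about 45.3 years

adults = [(age, n) for age, n in deaths if age >= 15]
alive_at_15 = sum(n for _, n in adults)
e_adult = sum(age * n for age, n in adults) / alive_at_15     # about 64.3 years

print(f"average age at death from birth: {e_birth:.1f}")
print(f"average age at death, given survival to 15: {e_adult:.1f}")
```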

Maximum life span is an individual-specific concept, and therefore is an upper bound rather than an average. Science author Christopher Wanjek writes, "[H]as the human race increased its life span? Not at all. This is one of the biggest misconceptions about old age: we are not living any longer." The maximum life span, or oldest age a human can live, may be constant. Further, there are many examples of people living significantly longer than the average life expectancy of their time period, such as Socrates (71), Saint Anthony the Great (105), Michelangelo (88), and John Adams (90).

However, anthropologist John D. Hawks criticizes the popular conflation of life span (life expectancy) and maximum life span when popular science writers falsely imply that the average adult human does not live longer than their ancestors. He writes, "[a]ge-specific mortality rates have declined across the adult lifespan. A smaller fraction of adults die at 20, at 30, at 40, at 50, and so on across the lifespan. As a result, we live longer on average... In every way we can measure, human lifespans are longer today than in the immediate past, and longer today than they were 2000 years ago... age-specific mortality rates in adults really have reduced substantially."

Life expectancy can also create popular misconceptions about the age at which members of a population can expect to die. The modal age at death is the age at which the most deaths occur in a population, and it is sometimes used instead of life expectancy for this kind of understanding. For example, in the table above, life expectancy in the Paleolithic is listed as 22–33 years. For many, this implies that most people in the Paleolithic died in their late twenties; however, most Paleolithic adults who survived childhood died at around 72 years (the average modal adult lifespan).
