Sunday, May 14, 2023

Locus of control

From Wikipedia, the free encyclopedia
A person with an external locus of control attributes academic success or failure to luck or chance, a higher power or the influence of another person, rather than their own actions. They also struggle more with procrastination and difficult tasks.

Locus of control is the degree to which people believe that they, as opposed to external forces (beyond their influence), have control over the outcome of events in their lives. The concept was developed by Julian B. Rotter in 1954, and has since become an aspect of personality psychology. A person's "locus" (plural "loci", Latin for "place" or "location") is conceptualized as internal (a belief that one can control one's own life) or external (a belief that life is controlled by outside factors which the person cannot influence, or that chance or fate controls their lives).

Individuals with a strong internal locus of control believe events in their life are primarily a result of their own actions: for example, when receiving exam results, people with an internal locus of control tend to praise or blame themselves and their abilities. People with a strong external locus of control tend to praise or blame external factors such as the teacher or the difficulty of the exam.

Locus of control has generated much research in a variety of areas in psychology. The construct is applicable to such fields as educational psychology, health psychology, industrial and organizational psychology, and clinical psychology. Debate continues whether domain-specific or more global measures of locus of control will prove to be more useful in practical application. Careful distinctions should also be made between locus of control (a personality variable linked with generalized expectancies about the future) and attributional style (a concept concerning explanations for past outcomes), or between locus of control and concepts such as self-efficacy.

Locus of control is one of the four dimensions of core self-evaluations – one's fundamental appraisal of oneself – along with neuroticism, self-efficacy, and self-esteem. The concept of core self-evaluations was first examined by Judge, Locke, and Durham (1997), and since has proven to have the ability to predict several work outcomes, specifically, job satisfaction and job performance. In a follow-up study, Judge et al. (2002) argued that locus of control, neuroticism, self-efficacy, and self-esteem factors may have a common core.

History

Weiner's attribution theory as applied to student motivation:

                                 Perceived locus of control
                                 Internal   External
    Attributions of control      Ability    Hardness of tasks
    Attributions of no control   Effort     Luck or fate

Locus of control as a theoretical construct derives from Julian B. Rotter's (1954) social learning theory of personality. It is an example of a problem-solving generalized expectancy, a broad strategy for addressing a wide range of situations. In 1966 he published an article in Psychological Monographs which summarized over a decade of research (by Rotter and his students), much of it previously unpublished. In 1976, Herbert M. Lefcourt defined perceived locus of control as "...a generalised expectancy for internal as opposed to external control of reinforcements". Attempts have been made to trace the genesis of the concept to the work of Alfred Adler, but its immediate background lies in the work of Rotter and his students. Early work on the topic of expectations about control of reinforcement had been performed in the 1950s by James and Phares (prepared for unpublished doctoral dissertations supervised by Rotter at Ohio State University).

Another Rotter student, William H. James, studied two types of "expectancy shifts":

  • Typical expectancy shifts, believing that success (or failure) would be followed by a similar outcome
  • Atypical expectancy shifts, believing that success (or failure) would be followed by a dissimilar outcome

Additional research led to the hypothesis that typical expectancy shifts were displayed more often by those who attributed their outcomes to ability, whereas those who displayed atypical expectancy shifts were more likely to attribute their outcomes to chance. This was interpreted as meaning that people could be divided into those who attribute to ability (an internal cause) versus those who attribute to luck (an external cause). Bernard Weiner argued that rather than ability-versus-luck, locus may relate to whether attributions are made to stable or unstable causes.

Rotter (1975, 1989) has discussed problems and misconceptions in others' use of the internal-versus-external construct.

Personality orientation

Rotter (1975) cautioned that internality and externality represent two ends of a continuum, not an either/or typology. Internals tend to attribute outcomes of events to their own control: they believe that the outcomes of their actions result from their own abilities, that hard work leads to positive outcomes, and that every action has its consequence, so it is up to them whether or not to take control of events. Externals attribute outcomes of events to external circumstances. A person with an external locus of control tends to believe that their present circumstances are not the effect of their own influence, decisions, or control, and even that their own actions result from external factors such as fate, luck, history, the influence of powerful forces, or other people, whether specified or unspecified (governmental entities; corporations; racial, religious, ethnic, or fraternal groups; sexes; political affiliations; outgroups; or even perceived personal antagonists), and/or from a belief that the world is too complex for one to predict or influence its outcomes. Laying blame on others for one's own circumstances, with the implication that one is owed a moral or other debt, is an indicator of a tendency toward an external locus of control.

It should not be thought, however, that internality is linked exclusively with attribution to effort and externality with attribution to luck (as Weiner's work, discussed below, makes clear). This has obvious implications for differences between internals and externals in achievement motivation, suggesting that an internal locus is linked with a higher need for achievement. Because they locate control outside themselves, externals tend to feel they have less control over their fate. People with an external locus of control also tend to be more stressed and prone to clinical depression.

Internals were believed by Rotter (1966) to exhibit two essential characteristics: high achievement motivation and low outer-directedness. This was the basis of the locus-of-control scale proposed by Rotter in 1966, although it was based on Rotter's belief that locus of control is a single construct. Since 1970, Rotter's assumption of uni-dimensionality has been challenged, with Levenson (for example) arguing that different dimensions of locus of control (such as beliefs that events in one's life are self-determined, or organized by powerful others and are chance-based) must be separated. Weiner's early work in the 1970s suggested that orthogonal to the internality-externality dimension, differences should be considered between those who attribute to stable and those who attribute to unstable causes.

This new, dimensional theory meant that one could now attribute outcomes to ability (an internal stable cause), effort (an internal unstable cause), task difficulty (an external stable cause) or luck (an external, unstable cause). Although this was how Weiner originally saw these four causes, he has been challenged as to whether people see luck (for example) as an external cause, whether ability is always perceived as stable, and whether effort is always seen as changing. Indeed, in more recent publications (e.g. Weiner, 1980) he uses different terms for these four causes (such as "objective task characteristics" instead of "task difficulty" and "chance" instead of "luck"). Psychologists since Weiner have distinguished between stable and unstable effort, knowing that in some circumstances effort could be seen as a stable cause (especially given the presence of words such as "industrious" in English).

There is also a type of control that mixes the internal and external orientations; people with this combination are often referred to as bi-locals. Bi-locals are reported to handle stress and cope with disease more efficiently: they can take personal responsibility for their actions and the consequences thereof while remaining capable of relying upon, and having faith in, outside resources. These characteristics correspond to the internal and external loci of control, respectively.

Measuring scales

The most widely used questionnaire to measure locus of control is the 13-item (plus six filler items), forced-choice scale of Rotter (1966). However, this is not the only questionnaire; Bialer's (1961) 23-item scale for children predates Rotter's work. Also relevant to the locus-of-control scale are the Crandall Intellectual Ascription of Responsibility Scale (Crandall, 1965) and the Nowicki-Strickland Scale (Nowicki & Strickland, 1973). One of the earliest psychometric scales to assess locus of control (using a Likert-type scale, in contrast to the forced-choice format of Rotter's scale) was devised by W. H. James for his doctoral dissertation, supervised by Rotter at Ohio State University, though it was never published.
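The mechanics of a forced-choice scale like Rotter's can be sketched in a few lines: each item pairs an "internal" statement with an "external" one, filler items are discarded, and the score is simply the count of external choices. The item positions and key below are invented placeholders for illustration, not Rotter's actual items or key:

```python
# Hypothetical sketch of forced-choice externality scoring.
# Each item offers an internal statement ("I") and an external one ("E");
# the score is the count of external picks, with filler items ignored.

FILLER_ITEMS = {0, 3}  # indices of filler items (hypothetical positions)

def externality_score(choices):
    """choices: list of 'I'/'E' picks, one per item, fillers included."""
    return sum(
        1
        for idx, pick in enumerate(choices)
        if idx not in FILLER_ITEMS and pick == "E"
    )

# Six items, fillers at positions 0 and 3; three scored external picks.
print(externality_score(["I", "E", "E", "I", "I", "E"]))  # prints 3
```

A higher score indicates a more external orientation; on the real 13-item scale the count would range from 0 to 13.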

Many measures of locus of control have appeared since Rotter's scale. These were reviewed by Furnham and Steele (1993) and include those related to health psychology, industrial and organizational psychology and those specifically for children (such as the Stanford Preschool Internal-External Scale for three- to six-year-olds). Furnham and Steele (1993) cite data suggesting that the most reliable, valid questionnaire for adults is the Duttweiler scale. For a review of the health questionnaires cited by these authors, see "Applications" below.

The Duttweiler (1984) Internal Control Index (ICI) addresses perceived problems with the Rotter scales, including their forced-choice format, susceptibility to social desirability and heterogeneity (as indicated by factor analysis). She also notes that, while other scales existed in 1984 to measure locus of control, "they appear to be subject to many of the same problems". Unlike the forced-choice format used on Rotter's scale, Duttweiler's 28-item ICI uses a Likert-type scale in which people must state whether they would rarely, occasionally, sometimes, frequently or usually behave as specified in each of 28 statements. The ICI assesses variables pertinent to internal locus: cognitive processing, autonomy, resistance to social influence, self-confidence and delay of gratification. A small validation study (133 student subjects) indicated that the scale had good internal consistency reliability (a Cronbach's alpha of 0.85).
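The Cronbach's alpha reported for the ICI is a standard internal-consistency statistic: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch of the computation, using invented toy Likert data rather than ICI responses:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a scale.

    responses: one inner list per respondent, each holding that
    respondent's score on every item of the scale.
    """
    k = len(responses[0])              # number of items
    items = list(zip(*responses))      # transpose: one tuple per item
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(r) for r in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 4 respondents answering 3 Likert items (scored 1-5).
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
]
print(round(cronbach_alpha(data), 3))  # prints 0.946
```

Values near 0.85, as Duttweiler reports, are conventionally read as good internal consistency for a research scale.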

Attributional style

Attributional style (or explanatory style) is a concept introduced by Lyn Yvonne Abramson, Martin Seligman and John D. Teasdale. This concept advances a stage further than Weiner, stating that in addition to the concepts of internality-externality and stability a dimension of globality-specificity is also needed. Abramson et al. believed that how people explained successes and failures in their lives related to whether they attributed these to internal or external factors, to short-term or long-term factors, and to factors that were global (affecting all situations) or specific.

The topic of attribution theory (introduced to psychology by Fritz Heider) has had an influence on locus of control theory, but there are important historical differences between the two models. Attribution theorists have been predominantly social psychologists, concerned with the general processes characterizing how and why people make the attributions they do, whereas locus of control theorists have been concerned with individual differences.

Significant to the history of both approaches are the contributions made by Bernard Weiner in the 1970s. Before this time, attribution theorists and locus of control theorists had been largely concerned with divisions into external and internal loci of causality. Weiner added the dimension of stability-instability (and later controllability), indicating how a cause could be perceived as having been internal to a person yet still beyond the person's control. The stability dimension added to the understanding of why people succeed or fail after such outcomes.

Applications

Locus of control's best known application may have been in the area of health psychology, largely due to the work of Kenneth Wallston. Scales to measure locus of control in the health domain were reviewed by Furnham and Steele in 1993. The best-known are the Health Locus of Control Scale and the Multidimensional Health Locus of Control Scale, or MHLC. The latter scale is based on the idea (echoing Levenson's earlier work) that health may be attributed to three sources: internal factors (such as self-determination of a healthy lifestyle), powerful others (such as one's doctor) or luck (a belief that makes such patients very difficult to help, since lifestyle advice is likely to be ignored).

Some of the scales reviewed by Furnham and Steele (1993) relate to health in more specific domains, such as obesity (for example, Saltzer's (1982) Weight Locus of Control Scale or Stotland and Zuroff's (1990) Dieting Beliefs Scale), mental health (such as Wood and Letak's (1982) Mental Health Locus of Control Scale or the Depression Locus of Control Scale of Whiteman, Desmond and Price, 1987) and cancer (the Cancer Locus of Control Scale of Pruyn et al., 1988). In discussing applications of the concept to health psychology Furnham and Steele refer to Claire Bradley's work, linking locus of control to the management of diabetes mellitus. Empirical data on health locus of control in a number of fields was reviewed by Norman and Bennett in 1995; they note that data on whether certain health-related behaviors are related to internal health locus of control have been ambiguous. They note that some studies found that internal health locus of control is linked with increased exercise, but cite other studies which found a weak (or no) relationship between exercise behaviors (such as jogging) and internal health locus of control. A similar ambiguity is noted for data on the relationship between internal health locus of control and other health-related behaviors (such as breast self-examination, weight control and preventive-health behavior). Of particular interest are the data cited on the relationship between internal health locus of control and alcohol consumption.

Norman and Bennett note that some studies that compared alcoholics with non-alcoholics suggest alcoholism is linked to increased externality for health locus of control; however, other studies have linked alcoholism with increased internality. Similar ambiguity has been found in studies of alcohol consumption in the general, non-alcoholic population. They are more optimistic in reviewing the literature on the relationship between internal health locus of control and smoking cessation, although they also point out that there are grounds for supposing that powerful-others and internal-health loci of control may be linked with this behavior. It is thought that alcoholism, rather than being caused by one orientation or the other, is directly related to the strength of the locus, regardless of type, internal or external.

They argue that a stronger relationship is found when health locus of control is assessed for specific domains than when general measures are taken. Overall, studies using behavior-specific health locus scales have tended to produce more positive results. These scales have been found to be more predictive of general behavior than more general scales, such as the MHLC scale. Norman and Bennett cite several studies that used health-related locus-of-control scales in specific domains (including smoking cessation), diabetes, tablet-treated diabetes, hypertension, arthritis, cancer, and heart and lung disease.

They also argue that health locus of control is better at predicting health-related behavior if studied in conjunction with health value (the value people attach to their health), suggesting that health value is an important moderator variable in the health locus of control relationship. For example, Weiss and Larsen (1990) found an increased relationship between internal health locus of control and health when health value was assessed. Despite the importance Norman and Bennett attach to specific measures of locus of control, there are general textbooks on personality which cite studies linking internal locus of control with improved physical health, mental health and quality of life in people with diverse conditions: HIV, migraines, diabetes, kidney disease and epilepsy.

During the 1970s and 1980s, Whyte correlated locus of control with the academic success of students enrolled in higher-education courses. Students who were more internally controlled believed that hard work and focus would result in successful academic progress, and they performed better academically. Those students who were identified as more externally controlled (believing that their future depended upon luck or fate) tended to have lower academic-performance levels. Cassandra B. Whyte researched how control tendency influenced behavioral outcomes in the academic realm by examining the effects of various modes of counseling on grade improvements and the locus of control of high-risk college students.

Rotter also looked at studies regarding the correlation between gambling and either an internal or external locus of control. For internals, gambling is more reserved. When betting, they primarily focus on safe and moderate wagers. Externals, however, take more chances and, for example, bet more on a card or number that has not appeared for a certain period, under the notion that this card or number has a higher chance of occurring.

Organizational psychology and religion

Other fields to which the concept has been applied include industrial and organizational psychology, sports psychology, educational psychology and the psychology of religion. Richard Kahoe has published work in the latter field, suggesting that intrinsic religious orientation correlates positively (and extrinsic religious orientation correlates negatively) with internal locus. Of relevance to both health psychology and the psychology of religion is the work of Holt, Clark, Kreuter and Rubio (2003) on a questionnaire to assess spiritual-health locus of control. The authors distinguished between an active spiritual-health locus of control (in which "God empowers the individual to take healthy actions") and a more passive spiritual-health locus of control (where health is left up to God). In industrial and organizational psychology, it has been found that internals are more likely than externals to take positive action to change their jobs (rather than merely talk about occupational change). Locus of control relates to a wide variety of work variables, with work-specific measures relating more strongly than general measures. In educational settings, some research has shown that students who were intrinsically motivated processed reading material more deeply and had better academic performance than students with extrinsic motivation.

Consumer research

Locus of control has also been applied to the field of consumer research. For example, Martin, Veer and Pervan (2007) examined how the weight locus of control of women (i.e., beliefs about the control of body weight) influences how they react to female models of different body shapes in advertising. They found that women who believe they can control their weight ("internals") respond most favorably to slim models in advertising, a favorable response mediated by self-referencing. In contrast, women who feel powerless about their weight ("externals") self-reference larger-sized models, but prefer larger-sized models only when the advertisement is for a non-fattening product; for fattening products, they exhibit a similar preference for larger-sized and slim models. The weight locus of control measure was also found to be correlated with measures of weight control beliefs and willpower.

Political ideology

Locus of control has been linked to political ideology. In research on the 1972 U.S. presidential election, college students with an internal locus of control were substantially more likely to register as Republicans, while those with an external locus of control were substantially more likely to register as Democrats. A 2011 study surveying students at Cameron University in Oklahoma found similar results, although these studies were limited in scope. Consistent with these findings, Kaye Sweetser (2014) found that Republicans displayed significantly greater internal locus of control than Democrats and Independents.

Those with an internal locus of control are more likely to be of higher socioeconomic status and are more likely to be politically involved (e.g., following political news, joining a political organization). Those with an internal locus of control are also more likely to vote.

Familial origins

The development of locus of control is associated with family style and resources, cultural stability and experiences with effort leading to reward.[citation needed] Many internals have grown up with families modeling typical internal beliefs; these families emphasized effort, education, responsibility and thinking, and parents typically gave their children rewards they had promised them. In contrast, externals are typically associated with lower socioeconomic status. Societies experiencing social unrest increase the expectancy of being out-of-control; therefore, people in such societies become more external.

The 1995 research of Schneewind suggests that "children in large single parent families headed by women are more likely to develop an external locus of control". Schultz and Schultz also claim that children in families where parents have been supportive and consistent in discipline develop an internal locus of control. At least one study has found that children whose parents had an external locus of control are more likely to attribute their successes and failures to external causes. Findings from early studies on the familial origins of locus of control were summarized by Lefcourt: "Warmth, supportiveness and parental encouragement seem to be essential for development of an internal locus". However, causal evidence regarding how parental locus of control influences offspring locus of control (whether genetic, or environmentally mediated) is lacking.

Locus of control becomes more internal with age. As children grow older, they gain skills which give them more control over their environment. However, whether this or biological development is responsible for changes in locus is unclear.

Age

Some studies showed that with age people develop a more internal locus of control, but other study results have been ambiguous. Longitudinal data collected by Gatz and Karel imply that internality may increase until middle age, decreasing thereafter. Noting the ambiguity of data in this area, Aldwin and Gilmer (2004) cite Lachman's claim that the evidence on age-related change in locus of control is ambiguous. Indeed, there is evidence here that changes in locus of control in later life relate more visibly to increased externality (rather than reduced internality) if the two concepts are taken to be orthogonal. Evidence cited by Schultz and Schultz (2005) suggests that locus of control increases in internality until middle age. The authors also note that attempts to control the environment become more pronounced between ages eight and fourteen.

Health locus of control concerns how people relate their health to their behavior, their health status and how long recovery from a disease may take. Locus of control can influence how people think about and react to their health and health decisions: people are exposed daily to potential disease, and how they approach that reality has much to do with their locus of control. Older adults are often expected to experience progressive declines in their health, and for this reason it is believed that their health locus of control will be affected. This does not necessarily mean it will be affected negatively, but declining health in older adults can be accompanied by lower levels of internal locus of control.

Age plays an important role in one's internal and external locus of control. Compared with a young child, an older adult typically has more control over their attitude and approach to a health situation. As people age, they become aware that events outside their own control happen and that other individuals can influence their health outcomes.

A study published in the journal Psychosomatic Medicine examined the health effects of childhood locus of control in 7,500 British adults followed from birth. Those who had shown an internal locus of control at age 10 were less likely to be overweight at age 30. The children who had an internal locus of control also appeared to have higher levels of self-esteem.

Gender-based differences

As Schultz and Schultz (2005) point out, significant gender differences in locus of control have not been found for adults in the U.S. population. However, these authors also note that there may be specific sex-based differences for specific categories of items to assess locus of control; for example, they cite evidence that men may have a greater internal locus for questions related to academic achievement.

A study by Takaki and colleagues (2006) focused on sex differences in the relationship of internal locus of control and self-efficacy to compliance in hemodialysis patients. It found that female patients with a high internal locus of control were less compliant with health and medical advice than the male patients who participated. Compliance is the degree to which a person's behavior (in this case, the patient's) corresponds with medical advice; for example, a compliant person correctly follows their doctor's advice.

A 2018 study of the relationship between locus of control and optimism among children aged 10–15, however, found that an external locus of control was more prevalent among young girls, with no significant differences in internal or unknown locus of control.

Cross-cultural and regional issues

The question of whether people from different cultures vary in locus of control has long been of interest to social psychologists.

Japanese people tend to be more external in locus-of-control orientation than people in the U.S.; however, differences in locus of control between different countries within Europe (and between the U.S. and Europe) tend to be small. As Berry et al. pointed out in 1992, ethnic groups within the United States have been compared on locus of control; African Americans in the U.S. are more external than whites when socioeconomic status is controlled. Berry et al. also pointed out in 1992 how research on other ethnic minorities in the U.S. (such as Hispanics) has been ambiguous. More on cross-cultural variations in locus of control can be found in Shiraev & Levy (2004). Research in this area indicates that locus of control has been a useful concept for researchers in cross-cultural psychology.

On a less broad scale, Sims and Baumann explained how regions in the United States cope with natural disasters differently. The example they used was tornados. They "applied Rotter's theory to explain why more people have died in tornado[e]s in Alabama than in Illinois". They explain that after giving surveys to residents of four counties in both Alabama and Illinois, Alabama residents were shown to be more external in their way of thinking about events that occur in their lives. Illinois residents, however, were more internal. Because Alabama residents had a more external way of processing information, they took fewer precautions prior to the appearance of a tornado. Those in Illinois, however, were more prepared, thus leading to fewer casualties.

Later studies find that these geographic differences can be explained by differences in relational mobility. Relational mobility is a measure of how much choice individuals have in whom to form relationships with, including friendships, romantic partnerships, and work relations. Relational mobility is low in cultures with a subsistence economy that requires tight cooperation and coordination, such as farming, while it is high in cultures based on nomadic herding and in urban industrial cultures. A cross-cultural study found that relational mobility is lowest in East Asian countries where rice farming is common, and highest in South American countries.

Self-efficacy

Self-efficacy refers to an individual's belief in their capacity to execute behaviors necessary to produce specific performance attainments. It is a related concept introduced by Albert Bandura, and has been measured by means of a psychometric scale. It differs from locus of control by relating to competence in circumscribed situations and activities (rather than more general cross-situational beliefs about control). Bandura has also emphasised differences between self-efficacy and self-esteem, using examples where low self-efficacy (for instance, in ballroom dancing) is unlikely to result in low self-esteem because competence in that domain is not very important (see valence) to the individual. Although individuals may have a high internal health locus of control and feel in control of their own health, they may not feel efficacious in performing a specific treatment regimen that is essential to maintaining their own health. Self-efficacy plays an important role in one's health because when people feel they have self-efficacy over their health conditions, those conditions become less of a stressor.

Smith (1989) has argued that locus of control only weakly measures self-efficacy; "only a subset of items refer directly to the subject's capabilities". Smith noted that training in coping skills led to increases in self-efficacy, but did not affect locus of control as measured by Rotter's 1966 scale.

Stress

The previous section showed how self-efficacy relates to a person's locus of control, and stress bears on both. Self-efficacy is something people can use to deal with the stress they face in their everyday lives. Some findings suggest that higher levels of external locus of control combined with lower levels of self-efficacy are related to higher illness-related psychological distress. People who report a more external locus of control also report more concurrent and future stressful experiences and higher levels of psychological and physical problems. These people are also more vulnerable to external influences and, as a result, become more responsive to stress.

Military veterans with spinal cord injuries and post-traumatic stress are a useful group to examine with regard to locus of control and stress. Age appears to be an important factor in the severity of the PTSD symptoms experienced by patients following the trauma of war. Research suggests that patients with a spinal cord injury benefit from knowing that they have control over their health problems and their disability, which reflects the characteristics of an internal locus of control.

A study by Chung et al. (2006) focused on how responses to spinal cord injury post-traumatic stress varied with age. The researchers tested young adult, middle-aged, and elderly groups, with average ages of 25, 48, and 65 respectively. They concluded that age did not affect how spinal cord injury patients responded to the traumatic events that had happened to them. However, age did play a role in the extent to which an external locus of control was used: the young adult group demonstrated more external locus of control characteristics than the other age groups to which they were compared.

Paradigm

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Paradigm

In science and philosophy, a paradigm (/ˈpærədaɪm/) is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes a legitimate contribution to a field. The word paradigm is Greek in origin, meaning "pattern", and is used to illustrate similar occurrences.

Etymology

Paradigm comes from Greek παράδειγμα (paradeigma), "pattern, example, sample" from the verb παραδείκνυμι (paradeiknumi), "exhibit, represent, expose" and that from παρά (para), "beside, beyond" and δείκνυμι (deiknumi), "to show, to point out".

In classical (Greek-based) rhetoric, a paradeigma aims to provide an audience with an illustration of a similar occurrence. The illustration is not meant to take the audience to a conclusion; rather, it is used to help guide them there.

One way a paradeigma can guide an audience is exemplified by the role of a personal accountant. It is not the job of a personal accountant to tell a client exactly what (and what not) to spend money on, but to guide the client as to how money should be spent based on the client's financial goals. Anaximenes defined paradeigma as "actions that have occurred previously and are similar to, or the opposite of, those which we are now discussing."

The original Greek term παράδειγμα (paradeigma) was used by scribes in Greek texts (such as Plato's dialogues Timaeus (c. 360 BCE) and Parmenides) as one possibility for the model or the pattern that the demiurge supposedly used to create the cosmos.

The English-language term paradigm has technical meanings in the fields of grammar (as applied, for example, to declension and conjugation - the 1900 Merriam-Webster dictionary defines the technical use of paradigm only in the context of grammar) and of rhetoric (as a term for an illustrative parable or fable). In linguistics, Ferdinand de Saussure (1857-1913) used paradigm to refer to a class of elements with similarities (as opposed to syntagma - a class of elements expressing relationship).

The Merriam-Webster Online dictionary defines one usage of paradigm as "a philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and generalizations and the experiments performed in support of them are formulated; broadly: a philosophical or theoretical framework of any kind."

The Oxford Dictionary of Philosophy (2008) attributes the following description of the term in the history and philosophy of science to Thomas Kuhn's 1962 work The Structure of Scientific Revolutions:

Kuhn suggests that certain scientific works, such as Newton's Principia or John Dalton's New System of Chemical Philosophy (1808), provide an open-ended resource: a framework of concepts, results, and procedures within which subsequent work is structured. Normal science proceeds within such a framework or paradigm. A paradigm does not impose a rigid or mechanical approach, but can be taken more or less creatively and flexibly.

Scientific paradigm

The Oxford English Dictionary defines a paradigm as "a pattern or model, an exemplar; a typical instance of something, an example". The historian of science Thomas Kuhn gave the word its contemporary meaning when he adopted the word to refer to the set of concepts and practices that define a scientific discipline at any particular period of time. In his book, The Structure of Scientific Revolutions (first published in 1962), Kuhn defines a scientific paradigm as: "universally recognized scientific achievements that, for a time, provide model problems and solutions to a community of practitioners, i.e.,

  • what is to be observed and scrutinized
  • the kind of questions that are supposed to be asked and probed for answers in relation to this subject
  • how these questions are to be structured
  • what predictions are made by the primary theory within the discipline
  • how the results of scientific investigations should be interpreted
  • how an experiment is to be conducted, and what equipment is available to conduct the experiment.

In The Structure of Scientific Revolutions, Kuhn saw the sciences as going through alternating periods of normal science, when an existing model of reality dominates a protracted period of puzzle-solving, and revolution, when the model of reality itself undergoes sudden drastic change. Paradigms have two aspects. Firstly, within normal science, the term refers to the set of exemplary experiments that are likely to be copied or emulated. Secondly, underpinning this set of exemplars are shared preconceptions, made prior to – and conditioning – the collection of evidence. These preconceptions embody both hidden assumptions and elements that Kuhn describes as quasi-metaphysical. The interpretations of the paradigm may vary among individual scientists.

Kuhn was at pains to point out that the rationale for the choice of exemplars is a specific way of viewing reality: that view and the status of "exemplar" are mutually reinforcing. For well-integrated members of a particular discipline, its paradigm is so convincing that it normally renders even the possibility of alternatives unconvincing and counter-intuitive. Such a paradigm is opaque, appearing to be a direct view of the bedrock of reality itself, and obscuring the possibility that there might be other, alternative imageries hidden behind it. The conviction that the current paradigm is reality tends to disqualify evidence that might undermine the paradigm itself; this in turn leads to a build-up of unreconciled anomalies. It is the latter that is responsible for the eventual revolutionary overthrow of the incumbent paradigm, and its replacement by a new one. Kuhn used the expression paradigm shift (see below) for this process, and likened it to the perceptual change that occurs when our interpretation of an ambiguous image "flips over" from one state to another. (The rabbit-duck illusion is an example: it is not possible to see both the rabbit and the duck simultaneously.) This is significant in relation to the issue of incommensurability (see below).

An example of a currently accepted paradigm would be the standard model of physics. The scientific method allows for orthodox scientific investigations into phenomena that might contradict or disprove the standard model; however grant funding would be proportionately more difficult to obtain for such experiments, depending on the degree of deviation from the accepted standard model theory the experiment would test for. To illustrate the point, an experiment to test for the mass of neutrinos or the decay of protons (small departures from the model) is more likely to receive money than experiments that look for the violation of the conservation of momentum, or ways to engineer reverse time travel.

Mechanisms similar to the original Kuhnian paradigm have been invoked in various disciplines other than the philosophy of science. These include: the idea of major cultural themes, worldviews (and see below), ideologies, and mindsets. They have somewhat similar meanings that apply to smaller and larger scale examples of disciplined thought. In addition, Michel Foucault used the terms episteme and discourse, mathesis, and taxinomia, for aspects of a "paradigm" in Kuhn's original sense.

Paradigm shifts

In The Structure of Scientific Revolutions, Kuhn wrote that "the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12).

Paradigm shifts tend to appear in response to the accumulation of critical anomalies as well as in the form of the proposal of a new theory with the power to encompass both older relevant data and explain relevant anomalies. New paradigms tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, a statement generally attributed to physicist Lord Kelvin famously claimed, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." Five years later, Albert Einstein published his paper on special relativity, which challenged the set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years. In this case, the new paradigm reduces the old to a special case in the sense that Newtonian mechanics is still a good model for approximation for speeds that are slow compared to the speed of light. Many philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Kuhn's original model is now generally seen as too limited.

Some examples of contemporary paradigm shifts include:

  • In medicine, the transition from "clinical judgment" to evidence-based medicine
  • In social psychology, the transition from p-hacking to replication[20]
  • In software engineering, the transition from the Rational Paradigm to the Empirical Paradigm [21]
  • In artificial intelligence, the transition from classical AI to data-driven AI [22]

Kuhn's idea was itself revolutionary in its time. It caused a major change in the way that academics talk about science, and so it may be that it caused (or was part of) a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognize such a paradigm shift: since the history and sociology of science belong to the social sciences, their practitioners can still use earlier ideas to discuss the history of science.

Paradigm paralysis

Perhaps the greatest barrier to a paradigm shift, in some cases, is the reality of paradigm paralysis: the inability or refusal to see beyond the current models of thinking. This is similar to what psychologists term confirmation bias and the Semmelweis reflex. Examples include the rejection of the heliocentric theory of Aristarchus of Samos, Copernicus, and Galileo, and the initial rejection of discoveries such as electrostatic photography (xerography) and the quartz clock.

Incommensurability

Kuhn pointed out that it could be difficult to assess whether a particular paradigm shift had actually led to progress, in the sense of explaining more facts, explaining more important facts, or providing better explanations, because the understanding of "more important", "better", etc. changed with the paradigm. The two versions of reality are thus incommensurable. Kuhn's version of incommensurability has an important psychological dimension. This is apparent from his analogy between a paradigm shift and the flip-over involved in some optical illusions. However, he subsequently diluted his commitment to incommensurability considerably, partly in the light of other studies of scientific development that did not involve revolutionary change. One of the examples of incommensurability that Kuhn used was the change in the style of chemical investigations that followed the work of Lavoisier on atomic theory in the late 18th Century. In this change, the focus had shifted from the bulk properties of matter (such as hardness, colour, reactivity, etc.) to studies of atomic weights and quantitative studies of reactions. He suggested that it was impossible to make the comparison needed to judge which body of knowledge was better or more advanced. However, this change in research style (and paradigm) eventually (after more than a century) led to a theory of atomic structure that accounts well for the bulk properties of matter; see, for example, Brady's General Chemistry. According to P J Smith, this ability of science to back off, move sideways, and then advance is characteristic of the natural sciences, but contrasts with the position in some social sciences, notably economics.

This apparent ability does not guarantee that the account is veridical at any one time, of course, and most modern philosophers of science are fallibilists. However, members of other disciplines do see the issue of incommensurability as a much greater obstacle to evaluations of "progress"; see, for example, Martin Slattery's Key Ideas in Sociology.

Subsequent developments

Opaque Kuhnian paradigms and paradigm shifts do exist. A few years after the discovery of the mirror-neurons that provide a hard-wired basis for the human capacity for empathy, the scientists involved were unable to identify the incidents that had directed their attention to the issue. Over the course of the investigation, their language and metaphors had changed so that they themselves could no longer interpret all of their own earlier laboratory notes and records.

Imre Lakatos and research programmes

However, many instances exist in which change in a discipline's core model of reality has happened in a more evolutionary manner, with individual scientists exploring the usefulness of alternatives in a way that would not be possible if they were constrained by a paradigm. Imre Lakatos suggested (as an alternative to Kuhn's formulation) that scientists actually work within research programmes. In Lakatos' sense, a research programme is a sequence of problems, placed in order of priority. This set of priorities, and the associated set of preferred techniques, is the positive heuristic of a programme. Each programme also has a negative heuristic; this consists of a set of fundamental assumptions that – temporarily, at least – takes priority over observational evidence when the two appear to conflict.

This latter aspect of research programmes is inherited from Kuhn's work on paradigms, and represents an important departure from the elementary account of how science works. According to the elementary account, science proceeds through repeated cycles of observation, induction, hypothesis-testing, etc., with the test of consistency with empirical evidence being imposed at each stage. Paradigms and research programmes allow anomalies to be set aside where there is reason to believe that they arise from incomplete knowledge (about either the substantive topic, or some aspect of the theories implicitly used in making observations).

Larry Laudan: Dormant anomalies, fading credibility, and research traditions

Larry Laudan has also made two important contributions to the debate. Laudan believed that something akin to paradigms exists in the social sciences (Kuhn had contested this; see below); he referred to these as research traditions. Laudan noted that some anomalies become "dormant" if they survive a long period during which no competing alternative has shown itself capable of resolving them. He also presented cases in which a dominant paradigm had withered away because it lost credibility when viewed against changes in the wider intellectual milieu.

In social sciences

Kuhn himself did not consider the concept of paradigm as appropriate for the social sciences. He explains in his preface to The Structure of Scientific Revolutions that he developed the concept of paradigm precisely to distinguish the social from the natural sciences. While visiting the Center for Advanced Study in the Behavioral Sciences in 1958 and 1959, surrounded by social scientists, he observed that they were never in agreement about the nature of legitimate scientific problems and methods. He explains that he wrote this book precisely to show that there can never be any paradigms in the social sciences. Mattei Dogan, a French sociologist, in his article "Paradigms in the Social Sciences," develops Kuhn's original thesis that there are no paradigms at all in the social sciences since the concepts are polysemic, involving the deliberate mutual ignorance between scholars and the proliferation of schools in these disciplines. Dogan provides many examples of the non-existence of paradigms in the social sciences in his essay, particularly in sociology, political science and political anthropology.

However, both Kuhn's original work and Dogan's commentary are directed at disciplines that are defined by conventional labels (such as "sociology"). While it is true that such broad groupings in the social sciences are usually not based on a Kuhnian paradigm, each of the competing sub-disciplines may still be underpinned by a paradigm, research programme, research tradition, and/or professional imagery. These structures will be motivating research, providing it with an agenda, defining what is and is not anomalous evidence, and inhibiting debate with other groups that fall under the same broad disciplinary label. (A good example is provided by the contrast between Skinnerian radical behaviourism and personal construct theory (PCT) within psychology. The most significant of the many ways these two sub-disciplines of psychology differ concerns meanings and intentions. In PCT, they are seen as the central concern of psychology; in radical behaviourism, they are not scientific evidence at all, as they cannot be directly observed.)

Such considerations explain the conflict between the Kuhn/Dogan view, and the views of others (including Larry Laudan, see above), who do apply these concepts to social sciences.

Handa, M.L. (1986) introduced the idea of "social paradigm" in the context of social sciences. He identified the basic components of a social paradigm. Like Kuhn, Handa addressed the issue of changing paradigm; the process popularly known as "paradigm shift". In this respect, he focused on social circumstances that precipitate such a shift and the effects of the shift on social institutions, including the institution of education. This broad shift in the social arena, in turn, changes the way the individual perceives reality.

Another use of the word paradigm is in the sense of "worldview". For example, in social science, the term is used to describe the set of experiences, beliefs and values that affect the way an individual perceives reality and responds to that perception. Social scientists have adopted the Kuhnian phrase "paradigm shift" to denote a change in how a given society goes about organizing and understanding reality. A "dominant paradigm" refers to the values, or system of thought, in a society that are most standard and widely held at a given time. Dominant paradigms are shaped both by the community's cultural background and by the context of the historical moment. Hutchin outlines some conditions that facilitate a system of thought to become an accepted dominant paradigm:

  • Professional organizations that give legitimacy to the paradigm
  • Dynamic leaders who introduce and purport the paradigm
  • Journals and editors who write about the system of thought. They both disseminate the information essential to the paradigm and give the paradigm legitimacy
  • Government agencies who give credence to the paradigm
  • Educators who propagate the paradigm's ideas by teaching it to students
  • Conferences conducted that are devoted to discussing ideas central to the paradigm
  • Media coverage
  • Lay groups, or groups based around the concerns of lay persons, that embrace the beliefs central to the paradigm
  • Sources of funding to further research on the paradigm

Other uses

The word paradigm is also still used to indicate a pattern or model or an outstandingly clear or typical example or archetype. The term is frequently used in this sense in the design professions. Design Paradigms or archetypes comprise functional precedents for design solutions. The best known references on design paradigms are Design Paradigms: A Sourcebook for Creative Visualization, by Wake, and Design Paradigms by Petroski.

This term is also used in cybernetics. Here it means (in a very wide sense) a (conceptual) protoprogram for reducing the chaotic mass to some form of order. Note the similarities to the concept of entropy in chemistry and physics. A paradigm there would be a sort of prohibition to proceed with any action that would increase the total entropy of the system. To create a paradigm requires a closed system that accepts changes. Thus a paradigm can only apply to a system that is not in its final stage.

Beyond its use in the physical and social sciences, Kuhn's paradigm concept has been analysed in relation to its applicability in identifying 'paradigms' with respect to worldviews at specific points in history. One example is Matthew Edward Harris' book The Notion of Papal Monarchy in the Thirteenth Century: The Idea of Paradigm in Church History. Harris stresses the primarily sociological importance of paradigms, pointing towards Kuhn's second edition of The Structure of Scientific Revolutions. Although obedience to popes such as Innocent III and Boniface VIII was widespread, even written testimony from the time showing loyalty to the pope does not demonstrate that the writer had the same worldview as the Church, and therefore pope, at the centre. The difference between paradigms in the physical sciences and in historical organisations such as the Church is that the former, unlike the latter, requires technical expertise rather than repeating statements. In other words, after scientific training through what Kuhn calls 'exemplars', one could not genuinely believe that, to take a trivial example, the earth is flat; whereas a thinker such as Giles of Rome, who wrote in favour of the pope in the thirteenth century, could as easily have written similarly glowing things about the king. A writer such as Giles would have wanted a good job from the pope; he was a papal publicist. However, Harris writes that 'scientific group membership is not concerned with desire, emotions, gain, loss and any idealistic notions concerning the nature and destiny of humankind...but simply to do with aptitude, explanation, [and] cold description of the facts of the world and the universe from within a paradigm'.

 

Neural network

From Wikipedia, the free encyclopedia
Simplified view of a feedforward artificial neural network

A neural network can refer to either a neural circuit of biological neurons (sometimes also called a biological neural network), or a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1.
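The weighted sum ("linear combination") and activation step described above can be sketched in a few lines; the sigmoid activation and the example inputs and weights here are illustrative choices, not taken from any particular network:

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Output of a single artificial neuron.

    Each input is scaled by its weight (positive = excitatory,
    negative = inhibitory), the scaled inputs are summed (a linear
    combination), and a sigmoid activation squashes the result
    into the range (0, 1).
    """
    linear = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-linear))  # sigmoid activation

out = neuron_output([0.5, -1.0, 2.0], [0.8, 0.2, -0.4])
# linear combination = -0.6, so out = sigmoid(-0.6), roughly 0.354
```

Swapping the sigmoid for tanh would instead confine the output to (−1, 1), the other acceptable range mentioned above.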

These artificial networks may be used for predictive modeling, adaptive control and applications where they can be trained via a dataset. Self-learning resulting from experience can occur within networks, which can derive conclusions from a complex and seemingly unrelated set of information.

Overview

A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion.

Artificial intelligence, cognitive modelling, and neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.

Historically, digital computers evolved from the von Neumann model, and operate via the execution of explicit instructions via access to memory by a number of processors. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems. Unlike the von Neumann model, neural network computing does not separate memory and processing.

Neural network theory has served to identify better how the neurons in the brain function and provide the basis for efforts to create artificial intelligence.

History

The preliminary theoretical base for contemporary neural networks was independently proposed by Alexander Bain (1873) and William James (1890). For both, thoughts and body activity resulted from interactions among neurons within the brain.

Computer simulation of the branching architecture of the dendrites of pyramidal neurons.

For Bain, every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to the formation of memory. The general scientific community at the time was skeptical of Bain's theory because it required what appeared to be an inordinate number of neural connections within the brain. It is now apparent that the brain is exceedingly complex and that the same brain “wiring” can handle multiple problems and inputs.

James's theory was similar to Bain's; however, he suggested that memories and actions resulted from electrical currents flowing among the neurons in the brain. His model, by focusing on the flow of electrical currents, did not require individual neural connections for each memory or action.

C. S. Sherrington (1898) conducted experiments to test James's theory. He ran electrical currents down the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected by James, Sherrington found that the electrical current strength decreased as the testing continued over time. Importantly, this work led to the discovery of the concept of habituation.

Wilhelm Lenz (1920) and Ernst Ising (1925) created and analyzed the Ising model which is essentially a non-learning artificial recurrent neural network (RNN) consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was popularised by John Hopfield in 1982. McCulloch and Pitts (1943) also created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. These early models paved the way for neural network research to split into two distinct approaches. One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence.

In the late 1940s psychologist Donald Hebb created a hypothesis of learning based on the mechanism of neural plasticity that is now known as Hebbian learning. Hebbian learning is considered to be a 'typical' unsupervised learning rule and its later variants were early models for long term potentiation. These ideas started being applied to computational models in 1948 with Turing's B-type machines.
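Hebb's rule, often summarized as "neurons that fire together wire together", can be sketched as a simple weight update; the learning rate and toy activity values below are illustrative assumptions:

```python
def hebbian_update(weights, pre_activity, post_activity, lr=0.1):
    # Strengthen each connection in proportion to the product of
    # pre- and post-synaptic activity: delta_w_i = lr * pre_i * post.
    return [w + lr * pre * post_activity
            for w, pre in zip(weights, pre_activity)]

w = [0.0, 0.0, 0.0]
for _ in range(5):  # repeated co-activation strengthens the connection
    w = hebbian_update(w, pre_activity=[1.0, 0.5, 0.0], post_activity=1.0)
# The most active input's weight grows fastest; the silent third
# input's weight never changes, mirroring Bain's repetition idea.
```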

Farley and Clark (1954) first used computational machines, then called calculators, to simulate a Hebbian network at MIT. Other neural network computational machines were created by Rochester, Holland, Habit, and Duda (1956).

Frank Rosenblatt (1958) created the perceptron, an algorithm for pattern recognition based on a two-layer learning computer network using simple addition and subtraction. With mathematical notation, Rosenblatt also described circuitry not in the basic perceptron, such as the exclusive-or circuit.
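A minimal perceptron of the kind Rosenblatt described can be trained using only addition and subtraction; the toy dataset (logical AND) and learning rate below are illustrative assumptions:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Rosenblatt-style perceptron: weights are corrected by simple
    addition and subtraction whenever a prediction is wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so training converges; XOR
# (the exclusive-or circuit mentioned below) has no separating line.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```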

Some say that neural network research stagnated after the publication of machine learning research by Marvin Minsky and Seymour Papert (1969). They discovered two key issues with the computational machines that processed neural networks. The first issue was that single-layer neural networks were incapable of processing the exclusive-or circuit. The second significant issue was that computers were not sophisticated enough to effectively handle the long run time required by large neural networks. However, by the time this book came out, methods for training multilayer perceptrons (MLPs) were already known. The first deep learning MLP was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965. The first deep learning MLP trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes.

Neural network research was boosted when computers achieved greater processing power. Also key in later advances was the backpropagation algorithm. It is an efficient application of the Leibniz chain rule (1673) to networks of differentiable nodes. It is also known as the reverse mode of automatic differentiation or reverse accumulation, due to Seppo Linnainmaa (1970). The term "back-propagating errors" was introduced in 1962 by Frank Rosenblatt, but he did not have an implementation of this procedure, although Henry J. Kelley had a continuous precursor of backpropagation already in 1960 in the context of control theory. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard.
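The chain-rule bookkeeping that backpropagation performs can be illustrated on a tiny two-node chain; the network shape, weight values, and sigmoid activation here are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward pass through a two-node chain: y = sigmoid(w2 * sigmoid(w1 * x))
x, w1, w2 = 1.0, 0.5, -1.5
h = sigmoid(w1 * x)   # hidden activation
y = sigmoid(w2 * h)   # output

# Backward pass: apply the chain rule from the output back to each
# weight, reusing the forward-pass values (the "reverse mode").
dy_dz2 = y * (1.0 - y)                 # sigmoid'(z) = s(z) * (1 - s(z))
grad_w2 = dy_dz2 * h                   # dy/dw2
grad_h = dy_dz2 * w2                   # gradient propagated to the hidden node
grad_w1 = grad_h * h * (1.0 - h) * x   # dy/dw1 via the hidden node

# Sanity check against a finite-difference approximation of dy/dw1.
eps = 1e-6
y_nudged = sigmoid(w2 * sigmoid((w1 + eps) * x))
assert abs((y_nudged - y) / eps - grad_w1) < 1e-4
```

The point of reverse accumulation is that one backward sweep yields the gradient with respect to every weight at once, which is what makes training large networks tractable.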

In the late 1970s to early 1980s, interest briefly emerged in theoretically investigating the Ising model by Wilhelm Lenz (1920) and Ernst Ising (1925) in relation to Cayley tree topologies and large neural networks. In 1981, the Ising model was solved exactly for the general case of closed Cayley trees (with loops) with an arbitrary branching ratio and found to exhibit unusual phase transition behavior in its local-apex and long-range site-site correlations.

The parallel distributed processing of the mid-1980s became popular under the name connectionism. The text by Rumelhart and McClelland (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes.

Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function.

Artificial intelligence

A neural network (NN), in the case of artificial neurons called artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionistic approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.

In more practical terms, neural networks are non-linear statistical data-modeling or decision-making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters. Artificial neurons were first proposed in 1943 by Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, who first collaborated at the University of Chicago.
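A McCulloch-Pitts-style unit can be sketched as a simple threshold function; this is an illustrative simplification for this article, not the notation of the 1943 paper. With binary inputs, a single unit with suitable weights and threshold computes elementary logic gates:

```python
# Illustrative sketch of a McCulloch-Pitts-style threshold neuron: the unit
# fires (outputs 1) iff the weighted sum of its inputs reaches a threshold.
def threshold_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds a single unit computes logic gates.
AND = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```

Complex global behavior emerges when many such simple units are connected, with the connection weights playing the role of the element parameters mentioned above.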

One classical type of artificial neural network is the recurrent Hopfield network.
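As an illustration of the Hopfield idea, the sketch below stores one bipolar pattern with a Hebbian outer-product rule and recalls it from a corrupted probe. The function names, the synchronous update scheme, and the fixed step count are choices made for brevity, not the only (or original) formulation.

```python
# Hypothetical sketch of a recurrent Hopfield network: store one bipolar
# (+1/-1) pattern with a Hebbian rule, then recall it from a corrupted probe.
def train(patterns, n):
    # Hebbian outer-product rule; zero diagonal means no self-connections.
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=5):
    # Synchronous updates: each neuron takes the sign of its weighted input.
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

pattern = [1, -1, 1, -1, 1, -1]
W = train([pattern], n=6)
probe = [1, -1, 1, -1, 1, 1]   # last bit flipped
print(recall(W, probe) == pattern)  # True: the stored pattern is recovered
```

The stored pattern acts as an attractor: states near it are pulled back to it, which is why Hopfield networks are often described as content-addressable memories.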

The concept of a neural network appears to have first been proposed by Alan Turing in his 1948 paper Intelligent Machinery in which he called them "B-type unorganised machines".

The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. Unsupervised neural networks can also be used to learn representations of the input that capture the salient characteristics of the input distribution, e.g., see the Boltzmann machine (1983), and more recently, deep learning algorithms, which can implicitly learn the distribution function of the observed data. Learning in neural networks is particularly useful in applications where the complexity of the data or task makes the design of such functions by hand impractical.

Applications

Neural networks can be used in many different fields. The tasks to which artificial neural networks are applied tend to fall within a few broad categories:

Application areas of ANNs include nonlinear system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition), sequence recognition (gesture, speech, handwritten-text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization, and e-mail spam filtering. For example, it is possible to create a semantic profile of a user's interests from pictures, using a network trained for object recognition.

Neuroscience

Theoretical and computational neuroscience is the field concerned with the analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling.

The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models) and theory (statistical learning theory and information theory).

Types of models

Many models are used, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of the dynamics of neural circuitry arising from interactions between individual neurons, to models of behaviour arising from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and its relation to learning and memory, from the individual neuron to the system level.

Connectivity

In August 2020, scientists reported that adding appropriate feedback connections, making connections bi-directional, can accelerate and improve communication between and within modular neural networks of the brain's cerebral cortex, and can lower the threshold for their successful communication. They showed that adding feedback connections between a resonance pair can support successful propagation of a single pulse packet throughout the entire network.

Criticism

Historically, a common criticism of neural networks, particularly in robotics, was that they require a large diversity of training samples for real-world operation. This is not surprising, since any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Dean Pomerleau, in his research presented in the paper "Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving," uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.). A large amount of his research is devoted to (1) extrapolating multiple training scenarios from a single training experience, and (2) preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns—it should not learn to always turn right). These issues are common in neural networks that must decide from amongst a wide variety of responses, but can be dealt with in several ways, for example by randomly shuffling the training examples, by using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, or by grouping examples in so-called mini-batches.
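The shuffling and mini-batch remedies described above can be sketched as follows (a hypothetical helper; the function name, example labels, and batch size are illustrative):

```python
import random

# Hypothetical sketch: shuffle the training examples each epoch and group
# them into fixed-size mini-batches, so the learner never sees a long
# unbroken run of similar examples (e.g. a series of right turns).
def minibatches(examples, batch_size, seed=0):
    rng = random.Random(seed)     # seeded for reproducibility
    order = list(examples)
    rng.shuffle(order)            # break up correlated runs
    for start in range(0, len(order), batch_size):
        yield order[start:start + batch_size]

examples = ([("right_turn", i) for i in range(6)]
            + [("left_turn", i) for i in range(6)])
batches = list(minibatches(examples, batch_size=4))
print(len(batches))  # 3
```

Each gradient step then averages over one mini-batch, which both smooths the updates and keeps any single run of similar examples from dominating the learned weights.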

A. K. Dewdney, a former Scientific American columnist, wrote in 1997, "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool."

Arguments for Dewdney's position are that implementing large and effective software neural networks requires committing considerable processing and storage resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a greatly simplified form on Von Neumann technology may compel a neural network designer to fill many millions of database rows for its connections, which can consume vast amounts of computer memory and data storage capacity. Furthermore, the designer of neural network systems will often need to simulate the transmission of signals through many of these connections and their associated neurons, which often demands enormous amounts of CPU processing power and time. While neural networks often yield effective programs, they too often do so at the cost of efficiency (they tend to consume considerable amounts of time and money).

Arguments against Dewdney's position are that neural nets have been successfully used to solve many complex and diverse tasks, such as autonomously flying aircraft.

Technology writer Roger Bridgman commented on Dewdney's statements about neural nets:

Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".

In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.

Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.

Other criticisms have come from proponents of hybrid models (combining neural networks and symbolic approaches). They advocate the intermixing of these two approaches and believe that hybrid models can better capture the mechanisms of the human mind (Sun and Bookman, 1990).

Recent improvements

While initial research was concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin in behaviour and learning.

Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience. Research is ongoing in understanding the computational algorithms used in the brain, with some recent biological evidence for radial basis networks and neural backpropagation as mechanisms for processing data.

Computational devices have been created in CMOS for both biophysical simulation and neuromorphic computing. More recent efforts show promise for creating nanodevices for very large scale principal components analyses and convolution. If successful, these efforts could usher in a new era of neural computing that is a step beyond digital computing, because it depends on learning rather than programming and because it is fundamentally analog rather than digital even though the first instantiations may in fact be with CMOS digital devices.

Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA won eight international competitions in pattern recognition and machine learning. For example, multi-dimensional long short-term memory (LSTM) won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge of the three different languages to be learned.

Variants of the back-propagation algorithm as well as unsupervised methods by Geoff Hinton and colleagues at the University of Toronto can be used to train deep, highly nonlinear neural architectures, similar to the 1980 Neocognitron by Kunihiko Fukushima, and the "standard architecture of vision", inspired by the simple and complex cells identified by David H. Hubel and Torsten Wiesel in the primary visual cortex.

Radial basis function and wavelet networks have also been introduced. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
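To illustrate why radial basis function units are useful for classification, the following sketch solves XOR with two hand-placed Gaussian units; the centres and the 0.9 threshold are chosen by hand for this toy example, whereas a real RBF network would learn its output weights from data.

```python
import math

# Hypothetical sketch of an RBF network solving XOR: two Gaussian hidden
# units centred at (0,0) and (1,1); in the hidden-unit feature space the
# two classes become linearly separable.
def gaussian(x, c):
    r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-r2)

def rbf_xor(x):
    # The summed activation is high near either centre (class 0) and low
    # in between (class 1); 0.9 is a hand-picked decision threshold.
    s = gaussian(x, (0, 0)) + gaussian(x, (1, 1))
    return 0 if s > 0.9 else 1

print([rbf_xor(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The same construction underlies the approximation results mentioned above: with enough well-placed basis functions and fitted output weights, an RBF network can approximate a wide class of functions.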

Deep learning feedforward networks alternate convolutional layers and max-pooling layers, topped by several pure classification layers. Fast GPU-based implementations of this approach have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition and the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge. Such neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun and colleagues at NYU.
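The two layer types that such networks alternate can be sketched in one dimension (real networks use learned 2-D filters; the difference filter and pool size here are illustrative):

```python
# Hypothetical 1-D sketch of the alternating layers: a "valid" convolution
# followed by non-overlapping max-pooling.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def maxpool1d(signal, size):
    # The last window may be shorter if the length is not a multiple of size.
    return [max(signal[i:i + size]) for i in range(0, len(signal), size)]

x = [0, 1, 3, 1, 0, 2, 6, 2]
edges = conv1d(x, [-1, 1])    # difference filter highlights changes
pooled = maxpool1d(edges, 2)  # keep the strongest response per window
print(pooled)  # [2, -1, 4, -4]
```

Pooling discards the exact position of each response while keeping its strength, which gives the stacked layers a degree of translation tolerance and shrinks the input for the classification layers on top.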

Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks.

Saturday, May 13, 2023

Large-scale brain network

From Wikipedia, the free encyclopedia

Large-scale brain networks (also known as intrinsic brain networks) are collections of widespread brain regions showing functional connectivity by statistical analysis of the fMRI BOLD signal or of other recording methods such as EEG, PET and MEG. An emerging paradigm in neuroscience is that cognitive tasks are performed not by individual brain regions working in isolation but by networks consisting of several discrete brain regions that are said to be "functionally connected". Functional connectivity networks may be found using algorithms such as cluster analysis, spatial independent component analysis (ICA), and seed-based analysis, among others. Synchronized brain regions may also be identified using long-range synchronization of the EEG, MEG, or other dynamic brain signals.
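A toy version of seed-based functional connectivity analysis can be sketched as follows: compute the Pearson correlation between a seed region's time series and every other region's, and group the strongly correlated regions into a network. The signals and the 0.8 threshold below are invented purely for illustration.

```python
import math

# Hypothetical sketch of seed-based functional connectivity: regions whose
# recorded time series (e.g. fMRI BOLD signals) correlate strongly with a
# chosen seed region are assigned to the same functional network.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Simulated signals: region_b tracks the seed, region_c does not.
seed     = [0.0, 1.0, 0.5, 1.5, 0.2, 1.2]
region_b = [0.1, 1.1, 0.6, 1.4, 0.3, 1.3]   # synchronized with the seed
region_c = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # unrelated rhythm

network = [name for name, ts in [("b", region_b), ("c", region_c)]
           if pearson(seed, ts) > 0.8]
print(network)  # ['b']
```

Real analyses apply the same idea voxel-by-voxel across the whole brain, with statistical corrections that a sketch this small omits.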

The set of identified brain areas that are linked together in a large-scale network varies with cognitive function. When the cognitive state is not explicit (i.e., the subject is at "rest"), the large-scale brain network is a resting state network (RSN). As a physical system with graph-like properties, a large-scale brain network has both nodes and edges and cannot be identified simply by the co-activation of brain areas. In recent decades, the analysis of brain networks was made feasible by advances in imaging techniques as well as new tools from graph theory and dynamical systems.

Large-scale brain networks are identified by their function and provide a coherent framework for understanding cognition by offering a neural model of how different cognitive functions emerge when different sets of brain regions join together as self-organized coalitions. The number and composition of the coalitions will vary with the algorithm and parameters used to identify them. In one model, there is only the default mode network and the task-positive network, but most current analyses show several networks, from a small handful to 17. The most common and stable networks are enumerated below. The regions participating in a functional network may be dynamically reconfigured.

Disruptions in activity in various networks have been implicated in neuropsychiatric disorders such as depression, Alzheimer's, autism spectrum disorder, schizophrenia, ADHD and bipolar disorder.

Core networks

An example analysis that identified 10 large-scale brain networks from resting-state fMRI activity through independent component analysis.

Because brain networks can be identified at various resolutions and with various neurobiological properties, there is currently no universal atlas of brain networks that fits all circumstances. The Organization for Human Brain Mapping has established the Workgroup for HArmonized Taxonomy of NETworks (WHATNET) to work towards a consensus on network nomenclature. While that work continues, Uddin, Yeo, and Spreng proposed in 2019 that the following six networks be defined as core networks, based on converging evidence from multiple studies, to facilitate communication between researchers.

Default Mode (Medial frontoparietal)

  • The default mode network is active when an individual is awake and at rest. It preferentially activates when individuals focus on internally-oriented tasks such as daydreaming, envisioning the future, retrieving memories, and theory of mind. It is negatively correlated with brain systems that focus on external visual signals. It is the most widely researched network.

Salience (Midcingulo-Insular)

  • The salience network consists of several structures, including the (bilateral) anterior insula, the dorsal anterior cingulate cortex, and subcortical structures such as the ventral striatum and the substantia nigra/ventral tegmental region. It plays the key role of monitoring the salience of external inputs and internal brain events. Specifically, it aids in directing attention by identifying important biological and cognitive events.
  • This network includes the ventral attention network, which primarily includes the temporoparietal junction and the ventral frontal cortex of the right hemisphere. These areas respond when behaviorally relevant stimuli occur unexpectedly. The ventral attention network is inhibited during focused attention in which top-down processing is being used, such as when visually searching for something. This response may prevent goal-driven attention from being distracted by non-relevant stimuli. It becomes active again when the target or relevant information about the target is found.

Attention (Dorsal frontoparietal)

  • This network is involved in the voluntary, top-down deployment of attention. Within the dorsal attention network, the intraparietal sulcus and frontal eye fields influence the visual areas of the brain. These influencing factors allow for the orientation of attention.

Control (Lateral frontoparietal)

  • This network initiates and modulates cognitive control and comprises 18 sub-regions of the brain. There is a strong correlation between fluid intelligence and the involvement of the fronto-parietal network with other networks.
  • Versions of this network have also been called the central executive (or executive control) network and the cognitive control network.

Sensorimotor or Somatomotor (Pericentral)

  • This network processes somatosensory information and coordinates motion. The auditory cortex may be included.

Visual (Occipital)

  • This network handles visual information processing.

Other networks

Different methods and data have identified several other brain networks, many of which greatly overlap or are subsets of more well-characterized core networks.

  • Limbic
  • Auditory
  • Right/left executive
  • Cerebellar
  • Spatial attention
  • Language
  • Lateral visual
  • Temporal
  • Visual perception/imagery

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...