
Wednesday, June 21, 2023

Executive functions

From Wikipedia, the free encyclopedia

In cognitive science and neuropsychology, executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility. Higher-order executive functions require the simultaneous use of multiple basic executive functions and include planning and fluid intelligence (e.g., reasoning and problem-solving).

Executive functions gradually develop and change across the lifespan of an individual and can be improved at any time over the course of a person's life. Similarly, these cognitive processes can be adversely affected by a variety of events which affect an individual. Both neuropsychological tests (e.g., the Stroop test) and rating scales (e.g., the Behavior Rating Inventory of Executive Function) are used to measure executive functions. They are usually performed as part of a more comprehensive assessment to diagnose neurological and psychiatric disorders.

Cognitive control and stimulus control, which is associated with operant and classical conditioning, represent opposite processes (internal vs external or environmental, respectively) that compete over the control of an individual's elicited behaviors; in particular, inhibitory control is necessary for overriding stimulus-driven behavioral responses (stimulus control of behavior). The prefrontal cortex is necessary but not solely sufficient for executive functions; for example, the caudate nucleus and subthalamic nucleus also have a role in mediating inhibitory control.

Cognitive control is impaired in addiction, attention deficit hyperactivity disorder, autism, and a number of other central nervous system disorders. Stimulus-driven behavioral responses that are associated with a particular rewarding stimulus tend to dominate one's behavior in an addiction.

Neuroanatomy

Historically, the executive functions have been seen as regulated by the prefrontal regions of the frontal lobes, but whether this is really the case remains a matter of ongoing debate. Although articles on prefrontal lobe lesions commonly refer to disturbances of executive functions and vice versa, a review found indications for the sensitivity, but not the specificity, of executive function measures to frontal lobe functioning. This means that both frontal and non-frontal brain regions are necessary for intact executive functions: the frontal lobes likely participate in essentially all of the executive functions, but they are not the only brain structure involved.

Neuroimaging and lesion studies have identified the functions which are most often associated with the particular regions of the prefrontal cortex and associated areas.

  • The dorsolateral prefrontal cortex (DLPFC) is involved with "on-line" processing of information such as integrating different dimensions of cognition and behavior. As such, this area has been found to be associated with verbal and design fluency, ability to maintain and shift set, planning, response inhibition, working memory, organisational skills, reasoning, problem-solving, and abstract thinking.
[Figure: Side view of the brain, illustrating the dorsolateral prefrontal and orbitofrontal cortex]
  • The anterior cingulate cortex (ACC) is involved in emotional drives, experience and integration. Associated cognitive functions include inhibition of inappropriate responses, decision making and motivated behaviors. Lesions in this area can lead to low drive states such as apathy, abulia or akinetic mutism and may also result in low drive states for such basic needs as food or drink and possibly decreased interest in social or vocational activities and sex.
  • The orbitofrontal cortex (OFC) plays a key role in impulse control, maintenance of set, monitoring ongoing behavior and socially appropriate behaviors. The orbitofrontal cortex also has roles in representing the value of rewards based on sensory stimuli and evaluating subjective emotional experiences. Lesions can cause disinhibition, impulsivity, aggressive outbursts, sexual promiscuity and antisocial behavior.

Furthermore, in their review, Alvarez and Emory state that:

The frontal lobes have multiple connections to cortical, subcortical and brain stem sites. The basis of "higher-level" cognitive functions such as inhibition, flexibility of thinking, problem solving, planning, impulse control, concept formation, abstract thinking, and creativity often arise from much simpler, "lower-level" forms of cognition and behavior. Thus, the concept of executive function must be broad enough to include anatomical structures that represent a diverse and diffuse portion of the central nervous system.

The cerebellum also appears to be involved in mediating certain executive functions, as do the ventral tegmental area and the substantia nigra.

Hypothesized role

The executive system is thought to be heavily involved in handling novel situations outside the domain of some of our 'automatic' psychological processes that could be explained by the reproduction of learned schemas or set behaviors. Psychologists Don Norman and Tim Shallice have outlined five types of situations in which routine activation of behavior would not be sufficient for optimal performance:

  1. Those that involve planning or decision-making
  2. Those that involve error correction or troubleshooting
  3. Situations where responses are not well-rehearsed or contain novel sequences of actions
  4. Dangerous or technically difficult situations
  5. Situations that require the overcoming of a strong habitual response or resisting temptation.

A prepotent response is a response for which immediate reinforcement (positive or negative) is available or has been previously associated with that response.

Executive functions are often invoked when it is necessary to override prepotent responses that might otherwise be automatically elicited by stimuli in the external environment. For example, on being presented with a potentially rewarding stimulus, such as a tasty piece of chocolate cake, a person might have the automatic response to take a bite. However, where such behavior conflicts with internal plans (such as having decided not to eat chocolate cake while on a diet), the executive functions might be engaged to inhibit that response.

Although suppression of these prepotent responses is ordinarily considered adaptive, problems for the development of the individual and the culture arise when feelings of right and wrong are overridden by cultural expectations or when creative impulses are overridden by executive inhibitions.

Historical perspective

Although research into the executive functions and their neural basis has increased markedly over recent years, the theoretical framework in which it is situated is not new. In the 1940s, the British psychologist Donald Broadbent drew a distinction between "automatic" and "controlled" processes (a distinction characterized more fully by Shiffrin and Schneider in 1977), and introduced the notion of selective attention, to which executive functions are closely allied. In 1975, the US psychologist Michael Posner used the term "cognitive control" in his book chapter entitled "Attention and cognitive control".

The work of influential researchers such as Michael Posner, Joaquin Fuster, Tim Shallice, and their colleagues in the 1980s (and later Trevor Robbins, Bob Knight, Don Stuss, and others) laid much of the groundwork for recent research into executive functions. For example, Posner proposed that there is a separate "executive" branch of the attentional system, which is responsible for focusing attention on selected aspects of the environment. The British neuropsychologist Tim Shallice similarly suggested that attention is regulated by a "supervisory system", which can override automatic responses in favour of scheduling behaviour on the basis of plans or intentions. Throughout this period, a consensus emerged that this control system is housed in the most anterior portion of the brain, the prefrontal cortex (PFC).

Psychologist Alan Baddeley had proposed a similar system as part of his model of working memory and argued that there must be a component (which he named the "central executive") that allows information to be manipulated in short-term memory (for example, when doing mental arithmetic).

Development

The executive functions are among the last mental functions to reach maturity. This is due to the delayed maturation of the prefrontal cortex, which is not completely myelinated until well into a person's third decade of life. Development of executive functions tends to occur in spurts, when new skills, strategies, and forms of awareness emerge. These spurts are thought to reflect maturational events in the frontal areas of the brain. Attentional control appears to emerge in infancy and develop rapidly in early childhood. Cognitive flexibility, goal setting, and information processing usually develop rapidly during ages 7–9 and mature by age 12. Executive control typically emerges shortly after a transition period at the beginning of adolescence. It is not yet clear whether there is a single sequence of stages in which executive functions appear, or whether different environments and early life experiences can lead people to develop them in different sequences.

Early childhood

Inhibitory control and working memory act as basic executive functions that make it possible for more complex executive functions like problem-solving to develop. Inhibitory control and working memory are among the earliest executive functions to appear, with initial signs observed in infants 7 to 12 months old. Then, in the preschool years, children display a spurt in performance on tasks of inhibition and working memory, usually between the ages of 3 and 5 years. Also during this time, cognitive flexibility, goal-directed behavior, and planning begin to develop. Nevertheless, preschool children do not have fully mature executive functions and continue to make errors related to these emerging abilities – often not because the abilities are absent, but because they lack the awareness of when and how to use particular strategies in particular contexts.

Preadolescence

Preadolescent children continue to exhibit growth spurts in executive functions, suggesting that this development does not necessarily occur in a linear manner; particular functions also begin to mature during this period. During preadolescence, children display major increases in verbal working memory; goal-directed behavior (with a potential spurt around 12 years of age); response inhibition and selective attention; and strategic planning and organizational skills. Additionally, between the ages of 8 and 10, cognitive flexibility in particular begins to match adult levels. However, as in earlier childhood, executive functioning in preadolescents remains limited: because inhibitory control is still developing, they do not reliably apply these executive functions across multiple contexts.

Adolescence

Many executive functions, such as inhibitory control, begin to emerge in childhood and preadolescence. It is during adolescence, however, that the different brain systems become better integrated, and youth implement executive functions such as inhibitory control more efficiently and effectively, improving throughout this period. Just as inhibitory control emerges in childhood and improves over time, planning and goal-directed behavior also show an extended developmental course, with ongoing growth over adolescence. Likewise, attentional control (with a potential spurt at age 15) and working memory continue developing at this stage.

Adulthood

The major change that occurs in the brain in adulthood is the continued myelination of neurons in the prefrontal cortex. Between the ages of 20 and 29, executive functioning skills are at their peak, allowing people in this age range to perform some of the most challenging mental tasks. These skills begin to decline in later adulthood. Working memory and spatial span are the areas where decline is most readily noted. Cognitive flexibility, however, has a late onset of impairment and does not usually begin declining until around age 70 in normally functioning adults. Impaired executive functioning has been found to be the best predictor of functional decline in the elderly.

Models

Top-down inhibitory control

Aside from facilitatory or amplificatory mechanisms of control, many authors have argued for inhibitory mechanisms in the domain of response control, memory, selective attention, theory of mind, emotion regulation, as well as social emotions such as empathy. A recent review on this topic argues that active inhibition is a valid concept in some domains of psychology/cognitive control.

Working memory model

One influential model is Baddeley's multicomponent model of working memory, which is composed of a central executive system that regulates three subsystems: the phonological loop, which maintains verbal information; the visuospatial sketchpad, which maintains visual and spatial information; and the more recently developed episodic buffer that integrates short-term and long-term memory, holding and manipulating a limited amount of information from multiple domains in temporal and spatially sequenced episodes.
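The division of labor in this multicomponent model can be sketched as a toy program. The class names, the routing rule, and the capacity limits below are illustrative inventions, not part of Baddeley's theory:

```python
from collections import deque

class Store:
    """Capacity-limited subsystem; once full, new items displace the oldest."""
    def __init__(self, capacity):
        self.items = deque(maxlen=capacity)

    def hold(self, item):
        self.items.append(item)

class CentralExecutive:
    """Routes incoming information to the appropriate subsystem."""
    def __init__(self):
        self.phonological_loop = Store(capacity=4)       # verbal material
        self.visuospatial_sketchpad = Store(capacity=4)  # visual/spatial material
        self.episodic_buffer = Store(capacity=4)         # integrated episodes

    def attend(self, item, kind):
        if kind == "verbal":
            self.phonological_loop.hold(item)
        elif kind == "spatial":
            self.visuospatial_sketchpad.hold(item)
        else:
            self.episodic_buffer.hold(item)

wm = CentralExecutive()
for digit in "739514":          # a digit-span-like stream of verbal items
    wm.attend(digit, "verbal")
print(list(wm.phonological_loop.items))  # only the most recent items survive
```

The fixed-length deque stands in for each subsystem's limited capacity: once a store is full, new items push out the oldest ones, loosely mirroring loss from short-term memory.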

Researchers have found significant positive effects of biofeedback-enhanced relaxation on memory and inhibition in children. Biofeedback is a mind-body technique in which people learn to control and regulate bodily processes in order to improve their executive functioning skills. To measure these processes, researchers track heart rate and/or respiratory rate. Biofeedback-enhanced relaxation includes music therapy, art, and other mindfulness activities.

Executive functioning skills are important for many reasons, including children's academic success and social-emotional development. In the study "The Efficacy of Different Interventions to Foster Children's Executive Function Skills: A Series of Meta-Analyses", researchers found that it is possible to train executive functioning skills. They conducted a meta-analytic study that combined the effects of prior studies to estimate the overall effectiveness of different interventions that promote the development of executive functioning skills in children. The interventions included computerized and non-computerized training, physical exercise, art, and mindfulness exercises. However, the researchers could not conclude that art activities or physical activities improve executive functioning skills.

Supervisory attentional system (SAS)

Another conceptual model is the supervisory attentional system (SAS). In this model, contention scheduling is the process where an individual's well-established schemas automatically respond to routine situations while executive functions are used when faced with novel situations. In these new situations, attentional control will be a crucial element to help generate new schema, implement these schema, and then assess their accuracy.

Self-regulatory model

Russell Barkley proposed a widely known model of executive functioning that is based on self-regulation. Primarily derived from work examining behavioral inhibition, it views executive functions as composed of four main abilities. One element is working memory that allows individuals to resist interfering information. A second component is the management of emotional responses in order to achieve goal-directed behaviors. Thirdly, internalization of self-directed speech is used to control and sustain rule-governed behavior and to generate plans for problem-solving. Lastly, information is analyzed and synthesized into new behavioral responses to meet one's goals. Changing one's behavioral response to meet a new goal or modify an objective is a higher level skill that requires a fusion of executive functions including self-regulation, and accessing prior knowledge and experiences.

According to this model, the executive system of the human brain provides for the cross-temporal organization of behavior towards goals and the future and coordinates actions and strategies for everyday goal-directed tasks. Essentially, this system permits humans to self-regulate their behavior so as to sustain action and problem-solving toward goals specifically and the future more generally. Thus, executive function deficits pose serious problems for a person's ability to engage in self-regulation over time to attain their goals and anticipate and prepare for the future.

Teaching children self-regulation strategies is one way to improve their inhibitory control and cognitive flexibility; these skills allow children to manage their emotional responses. Such interventions include teaching children executive function-related skills, providing the steps necessary to apply them during classroom activities, and educating children on how to plan their actions before acting on them. Executive functioning skills are how the brain plans for and reacts to situations, and offering new self-regulation strategies allows children to improve these skills by practicing something new. Mindfulness practices, including biofeedback-enhanced relaxation, have also been shown to be significantly effective interventions for helping children self-regulate. These strategies support the growth of children's executive functioning skills.

Problem-solving model

Yet another model of executive functions is a problem-solving framework where executive functions are considered a macroconstruct composed of subfunctions working in different phases to (a) represent a problem, (b) plan for a solution by selecting and ordering strategies, (c) maintain the strategies in short-term memory in order to perform them by certain rules, and then (d) evaluate the results with error detection and error correction.
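The four phases of this framework can be illustrated with a small sketch. The toy problem, the `Strategy` type, and the cost-ordering rule are invented for illustration and are not part of the model itself:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Strategy:
    name: str
    cost: int
    apply: Callable[[List[int]], List[int]]

def solve(problem, strategies, is_solved):
    """Walk through the four phases: represent, plan, execute, evaluate."""
    state = list(problem)                             # (a) represent the problem
    plan = sorted(strategies, key=lambda s: s.cost)   # (b) plan: order strategies
    for strategy in plan:                             # (c) maintain and apply plan
        candidate = strategy.apply(state)
        if is_solved(candidate):                      # (d) evaluate: error detection
            return strategy.name, candidate
        state = candidate                             # error correction: keep trying
    return None, state

# Toy problem: put a scrambled list into ascending order.
strategies = [
    Strategy("reverse", cost=1, apply=lambda xs: xs[::-1]),
    Strategy("sort", cost=2, apply=lambda xs: sorted(xs)),
]
name, answer = solve([3, 1, 2], strategies, lambda xs: xs == sorted(xs))
print(name, answer)  # the cheap strategy fails; the costlier one succeeds
```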

Lezak's conceptual model

One of the most widespread conceptual models of executive functions is Lezak's model. This framework proposes four broad domains – volition, planning, purposive action, and effective performance – that work together to accomplish global executive functioning needs. While this model may appeal broadly to clinicians and researchers as a way to identify and assess executive functioning components, it lacks a distinct theoretical basis and has seen relatively few attempts at validation.

Miller and Cohen's model

In 2001, Earl Miller and Jonathan Cohen published their article "An integrative theory of prefrontal cortex function", in which they argue that cognitive control is the primary function of the prefrontal cortex (PFC), and that control is implemented by increasing the gain of sensory or motor neurons that are engaged by task- or goal-relevant elements of the external environment. In a key paragraph, they argue:

We assume that the PFC serves a specific function in cognitive control: the active maintenance of patterns of activity that represent goals and the means to achieve them. They provide bias signals throughout much of the rest of the brain, affecting not only visual processes but also other sensory modalities, as well as systems responsible for response execution, memory retrieval, emotional evaluation, etc. The aggregate effect of these bias signals is to guide the flow of neural activity along pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task.

Miller and Cohen draw explicitly upon an earlier theory of visual attention that conceptualises perception of visual scenes in terms of competition among multiple representations – such as colors, individuals, or objects. Selective visual attention acts to 'bias' this competition in favour of certain selected features or representations. For example, imagine that you are waiting at a busy train station for a friend who is wearing a red coat. You are able to selectively narrow the focus of your attention to search for red objects, in the hope of identifying your friend. Desimone and Duncan argue that the brain achieves this by selectively increasing the gain of neurons responsive to the color red, such that output from these neurons is more likely to reach a downstream processing stage, and, as a consequence, to guide behaviour. According to Miller and Cohen, this selective attention mechanism is in fact just a special case of cognitive control – one in which the biasing occurs in the sensory domain. According to Miller and Cohen's model, the PFC can exert control over input (sensory) or output (response) neurons, as well as over assemblies involved in memory, or emotion. Cognitive control is mediated by reciprocal PFC connectivity with the sensory and motor cortices, and with the limbic system. Within their approach, thus, the term "cognitive control" is applied to any situation where a biasing signal is used to promote task-appropriate responding, and control thus becomes a crucial component of a wide range of psychological constructs such as selective attention, error monitoring, decision-making, memory inhibition, and response inhibition.
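The core idea of gain biasing can be reduced to a few lines. The pathway names and numeric strengths below are invented for illustration, under the model's assumption that the automatic pathway is stronger at baseline:

```python
def respond(bias=None, gain=2.0):
    """Competition between two response pathways, resolved by a PFC bias.

    Word reading is the prepotent (stronger) pathway; a top-down bias
    multiplies the gain of the task-relevant pathway so the weaker
    color-naming pathway can win the competition. All numbers are toy values.
    """
    strengths = {
        "read_word": 1.0,   # overlearned, automatic pathway
        "name_color": 0.6,  # weaker, less practiced pathway
    }
    biased = {p: s * (gain if p == bias else 1.0) for p, s in strengths.items()}
    return max(biased, key=biased.get)  # strongest pathway drives behavior

print(respond())                    # no control signal: word reading wins
print(respond(bias="name_color"))   # biasing flips the outcome
```

Nothing here is inhibited directly; the bias simply amplifies the goal-relevant pathway until it out-competes the automatic one, which is the sense in which Miller and Cohen treat selective attention as a special case of control.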

Miyake and Friedman's model

Miyake and Friedman's theory of executive functions proposes that there are three aspects of executive functions: updating, inhibition, and shifting. A cornerstone of this theoretical framework is the understanding that individual differences in executive functions reflect both unity (i.e., common EF skills) and diversity of each component (e.g., shifting-specific). In other words, aspects of updating, inhibition, and shifting are related, yet each remains a distinct entity. First, updating is defined as the continuous monitoring and quick addition or deletion of contents within one's working memory. Second, inhibition is one's capacity to supersede responses that are prepotent in a given situation. Third, shifting is one's cognitive flexibility to switch between different tasks or mental states.

Miyake and Friedman also suggest that the current body of research on executive functions supports four general conclusions about these skills. The first is the unity and diversity of executive functions, as noted above. Second, recent studies suggest that EF skills are substantially heritable, as demonstrated in twin studies. Third, clean measures of executive functions can differentiate between normal and clinical or regulatory behaviors, such as those seen in ADHD. Last, longitudinal studies demonstrate that EF skills are relatively stable throughout development.

Banich's "cascade of control" model

This model from 2009 integrates theories from other models, and involves a sequential cascade of brain regions involved in maintaining attentional sets in order to arrive at a goal. In sequence, the model assumes the involvement of the posterior dorsolateral prefrontal cortex (DLPFC), the mid-DLPFC, and the posterior and anterior dorsal anterior cingulate cortex (ACC).

The cognitive task used in the article is selecting a response in the Stroop task, among conflicting color and word responses, specifically a stimulus where the word "green" is printed in red ink. The posterior DLPFC creates an appropriate attentional set, or rules for the brain to accomplish the current goal. For the Stroop task, this involves activating the areas of the brain involved in color perception, and not those involved in word comprehension. It counteracts biases and irrelevant information, like the fact that the semantic perception of the word is more salient to most people than the color in which it is printed.

Next, the mid-DLPFC selects the representation that will fulfill the goal. The task-relevant information must be separated from other sources of information in the task. In the example, this means focusing on the ink color and not the word.

The posterior dorsal ACC is next in the cascade, and it is responsible for response selection. This is where the decision is made whether the Stroop task participant will say "green" (the written word and the incorrect answer) or "red" (the font color and correct answer).

Following the response, the anterior dorsal ACC is involved in response evaluation, deciding whether the response was correct or incorrect. Activity in this region increases when the probability of an error is higher.

The activity of any of the areas involved in this model depends on the efficiency of the areas that came before it. If the DLPFC imposes a lot of control on the response, the ACC will require less activity.
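A minimal sketch of the cascade, using the Stroop trial described above. The stage-by-stage assignments follow the text, while the function and variable names are invented:

```python
def cascade_stroop(word, ink):
    """Pass a Stroop trial through the four stages of the cascade (toy model)."""
    # 1. Posterior DLPFC: impose the attentional set (the task rule).
    attentional_set = "ink_color"

    # 2. Mid-DLPFC: select the task-relevant representation.
    relevant = ink if attentional_set == "ink_color" else word
    irrelevant = word if attentional_set == "ink_color" else ink

    # 3. Posterior dorsal ACC: response selection, harder under conflict.
    conflict = relevant != irrelevant
    response = relevant

    # 4. Anterior dorsal ACC: response evaluation; error likelihood is
    #    higher on conflicting (incongruent) trials.
    error_risk = "high" if conflict else "low"
    return response, error_risk

print(cascade_stroop(word="green", ink="red"))    # incongruent trial
print(cascade_stroop(word="green", ink="green"))  # congruent trial
```

The sketch also hints at the model's trade-off: if the early DLPFC stages select the relevant dimension cleanly, less conflict survives to the ACC stages.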

Recent work using individual differences in cognitive style has provided strong support for this model. Researchers had participants complete an auditory version of the Stroop task, in which either the location or the semantic meaning of a directional word had to be attended to. Participants with a strong bias toward either spatial or semantic information (different cognitive styles) were then recruited for the task. As predicted, participants with a strong bias toward spatial information had more difficulty attending to the semantic information and showed increased electrophysiological activity from the ACC. A similar activity pattern was found for participants with a strong bias toward verbal information when they tried to attend to spatial information.

Assessment

Assessment of executive functions involves gathering data from several sources and synthesizing the information to look for trends and patterns across time and settings. Apart from standardized neuropsychological tests, other measures can and should be used, such as behaviour checklists, observations, interviews, and work samples. From these, conclusions may be drawn on the use of executive functions.

There are several different kinds of instruments (e.g., performance based, self-report) that measure executive functions across development. These assessments can serve a diagnostic purpose for a number of clinical populations.

Experimental evidence

The executive system has been traditionally quite hard to define, mainly due to what psychologist Paul W. Burgess calls a lack of "process-behaviour correspondence". That is, there is no single behavior that can in itself be tied to executive function, or indeed executive dysfunction. For example, it is quite obvious what reading-impaired patients cannot do, but it is not so obvious what exactly executive-impaired patients might be incapable of.

This is largely due to the nature of the executive system itself. It is mainly concerned with the dynamic, "online" co-ordination of cognitive resources, and, hence, its effect can be observed only by measuring other cognitive processes. In similar manner, it does not always fully engage outside of real-world situations. As neurologist Antonio Damasio has reported, a patient with severe day-to-day executive problems may still pass paper-and-pencil or lab-based tests of executive function.

Theories of the executive system were largely driven by observations of patients with frontal lobe damage. They exhibited disorganized actions and strategies for everyday tasks (a group of behaviors now known as dysexecutive syndrome) although they seemed to perform normally when clinical or lab-based tests were used to assess more fundamental cognitive functions such as memory, learning, language, and reasoning. It was hypothesized that, to explain this unusual behaviour, there must be an overarching system that co-ordinates other cognitive resources.

Much of the experimental evidence for the neural structures involved in executive functions comes from laboratory tasks such as the Stroop task or the Wisconsin Card Sorting Task (WCST). In the Stroop task, for example, human subjects are asked to name the color that color words are printed in when the ink color and word meaning conflict (for example, the word "RED" printed in green ink). Executive functions are needed to perform this task, as the relatively overlearned and automatic behaviour (word reading) has to be inhibited in favour of a less practiced task – naming the ink color. Recent functional neuroimaging studies have shown that two parts of the PFC, the anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC), are particularly important for performing this task.

Context-sensitivity of PFC neurons

Other evidence for the involvement of the PFC in executive functions comes from single-cell electrophysiology studies in non-human primates, such as the macaque monkey, which have shown that (in contrast to cells in the posterior brain) many PFC neurons are sensitive to a conjunction of a stimulus and a context. For example, PFC cells might respond to a green cue in a condition where that cue signals that a leftwards fast movement of the eyes and the head should be made, but not to a green cue in another experimental context. This is important, because the optimal deployment of executive functions is invariably context-dependent.

One example from Miller & Cohen involves a pedestrian crossing the street. In the United States, where cars drive on the right side of the road, an American learns to look left when crossing the street. However, if that American visits a country where cars drive on the left, such as the United Kingdom, then the opposite behavior would be required (looking to the right). In this case, the automatic response needs to be suppressed (or augmented) and executive functions must make the American look to the right while in the UK.

Neurologically, this behavioural repertoire clearly requires a neural system that is able to integrate the stimulus (the road) with a context (US or UK) to cue a behaviour (look left or look right). Current evidence suggests that neurons in the PFC appear to represent precisely this sort of information. Other evidence from single-cell electrophysiology in monkeys implicates ventrolateral PFC (inferior prefrontal convexity) in the control of motor responses. For example, cells that increase their firing rate to NoGo signals as well as a signal that says "don't look there!" have been identified.

Attentional biasing in sensory regions

Electrophysiology and functional neuroimaging studies involving human subjects have been used to describe the neural mechanisms underlying attentional biasing. Most studies have looked for activation at the 'sites' of biasing, such as in the visual or auditory cortices. Early studies employed event-related potentials to reveal that electrical brain responses recorded over left and right visual cortex are enhanced when the subject is instructed to attend to the appropriate (contralateral) side of space.

The advent of bloodflow-based neuroimaging techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) has more recently made it possible to demonstrate that neural activity in a number of sensory regions, including color-, motion-, and face-responsive regions of visual cortex, is enhanced when subjects are directed to attend to that dimension of a stimulus, suggestive of gain control in sensory neocortex. For example, in a typical study, Liu and coworkers presented subjects with arrays of dots moving to the left or right, presented in either red or green. Preceding each stimulus, an instruction cue indicated whether subjects should respond on the basis of the colour or the direction of the dots. Even though colour and motion were present in all stimulus arrays, fMRI activity in colour-sensitive regions (V4) was enhanced when subjects were instructed to attend to the colour, and activity in motion-sensitive regions was increased when subjects were cued to attend to the direction of motion. Several studies have also reported evidence for the biasing signal prior to stimulus onset, with the observation that regions of the frontal cortex tend to become active prior to the onset of an expected stimulus.

Connectivity between the PFC and sensory regions

Despite the growing currency of the 'biasing' model of executive functions, direct evidence for functional connectivity between the PFC and sensory regions when executive functions are used is to date rather sparse. Indeed, the only direct evidence comes from studies in which a portion of frontal cortex is damaged and a corresponding effect is observed far from the lesion site, in the responses of sensory neurons. However, few studies have explored whether this effect is specific to situations where executive functions are required. Other methods for measuring connectivity between distant brain regions, such as correlation in the fMRI response, have yielded indirect evidence that the frontal cortex and sensory regions communicate during a variety of processes thought to engage executive functions, such as working memory, but more research is required to establish how information flows between the PFC and the rest of the brain when executive functions are used. As an early step in this direction, an fMRI study on the flow of information processing during visuospatial reasoning has provided evidence for causal associations (inferred from the temporal order of activity) between sensory-related activity in occipital and parietal cortices and activity in posterior and anterior PFC. Such approaches can further elucidate the distribution of processing between executive functions in PFC and the rest of the brain.

Bilingualism and executive functions

A growing body of research suggests that bilinguals may show advantages in executive functions, specifically inhibitory control and task switching. A possible explanation for this is that speaking two languages requires controlling one's attention and choosing the correct language to speak. Across development, bilingual infants, children, and older adults show a bilingual advantage when it comes to executive functioning. The advantage does not seem to manifest in younger adults. Bimodal bilinguals, or people who speak one oral language and one sign language, do not demonstrate this bilingual advantage in executive functioning tasks. This may be because one is not required to actively inhibit one language in order to speak the other. Bilingual individuals also seem to have an advantage in an area known as conflict processing, which occurs when there are multiple representations of one particular response (for example, a word in one language and its translation in the individual's other language). Specifically, the lateral prefrontal cortex has been shown to be involved with conflict processing. However, there are still some doubts. In a meta-analytic review, researchers concluded that bilingualism did not enhance executive functioning in adults.

In disease or disorder

The study of executive function in Parkinson's disease suggests subcortical areas such as the amygdala, hippocampus and basal ganglia are important in these processes. Dopamine modulation of the prefrontal cortex is responsible for the efficacy of dopaminergic drugs on executive function, and gives rise to the Yerkes–Dodson curve. The inverted U represents decreased executive functioning with excessive arousal (or increased catecholamine release during stress) and with insufficient arousal. The low activity polymorphism of catechol-O-methyltransferase is associated with a slight increase in performance on executive function tasks in healthy persons. Executive functions are impaired in multiple disorders including anxiety disorder, major depressive disorder, bipolar disorder, attention deficit hyperactivity disorder, schizophrenia and autism. Lesions to the prefrontal cortex, such as in the case of Phineas Gage, may also result in deficits of executive function. Damage to these areas may also manifest in deficits of other areas of function, such as motivation and social functioning.
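The inverted-U relationship described above can be illustrated with a toy function of arousal. This is a minimal sketch for intuition only: the Gaussian shape, the 0–1 arousal scale, and the parameter values are arbitrary assumptions chosen for demonstration, not a model or fitted values from the literature.

```python
import math

def performance(arousal, optimum=0.5, width=0.2):
    """Hypothetical executive-function performance as a function of
    arousal (both scaled 0-1): highest near a moderate optimum, and
    falling off with either insufficient or excessive arousal."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

low, mid, high = performance(0.1), performance(0.5), performance(0.9)
print(mid > low and mid > high)  # moderate arousal yields the peak
```

Running the sketch prints `True`: performance peaks at moderate arousal and is reduced symmetrically at the two extremes, which is the qualitative shape the inverted U describes.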

Future directions

Other important evidence for executive function processes in the prefrontal cortex has been described. One widely cited review article emphasizes the role of the medial part of the PFC in situations where executive functions are likely to be engaged – for example, where it is important to detect errors, identify situations where stimulus conflict may arise, make decisions under uncertainty, or when a reduced probability of obtaining favourable performance outcomes is detected. This review, like many others, highlights interactions between medial and lateral PFC, whereby posterior medial frontal cortex signals the need for increased executive functions and sends this signal on to areas in dorsolateral prefrontal cortex that actually implement control. Yet there has been no compelling evidence that this view is correct, and, indeed, one article showed that patients with lateral PFC damage had reduced ERNs (a putative sign of dorsomedial monitoring/error-feedback) – suggesting, if anything, that the flow of control could run in the reverse direction. Another prominent theory emphasises interactions along the perpendicular axis of the frontal cortex, arguing that a 'cascade' of interactions between anterior PFC, dorsolateral PFC, and premotor cortex guides behaviour in accordance with past context, present context, and current sensorimotor associations, respectively.

Advances in neuroimaging techniques have allowed studies of genetic links to executive functions, with the goal of using the imaging techniques as potential endophenotypes for discovering the genetic causes of executive function.

More research is required to develop interventions that can improve executive functions and help people generalize those skills to daily activities and settings.

Will (philosophy)

From Wikipedia, the free encyclopedia

Will, within philosophy, is a faculty of the mind. Will is important as one of the parts of the mind, along with reason and understanding. It is considered central to the field of ethics because of its role in enabling deliberate action.

A recurring question in Western philosophical tradition is about free will—and the related, but more general notion of fate—which asks how the will can truly be free if a person's actions have either natural or divine causes determining them. In turn, this is directly connected to discussions on the nature of freedom and to the problem of evil.

Classical philosophy

The classical treatment of the ethical importance of will is to be found in the Nicomachean Ethics of Aristotle, in Books III (chapters 1–5), and Book VII (chapters 1–10). These discussions have been a major influence in the development of ethical and legal thinking in Western civilization.

In Book III Aristotle divided actions into three categories instead of two:

  • Voluntary (ekousion) acts.
  • Involuntary or unwilling (akousion) acts, which, in the simplest case, are acts that people neither praise nor blame. In such cases a person does not choose the wrong thing, for example if the wind carries a person off, or if a person has a wrong understanding of the particular facts of a situation. Note that ignorance of what aims are good and bad, such as people of bad character always have, is not something people typically excuse as ignorance in this sense. "Acting on account of ignorance seems different from acting while being ignorant".
  • "Non-voluntary" or "non willing" actions (ouk ekousion) which are bad actions done by choice, or more generally (as in the case of animals and children when desire or spirit causes an action) whenever "the source of the moving of the parts that are instrumental in such actions is in oneself" and anything "up to oneself either to do or not". However, these actions are not taken because they are preferred in their own right, but rather because all options available are worse.

It is concerning this third class of actions that there is doubt about whether they should be praised or blamed or condoned in different cases.

Virtue and vice, according to Aristotle, are "up to us". This means that although no one is willingly unhappy, vice by definition always involves actions which were decided upon willingly. Vice comes from bad habits and aiming at the wrong things, not deliberately aiming to be unhappy. The vices then, are voluntary just as the virtues are. He states that people would have to be unconscious not to realize the importance of allowing themselves to live badly, and he dismisses any idea that different people have different innate visions of what is good.

In Book VII, Aristotle discusses self-mastery, or the difference between what people decide to do, and what they actually do. For Aristotle, akrasia, "unrestraint", is distinct from animal-like behavior because it is specific to humans and involves conscious rational thinking about what to do, even though the conclusions of this thinking are not put into practice. When someone behaves in a purely animal-like way, then for better or worse they are not acting based upon any conscious choice.

Aristotle also addresses a few questions raised earlier, on the basis of what he has explained:

  • Not everyone who stands firm on the basis of a rational and even correct decision has self-mastery. Stubborn people are actually more like people without self-mastery, because they are partly led by the pleasure coming from victory.
  • Not everyone who fails to stand firm on the basis of his best deliberations has a true lack of self mastery. As an example he gives the case of Neoptolemus (in Sophocles' Philoctetes) refusing to lie despite being part of a plan he agreed with.
  • A person with practical wisdom (phronesis) cannot have akrasia. It might sometimes seem otherwise, because a merely clever person can sometimes recite words that make them sound wise, like an actor or a drunk person reciting poetry. A person lacking self-mastery can have knowledge, but not an active knowledge to which they are paying attention. For example, when someone is in a state such as being drunk or enraged, people may have knowledge, and even show that they have that knowledge, like an actor, but not be using it.

Medieval-European philosophy

Inspired by Islamic philosophers Avicenna and Averroes, Aristotelian philosophy became part of a standard approach to all legal and ethical discussion in Europe by the time of Thomas Aquinas. His philosophy can be seen as a synthesis of Aristotle and early Christian doctrine as formulated by Boethius and Augustine of Hippo, although sources such as Maimonides and Plato and the aforementioned Muslim scholars are also cited.

With the use of Scholasticism, Thomas Aquinas's Summa Theologica offers a structured treatment of the concept of will. A very simple representation of this treatment may look like this:

  • Does the will desire nothing? (No.)
  • Does it desire all things of necessity, whatever it desires? (No.)
  • Is it a higher power than the intellect? (No.)
  • Does the will move the intellect? (Yes.)
  • Is the will divided into irascible and concupiscible? (No.)

This is related to the following points on free will:

  • Does man have free-will? (Yes.)
  • What is free-will—a power, an act, or a habit? (A power.)
  • If it is a power, is it appetitive or cognitive? (Appetitive.)
  • If it is appetitive, is it the same power as the will, or distinct? (The same, with contingencies).

Early-modern philosophy

The use of English in philosophical publications began in the early modern period, and therefore the English word "will" became a term used in philosophical discussion. During this same period, Scholasticism, which had largely been a Latin language movement, was heavily criticized. Both Francis Bacon and René Descartes described the human intellect or understanding as something limited and in need of a methodical and skeptical approach to learning about nature. Bacon emphasized the importance of analyzing experience in an organized way, for example experimentation, while Descartes, seeing the success of Galileo in using mathematics in physics, emphasized the role of methodical reasoning as in mathematics and geometry. Descartes specifically said that error comes about because the will is not limited to judging only those things which the understanding grasps, and described the possibility of judging or choosing things ignorantly, without understanding them, as free will. The Dutch theologian Jacobus Arminius considered the freedom of the human will to consist in working toward individual salvation, with constrictions arising from the passions a person holds. Augustine called the will "the mother and guardian of all virtues".

Under the influence of Bacon and Descartes, Thomas Hobbes made one of the first attempts to systematically analyze ethical and political matters in a modern way. He defined will in his Leviathan Chapter VI, in words which explicitly criticize the medieval scholastic definitions:

In deliberation, the last appetite, or aversion, immediately adhering to the action, or to the omission thereof, is that we call the will; the act, not the faculty, of willing. And beasts that have deliberation, must necessarily also have will. The definition of the will, given commonly by the Schools, that it is a rational appetite, is not good. For if it were, then could there be no voluntary act against reason. For a voluntary act is that, which proceedeth from the will, and no other. But if instead of a rational appetite, we shall say an appetite resulting from a precedent deliberation, then the definition is the same that I have given here. Will therefore is the last appetite in deliberating. And though we say in common discourse, a man had a will once to do a thing, that nevertheless he forbore to do; yet that is properly but an inclination, which makes no action voluntary; because the action depends not of it, but of the last inclination, or appetite. For if the intervenient appetites, make any action voluntary; then by the same reason all intervenient aversions, should make the same action involuntary; and so one and the same action, should be both voluntary and involuntary.

By this it is manifest, that not only actions that have their beginning from covetousness, ambition, lust, or other appetites to the thing propounded; but also those that have their beginning from aversion, or fear of those consequences that follow the omission, are voluntary actions.

Concerning "free will", most early modern philosophers, including Hobbes, Spinoza, Locke and Hume believed that the term was frequently used in a wrong or illogical sense, and that the philosophical problems concerning any difference between "will" and "free will" are due to verbal confusion (because all will is free):

a FREEMAN, is he, that in those things, which by his strength and wit he is able to do, is not hindered to do what he has a will to. But when the words free, and liberty, are applied to any thing but bodies, they are abused; for that which is not subject to motion, is not subject to impediment: and therefore, when it is said, for example, the way is free, no liberty of the way is signified, but of those that walk in it without stop. And when we say a gift is free, there is not meant any liberty of the gift, but of the giver, that was not bound by any law or covenant to give it. So when we speak freely, it is not the liberty of voice, or pronunciation, but of the man, whom no law hath obliged to speak otherwise than he did. Lastly, from the use of the word free-will, no liberty can be inferred of the will, desire, or inclination, but the liberty of the man; which consisteth in this, that he finds no stop, in doing what he has the will, desire, or inclination to do."

Spinoza argues that seemingly "free" actions aren't actually free, or that the entire concept is a chimera because "internal" beliefs are necessarily caused by earlier external events. The appearance of the internal is a mistake rooted in ignorance of causes, not in an actual volition, and therefore the will is always determined. Spinoza also rejects teleology, and suggests that the causal nature along with an originary orientation of the universe is everything we encounter.

Some generations later, David Hume made a very similar point to Hobbes in other words:

But to proceed in this reconciling project with regard to the question of liberty and necessity; the most contentious question of metaphysics, the most contentious science; it will not require many words to prove, that all mankind have ever agreed in the doctrine of liberty as well as in that of necessity, and that the whole dispute, in this respect also, has been hitherto merely verbal. For what is meant by liberty, when applied to voluntary actions? We cannot surely mean that actions have so little connexion with motives, inclinations, and circumstances, that one does not follow with a certain degree of uniformity from the other, and that one affords no inference by which we can conclude the existence of the other. For these are plain and acknowledged matters of fact. By liberty, then, we can only mean a power of acting or not acting, according to the determinations of the will; that is, if we choose to remain at rest, we may; if we choose to move, we also may. Now this hypothetical liberty is universally allowed to belong to every one who is not a prisoner and in chains. Here, then, is no subject of dispute.

Rousseau

Jean-Jacques Rousseau, A Philosopher of the General Will

Jean-Jacques Rousseau added a new type of will to those discussed by philosophers, which he called the "General will" (volonté générale). This concept developed from Rousseau's considerations on the social contract theory of Hobbes, and describes the shared will of a whole citizenry, whose agreement is understood to exist in discussions about the legitimacy of governments and laws.

The general will consists of a group of people who believe they are in unison, having one will concerned with their collective well-being. In this group, people maintain their autonomy to think and act for themselves—to the concern of libertarians such as John Locke, David Hume, Adam Smith, and Immanuel Kant, who emphasize individuality and a separation between "public and private spheres of life." Nonetheless, they also think on behalf of the community of which they are a part.

This group creates the social compact, which is supposed to give voice to cooperation, interdependence, and reciprocal activity. As a result of the general will being expressed in the social contract, the citizens of the community that composes the general will consent to all laws, even those they disagree with, or those meant to punish them if they disobey the law; the aim of the general will is to guide all of them in social and political life. This, in other words, makes the general will consistent amongst the members of the state, implying that every single one of them has citizenship and freedom as long as they consent to a set of norms and beliefs that promote equality, the common welfare, and an absence of servitude.

The House of Commons Voting on the Family of Action Plan in Budapest, Hungary. This would be an example of the general will espoused by Rousseau.

According to Thompson, the general will has three rules that have to be obeyed in order for the general will to function as intended: (1) the rule of equality—no unequal duties are to be placed upon any other community member for one's personal benefit or for that of the community; (2) the rule of generality—the general will's end must be applicable to the shared needs of citizens, and all the members' interests are to be accounted for; (3) the rule of non-servitude—no one has to relinquish themselves to any other member of the community, corporation, or individual, nor do they have to be subordinate to the mentioned community's, corporation's, or individuals' interests or wills.

Nonetheless, there are ways in which the general will can fail, as Rousseau mentioned in The Social Contract. If the will does not produce a consensus amongst a majority of its members, but has a minority consensus instead, then liberty is not feasible. Also, the general will is weakened when altruistic interests become egoistical, which manifests in debates, further prompting the citizenry not to participate in government, while bills directed toward egotistical interests get ratified as "laws." This leads into the distinction between the will of all versus the general will: the former looks after the interests of oneself or of a certain faction, whereas the latter looks out for the interests of society as a whole.

Although Rousseau believes that the general will is beneficial, there are those in the libertarian camp who assert that the will of the individual trumps that of the whole. For instance, G. W. F. Hegel criticized Rousseau's general will, in that it could lead to tension. This tension, in Hegel's view, is that between the general will and the subjective particularity of the individual. Here is the problem: when one consents to the general will, individuality is lost, as one must be able to consent to things on behalf of the populace; but, paradoxically, when the general will is in action, impartiality is lost, because the general will conforms to one course of action alone, that consented to by the populace.

Another problem that Hegel puts forth is one of arbitrary contingency. For Hegel, the problem is "the difference that action implies," in which a doer's description of an action varies from that of others, and the question arises, "Who [chooses] which [action] description is appropriate?" To Rousseau, the majority is where the general will resides, but to Hegel that is arbitrary. Hegel's solution is to find universality in society's institutions—this implies that a decision, a rule, etc. must be understandable and that the reasoning behind it cannot rest on majority rule over the minority alone. Universality in society's institutions is found by reflecting on historical progress: the general will at present is part of a development continued and improved from history. Such universality allows the participants composing the general will to determine how they fit into an equal community with others without subjecting themselves to an arbitrary force, judging themselves in retrospect of what their antecedents have and have not done, in order to form an equal community that is not ruled arbitrarily.

Besides Hegel, another philosopher who differed from the Rousseauian idea of the general will was John Locke. Locke, though a social contractarian, believed that individualism was crucial for society, inspired by his reading of Cicero's On Duties, in which Cicero proclaimed that all people "desire preeminence and are consequently reluctant to subject themselves to others." Cicero also mentioned how every person is unique in a special way; therefore, people should "accept and tolerate these differences, treating all with consideration and upholding the [dignity]... of each." In addition, Locke was inspired by Cicero's idea, from the same work, of rationally pursuing one's self-interest: Locke wrote that people have a duty to maximize their personal good while not harming that of their neighbor. Another influence was Sir Francis Bacon, from whom Locke took up, and then spread, the ideas of "freedom of thought and expression" and of having "a... questioning attitude towards authority" one is under and the opinions one receives.

John Locke: A Philosopher with a Social Contractarian View Similar to Rousseau

For Locke, land, money, and labor were important parts of his political ideas. Land was the source of all other products that people conceived as property. Because there is land, money can give property a varying value, and labor begins. To Locke, labor is an extension of a person, because the laborer uses his body and hands in crafting the object, to which he or she alone then has a right, barring others from having the same. Nonetheless, land is not possessed by the owner one hundred percent of the time. This is a result of a "fundamental law of nature, the preservation of society...takes precedence over self-preservation."

In Locke's Second Treatise, the purpose of government is to protect its citizens' "life, liberty, and property"—these he conceived as people's natural rights. He conceived a legislature as the top sector in power, beholden to the people, with means of enforcing its laws against transgressors, and with law left discretionary where it did not clarify, all for the common good. As a part of his political philosophy, Locke believed in consent to governmental rule at the individual level, similar to Rousseau, as long as it served the common good, in obedience to law and natural law. Furthermore, Locke advocated freedom of expression and thought, and religious toleration, as allowing commerce and the economy to prosper. In other words, Locke believed in the common good of society, but also in certain natural rights that a government is bound to protect in the course of maintaining law and order: the mentioned life, liberty, and property.

Kant

Immanuel Kant: The Philosopher who Conceived the Will being Guided by Laws and Maxims

Immanuel Kant's theory of the will consists of the will being guided subjectively by maxims and objectively via laws. The former, maxims, are subjective precepts. Laws, on the other hand, are objective, apprehended a priori—prior to experience. In other words, Kant's belief in the a priori proposes that the will is subject to a practical law prior to experience—this is, according to Kant in the Critique of Practical Reason, when the law is seen as "valid for the will of every rational being"; such laws are also termed "universal laws".

Nonetheless, there is a hierarchy of what covers a person individually versus a group of people. Specifically, laws determine the will to conform to the maxims before experience is had on behalf of the subject in question. Maxims, as mentioned, only deal with what one subjectively considers.

This hierarchy exists as a result of a universal law constituted of multi-faceted parts from various individuals (people's maxims) not being feasible.

Because maxims are guided by the universal law, an individual's will is free. Kant's theory of the will does not advocate determinism, on the ground that the laws of nature on which determinism is based leave an individual only one course of action—whatever nature's prior causes trigger the individual to do. On the other hand, Kant's categorical imperative provides "objective oughts", which exert influence over us a priori if we have the power to accept or defy them. Nonetheless, if we do not have the opportunity to decide between the right and the wrong option in regard to the universal law, in the course of which our will is free, then natural causes have led us to one decision without any alternative options.

There are some objections posited against Kant's view. For instance, in Kohl's essay "Kant on Determinism and the Categorical Imperative", there is the question of the imperfect will: what of an agent whose will compels her to obey the universal law, but without "recognizing the law's force of reason"? To this, Kant would describe the agent's will as "impotent rather than... imperfect since... the right reasons cannot [compel] her to act."

John Stuart Mill: A Philosopher Who Conceived of a Utilitarian View of the Will, More Pleasure Over Pain

Besides the objections in Kohl's essay, John Stuart Mill had another version of the will, as written in his book Utilitarianism. Mill, as his ethical theory runs, proposes that the will operates in the same fashion, that is, following the greatest happiness principle: actions are morally right as long as they promote happiness and morally wrong if they promote pain. The will is demonstrated when someone carries out their goals without pleasure incentivizing either their contemplation or the end of fulfilling them, and he or she continues to act according to those goals even if the emotions felt at the beginning have decreased over time, whether from changes in personality or desires, or because the goals have become counterbalanced by the pains of trying to fulfill them. Also, Mill mentioned that the process of using one's will can become unnoticeable. This is a consequence of habit making volition—the act "of choosing or determining"—second nature. Sometimes, using the will, according to Mill, becomes so habitual that it opposes any deliberate contemplation of one's options. This, he believes, is commonplace among those who have sinister, harmful habits.

Although the will can seem to become second nature because of habit, that is not always the case, since habit is changeable by the will, and the "will is [changeable] to habit." This could happen when one wills away from habit what he or she no longer desires, or when one wills oneself to desire something. In the case of someone who does not have a virtuous will, Mill recommends making that individual "desire virtue". By this, Mill means desiring virtue for the pleasure it brings over the pain that its absence would bring, in accordance with the greatest happiness principle: actions are morally right as long as they promote happiness and morally wrong if they promote pain. Then, one has to routinely "will what is right" in order to make one's will instrumental in achieving more pleasure than pain.

Schopenhauer

Schopenhauer disagreed with Kant's critics and stated that it is absurd to assume that phenomena have no basis. Schopenhauer proposed that we cannot know the thing in itself as though it is a cause of phenomena. Instead, he said that we can know it by knowing our own body, which is the only thing that we can know at the same time as both a phenomenon and a thing in itself.

When we become conscious of ourselves, we realize that our essential qualities are endless urging, craving, striving, wanting, and desiring. These are characteristics of that which we call our will. Schopenhauer affirmed that we can legitimately think that all other phenomena are also essentially and basically will. According to him, will "is the innermost essence, the kernel, of every particular thing and also of the whole. It appears in every blindly acting force of nature, and also in the deliberate conduct of man…." Schopenhauer said that his predecessors mistakenly thought that the will depends on knowledge. According to him, though, the will is primary and uses knowledge in order to find an object that will satisfy its craving. That which, in us, we call will is Kant's "thing in itself", according to Schopenhauer.

Arthur Schopenhauer put the puzzle of free will and moral responsibility in these terms:

Everyone believes himself a priori to be perfectly free, even in his individual actions, and thinks that at every moment he can commence another manner of life ... But a posteriori, through experience, he finds to his astonishment that he is not free, but subjected to necessity, that in spite of all his resolutions and reflections he does not change his conduct, and that from the beginning of his life to the end of it, he must carry out the very character which he himself condemns...

In his On the Freedom of the Will, Schopenhauer stated, "You can do what you will, but in any given moment of your life you can will only one definite thing and absolutely nothing other than that one thing."

Nietzsche

Friedrich Wilhelm Nietzsche was influenced by Schopenhauer when younger, but later felt him to be wrong. However, he maintained a modified focus upon will, making the term "will to power" famous as an explanation of human aims and actions.

Psychology/Psychiatry

Psychologists also deal with issues of will and "willpower", the ability to enact will in behavior; some people are highly intrinsically motivated and do whatever seems best to them, while others are "weak-willed" and easily suggestible (extrinsically motivated) by society or outward inducement. Apparent failures of will and volition have also been reported in association with a number of mental and neurological disorders. Psychologists also study the phenomenon of akrasia, wherein people seemingly act against their best interests while knowing that they are doing so (for instance, resuming cigarette smoking after having intellectually decided to quit). Advocates of Sigmund Freud's psychology stress the influence of the unconscious mind upon the apparent conscious exercise of will. Abraham Low, a critic of psychoanalysis, stressed the importance of will, the ability to control thoughts and impulses, as fundamental for achieving mental health.

Categorical imperative

From Wikipedia, the free encyclopedia

The categorical imperative (German: kategorischer Imperativ) is the central philosophical concept in the deontological moral philosophy of Immanuel Kant. Introduced in Kant's 1785 Groundwork of the Metaphysics of Morals, it is a way of evaluating motivations for action. It is best known in its original formulation: "Act only according to that maxim whereby you can at the same time will that it should become a universal law."

According to Kant, rational beings occupy a special place in creation, and morality can be summed up in an imperative, or ultimate commandment of reason, from which all duties and obligations derive. He defines an imperative as any proposition declaring a certain action (or inaction) to be necessary. Hypothetical imperatives apply to someone who wishes to attain certain ends. For example, "I must drink something to quench my thirst" or "I must study to pass this exam." A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that must be obeyed in all circumstances and is justified as an end in itself.

Kant expressed extreme dissatisfaction with the popular moral philosophy of his day, believing that it could never surpass the level of hypothetical imperatives: a utilitarian says that murder is wrong because it does not maximize good for those involved, but this is irrelevant to people who are concerned only with maximizing the positive outcome for themselves. Consequently, Kant argued, hypothetical moral systems cannot persuade moral action or be regarded as bases for moral judgments against others, because the imperatives on which they are based rely too heavily on subjective considerations. He presented a deontological moral system, based on the demands of the categorical imperative, as an alternative.

Outline

Pure practical reason

The capacity that underlies deciding what is moral is called pure practical reason, which is contrasted with: pure reason, which is the capacity to know without having been shown; and mere practical reason, which allows us to interact with the world in experience.

Hypothetical imperatives tell us which means best achieve our ends. They do not, however, tell us which ends we should choose. The typical dichotomy in choosing ends is between ends that are right (e.g., helping someone) and those that are good (e.g., enriching oneself). Kant considered the right superior to the good; to him, the latter was morally irrelevant. In Kant's view, a person cannot decide whether conduct is right, or moral, through empirical means. Such judgments must be reached a priori, using pure practical reason.

Which actions count as moral is determined universally by the categorical imperative, apart from observable experience. This requirement, that each action be justified a priori rather than by empirical observation, has had wide social impact on the legal and political concepts of human rights and equality.

Possibility

People see themselves as belonging to both the world of understanding and the world of sense. As a member of the world of understanding, a person's actions would always conform to the autonomy of the will. As a part of the world of sense, he would necessarily fall under the natural law of desires and inclinations. However, since the world of understanding contains the ground of the world of sense, and thus of its laws, his actions ought to conform to the autonomy of the will, and this categorical "ought" represents a synthetic proposition a priori.

Freedom and autonomy

Kant viewed the human individual as a rationally self-conscious being with "impure" freedom of choice:

The faculty of desire in accordance with concepts, in-so-far as the ground determining it to action lies within itself and not in its object, is called a faculty to "do or to refrain from doing as one pleases". Insofar as it is joined with one's consciousness of the ability to bring about its object by one's action it is called choice (Willkür); if it is not joined with this consciousness its act is called a wish. The faculty of desire whose inner determining ground, hence even what pleases it, lies within the subject's reason is called the will (Wille). The will is therefore the faculty of desire considered not so much in relation to action (as choice is) but rather in relation to the ground determining choice in action. The will itself, strictly speaking, has no determining ground; insofar as it can determine choice, it is instead practical reason itself. Insofar as reason can determine the faculty of desire as such, not only choice but also mere wish can be included under the will. That choice which can be determined by pure reason is called free choice. That which can be determined only by inclination (sensible impulse, stimulus) would be animal choice (arbitrium brutum). Human choice, however, is a choice that can indeed be affected but not determined by impulses, and is therefore of itself (apart from an acquired proficiency of reason) not pure but can still be determined to actions by pure will.

— Immanuel Kant, Metaphysics of Morals 6:213–4

For a will to be considered free, we must understand it as capable of affecting causal power without being caused to do so. However, the idea of lawless free will, meaning a will acting without any causal structure, is incomprehensible. Therefore, a free will must be acting under laws that it gives to itself.

Although Kant conceded that there could be no conceivable example of free will, because any example would only show us a will as it appears to us (as a subject of natural laws), he nevertheless argued against determinism. He held that the determinist argument is circular: the determinist claims that because A caused B, and B caused C, A is the true cause of C. Applied to the human will, a determinist would argue that the will has no causal power and that something outside the will causes the will to act as it does. But this argument merely assumes what it sets out to prove: viz. that the human will is part of the causal chain.

Secondly, Kant remarks that free will is inherently unknowable. Since even a free person could not possibly have knowledge of their own freedom, we cannot use our failure to find a proof for freedom as evidence for a lack of it. The observable world could never contain an example of freedom because it would never show us a will as it appears to itself, but only a will that is subject to natural laws imposed on it. But we do appear to ourselves as free. Therefore, he argued for the idea of transcendental freedom—that is, freedom as a presupposition of the question "what ought I to do?" This is what gives us sufficient basis for ascribing moral responsibility: the rational and self-actualizing power of a person, which he calls moral autonomy: "the property the will has of being a law unto itself."

First formulation: Universality and the law of nature

Act only according to that maxim whereby you can at the same time will that it should become a universal law.

— Immanuel Kant, Groundwork of the Metaphysics of Morals

Kant concludes that a moral proposition that is true must be one that is not tied to any particular conditions, including the identity and desires of the person making the moral deliberation.

A moral maxim must imply absolute necessity, which is to say that it must be disconnected from the particular physical details surrounding the proposition, and could be applied to any rational being. This leads to the first formulation of the categorical imperative, sometimes called the principle of universalizability: "Act only according to that maxim whereby you can at the same time will that it should become a universal law."

Closely connected with this formulation is the law of nature formulation. Because laws of nature are by definition universal, Kant claims we may also express the categorical imperative as:

Act as if the maxims of your action were to become through your will a universal law of nature.

Kant divides the duties imposed by this formulation into two sets of two subsets. The first division is between duties that we have to ourselves versus those we have to others. For example, we have an obligation not to kill ourselves as well as an obligation not to kill others. Kant also, however, introduces a distinction between perfect and imperfect duties.

Perfect duty

According to Kant's reasoning, we first have a perfect duty not to act by maxims that result in logical contradictions when we attempt to universalize them. The moral proposition A: "It is permissible to steal" would result in a contradiction upon universalisation. The notion of stealing presupposes the existence of personal property, but were A universalized, then there could be no personal property, and so the proposition has logically negated itself.

In general, perfect duties are those that are blameworthy if not met, as they are a basic required duty for a human being.

Imperfect duty

Second, we have imperfect duties, which are still based on pure reason, but which allow for desires in how they are carried out in practice. Because these depend somewhat on the subjective preferences of humankind, this duty is not as strong as a perfect duty, but it is still morally binding. As such, unlike perfect duties, you do not attract blame should you not complete an imperfect duty but you shall receive praise for it should you complete it, as you have gone beyond the basic duties and taken duty upon yourself. Imperfect duties are circumstantial, meaning simply that you could not reasonably exist in a constant state of performing that duty. This is what truly differentiates between perfect and imperfect duties, because imperfect duties are those duties that are never truly completed. A particular example provided by Kant is the imperfect duty to cultivate one's own talents.

Second formulation: Humanity

Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.

— Immanuel Kant, Groundwork of the Metaphysics of Morals

Every rational action must set before itself not only a principle, but also an end. Most ends are of a subjective kind, because they need only be pursued if they are in line with some particular hypothetical imperative that a person may choose to adopt. For an end to be objective, it would be necessary that we categorically pursue it.

The free will is the source of all rational action. But to treat it as a subjective end is to deny the possibility of freedom in general. Because the autonomous will is the one and only source of moral action, it would contradict the first formulation to claim that a person is merely a means to some other end, rather than always an end in themselves.

On this basis, Kant derives the second formulation of the categorical imperative from the first.

By combining this formulation with the first, we learn that a person has perfect duty not to use the humanity of themselves or others merely as a means to some other end. As a slave owner would be effectively asserting a moral right to own a person as a slave, they would be asserting a property right in another person. This would violate the categorical imperative, because it denies the basis for there to be free rational action at all; it denies the status of a person as an end in themselves. One cannot, on Kant's account, ever suppose a right to treat another person as a mere means to an end. In the case of a slave owner, the slaves are being used to cultivate the owner's fields (the slaves acting as the means) to ensure a sufficient harvest (the end goal of the owner).

The second formulation also leads to the imperfect duty to further the ends of ourselves and others. If any person desires perfection in themselves or others, it would be their moral duty to seek that end for all people equally, so long as that end does not contradict perfect duty.

Third formulation: Autonomy

Thus the third practical principle follows [from the first two] as the ultimate condition of their harmony with practical reason: the idea of the will of every rational being as a universally legislating will.

— Immanuel Kant, Groundwork of the Metaphysics of Morals

Kant claims that the first formulation lays out the objective conditions on the categorical imperative: that it be universal in form and thus capable of becoming a law of nature. Likewise, the second formulation lays out subjective conditions: that there be certain ends in themselves, namely rational beings as such. The result of these two considerations is that we must will maxims that can be at the same time universal, but which do not infringe on the freedom of ourselves nor of others. A universal maxim, however, could only have this form if it were a maxim that each subject by himself endorsed. Because it cannot be something which externally constrains each subject's activity, it must be a constraint that each subject has set for himself. This leads to the concept of self-legislation. Each subject must through his own use of reason will maxims which have the form of universality, but do not impinge on the freedom of others: thus each subject must will maxims that could be universally self-legislated.

The result, of course, is a formulation of the categorical imperative that contains much the same content as the first two. We must will something that we could at the same time freely will of ourselves. After introducing this third formulation, Kant introduces a distinction between autonomy (literally: self-law-giving) and heteronomy (literally: other-law-giving). This third formulation makes it clear that the categorical imperative requires autonomy. It is not enough that the right conduct be followed, but that one also demands that conduct of oneself.

Fourth formulation: The Kingdom of Ends

Act according to maxims of a universally legislating member of a merely possible kingdom of ends.

— Immanuel Kant, Groundwork of the Metaphysics of Morals

In the Groundwork, Kant goes on to formulate the categorical imperative in a number of ways following the first three; however, because Kant himself claims that there are only three principles, little attention has been given to these other formulations. Moreover, they are often easily assimilated to the first three formulations, as Kant takes himself to be explicitly summarizing these earlier principles.

There is, however, another formulation that has received additional attention as it appears to introduce a social dimension into Kant's thought. This is the formulation of the "Kingdom of Ends."

Because a truly autonomous will would not be subjugated to any interest, it would only be subject to those laws it makes for itself—but it must also regard those laws as binding on others, or they would not be universalizable, and hence they would not be laws of conduct at all. Thus, Kant presents the notion of a hypothetical Kingdom of Ends, in which he suggests all people should consider themselves never solely as means but always as ends.

We ought to act only by maxims that would harmonize with a possible kingdom of ends. We have perfect duty not to act by maxims that create incoherent or impossible states of natural affairs when we attempt to universalize them, and we have imperfect duty not to act by maxims that lead to unstable or greatly undesirable states of affairs.

Application

Although Kant was intensely critical of the use of examples as moral yardsticks, as they tend to rely on our moral intuitions (feelings) rather than our rational powers, this section explores some applications of the categorical imperative for illustrative purposes.

Deception

Kant asserted that lying, or deception of any kind, would be forbidden under any interpretation and in any circumstance. In the Groundwork, Kant gives the example of a person who seeks to borrow money without intending to pay it back. This is a contradiction because, were it a universal practice, no one would lend money anymore, since lenders would know they would never be repaid. The maxim of this action, says Kant, results in a contradiction in conceivability (and thus contradicts perfect duty). Lying would likewise contradict the reliability of language: if it were universally acceptable to lie, then no one would believe anyone, and all truths would be assumed to be lies. In each case, the proposed action becomes inconceivable in a world where the maxim exists as law. In a world where no one would lend money, seeking to borrow money in the manner originally imagined is inconceivable. In a world where no one trusts one another, the same is true of manipulative lies.

The right to deceive could also not be claimed because it would deny the status of the person deceived as an end in itself. The deception would be incompatible with a possible kingdom of ends. Therefore, Kant denied the right to lie or deceive for any reason, regardless of context or anticipated consequences.

Theft

Kant argued that any action taken against another person to which he or she could not possibly consent is a violation of perfect duty as interpreted through the second formulation. If a thief were to steal a book from an unknowing victim, it may have been that the victim would have agreed, had the thief simply asked. However, no person can consent to theft, because the presence of consent would mean that the transfer was not a theft. Because the victim could not have consented to the action, it could not be instituted as a universal law of nature, and theft contradicts perfect duty.

Suicide

In the Groundwork of the Metaphysics of Morals, Kant applies his categorical imperative to the issue of suicide motivated by a sickness of life:

A man reduced to despair by a series of misfortunes feels sick of life, but is still so far in possession of his reason that he can ask himself whether taking his own life would not be contrary to his duty to himself. Now he asks whether the maxim of his action could become a universal law of nature. But his maxim is this: from self-love I make as my principle to shorten my life when its continued duration threatens more evil than it promises satisfaction. There only remains the question as to whether this principle of self-love can become a universal law of nature. One sees at once a contradiction in a system of nature whose law would destroy life by means of the very same feeling that acts so as to stimulate the furtherance of life, and hence there could be no existence as a system of nature. Therefore, such a maxim cannot possibly hold as a universal law of nature and is, consequently, wholly opposed to the supreme principle of all duty.

How the Categorical Imperative would apply to suicide from other motivations is unclear.

Laziness

Kant also applies the categorical imperative in the Groundwork of the Metaphysics of Morals to the subject of "failing to cultivate one's talents." He imagines a man who, if he cultivated his talents, could bring about many goods, but who has everything he wants and would prefer to enjoy the pleasures of life instead. The man asks himself whether the universalization of such a maxim works. While Kant agrees that a society could subsist if everyone did nothing, he notes that the man would then have no pleasures to enjoy, for if everyone let their talents go to waste, there would be no one to create the luxuries that make this theoretical situation possible in the first place. Moreover, cultivating one's talents is a duty to oneself. Thus, laziness cannot be willed as universal, and a rational being has an imperfect duty to cultivate its talents. Kant concludes in the Groundwork:

[H]e cannot possibly will that this should become a universal law of nature or be implanted in us as such a law by a natural instinct. For as a rational being he necessarily wills that all his faculties should be developed, inasmuch as they are given him for all sorts of possible purposes.

Charity

Kant's last application of the categorical imperative in the Groundwork of the Metaphysics of Morals is of charity. He proposes a fourth man who finds his own life fine but sees other people struggling with life and who ponders the outcome of doing nothing to help those in need (while not envying them or accepting anything from them). While Kant admits that humanity could subsist (and admits it could possibly perform better) if this were universal, he states:

But even though it is possible that a universal law of nature could subsist in accordance with that maxim, still it is impossible to will that such a principle should hold everywhere as a law of nature. For a will that resolved in this way would contradict itself, inasmuch as cases might often arise in which one would have need of the love and sympathy of others and in which he would deprive himself, by such a law of nature springing from his own will, of all hope of the aid he wants for himself.

Cruelty to animals

Kant derived a prohibition against cruelty to animals by arguing that such cruelty is a violation of a duty in relation to oneself. According to Kant, man has the imperfect duty to strengthen the feeling of compassion, since this feeling promotes morality in relation to other human beings. However, cruelty to animals deadens the feeling of compassion in man. Therefore, man is obliged not to treat animals brutally.

The trial of Adolf Eichmann

In 1961, discussion of Kant's categorical imperative was included in the trial of the SS Lieutenant Colonel Adolf Eichmann in Jerusalem.

As Hannah Arendt wrote in her book on the trial, Eichmann declared "with great emphasis that he had lived his whole life...according to a Kantian definition of duty." Arendt considered this so "incomprehensible on the face of it" that it confirmed her sense that he wasn't really thinking at all, just mouthing accepted formulae, thereby establishing his banality. Judge Raveh indeed had asked Eichmann whether he thought he had really lived according to the categorical imperative during the war. Eichmann acknowledged he did not "live entirely according to it, although I would like to do so."

Deborah Lipstadt, in her book on the trial, takes this as evidence that evil is not banal, but is in fact self-aware.

Application of the universalizability principle to the ethics of consumption

Pope Francis, in his 2015 encyclical, applies the first formulation of the universalizability principle to the issue of consumption:

Instead of resolving the problems of the poor and thinking of how the world can be different, some can only propose a reduction in the birth rate. ... To blame population growth instead of extreme and selective consumerism on the part of some, is one way of refusing to face the issues. It is an attempt to legitimize the present model of distribution, where a minority believes that it has the right to consume in a way which can never be universalized, since the planet could not even contain the waste products of such consumption.

Game theory

One form of the categorical imperative is superrationality. The concept was elucidated by Douglas Hofstadter as a new approach to game theory. Unlike in conventional game theory, a superrational player acts on the assumption that all other players are superrational too, and that superrational agents facing the same problem will always arrive at the same strategy.
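The contrast can be sketched in a one-shot prisoner's dilemma. The payoff numbers below are illustrative assumptions, not taken from Hofstadter's text: a conventionally rational player picks the dominant strategy (defect), while a superrational player, assuming every superrational opponent reaches the same choice, considers only symmetric outcomes and so cooperates.

```python
# Illustrative prisoner's dilemma payoffs: PAYOFFS[(mine, theirs)] -> my payoff.
# The specific numbers are assumed for the sketch.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}
MOVES = ("cooperate", "defect")

def conventional_choice():
    # A conventionally rational player looks for a dominant strategy:
    # a move that pays at least as much as any alternative, no matter
    # what the opponent does. Here that move is defection.
    for mine in MOVES:
        if all(PAYOFFS[(mine, theirs)] >= PAYOFFS[(other, theirs)]
               for theirs in MOVES for other in MOVES):
            return mine
    return None

def superrational_choice():
    # A superrational player reasons that every superrational opponent
    # will reach the same strategy, so only the symmetric outcomes
    # (both cooperate, both defect) are live options; pick the move
    # with the better symmetric payoff.
    return max(MOVES, key=lambda m: PAYOFFS[(m, m)])

print(conventional_choice())   # defect
print(superrational_choice())  # cooperate
```

The "as if facing a mirror" step in `superrational_choice` is what echoes the categorical imperative: the strategy is evaluated as though it were adopted universally by all similarly rational agents.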

Criticisms

The Golden Rule

The first formulation of the categorical imperative appears similar to the Golden Rule. In its negative form, the rule prescribes: "Do not impose on others what you do not wish for yourself." In its positive form, the rule states: "Treat others how you wish to be treated." Due to this similarity, some have thought the two are identical. William P. Alston and Richard B. Brandt, in their introduction to Kant, stated, "His view about when an action is right is rather similar to the Golden Rule; he says, roughly, that an act is right if and only if its agent is prepared to have that kind of action made universal practice or a 'law of nature.' Thus, for instance, Kant says it is right for a person to lie if and only if he is prepared to have everyone lie in similar circumstances, including those in which he is deceived by the lie."

Claiming that Ken Binmore thought so as well, Peter Corning suggests that:

Kant's objection to the Golden Rule is especially suspect because the categorical imperative (CI) sounds a lot like a paraphrase, or perhaps a close cousin, of the same fundamental idea. In effect, it says that you should act toward others in ways that you would want everyone else to act toward others, yourself included (presumably). Calling it a universal law does not materially improve on the basic concept.

Kant himself did not think so in the Groundwork of the Metaphysics of Morals. Rather, the categorical imperative is an attempt to identify a purely formal and necessarily universally binding rule on all rational agents. The Golden Rule, on the other hand, is neither purely formal nor necessarily universally binding. It is "empirical" in the sense that applying it depends on providing content, such as, "If you don't want others to hit you, then don't hit them." It is also a hypothetical imperative in the sense that it can be formulated, "If you want X done to you, then do X to others." Kant feared that the hypothetical clause, "if you want X done to you," remains open to dispute. In fact, he famously criticized it for not being sensitive to differences of situation, noting that a prisoner duly convicted of a crime could appeal to the golden rule while asking the judge to release him, pointing out that the judge would not want anyone else to send him to prison, so he should not do so to others.

Lying to a murderer

One of the first major challenges to Kant's reasoning came from the French philosopher Benjamin Constant, who asserted that since truth telling must be universal, according to Kant's theories, one must (if asked) tell a known murderer the location of his prey. This challenge occurred while Kant was still alive, and his response was the essay On a Supposed Right to Tell Lies from Benevolent Motives (sometimes translated On a Supposed Right to Lie because of Philanthropic Concerns). In this reply, Kant agreed with Constant's inference, that from Kant's own premises one must infer a moral duty not to lie to a murderer.

Kant denied that such an inference indicates any weakness in his premises: not lying to the murderer is required because moral actions do not derive their worth from the expected consequences. He claimed that because lying to the murderer would treat him as a mere means to another end, the lie denies the rationality of another person, and therefore denies the possibility of there being free rational action at all. This lie results in a contradiction in conception and therefore the lie is in conflict with duty.

Constant and Kant agree that refusing to answer the murderer's question (rather than lying) is consistent with the categorical imperative, but assume for the purposes of argument that refusing to answer would not be an option.

Questioning autonomy

Schopenhauer's criticism of the Kantian philosophy expresses doubt about the absence of egoism in the categorical imperative: he claimed that the categorical imperative is actually hypothetical and egotistical, not categorical. However, Schopenhauer's criticism (as cited here) presents a weak case for linking egoism to Kant's formulations of the categorical imperative. By definition, any form of sentient, organic life is interdependent with its organic and inorganic surroundings, with the environmental features that support life, and with species-dependent means of child rearing. These conditions of mutual interdependence, which make such a life form possible at all, already place it in coordination with other forms of life, whether through pure practical reason or not. The categorical imperative may indeed be biased in that it is life-promoting, and in part promotes the positive freedom of rational beings to pursue freely the setting of their own ends (that is, choices).

However, deontology holds not merely this positive form of freedom (to set ends freely) but also a negative form of freedom for that same will (to refrain from setting ends that treat others merely as means, and so on). Kant argues that the deontological system is grounded in a synthetic a priori: by restricting the will's motive at its root to a purely moral schema, its maxims can be held up against the pure moral law as a structure of cognition, and the resulting alteration of action accompanies a cultured person's "reverence for the law" or "moral feeling".

Thus, insofar as an individual's freely chosen ends are consistent with a rational idea of a community of interdependent beings also exercising their pure moral reason, this "egoism" is self-justified as a "holy" good will, because its motive is consistent with what all rational beings able to exercise this purely formal reason would endorse. The full community of rational members, even if this "Kingdom of Ends" is not yet actualized and whether or not we ever live to see it, is thus a kind of "infinite game" that all beings able to participate seek to hold in view when they choose the "highest use of reason" (see the Critique of Pure Reason), which is reason in its pure practical form: that is, morality seen deontologically.

Søren Kierkegaard believed Kantian autonomy was insufficient and that, if unchecked, people tend to be lenient in their own cases, either by not exercising the full rigor of the moral law or by not properly disciplining themselves for moral transgressions. However, many of Kierkegaard's criticisms, based on his understanding of Kantian autonomy, neglect the evolution of Kant's moral theory from the Groundwork of the Metaphysics of Morals through the later critiques, the Critique of Practical Reason and the Critique of the Power of Judgment, to his final work on moral theory, the Metaphysics of Morals.

Kant was of the opinion that man is his own law (autonomy)—that is, he binds himself under the law which he himself gives himself. Actually, in a profounder sense, this is how lawlessness or experimentation are established. This is not being rigorously earnest any more than Sancho Panza's self-administered blows to his own bottom were vigorous. ... Now if a man is never even once willing in his lifetime to act so decisively that [a lawgiver] can get hold of him, well, then it happens, then the man is allowed to live on in self-complacent illusion and make-believe and experimentation, but this also means: utterly without grace.

— Søren Kierkegaard, Papers and Journals
