
Saturday, June 28, 2025

Dialectical behavior therapy

From Wikipedia, the free encyclopedia
[Image: The skills modules in dialectical behavior therapy]

Dialectical behavior therapy (DBT) is an evidence-based psychotherapy that began with efforts to treat personality disorders and interpersonal conflicts. Evidence suggests that DBT can be useful in treating mood disorders and suicidal ideation as well as for changing behavioral patterns such as self-harm and substance use. DBT evolved into a process in which the therapist and client work with acceptance and change-oriented strategies and ultimately balance and synthesize them—comparable to the philosophical dialectical process of thesis and antithesis, followed by synthesis.

This approach was developed by Marsha M. Linehan, a psychology researcher at the University of Washington. She defines it as "a synthesis or integration of opposites". DBT was designed to help people increase their emotional and cognitive regulation by learning about the triggers that lead to reactive states and by helping to assess which coping skills to apply in the sequence of events, thoughts, feelings, and behaviors to help avoid undesired reactions. Linehan later disclosed to the public her own struggles and belief that she suffers from borderline personality disorder.

DBT grew out of a series of failed attempts to apply the standard cognitive behavioral therapy (CBT) protocols of the late 1970s to chronically suicidal clients. Research on its effectiveness in treating other conditions has been fruitful. DBT has been used by practitioners to treat people with depression, drug and alcohol problems, post-traumatic stress disorder (PTSD), traumatic brain injuries (TBI), binge-eating disorder, and mood disorders. Research indicates that DBT might help patients with symptoms and behaviors associated with spectrum mood disorders, including self-injury. Work also suggests its effectiveness with sexual-abuse survivors and chemical dependency.

DBT combines standard cognitive-behavioral techniques for emotion regulation and reality-testing with concepts of distress tolerance, acceptance, and mindful awareness largely derived from contemplative meditative practice. DBT is based upon the biosocial theory of mental illness and is the first therapy that has been experimentally demonstrated to be generally effective in treating borderline personality disorder (BPD). The first randomized clinical trial of DBT showed reduced rates of suicidal gestures, psychiatric hospitalizations, and treatment dropouts when compared to usual treatment. A meta-analysis found that DBT reached moderate effects in individuals with BPD. DBT may not be appropriate as a universal intervention: a study of an adapted DBT skills-training intervention for adolescents in schools found null or harmful effects, though conclusions of iatrogenic harm are unwarranted, as most participants did not meaningfully engage with the assigned activities, and higher engagement predicted more positive outcomes.

Overview

DBT is sometimes considered a part of the "third wave" of cognitive-behavioral therapy, as DBT adapts CBT to assist patients in dealing with stress. DBT focuses on treating disorders that are characterized by impulsivity and emotional dysregulation.

DBT strives to have the patient view the therapist as an accepting ally rather than an adversary in the treatment of psychological issues: many treatments of that era left patients feeling "criticized, misunderstood, and invalidated" due to the way these methods "focused on changing cognitions and behaviors." Accordingly, the therapist aims to accept and validate the client's feelings at any given time, while, nonetheless, informing the client that some feelings and behaviors are maladaptive, and showing them better alternatives. In particular, DBT targets self-harm and suicide attempts by identifying the function of that behavior and obtaining that function safely through DBT coping skills. DBT focuses on the client acquiring new skills and changing their behaviors, with the ultimate goal of achieving a "life worth living".

In DBT's biosocial theory of BPD, clients have a biological predisposition for emotional dysregulation, and their social environment validates maladaptive behavior.

DBT skills training alone is being used to address treatment goals in some clinical settings, and the broader goal of emotion regulation that is seen in DBT has allowed it to be used in new settings, for example, supporting parenting. There has been little study into adapting DBT into an online environment, but a review indicates that attendance is improved online, with comparable improvements for clients to the traditional mode.

Four modules

Mindfulness

[Image: DBT wise mind—the synthesis of the two opposites: reasonable mind and emotion mind]

Mindfulness is one of the core ideas behind all elements of DBT. It is considered a foundation for the other skills taught in DBT, because it helps individuals accept and tolerate the powerful emotions they may feel when challenging their habits or exposing themselves to upsetting situations.

The concept of mindfulness and the meditative exercises used to teach it are derived from traditional contemplative religious practice, though the version taught in DBT does not involve any religious or metaphysical concepts. Within DBT, mindfulness is the capacity to pay attention, nonjudgmentally, to the present moment: living in the moment, experiencing one's emotions and senses fully, yet with perspective. The practice of mindfulness can also be intended to make people more aware of their environments through their five senses: touch, smell, sight, taste, and sound. Mindfulness relies heavily on the principle of acceptance, sometimes referred to as "radical acceptance". Acceptance skills rely on the patient's ability to view situations with no judgment, and to accept situations and their accompanying emotions. This causes less distress overall, which can result in reduced discomfort and symptomatology.

Acceptance and change

The first few sessions of DBT introduce the dialectic of acceptance and change. The patient must first become comfortable with the idea of therapy; once the patient and therapist have established a trusting relationship, DBT techniques can flourish. An essential part of learning acceptance is to first grasp the idea of radical acceptance: radical acceptance embraces the idea of facing situations, both positive and negative, without judgment. Acceptance also incorporates mindfulness and emotional regulation skills, which depend on the idea of radical acceptance. These skills, specifically, are what set DBT apart from other therapies.

Often, after a patient becomes familiar with the idea of acceptance, they will accompany it with change. DBT has five specific stages of change which the therapist will review with the patient: pre-contemplation, contemplation, preparation, action, and maintenance. Pre-contemplation is the first stage, in which the patient is completely unaware of their problem. In the second stage, contemplation, the patient realizes the reality of their illness: this is not an action, but a realization. It is not until the third stage, preparation, that the patient is likely to take action, and prepares to move forward. This could be as simple as researching or contacting therapists. Finally, in stage 4, the patient takes action and receives treatment. In the final stage, maintenance, the patient must strengthen their change in order to prevent relapse. After grasping acceptance and change, a patient can fully advance to mindfulness techniques.

There are six mindfulness skills used in DBT to bring the client closer to achieving a "wise mind", the synthesis of the rational mind and emotional mind: three "what" skills (observe, describe, participate) and three "how" skills (nonjudgmentally, one-mindfully, effectively).

Distress tolerance

The concept of distress tolerance arose from methods used in person-centered, psychodynamic, psychoanalytic, gestalt, and/or narrative therapies, along with religious and spiritual practices. Distress tolerance means learning to bear emotional discomfort skillfully, without resorting to maladaptive reactions. Healthier coping behaviors are learned, including intentional self-distraction, self-soothing, and 'radical acceptance.'

Distress tolerance skills are meant to arise naturally as a consequence of mindfulness. They have to do with the ability to accept, in a non-evaluative and nonjudgmental fashion, both oneself and the current situation. It is meant to be a non-judgmental stance, one of neither approval nor resignation. The goal is to become capable of calmly recognizing negative situations and their impact, rather than becoming overwhelmed or hiding from them. This allows individuals to make wise decisions about whether and how to take action, rather than falling into intense, desperate, and often destructive emotional reactions.

Emotion regulation

Individuals with borderline personality disorder and suicidal individuals are frequently emotionally intense and labile. They can be angry, intensely frustrated, depressed, or anxious. The theory holds that intense emotions are conditioned responses to distressing experiences, which serve as the conditioned stimuli. Emotional regulation skills are taught to help patients modify their conditioned responses.

Dialectical behavior therapy skills for emotion regulation include:

  • Learning how to understand and name emotions: the patient focuses on recognizing their feelings. This segment relates directly to mindfulness, which also exposes a patient to their emotions.
  • Identifying obstacles to changing emotions
  • Changing unwanted emotions: the therapist emphasizes the use of opposite reactions, fact-checking, and problem solving to regulate emotions. While using opposite reactions, the patient targets distressing feelings by responding with the opposite emotion.
  • Reducing vulnerability: the patient learns to accumulate positive emotions and to plan coping mechanisms in advance, in order to better handle difficult experiences in the future.
  • Increasing mindfulness of current emotions
  • Taking opposite action
  • Applying distress tolerance techniques
  • Managing extreme conditions: the patient focuses on incorporating their use of mindfulness skills into their current emotions, to remain stable and alert in a crisis.

Interpersonal effectiveness

The three interpersonal skills focused on in DBT include self-respect, treating others "with care, interest, validation, and respect", and assertiveness. The dialectic involved in healthy relationships involves balancing the needs of others with the needs of the self, while maintaining one's self-respect.

Tools

Diary cards

Specially formatted diary cards can be used to track relevant emotions and behaviors. Diary cards are most useful when they are filled out daily. The diary card is used to find the treatment priorities that guide the agenda of each therapy session. Both the client and therapist can use the diary card to see what has improved, gotten worse, or stayed the same.

Chain analysis

[Image: Chain analysis—from a prompting event to the problem behavior and consequences]

Chain analysis is a form of functional analysis of behavior, but with an increased focus on the sequential events that form the behavior chain. It has strong roots in behavioral psychology, in particular the applied behavior analysis concept of chaining. A growing body of research supports the use of behavior chain analysis with multiple populations.

Efficacy

Borderline personality disorder

DBT is the therapy that has been studied the most for treatment of borderline personality disorder, and there have been enough studies done to conclude that DBT is helpful in treating borderline personality disorder. Several studies have found there are neurobiological changes in individuals with BPD after DBT treatment.

Depression

A Duke University pilot study compared treatment of depression by antidepressant medication to treatment by antidepressants and dialectical behavior therapy. A total of 34 chronically depressed individuals over age 60 were treated for 28 weeks. Six months after treatment, statistically significant differences were noted in remission rates between groups, with a greater percentage of patients treated with antidepressants and dialectical behavior therapy in remission.

Complex post-traumatic stress disorder (CPTSD)

Exposure to complex trauma, or the experience of prolonged trauma with little chance of escape, can lead to the development of complex post-traumatic stress disorder (CPTSD) in an individual. The American Psychiatric Association (APA) does not recognize CPTSD as a diagnosis in the DSM-5 (the Diagnostic and Statistical Manual of Mental Disorders, the manual used by providers to diagnose, treat and discuss mental illness), though many practitioners argue that CPTSD is separate from post-traumatic stress disorder (PTSD). As of 2020, over 40 studies from 15 different countries had "consistently demonstrated the distinction between PTSD and CPTSD" and "replicated the distinct symptoms associated with each disorder", according to a 2021 literature review.

CPTSD is similar to PTSD in that its symptomatology is pervasive and includes cognitive, emotional, and biological domains, among others. CPTSD differs from PTSD in that it is believed to originate in childhood interpersonal trauma, or chronic childhood stress, and that the most common precedents are sexual traumas. Currently, the prevalence rate for CPTSD is an estimated 0.5%, while PTSD's is 1.5%. Numerous definitions for CPTSD exist. Different versions are contributed by the World Health Organization (WHO), The International Society for Traumatic Stress Studies (ISTSS), and individual clinicians and researchers.

Most definitions revolve around criteria for PTSD with the addition of several other domains. While the APA may not recognize CPTSD, the WHO has recognized this syndrome in its 11th edition of the International Classification of Diseases (ICD-11). The WHO defines CPTSD as a disorder following a single event or multiple events which cause the individual to feel stressed or trapped, characterized by low self-esteem, interpersonal deficits, and deficits in affect regulation. These deficits in affect regulation, among other symptoms, are a reason why CPTSD is sometimes compared with borderline personality disorder (BPD).

Similarities between CPTSD and borderline personality disorder

In addition to affect dysregulation, case studies reveal that patients with CPTSD can also exhibit splitting, mood swings, and fears of abandonment. Like patients with borderline personality disorder, patients with CPTSD were traumatized frequently and/or early in their development and never learned proper coping mechanisms. These individuals may use avoidance, substances, dissociation, and other maladaptive behaviors to cope. Thus, treatment for CPTSD involves stabilizing and teaching successful coping behaviors, affect regulation, and creating and maintaining interpersonal connections. In addition to sharing symptom presentations, CPTSD and BPD can share neurophysiological similarities, for example, abnormal volume of the amygdala (emotional memory), hippocampus (memory), anterior cingulate cortex (emotion), and orbital prefrontal cortex (personality). Another shared characteristic between CPTSD and BPD is the possibility for dissociation. Further research is needed to determine the reliability of dissociation as a hallmark of CPTSD; however, it is a possible symptom. Because of the two disorders' shared symptomatology and physiological correlates, psychologists began hypothesizing that a treatment which was effective for one disorder may be effective for the other as well.

DBT as a treatment for CPTSD

DBT's use of acceptance and goal orientation as an approach to behavior change can help to instill empowerment and engage individuals in the therapeutic process. The focus on the future and change can help to prevent the individual from becoming overwhelmed by their history of trauma. This is a risk especially with CPTSD, as multiple traumas are common within this diagnosis. Generally, care providers address a client's suicidality before moving on to other aspects of treatment. Because PTSD can make an individual more likely to experience suicidal ideation, DBT can be an option to stabilize suicidality and aid in other treatment modalities.

Some critics argue that while DBT can be used to treat CPTSD, it is not significantly more effective than standard PTSD treatments. Further, this argument posits that DBT decreases self-injurious behaviors (such as cutting or burning) and increases interpersonal functioning but neglects core CPTSD symptoms such as impulsivity, cognitive schemas (repetitive, negative thoughts), and emotions such as guilt and shame. The ISTSS reports that CPTSD requires treatment which differs from typical PTSD treatment, using a multiphase model of recovery, rather than focusing on traumatic memories. The recommended multiphase model consists of establishing safety, distress tolerance, and social relations.

Because DBT has four modules which generally align with these guidelines (Mindfulness, Distress Tolerance, Affect Regulation, Interpersonal Skills) it is a treatment option. Other critiques of DBT discuss the time required for the therapy to be effective. Individuals seeking DBT may not be able to commit to the individual and group sessions required, or their insurance may not cover every session.

A study co-authored by Linehan found that among women receiving outpatient care for BPD and who had attempted suicide in the previous year, 56% additionally met criteria for PTSD. Because of the correlation between borderline personality disorder traits and trauma, some settings began using DBT as a treatment for traumatic symptoms. Some providers opt to combine DBT with other PTSD interventions, such as prolonged exposure therapy (PE) (repeated, detailed description of the trauma in a psychotherapy session) or cognitive processing therapy (CPT) (psychotherapy which addresses cognitive schemas related to traumatic memories).

For example, a regimen which combined PE and DBT would include teaching mindfulness skills and distress tolerance skills, then implementing PE. The individual with the disorder would then be taught acceptance of a trauma's occurrence and how it may continue to affect them throughout their lives. Participants in clinical trials of this DBT PE regimen exhibited a decrease in symptoms, and throughout the 12-week trial, no self-injurious or suicidal behaviors were reported. Later trials similarly showed increased effectiveness of the combined DBT PE regimen compared with DBT alone.

Another argument which supports the use of DBT as a treatment for trauma hinges upon PTSD-associated difficulties such as emotion dysregulation and distress. Some PTSD treatments such as exposure therapy may not be suitable for individuals whose distress tolerance and/or emotion regulation is low. Biosocial theory posits that emotion dysregulation is caused by an individual's heightened emotional sensitivity combined with environmental factors (such as invalidation of emotions, continued abuse/trauma) and a tendency to ruminate (repeatedly think about a negative event and how the outcome could have been changed).

An individual who has these features is likely to use maladaptive coping behaviors. DBT can be appropriate in these cases because it teaches appropriate coping skills and allows the individuals to develop some degree of self-sufficiency. The first three modules of DBT increase distress tolerance and emotion regulation skills in the individual, paving the way for work on symptoms such as intrusions, self-esteem deficiency, and interpersonal relations.

Noteworthy is that DBT has often been modified based on the population being treated. For example, in veteran populations DBT is modified to include exposure exercises, to accommodate the presence of traumatic brain injury (TBI), and to fit insurance coverage (i.e., by shortening treatment). Populations with comorbid BPD may need to spend longer in the "Establishing Safety" phase. In adolescent populations, the skills training aspect of DBT has elicited significant improvement in emotion regulation and ability to express emotion appropriately. In populations with comorbid substance use, adaptations may be made on a case-by-case basis.

For example, a provider may wish to incorporate elements of motivational interviewing (psychotherapy which uses empowerment to inspire behavior change). The degree of substance use should also be considered. For some individuals, substance use is the only coping behavior they know, and as such the provider may seek to implement skills training before targeting substance reduction. Inversely, a client's substance use may be interfering with attendance or other treatment compliance and the provider may choose to address the substance use before implementing DBT for the trauma.

Computational science

From Wikipedia, the free encyclopedia

Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science, and more specifically of computer science, which uses advanced computing capabilities to understand and solve complex physical problems.

In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiments, which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs and application software that model systems being studied and run these programs with various sets of input parameters. The essence of computational science is the application of numerical algorithms and computational mathematics. In some cases, these models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms.
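To make this concrete, the following is a minimal sketch of that workflow in Python: a toy mathematical model (exponential decay, invented purely for illustration) implemented as a program, advanced in small floating-point time steps, and run with several sets of input parameters.

    import numpy as np

    def simulate_decay(n0, rate, dt=0.01, t_end=5.0):
        """Integrate the model dN/dt = -rate * N with forward Euler steps."""
        steps = int(t_end / dt)
        n = n0
        history = np.empty(steps)
        for i in range(steps):
            n += dt * (-rate * n)   # apply the model's differential equation
            history[i] = n
        return history

    # Run the same model under several sets of input parameters, as the
    # text describes scientists doing with far larger simulation codes.
    for rate in (0.5, 1.0, 2.0):
        final = simulate_decay(n0=100.0, rate=rate)[-1]
        print(f"rate={rate}: N(t_end) is approximately {final:.3f}")

Real computational-science codes follow the same shape at much larger scale, with the loop body replaced by the discretized equations of the system under study.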

The computational scientist

[Image: Ways to study a system]

The term computational scientist is used to describe someone skilled in scientific computing. Such a person is usually a scientist, an engineer, or an applied mathematician who applies high-performance computing in different ways to advance the state-of-the-art in their respective applied disciplines in physics, chemistry, or engineering.

Computational science is now commonly considered a third mode of science, complementing and adding to experimentation/observation and theory (see image). Here, one defines a system as a potential source of data, an experiment as a process of extracting data from a system by exerting it through its inputs, and a model (M) for a system (S) and an experiment (E) as anything to which E can be applied in order to answer questions about S. A computational scientist should be capable of:

  • recognizing complex problems
  • adequately conceptualizing the system containing these problems
  • designing a framework of algorithms suitable for studying this system: the simulation
  • choosing a suitable computing infrastructure (parallel computing/grid computing/supercomputers)
  • thereby maximizing the computational power of the simulation
  • assessing to what level the output of the simulation resembles the system: the model is validated
  • adjusting the conceptualization of the system accordingly
  • repeating the cycle until a suitable level of validation is obtained: the computational scientist trusts that the simulation generates adequately realistic results for the system under the studied conditions (a minimal sketch of this cycle follows below)
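A hedged sketch of this cycle in Python, using an invented exponential-decay model and synthetic "observations" in place of real experimental data (the tolerance and the adjustment rule are arbitrary illustrations, not a recommended calibration method):

    import numpy as np

    def model(params, t):
        """The current conceptualization of the system: exponential decay."""
        n0, rate = params
        return n0 * np.exp(-rate * t)

    def validate(params, t_obs, y_obs, tolerance=1.0):
        """Assess how closely the simulation output resembles the system."""
        error = np.sqrt(np.mean((model(params, t_obs) - y_obs) ** 2))
        return error, error < tolerance

    # Synthetic "observations" standing in for data extracted by experiment.
    t_obs = np.linspace(0.0, 5.0, 20)
    y_obs = 100.0 * np.exp(-1.3 * t_obs)

    # Repeat the cycle: simulate, validate, adjust the conceptualization.
    params = [100.0, 0.5]
    for _ in range(50):
        error, accepted = validate(params, t_obs, y_obs)
        if accepted:
            break
        params[1] += 0.05           # crude adjustment of the rate parameter
    print(f"accepted rate: {params[1]:.2f}, RMS error: {error:.3f}")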

Substantial effort in computational sciences has been devoted to developing algorithms, efficient implementation in programming languages, and validating computational results. A collection of problems and solutions in computational science can be found in Steeb, Hardy, Hardy, and Stoop (2004).

Philosophers of science have addressed the question of the degree to which computational science qualifies as science, among them Humphreys and Gelfert. They address the general question of epistemology: how do we gain insight from such computational science approaches? Tolk uses these insights to show the epistemological constraints of computer-based simulation research. As computational science uses mathematical models representing the underlying theory in executable form, these approaches in essence apply modeling (theory building) and simulation (implementation and execution). While simulation and computational science are our most sophisticated ways to express our knowledge and understanding, they also come with all the constraints and limits already known for computational solutions.

Applications of computational science

Problem domains for computational science/scientific computing include:

Predictive computational science

Predictive computational science is a scientific discipline concerned with the formulation, calibration, numerical solution, and validation of mathematical models designed to predict specific aspects of physical events, given initial and boundary conditions, and a set of characterizing parameters and associated uncertainties. In typical cases, the predictive statement is formulated in terms of probabilities. For example, given a mechanical component and a periodic loading condition, "the probability is (say) 90% that the number of cycles at failure (Nf) will be in the interval N1<Nf<N2".
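As an illustration only, such a probabilistic prediction can be computed by propagating parameter uncertainty through a model with Monte Carlo sampling. The power-law fatigue model and every number below are invented for this sketch, not a calibrated engineering model:

    import numpy as np

    rng = np.random.default_rng(0)
    samples = 100_000

    stress = 200.0                                  # given load amplitude
    C = rng.lognormal(np.log(1e12), 0.5, samples)   # uncertain material constant
    m = rng.normal(3.0, 0.2, samples)               # uncertain fatigue exponent

    # Hypothetical model for cycles at failure: Nf = C * stress**(-m).
    nf = C * stress ** (-m)

    # The predictive statement: probability that Nf lies in (N1, N2).
    N1, N2 = 1e4, 1e6
    p = np.mean((nf > N1) & (nf < N2))
    print(f"P(N1 < Nf < N2) is approximately {p:.1%}")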

Urban complex systems

Cities are massively complex systems created by humans, made up of humans, and governed by humans. Trying to predict, understand and somehow shape the development of cities in the future requires complex thinking and computational models and simulations to help mitigate challenges and possible disasters. The focus of research in urban complex systems is, through modeling and simulation, to build a greater understanding of city dynamics and help prepare for the coming urbanization.

Computational finance

In financial markets, huge volumes of interdependent assets are traded by a large number of interacting market participants in different locations and time zones. Their behavior is of unprecedented complexity and the characterization and measurement of the risk inherent to this highly diverse set of instruments is typically based on complicated mathematical and computational models. Solving these models exactly in closed form, even at a single instrument level, is typically not possible, and therefore we have to look for efficient numerical algorithms. This has become even more urgent and complex recently, as the credit crisis has clearly demonstrated the role of cascading effects going from single instruments through portfolios of single institutions to even the interconnected trading network. Understanding this requires a multi-scale and holistic approach where interdependent risk factors such as market, credit, and liquidity risk are modeled simultaneously and at different interconnected scales.

Computational biology

Exciting new developments in biotechnology are now revolutionizing biology and biomedical research. Examples of these techniques are high-throughput sequencing, high-throughput quantitative PCR, intra-cellular imaging, in-situ hybridization of gene expression, three-dimensional imaging techniques like Light Sheet Fluorescence Microscopy, and Optical Projection (micro)-Computer Tomography. Given the massive amounts of complicated data that is generated by these techniques, their meaningful interpretation, and even their storage, form major challenges calling for new approaches. Going beyond current bioinformatics approaches, computational biology needs to develop new methods to discover meaningful patterns in these large data sets. Model-based reconstruction of gene networks can be used to organize the gene expression data in a systematic way and to guide future data collection. A major challenge here is to understand how gene regulation is controlling fundamental biological processes like biomineralization and embryogenesis. The sub-processes like gene regulation, organic molecules interacting with the mineral deposition process, cellular processes, physiology, and other processes at the tissue and environmental levels are linked. Rather than being directed by a central control mechanism, biomineralization and embryogenesis can be viewed as an emergent behavior resulting from a complex system in which several sub-processes on very different temporal and spatial scales (ranging from nanometer and nanoseconds to meters and years) are connected into a multi-scale system. One of the few available options to understand such systems is by developing a multi-scale model of the system.

Complex systems theory

Using information theory, non-equilibrium dynamics, and explicit simulations, computational systems theory tries to uncover the true nature of complex adaptive systems.

Computational science and engineering

Computational science and engineering (CSE) is a relatively new discipline that deals with the development and application of computational models and simulations, often coupled with high-performance computing, to solve complex physical problems arising in engineering analysis and design (computational engineering) as well as natural phenomena (computational science). CSE has become accepted amongst scientists, engineers and academics as the "third mode of discovery" (next to theory and experimentation). In many fields, computer simulation is integral and therefore essential to business and research. Computer simulation provides the capability to enter fields that are either inaccessible to traditional experimentation or where carrying out traditional empirical inquiries is prohibitively expensive. CSE should neither be confused with pure computer science, nor with computer engineering, although a wide domain in the former is used in CSE (e.g., certain algorithms, data structures, parallel programming, high-performance computing), and some problems in the latter can be modeled and solved with CSE methods (as an application area).

Methods and algorithms

Algorithms and mathematical methods used in computational science are varied.

Historically and today, Fortran remains popular for most applications of scientific computing. Other programming languages and computer algebra systems commonly used for the more mathematical aspects of scientific computing applications include GNU Octave, Haskell, Julia, Maple, Mathematica, MATLAB, Python (with the third-party SciPy library), Perl (with the third-party PDL library), R, Scilab, and TK Solver. The more computationally intensive aspects of scientific computing will often use some variation of C or Fortran and optimized algebra libraries such as BLAS or LAPACK. In addition, parallel computing is heavily used in scientific computing to find solutions of large problems in a reasonable amount of time. In this framework, the problem is either divided over many cores on a single CPU node (such as with OpenMP), divided over many CPU nodes networked together (such as with MPI), or run on one or more GPUs (typically using either CUDA or OpenCL).
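For instance, the divide-over-workers pattern that OpenMP applies to loop iterations and MPI applies to domains can be sketched in pure Python with the standard library's multiprocessing module (the integrand and the chunk count are arbitrary choices for illustration):

    from multiprocessing import Pool

    import numpy as np

    def partial_sum(bounds):
        """Integrate f(x) = x**2 over [a, b] with the midpoint rule."""
        a, b = bounds
        n = 1_000_000
        x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
        return np.sum(x ** 2) * (b - a) / n

    if __name__ == "__main__":
        # Divide the domain [0, 1] into chunks, one per worker process.
        chunks = [(i / 8, (i + 1) / 8) for i in range(8)]
        with Pool(processes=8) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(f"integral: {total:.6f} (exact value: 1/3)")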

Computational science application programs often model real-world changing conditions, such as weather, airflow around a plane, automobile body distortions in a crash, the motion of stars in a galaxy, an explosive device, etc. Such programs might create a 'logical mesh' in computer memory where each item corresponds to an area in space and contains information about that space relevant to the model. For example, in weather models, each item might be a square kilometer, with land elevation, current wind direction, humidity, temperature, pressure, etc. The program would calculate the likely next state based on the current state, in simulated time steps, solving differential equations that describe how the system operates, and then repeat the process to calculate the next state.
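A minimal sketch of this mesh-and-time-step pattern, substituting a one-dimensional heat equation with periodic boundaries for a full weather model (all parameters here are illustrative):

    import numpy as np

    # A tiny "logical mesh": each cell holds the state (temperature) of
    # one region of space.
    n_cells = 50
    temp = np.zeros(n_cells)
    temp[n_cells // 2] = 100.0          # initial hot spot

    alpha, dx, dt = 1.0, 1.0, 0.2       # diffusivity and discretization
    assert alpha * dt / dx**2 <= 0.5    # explicit-scheme stability condition

    for step in range(500):
        # Discretized heat equation dT/dt = alpha * d2T/dx2: each cell's
        # next state depends on its neighbors' current state (np.roll
        # gives periodic boundaries).
        lap = np.roll(temp, 1) - 2 * temp + np.roll(temp, -1)
        temp = temp + alpha * dt / dx**2 * lap

    print(f"peak temperature after 500 steps: {temp.max():.2f}")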

Conferences and journals

In 2001, the International Conference on Computational Science (ICCS) was first organized. Since then, it has been organized yearly. ICCS is an A-rank conference in the CORE ranking.

The Journal of Computational Science published its first issue in May 2010. The Journal of Open Research Software was launched in 2012. The ReScience C initiative, which is dedicated to replicating computational results, was started on GitHub in 2015.

Education

At some institutions, a specialization in scientific computation can be earned as a "minor" within another program (which may be at varying levels). However, there are increasingly many bachelor's, master's, and doctoral programs in computational science. The joint master's program in computational science offered by the University of Amsterdam and the Vrije Universiteit was first offered in 2004. In this program, students:

  • learn to build computational models from real-life observations;
  • develop skills in turning these models into computational structures and in performing large-scale simulations;
  • learn theories that will give a firm basis for the analysis of complex systems;
  • learn to analyze the results of simulations in a virtual laboratory using advanced numerical algorithms.

ETH Zurich offers a bachelor's and master's degree in Computational Science and Engineering. The degree equips students with the ability to understand scientific problems and apply numerical methods to solve them. The directions of specialization include physics, chemistry, biology, and other scientific and engineering disciplines.

George Mason University has offered a multidisciplinary Ph.D. program in Computational Sciences and Informatics since 1992.

The School of Computational and Integrative Sciences at Jawaharlal Nehru University (formerly the School of Information Technology) also offers a master's program in computational science with two specialties: Computational Biology and Complex Systems.


Computational thinking

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Computational_thinking

Computational thinking (CT) refers to the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms. In education, CT is a set of problem-solving methods that involve expressing problems and their solutions in ways that a computer could also execute. It involves automation of processes, but also using computing to explore, analyze, and understand processes (natural and artificial).

History

The history of computational thinking as a concept dates back at least to the 1950s, but most ideas are much older. Computational thinking involves ideas like abstraction, data representation, and logically organizing data, which are also prevalent in other kinds of thinking, such as scientific thinking, engineering thinking, systems thinking, design thinking, model-based thinking, and the like. Neither the idea nor the term is recent: preceded by terms like algorithmizing, procedural thinking, algorithmic thinking, and computational literacy used by computing pioneers like Alan Perlis and Donald Knuth, the term computational thinking was first used by Seymour Papert in 1980 and again in 1996. Computational thinking can be used to algorithmically solve complicated problems of scale, and is often used to realize large improvements in efficiency.

The phrase computational thinking was brought to the forefront of the computer science education community in 2006 as a result of a Communications of the ACM essay on the subject by Jeannette Wing. The essay suggests that thinking computationally is a fundamental skill for everyone, not just computer scientists, and argues for the importance of integrating computational ideas into other subjects in school. The essay also states that by learning computational thinking, children will be better at many everyday tasks; as examples, the essay gives packing one's backpack, finding one's lost mittens, and knowing when to stop renting and buy instead. The continuum of computational thinking questions in education ranges from K–9 computing for children to professional and continuing education, where the challenge is how to communicate deep principles, maxims, and ways of thinking between experts.

For its first ten years computational thinking was a US-centered movement, and that early focus is still visible in the field's research today. The field's most cited articles and most cited people were active in the early US CT wave, and the field's most active researcher networks are US-based. As the field is dominated by US and European researchers, it is unclear to what extent its predominantly Western body of research literature can cater to the needs of students in other cultural groups. An ongoing effort to globalize effective thinking skills in everyday life is emerging in the Prolog community, whose Prolog Education Committee, sponsored by the Association for Logic Programming, has the mission of "making Computational and Logical Thinking through Prolog and its successors a core subject in educational curricula and beyond, worldwide".

Characteristics

The characteristics that define computational thinking are decomposition, pattern recognition / data representation, generalization/abstraction, and algorithms. By decomposing a problem, identifying the variables involved using data representation, and creating algorithms, a generic solution results. The generic solution is a generalization or abstraction that can be used to solve a multitude of variations of the initial problem.
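As a toy illustration, these four characteristics can be seen in a few lines of Python; the word-counting problem and all function names are invented for this example:

    # Problem: "how many words are in a text file?"

    def read_text(path):       # decomposition: one small part of the problem
        with open(path) as f:
            return f.read()

    def tokenize(text):        # data representation: text -> list of words
        return text.split()

    def count(items):          # abstraction: works for any list, not just words
        return len(items)

    def count_words(path):     # algorithm: compose the steps into a solution
        return count(tokenize(read_text(path)))

    # Generalization: the same decomposition solves variations of the
    # problem, e.g. counting lines instead of words, by swapping one step.
    def count_lines(path):
        return count(read_text(path).splitlines())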

The "three As" Computational Thinking Process describes computational thinking as a set of three steps: abstraction, automation, and analysis.

Another characterization of computational thinking is the "three As" iterative process based on three stages:

  1. Abstraction: Problem formulation;
  2. Automation: Solution expression;
  3. Analysis: Solution execution and evaluation.

Connection to the "four Cs"

The four Cs of 21st-century learning are communication, critical thinking, collaboration, and creativity. The fifth C could be computational thinking, which entails the capability to resolve problems algorithmically and logically. It includes tools that produce models and visualize data. Grover describes how computational thinking is applicable across subjects beyond science, technology, engineering, and mathematics (STEM), including the social sciences and language arts.

Since its inception, the 4 Cs have gradually gained acceptance as important elements of many school syllabi. This development triggered modifications in platforms and directions such as inquiry-based, project-based, and deeper learning across all K–12 levels. Many countries have introduced computational thinking to all students: the United Kingdom has had CT in its national curriculum since 2012, Singapore calls CT a "national capability", and other nations like Australia, China, Korea, and New Zealand have embarked on massive efforts to introduce computational thinking in schools. In the United States, President Barack Obama created the "Computer Science for All" program to empower a new generation of American students with the computer science proficiency required to flourish in a digital economy. In this context, computational thinking means thinking about or solving problems like computer scientists: thought processes that involve logic, assessment, patterns, automation, and generalization. Career readiness built on such skills can be integrated into academic environments in multiple ways.

The "algoRithms" part of CT has also been referred to as the "fourth R", where the others are Reading, wRiting, and aRithmetic.

Computational education

[Image: 3D design of cubicle desks to get computers to the desk for a computational education]

In K–12 education

Similar to Seymour Papert, Alan Perlis, and Marvin Minsky before her, Jeannette Wing envisioned computational thinking becoming an essential part of every child's education. However, integrating computational thinking into the K–12 curriculum and computer science education has faced several challenges, including agreement on the definition of computational thinking, how to assess children's development in it, and how to distinguish it from other similar kinds of "thinking" like systems thinking, design thinking, and engineering thinking. Currently, computational thinking is broadly defined as a set of cognitive skills and problem-solving processes that include (but are not limited to) the following characteristics, though some argue that few, if any, of them belong specifically to computing rather than being principles common to many fields of science and engineering.

  • Using abstractions and pattern recognition to represent the problem in new and different ways
  • Logically organizing and analyzing data
  • Breaking the problem down into smaller parts
  • Approaching the problem using programmatic thinking techniques such as iteration, symbolic representation, and logical operations
  • Reformulating the problem into a series of ordered steps (algorithmic thinking)
  • Identifying, analyzing, and implementing possible solutions with the goal of achieving the most efficient and effective combination of steps and resources
  • Generalizing this problem-solving process to a wide variety of problems

Current integration of computational thinking into the K–12 curriculum comes in two forms: in computer science classes directly, or through the use and measurement of computational thinking techniques in other subjects. Teachers in Science, Technology, Engineering, and Mathematics (STEM)-focused classrooms that include computational thinking allow students to practice problem-solving skills such as trial and error. Valerie Barr and Chris Stephenson describe computational thinking patterns across disciplines in a 2011 ACM Inroads article. However, Conrad Wolfram has argued that computational thinking should be taught as a distinct subject.

There are online institutions that provide curricula and other related resources to help pre-college students build and strengthen computational thinking, analysis, and problem-solving skills.

Center for Computational Thinking

Carnegie Mellon University in Pittsburgh has a Center for Computational Thinking. The Center's major activity is conducting PROBEs or PROBlem-oriented Explorations. These PROBEs are experiments that apply novel computing concepts to problems to show the value of computational thinking. A PROBE experiment is generally a collaboration between a computer scientist and an expert in the field to be studied. The experiment typically runs for a year. In general, a PROBE will seek to find a solution for a broadly applicable problem and avoid narrowly focused issues. Some examples of PROBE experiments are optimal kidney transplant logistics and how to create drugs that do not breed drug-resistant viruses.

Criticism

The concept of computational thinking has been criticized as too vague, as it's rarely made clear how it is different from other forms of thought. The inclination among computer scientists to force computational solutions upon other fields has been called "computational chauvinism". Some computer scientists worry about the promotion of computational thinking as a substitute for a broader computer science education, as computational thinking represents just one small part of the field. Others worry that the emphasis on computational thinking encourages computer scientists to think too narrowly about the problems they can solve, thus avoiding the social, ethical and environmental implications of the technology they create. In addition, as nearly all CT research is done in the US and Europe, it is not certain how well those educational ideas work in other cultural contexts.

A 2019 paper argues that the term "computational thinking" (CT) should be used mainly as a shorthand to convey the educational value of computer science, hence the need to teach it in school. The strategic goal is to have computer science recognized in school as an autonomous scientific subject, rather than to identify a "body of knowledge" or "assessment methods" for CT. It is particularly important to stress that the scientific novelty associated with CT is the shift from the "problem solving" of mathematics to the "having problem solved" of computer science. Without the "effective agent", who automatically executes the instructions received to solve the problem, there would be no computer science, but just mathematics. Another criticism in the same paper is that focusing on "problem solving" is too narrow, since "solving a problem is just an instance of a situation where one wants to reach a specified goal". The paper therefore generalizes the original definitions by Cuny, Snyder, and Wing and by Aho as follows: "Computational thinking is the thought processes involved in modeling a situation and specifying the ways an information-processing agent can effectively operate within it to reach an externally specified (set of) goal(s)."

Many definitions of CT describe it only at the skill level, because the momentum behind its growth comes from its promise to boost STEM education. The latest movement in STEM education is based on suggestions (by learning theories) that we teach students experts' habits of mind. So, whether it is computational thinking, scientific thinking, or engineering thinking, the motivation is the same, and so is the challenge: teaching experts' habits of mind to novices is inherently problematic because of the prerequisite content knowledge and practice skills needed to engage them in the same thinking processes as the experts. Only when we link the experts' habits of mind to fundamental cognitive processes can we narrow their skill-sets down to more basic competencies that can be taught to novices. There have been only a few studies that actually address the cognitive essence of CT. Among those, Yasar (Communications of the ACM, Vol. 61, No. 7, July 2018) describes CT as thinking that is generated/facilitated by a computational device, be it biological or electronic. Accordingly, everyone employs CT, not just computer scientists, and it can be improved via education and experience.

Computational logic and human thinking

Computational logic is an approach to computing that includes both computational thinking and logical thinking. It is based on a view of computing as the application of general-purpose logical reasoning to domain-specific knowledge expressed in logical terms.

Teaching materials for computational logic as a computer language for children were developed in the early 1980s. University level texts for non-computing students were developed in the early 2010s. More recently, a variety of new teaching materials have been developed to bridge the gap between STEM and non-STEM academic disciplines.

Gödel's incompleteness theorems

From Wikipedia, the free encyclopedia

Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.

The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
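In modern notation, and incorporating Rosser's strengthening discussed later in the article, the two theorems are often summarized roughly as follows, where $\vdash$ denotes formal provability in a consistent, effectively axiomatized system $F$ containing enough arithmetic, $G_F$ is the Gödel sentence of $F$, and $\mathrm{Con}(F)$ is the arithmetized statement that $F$ is consistent:

    % First incompleteness theorem: G_F is undecidable in F.
    \[ F \nvdash G_F \qquad\text{and}\qquad F \nvdash \neg G_F . \]

    % Second incompleteness theorem, with
    % Con(F) := \neg\mathrm{Prov}_F(\ulcorner 0 = 1 \urcorner):
    \[ F \nvdash \mathrm{Con}(F). \]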

Gödel's incompleteness theorems, which employ a diagonal argument, were among the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem.

Formal systems: completeness, consistency, and effective axiomatization

The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. Particularly in the context of first-order logic, formal systems are also called formal theories. In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability within these systems, rather than about "provability" in an informal sense.

There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization. The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties.

Effective axiomatization

A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is recursively enumerable. This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC).
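Stated compactly, writing $\ulcorner\varphi\urcorner$ for the Gödel number of a sentence $\varphi$ and $\vdash$ for provability:

    \[ F \text{ is effectively axiomatized} \iff
       \{\, \ulcorner\varphi\urcorner : F \vdash \varphi \,\}
       \text{ is recursively enumerable.} \]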

The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However, it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems.

Completeness

A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms. This is the notion relevant for Gödel's first incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language. In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is semantically complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone.
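In symbols, for a formal system $F$:

    \[ F \text{ is complete} \iff
       \text{for every sentence } \varphi,\ F \vdash \varphi
       \ \text{ or } \ F \vdash \neg\varphi . \]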

In a system of mathematics, thinkers such as Hilbert believed that it was just a matter of time to find an axiomatization that would allow one to either prove or disprove (by proving its negation) every mathematical formula.

A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) can not be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue.

The theory of first-order Peano arithmetic seems consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus by the first incompleteness theorem, Peano arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete.

Consistency

A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and inconsistent otherwise. That is to say, a consistent axiomatic system is one that is free from contradiction.
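In symbols:

    \[ F \text{ is consistent} \iff
       \text{there is no sentence } \varphi \text{ with }
       F \vdash \varphi \ \text{ and } \ F \vdash \neg\varphi . \]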

Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent because if κ is the least such cardinal, then Vκ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model.

If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent.

Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory.

Systems which contain arithmetic

The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems.

The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory.

The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication.

Dan Willard (2001) has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; that is to say, these systems are consistent and capable of proving their own consistency (see self-verifying theories).

Conflicting goals

In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers (Smith 2007, p. 2). In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems.
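The principle of explosion itself takes only a few natural-deduction steps: from a contradiction, an arbitrary sentence $\psi$ follows.

    \begin{aligned}
    1.\;& \varphi            && \text{(provable from the inconsistent axioms)} \\
    2.\;& \neg\varphi        && \text{(provable from the inconsistent axioms)} \\
    3.\;& \varphi \lor \psi  && \text{(disjunction introduction, from 1)} \\
    4.\;& \psi               && \text{(disjunctive syllogism, from 2 and 3)}
    \end{aligned}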

The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here ZFC + "there exists an inaccessible cardinal" cannot be proved consistent from within itself. It is also not complete, as illustrated by the continuum hypothesis, which is unresolvable in ZFC + "there exists an inaccessible cardinal".

The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized.

First incompleteness theorem

Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by J. Barkley Rosser (1936) using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated.

First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2020)

The unprovable statement GF referred to by the theorem is often referred to as "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence.

Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F' that contains the whole of F plus GF as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F', and thus F' also cannot be complete. In this case, GF is indeed a theorem in F', because it is an axiom. Because GF states only that it is not provable in F, no contradiction is presented by its provability within F'. However, because the incompleteness theorem applies to F', there will be a new Gödel statement GF' for F', showing that F' is also incomplete. GF' will differ from GF in that GF' will refer to F', rather than F.

Syntactic form of the Gödel sentence

The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be GF itself. In this way, the Gödel sentence GF indirectly states its own unprovability within F.

To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete.

Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation (Smith 2007, p. 141). As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level Π⁰₁ of the arithmetical hierarchy). Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables (Franzén 2005, p. 71).
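
Schematically, then, the Gödel sentence can be put in either of the following shapes, where the body φ is quantifier-free and the polynomial p is illustrative rather than any specific construction:

∀x1 ∀x2 … ∀xk φ(x1, x2, …, xk)

∀x1 ∀x2 … ∀xk p(x1, x2, …, xk) ≠ 0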

Truth of the Gödel sentence

The first incompleteness theorem shows that the Gödel sentence GF of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true (Smoryński 1977, p. 825; also see Franzén 2005, pp. 28–33). For this reason, the sentence GF is often said to be "true but unprovable." (Raatikainen 2020). However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence GF may only be arrived at via a meta-analysis from outside the system. In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication Con(F)→GF, where Con(F) is a canonical sentence asserting the consistency of F (Smoryński 1977, p. 840, Kikuchi & Tanaka 1994, p. 403).

Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem (Franzén 2005, p. 135). That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard" – it must contain elements that do not correspond to any standard natural number (Raatikainen 2020, Franzén 2005, p. 135).

Relationship with the liar paradox

Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.

It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski.

Extensions of Gödel's original result

Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways. These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions.

Gödel demonstrated the incompleteness of the system of Principia Mathematica, a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results.

Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but ω-consistent. A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the system proves ~P(m), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser (1936) strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem.

Second incompleteness theorem

For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F."
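
Written with an explicit proof predicate ProofF(x, y), read as "x codes a derivation in F of the formula coded by y", one standard rendering of this canonical sentence is:

Cons(F) = ~∃ x ProofF(x, #(0=1)),

where #(0=1) denotes the Gödel number of the formula "0=1" (this # notation is explained in the section on the Hilbert–Bernays conditions below).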

Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized. This theorem states that for any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself. This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself.

Expressing consistency

There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency.

Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.)

The Hilbert–Bernays conditions

The standard proof of the second incompleteness theorem assumes that the provability predicate ProvA(P) satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the provability conditions say:

  1. If F proves P, then F proves ProvA(#(P)).
  2. F proves 1.; that is, F proves ProvA(#(P)) → ProvA(#(ProvA(#(P)))).
  3. F proves ProvA(#(P → Q)) ∧ ProvA(#(P)) → ProvA(#(Q))   (analogue of modus ponens).
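
From these conditions, a condensed standard sketch of the second incompleteness theorem can be given. Writing G for the Gödel sentence of F and Cons(F) for the consistency statement ~ProvA(#(0=1)):

  1. F proves G ↔ ~ProvA(#(G)) (diagonal lemma), and hence F proves ProvA(#(G)) → ~G.
  2. Applying conditions 1–3 to the theorem in step 1, F proves ProvA(#(G)) → ProvA(#(~G)).
  3. Since proofs of G and of ~G combine into a proof of 0=1, F proves ProvA(#(G)) → ProvA(#(0=1)); contraposing, F proves Cons(F) → ~ProvA(#(G)), and therefore F proves Cons(F) → G.
  4. So if F proved Cons(F), it would prove G, contradicting the first incompleteness theorem; hence a consistent F satisfying the conditions cannot prove Cons(F).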

There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic.

Implications for consistency proofs

Gödel's second incompleteness theorem also implies that a system F1 satisfying the technical conditions outlined above cannot prove the consistency of any system F2 that proves the consistency of F1. This is because such a system F1 can prove that if F2 proves the consistency of F1, then F1 is in fact consistent. For the claim that F1 is consistent has the form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in F1". If F1 were in fact inconsistent, then F2 would prove for some n that n is the code of a contradiction in F1. But if F2 also proved that F1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in F1 to show that if F2 is consistent, then F1 is consistent. Since, by the second incompleteness theorem, F1 does not prove its own consistency, it cannot prove the consistency of F2 either.
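
Schematically: F1 proves Cons(F2) → Cons(F1), so if F1 also proved Cons(F2) it would prove Cons(F1), which the second incompleteness theorem rules out for a consistent F1.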

This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out.

The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system F in some system F' that is in some sense less doubtful than F itself, for example, weaker than F. For many naturally occurring theories F and F', such as F = Zermelo–Fraenkel set theory and F' = primitive recursive arithmetic, the consistency of F' is provable in F, and thus F' cannot prove the consistency of F by the above corollary of the second incompleteness theorem.

The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of a different system with different axioms. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called ε0 is wellfounded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory.

Examples of undecidable statements

There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem).

Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense.

Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics.

The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proved from ZFC.

Shelah (1974) showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory.

Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.
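
The flavor of Chaitin's argument can be conveyed by a short sketch in code. The helper functions named here are assumptions made for illustration: enumerate_theorems stands for an effective enumeration of the system's theorems, and parse_complexity_claim for a routine recognizing theorems of the form "K(n) > c":

    # Hypothetical sketch of the Berry-style argument behind Chaitin's theorem.
    def find_high_complexity_number(c):
        """Search the system's theorems for a claim that some specific
        number n has Kolmogorov complexity greater than c."""
        for theorem in enumerate_theorems():         # assumed enumerator of theorems
            claim = parse_complexity_claim(theorem)  # assumed: returns (n, bound) or None
            if claim is not None:
                n, bound = claim
                if bound > c:
                    return n

If this search ever succeeded, the searcher itself, together with the constant c, would be a program of roughly log2(c) + O(1) bits that outputs n, so the complexity of n would be far below c for large c, contradicting the very theorem it found; hence, above some bound c, no such theorem is provable.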

Undecidable statements provable in larger systems

These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano arithmetic.

In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic.

Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable even in the much stronger system ATR0, which codifies the principles acceptable on the basis of a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory.

Relationship with computability

The incompleteness theorem is closely related to several results about undecidable sets in recursion theory.

Kleene (1943) presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: no computer program can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by Shoenfield (1967); Charlesworth (1981); and Hopcroft & Ullman (1979).

Franzén (2005) explains how Matiyasevich's solution to Hilbert's 10th problem can be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial p(x1, x2,...,xk) with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic T will prove this. Moreover, suppose the system T is ω-consistent. In that case, it will never prove that a particular polynomial equation has a solution when there is no solution in the integers. Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that T cannot be ω-consistent and complete. Moreover, for each consistent effectively generated system T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T.
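
The contradiction can be made concrete with a small sketch in code. The helpers named here are assumptions for illustration: enumerate_theorems stands for an effective enumeration of T's theorems, and encode_has_solution / encode_no_solution for the arithmetic sentences "p = 0 has an integer solution" and its negation. If T were complete and ω-consistent, this loop would halt on every input and so decide Hilbert's 10th problem, which Matiyasevich proved impossible:

    # Hypothetical sketch: a decision procedure that cannot exist.
    def decide_diophantine(p):
        yes = encode_has_solution(p)  # assumed encoding of "p = 0 has a solution"
        no = encode_no_solution(p)    # assumed encoding of its negation
        for theorem in enumerate_theorems():  # assumed enumerator of T's theorems
            if theorem == yes:
                return True
            if theorem == no:
                return False
        # If T is complete, one of the two target sentences is eventually
        # proved, so the loop never falls through.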

Smoryński (1977) shows how the existence of recursively inseparable sets can be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable.

Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include statements that are false in the standard model; these theories are known as ω-inconsistent.

Proof sketch for the first theorem

The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria:

  1. Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements—such as their truth and falsehood—will be equivalent to determining whether their Gödel numbers have certain properties, and that properties of the statements can therefore be demonstrated by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system).
  2. In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument).
  3. Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be ω-consistent. Hence the original assumption that the proposed system met the criteria is false.

Arithmetization of syntax

The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.

In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number:

  • The word hello is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111.
  • The logical statement x=y => y=x is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120.
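
A toy version of this encoding is easy to program. The sketch below is an illustration of the idea rather than Gödel's original prime-power scheme; it packs three-digit ASCII codes into one natural number and unpacks them again:

    def godel_number(s):
        """Map a formula, given as text, to a unique natural number by
        concatenating three-digit ASCII codes."""
        return int("".join("%03d" % ord(c) for c in s))

    def decode(n):
        """Recover the formula from its Gödel number."""
        digits = str(n)
        # Restore any leading zeros lost when the digit string became an int.
        digits = digits.zfill(-(-len(digits) // 3) * 3)
        return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

    assert godel_number("hello") == 104101108108111
    assert decode(godel_number("x=y => y=x")) == "x=y => y=x"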

In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or does not have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers that represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements.

Construction of a statement about "provability"

Having shown that in principle the system can indirectly make statements about provability by analyzing properties of the numbers representing statements, it is now possible to show how to create a statement that actually does this.

A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2 × 3 = 6".

Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).

The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists:

Bew(y) = ∃ x (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y).

The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of T; the string "Bew" itself is not claimed to be part of this language.

An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied.
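
In computational terms, Bew(y) asserts that an unbounded search succeeds. Here is a minimal sketch, assuming a decidable helper is_proof_of(x, y) that checks whether x codes a valid derivation of the formula coded by y:

    def search_proof(y):
        """Halt and return a proof code if the formula coded by y is
        provable; otherwise search forever, mirroring the unbounded
        existential quantifier in Bew(y)."""
        x = 0
        while True:
            if is_proof_of(x, y):  # assumed decidable proof-checking relation
                return x
            x += 1

Provability is thus semi-decidable: a proof, if one exists, is eventually found, but a failed search never establishes that no proof exists.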

Diagonalization

The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves

p ↔ F(G(p)).

By letting F be the negation of Bew(x), we obtain the theorem

p ↔ ~Bew(G(p))

and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula.

The statement p is not literally equal to ~Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English:

", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable.

This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method.
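
The same device of "an expression applied to its own quotation" is what makes self-reproducing programs (quines) possible. The two-line Python program below, a standard example rather than anything taken from Gödel's proof, prints its own source by applying a template to a quoted copy of itself, just as the sentence above applies a predicate to a quoted copy of itself:

    s = 's = %r\nprint(s %% s)'
    print(s % s)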

Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section.

If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable.

If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable.

Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system.

In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system:

  • If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable.
  • If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent.

If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(x) is now different. Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent.

Proof via Berry's paradox

Boolos (1989) sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke. Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic.

Computer verified proofs

The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers.

Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm (Shankar 1994), by Russell O'Connor in 2003 using Rocq (previously known as Coq) (O'Connor 2005) and by John Harrison in 2009 using HOL Light (Harrison 2009). A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle (Paulson 2014).

Proof sketch for the second theorem

The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system S using a formal predicate P for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system S itself.

Let p stand for the undecidable sentence constructed above, and assume for purposes of obtaining a contradiction that the consistency of the system S can be proved from within the system S itself. This is equivalent to proving the statement "System S is consistent". Now consider the statement c, where c = "If the system S is consistent, then p is not provable". The proof of sentence c can be formalized within the system S, and therefore the statement c, whose consequent is "p is not provable" (or identically, "not P(p)"), can be proved in the system S.

Observe then that if we can prove that the system S is consistent (i.e. the hypothesis of c), then we have proved that p is not provable. But this is a contradiction, since by the first incompleteness theorem this sentence (i.e. the consequent of c, "p is not provable") is precisely what was constructed to be unprovable. Notice that this is why we require formalizing the first incompleteness theorem in S: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the theorem holds in S. So we cannot prove that the system S is consistent, and the statement of the second incompleteness theorem follows.

Discussion and implications

The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles.

Consequences for logicism and Hilbert's second problem

The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic. Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply to first-order logic just as they apply to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first-order logic have this problem.

Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem").

Minds and machines

Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.

Putnam (1960) suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine.

Wigderson (2010) has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us."

Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites Gödel's theorems as an example of what he calls a strange loop, a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure that gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from how the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modeling the concept of the very entity doing the perception. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following:

Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false.

In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts, and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power.

There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside.

Paraconsistent logic

Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Priest (1984, 2006) argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system. Shapiro (2002) gives a more mixed appraisal of the applications of Gödel's theorems to dialetheism.

Appeals to the incompleteness theorems in other fields

Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Franzén (2005), Raatikainen (2005), Sokal & Bricmont (1999), and Stangroom & Benson (2006). Sokal & Bricmont (1999) and Stangroom & Benson (2006), for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal & Bricmont (1999) criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.).

History

After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem. At the time, theories of natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of natural numbers alone were known as "arithmetic".

Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound.

In the course of his research, Gödel discovered that although a sentence asserting its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's undefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigl, and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week.

Announcement

The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively. The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying,

For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. ... The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers: We must know. We shall know!

This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!", were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face.

Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for a conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930. Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930.

Gödel's paper was published in the Monatshefte in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the Monatshefte; the prompt acceptance of the first paper was one reason he changed his plans.

Generalization and acceptance

Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency if the Gödel sentence was changed appropriately. These developments left the incompleteness theorems in essentially their modern form.

Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent.

The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem.

Criticisms

Finsler

Finsler (1926) used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability and had only a superficial resemblance to Gödel's work. Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization. Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career.

Zermelo

In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument. In October, Gödel replied with a 10-page letter, where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system; by Tarski's undefinability theorem, this is not true in general. However, Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor". Gödel decided that pursuing the matter further was pointless, and Carnap agreed. Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories.

Wittgenstein

Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his Remarks on the Foundations of Mathematics (1956), in particular one section sometimes called the "notorious paragraph", where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas.

Multiple commentators have read Wittgenstein as misunderstanding Gödel, although Floyd & Putnam (2000) as well as Priest (2004) have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews of Wittgenstein's remarks, all of which were extremely negative. The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? He intentionally utters trivially nonsensical statements", and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems, writing:

It is clear from the passages you cite that Wittgenstein did not understand [the first incompleteness theorem] (or pretended not to understand it). He interpreted it as a kind of logical paradox, while in fact it is just the opposite, namely a mathematical theorem within an absolutely uncontroversial part of mathematics (finitary number theory or combinatorics).

Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. Floyd & Putnam (2000) argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability. Rodych (2003) argues that their interpretation of Wittgenstein is not historically justified. Berto (2009) explores the relationship between Wittgenstein's writing and theories of paraconsistent logic.
