
Saturday, September 3, 2022

Cognitive distortion

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cognitive_distortion

A cognitive distortion is an exaggerated or irrational thought pattern involved in the onset or perpetuation of psychopathological states, such as depression and anxiety.

Cognitive distortions are thoughts that cause individuals to perceive reality inaccurately. According to Aaron Beck's cognitive model, a negative outlook on reality, sometimes called negative schemas (or schemata), is a factor in symptoms of emotional dysfunction and poorer subjective well-being. Specifically, negative thinking patterns reinforce negative emotions and thoughts. During difficult circumstances, these distorted thoughts can contribute to an overall negative outlook on the world and a depressive or anxious mental state. According to hopelessness theory and Beck's theory, the meaning or interpretation that people give to their experience importantly influences whether they will become depressed and whether they will experience severe, repeated, or long-duration episodes of depression.

Challenging and changing cognitive distortions is a key element of cognitive behavioral therapy (CBT).

Definition

Cognitive comes from the Medieval Latin cognitīvus, equivalent to Latin cognit(us), 'known'. Distortion means the act of twisting or altering something out of its true, natural, or original state.

History

In 1957, American psychologist Albert Ellis laid groundwork that would later aid cognitive therapy in correcting cognitive distortions and indirectly help David D. Burns write The Feeling Good Handbook. Ellis created what he called the ABC Technique of rational beliefs. The ABC stands for the activating event, the beliefs that are irrational, and the consequences that follow from those beliefs. Ellis wanted to show that it is not the activating event that causes the emotional behavior or the consequences, but the beliefs and the irrational way the person perceives the event. With this model, Ellis used rational emotive behavior therapy (REBT) with his patients to help them "reframe" or reinterpret the experience in a more rational manner. In this model Ellis explains it all to his clients, while Beck helps his clients figure this out on their own. Beck first started to notice these automatic distorted thought processes when practicing psychoanalysis, while his patients followed the rule of saying whatever came to mind. He realized that his patients had irrational fears, thoughts, and perceptions that were automatic, and that these automatic thoughts often went unreported even though he knew his patients had them. Most of the time the thoughts were biased against the patients themselves and highly erroneous.

Beck believed that negative schemas develop and manifest themselves in a person's perspective and behavior. These distorted thought processes lead to a focus on degrading the self, amplifying minor external setbacks, interpreting others' harmless comments as ill-intended, and simultaneously seeing oneself as inferior. Inevitably these cognitions are reflected in behavior: a reduced desire to care for oneself, a reduced desire to seek pleasure, and a tendency to give up. These exaggerated perceptions feel real and accurate because the schemas, after being reinforced through behavior, tend to become automatic and do not allow time for reflection. This cycle is also known as Beck's cognitive triad, referring to the theory that the person's negative schema is applied to the self, the future, and the environment.

In 1972, psychiatrist, psychoanalyst, and cognitive therapy scholar Aaron T. Beck published Depression: Causes and Treatment. He was dissatisfied with the conventional Freudian treatment of depression, because there was no empirical evidence for the success of Freudian psychoanalysis. Beck's book provided a comprehensive and empirically-supported theoretical model for depression—its potential causes, symptoms, and treatments. In Chapter 2, titled "Symptomatology of Depression", he described "cognitive manifestations" of depression, including low self-evaluation, negative expectations, self-blame and self-criticism, indecisiveness, and distortion of the body image.

Beck's student David D. Burns continued research on the topic. In his book Feeling Good: The New Mood Therapy, Burns described personal and professional anecdotes related to cognitive distortions and their elimination. When Burns published Feeling Good: The New Mood Therapy, it made Beck's approach to distorted thinking widely known and popularized. Burns sold over four million copies of the book in the United States alone. It was a book commonly "prescribed" for patients who have cognitive distortions that have led to depression. Beck approved of the book, saying that it would help others alter their depressed moods by simplifying the extensive study and research that had taken place since shortly after Beck had started as a student and practitioner of psychoanalytic psychiatry. Nine years later, The Feeling Good Handbook was published, which was also built on Beck's work and includes a list of ten specific cognitive distortions that will be discussed throughout this article.

Main types

(Figure: Examples of some common cognitive distortions seen in depressed and anxious individuals. People may be taught how to identify and alter these distortions as part of cognitive behavioural therapy.)

John C. Gibbs and Granville Bud Potter propose four categories for cognitive distortions: self-centered, blaming others, minimizing-mislabeling, and assuming the worst. The cognitive distortions listed below are categories of automatic thinking, and are to be distinguished from logical fallacies.

All-or-nothing thinking

The "all-or-nothing thinking distortion" is also referred to as "splitting", "black-and-white thinking", and "polarized thinking." Someone with the all-or-nothing thinking distortion looks at life in black and white categories. Either they are a success or a failure; either they are good or bad; there is no in-between. According to one article, "Because there is always someone who is willing to criticize, this tends to collapse into a tendency for polarized people to view themselves as a total failure. Polarized thinkers have difficulty with the notion of being 'good enough' or a partial success."

  • Example (from The Feeling Good Handbook): A woman eats a spoonful of ice cream. She thinks she is a complete failure for breaking her diet. She becomes so depressed that she ends up eating the whole quart of ice cream.

This example captures the polarized nature of this distortion—the person believes they are totally inadequate if they fall short of perfection. In order to combat this distortion, Burns suggests thinking of the world in terms of shades of gray. Rather than viewing herself as a complete failure for eating a spoonful of ice cream, the woman in the example could still recognize her overall effort to diet as at least a partial success.

This distortion is commonly found in perfectionists.

Jumping to conclusions

Reaching preliminary conclusions (usually negative) with little (if any) evidence. Three specific subtypes are identified:

Mind reading

Inferring a person's possible or probable (usually negative) thoughts from their behavior and nonverbal communication; taking precautions against the worst suspected case without asking the person.

  • Example 1: A student assumes that the readers of their paper have already made up their minds concerning its topic, and, therefore, writing the paper is a pointless exercise.
  • Example 2: Kevin assumes that because he sits alone at lunch, everyone else must think he is a loser. (This can encourage self-fulfilling prophecy; Kevin may not initiate social contact because of his fear that those around him already perceive him negatively).

Fortune-telling

Predicting outcomes (usually negative) of events.

  • Example: A depressed person tells themselves they will never improve; they will continue to be depressed for their whole life.

One way to combat this distortion is to ask, "If this is true, does it say more about me or them?"

Labeling

Labeling occurs when someone overgeneralizes characteristics of other people. For example, someone might use an unfavorable term to describe a complex person or event.

Emotional reasoning

In the emotional reasoning distortion, feelings are assumed to expose the true nature of things, and reality is experienced as a reflection of emotionally linked thoughts; something is believed true solely on the basis of a feeling.

  • Examples: "I feel stupid, therefore I must be stupid". Feeling fear of flying in planes, and then concluding that planes must be a dangerous way to travel. Feeling overwhelmed by the prospect of cleaning one's house, therefore concluding that it's hopeless to even start cleaning.

Should/shouldn't and must/mustn't statements

Making "must" or "should" statements was included by Albert Ellis in his rational emotive behavior therapy (REBT), an early form of CBT; he termed it "musturbation". Michael C. Graham called it "expecting the world to be different than it is". It can be seen as demanding particular achievements or behaviors regardless of the realistic circumstances of the situation.

  • Example: After a performance, a concert pianist believes he or she should not have made so many mistakes.
  • In Feeling Good: The New Mood Therapy, David Burns clearly distinguished between pathological "should statements", moral imperatives, and social norms.

A related cognitive distortion, also present in Ellis' REBT, is a tendency to "awfulize"; to say a future scenario will be awful, rather than to realistically appraise the various negative and positive characteristics of that scenario. According to Burns, "must" and "should" statements are negative because they cause the person to feel guilty and upset at themselves. Some people also direct this distortion at other people, which can cause feelings of anger and frustration when that other person does not do what they should have done. He also mentions how this type of thinking can lead to rebellious thoughts. In other words, trying to whip oneself into doing something with "shoulds" may cause one to desire just the opposite.

Gratitude traps

A gratitude trap is a type of cognitive distortion that typically arises from misunderstandings regarding the nature or practice of gratitude. The term can refer to one of two related but distinct thought patterns:

  • A self-oriented thought process involving feelings of guilt, shame, or frustration related to one's expectations of how things "should" be
  • An "elusive ugliness in many relationships, a deceptive 'kindness,' the main purpose of which is to make others feel indebted", as defined by psychologist Ellen Kenner

Blaming others

Personalization and blaming

Personalization is assigning personal blame disproportionate to the level of control a person realistically has in a given situation.

  • Example 1: A foster child assumes that he/she has not been adopted because he/she is not "loveable enough".
  • Example 2: A child has bad grades. His/her mother believes it is because she is not a good enough parent.

Blaming is the opposite of personalization. In the blaming distortion, the disproportionate level of blame is placed upon other people, rather than oneself. In this way, the person avoids taking personal responsibility, making way for a "victim mentality".

  • Example: Placing blame for marital problems entirely on one's spouse.

Always being right

In this cognitive distortion, being wrong is unthinkable. It is characterized by actively trying to prove one's actions or thoughts to be correct, and sometimes prioritizing self-interest over the feelings of another person. One's own facts about one's surroundings are always taken to be right, while other people's opinions and perspectives are dismissed as wrong.

Fallacy of change

Relying on social control to obtain cooperative actions from another person. The underlying assumption of this thinking style is that one's happiness depends on the actions of others. The fallacy of change also assumes that other people should change to suit one's own interests automatically and/or that it is fair to pressure them to change. It may be present in most abusive relationships in which partners' "visions" of each other are tied into the belief that happiness, love, trust, and perfection would just occur once they or the other person change aspects of their beings.

Minimizing-mislabeling

Magnification and minimization

Giving proportionally greater weight to a perceived failure, weakness or threat, or lesser weight to a perceived success, strength or opportunity, so that the weight differs from that assigned by others, such as "making a mountain out of a molehill". In depressed clients, often the positive characteristics of other people are exaggerated and their negative characteristics are understated.

  • Catastrophizing – Giving greater weight to the worst possible outcome, however unlikely, or experiencing a situation as unbearable or impossible when it is just uncomfortable.

Labeling and mislabeling

A form of overgeneralization; attributing a person's actions to their character instead of to an attribute. Rather than assuming the behaviour to be accidental or otherwise extrinsic, one assigns a label to someone or something that is based on the inferred character of that person or thing.

Assuming the worst

Overgeneralizing

Someone who overgeneralizes makes faulty generalizations from insufficient evidence, such as seeing a "single negative event" as a "never-ending pattern of defeat", and thus drawing a very broad conclusion from a single incident or a single piece of evidence. Even if something bad happens only once, it is expected to happen over and over again.

  • Example 1: A young woman is asked out on a first date, but not a second one. She is distraught as she tells her friend, "This always happens to me! I'll never find love!"
  • Example 2: A woman is lonely and often spends most of her time at home. Her friends sometimes ask her to dinner and to meet new people. She feels it is useless to even try. No one really could like her. And anyway, all people are the same; petty and selfish.

One suggestion to combat this distortion is to "examine the evidence" by performing an accurate analysis of one's situation. This aids in avoiding exaggerating one's circumstances.

Disqualifying the positive

Disqualifying the positive refers to rejecting positive experiences by insisting they "don't count" for some reason or other. Negative belief is maintained despite contradiction by everyday experiences. Disqualifying the positive may be the most common fallacy in the cognitive distortion range; it is often analyzed with "always being right", a type of distortion where a person is in an all-or-nothing self-judgment. People in this situation show signs of depression. Examples include:

  • "I will never be as good as Jane"
  • "Anyone could have done as well"
  • "They are just congratulating me to be nice"

Mental filtering

Filtering distortions occur when an individual dwells only on the negative details of a situation and filters out the positive aspects.

  • Example: Andy gets mostly compliments and positive feedback about a presentation he has done at work, but he also has received a small piece of criticism. For several days following his presentation, Andy dwells on this one negative reaction, forgetting all of the positive reactions that he had also been given.

The Feeling Good Handbook notes that filtering is like a "drop of ink that discolors a beaker of water". One suggestion to combat filtering is a cost–benefit analysis. A person with this distortion may find it helpful to sit down and assess whether filtering out the positive and focusing on the negative is helping or hurting them in the long run.

Conceptualization

In a series of publications, philosopher Paul Franceschi has proposed a unified conceptual framework for cognitive distortions designed to clarify their relationships and define new ones. This conceptual framework is based on three notions: (i) the reference class (a set of phenomena or objects, e.g. events in the patient's life); (ii) dualities (positive/negative, qualitative/quantitative, ...); (iii) the taxon system (degrees that allow attributing properties, according to a given duality, to the elements of a reference class). In this model, "dichotomous reasoning", "minimization", "maximization" and "arbitrary focus" constitute general cognitive distortions (applying to any duality), whereas "disqualification of the positive" and "catastrophism" are specific cognitive distortions, applying to the positive/negative duality. This conceptual framework posits two additional cognitive distortion classifications: the "omission of the neutral" and the "requalification in the other pole".

Cognitive restructuring

Cognitive restructuring (CR) is a popular form of therapy used to identify and reject maladaptive cognitive distortions, and is typically used with individuals diagnosed with depression. In CR, the therapist and client first examine a stressful event or situation reported by the client. For example, a depressed male college student who experiences difficulty in dating might believe that his "worthlessness" causes women to reject him. Together, therapist and client might then create a more realistic cognition, e.g., "It is within my control to ask girls on dates. However, even though there are some things I can do to influence their decisions, whether or not they say yes is largely out of my control. Thus, I am not responsible if they decline my invitation." CR therapies are designed to eliminate "automatic thoughts" that include clients' dysfunctional or negative views. According to Beck, doing so reduces feelings of worthlessness, anxiety, and anhedonia that are symptomatic of several forms of mental illness. CR is the main component of Beck's and Burns's CBT.

Narcissistic defense

Those diagnosed with narcissistic personality disorder tend, unrealistically, to view themselves as superior, overemphasizing their strengths and understating their weaknesses. Narcissists use exaggeration and minimization this way to shield themselves against psychological pain.

Decatastrophizing

In cognitive therapy, decatastrophizing or decatastrophization is a cognitive restructuring technique that may be used to treat cognitive distortions, such as magnification and catastrophizing, commonly seen in psychological disorders like anxiety and psychosis. Major features of these disorders are the subjective report of being overwhelmed by life circumstances and the incapability of affecting them.

The goal of CR is to help the client change their perceptions to render the felt experience as less significant.

Criticism

Common criticisms of the diagnosis of cognitive distortion relate to epistemology and the theoretical basis. If the perceptions of the patient differ from those of the therapist, it may not be because of intellectual malfunctions but because the patient has different experiences. In some cases, depressed subjects appear to be "sadder but wiser".

Lateral computing

From Wikipedia, the free encyclopedia

Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking was popularized by Edward de Bono as a technique for generating creative ideas and solving problems. Similarly, applying lateral-computing techniques to a problem can make it much easier to arrive at a computationally inexpensive, easy-to-implement, efficient, innovative, or unconventional solution.

The traditional or conventional approach to solving computing problems is either to build mathematical models or to use an IF-THEN-ELSE structure. For example, a brute-force search is used in many chess engines, but this approach is computationally expensive and sometimes arrives at poor solutions. For problems like this, lateral computing can be useful in forming a better solution.

A simple problem of truck backup can be used to illustrate lateral computing. This is a difficult task for traditional computing techniques, yet it has been solved efficiently by the use of fuzzy logic (which is a lateral-computing technique). Lateral computing sometimes arrives at a novel solution to a particular computing problem by modeling how living beings such as humans, ants, and honeybees solve problems, how pure crystals are formed by annealing, how living beings evolve, or how quantum mechanics works.

From lateral-thinking to lateral-computing

Lateral thinking is a technique for creative thinking and problem solving. The brain, as the center of thinking, has a self-organizing information system. It tends to create patterns, and the traditional thinking process uses them to solve problems. The lateral-thinking technique proposes to escape from this patterning to arrive at better solutions through new ideas. Provocative use of information processing is the basic principle underlying lateral thinking.

The provocative operator (PO) is something which characterizes lateral thinking. Its function is to generate new ideas by provocation and by providing an escape route from old ideas. It creates a provisional arrangement of information.

Water logic contrasts with traditional or "rock" logic. Water logic has boundaries which depend on circumstances and conditions, while rock logic has hard boundaries. Water logic, in some ways, resembles fuzzy logic.

Transition to lateral-computing

Lateral computing makes provocative use of information processing, similar to lateral thinking. This can be explained with evolutionary computing, a very useful lateral-computing technique. Evolution proceeds by change and selection: random mutation provides the change, and selection occurs through survival of the fittest. The random mutation works as provocative information processing and provides a new avenue for generating better solutions to the computing problem. The term "lateral computing" was first proposed by Prof. C. R. Suthikshn Kumar, and the First World Congress on Lateral Computing (WCLC 2004) was organized with international participants in December 2004.

Lateral computing takes the analogies from real-world examples such as:

  • How slow cooling of the hot gaseous state results in pure crystals (Annealing)
  • How the neural networks in the brain solve such problems as face and speech recognition
  • How simple insects such as ants and honeybees solve some sophisticated problems
  • How the evolution of human beings from molecular life forms is mimicked by evolutionary computing
  • How living organisms defend themselves against diseases and heal their wounds
  • How electricity is distributed by grids

Differentiating factors of "lateral computing":

  • Does not directly approach the problem through mathematical means.
  • Uses indirect models or looks for analogies to solve the problem.
  • Radically different from what is in vogue, such as using "photons" for computing in optical computing. This is rare as most conventional computers use electrons to carry signals.
  • Sometimes the Lateral Computing techniques are surprisingly simple and deliver high performance solutions to very complex problems.
  • Some of the techniques in lateral computing use "unexplained jumps". These jumps may not look logical. The example is the use of "Mutation" operator in genetic algorithms.

Convention – lateral

It is very hard to draw a clear boundary between conventional and lateral computing. Over time, some unconventional computing techniques become an integral part of mainstream computing, so there will always be an overlap between conventional and lateral computing. Classifying a given computing technique as conventional or lateral is therefore a tough task; the boundaries are fuzzy, and one may approach the classification with fuzzy sets.

Formal definition

Lateral computing is a fuzzy set of all computing techniques which use an unconventional computing approach. Hence lateral computing includes those techniques which use semi-conventional or hybrid computing. The degree of membership for lateral-computing techniques is greater than 0 in the fuzzy set of unconventional computing techniques.

The following brings out some important differentiators for lateral computing.

Conventional computing

  • The problem and technique are directly correlated.
  • Treats the problem with rigorous mathematical analysis.
  • Creates mathematical models.
  • The computing technique can be analyzed mathematically.
Lateral computing

  • The problem may hardly have any relation to the computing technique used
  • Approaches problems by analogies such as human information processing model, annealing, etc.
  • Sometimes the computing technique cannot be mathematically analyzed.

Lateral computing and parallel computing

Parallel computing focuses on improving the performance of computers and algorithms through the use of several computing elements (such as processing elements). The computing speed is improved by using several computing elements in parallel. Parallel computing is an extension of conventional sequential computing. In lateral computing, however, the problem is solved using unconventional information processing, whether with sequential or parallel computing.

A review of lateral-computing techniques

There are several computing techniques which fit the Lateral computing paradigm. Here is a brief description of some of the Lateral Computing techniques:

Swarm intelligence

Swarm intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents, interacting locally with their environment, cause coherent functional global patterns to emerge. SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model.

One interesting swarm intelligent technique is the Ant Colony algorithm:

  • Ants are behaviorally unsophisticated; collectively they perform complex tasks. Ants have highly developed sophisticated sign-based communication.
  • Ants communicate using pheromones; trails are laid that can be followed by other ants.
  • Routing problem: ants drop different pheromones used to compute the "shortest" path from source to destination(s) (a minimal sketch follows this list).
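
As a rough illustration of the pheromone mechanism described above, the following Python sketch lets artificial "ants" repeatedly pick one of two routes, with pheromone evaporation and length-based deposits. The graph, the evaporation rate, and the deposit rule are illustrative assumptions, not any specific published ant-colony algorithm.

```python
import random

# Minimal ant-colony sketch for the shortest-path idea described above.
# Graph, evaporation rate, and deposit rule are illustrative assumptions.
graph = {          # edge -> length
    ("A", "B"): 2, ("B", "D"): 2,   # short route A-B-D (length 4)
    ("A", "C"): 3, ("C", "D"): 4,   # long route  A-C-D (length 7)
}
routes = [["A", "B", "D"], ["A", "C", "D"]]
pheromone = {edge: 1.0 for edge in graph}

def route_length(route):
    return sum(graph[(a, b)] for a, b in zip(route, route[1:]))

def choose_route():
    # Probability of a route is proportional to the pheromone on its edges.
    weights = [sum(pheromone[(a, b)] for a, b in zip(r, r[1:])) for r in routes]
    return random.choices(routes, weights=weights, k=1)[0]

for _ in range(200):
    route = choose_route()
    for edge in pheromone:
        pheromone[edge] *= 0.9                 # evaporation
    deposit = 1.0 / route_length(route)        # shorter routes deposit more
    for a, b in zip(route, route[1:]):
        pheromone[(a, b)] += deposit

best = max(routes, key=lambda r: sum(pheromone[(a, b)] for a, b in zip(r, r[1:])))
print(best)   # typically converges to the shorter route A-B-D
```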

Agent-based systems

Agents are encapsulated computer systems that are situated in some environment and are capable of flexible, autonomous action in that environment in order to meet their design objectives. Agents are considered to be autonomous (independent, not-controllable), reactive (responding to events), pro-active (initiating actions of their own volition), and social (communicative). Agents vary in their abilities: they can be static or mobile, or may or may not be intelligent. Each agent may have its own task and/or role. Agents, and multi-agent systems, are used as a metaphor to model complex distributed processes. Such agents invariably need to interact with one another in order to manage their inter-dependencies. These interactions involve agents cooperating, negotiating and coordinating with one another.

Agent-based systems are computer programs that try to simulate various complex phenomena via virtual "agents" that represent the components of a business system. The behaviors of these agents are programmed with rules that realistically depict how business is conducted. As widely varied individual agents interact in the model, the simulation shows how their collective behaviors govern the performance of the entire system, for instance the emergence of a successful product or an optimal schedule. These simulations are powerful strategic tools for "what-if" scenario analysis: as managers change agent characteristics or "rules", the impact of the change can be easily seen in the model output.
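
The following minimal Python sketch illustrates the agent metaphor in the business-simulation sense described above: consumer agents react to a price, and a seller agent adjusts its behavior based on what it observes. The agent classes, rules, and numbers are hypothetical, chosen only to show how simple local behaviors produce a system-level outcome.

```python
import random

# Minimal agent-based sketch: "consumer" agents react to a price set by a
# "seller" agent; all rules and numbers are illustrative assumptions.
class Consumer:
    def __init__(self):
        self.willingness_to_pay = random.uniform(5, 15)

    def buys_at(self, price):
        return price <= self.willingness_to_pay

class Seller:
    def __init__(self, price=12.0):
        self.price = price

    def adjust(self, sales, market_size):
        # Simple reactive rule: cut the price if fewer than half the
        # consumers bought, raise it otherwise.
        self.price *= 0.95 if sales < market_size / 2 else 1.05

consumers = [Consumer() for _ in range(100)]
seller = Seller()
for step in range(20):
    sales = sum(c.buys_at(seller.price) for c in consumers)
    seller.adjust(sales, len(consumers))
print(f"price settles near {seller.price:.2f}")
```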

Grid computing

By analogy with the electrical power grid, a computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. Applications of grid computing include:

  • Chip design, cryptographic problems, medical instrumentation, and supercomputing.
  • Distributed supercomputing applications use grids to aggregate substantial computational resources in order to tackle problems that cannot be solved on a single system.

Autonomic computing

The autonomic nervous system governs our heart rate and body temperature, thus freeing our conscious brain from the burden of dealing with these and many other low-level, yet vital, functions. The essence of autonomic computing is self-management, the intent of which is to free system administrators from the details of system operation and maintenance.

Four aspects of autonomic computing are:

  • Self-configuration
  • Self-optimization
  • Self-healing
  • Self-protection

This is a grand challenge promoted by IBM.

Optical computing

Optical computing uses photons rather than conventional electrons for computing. There are quite a few instances of optical computers and their successful use. Conventional logic gates use semiconductors, which use electrons to transport signals. In optical computers, the photons in a light beam are used to do computation.

There are numerous advantages of using optical devices for computing such as immunity to electromagnetic interference, large bandwidth, etc.

DNA computing

DNA computing uses strands of DNA to encode the instance of the problem and to manipulate them using techniques commonly available in any molecular biology laboratory in order to simulate operations that select the solution of the problem if it exists.

The DNA molecule is also a code, but one made up of a sequence of four bases that pair up in a predictable manner, and many scientists have therefore thought about the possibility of creating a molecular computer. These computers rely on the much faster reactions of DNA nucleotides binding with their complements, a brute-force method that holds enormous potential for creating a new generation of computers that would be 100 billion times faster than today's fastest PC. DNA computing has been heralded as the "first example of true nanotechnology", and even the "start of a new era", which forges an unprecedented link between computer science and life science.

Example applications of DNA computing include the solution of the Hamiltonian path problem, a known NP-complete problem. The number of required lab operations using DNA grows linearly with the number of vertices of the graph. Molecular algorithms have been reported that solve such cryptographic problems in a polynomial number of steps. As is well known, factoring large numbers is a relevant problem in many cryptographic applications.

Quantum computing

In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. This qubit property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1 as in a classical bit, but also in states corresponding to a blend or quantum superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state. A quantum computer manipulates qubits by executing a series of quantum gates, each a unitary transformation acting on a single qubit or pair of qubits. In applying these gates in succession, a quantum computer can perform a complicated unitary transformation on a set of qubits in some initial state.
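
A qubit's superposition can be mimicked classically by tracking its two amplitudes as a state vector and applying unitary gates as matrices. The small NumPy sketch below is only a classical simulation illustrating the description above, not quantum hardware.

```python
import numpy as np

# State-vector sketch of a single qubit (a classical simulation, not a device):
# amplitudes for |0> and |1>; measurement probabilities are squared magnitudes.
zero = np.array([1.0, 0.0])                    # classical-like state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)

psi = H @ zero                                 # put the qubit in superposition
print(np.abs(psi) ** 2)     # [0.5 0.5]: equal chance of measuring 0 or 1

X = np.array([[0, 1], [1, 0]])                 # NOT gate as another unitary
print(np.abs(X @ psi) ** 2)                    # still [0.5 0.5]
```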

Reconfigurable computing

Field-programmable gate arrays (FPGAs) are making it possible to build truly reconfigurable computers. The computer architecture is transformed by on-the-fly reconfiguration of the FPGA circuitry. The optimal matching between architecture and algorithm improves the performance of the reconfigurable computer. The key features are hardware performance and software flexibility.

For several applications such as fingerprint matching, DNA sequence comparison, etc., reconfigurable computers have been shown to perform several orders of magnitude better than conventional computers.

Simulated annealing

The simulated annealing algorithm is designed by looking at how pure crystals form from a heated gaseous state while the system is cooled slowly. The computing problem is recast as a simulated-annealing exercise, and the solutions are arrived at. The working principle of simulated annealing is borrowed from metallurgy: a piece of metal is heated (the atoms are given thermal agitation), and then the metal is left to cool slowly. The slow and regular cooling of the metal allows the atoms to slide progressively into their most stable ("minimal energy") positions. (Rapid cooling would have "frozen" them in whatever position they happened to be in at the time.) The resulting structure of the metal is stronger and more stable. By simulating the process of annealing inside a computer program, it is possible to find answers to difficult and very complex problems. Instead of minimizing the energy of a block of metal or maximizing its strength, the program minimizes or maximizes some objective relevant to the problem at hand.
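
A minimal sketch of the idea, assuming a toy one-dimensional objective: candidate moves play the role of thermal agitation, and the acceptance probability shrinks as the temperature is lowered. The objective function, step size, and cooling schedule are illustrative choices, not part of any particular published recipe.

```python
import math
import random

# Simulated-annealing sketch minimizing a simple 1-D objective.
def objective(x):
    return x * x + 10 * math.sin(x)   # has several local minima

x = random.uniform(-10, 10)
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.uniform(-1, 1)          # small random "agitation"
    delta = objective(candidate) - objective(x)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the temperature is lowered (the slow cooling).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999                           # slow, regular cooling

print(f"found x = {x:.3f}, objective = {objective(x):.3f}")
```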

Soft computing

One of the main components of lateral computing is soft computing, which approaches problems using the human information-processing model. Soft computing comprises fuzzy logic, neuro-computing, evolutionary computing, machine learning, and probabilistic-chaotic computing.

Neuro computing

Instead of solving a problem by creating a non-linear equation model of it, the biological neural network analogy is used for solving the problem. The neural network is trained like a human brain to solve a given problem. This approach has become highly successful in solving some of the pattern recognition problems.

Evolutionary computing

The genetic algorithm (GA) resembles natural evolution and provides universal optimization. Genetic algorithms start with a population of chromosomes which represent candidate solutions. The solutions are evaluated using a fitness function, and a selection process determines which solutions are used in the competition process. New solutions are created using evolutionary operators such as mutation and crossover. These algorithms are highly successful in solving search and optimization problems.
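
The following sketch shows the ingredients named above (fitness evaluation, selection, crossover, mutation) on a toy "one-max" problem, where fitness is simply the number of 1-bits in a chromosome. The population size, rates, and fitness function are illustrative assumptions.

```python
import random

# Genetic-algorithm sketch: evolve bit strings toward all ones ("one-max").
GENES, POP, GENERATIONS = 20, 30, 100

def fitness(chrom):
    return sum(chrom)                      # number of 1-bits

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENES)
        child = a[:cut] + b[cut:]          # crossover
        if random.random() < 0.1:          # mutation
            i = random.randrange(GENES)
            child[i] ^= 1
        children.append(child)
    population = parents + children

print(fitness(max(population, key=fitness)), "of", GENES)
```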

Fuzzy logic

Fuzzy logic is based on the fuzzy-set concepts proposed by Lotfi Zadeh. The degree-of-membership concept is central to fuzzy sets. Fuzzy sets differ from crisp sets in that they allow an element to belong to a set to a degree (its degree of membership). This approach finds good applications in control problems. Fuzzy logic has found enormous applications and already has a big market presence in consumer electronics such as washing machines, microwaves, mobile phones, televisions, camcorders, etc.
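
A minimal illustration of degree of membership: the fuzzy set "warm" below is defined by a triangular membership function whose breakpoints (15 °C, 25 °C, 35 °C) are arbitrary assumptions used only to show values between 0 and 1.

```python
# Fuzzy-membership sketch: "warm" as a fuzzy set over temperature.
def warm(temp_c):
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10          # rising edge: 15C -> 25C
    return (35 - temp_c) / 10              # falling edge: 25C -> 35C

for t in (10, 18, 25, 30, 40):
    print(f"{t}C is warm to degree {warm(t):.2f}")
# Unlike a crisp set, 18C belongs to "warm" to degree 0.30 rather than 0 or 1.
```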

Probabilistic/chaotic computing

Probabilistic computing engines use, for example, probabilistic graphical models such as Bayesian networks. Such computational techniques are referred to as randomization, yielding probabilistic algorithms. When interpreted as a physical phenomenon through classical statistical thermodynamics, such techniques lead to energy savings proportional to the probability p with which each primitive computational step is guaranteed to be correct (or, equivalently, to the probability of error, 1 − p). Chaotic computing is based on chaos theory.
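
As a small concrete example of a probabilistic graphical model, the sketch below performs exact inference by enumeration in a classic three-node Bayesian network (rain, sprinkler, wet grass); all conditional probabilities are illustrative assumptions.

```python
# Probabilistic-computing sketch: exact inference by enumeration in a tiny
# Bayesian network (Rain -> Sprinkler, Rain & Sprinkler -> WetGrass).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},     # P(sprinkler | rain)
               False: {True: 0.4, False: 0.6}}
P_wet = {(True, True): 0.99, (True, False): 0.8,    # P(wet | rain, sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def p_rain_given_wet():
    joint = {True: 0.0, False: 0.0}
    for rain in (True, False):
        for sprinkler in (True, False):
            joint[rain] += (P_rain[rain]
                            * P_sprinkler[rain][sprinkler]
                            * P_wet[(rain, sprinkler)])
    return joint[True] / (joint[True] + joint[False])

print(f"P(rain | grass wet) = {p_rain_given_wet():.2f}")
```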

Fractals

Fractals are objects displaying self-similarity at different scales. Fractal generation involves small iterative algorithms. Fractals have dimensions greater than their topological dimensions. The length of a fractal is infinite and its size cannot be measured. It is described by an iterative algorithm, unlike a Euclidean shape which is given by a simple formula. There are several types of fractals, and Mandelbrot sets are very popular.

Fractals have found applications in image processing, image compression, music generation, computer games, etc. The Mandelbrot set is a fractal named after its creator. Unlike other fractals, even though the Mandelbrot set is self-similar at magnified scales, its small-scale details are not identical to the whole; i.e., the Mandelbrot set is infinitely complex. But the process of generating it is based on an extremely simple equation. The Mandelbrot set M is a collection of complex numbers: a number C belongs to M if the iteration of the Mandelbrot equation, starting from $z_0 = 0$, remains bounded:

$$z_{n+1} = z_n^2 + C$$
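
The membership test can be sketched directly from the equation above: iterate from z = 0 and treat C as outside the set once |z| exceeds 2. The iteration cap and the crude ASCII grid below are arbitrary choices for illustration.

```python
# Mandelbrot sketch: iterate z -> z*z + c from z = 0 and call c a member of
# the set if |z| stays bounded after a fixed number of iterations.
def in_mandelbrot(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # escaped: definitely not in the set
            return False
    return True

# Crude ASCII rendering of the set on a small grid.
for im in range(-10, 11):
    row = ""
    for re in range(-20, 11):
        row += "*" if in_mandelbrot(complex(re / 10, im / 10)) else " "
    print(row)
```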

Randomized algorithm

A randomized algorithm makes arbitrary choices during its execution. This allows a savings in execution time at the beginning of a program. The disadvantage of this method is the possibility that an incorrect solution will occur. A well-designed randomized algorithm will have a very high probability of returning a correct answer. The two categories of randomized algorithms are Las Vegas algorithms, which always return a correct result but whose running time varies randomly, and Monte Carlo algorithms, which run in bounded time but may return an incorrect result with small probability.

Consider an algorithm to find the kth element of an array. A deterministic approach would be to choose a pivot element near the median of the list and partition the list around that element. The randomized approach to this problem would be to choose a pivot at random, thus saving time at the beginning of the process. Like approximation algorithms, randomized algorithms can be used to solve tough NP-complete problems more quickly. An advantage over approximation algorithms, however, is that a randomized algorithm will eventually yield an exact answer if executed enough times.
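
A sketch of the randomized selection idea just described (often called quickselect): partition around a randomly chosen pivot and recurse into the side that contains the kth element. The zero-based indexing and the sample data are illustrative choices.

```python
import random

# Randomized selection sketch (quickselect): find the kth smallest element
# by partitioning around a randomly chosen pivot.
def quickselect(items, k):
    if len(items) == 1:
        return items[0]
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    if k < len(smaller):
        return quickselect(smaller, k)
    if k < len(smaller) + len(equal):
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

data = [7, 1, 5, 3, 9, 8, 2, 6, 4]
print(quickselect(data, 4))   # 5, the 5th smallest (k is zero-based here)
```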

Machine learning

Human beings and animals learn new skills, languages, and concepts. Similarly, machine-learning algorithms provide the capability to generalize from training data. There are two classes of machine learning (ML):

  • Supervised ML
  • Unsupervised ML

One of the well-known machine-learning techniques is the backpropagation algorithm, which mimics how humans learn from examples. The training patterns are repeatedly presented to the network, the error is propagated backwards, and the network weights are adjusted using gradient descent. The network converges after several hundred iterative computations.
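
The sketch below shows the backpropagation loop described above on a tiny network learning XOR: a forward pass, error propagated backwards, and gradient-descent weight updates. The layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

# Back-propagation sketch: a tiny 2-4-1 network learning XOR with plain
# gradient descent. Layer sizes, rate, and iterations are illustrative.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 1.0

for _ in range(20000):
    # Forward pass: present all training patterns to the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]; depends on the random init
```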

Support vector machines

This is another class of highly successful machine-learning techniques, applied to tasks such as text classification, speaker recognition, and image recognition.

Example applications

There are several successful applications of lateral-computing techniques. Here is a small set of applications that illustrates lateral computing:

  • Bubble sorting: Here the computing problem of sorting is approached with the analogy of bubbles rising in water, treating the numbers as bubbles and floating them to their natural position (see the sketch after this list).
  • Truck backup problem: This is an interesting problem of reversing a truck and parking it at a particular location. Traditional computing techniques have found it difficult to solve, but it has been successfully solved by a fuzzy system.
  • Balancing an inverted pendulum: This problem involves balancing an inverted pendulum. It has been efficiently solved by neural networks and fuzzy systems.
  • Smart volume control for mobile phones: The volume control in mobile phones depends on the background noise level, noise class, the hearing profile of the user, and other parameters. The measurement of noise level and loudness involves imprecision and subjective measures. The authors have demonstrated the successful use of a fuzzy logic system for volume control in mobile handsets.
  • Optimization using genetic algorithms and simulated annealing: Problems such as the traveling salesman problem have been shown to be NP-complete. Such problems are solved using algorithms which benefit from heuristics. Some of the applications are in VLSI routing, partitioning, etc. Genetic algorithms and simulated annealing have been successful in solving such optimization problems.
  • Programming The Unprogrammable (PTU) involving the automatic creation of computer programs for unconventional computing devices such as cellular automata, multi-agent systems, parallel systems, field-programmable gate arrays, field-programmable analog arrays, ant colonies, swarm intelligence, distributed systems, and the like.
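
For the bubble-sorting analogy mentioned in the list above, here is a minimal sketch in which larger values "float" toward the end of the list on each pass.

```python
# Bubble-sort sketch for the "bubbles rising in water" analogy above.
def bubble_sort(values):
    values = list(values)
    for end in range(len(values) - 1, 0, -1):
        for i in range(end):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]  # swap
    return values

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```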

Summary

Above is a review of lateral-computing techniques. Lateral computing is based on the lateral-thinking approach and applies unconventional techniques to solve computing problems. While most problems are solved with conventional techniques, there are problems which require lateral computing. For several problems, lateral computing provides the advantages of computational efficiency, low cost of implementation, and better solutions compared to conventional computing. Lateral computing successfully tackles a class of problems by exploiting tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and low solution cost. Lateral-computing techniques which use human-like information-processing models have been classified as "soft computing" in the literature.

Lateral computing is valuable when solving numerous computing problems whose mathematical models are unavailable. It provides a way of developing innovative solutions, resulting in smart systems with very high machine IQ (VHMIQ). This article has traced the transition from lateral thinking to lateral computing, described several lateral-computing techniques, and then presented their applications. Lateral computing is for building a new generation of artificial intelligence based on unconventional processing.

Deep learning processor

From Wikipedia, the free encyclopedia

A deep learning processor (DLP), or a deep learning accelerator, is an electronic circuit designed for deep learning algorithms, usually with separate data memory and dedicated instruction set architecture. Deep learning processors range from mobile devices, such as neural processing units (NPUs) in Huawei cellphones, to cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform.

The goal of DLPs is to provide higher efficiency and performance for deep learning algorithms than general-purpose central processing units (CPUs) and graphics processing units (GPUs) would. Most DLPs employ a large number of computing components to leverage high data-level parallelism, a relatively large on-chip buffer/memory to leverage data-reuse patterns, and limited-data-width operators to exploit the error resilience of deep learning. Deep learning processors differ from AI accelerators in that they are specialized for running learning algorithms, while AI accelerators are typically more specialized for inference. However, the two terms (DLP vs. AI accelerator) are not used rigorously and there is often overlap between them.

History

The use of CPUs/GPUs

At the beginning, general-purpose CPUs were adopted to perform deep learning algorithms. Later, GPUs were introduced to the domain of deep learning. For example, in 2012, Alex Krizhevsky adopted two GPUs to train a deep learning network, AlexNet, which won the ILSVRC-2012 competition. As interest in deep learning algorithms and DLPs kept increasing, GPU manufacturers started to add deep-learning-related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library). For example, Nvidia released the Turing Tensor Core, a DLP, to accelerate deep learning processing.

The first DLP

To provide higher efficiency in performance and energy, domain-specific design started to draw great attention. In 2014, Chen et al. proposed the first DLP in the world, DianNao (Chinese for "electric brain"), to accelerate deep neural networks. DianNao provides 452 Gop/s of peak performance (of key operations in deep neural networks) in a footprint of only 3.02 mm2 and 485 mW. Later, the successors (DaDianNao, ShiDianNao, PuDianNao) were proposed by the same group, forming the DianNao family.

The blooming DLPs

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016 alone, three sessions (15% of the accepted papers) were architecture designs about deep learning. Such efforts include Eyeriss (MIT), EIE (Stanford), Minerva (Harvard), and Stripes (University of Toronto) in academia, and the TPU (Google) and MLU (Cambricon) in industry. Several representative works are listed in Table 1.

Table 1. Typical DLPs
Year | DLP | Institution | Type | Computation | Memory hierarchy | Control | Peak performance
2014 | DianNao | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit)
2014 | DaDianNao | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit)
2015 | ShiDianNao | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit)
2015 | PuDianNao | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit)
2016 | DnnWeaver | Georgia Tech | digital | vector MACs | scratchpad | - | -
2016 | EIE | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit)
2016 | Eyeriss | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit)
2016 | Prime | UCSB | hybrid | process-in-memory | ReRAM | - | -
2017 | TPU | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit)
2017 | PipeLayer | U of Pittsburgh | hybrid | process-in-memory | ReRAM | - | -
2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops
2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | -
2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit)
2019 | FPSA | Tsinghua | hybrid | process-in-memory | ReRAM | - | -
2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit); 956 Tops (F100, 16-bit)

DLP architecture

With the rapid evolution of deep learning algorithms and DLPs, many architectures have been explored. Roughly, DLPs can be classified into three categories based on their implementation: digital circuits, analog circuits, and hybrid circuits. As the pure analog DLPs are rarely seen, we introduce the digital DLPs and hybrid DLPs.

Digital DLPs

The major components of DLPs architecture usually include a computation component, the on-chip memory hierarchy, and the control logic that manages the data communication and computing flows.

Regarding the computation component: as most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is the MAC-based (multiply-accumulate) organization, with either vector MACs or scalar MACs. Rather than SIMD or SIMT as in general-purpose processors, deep-learning domain-specific parallelism is better exploited on these MAC-based organizations. Regarding the memory hierarchy: as deep learning algorithms require high bandwidth to provide the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) with dedicated on-chip data-reuse and data-exchange strategies to alleviate the burden on memory bandwidth. For example, DianNao, with 16 16-input vector MACs, requires 16 × 16 × 2 = 512 16-bit operands per cycle, i.e., almost 1024 GB/s of bandwidth between the computation components and the buffers; with on-chip reuse, such bandwidth requirements are reduced drastically. Instead of the caches widely used in general-purpose processors, DLPs always use scratchpad memory, as it provides higher data-reuse opportunities by leveraging the relatively regular data-access patterns of deep learning algorithms. Regarding the control logic: as deep learning algorithms keep evolving at a dramatic speed, DLPs have started to leverage dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set where each instruction could finish a layer in a DNN. Cambricon introduced the first deep-learning domain-specific ISA, which could support more than ten different deep learning algorithms. The TPU also reveals five key instructions from its CISC-style ISA.
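
The bandwidth figure quoted for DianNao can be reproduced with back-of-the-envelope arithmetic, as in the sketch below; the 1 GHz clock is an assumption used only to convert operands per cycle into bytes per second, and is not stated in the text.

```python
# Back-of-the-envelope check of the DianNao bandwidth figure quoted above.
vector_macs = 16          # 16 vector MAC units
inputs_per_mac = 16       # each vector MAC consumes 16 input pairs
operands_per_input = 2    # a weight and an activation per multiply
bytes_per_operand = 2     # 16-bit data
clock_hz = 1e9            # assumed 1 GHz clock (illustrative)

operands_per_cycle = vector_macs * inputs_per_mac * operands_per_input   # 512
bandwidth_bytes_per_s = operands_per_cycle * bytes_per_operand * clock_hz
print(f"{bandwidth_bytes_per_s / 1e9:.0f} GB/s without on-chip reuse")    # ~1024 GB/s
```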

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in the following ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue (such architectures significantly shorten data paths and leverage much higher internal bandwidth, resulting in attractive performance improvements); 2) building highly efficient DNN engines by adopting computational devices. In 2013, HP Labs demonstrated the astonishing capability of adopting a ReRAM crossbar structure for computing. Inspired by this work, a tremendous amount of work has been proposed to explore new architectures and system designs based on ReRAM, phase-change memory, etc.

GPUs and FPGAs

Besides DLPs, GPUs and FPGAs are also used as accelerators to speed up the execution of deep learning algorithms. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory, contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms. Microsoft builds its deep learning platform using FPGAs in Azure to support real-time deep learning services. Table 2 compares DLPs against GPUs and FPGAs in terms of target, performance, energy efficiency, and flexibility.

Table 2. DLPs vs. GPUs vs. FPGAs

Platform | Target | Performance | Energy efficiency | Flexibility
DLPs | deep learning | high | high | domain-specific
FPGAs | all | low | moderate | general
GPUs | matrix computation | moderate | low | matrix applications

Atomically thin semiconductors for deep learning

Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). They use two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements. 

Integrated photonic tensor core

In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.

Benchmarks

Benchmarking has long served as the foundation of designing new hardware architectures, where both architects and practitioners can compare various architectures, identify their bottlenecks, and conduct the corresponding system/architectural optimization. Table 3 lists several typical benchmarks for DLPs, in time order dating from 2012.

Table 3. Benchmarks.
Year | NN benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks
2012 | BenchNN | ICT, CAS | N/A | 12 | N/A
2016 | Fathom | Harvard | N/A | 8 | N/A
2017 | BenchIP | ICT, CAS | 12 | 11 | N/A
2017 | DAWNBench | Stanford | 8 | N/A | N/A
2017 | DeepBench | Baidu | 4 | N/A | N/A
2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A
2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2
2019 | NNBench-X | UCSB | N/A | 10 | N/A

Hubbert peak theory

From Wikipedia, the free encyclopedia

(Figure: 2004 U.S. government predictions for oil production other than in OPEC and the former Soviet Union)

The Hubbert peak theory says that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production tends to follow a bell-shaped curve. It is one of the primary theories on peak oil.

Choosing a particular curve determines a point of maximum production based on discovery rates, production rates and cumulative production. Early in the curve (pre-peak), the production rate increases due to the discovery rate and the addition of infrastructure. Late in the curve (post-peak), production declines because of resource depletion.

The Hubbert peak theory is based on the observation that the amount of oil under the ground in any region is finite, therefore the rate of discovery which initially increases quickly must reach a maximum and decline. In the US, oil extraction followed the discovery curve after a time lag of 32 to 35 years. The theory is named after American geophysicist M. King Hubbert, who created a method of modeling the production curve given an assumed ultimate recovery volume.

Hubbert's peak

"Hubbert's peak" can refer to the peaking of production of a particular area, which has now been observed for many fields and regions.

Hubbert's peak was thought to have been achieved in the contiguous 48 United States (that is, excluding Alaska and Hawaii) in the early 1970s. Oil production peaked at 10.2 million barrels (1.62 million cubic metres) per day in 1970 and then declined over the subsequent 35 years in a pattern which closely followed the one predicted by Hubbert in the mid-1950s. However, beginning in the mid-2000s, advances in extraction technology, particularly those that led to the extraction of tight oil and unconventional oil, resulted in a large increase in U.S. oil production, establishing a pattern which deviated drastically from the model Hubbert had predicted for the contiguous 48 states as a whole. In November 2017 the United States once again surpassed the 10-million-barrel mark for the first time since 1970.

Peak oil as a proper noun, or "Hubbert's peak" applied more generally, refers to a predicted event: the peak of the entire planet's oil production. After peak oil, according to the Hubbert Peak Theory, the rate of oil production on Earth would enter a terminal decline. On the basis of his theory, in a paper he presented to the American Petroleum Institute in 1956, Hubbert correctly predicted that production of oil from conventional sources would peak in the continental United States around 1965–1970. Hubbert further predicted a worldwide peak at "about half a century" from publication and approximately 12 gigabarrels (GB) a year in magnitude. In a 1976 TV interview Hubbert added that the actions of OPEC might flatten the global production curve but this would only delay the peak for perhaps 10 years. The development of new technologies has provided access to large quantities of unconventional resources, and the boost of production has largely discounted Hubbert's prediction.

Hubbert's theory

Hubbert curve

(Figure: The standard Hubbert curve; for applications, the x and y scales are replaced by time and production scales.)

(Figure: U.S. oil production and imports, 1910 to 2012)

In 1956, Hubbert proposed that fossil fuel production in a given region over time would follow a roughly bell-shaped curve without giving a precise formula; he later used the Hubbert curve, the derivative of the logistic curve, for estimating future production using past observed discoveries.

Hubbert assumed that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. At some point, a peak output is reached, and production begins declining until it approximates an exponential decline.

The Hubbert curve satisfies these constraints. Furthermore, it is symmetrical, with the peak of production reached when half of the fossil fuel that will ultimately be produced has been produced. It also has a single peak.

Given past oil discovery and production data, a Hubbert curve that attempts to approximate past discovery data may be constructed and used to provide estimates for future production. In particular, the date of peak oil production or the total amount of oil ultimately produced can be estimated that way. Cavallo defines the Hubbert curve used to predict the U.S. peak as the derivative of:

$$Q(t) = \frac{Q_{\max}}{1 + a\,e^{-bt}}$$

where $Q_{\max}$ is the total resource available (the ultimate recovery of crude oil), $Q(t)$ the cumulative production, and $a$ and $b$ are constants. The year of maximum annual production (peak) is:

$$t_{\max} = \frac{1}{b}\ln a$$

at which point the cumulative production reaches half of the total available resource:

$$Q(t_{\max}) = \frac{1}{2} Q_{\max}$$

The Hubbert equation assumes that oil production is symmetrical about the peak. Others have used similar but non-symmetrical equations which may provide a better fit to empirical production data.
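
As an illustration of the model described above, the following Python sketch evaluates a logistic cumulative-production curve and its derivative (the Hubbert production curve). The parameter values are invented for the example and are not Hubbert's own estimates.

  import numpy as np

  # Logistic cumulative production Q(t) = Q_max / (1 + a*exp(-b*t)).
  # Q_max, a and b below are illustrative values, not fitted estimates.
  Q_max = 200.0      # ultimate recovery (e.g. gigabarrels)
  a, b = 50.0, 0.06  # shape constant and logistic growth rate

  def cumulative(t):
      return Q_max / (1.0 + a * np.exp(-b * t))

  def production(t):
      # Annual production is the derivative of Q(t): b*Q*(1 - Q/Q_max).
      q = cumulative(t)
      return b * q * (1.0 - q / Q_max)

  t_peak = np.log(a) / b                 # year of maximum production
  print(t_peak, cumulative(t_peak))      # cumulative at the peak equals Q_max / 2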

Use of multiple curves

The sum of multiple Hubbert curves, a technique not developed by Hubbert himself, may be used in order to model more complicated real-life scenarios. For example, when a new technology such as hydraulic fracturing opens up formations that were not productive before, a separate curve may be needed for the new source of supply. Such technologies are limited in number, but they have a large impact on production, so a new curve is added to the old one and the combined curve is reworked.
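
A minimal sketch of this idea, assuming two hypothetical cycles (an early conventional cycle and a later tight-oil cycle) whose parameters are invented purely for illustration:

  import numpy as np

  def hubbert_rate(t, q_max, t_peak, width):
      # Single Hubbert cycle (logistic derivative); "width" controls how
      # spread out the cycle is, and the curve integrates to q_max.
      x = (t - t_peak) / width
      return (q_max / width) * np.exp(-x) / (1.0 + np.exp(-x)) ** 2

  years = np.arange(1900, 2101)
  conventional = hubbert_rate(years, q_max=200.0, t_peak=1970, width=15.0)
  tight_oil    = hubbert_rate(years, q_max=80.0,  t_peak=2025, width=8.0)
  total = conventional + tight_oil   # combined two-cycle production model

Each cycle keeps its own ultimate recovery and peak date, so the combined curve need not be symmetrical even though its components are.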

Reliability

Crude oil

Hubbert's upper-bound prediction for US crude oil production (1956), and actual lower-48 states production through 2016

Hubbert, in his 1956 paper, presented two scenarios for US crude oil production:

  • most likely estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 150 Giga-barrels (Gb) and a peak in 1965. The size of the ultimate resource was taken from a synthesis of estimates by well-known oil geologists and the US Geological Survey, which Hubbert judged to be the most likely case.
  • upper-bound estimate: a logistic curve with a logistic growth rate equal to 6% and ultimate resource equal to 200 Giga-barrels and a peak in 1970.

Hubbert's upper-bound estimate, which he regarded as optimistic, accurately predicted that US oil production would peak in 1970, although the actual peak was 17% higher than Hubbert's curve. Production declined, as Hubbert had predicted, and stayed within 10 percent of Hubbert's predicted value from 1974 through 1994; since then, actual production has been significantly greater than the Hubbert curve, as new extraction technologies unlocked large quantities of unconventional resources and pushed output well above the predicted decline.
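
The two 1956 scenarios can be reproduced in rough outline from the three figures given for each (growth rate, ultimate resource, peak year). The sketch below does this under the simplifying assumption that the curve is fully determined by those numbers; it is an illustration, not a reconstruction of Hubbert's actual calculation.

  import numpy as np

  def annual_production(t, q_max_gb, growth_rate, peak_year):
      # Logistic-derivative production curve, in Gb per year.
      x = growth_rate * (t - peak_year)
      return growth_rate * q_max_gb * np.exp(-x) / (1.0 + np.exp(-x)) ** 2

  years = np.arange(1900, 2051)
  most_likely = annual_production(years, 150.0, 0.06, 1965)   # 150 Gb, peak 1965
  upper_bound = annual_production(years, 200.0, 0.06, 1970)   # 200 Gb, peak 1970
  print(upper_bound.max())   # peak rate is roughly 0.06 * 200 / 4 = 3 Gb/year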

Hubbert's 1956 production curves depended on geological estimates of ultimate recoverable oil resources, but he was dissatisfied by the uncertainty this introduced, given estimates ranging from 110 billion to 590 billion barrels for the US. Starting with his 1962 publication, he made his calculations, including that of ultimate recovery, based only on mathematical analysis of production rates, proved reserves, and new discoveries, independent of any geological estimates of future discoveries. He concluded that the ultimate recoverable oil resource of the contiguous 48 states was 170 billion barrels, with a production peak in 1966 or 1967. He reasoned that because his model incorporated past technical advances, any future advances would occur at the same rate and were therefore also incorporated. Hubbert continued to defend his figure of 170 billion barrels in his publications of 1965 and 1967, although by 1967 he had moved the peak forward slightly, to 1968 or 1969.

A post-hoc analysis of peaked oil wells, fields, regions and nations found that Hubbert's model was the "most widely useful" (providing the best fit to the data), though many areas studied had a sharper "peak" than predicted.

A 2007 study of oil depletion by the UK Energy Research Centre pointed out that there is no theoretical and no robust practical reason to assume that oil production will follow a logistic curve. Neither is there any reason to assume that the peak will occur when half the ultimate recoverable resource has been produced; in fact, empirical evidence appears to contradict this idea. An analysis of 55 post-peak countries found that production peaked, on average, when about 25 percent of the ultimate recovery had been produced.

Natural gas

Hubbert's 1962 prediction of US lower 48-state gas production, versus actual production through 2012

Hubbert also predicted that natural gas production would follow a logistic curve similar to that of oil. The graph shows actual gas production in blue compared to his predicted gas production for the United States in red, published in 1962.

Economics

Oil imports by country Pre-2006

Energy return on energy investment

The ratio of energy extracted to the energy expended in the process is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Should the EROEI drop to one, or equivalently the net energy gain fall to zero, oil production would no longer be a net energy source.
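
A trivial sketch of that relationship, using made-up energy figures purely to show the arithmetic:

  def eroei(energy_out, energy_in):
      # Energy Return on Energy Investment.
      return energy_out / energy_in

  def net_energy_gain(energy_out, energy_in):
      # Net energy delivered; zero exactly when EROEI equals one.
      return energy_out - energy_in

  out_units, in_units = 50.0, 10.0                 # illustrative figures only
  print(eroei(out_units, in_units))                # 5.0  -> a net energy source
  print(net_energy_gain(out_units, in_units))      # 40.0 units of surplus energy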

There is a difference between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. Availability of cheap, but hard to transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca tar sands plants. Cheap natural gas has also led to ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods of measuring EROEI are still debated.

The assumption of inevitable declining volumes of oil and gas produced per unit of effort is contrary to recent experience in the US. In the United States, as of 2017, there has been an ongoing decade-long increase in the productivity of oil and gas drilling in all the major tight oil and gas plays. The US Energy Information Administration reports, for instance, that in the Bakken Shale production area of North Dakota, the volume of oil produced per day of drilling rig time in January 2017 was 4 times the oil volume per day of drilling five years previous, in January 2012, and nearly 10 times the oil volume per day of ten years previous, in January 2007. In the Marcellus gas region of the northeast, the volume of gas produced per day of drilling time in January 2017 was 3 times the gas volume per day of drilling five years previous, in January 2012, and 28 times the gas volume per day of drilling ten years previous, in January 2007.

Growth-based economic models

World energy consumption & predictions, 2005–2035. Source: International Energy Outlook 2011.

Insofar as economic growth is driven by oil consumption growth, post-peak societies must adapt. Hubbert believed:

Our principal constraints are cultural. During the last two centuries we have known nothing but exponential growth and in parallel we have evolved what amounts to an exponential-growth culture, a culture so heavily dependent upon the continuance of exponential growth for its stability that it is incapable of reckoning with problems of non growth.

— M. King Hubbert, "Exponential Growth as a Transient Phenomenon in Human History"

Some economists describe the problem as uneconomic growth or a false economy. On the political right, Fred Ikle has warned about "conservatives addicted to the Utopia of Perpetual Growth". Brief oil interruptions in 1973 and 1979 markedly slowed, but did not stop, the growth of world GDP.

Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation.

David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their 2003 study Food, Land, Population and the U.S. Economy, placed the maximum U.S. population for a sustainable economy at 200 million (the actual population was approximately 290 million in 2003 and 329 million in 2019). To achieve a sustainable economy, world population would have to be reduced by two-thirds, the study says. Without population reduction, the study predicts an agricultural crisis beginning in 2020 and becoming critical around 2050. The peaking of global oil production, along with the decline in regional natural gas production, may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as has never been experienced before.

Hubbert peaks

Although Hubbert peak theory receives most attention in relation to peak oil production, it has also been applied to other natural resources.

Natural gas

Doug Reynolds predicted in 2005 that the North American peak would occur in 2007. Bentley predicted a world "decline in conventional gas production from about 2020".

Coal

Although observers believe that peak coal is significantly further out than peak oil, Hubbert studied the specific example of anthracite in the US, a high-grade coal whose production peaked in the 1920s. Hubbert found that anthracite matches a Hubbert curve closely. Hubbert put recoverable coal reserves worldwide at 2.5 × 10^12 (2.5 trillion) metric tons, with production peaking around 2150 (depending on usage).

More recent estimates suggest an earlier peak. Coal: Resources and Future Production, published on April 5, 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years. Reporting on this, Richard Heinberg also notes that the date of peak annual energetic extraction from coal is likely to come earlier than the date of peak in the quantity of coal (tons per year) extracted, because the most energy-dense types of coal have been mined most extensively. A second study, The Future of Coal by B. Kavalov and S. D. Peteves of the Institute for Energy (IFE), prepared for the European Commission Joint Research Centre, reaches similar conclusions and states that "coal might not be so abundant, widely available and reliable as an energy source in the future".

Work by David Rutledge of Caltech predicts that the total of world coal production will amount to only about 450 gigatonnes. This implies that coal is running out faster than usually assumed.

Fissionable materials

In a paper in 1956, after a review of US fissionable reserves, Hubbert notes of nuclear power:

There is promise, however, provided mankind can solve its international problems and not destroy itself with nuclear weapons, and provided world population (which is now expanding at such a rate as to double in less than a century) can somehow be brought under control, that we may at last have found an energy supply adequate for our needs for at least the next few centuries of the "foreseeable future."

As of 2015, the identified resources of uranium are sufficient to provide more than 135 years of supply at the present rate of consumption. Technologies such as the thorium fuel cycle, reprocessing and fast breeders can, in theory, extend the life of uranium reserves from hundreds to thousands of years.

Caltech physics professor David Goodstein stated in 2004 that

... you would have to build 10,000 of the largest power plants that are feasible by engineering standards in order to replace the 10 terawatts of fossil fuel we're burning today ... that's a staggering amount and if you did that, the known reserves of uranium would last for 10 to 20 years at that burn rate. So, it's at best a bridging technology ... You can use the rest of the uranium to breed plutonium 239 then we'd have at least 100 times as much fuel to use. But that means you're making plutonium, which is an extremely dangerous thing to do in the dangerous world that we live in.

Helium

Helium production and storage in the United States, 1940–2014 (data from USGS)

Almost all helium on Earth is a result of radioactive decay of uranium and thorium. Helium is extracted by fractional distillation from natural gas, which contains up to 7% helium. The world's largest helium-rich natural gas fields are found in the United States, especially in the Hugoton and nearby gas fields in Kansas, Oklahoma, and Texas. The extracted helium is stored underground in the National Helium Reserve near Amarillo, Texas, the self-proclaimed "Helium Capital of the World". Helium production is expected to decline along with natural gas production in these areas.

Helium, the second-lightest chemical element, rises to the upper layers of Earth's atmosphere, from where it can escape Earth's gravitational attraction permanently. Approximately 1,600 tons of helium are lost per year as a result of atmospheric escape mechanisms.

Transition metals

Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that peak production for metals such as copper, tin, lead and zinc would occur within decades, and for iron within about two centuries, as with coal. The price of copper rose 500% between 2003 and 2007, which some attributed to peak copper. Copper prices later fell, along with many other commodities and stock prices, as demand shrank from fear of a global recession. Lithium availability is a concern for a fleet of cars using Li-ion batteries, but a paper published in 1996 estimated that world reserves are adequate for at least 50 years. A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled.

Precious metals

In 2009, Aaron Regent, president of the Canadian gold giant Barrick Gold, said that global output has been falling by roughly one million ounces a year since the start of the decade. The total global mine supply has dropped by 10 percent as ore quality erodes, implying that the roaring bull market of the last eight years may have further to run. "There is a strong case to be made that we are already at 'peak gold'," he told The Daily Telegraph at RBC's annual gold conference in London. "Production peaked around 2000 and it has been in decline ever since, and we forecast that decline to continue. It is increasingly difficult to find ore," he said.

Ore grades have fallen from around 12 grams per tonne in 1950 to nearer 3 grams in the US, Canada, and Australia. South Africa's output has halved since peaking in 1970. Output fell a further 14 percent in South Africa in 2008 as companies were forced to dig ever deeper – at greater cost – to replace depleted reserves.

World mined gold production has peaked four times since 1900: in 1912, 1940, 1971, and 2001, each peak being higher than previous peaks. The latest peak was in 2001, when production reached 2,600 metric tons, then declined for several years. Production started to increase again in 2009, spurred by high gold prices, and achieved record new highs each year in 2012, 2013, and in 2014, when production reached 2,990 tonnes.

Phosphorus

Phosphorus supplies are essential to farming, and depletion of reserves is estimated to occur within 60 to 130 years. According to a 2008 study, the total reserves of phosphorus are estimated to be approximately 3,200 MT, with peak production of 28 MT/year in 2034. Individual countries' supplies vary widely; without a recycling initiative, America's supply is estimated to last around 30 years. Phosphorus supplies affect agricultural output, which in turn limits alternative fuels such as biodiesel and ethanol. Its increasing price and scarcity (the global price of rock phosphate rose eightfold in the two years to mid-2008) could change global agricultural patterns. Lands perceived as marginal because of remoteness but with very high phosphorus content, such as the Gran Chaco, may see more agricultural development, while other farming areas, where nutrients are a constraint, may drop below the line of profitability.

Renewable resources

Wood

Unlike fossil resources, forests keep growing, so the Hubbert peak theory does not apply. There have been wood shortages in the past, called Holznot in German-speaking regions, but no global peak wood has occurred yet, despite the early 2021 "Lumber Crisis". Deforestation may, however, cause other problems, such as erosion.

Water

Hubbert's original analysis did not apply to renewable resources. However, over-exploitation often results in a Hubbert peak nonetheless. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced.

For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak-usage debates similar to those about oil. These debates usually center around agricultural and suburban water usage, but the generation of electricity from nuclear energy or coal, and the tar sands mining mentioned above, are also water-intensive. The term fossil water is sometimes used to describe aquifers whose water is not being recharged.

Fishing

Peak fish: At least one researcher has attempted to perform Hubbert linearization (Hubbert curve) on the whaling industry, as well as charting the price of caviar, which depends transparently on sturgeon depletion. The Atlantic northwest cod fishery was a renewable resource, but the numbers of fish taken exceeded the fish's rate of recovery. The collapse of the cod fishery does match the exponential drop of the Hubbert bell curve. Another example is the cod of the North Sea.
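
For reference, Hubbert linearization works by plotting the ratio of annual to cumulative production (P/Q) against cumulative production (Q); for a logistic model the points fall on a straight line whose Q-intercept estimates the ultimate recovery. A minimal sketch on synthetic data, with all numbers invented for illustration:

  import numpy as np

  # Synthetic production history generated from a logistic model.
  q_max, a, b = 1000.0, 50.0, 0.08
  t = np.arange(0, 80)
  Q = q_max / (1.0 + a * np.exp(-b * t))   # cumulative production
  P = np.diff(Q)                            # annual production
  Q_annual = Q[1:]                          # cumulative at each year

  # For a logistic curve, P/Q = b*(1 - Q/q_max), a straight line in Q.
  slope, intercept = np.polyfit(Q_annual, P / Q_annual, 1)
  estimated_ultimate_recovery = -intercept / slope   # Q where P/Q hits zero
  print(estimated_ultimate_recovery)                 # close to the true 1000

In practice, the noisy early years of a production history are often excluded before fitting the line.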

Air/oxygen

Half the world's oxygen is produced by phytoplankton. The numbers of plankton have dropped by 40% since the 1950s.

Criticisms of peak oil

Economist Michael Lynch argues that the theory behind the Hubbert curve is too simplistic and relies on an overly Malthusian point of view. Lynch claims that Campbell's predictions for world oil production are strongly biased towards underestimates, and that Campbell has repeatedly pushed back the date.

Leonardo Maugeri, vice president of the Italian energy company Eni, argues that nearly all peak estimates do not take into account unconventional oil, even though the availability of these resources is significant and the costs of extraction and processing, while still very high, are falling because of improved technology. He also notes that the recovery rate from existing world oil fields has increased from about 22% in 1980 to 35% today because of new technology, and predicts this trend will continue. The ratio between proven oil reserves and current production has constantly improved, passing from 20 years in 1948 to 35 years in 1972 and reaching about 40 years in 2003. These improvements occurred even with low investment in new exploration and upgrading technology because of the low oil prices during the last 20 years. However, Maugeri feels that encouraging more exploration will require relatively high oil prices.

Edward Luttwak, an economist and historian, claims that unrest in countries such as Russia, Iran and Iraq has led to a massive underestimate of oil reserves. The Association for the Study of Peak Oil and Gas (ASPO) responds by claiming neither Russia nor Iran are troubled by unrest currently, but Iraq is.

Cambridge Energy Research Associates authored a report that is critical of Hubbert-influenced predictions:

Despite his valuable contribution, M. King Hubbert's methodology falls down because it does not consider likely resource growth, application of new technology, basic commercial factors, or the impact of geopolitics on production. His approach does not work in all cases (including on the United States itself) and cannot reliably model a global production outlook. Put more simply, the case for the imminent peak is flawed. As it is, production in 2005 in the Lower 48 in the United States was 66 percent higher than Hubbert projected.

CERA does not believe there will be an endless abundance of oil, but instead believes that global production will eventually follow an "undulating plateau" for one or more decades before declining slowly, and that production will reach 40 Mb/d by 2015.

Alfred J. Cavallo, while predicting a conventional oil supply shortage by no later than 2015, does not think Hubbert's peak is the correct theory to apply to world production.

Criticisms of peak element scenarios

Although M. King Hubbert himself made major distinctions between decline in petroleum production versus depletion (or relative lack of it) for elements such as fissionable uranium and thorium, some others have predicted peaks like peak uranium and peak phosphorus soon on the basis of published reserve figures compared to present and future production. According to some economists, though, the amount of proved reserves inventoried at a time may be considered "a poor indicator of the total future supply of a mineral resource."

As illustrations, for tin, copper, iron, lead, and zinc, both cumulative production from 1950 to 2000 and the reserves reported in 2000 greatly exceeded the world reserves reported in 1950, which would be impossible except that "proved reserves are like an inventory of cars to an auto dealer" at any given time, having little relationship to the total that will actually be affordable to extract in the future. In the example of peak phosphorus, additional concentrations exist intermediate between the 71,000 Mt of identified reserves (USGS) and the approximately 30,000,000,000 Mt of other phosphorus in Earth's crust, the average rock being 0.1% phosphorus, so showing that a decline in human phosphorus production will occur soon would require far more than comparing the former figure to the roughly 190 Mt/year of phosphorus extracted in mines (2011 figure).
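
For a sense of scale, the following sketch computes the naive static reserves-to-production ratio from the figures quoted above; it is only the crude comparison the paragraph argues is insufficient, and it ignores reserve growth, demand growth, and lower-grade resources.

  identified_reserves_mt = 71_000            # USGS identified reserves, Mt
  crustal_phosphorus_mt = 30_000_000_000     # other phosphorus in the crust, Mt
  mined_2011_mt_per_year = 190               # mine production in 2011, Mt/year

  static_ratio_years = identified_reserves_mt / mined_2011_mt_per_year
  print(round(static_ratio_years))           # roughly 370 years on this naive basis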
