Saturday, January 4, 2025

Attention

From Wikipedia, the free encyclopedia
Image: Focused attention

Attention, or focus, is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is the selective concentration on discrete information, either subjectively or objectively. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck that limits the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data stream of about 1 MB/s can enter the bottleneck, leading to inattentional blindness.

Attention remains a crucial area of investigation within education, psychology, neuroscience, cognitive neuroscience, and neuropsychology. Areas of active investigation involve determining the source of the sensory cues and signals that generate attention, the effects of these sensory cues and signals on the tuning properties of sensory neurons, and the relationship between attention and other behavioral and cognitive processes, which may include working memory and psychological vigilance. A relatively new body of research, which expands upon earlier research within psychopathology, is investigating the diagnostic symptoms associated with traumatic brain injury and its effects on attention. Attention also varies across cultures.

The relationships between attention and consciousness are complex enough that they have warranted philosophical exploration. Such exploration is both ancient and continually relevant, as it can have effects in fields ranging from mental health and the study of disorders of consciousness to artificial intelligence and its domains of research.

Contemporary definition and research

Prior to the founding of psychology as a scientific discipline, attention was studied in the field of philosophy. Thus, many of the discoveries in the field of attention were made by philosophers. Psychologist John B. Watson calls Juan Luis Vives the father of modern psychology because, in his book De Anima et Vita (The Soul and Life), he was the first to recognize the importance of empirical investigation. In his work on memory, Vives found that the more closely one attends to stimuli, the better they will be retained.

By the 1990s, psychologists began using positron emission tomography (PET) and later functional magnetic resonance imaging (fMRI) to image the brain while monitoring tasks involving attention. Considering this expensive equipment was generally only available in hospitals, psychologists sought cooperation with neurologists. Psychologist Michael Posner (then already renowned for his influential work on visual selective attention) and neurologist Marcus Raichle pioneered brain imaging studies of selective attention. Their results soon sparked interest from the neuroscience community, which until then had been focused on monkey brains. With the development of these technological innovations, neuroscientists became interested in this type of research that combines sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of electroencephalography (EEG) had long been used to study the brain activity underlying selective attention by cognitive psychophysiologists, the ability of the newer techniques to measure precisely localized activity inside the brain generated renewed interest by a wider community of researchers. A growing body of such neuroimaging research has identified a frontoparietal attention network which appears to be responsible for control of attention.

A definition of a psychological construct shapes the research approach to its study. In scientific works, attention often coincides with and substitutes for the notion of intentionality, owing to the semantic uncertainty in the linguistic explanations of these notions' definitions. Intentionality has in turn been defined as "the power of minds to be about something: to represent or to stand for things, properties and states of affairs". Although these two psychological constructs (attention and intentionality) appear to be defined in similar terms, they are different notions. To clarify the definition of attention, it is useful to consider the origin of this notion and the meaning given to the term when the experimental study of attention began. The experimental approach is thought to have begun with famous experiments using a 4 x 4 matrix of sixteen randomly chosen letters – the experimental paradigm that informed Wundt's theory of attention. Wundt interpreted the experimental outcome by introducing the meaning of attention as "that psychical process, which is operative in the clear perception of the narrow region of the content of consciousness." These experiments showed the physical limits of the attention threshold, which were 3–6 letters when the matrix was exposed for 1/10 s. "We shall call the entrance into the large region of consciousness - apprehension, and the elevation into the focus of attention - apperception." Wundt's theory of attention postulated one of the main features of this notion: that attention is an active, voluntary process realized over a certain time. In contrast, neuroscience research shows that intentionality may emerge instantly, even unconsciously; studies have reported neuronal correlates of an intentional act that preceded the conscious act (see also shared intentionality). Therefore, while intentionality is a mental state ("the power of the mind to be about something", arising even unconsciously), the construct of attention should be understood in the dynamical sense as the ability to elevate the clear perception of the narrow region of the content of consciousness and to hold that state in mind for a time. The attention threshold would be the minimum period of time needed to employ perception to clearly apprehend the scope of intention. From this perspective, a scientific approach to attention is relevant when it considers the difference between these two concepts (first of all, between their static and dynamical statuses).

The growing body of literature shows empirical evidence that attention is conditioned by the number of elements and the duration of exposure. Decades of research on subitizing have supported Wundt's findings about the limits of the human ability to concentrate awareness on a task. Latvian professors Sandra Mihailova and Igor Val Danilov drew an essential conclusion from the Wundtian approach to the study of attention: the scope of attention is related to cognitive development. As the mind grasps more details about an event, it also increases the number of reasonable combinations within that event, enhancing the probability of better understanding its features and particularity. For example, three items in the focal point of consciousness allow six possible combinations (3 factorial), and four items allow 24 (4 factorial). This number becomes substantially larger for a focal point with six items, which yields 720 possible combinations (6 factorial). Empirical evidence suggests that the scope of attention in young children develops from two items in the focal point at up to six months of age to five or more items at about five years of age. As follows from the most recent studies in relation to teaching activities in school, "attention" should be understood as "the state of concentration of an individual's consciousness on the process of selecting by his own psyche the information he requires and on the process of choosing an algorithm for response actions, which involves the intensification of sensory and intellectual activities".
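The factorial arithmetic above can be reproduced with a few lines of Python; this is only an illustrative sketch of the numbers cited in the paragraph (3, 4, and 6 items), not code from any of the studies mentioned.

    from math import factorial

    # Number of possible combinations among items held in the focal point
    # of attention, as cited above: n items yield n! combinations.
    for n_items in (3, 4, 6):
        print(f"{n_items} items -> {factorial(n_items)} combinations")

    # Output:
    # 3 items -> 6 combinations
    # 4 items -> 24 combinations
    # 6 items -> 720 combinations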

Selective and visual

The spotlight model of attention

In cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process. In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated to a specific area of the visual scene (i.e., it is focused), and processing is performed in a serial fashion.

The first of these models to appear in the literature is the spotlight model. The term "spotlight" was inspired by the work of William James, who described attention as having a focus, a margin, and a fringe. The focus is an area that extracts information from the visual scene at high resolution; its geometric center is where visual attention is directed. Surrounding the focus is the fringe of attention, which extracts information in a much cruder fashion (i.e., at low resolution). This fringe extends out to a specified area, and the cut-off is called the margin.

The second model, called the zoom-lens model, was first introduced in 1986. This model inherits all properties of the spotlight model (i.e., the focus, the fringe, and the margin) but has the added property of changing in size. This size-change mechanism was inspired by the zoom lens one might find on a camera, and any change in size can be described by a trade-off in the efficiency of processing. The zoom lens of attention can be described in terms of an inverse trade-off between the size of the focus and the efficiency of processing: because attentional resources are assumed to be fixed, the larger the focus, the slower the processing of that region of the visual scene, since the fixed resource is distributed over a larger area. It is thought that the focus of attention can subtend a minimum of 1° of visual angle; however, the maximum size has not yet been determined.
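The inverse trade-off described above can be made concrete with a small, purely illustrative sketch. It assumes (as the model does) a fixed pool of attentional resources spread uniformly over the attended area, so that per-location efficiency falls as the focus widens; the specific numbers and the inverse-area rule are assumptions for illustration, not measurements from the literature.

    import math

    TOTAL_RESOURCES = 1.0  # fixed attentional capacity (arbitrary units, assumed)

    def efficiency_per_location(focus_diameter_deg: float) -> float:
        """Per-location processing efficiency when a fixed resource pool is
        spread uniformly over a circular focus of the given diameter
        (degrees of visual angle). Illustrative inverse-area rule only."""
        area = math.pi * (focus_diameter_deg / 2) ** 2
        return TOTAL_RESOURCES / area

    # Widening the focus from the ~1 degree minimum mentioned above to 4 degrees
    # dilutes per-location efficiency by a factor of 16, because the attended
    # area grows with the square of the diameter.
    for d in (1.0, 2.0, 4.0):
        print(f"focus {d:.0f} deg -> efficiency {efficiency_per_location(d):.3f}")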

A significant debate emerged in the last decade of the 20th century, in which Treisman's 1993 Feature Integration Theory (FIT) was compared to Duncan and Humphreys's 1989 attentional engagement theory (AET). FIT posits that "objects are retrieved from scenes by means of selective spatial attention that picks out objects' features, forms feature maps, and integrates those features that are found at the same location into forming objects." Treisman's theory is based on a two-stage process to help solve the binding problem of attention. These two stages are the preattentive stage and the focused attention stage.

  1. Preattentive Stage: The unconscious detection and separation of features of an item (color, shape, size). Treisman suggests that this happens early in cognitive processing and that individuals are not aware of it because it is counterintuitive to separate a whole into its parts. Evidence for this separate registration of features comes from illusory conjunctions, in which features from different items are sometimes combined incorrectly.
  2. Focused Attention Stage: The combining of all feature identifiers to perceive all parts as one whole. This is possible through prior knowledge and cognitive mapping. When an item is seen within a known location and has features that people have knowledge of, prior knowledge helps bring the features together to make sense of what is perceived. The case of R.M.'s damage to his parietal lobe, known as Bálint's syndrome, illustrates the role of focused attention in the combination of features.

Through sequencing these steps, parallel and serial search is better exhibited through the formation of conjunctions of objects. Conjunctive searches, according to Treisman, proceed through both stages in order to create selective and focused attention on an object, though Duncan and Humphreys would disagree. Duncan and Humphreys's AET understanding of attention maintained that "there is an initial pre-attentive parallel phase of perceptual segmentation and analysis that encompasses all of the visual items present in a scene. At this phase, descriptions of the objects in a visual scene are generated into structural units; the outcome of this parallel phase is a multiple-spatial-scale structured representation. Selective attention intervenes after this stage to select information that will be entered into visual short-term memory." The contrast between the two theories placed a new emphasis on separating visual attention tasks performed alone from those mediated by supplementary cognitive processes. As Raftopoulos summarizes the debate: "Against Treisman's FIT, which posits spatial attention as a necessary condition for detection of objects, Humphreys argues that visual elements are encoded and bound together in an initial parallel phase without focal attention, and that attention serves to select among the objects that result from this initial grouping."

Neuropsychological model

In the twentieth century, the pioneering research of Lev Vygotsky and Alexander Luria led to the three-part model of neuropsychology, which defines the working brain as being represented by three co-active processes: attention, memory, and activation. A. R. Luria published his well-known book The Working Brain in 1973 as a concise adjunct volume to his previous 1962 book Higher Cortical Functions in Man. In this volume, Luria summarized his three-part global theory of the working brain as being composed of three constantly co-active processes, which he described as (1) the attention system, (2) the mnestic (memory) system, and (3) the cortical activation system. The two books together are considered by Homskaya's account as "among Luria's major works in neuropsychology, most fully reflecting all the aspects (theoretical, clinical, experimental) of this new discipline." The combined research of Vygotsky and Luria has shaped a large part of the contemporary understanding and definition of attention as it is understood at the start of the 21st century.

Multitasking and divided attention

Multitasking can be defined as the attempt to perform two or more tasks simultaneously; however, research shows that when multitasking, people make more mistakes or perform their tasks more slowly. Attention must be divided among all of the component tasks to perform them. In divided attention, individuals attend or give attention to multiple sources of information at once or perform more than one task at the same time.

Older research involved looking at the limits of people performing simultaneous tasks, such as reading a story while listening to and writing something else, or listening to two separate messages in different ears (i.e., dichotic listening). Generally, classical research into attention investigated the ability of people to learn new information when there were multiple tasks to be performed, or to probe the limits of our perception (cf. Donald Broadbent). There is also older literature on people's performance on multiple tasks performed simultaneously, such as driving a car while tuning a radio or driving while being on the phone.

The vast majority of current research on human multitasking is based on performance of two tasks simultaneously, usually involving driving while performing another task such as texting, eating, or even speaking to passengers in the vehicle or with a friend over a cellphone. This research reveals that the human attentional system has limits for what it can process: driving performance is worse while engaged in other tasks; drivers make more mistakes, brake harder and later, get into more accidents, veer into other lanes, and/or are less aware of their surroundings when engaged in the previously discussed tasks.

There has been little difference found between speaking on a hands-free cell phone and a hand-held cell phone, which suggests that it is the strain on the attentional system that causes problems, rather than what the driver is doing with his or her hands. While speaking with a passenger is as cognitively demanding as speaking with a friend over the phone, passengers are able to change the conversation based upon the needs of the driver. For example, if traffic intensifies, a passenger may stop talking to allow the driver to navigate the increasingly difficult roadway; a conversation partner over the phone would not be aware of the change in environment.

There have been multiple theories regarding divided attention. One, conceived by cognitive scientist Daniel Kahneman, explains that there is a single pool of attentional resources that can be freely divided among multiple tasks. This model seems oversimplified, however, because of the different modalities (e.g., visual, auditory, verbal) in which stimuli are perceived. When two simultaneous tasks use the same modality, such as listening to a radio station and writing a paper, it is much more difficult to concentrate on both because the tasks are likely to interfere with each other. The specific modality model was theorized by cognitive psychologists David Navon and Daniel Gopher in 1979. However, more recent research using well-controlled dual-task paradigms points to the importance of the tasks themselves.

As an alternative, resource theory has been proposed as a more accurate metaphor for explaining divided attention on complex tasks. Resource theory states that as each complex task is automatized, performing that task requires less of the individual's limited-capacity attentional resources. Other variables play a part in our ability to pay attention to and concentrate on many tasks at once. These include, but are not limited to, anxiety, arousal, task difficulty, and skills.

Simultaneous

Simultaneous attention is a type of attention characterized by attending to multiple events at the same time. Simultaneous attention is demonstrated by children in Indigenous communities, who learn through this type of attention to their surroundings. Simultaneous attention is present in the ways in which children of Indigenous backgrounds interact both with their surroundings and with other individuals. Simultaneous attention requires focus on multiple simultaneous activities or occurrences. This differs from multitasking, which is characterized by alternating attention and focus between multiple activities, or halting one activity before switching to the next.

Simultaneous attention involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. Indigenous heritage toddlers and caregivers in San Pedro were observed to frequently coordinate their activities with other members of a group in ways parallel to a model of simultaneous attention, whereas middle-class European-descent families in the U.S. would move back and forth between events. Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially wide, keen observers. This points to a strong cultural difference in attention management.

Alternative topics and discussions

Overt and covert orienting

Attention may be differentiated into "overt" versus "covert" orienting.

Overt orienting is the act of selectively attending to an item or location over others by moving the eyes to point in that direction. Overt orienting can be directly observed in the form of eye movements. Although overt eye movements are quite common, a distinction can be made between two types of eye movements: reflexive and controlled. Reflexive movements are commanded by the superior colliculus of the midbrain. These movements are fast and are activated by the sudden appearance of stimuli. In contrast, controlled eye movements are commanded by areas in the frontal lobe. These movements are slow and voluntary.

Covert orienting is the act of mentally shifting one's focus without moving one's eyes. Simply, it is changes in attention that are not attributable to overt eye movements. Covert orienting has the potential to affect the output of perceptual processes by governing attention to particular items or locations (for example, the activity of a V4 neuron whose receptive field lies on an attended stimulus will be enhanced by covert attention) but does not influence the information that is processed by the senses. Researchers often use "filtering" tasks to study the role of covert attention in selecting information. These tasks often require participants to observe a number of stimuli, but attend to only one.

The current view is that visual covert attention is a mechanism for quickly scanning the field of view for interesting locations. This shift in covert attention is linked to eye movement circuitry that sets up a slower saccade to that location.

There are studies that suggest the mechanisms of overt and covert orienting may not be controlled as separately and independently as previously believed. Central mechanisms that may control covert orienting, such as the parietal lobe, also receive input from subcortical centres involved in overt orienting. In support of this, general theories of attention actively assume that bottom-up (reflexive) and top-down (voluntary) processes converge on a common neural architecture, in that they control both covert and overt attentional systems. For example, if individuals attend to the right-hand corner of the visual field, movement of the eyes in that direction may have to be actively suppressed.

Covert attention has been argued to reflect the existence of processes "programming explicit ocular movement". However, this has been questioned on the grounds that N2, "a neural measure of covert attentional allocation—does not always precede eye movements". However, the researchers acknowledge, "it may be impossible to definitively rule out the possibility that some kind of shift of covert attention precedes every shift of overt attention".

Exogenous and endogenous orienting

Orienting attention is vital and can be controlled through external (exogenous) or internal (endogenous) processes. However, comparing these two processes is challenging because external signals do not operate completely exogenously, but will only summon attention and eye movements if they are important to the subject.

Exogenous (from Greek exo, meaning "outside", and genein, meaning "to produce") orienting is frequently described as being under control of a stimulus. Exogenous orienting is considered to be reflexive and automatic and is caused by a sudden change in the periphery. This often results in a reflexive saccade. Since exogenous cues are typically presented in the periphery, they are referred to as peripheral cues. Exogenous orienting can even be observed when individuals are aware that the cue will not relay reliable, accurate information about where a target is going to occur. This means that the mere presence of an exogenous cue will affect the response to other stimuli that are subsequently presented in the cue's previous location.

Several studies have investigated the influence of valid and invalid cues. They concluded that valid peripheral cues benefit performance, for instance when the peripheral cues are brief flashes at the relevant location before the onset of a visual stimulus. Psychologists Michael Posner and Yoav Cohen (1984) noted a reversal of this benefit takes place when the interval between the onset of the cue and the onset of the target is longer than about 300 ms. The phenomenon of valid cues producing longer reaction times than invalid cues is called inhibition of return.

Endogenous (from Greek endo, meaning "within" or "internally") orienting is the intentional allocation of attentional resources to a predetermined location or space. Simply stated, endogenous orienting occurs when attention is oriented according to an observer's goals or desires, allowing the focus of attention to be manipulated by the demands of a task. In order to have an effect, endogenous cues must be processed by the observer and acted upon purposefully. These cues are frequently referred to as central cues. This is because they are typically presented at the center of a display, where an observer's eyes are likely to be fixated. Central cues, such as an arrow or digit presented at fixation, tell observers to attend to a specific location.

When examining differences between exogenous and endogenous orienting, some researchers suggest that there are four differences between the two kinds of cues:

  • exogenous orienting is less affected by cognitive load than endogenous orienting;
  • observers are able to ignore endogenous cues but not exogenous cues;
  • exogenous cues have bigger effects than endogenous cues; and
  • expectancies about cue validity and predictive value affect endogenous orienting more than exogenous orienting.

There exist both overlaps and differences in the areas of the brain that are responsible for endogenous and exogenous orienting. Another approach to this discussion has been covered under the topic heading of "bottom-up" versus "top-down" orientations to attention. Researchers of this school have described two different aspects of how the mind focuses attention on items present in the environment. The first aspect is called bottom-up processing, also known as stimulus-driven attention or exogenous attention. It describes attentional processing driven by the properties of the objects themselves. Some stimuli, such as motion or a sudden loud noise, can attract our attention in a pre-conscious, or non-volitional, way. We attend to them whether we want to or not. These aspects of attention are thought to involve parietal and temporal cortices, as well as the brainstem. More recent experimental evidence supports the idea that the primary visual cortex creates a bottom-up saliency map, which is received by the superior colliculus in the midbrain to guide attention or gaze shifts.

The second aspect is called top-down processing, also known as goal-driven, endogenous attention, attentional control or executive attention. This aspect of our attentional orienting is under the control of the person who is attending. It is mediated primarily by the frontal cortex and basal ganglia as one of the executive functions. Research has shown that it is related to other aspects of the executive functions, such as working memory, and conflict resolution and inhibition.

Influence of processing load

A "hugely influential" theory regarding selective attention is the perceptual load theory, which states that there are two mechanisms that affect attention: cognitive and perceptual. The perceptual mechanism considers the subject's ability to perceive or ignore stimuli, both task-related and non task-related. Studies show that if there are many stimuli present (especially if they are task-related), it is much easier to ignore the non-task related stimuli, but if there are few stimuli the mind will perceive the irrelevant stimuli as well as the relevant. The cognitive mechanism refers to the actual processing of the stimuli. Studies regarding this showed that the ability to process stimuli decreased with age, meaning that younger people were able to perceive more stimuli and fully process them, but were likely to process both relevant and irrelevant information, while older people could process fewer stimuli, but usually processed only relevant information.

Some people can process multiple stimuli; for example, trained Morse code operators have been able to copy 100% of a message while carrying on a meaningful conversation. This relies on the reflexive response due to "overlearning" the skill of Morse code reception/detection/transcription, so that it is an automatic function requiring no specific attention to perform. This overtraining of the brain comes as the "practice of a skill [surpasses] 100% accuracy," allowing the activity to become automatic while the mind has room to process other actions simultaneously.

Perceptual load theory assumes that attentional resources have a limited capacity and that all of those resources must be used on a given task. Performance, however, is typically measured only in terms of accuracy and reaction time (RT). This is a limitation of the literature: such scores say little about how attention is distributed over time and space, which affects conclusions about both the cognitive and the perceptual mechanisms. Analyzing only how accurately and how quickly a task is completed therefore gives a rather narrow picture of the overall cognitive ability to process multiple stimuli through perception.

Clinical model

Attention is best described as the sustained focus of cognitive resources on information while filtering or ignoring extraneous information. Attention is a very basic function that is often a precursor to all other neurological/cognitive functions. As is frequently the case, clinical models of attention differ from investigation models. One of the most used models for the evaluation of attention in patients with very different neurologic pathologies is the model of Sohlberg and Mateer. This hierarchical model is based on the recovery of attention processes in brain-damaged patients after coma. The model describes five kinds of activities of increasing difficulty, connected to the activities those patients could perform as their recovery progressed.

  • Focused attention: The ability to respond discretely to specific sensory stimuli.
  • Sustained attention (vigilance and concentration): The ability to maintain a consistent behavioral response during continuous and repetitive activity.
  • Selective attention: The ability to maintain a behavioral or cognitive set in the face of distracting or competing stimuli. Therefore, it incorporates the notion of "freedom from distractibility."
  • Alternating attention: The ability of mental flexibility that allows individuals to shift their focus of attention and move between tasks having different cognitive requirements.
  • Divided attention: This refers to the ability to respond simultaneously to multiple tasks or multiple task demands.

This model has been shown to be very useful in evaluating attention in very different pathologies, correlates strongly with daily difficulties and is especially helpful in designing stimulation programs such as attention process training, a rehabilitation program for neurological patients of the same authors.

Other descriptors for types of attention

  • Mindfulness: Mindfulness has been conceptualized as a clinical model of attention. Mindfulness practices are clinical interventions that emphasize training attention functions.
  • Vigilant attention: Remaining focused on a non-arousing stimulus or uninteresting task for a sustained period is far more difficult than attending to arousing stimuli and interesting tasks, and requires a specific type of attention called 'vigilant attention'. Thereby, vigilant attention is the ability to give sustained attention to a stimulus or task that might ordinarily be insufficiently engaging to prevent our attention being distracted by other stimuli or tasks.

Neural correlates

Most experiments show that one neural correlate of attention is enhanced firing. If a neuron responds differently to a stimulus when an animal attends to it than when it does not, that response is typically enhanced during attention, even though the physical characteristics of the stimulus remain the same.

In a 2007 review, Professor Eric Knudsen describes a more general model which identifies four core processes of attention, with working memory at the center (a rough sketch of how these processes might interact follows the list):

  • Working memory temporarily stores information for detailed analysis.
  • Competitive selection is the process that determines which information gains access to working memory.
  • Through top-down sensitivity control, higher cognitive processes can regulate signal intensity in information channels that compete for access to working memory, and thus give them an advantage in the process of competitive selection. Through top-down sensitivity control, the momentary content of working memory can influence the selection of new information, and thus mediate voluntary control of attention in a recurrent loop (endogenous attention).
  • Bottom-up saliency filters automatically enhance the response to infrequent stimuli, or stimuli of instinctive or learned biological relevance (exogenous attention).
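The interaction of these four processes can be pictured with a toy simulation. The sketch below is a loose, hypothetical rendering of the framework in code, not an implementation from Knudsen's review: bottom-up salience and top-down gains jointly score competing channels, the winner enters working memory, and working memory in turn biases the gains on the next cycle. All channel names and numeric values are invented for illustration.

    import random

    # Hypothetical information channels competing for access to working memory.
    CHANNELS = ["visual", "auditory", "tactile"]

    def bottom_up_salience() -> dict:
        """Salience filter: stand-in random values for the strength of
        incoming sensory signals (exogenous attention)."""
        return {ch: random.random() for ch in CHANNELS}

    def competitive_selection(salience: dict, gains: dict) -> str:
        """The channel with the highest gain-weighted salience gains access
        to working memory."""
        return max(CHANNELS, key=lambda ch: salience[ch] * gains[ch])

    def top_down_sensitivity_control(working_memory: list, gains: dict) -> dict:
        """Working-memory content boosts the gain on its own channel,
        closing the recurrent loop of voluntary (endogenous) attention."""
        new_gains = dict(gains)
        if working_memory:
            new_gains[working_memory[-1]] += 0.2  # arbitrary illustrative boost
        return new_gains

    def run(cycles: int = 5) -> None:
        gains = {ch: 1.0 for ch in CHANNELS}
        working_memory: list = []
        for t in range(cycles):
            salience = bottom_up_salience()
            winner = competitive_selection(salience, gains)
            working_memory.append(winner)  # temporarily stored for detailed analysis
            gains = top_down_sensitivity_control(working_memory, gains)
            print(f"cycle {t}: attended channel = {winner}")

    run()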

Neurally, at different hierarchical levels spatial maps can enhance or inhibit activity in sensory areas, and induce orienting behaviors like eye movement.

  • At the top of the hierarchy, the frontal eye fields (FEF) and the dorsolateral prefrontal cortex contain a retinocentric spatial map. Microstimulation in the FEF induces monkeys to make a saccade to the relevant location. Stimulation at levels too low to induce a saccade will nonetheless enhance cortical responses to stimuli located in the relevant area.
  • At the next lower level, a variety of spatial maps are found in the parietal cortex. In particular, the lateral intraparietal area (LIP) contains a saliency map and is interconnected both with the FEF and with sensory areas.
  • Exogenous attentional guidance in humans and monkeys is provided by a bottom-up saliency map in the primary visual cortex. In lower vertebrates, this saliency map is more likely in the superior colliculus (optic tectum).
  • Certain automatic responses that influence attention, like orienting to a highly salient stimulus, are mediated subcortically by the superior colliculi.
  • At the neural network level, it is thought that processes like lateral inhibition mediate the process of competitive selection.

In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity.

Another commonly used model of the attention system has been put forth by researchers such as Michael Posner. He divides attention into three functional components: alerting, orienting, and executive attention, which can also interact and influence each other.

Cultural variation

Children appear to develop patterns of attention related to the cultural practices of their families, communities, and the institutions in which they participate.

In 1955, Jules Henry suggested that there are societal differences in sensitivity to signals from many ongoing sources that call for the awareness of several levels of attention simultaneously. He tied his speculation to ethnographic observations of communities in which children are involved in a complex social community with multiple relationships.

Many Indigenous children in the Americas predominantly learn by observing and pitching in. Several studies support that the use of keen attention toward learning is much more common in Indigenous communities of North and Central America than in a middle-class European-American setting. This is a direct result of the Learning by Observing and Pitching In model.

Keen attention is both a requirement and result of learning by observing and pitching-in. Incorporating the children in the community gives them the opportunity to keenly observe and contribute to activities that were not directed towards them. It can be seen from different Indigenous communities and cultures, such as the Mayans of San Pedro, that children can simultaneously attend to multiple events. Most Maya children have learned to pay attention to several events at once in order to make useful observations.

One example is simultaneous attention which involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. San Pedro toddlers and caregivers frequently coordinated their activities with other members of a group in multiway engagements rather than in a dyadic fashion. Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially keen observers.

This learning by observing and pitching-in model requires active levels of attention management. The child is present while caretakers engage in daily activities and responsibilities such as: weaving, farming, and other skills necessary for survival. Being present allows the child to focus their attention on the actions being performed by their parents, elders, and/or older siblings. In order to learn in this way, keen attention and focus is required. Eventually the child is expected to be able to perform these skills themselves.

Modelling

In the domain of computer vision, efforts have been made to model the mechanism of human attention, especially the bottom-up attentional mechanism and its semantic significance in the classification of video contents. Both spatial attention and temporal attention have been incorporated in such classification efforts.

Generally speaking, there are two kinds of models for mimicking the bottom-up salience mechanism in static images. The first is based on spatial contrast analysis: for example, a center–surround mechanism has been used to define salience across scales, inspired by the putative neural mechanism. It has also been hypothesized that some visual inputs are intrinsically salient in certain background contexts and that these are actually task-independent. This model has established itself as the exemplar for salience detection and is consistently used for comparison in the literature. The second kind of model is based on frequency-domain analysis. This approach, called SR (spectral residual), was first proposed by Hou et al.; the PQFT method was introduced later. Both SR and PQFT use only the phase information. In 2012, the HFT method was introduced, which makes use of both the amplitude and the phase information. The Neural Abstraction Pyramid is a hierarchical recurrent convolutional model which incorporates bottom-up and top-down flow of information to iteratively interpret images.
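For the frequency-domain family of models, the core idea can be sketched briefly. The following is a minimal, assumption-laden rendering of a spectral-residual-style (SR) saliency map using NumPy and SciPy; the filter sizes and the synthetic test image are illustrative choices, not the parameters of the published method.

    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(image: np.ndarray) -> np.ndarray:
        """Minimal sketch of a spectral-residual-style saliency map.
        image: 2-D grayscale float array. Returns a map of the same shape,
        normalized to [0, 1]."""
        spectrum = np.fft.fft2(image)
        log_amplitude = np.log(np.abs(spectrum) + 1e-8)
        phase = np.angle(spectrum)

        # The spectral residual is what remains after subtracting the locally
        # averaged ("expected") part of the log-amplitude spectrum.
        residual = log_amplitude - uniform_filter(log_amplitude, size=3)

        # Reconstruct with the residual amplitude and the original phase,
        # then smooth the squared magnitude to obtain the saliency map.
        reconstruction = np.fft.ifft2(np.exp(residual + 1j * phase))
        saliency = gaussian_filter(np.abs(reconstruction) ** 2, sigma=2.5)
        return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

    # Example usage on a synthetic image: a bright patch on a noisy background
    # should stand out in the resulting map.
    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 0.05, size=(128, 128))
    img[40:60, 70:90] += 1.0
    saliency_map = spectral_residual_saliency(img)
    print(saliency_map.shape, float(saliency_map.max()))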

Hemispatial neglect

Hemispatial neglect, also called unilateral neglect, often occurs when people have damage to the right hemisphere of their brain. This damage often leads to a tendency to ignore the left side of one's body or even the left side of an object that can be seen. Damage to the left side of the brain (the left hemisphere) rarely yields significant neglect of the right side of the body or object in the person's local environments.

The effects of spatial neglect, however, may vary and differ depending on what area of the brain was damaged. Damage to different neural substrates can result in different types of neglect. Attention disorders (lateralized and nonlateralized) may also contribute to the symptoms and effects. Much research has asserted that damage to gray matter within the brain results in spatial neglect.

New technology has yielded more information, such that there is a large, distributed network of frontal, parietal, temporal, and subcortical brain areas that have been tied to neglect. This network can be related to other research as well; the dorsal attention network is tied to spatial orienting. Damage to this network may result in patients neglecting their left side when distracted by their right side or by an object on their right side.

Attention in social contexts

Social attention is one special form of attention that involves the allocation of limited processing resources in a social context. Previous studies on social attention often examined how attention is directed toward socially relevant stimuli such as faces and the gaze directions of other individuals. In contrast to attending-to-others, a different line of research has shown that self-related information such as one's own face and name automatically captures attention and is preferentially processed compared to other-related information. These contrasting effects between attending-to-others and attending-to-self prompted a synthetic view in a recent Opinion article proposing that social attention operates at two polarizing states: at one extreme, individuals tend to attend to the self and prioritize self-related information over others'; at the other extreme, attention is allocated to other individuals to infer their intentions and desires. Attending-to-self and attending-to-others mark the two ends of an otherwise continuous spectrum of social attention. For a given behavioral context, the mechanisms underlying these two polarities might interact and compete with each other to determine a saliency map of social attention that guides our behaviors. An imbalanced competition between these two behavioral and cognitive processes has been linked to cognitive disorders and neurological conditions such as autism spectrum disorders and Williams syndrome.

Distracting factors

According to Daniel Goleman's book, Focus: The Hidden Driver of Excellence, there are two types of distracting factors affecting focus – sensory and emotional.

A sensory distracting factor would be, for example, the white field surrounding this text, which a person neglects while reading the article.

An emotional distracting factor would be when someone is focused on answering an email and somebody shouts their name. It would be almost impossible to neglect the voice speaking it; attention is immediately directed toward the source. Positive emotions have also been found to affect attention: induction of happiness has led to increased response times and an increase in inaccurate responses in the face of irrelevant stimuli. There are two possible theories as to why emotions might make one more susceptible to distracting stimuli. One is that emotions take up too much of one's cognitive resources, making it harder to control the focus of attention. The other is that emotions make it harder to filter out distractions, specifically with positive emotions due to a feeling of security.

Another distracting factor for attention processes is insufficient sleep. Sleep deprivation has been found to impair cognition, specifically performance in divided attention. Divided attention is possibly linked to circadian processes.

Failure to attend

Inattentional blindness was first introduced in 1998 by Arien Mack and Irvin Rock. Their studies show that when people are focused on specific stimuli, they often miss other stimuli that are clearly present. Actual blindness is not occurring here; the "blindness" is due to the perceptual load of what is being attended to. Building on the experiment performed by Mack and Rock, Ula Finch and Nilli Lavie tested participants with a perceptual task. They presented subjects with a cross, one arm longer than the other, for five trials. On the sixth trial, a white square was added to the top left of the screen. Only 2 of 10 participants (20%) actually saw the square. This suggests that the more closely participants attended to the length of the cross's arms, the more likely they were to miss an object that was in plain sight.

Change blindness was first tested by Rensink and coworkers in 1997. Their studies show that people have difficulty detecting changes from scene to scene, due to intense focus on one thing or a lack of attention overall. Rensink tested this by presenting a picture, then a blank field, and then the same picture with an item missing. The results showed that the pictures had to be alternated back and forth many times for participants to notice the difference. This idea is well illustrated by films with continuity errors: many people do not pick up on the differences, even though the changes tend to be significant.

History of the study

Philosophical period

Psychologist Daniel E. Berlyne credits the first extended treatment of attention to philosopher Nicolas Malebranche in his work "The Search After Truth". "Malebranche held that we have access to ideas, or mental representations of the external world, but not direct access to the world itself." Thus in order to keep these ideas organized, attention is necessary. Otherwise we will confuse these ideas. Malebranche writes in "The Search After Truth", "because it often happens that the understanding has only confused and imperfect perceptions of things, it is truly a cause of our errors.... It is therefore necessary to look for means to keep our perceptions from being confused and imperfect. And, because, as everyone knows, there is nothing that makes them clearer and more distinct than attentiveness, we must try to find the means to become more attentive than we are". According to Malebranche, attention is crucial to understanding and keeping thoughts organized.

Philosopher Gottfried Wilhelm Leibniz introduced the concept of apperception to this philosophical approach to attention. Apperception refers to "the process by which new experience is assimilated to and transformed by the residuum of past experience of an individual to form a new whole." Apperception is required for a perceived event to become a conscious event. Leibniz emphasized a reflexive, involuntary view of attention known as exogenous orienting. However, there is also endogenous orienting, which is voluntary and directed attention. Philosopher Johann Friedrich Herbart agreed with Leibniz's view of apperception; however, he expounded on it by saying that new experiences had to be tied to ones already existing in the mind. Herbart was also the first person to stress the importance of applying mathematical modeling to the study of psychology.

Throughout the philosophical era, various thinkers made significant contributions to the field of attention studies, beginning with research on the extent of attention and how attention is directed. In the beginning of the 19th century, it was thought that people were not able to attend to more than one stimulus at a time. However, with research contributions by Sir William Hamilton, 9th Baronet, this view changed. Hamilton proposed a view of attention that likened its capacity to holding marbles: one can only hold a certain number of marbles at a time before they start to spill. His view states that we can attend to more than one stimulus at once. William Stanley Jevons later expanded this view and stated that we can attend to up to four items at a time.

1860–1909

This period of attention research took the focus from conceptual findings to experimental testing. It also involved psychophysical methods that allowed measurement of the relation between physical stimulus properties and the psychological perceptions of them. This period covers the development of attentional research from the founding of psychology to 1909.

Wilhelm Wundt introduced the study of attention to the field of psychology. Wundt measured mental processing speed by likening it to differences in stargazing measurements. Astronomers of the time measured how long it took for a star to travel across a reference point, and when different astronomers recorded these times, there were personal differences in their measurements, so each astronomer produced different readings and reports. To correct for this, a personal equation was developed. Wundt applied this to mental processing speed. Wundt realized that the time it takes to see the stimulus of the star and write down the time, although it had been called an "observation error", was actually the time it takes to switch voluntarily one's attention from one stimulus to another. Wundt called his school of psychology voluntarism. It was his belief that psychological processes can only be understood in terms of goals and consequences.

Franciscus Donders used mental chronometry to study attention and it was considered a major field of intellectual inquiry by authors such as Sigmund Freud. Donders and his students conducted the first detailed investigations of the speed of mental processes. Donders measured the time required to identify a stimulus and to select a motor response. This was the time difference between stimulus discrimination and response initiation. Donders also formalized the subtractive method which states that the time for a particular process can be estimated by adding that process to a task and taking the difference in reaction time between the two tasks. He also differentiated between three types of reactions: simple reaction, choice reaction, and go/no-go reaction.
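The subtractive method lends itself to a short worked example. The reaction times below are invented for the sake of the arithmetic (they are not Donders's historical data); the point is only that the duration of a processing stage is estimated as the difference in reaction time between a task that includes the stage and one that does not.

    # Illustrative (invented) mean reaction times in milliseconds for
    # Donders's three task types.
    simple_rt = 220    # simple reaction: one known stimulus, one known response
    go_no_go_rt = 290  # go/no-go: adds stimulus discrimination
    choice_rt = 350    # choice reaction: adds response selection

    # Subtractive method: stage duration = RT(task with stage) - RT(task without it).
    discrimination_time = go_no_go_rt - simple_rt        # 70 ms in this example
    response_selection_time = choice_rt - go_no_go_rt    # 60 ms in this example

    print(f"stimulus discrimination ~ {discrimination_time} ms")
    print(f"response selection ~ {response_selection_time} ms")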

Hermann von Helmholtz also contributed to the field of attention relating to the extent of attention. Von Helmholtz stated that it is possible to focus on one stimulus and still perceive or ignore others. An example of this is being able to focus on the letter u in the word house and still perceiving the letters h, o, s, and e.

One major debate in this period was whether it was possible to attend to two things at once (split attention). Walter Benjamin described this experience as "reception in a state of distraction." This disagreement could only be resolved through experimentation.

In 1890, William James, in his textbook The Principles of Psychology, remarked:

Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.

James differentiated between sensorial attention and intellectual attention. Sensorial attention is attention directed to objects of sense, stimuli that are physically present. Intellectual attention is attention directed to ideal or represented objects, stimuli that are not physically present. James also distinguished between immediate and derived attention: attention to objects that are interesting in themselves versus objects that owe their interest to association with something else. According to James, attention has five major effects: it works to make us perceive, conceive, distinguish, remember, and shorten reaction time.

1910–1949

During this period, research in attention waned and interest in behaviorism flourished, leading some, like Ulric Neisser, to believe that in this period "there was no research on attention". However, Jersild published very important work on "Mental Set and Shift" in 1927. He stated, "The fact of mental set is primary in all conscious activity. The same stimulus may evoke any one of a large number of responses depending upon the contextual setting in which it is placed". This research found that the time to complete a list was longer for mixed lists than for pure lists. For example, a list consisting only of names of animals is processed faster than a list of the same length that mixes names of animals, books, makes and models of cars, and types of fruit. This is task switching.

In 1931, Telford discovered the psychological refractory period: the stimulation of neurons is followed by a refractory phase during which neurons are less sensitive to stimulation. In 1935 John Ridley Stroop developed the Stroop Task, which elicited the Stroop Effect. Stroop's task showed that irrelevant stimulus information can have a major impact on performance. In this task, subjects looked at a list of color words, each typed in an ink color different from the color the word names. For example, the word Blue would be typed in orange ink, Pink in black, and so on.

Example: Blue Purple Red Green Purple Green

Subjects were then instructed to say the name of the ink color and ignore the text. It took 110 seconds to complete a list of this type compared to 63 seconds to name the colors when presented in the form of solid squares. The naming time nearly doubled in the presence of conflicting color words, an effect known as the Stroop Effect.

1950–1974

In the 1950s, research psychologists renewed their interest in attention when the dominant epistemology shifted from positivism (i.e., behaviorism) to realism during what has come to be known as the "cognitive revolution". The cognitive revolution admitted unobservable cognitive processes like attention as legitimate objects of scientific study.

Modern research on attention began with the analysis of the "cocktail party problem" by Colin Cherry in 1953. At a cocktail party how do people select the conversation that they are listening to and ignore the rest? This problem is at times called "focused attention", as opposed to "divided attention". Cherry performed a number of experiments which became known as dichotic listening and were extended by Donald Broadbent and others. In a typical experiment, subjects would use a set of headphones to listen to two streams of words in different ears and selectively attend to one stream. After the task, the experimenter would question the subjects about the content of the unattended stream.

Broadbent's Filter Model of Attention states that information is held in a pre-attentive temporary store, and only sensory events that have some physical feature in common are selected to pass into the limited capacity processing system. This implies that the meaning of unattended messages is not identified. Also, a significant amount of time is required to shift the filter from one channel to another. Experiments by Gray and Wedderburn and later Anne Treisman pointed out various problems in Broadbent's early model and eventually led to the Deutsch–Norman model in 1968. In this model, no signal is filtered out, but all are processed to the point of activating their stored representations in memory. The point at which attention becomes "selective" is when one of the memory representations is selected for further processing. At any time, only one can be selected, resulting in the attentional bottleneck.

This debate became known as the early-selection vs. late-selection models. In the early selection models (first proposed by Donald Broadbent), attention shuts down (in Broadbent's model) or attenuates (in Treisman's refinement) processing in the unattended ear before the mind can analyze its semantic content. In the late selection models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both ears is analyzed semantically, but the words in the unattended ear cannot access consciousness. Lavie's perceptual load theory, however, "provided elegant solution to" what had once been a "heated debate".

Omnipotence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Omnipotence
Image: Separation of Light from Darkness by Michelangelo

Omnipotence is the quality of having unlimited power. Monotheistic religions generally attribute omnipotence only to the deity of their faith. In the monotheistic religious philosophy of Abrahamic religions, omnipotence is often listed as one of God's characteristics, along with omniscience, omnipresence, and omnibenevolence. The presence of all these properties in a single entity has given rise to considerable theological debate, prominently including the problem of evil, the question of why such a deity would permit the existence of evil. It is accepted in philosophy and science that omnipotence can never be effectively understood.

Etymology

The word omnipotence derives from the Latin prefix omni-, meaning "all", and the word potens, meaning "potent" or "powerful". Thus the term means "all-powerful".

Meanings

Scholasticism

The term omnipotent has been used to connote a number of different positions. These positions include, but are not limited to, the following:

  1. A deity is able to do anything that it chooses to do. (In this version, God can do the impossible and something contradictory.)
  2. A deity is able to do anything that is in accord with its own nature (thus, for instance, if it is a logical consequence of a deity's nature that what it speaks is truth, then it is not able to lie).
  3. It is part of a deity's nature to be consistent and that it would be inconsistent for said deity to go against its own laws unless there was a reason to do so.

Thomas Aquinas acknowledged difficulty in comprehending the deity's power: "All confess that God is omnipotent; but it seems difficult to explain in what His omnipotence precisely consists: for there may be doubt as to the precise meaning of the word 'all' when we say that God can do all things. If, however, we consider the matter aright, since power is said in reference to possible things, this phrase, 'God can do all things,' is rightly understood to mean that God can do all things that are possible; and for this reason He is said to be omnipotent." In Scholasticism, omnipotence is generally understood to be compatible with certain limitations or restrictions. A proposition that is necessarily true is one whose negation is self-contradictory.

It is sometimes objected that this aspect of omnipotence involves the contradiction that God cannot do all that He can do; but the argument is sophistical; it is no contradiction to assert that God can realize whatever is possible, but that no number of actualized possibilities exhausts His power. Omnipotence is perfect power, free from all mere potentiality. Hence, although God does not bring into external being all that He is able to accomplish, His power must not be understood as passing through successive stages before its effect is accomplished. The activity of God is simple and eternal, without evolution or change. The transition from possibility to actuality, or from act to potentiality, occurs only in creatures. When it is said that God can or could do a thing, the terms are not to be understood in the sense in which they are applied to created causes, but as conveying the idea of a Being, the range of Whose activity is limited only by His sovereign Will.

Aquinas says that:

Power is predicated of God not as something really distinct from His knowledge and will, but as differing from them logically; inasmuch as power implies a notion of a principle putting into execution what the will commands, and what knowledge directs, which three things in God are identified. Or we may say, that the knowledge or will of God, according as it is the effective principle, has the notion of power contained in it. Hence the consideration of the knowledge and will of God precedes the consideration of His power, as the cause precedes the operation and effect.

The adaptation of means to ends in the universe does not argue, as John Stuart Mill would have it, that the power of the designer is limited, but only that God has willed to manifest his glory by a world so constituted rather than by another. Indeed, the production of secondary causes, capable of accomplishing certain effects, requires greater power than the direct accomplishment of these same effects. On the other hand, even though no creature existed, God's power would not be barren, for "creatures are not an end to God." Regarding the deity's power, medieval theologians contended that there are certain things that even an omnipotent deity cannot do. The statement "a deity can do anything" is only sensible with an assumed suppressed clause, "that implies the perfection of true power". This standard scholastic answer allows that acts of creatures such as walking can be performed by humans but not by a deity. Rather than an advantage in power, human acts such as walking, sitting, or giving birth were possible only because of a defect in human power. The capacity to sin, for example, is not a power but a defect or infirmity. In response to questions of a deity performing impossibilities, e.g. making square circles, Aquinas says that "everything that does not imply a contradiction in terms, is numbered amongst those possible things, in respect of which God is called omnipotent: whereas whatever implies contradiction does not come within the scope of divine omnipotence, because it cannot have the aspect of possibility. Hence it is better to say that such things cannot be done, than that God cannot do them. Nor is this contrary to the word of the angel, saying: 'No word shall be impossible with God.' For whatever implies a contradiction cannot be a word, because no intellect can possibly conceive such a thing."

C. S. Lewis adopted a scholastic position in his work The Problem of Pain. Lewis follows Aquinas' view on contradiction:

His Omnipotence means power to do all that is intrinsically possible, not to do the intrinsically impossible. You may attribute miracles to him, but not nonsense. This is no limit to his power. If you choose to say 'God can give a creature free will and at the same time withhold free will from it,' you have not succeeded in saying anything about God: meaningless combinations of words do not suddenly acquire meaning simply because we prefix to them the two other words 'God can.'... It is no more possible for God than for the weakest of his creatures to carry out both of two mutually exclusive alternatives; not because his power meets an obstacle, but because nonsense remains nonsense even when we talk it about God.

As a stage of normal child development

Sigmund Freud freely used the same term in a comparable way. Referring, in the case of an adult neurotic, to "the omnipotence which he ascribed to his thoughts and feelings", Freud reckoned that "this belief is a frank acknowledgement of a relic of the old megalomania of infancy". Similarly, Freud concluded that "we can detect an element of megalomania in most other forms of paranoic disorder. We are justified in assuming that this megalomania is essentially of an infantile nature and that, as development proceeds, it is sacrificed to social considerations". Freud saw megalomania as an obstacle to psychoanalysis. In the second half of the 20th century, object relations theory, both in the United States and among British Kleinians, set about "rethinking megalomania... intent on transforming an obstacle... into a complex organization that linked object relations and defence mechanisms" in such a way as to offer new "prospects for therapy".

Edmund Bergler, one of his early followers, considered that "as Freud and Ferenczi have shown, the child lives in a sort of megalomania for a long period; he knows only one yardstick, and that is his own over-inflated ego ... megalomania, it must be understood, is normal in the very young child". Bergler was of the opinion that in later life "the activity of gambling in itself unconsciously activates the megalomania and grandiosity of childhood, reverting to the "fiction of omnipotence"".

Heinz Kohut regarded the narcissistic patient's "megalomania" as a part of normal development.

D. W. Winnicott took a more positive view of a belief in early omnipotence, seeing it as essential to the child's well-being; and "good-enough" mothering as essential to enable the child to "cope with the immense shock of loss of omnipotence"—as opposed to whatever "prematurely forces it out of its narcissistic universe".

Rejection or limitation

Some monotheists reject the view that a deity is or could be omnipotent, or take the view that, by choosing to create creatures with free will, a deity has chosen to limit divine omnipotence. In Conservative and Reform Judaism, and some movements within Protestant Christianity, including open theism, deities are said to act in the world through persuasion, and not by coercion (this is a matter of choice—a deity could act miraculously, and perhaps on occasion does so—while for process theism it is a matter of necessity—creatures have inherent powers that a deity cannot, even in principle, override). Deities are manifested in the world through inspiration and the creation of possibility, not necessarily by miracles or violations of the laws of nature.

Philosophical grounds

Process theology rejects unlimited omnipotence on a philosophical basis, arguing that omnipotence as classically understood would be less than perfect, and is therefore incompatible with the idea of a perfect deity. The idea is grounded in Plato's oft-overlooked statement that "being is power".

My notion would be, that anything which possesses any sort of power to affect another, or to be affected by another, if only for a single moment, however trifling the cause and however slight the effect, has real existence; and I hold that the definition of being is simply power.

— Plato, Sophist, 247E

From this premise, Charles Hartshorne argues further that:

Power is influence, and perfect power is perfect influence ... power must be exercised upon something, at least if by power we mean influence, control; but the something controlled cannot be absolutely inert, since the merely passive, that which has no active tendency of its own, is nothing; yet if the something acted upon is itself partly active, then there must be some resistance, however slight, to the "absolute" power, and how can power which is resisted be absolute?

— Hartshorne, 89

The argument can be stated as follows:

  1. If a being exists, then it must have some active tendency.
  2. If a being has some active tendency, then it has some power to resist its creator.
  3. If a being has the power to resist its creator, then the creator does not have absolute power.

For example, although someone might control a lump of jelly-pudding almost completely, the inability of that pudding to stage any resistance renders that person's power rather unimpressive. Power can only be said to be great if it is over something that has defenses and its own agenda. If a deity's power is to be great, it must therefore be over beings that have at least some of their own defenses and agenda. Thus, if a deity does not have absolute power, it must embody some of the characteristics of power and some of the characteristics of persuasion. This view is known as dipolar theism.
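
For readers who find a symbolic form helpful, the argument can be compressed into a short chain of conditionals. This is only a sketch: the predicate letters E (exists), T (has an active tendency), R (can resist its creator), and the proposition A (the creator has absolute power) are introduced here for illustration and are not Hartshorne's own notation.

\[
\begin{aligned}
P_1 &:\; E(x) \rightarrow T(x) \\
P_2 &:\; T(x) \rightarrow R(x) \\
P_3 &:\; R(x) \rightarrow \lnot A \\
\therefore \;&\; E(x) \rightarrow \lnot A
\end{aligned}
\]

So long as at least one created thing exists, the chain concludes that its creator's power cannot be absolute, which is the move the jelly-pudding example above makes informally.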

The most popular works espousing this view are those of Harold Kushner (in Judaism). The need for a modified view of omnipotence was also articulated by Alfred North Whitehead in the early 20th century and expanded upon by Charles Hartshorne. Hartshorne proceeded within the context of the theological system known as process theology.

Thomas Jay Oord argues that omnipotence dies a death of a thousand philosophical qualifications. To make any sense, the word must undergo various logical, ontological, mathematical, theological, and existential qualifications so that it loses specificity.

Scriptural grounds

In the Authorized King James Version of the Bible, as well as several other versions, in Revelation 19:6 it is stated "the Lord God omnipotent reigneth" (Ancient Greek: παντοκράτωρ, romanized: pantokrator, "all-mighty").

Thomas Jay Oord argues that omnipotence is not found in the Hebrew and Greek scriptures. The Hebrew words Shaddai (breasts) and Sabaoth (hosts) are wrongly translated as "God almighty" or "divine omnipotence". Pantokrator, the Greek word in the New Testament and Septuagint often translated in English as "almighty", actually means "all-holding" rather than almighty or omnipotent. Oord offers an alternative view of divine power he calls "amipotence," which is the maximal power of God's uncontrolling love.

Uncertainty

Trying to develop a theory to explain, assign, or reject omnipotence on grounds of logic has little merit, since being omnipotent, in a Cartesian sense, would mean the omnipotent being is above logic, a view supported by René Descartes. He advances this idea in his Meditations on First Philosophy. This view is called universal possibilism.

According to Hindu philosophy the essence of Brahman can never be understood or known since Brahman is beyond both existence and non-existence, transcending and including time, causation and space, and thus can never be known in the same material sense as one traditionally "understands" a given concept or object.

Theodicy and the Bible

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Theodicy_and_the_Bible

Relating theodicy and the Bible is crucial to understanding Abrahamic theodicy because the Bible "has been, both in theory and in fact, the dominant influence upon ideas about God and evil in the Western world". Theodicy, in its most common form, is the attempt to answer the question of why a good God permits the manifestation of evil. Theodicy attempts to resolve the evidential problem of evil by reconciling the traditional divine characteristics of omnibenevolence and omnipotence, in either their absolute or relative form, with the occurrence of evil or suffering in the world.

Theodicy is an "intensely urgent" and "constant concern" of "the entire Bible". The Bible raises the issue of theodicy by its portrayals of God as inflicting evil and by its accounts of people who question God's goodness by their angry indictments. However, the Bible "contains no comprehensive theodicy".

The most common theodicy is free will theodicy, which lays the blame for all moral evil and some natural evil on humanity's misuse of its free will.

God and evil in the Bible

Barry Whitney gives the reason why a theodicy is important for those who believe in the biblical God when he observes that "it is the believer in God, more so than the skeptic, who is forced to come to terms with the problem of evil."

Biblical scholar Walter Brueggemann observes that "theodicy is a constant concern of the entire Bible" and he describes theodicy, from the biblical perspective, as a subject that "concerns the question of God's goodness and power in a world that is manifestly marked by disorder and evil." The Bible evokes a need for a theodicy by its indictments of God coupled with expressions of anger at God, both of which question God's righteousness.

The Bible contains numerous examples of God inflicting evil, both in the form of moral evil resulting from "man's sinful inclinations" and the physical evil of suffering. These two biblical uses of the word evil parallel the Oxford English Dictionary's definitions of the word as (a) "morally evil" and (b) "discomfort, pain, or trouble." The Bible sometimes portrays God as inflicting evil in the generic sense.

In other cases, the word evil refers to suffering. Suffering results from either (a) "'moral' evil, due to human volition" or (b) "'physical' evil, directly due to nature." The Bible portrays God as inflicting evil in both senses because its writers "regarded God as the ultimate Cause of evil." The Bible contains examples of suffering caused by nature that are attributed to God, as well as examples of suffering caused by humans that are attributed to God.

Biblical responses to evils

Tyron Inbody avers that "there is no biblical theodicy." However, Inbody observes that the Bible proffers "various solutions" to questions about God and the evil of human suffering. These "various solutions" to the why of suffering and other evils are delineated in the Bible's "responses" (James Crenshaw) or "approaches" (Daniel J. Harrington) or "answers" (Bart Ehrman) to evil that these biblical scholars have identified. These scholars see a range of responses including punishment for sin, teaching or testing, or the means to some greater good.

Gregory Boyd, while appreciating "expositions of various biblical motifs that explain why we suffer," goes on to warn that "none of these motifs claim to be a comprehensive theodicy." In agreement with Boyd, Milton Crum comments that although such biblical passages might lessen the weight of suffering, ad hoc interpretations of evils do not provide a blanket theodicy. N. T. Wright "reminds us" that "the scriptures are frustratingly indirect and incomplete in answering questions of theodicy."

Deuteronomic "theodic settlement"

Brueggemann observes that, while the various biblical books do not agree on a theodicy, there was a temporary "theodic settlement" in the biblical Book of Deuteronomy.

The "theodic settlement" in the Book of Deuteronomy interprets all afflictions as just punishment for sin, that is, as retribution. Brueggemann defines "the theological notion of retribution" as "the assumption or conviction that the world is ordered by God so that everyone receives a fair outcome of reward or punishment commensurate with his or her conduct." Brueggemann points to Psalm One as a succinct summary of this retribution theodicy: "the Lord watches over the way of the righteous, but the way of the wicked shall perish" (Psalm 1:6).

Deuteronomy elaborates its "theodic settlement" in Chapter 28. Verse two promises that "blessings shall come upon you and overtake you, if you obey the Lord your God." This general promise of blessings is followed by a lengthy list of fourteen specific blessings. But, on the other hand, verse fifteen warns that "if you will not obey the Lord your God ..., curses shall come upon you." This general warning of curses is followed by a list of fifty-four specific curses, all of which would fit into the biblical use of the word evil.

The Deuteronomic retribution theodic settlement interpreted whatever evils ("curses") people suffered as just retribution meted out by a just God. However, at least as early as the early 6th century BC, Jeremiah was asserting that the retribution theodicy was contrary to fact. Jeremiah upbraided God for endowing the wicked with prosperity: "Why does the way of the guilty prosper? Why do all who are treacherous thrive? You plant them, and they take root."

In John 9:1–34, Jesus dismisses this "theodic settlement" by explaining that "It was neither that [a blind] man sinned, nor his parents [that caused his infirmity]; but it was so that the works of God might be displayed in him."

The Book of Job

Brueggemann treats the biblical Book of Job as the prime example of the "newly voiced theodic challenges" to the "old [Deuteronomic] theodic settlement". Job "was blameless and upright, one who feared God and turned away from evil," but nonetheless he suffered "all the evil that the Lord had brought upon him." In the midst of his suffering, Job explicitly contradicted the Deuteronomic theodic settlement: "it is all one; therefore I say, [God] destroys both the blameless and the wicked."

Not only did Job challenge the Deuteronomic theodic settlement by the fact of his own innocent suffering and by explicit contradiction of the old settlement, he also interrogated God: "Why do the wicked live on, reach old age, and grow mighty in power?" Brueggemann judges the fact that God had no answer to Job's "why?" to be so important that "the Book of Job turns on the refusal, unwillingness, or inability of [God] to answer" Job's query. Even worse, God says of himself: "... you [Satan] incited me against him to ruin him without any reason."

Brueggemann explains that the "turn" he sees in the Book of Job is a turn from seeing the 'right' as accepting the old theodic settlement. Now, he continues, "perhaps what is 'right' is Job's refusal to concede, and therefore what is celebrated is his entire defiant argument ... That is, what Yahweh intends as 'right' is that Job (Israel, humankind) should make a legitimate case" before God "without timidity or cowardice" to "carry the human question of justice into the danger zone of God's holiness."

Bible and free-will theodicy

A theodicy is an attempt "to reconcile the power and goodness attributed to God with the presence of evil in the human experience." The Bible attributes both "power" and "goodness" to God.

The free-will theodicy, first developed by Augustine, defends God by placing all the blame for evil on "the misuse of free will by human beings." This free-will theodicy is "perhaps the most influential theodicy ever constructed," and it is currently "the most common theodicy".

Explaining the free-will theodicy, Nick Trakakis writes that "the free will theodicist proceeds to explain the existence of moral evil as a consequence of the misuse of our freedom." Then "the free will theodicist adds, however, that the value of free will (and the goods it makes possible) is so great as to outweigh the risk that it may be misused in various ways."

In parallel with the free-will theodicy, The New Bible Dictionary finds that the Bible attributes evil "to the abuse of free-will." Others have noted the free-will theodicy's "compatibility with and reliance upon the Genesis account of creation" and the fall of Adam and Eve into sin. Because of the compatibility between the free-will theodicy and the Genesis account of the Creation and Fall of humanity "the Fall-doctrine" has been characterized as "fundamentally an exercise in theodicy-making."

Free will and freedom: definition problems

The two terms, "freedom" and "free will," are treated together (a) because, by definition, a "free will" means a "will" that possesses "freedom" and (b) because "free will" is "commonly used" as synonymous with "freedom." Likewise, Robert Kane, writing about "what is often called the free will issue or the problem of free will," says that it "is really a cluster of problems or questions revolving around the conception of human freedom".

In writing about free will, R. C. Sproul points out that "at the heart of the problem is the question of the definition of free will." Manuel Vargas adds that "it is not clear that there is any single thing that people have had in mind by the term 'free will'." Because of the confusion created when "definitions of free will are assumed without being stated," Randy Alcorn urges, "be sure to define terms."

Adler's kinds of freedom

Mortimer Adler recognized the confusion resulting from the fact that there are "several different objects men have in mind when they use the word freedom." In The Idea of Freedom, Adler resolved this confusion by distinguishing the "three kinds of freedom" that various writers have in mind when they use the word. He calls these three kinds of freedom (1) "circumstantial freedom," (2) "natural freedom," and (3) "acquired freedom." "Natural freedom" and "acquired freedom" are germane to the Bible in relation to the free-will theodicy.

"Natural freedom" and the Bible

"Natural freedom", in Adler's terminology, is the freedom of "self-determination" regarding one's "decisions or plans." Natural freedom is "(i) inherent in all men, (ii) regardless of the circumstances under which they live and (iii) without regard to any state of mind or character which they may or may not acquire in the course of their lives"

Biblical scholars find that the Bible views all humanity as possessing the "natural freedom" of the will that enables "self-determination." In this sense of the term, biblical scholars say that, although the Bible does not use the term, it assumes human "free will." For example, what Adler calls "natural freedom" matches the definition of the biblical concept of "free will" in the Westminster Dictionary of Theological Terms, namely, "the free choice of the will that all persons possess." Other scholars describe free will in the Bible as follows:

  • If the phrase "free will" be taken morally and psychologically, as meaning the power of unconstrained, spontaneous, voluntary, and therefore responsible, choice, the Bible everywhere assumes that all men, as such, possess it, unregenerate and regenerate alike. The New Bible Dictionary.
  • "Free will is clearly taught in such Scripture passages as Matthew 23:37 ... and in Revelation 22:17." Archaeology and Bible History.
  • The Bible assumes that all human beings have "free will" in the sense of "the ability to make meaningful choices," that is, "the ability to have voluntary choices that have real effects." If God Is Good.
  • We make willing choices, choices that have real effect .... In this sense, it is certainly consistent with Scripture to say that we have "free will." Bible Doctrine.

Debate on "natural freedom"

The proper interpretation of biblical passages relating to the freedom of the human will has been the subject of debate for most of the Christian era. The Pelagius versus Augustine of Hippo debate over free will took place in the 5th century. The Erasmus versus Martin Luther arguments in the 16th century included disagreements about free will, as did the Arminians versus Calvinists debates in the late 16th and early 17th centuries.

There is still no resolution of the free will debate because, as Robert Kane observes, "debates about free will are more alive than ever today."

"Acquired freedom" and the Bible

"Acquired freedom," in Adler's terminology, is the freedom "to live as [one] ought to live," and to live as one ought requires "a change or development" whereby a person acquires "a state of mind, or character, or personality" that can be described by such qualities as "good, wise, virtuous, righteous, holy, healthy, sound, flexible, etc."

Thus, while Adler ascribes the "natural freedom" of "self-determination" to everyone, he asserts that the freedom "to live as [one] ought to live" must be acquired by "a change or development" in a person. The New Bible Dictionary finds these two distinct freedoms in the Bible:

(i) "The Bible everywhere assumes" that, by nature, everyone possesses the freedom of "unconstrained, spontaneous, voluntary, and therefore responsible, choice." The New Bible Dictionary calls this natural freedom "free will" in a moral and psychological sense of the term.
(ii) At the same time, the Bible seems "to indicate that no man is free for obedience and faith till he is freed from sin's dominion." He still possesses "free will" in the sense of voluntary choices, but "all his voluntary choices are in one way or another acts of serving sin" until he acquires freedom from "sin's dominion." The New Bible Dictionary denotes this acquired freedom for "obedience and faith" as "free will" in a theological sense.

The Bible gives a basic reason why a person must acquire a freedom "to live as [one] ought to live" when it applies Adam and Eve's sin to all humanity: "the Lord saw that the wickedness of humankind was great in the earth, and that every inclination of the thoughts of their hearts was only evil continually" (Genesis 6:5). Or, in Paul's view, "by the one man's disobedience the many were made sinners" (Romans 5:19). Thus, the Bible describes humanity as connaturally "enslaved to sin" (Romans 6:6; John 8:34). Therefore, in biblical thinking, a freedom from being "enslaved to sin" in order to "live as one ought" must be acquired because "sin" is "the failure to live up to Jesus' commandments to love God and love neighbor."

Jesus on acquired freedom

Jesus told his hearers that they needed to be made free: "if you continue in my word, you are truly my disciples; and you will know the truth, and the truth will make you free" (John 8:32). But Jesus' hearers did not understand that he was not talking about freedom from "economic or social slavery," so they responded, "we are descendants of Abraham and have never been slaves to anyone. What do you mean by saying, 'You will be made free'?" (John 8:33).

To clarify what he meant by being "made free," Jesus answered them, "very truly, I tell you, everyone who commits sin is a slave to sin" (John 8:34). By his words being "made free," Jesus meant being "made free" from "bondage to sin." Continuing his reply, Jesus added, "if the Son makes you free, you will be free indeed" (John 8:36). "Free indeed [ontós]" can be more literally translated "truly free" or "really free," as it is in the following translations.

  • "If the son makes you free, you will be truly free" (John 8:36 New Century Bible).
  • "If therefore the Son shall set you free, ye shall be really free" (John 8:36 Darby Translation).

In the John 8:32–36 passage, Jesus taught that "those who sin are slaves to their sin whether they realize it or not" and "they cannot break away from their sin." The freedom of being "truly free" or "really free" had to be acquired by being made free by "the truth," John's name for Jesus in 14:6. Thus, Jesus characterized being made "truly free" as freedom from being a "slave to sin." At the same time, the Bible holds that it is freedom for righteous living because "from the very beginning God's people were taught that the alternative to servitude was not freedom in some abstract sense, but rather freedom to serve the Lord."

Paul on acquired freedom

When the New Bible Dictionary says of humanity's connatural condition that "all his voluntary choices are in one way or another acts of serving sin," it references Romans 6:17–22. In this passage, Paul depicts the connatural human condition as being "slaves of sin." To be "set free from sin," Paul told his readers that they must "become slaves of righteousness."

Regarding the transformation from being "slaves of sin" to being "slaves of righteousness," Douglas J. Moo comments that Paul uses the image of slavery to say that "being bound to God and his will enables the person to become 'free'" – in the sense of being free "to be what God wants that person to be." The slavery image underscores, as Moo says, that what Paul has in mind "is not freedom to do what one wants, but freedom to obey God" and that the obedience is done "willingly, joyfully, naturally," and not by coercion as with literal slaves.

The Fall and freedom of the will

The Fall (sometimes lowercase) in its theological use refers to "the lapse of human beings into a state of natural or innate sinfulness through the sin of Adam and Eve." The story of the Fall is narrated in Genesis 3:1–7.

Nelson's Student Bible Dictionary describes "the fall" as "the disobedience and sin of Adam and Eve that caused them to lose the state of innocence in which they had been created. This event plunged them and all mankind into a state of sin and corruption." A Concise Dictionary of Theology provides a similar description of the Fall. In their fall, Adam and Eve "disobeyed God and so lost their innocent, ideal existence" and "brought moral evil into the world."

The Bible testifies to the deleterious impact of the Fall on all humanity. Shortly after the Fall, "the Lord saw that the wickedness of humankind was great in the earth, and that every inclination of the thoughts of their hearts was only evil continually" (Genesis 6:5). Or, in Paul's view, "sin came into the world through one man," and "by the one man's disobedience the many were made sinners" (Romans 5:12, 19).

Freedom of will given at creation

Writing about God's creation and Adam and Eve, Baker's Dictionary of Biblical Theology says that "creation is climaxed by persons who possess wills that can choose to either obey or disobey." The Book of Ecclesiasticus, which churches predating the Protestant Reformation consider part of the Bible but which Protestants classify among the Apocrypha, explicitly names freedom of the will as an element in God's creation: God "created humankind in the beginning, and he left them in the power of their own free choice" (Ecclesiasticus 15:14).

Freedom of will not given at creation

Adam and Eve were created with "free will," that is, "the ability to choose either good or evil." The Fall evidences that Adam and Eve were not created with the freedom that Paul calls being "slaves of righteousness" (Romans 6:18): a phrase that denotes "freedom to obey God – willingly, joyfully, naturally."

Critics of the free-will theodicy believe that it "fails" because "God could have created free agents without risking bringing moral evil into the world. There is nothing logically inconsistent about a free agent that always chooses the good." Relating God's creation of Adam and Eve and the Fall to theodicy, J. L. Mackie argues "there was open to [God] the obviously better possibility of making beings who would freely act but always do right." And, Mackie adds, "clearly, [God's] failure to avail himself of this possibility is inconsistent with his being both omnipotent and wholly good."

God and moral evil

The free-will theodicy justifies God by ascribing all evil to "the evil acts of human free will." At the same time, the Bible teaches that God "rules the hearts and actions of all men." The Bible contains many portrayals of God as ruling "hearts and actions" for evil. Following are a few examples:

  • God said, "I will harden [Pharaoh's] heart, so that he will not let the people go" (Exodus 4:21).
  • Isaiah asked, "Why, O Lord, do you make us stray from your ways and harden our heart, so that we do not fear you?" (Isaiah 63:17).
  • God said, "If a prophet is deceived and speaks a word, I, the Lord, have deceived that prophet" (Ezekiel 14:9).
  • John writes that those who "did not believe in [Jesus] could not believe," because, quoting Isaiah 6:10, "[God] has blinded their eyes and hardened their heart" (John 12:37–40 abr).
  • God "hardens the heart of whomever he chooses" (Romans 9:18).
  • "God sends [those who are perishing] a powerful delusion, leading them to believe what is false, so [they] will be condemned" (2 Thessalonians 2:11–12).
  • "Those who do not believe ... stumble because they disobey the word, as they were destined [by God] to do" (1 Peter 2:7–8).

Theodicy unresolved

The existence of evil in the world, in the view of Raymond Lam, "presents the gravest challenge to the existence of an all-powerful, all-knowing, all-loving God." Lam also observes that "no theodicy is easy."

Regarding theodicy and the Bible, Baker's Evangelical Dictionary of Biblical Theology concludes that "the Bible does not answer the oft-posed problem of how a just, omnipotent, and loving God could permit evil to exist in a universe he had created."

With no "definitive answer" to the theodic question, "debates about theodicy continue among believers and unbelievers alike," observes Robert F. Brown. Therefore, Brown adds, "theodicy remains a perennial concern for thoughtful religious commitment." Theodicy remains a "perennial concern" because, Brown reports, "how the divine can be compatible with the existence of evil in the world has perplexed profound thinkers and ordinary people right down to the present day."

Denialism

From Wikipedia, the free encyclopedia
Banner at the 2017 Climate March in Washington, D.C.

In the psychology of human behavior, denialism is a person's choice to deny reality as a way to avoid believing in a psychologically uncomfortable truth. Denialism is an essentially irrational action that withholds the validation of a historical experience or event when a person refuses to accept an empirically verifiable reality.

In the sciences, denialism is the rejection of basic facts and concepts that are undisputed, well-supported parts of the scientific consensus on a subject, in favor of ideas that are radical, controversial, or fabricated. The terms Holocaust denial and AIDS denialism describe the denial of the facts and the reality of the subject matters, and the term climate change denial describes denial of the scientific consensus that the climate change of planet Earth is a real and occurring event primarily caused in geologically recent times by human activity. These forms of denialism share the common feature of a person rejecting overwhelming evidence and trying to generate political controversy in an attempt to deny the existence of consensus.

The motivations and causes of denialism include religion, self-interest (economic, political, or financial), and defence mechanisms meant to protect the psyche of the denialist against mentally disturbing facts and ideas; such disturbance is called cognitive dissonance in psychology terms.

Definition and tactics

Anthropologist Didier Fassin distinguishes between denial, defined as "the empirical observation that reality and truth are being denied", and denialism, which he defines as "an ideological position whereby one systematically reacts by refusing reality and truth". Persons and social groups who reject propositions on which there exists a mainstream and scientific consensus engage in denialism when they use rhetorical tactics to give the appearance of argument and legitimate debate, when there is none. It is a process that operates by employing one or more of the following five tactics to maintain the appearance of legitimate controversy:

  1. Conspiracy theories – Dismissing the data or observation by suggesting opponents are involved in "a conspiracy to suppress the truth".
  2. Cherry picking – Selecting an anomalous critical paper supporting their idea, or using outdated, flawed, and discredited papers to make their opponents look as though they base their ideas on weak research. Diethelm and McKee (2009) note, "Denialists are usually not deterred by the extreme isolation of their theories, but rather see it as an indication of their intellectual courage against the dominant orthodoxy and the accompanying political correctness."
  3. False experts – Paying an expert in the field, or another field, to lend supporting evidence or credibility. This goes hand-in-hand with the marginalization of real experts and researchers.
  4. Moving the goalposts – Dismissing evidence presented in response to a specific claim by continually demanding some other (often unfulfillable) piece of evidence (also known as shifting the baseline).
  5. Other logical fallacies – Usually one or more of false analogy, appeal to consequences, straw man, or red herring.

Common tactics to different types of denialism include misrepresenting evidence, false equivalence, half-truths, and outright fabrication. South African judge Edwin Cameron notes that a common tactic used by denialists is to "make great play of the inescapable indeterminacy of figures and statistics". Historian Taner Akçam states that denialism is commonly believed to be negation of facts, but in fact "it is in that nebulous territory between facts and truth where such denialism germinates. Denialism marshals its own facts and it has its own truth."

Focusing on the rhetorical tactics through which denialism is achieved in language, Alex Gillespie (2020) of the London School of Economics has reviewed the linguistic and practical defensive tactics for denying disruptive information. These tactics are conceptualized in terms of three layers of defence:

  1. Avoiding – The first line of defence against disruptive information is to avoid it.
  2. Delegitimizing – The second line of defence is to attack the messenger, by undermining the credibility of the source.
  3. Limiting – The final line of defence, if disruptive information cannot be avoided or delegitimized, is to rationalize and limit the impact of the disruptive ideas.

In 2009, author Michael Specter defined group denialism as "when an entire segment of society, often struggling with the trauma of change, turns away from reality in favor of a more comfortable lie".

Prescriptive and polemic perspectives

If one party to a debate accuses the other of denialism, they are framing the debate. This is because an accusation of denialism is both prescriptive and polemic: prescriptive because it carries implications that there is truth to the denied claim; polemic since the accuser implies that continued denial in the light of presented evidence raises questions about the other's motives. Edward Skidelsky, a lecturer in philosophy at Exeter University, writes that "An accusation of 'denial' is serious, suggesting either deliberate dishonesty or self-deception. The thing being denied is, by implication, so obviously true that the denier must be driven by perversity, malice or wilful blindness." He suggests that, by the introduction of the word denier into further areas of historical and scientific debate, "One of the great achievements of The Enlightenment – the liberation of historical and scientific enquiry from dogma – is quietly being reversed".

Some people have suggested that because denial of the Holocaust is well known, advocates who use the term denialist in other areas of debate may intentionally or unintentionally imply that their opponents are little better than Holocaust deniers. However, Robert Gallo et al. defended this latter comparison, stating that AIDS denialism is similar to Holocaust denial since it is a form of pseudoscience that "contradicts an immense body of research".

Politics and science

Climate change

Climate change denial (also global warming denial) is a form of science denial characterized by rejecting, refusing to acknowledge, disputing, or fighting the scientific consensus on climate change. Those promoting denial commonly use rhetorical tactics to give the appearance of a scientific controversy where there is none. Climate change denial includes unreasonable doubts about the extent to which climate change is caused by humans, its effects on nature and human society, and the potential of adaptation to global warming by human actions. To a lesser extent, climate change denial can also be implicit when people accept the science but fail to reconcile it with their belief or action. Several studies have analyzed these positions as forms of denialism, pseudoscience, or propaganda.

Many issues that are settled in the scientific community, such as human responsibility for climate change, remain the subject of politically or economically motivated attempts to downplay, dismiss or deny them—an ideological phenomenon academics and scientists call climate change denial. Climate scientists, especially in the United States, have reported government and oil-industry pressure to censor or suppress their work and hide scientific data, with directives not to discuss the subject publicly. The fossil fuels lobby has been identified as overtly or covertly supporting efforts to undermine or discredit the scientific consensus on climate change.

HIV/AIDS

AIDS denialism is the denial that the human immunodeficiency virus (HIV) is the cause of acquired immune deficiency syndrome (AIDS). AIDS denialism has been described as being "among the most vocal anti-science denial movements". Some denialists reject the existence of HIV, while others accept that the virus exists but say that it is a harmless passenger virus and not the cause of AIDS. Insofar as denialists acknowledge AIDS as a real disease, they attribute it to some combination of recreational drug use, malnutrition, poor sanitation, and side effects of antiretroviral medication, rather than infection with HIV. However, the evidence that HIV causes AIDS is scientifically conclusive and the scientific community rejects and ignores AIDS-denialist claims as based on faulty reasoning, cherry picking, and misrepresentation of mainly outdated scientific data. With the rejection of these arguments by the scientific community, AIDS-denialist material is now spread mainly through the Internet.

Thabo Mbeki, former president of South Africa, embraced AIDS denialism, proclaiming that AIDS was primarily caused by poverty. About 365,000 people died from AIDS during his presidency; it is estimated that around 343,000 premature deaths could have been prevented if proper treatment had been available.

COVID-19

"COVID is a lie" graffiti in Pontefract, West Yorkshire, England

The term "COVID-19 denialism" or merely "COVID denialism" refers to the thinking of those who deny the reality of the COVID-19 pandemic, at least to the extent of denying the scientifically recognized COVID mortality data of the World Health Organization. The claims that the COVID-19 pandemic has been faked, exaggerated, or mischaracterized are pseudoscience. Some famous people who have engaged in COVID-19 denialism include Elon Musk, former U.S. President Donald Trump, and former Brazilian President Bolsonaro.

Evolution

Religious beliefs may prompt an individual to deny the validity of the scientific theory of evolution. Evolution is considered an undisputed fact within the scientific community and in academia, where the level of support for evolution is essentially universal, yet this view is often met with opposition by biblical literalists. The alternative view is often presented as a literal interpretation of the Book of Genesis's creation myth. Many fundamentalist Christians teach creationism as if it were fact under the banners of creation science and intelligent design. Beliefs that typically coincide with creationism include the belief in the global flood myth, geocentrism, and the belief that the Earth is only 6,000–10,000 years old. These beliefs are viewed as pseudoscience in the scientific community and are widely regarded as erroneous.

Flat Earth

The superseded belief that the Earth is flat, and denial of all of the overwhelming evidence that supports an approximately spherical Earth that rotates around its axis and orbits the Sun, persists into the 21st century. Modern proponents of flat-Earth cosmology (or flat-Earthers) refuse to accept any kind of contrary evidence, dismissing all spaceflights and images from space as hoaxes and accusing all organizations and even private citizens of conspiring to "hide the truth". They also claim that no actual satellites are orbiting the Earth, that the International Space Station is fake, and that these are lies from all governments involved in this grand cover-up. Some even believe other planets and stars are hoaxes.

Adherents of the modern flat-earth model propose that a dome-shaped firmament encloses a disk-shaped Earth. They may also claim, after Samuel Rowbotham, that the Sun is only 3,000 miles (4,800 km) above the Earth and that the Moon and the Sun orbit above the Earth rather than around it. Modern flat-earthers believe that Antarctica is not a continent but a massive ice floe, with a wall 150 feet (46 m) or higher, which circles the perimeter of the Earth and keeps everything (including all the oceans' water) from falling off the edge.

Flat-Earthers also assert that no one is allowed to fly over or explore Antarctica, despite contrary evidence. According to them, all photos and videos of ships sinking under the horizon and of the bottoms of city skylines and clouds below the horizon, revealing the curvature of the Earth, have been manipulated, computer-generated, or somehow faked. Therefore, regardless of any scientific or empirical evidence provided, flat-Earthers conclude that it is fabricated or altered in some way.

When linked to other observed phenomena such as gravity, sunsets, tides, eclipses, distances and other measurements that challenge the flat earth model, claimants replace commonly accepted explanations with piecemeal models that distort or over-simplify how perspective, mass, buoyancy, light or other physical systems work. These piecemeal replacements rarely conform with each other, ultimately leaving many flat-Earth claimants to agree that such phenomena remain "mysteries" and that more investigation needs to be done. In this conclusion, adherents remain open to all explanations except the commonly accepted globular Earth model, shifting the debate from ignorance to denialism.

Genetically modified foods

There is a scientific consensus that currently available food derived from genetically modified crops (GM) poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.

Psychological analyses indicate that over 70% of GM food opponents in the US are "absolute" in their opposition, experience disgust at the thought of eating GM foods, and are "evidence insensitive".

Statins

Statin denialism is a rejection of the medical worth of statins, a class of cholesterol-lowering drugs. Cardiologist Steven Nissen at Cleveland Clinic has commented "We are losing the battle for the hearts and minds of our patients to Web sites..." promoting unproven medical therapies. Harriet Hall sees a spectrum of statin denialism ranging from pseudoscientific claims to the understatement of benefits and overstatement of side effects, all of which is contrary to the scientific evidence.

Mental illness denial

Mental illness denial or mental disorder denial is the denial of the existence of mental disorders. Serious analysts, as well as pseudoscientific movements, question the existence of certain disorders. A minority of professional researchers see disorders such as depression from a sociocultural perspective and argue that the solution to it is fixing a dysfunction in society, not in the person's brain. Some people may also deny that they have a mental illness after being diagnosed, and certain analysts argue this denialism is usually fueled by narcissistic injury. Anti-psychiatry movements such as Scientology promote mental illness denial by promoting alternative practices to psychiatry.

Election denial

Election denial is the false dismissal of the outcome of a fair election. Stacey Abrams denied that the 2018 Georgia gubernatorial election was "a free and fair election" and spent $22 million in "largely unsuccessful" litigation. Since the 2020 United States presidential election, there has been an ongoing conspiracy theory that the election was stolen.

Historiography

Historical negationism, the denialism of widely accepted historical facts, is a major source of concern among historians and it is frequently used to falsify or distort accepted historical events. In attempting to revise the past, negationists are distinguished by the use of techniques inadmissible in proper historical discourse, such as presenting known forged documents as genuine, inventing ingenious but implausible reasons for distrusting genuine documents, attributing conclusions to books and sources that report the opposite, manipulating statistical series to support the given point of view, and deliberately mistranslating texts.

Some countries, such as Germany, have criminalized the negationist revision of certain historical events, while other countries take a more cautious position for various reasons, such as the protection of free speech. Other jurisdictions have mandated negationist views; in California, for example, schoolchildren have been explicitly prevented from learning about the California genocide.

Armenian genocide denialism

The Iğdır Genocide Memorial and Museum promotes the false view that Armenians committed genocide against Turks, rather than vice versa.

Armenian genocide denial is the negationist claim that the Ottoman Empire and its ruling party, the Committee of Union and Progress (CUP), did not commit genocide against its Armenian citizens during World War I—a crime documented in a large body of evidence and affirmed by the vast majority of scholars. The perpetrators denied the genocide as they carried it out, claiming that Armenians in the Ottoman Empire were resettled for military reasons, not exterminated. In its aftermath, incriminating documents were systematically destroyed. Denial has been the policy of every government of the Ottoman Empire's successor state, the Republic of Turkey, as of 2024.

Borrowing arguments used by the CUP to justify its actions, Armenian genocide denial rests on the assumption that the deportation of Armenians was a legitimate state action in response to a real or perceived Armenian uprising that threatened the empire's existence during wartime. Deniers assert the CUP intended to resettle Armenians, not kill them. They claim the death toll is exaggerated or attribute the deaths to other factors, such as a purported civil war, disease, bad weather, rogue local officials, or bands of Kurds and outlaws. The historian Ronald Grigor Suny summarizes the main argument as "there was no genocide, and the Armenians were to blame for it".

A critical reason for denial is that the genocide enabled the establishment of a Turkish nation-state; recognizing it would contradict Turkey's founding myths. Since the 1920s, Turkey has worked to prevent recognition or even mention of the genocide in other countries. It has spent millions of dollars on lobbying, created research institutes, and used intimidation and threats. Denial affects Turkey's domestic policies and is taught in Turkish schools; some Turkish citizens who acknowledge the genocide have faced prosecution for "insulting Turkishness". Turkey's century-long effort to deny the genocide sets it apart from other historical cases of genocide.

Azerbaijan, a close ally of Turkey, also denies the genocide and campaigns against its recognition internationally. Most Turkish citizens and political parties support Turkey's denial policy. Scholars argue that Armenian genocide denial has set the tone for the government's attitude towards minorities, and has contributed to the ongoing violence against Kurds in Turkey. A 2014 poll of 1,500 people conducted by EDAM, a Turkish think tank, found that nine percent of Turkish citizens recognize the genocide.

Holocaust denialism

Holocaust denial refers to the denial of the murder of 5 to 6 million Jews by the Nazis in Europe during World War II. In this context, the term is a subset of genocide denial, which is a form of politically motivated denialism.

Nakba denialism

Nakba denial refers to attempts to downplay, deny, and misdescribe the ethnic cleansing of Palestinians during the Nakba, in which four-fifths of all Palestinians were driven off their lands and into exile.

Srebrenica massacre denialism

Sonja Biserko, president of the Helsinki Committee for Human Rights in Serbia, and Edina Bečirević, of the Faculty of Criminalistics, Criminology and Security Studies of the University of Sarajevo, have pointed to a culture of denial of the Srebrenica massacre in Serbian society, taking many forms and present in particular in political discourse, the media, the law and the educational system.

Cannabis (drug)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cannabis_(drug)
Cannabis in...