
Thursday, July 25, 2024

Electroencephalography

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Electroencephalography

Epileptic spike-and-wave discharges monitored with EEG

Electroencephalography (EEG) is a method to record an electrogram of the spontaneous electrical activity of the brain. The biosignals detected by EEG have been shown to represent the postsynaptic potentials of pyramidal neurons in the neocortex and allocortex. It is typically non-invasive, with the EEG electrodes placed along the scalp (commonly called "scalp EEG") using the International 10–20 system, or variations of it. Electrocorticography, involving surgical placement of electrodes, is sometimes called "intracranial EEG". Clinical interpretation of EEG recordings is most often performed by visual inspection of the tracing or quantitative EEG analysis.

Voltage fluctuations measured by the EEG bioamplifier and electrodes allow the evaluation of normal brain activity. As the electrical activity monitored by EEG originates in neurons in the underlying brain tissue, the recordings made by the electrodes on the surface of the scalp vary in accordance with their orientation and distance to the source of the activity. Furthermore, the value recorded is distorted by intermediary tissues and bones, which act in a manner akin to resistors and capacitors in an electrical circuit. This means that not all neurons will contribute equally to an EEG signal, with an EEG predominately reflecting the activity of cortical neurons near the electrodes on the scalp. Deep structures within the brain further away from the electrodes will not contribute directly to an EEG; these include the base of the cortical gyrus, mesial walls of the major lobes, hippocampus, thalamus, and brain stem.

A healthy human EEG will show certain patterns of activity that correlate with how awake a person is. The range of frequencies observed is between 1 and 30 Hz, and amplitudes vary between 20 and 100 μV. The observed frequencies are subdivided into various groups: alpha (8–13 Hz), beta (13–30 Hz), delta (0.5–4 Hz), and theta (4–7 Hz). Alpha waves are observed when a person is in a state of relaxed wakefulness and are most prominent over the parietal and occipital sites. During intense mental activity, beta waves are more prominent in frontal areas as well as other regions. If a relaxed person is told to open their eyes, alpha activity decreases and beta activity increases. Theta and delta waves are not generally seen in wakefulness; if they are, it is a sign of brain dysfunction.
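
The band boundaries above can be captured in a few lines of code. The sketch below is illustrative only; boundary conventions vary slightly between sources (here 13 Hz is assigned to beta, and the 7–8 Hz gap between the article's theta and alpha ranges is treated as theta).

```python
# Sketch: map an observed frequency (Hz) to the conventional EEG band
# names given above. Half-open intervals [low, high); edges follow the
# article's ranges, with boundary choices noted as assumptions.

EEG_BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),   # article says 4-7 Hz; 7-8 Hz treated as theta here
    ("alpha", 8.0, 13.0),
    ("beta", 13.0, 30.0),
]

def band_of(freq_hz: float) -> str:
    """Return the band name for a frequency, or 'out of range'."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(band_of(10.0))  # alpha: the relaxed-wakefulness rhythm
print(band_of(2.0))   # delta
```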

EEG can detect abnormal electrical discharges such as sharp waves, spikes, or spike-and-wave complexes, as observable in people with epilepsy; thus, it is often used to inform medical diagnosis. EEG can detect the onset and spatio-temporal (location and time) evolution of seizures and the presence of status epilepticus. It is also used to help diagnose sleep disorders, depth of anesthesia, coma, encephalopathies, cerebral hypoxia after cardiac arrest, and brain death. EEG used to be a first-line method of diagnosis for tumors, stroke, and other focal brain disorders, but this use has decreased with the advent of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Despite its limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. It is one of the few mobile techniques available and offers millisecond-range temporal resolution, which is not possible with CT, PET, or MRI.

Derivatives of the EEG technique include evoked potentials (EP), which involves averaging the EEG activity time-locked to the presentation of a stimulus of some sort (visual, somatosensory, or auditory). Event-related potentials (ERPs) refer to averaged EEG responses that are time-locked to more complex processing of stimuli; this technique is used in cognitive science, cognitive psychology, and psychophysiological research.
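
The averaging behind EP/ERP techniques can be illustrated with a short sketch. The signal and numbers below are synthetic, chosen only to show how a small time-locked response emerges from noise as trials are averaged; the sqrt(N) noise reduction assumes independent noise across trials.

```python
import numpy as np

# Sketch of ERP averaging: a small evoked response buried in noise
# becomes visible after averaging many stimulus-locked epochs.
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 250, 200, 250          # 1 s epochs at 250 Hz
t = np.arange(n_samples) / fs
erp = 5e-6 * np.exp(-((t - 0.3) ** 2) / 0.002)   # ~5 uV "component" at 300 ms

epochs = erp + 20e-6 * rng.standard_normal((n_trials, n_samples))
average = epochs.mean(axis=0)                    # the time-locked average

# Noise in the average shrinks roughly as 1/sqrt(n_trials).
single_trial_noise = epochs[0] - erp
residual_noise = average - erp
print(residual_noise.std() / single_trial_noise.std())  # roughly 1/sqrt(200) ≈ 0.07
```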

Uses

Epilepsy

An EEG recording setup using the 10-10 system of electrode placement.

EEG is the gold standard diagnostic procedure to confirm epilepsy. The sensitivity of a routine EEG to detect interictal epileptiform discharges at epilepsy centers has been reported to be in the range of 29–55%. Given this low to moderate sensitivity, a routine EEG (typically 20–30 minutes in duration) can be normal in people who have epilepsy. When an EEG shows interictal epileptiform discharges (e.g., sharp waves, spikes, or spike-and-wave complexes), it is confirmatory of epilepsy in nearly all cases (high specificity); however, up to 3.5% of the general population may have epileptiform abnormalities in an EEG without ever having had a seizure (a low false-positive rate) or with a very low risk of developing epilepsy in the future.

When a routine EEG is normal and there is a high suspicion or need to confirm epilepsy, it may be repeated or performed with a longer duration in the epilepsy monitoring unit (EMU) or at home with an ambulatory EEG. In addition, there are activating maneuvers such as photic stimulation, hyperventilation and sleep deprivation that can increase the diagnostic yield of the EEG.

Epilepsy Monitoring Unit (EMU)

At times, a routine EEG is not sufficient to establish the diagnosis or determine the best course of action in terms of treatment. In this case, attempts may be made to record an EEG while a seizure is occurring. This is known as an ictal recording, as opposed to an interictal recording, which refers to the EEG recording between seizures. To obtain an ictal recording, a prolonged EEG is typically performed accompanied by a time-synchronized video and audio recording. This can be done either as an outpatient (at home) or during a hospital admission, preferably to an Epilepsy Monitoring Unit (EMU) with nurses and other personnel trained in the care of patients with seizures. Outpatient ambulatory video EEGs typically last one to three days. An admission to an Epilepsy Monitoring Unit typically lasts several days but may last for a week or longer. While in the hospital, seizure medications are usually withdrawn to increase the odds that a seizure will occur during admission. For reasons of safety, medications are not withdrawn during an EEG outside of the hospital. Ambulatory video EEGs, therefore, have the advantage of convenience and are less expensive than a hospital admission, but they also have the disadvantage of a decreased probability of recording a clinical event.

Epilepsy monitoring is often considered when patients continue having events despite being on anti-seizure medications or if there is concern that the patient's events have an alternate diagnosis, e.g., psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders, migraine variants, stroke, etc. In cases of epileptic seizures, continuous EEG monitoring helps to characterize seizures and localize/lateralize the region of the brain from which a seizure originates. This can help identify appropriate non-medication treatment options. In clinical use, EEG traces are visually analyzed by neurologists to look at various features. Increasingly, quantitative analysis of EEG is being used in conjunction with visual analysis. Quantitative analysis displays like power spectrum analysis, alpha-delta ratio, amplitude integrated EEG, and spike detection can help quickly identify segments of EEG that need close visual analysis or, in some cases, be used as surrogates for quick identification of seizures in long-term recordings.
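
One of the quantitative displays mentioned above, the alpha-delta ratio, can be sketched in a few lines using SciPy's Welch spectral estimator. The input here is a synthetic alpha-dominated trace, not clinical data, and the window lengths are illustrative choices.

```python
import numpy as np
from scipy.signal import welch

# Sketch: alpha-delta ratio from a Welch power spectrum.
fs = 250                                    # Hz, a common clinical rate
t = np.arange(0, 30, 1 / fs)                # 30 s of data
rng = np.random.default_rng(1)
x = 30e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

f, psd = welch(x, fs=fs, nperseg=4 * fs)    # 4 s windows -> 0.25 Hz bins

def band_power(f, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz (rectangle rule)."""
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

alpha = band_power(f, psd, 8, 13)
delta = band_power(f, psd, 0.5, 4)
print(alpha / delta)                        # >> 1 for this alpha-dominated signal
```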

Other brain disorders

An EEG might also be helpful for diagnosing or treating the following disorders:

  • Brain tumor
  • Brain damage from head injury
  • Brain dysfunction that can have a variety of causes (encephalopathy)
  • Inflammation of the brain (encephalitis)
  • Stroke
  • Sleep disorders


Intensive Care Unit (ICU)

EEG can also be used in intensive care units for brain function monitoring: to monitor for non-convulsive seizures/non-convulsive status epilepticus, to monitor the effect of sedatives/anesthesia in patients in medically induced coma (for treatment of refractory seizures or increased intracranial pressure), and to monitor for secondary brain damage in conditions such as subarachnoid hemorrhage (currently a research method).

In cases where significant brain injury is suspected, e.g., after cardiac arrest, EEG can provide some prognostic information.

If a patient with epilepsy is being considered for resective surgery, it is often necessary to localize the focus (source) of the epileptic brain activity with a resolution greater than what is provided by scalp EEG. In these cases, neurosurgeons typically implant strips and grids of electrodes or penetrating depth electrodes under the dura mater, through either a craniotomy or a burr hole. The recording of these signals is referred to as electrocorticography (ECoG), subdural EEG (sdEEG), intracranial EEG (icEEG), or stereotactic EEG (sEEG). The signal recorded from ECoG is on a different scale of activity than the brain activity recorded from scalp EEG. Low-voltage, high-frequency components that cannot be seen easily (or at all) in scalp EEG can be seen clearly in ECoG. Further, smaller electrodes (which cover a smaller parcel of brain surface) allow for better spatial resolution to narrow down the areas critical for seizure onset and propagation. Some clinical sites record data from penetrating microelectrodes.

Home ambulatory EEG

Sometimes it is more convenient or clinically necessary to perform ambulatory EEG recordings in the home of the patient. These studies typically have a duration of 24–72 hours.

Research use

EEG and the related study of ERPs are used extensively in neuroscience, cognitive science, cognitive psychology, neurolinguistics, and psychophysiological research, as well as to study human functions such as swallowing. EEG techniques used in research are not sufficiently standardised for clinical use, and many ERP studies fail to report all of the processing steps needed for data collection and reduction, limiting the reproducibility and replicability of many studies. Based on a 2024 systematic literature review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI), EEG scans cannot be used reliably to assist in making a clinical diagnosis of ADHD. However, EEG continues to be used in research on mental disabilities, such as auditory processing disorder (APD), ADD, and ADHD.

Advantages

Several other methods to study brain function exist, including functional magnetic resonance imaging (fMRI), positron emission tomography (PET), magnetoencephalography (MEG), nuclear magnetic resonance spectroscopy (NMR or MRS), electrocorticography (ECoG), single-photon emission computed tomography (SPECT), near-infrared spectroscopy (NIRS), and event-related optical signal (EROS). Despite EEG's relatively poor spatial sensitivity, its "one-dimensional signals from localised peripheral regions on the head make it attractive for its simplistic fidelity and has allowed high clinical and basic research throughput". Thus, EEG possesses several advantages over those other techniques:

  • Hardware costs are significantly lower than those of most other techniques 
  • Because EEG equipment is simple to operate, it mitigates the limited availability of technologists to provide immediate care in high-traffic hospitals.
  • EEG only requires a quiet room and briefcase-size equipment, whereas fMRI, SPECT, PET, MRS, or MEG require bulky and immobile equipment. For example, MEG requires equipment consisting of liquid helium-cooled detectors that can be used only in magnetically shielded rooms, altogether costing upwards of several million dollars; and fMRI requires the use of a 1-ton magnet in, again, a shielded room.
  • EEG can readily achieve high temporal resolution (although sub-millisecond resolution generates less meaningful data), because the 2 to 32 data streams generated by that number of electrodes are easily stored and processed, whereas 3D spatial technologies provide thousands or millions of times as many input data streams and are thus limited by hardware and software. EEG is commonly recorded at sampling rates between 250 and 2000 Hz in clinical and research settings.
  • EEG is relatively tolerant of subject movement, unlike most other neuroimaging techniques. Methods even exist for minimizing, and in some cases eliminating, movement artifacts in EEG data.
  • EEG is silent, which allows for better study of the responses to auditory stimuli.
  • EEG does not aggravate claustrophobia, unlike fMRI, PET, MRS, SPECT, and sometimes MEG.
  • EEG does not involve exposure to high-intensity (>1 Tesla) magnetic fields, as in some of the other techniques, especially MRI and MRS. These can cause a variety of undesirable issues with the data, and also prohibit the use of these techniques with participants who have metal implants in their body, such as metal-containing pacemakers.
  • EEG does not involve exposure to radioligands, unlike positron emission tomography.
  • ERP studies can be conducted with relatively simple paradigms, compared with, e.g., block-design fMRI studies.
  • Relatively non-invasive, in contrast to electrocorticography, which requires electrodes to be placed on the actual surface of the brain.
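
The storage point in the list above is easy to quantify. A back-of-envelope sketch, assuming 16-bit samples (a common but not universal ADC width):

```python
# Back-of-envelope for the data-rate point above: raw EEG is tiny by
# modern storage standards. Assumes 16-bit (2-byte) samples.
def eeg_bytes_per_hour(channels: int, rate_hz: int, bytes_per_sample: int = 2) -> int:
    return channels * rate_hz * bytes_per_sample * 3600

clinical = eeg_bytes_per_hour(19, 256)    # standard clinical montage at 256 Hz
research = eeg_bytes_per_hour(256, 2000)  # high-density research array

print(f"clinical: {clinical / 1e6:.1f} MB/hour")   # clinical: 35.0 MB/hour
print(f"research: {research / 1e9:.2f} GB/hour")   # research: 3.69 GB/hour
```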

EEG also has some characteristics that compare favorably with behavioral testing:

  • EEG can detect covert processing (i.e., processing that does not require a response)
  • EEG can be used in subjects who are incapable of making a motor response
  • EEG is a method widely used in the study of sport performance, valued for its portability and lightweight design 
  • Some ERP components can be detected even when the subject is not attending to the stimuli
  • Unlike other means of studying reaction time, ERPs can elucidate stages of processing (rather than just the result)
  • The simplicity of EEG readily provides for tracking of brain changes during different phases of life. EEG sleep analysis can indicate significant aspects of the timing of brain development, including evaluating adolescent brain maturation.
  • With EEG there is a better understanding of what signal is measured, as compared with other research techniques, e.g., the BOLD response in fMRI.

Disadvantages

  • Low spatial resolution on the scalp. fMRI, for example, can directly display areas of the brain that are active, while EEG requires intense interpretation just to hypothesize what areas are activated by a particular response.
  • Depending on the orientation and location of the dipole causing an EEG change, there may be a false localization due to the inverse problem.
  • EEG poorly measures neural activity that occurs below the upper layers of the brain (the cortex).
  • Unlike PET and MRS, EEG cannot identify specific locations in the brain at which various neurotransmitters, drugs, etc. can be found.
  • It often takes a long time to connect a subject to EEG, as doing so requires precise placement of dozens of electrodes around the head and the use of various gels, saline solutions, and/or pastes to maintain good conductivity, with a cap used to keep them in place. While the length of time depends on the specific EEG device used, as a general rule it takes considerably less time to prepare a subject for MEG, fMRI, MRS, or SPECT.
  • Signal-to-noise ratio is poor, so sophisticated data analysis and relatively large numbers of subjects are needed to extract useful information from EEG.
  • EEGs are not currently very compatible with individuals who have coarser and/or textured hair. Even protective styles can pose issues during testing. Researchers are currently trying to build better options for patients and technicians alike. Furthermore, researchers are starting to implement more culturally-informed data collection practices to help reduce racial biases in EEG research. 

With other neuroimaging techniques

Simultaneous EEG recordings and fMRI scans have been obtained successfully, though recording both at the same time effectively requires that several technical difficulties be overcome, such as the presence of ballistocardiographic artifact, MRI pulse artifact and the induction of electrical currents in EEG wires that move within the strong magnetic fields of the MRI. While challenging, these have been successfully overcome in a number of studies.

MRI scanners produce detailed images by generating strong magnetic fields that may exert a potentially harmful displacement force and torque on implanted devices. These fields can also produce potentially harmful radio-frequency heating and create image artifacts that render images useless. Due to these potential risks, only certain medical devices can be used in an MR environment.

Similarly, simultaneous recordings with MEG and EEG have also been conducted, which has several advantages over using either technique alone:

  • EEG requires accurate information about certain aspects of the skull that can only be estimated, such as skull radius, and conductivities of various skull locations. MEG does not have this issue, and a simultaneous analysis allows this to be corrected for.
  • MEG and EEG both detect activity below the surface of the cortex very poorly, and like EEG, the level of error increases with the depth below the surface of the cortex one attempts to examine. However, the errors are very different between the techniques, and combining them thus allows for correction of some of this noise.
  • MEG has access to virtually no sources of brain activity below a few centimetres under the cortex. EEG, on the other hand, can receive signals from greater depth, albeit with a high degree of noise. Combining the two makes it easier to determine what in the EEG signal comes from the surface (since MEG is very accurate in examining signals from the surface of the brain), and what comes from deeper in the brain, thus allowing for analysis of deeper brain signals than either EEG or MEG on its own.

Recently, a combined EEG/MEG (EMEG) approach has been investigated for the purpose of source reconstruction in epilepsy diagnosis.

EEG has also been combined with positron emission tomography. This provides the advantage of allowing researchers to see what EEG signals are associated with different drug actions in the brain.

Recent studies using machine learning techniques such as neural networks with statistical temporal features extracted from frontal lobe EEG brainwave data have shown high levels of success in classifying mental states (relaxed, neutral, concentrating), mental emotional states (negative, neutral, positive), and thalamocortical dysrhythmia.

Mechanisms

The brain's electrical charge is maintained by billions of neurons. Neurons are electrically charged (or "polarized") by membrane transport proteins that pump ions across their membranes. Neurons are constantly exchanging ions with the extracellular milieu, for example to maintain resting potential and to propagate action potentials. Ions of similar charge repel each other, and when many ions are pushed out of many neurons at the same time, they can push their neighbours, who push their neighbours, and so on, in a wave. This process is known as volume conduction. When the wave of ions reaches the electrodes on the scalp, they can push or pull electrons on the metal in the electrodes. Since metal conducts the push and pull of electrons easily, the difference in push or pull voltages between any two electrodes can be measured by a voltmeter. Recording these voltages over time gives us the EEG.

The electric potential generated by an individual neuron is far too small to be picked up by EEG or MEG. EEG activity therefore always reflects the summation of the synchronous activity of thousands or millions of neurons that have similar spatial orientation. If the cells do not have similar spatial orientation, their ions do not line up and create waves to be detected. Pyramidal neurons of the cortex are thought to produce the most EEG signal because they are well-aligned and fire together. Because voltage field gradients fall off with the square of distance, activity from deep sources is more difficult to detect than currents near the skull.
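
The falloff with the square of distance mentioned above can be made concrete with an idealized dipole model. The sketch below assumes an infinite homogeneous conductor, which ignores the skull and scalp layers, so the absolute values are illustrative only; the dipole moment and conductivity figures are assumed, order-of-magnitude numbers.

```python
import math

# Idealized current-dipole potential on the dipole axis in an infinite
# homogeneous medium: V(r) = p / (4*pi*sigma*r^2). Real head tissue
# changes the numbers but not the 1/r^2 falloff discussed above.
def dipole_potential(p_dipole: float, r_m: float, sigma: float = 0.33) -> float:
    """p_dipole in A*m, r_m in metres, sigma in S/m (rough brain-tissue value)."""
    return p_dipole / (4 * math.pi * sigma * r_m ** 2)

p = 10e-9                              # 10 nA*m, order of a small cortical patch
shallow = dipole_potential(p, 0.01)    # source 1 cm from the electrode
deep = dipole_potential(p, 0.04)       # source 4 cm deep
print(shallow / deep)                  # ≈ 16: four times the distance, ~16x weaker
```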

Scalp EEG activity shows oscillations at a variety of frequencies. Several of these oscillations have characteristic frequency ranges, spatial distributions and are associated with different states of brain functioning (e.g., waking and the various sleep stages). These oscillations represent synchronized activity over a network of neurons. The neuronal networks underlying some of these oscillations are understood (e.g., the thalamocortical resonance underlying sleep spindles), while many others are not (e.g., the system that generates the posterior basic rhythm). Research that measures both EEG and neuron spiking finds the relationship between the two is complex, with a combination of EEG power in the gamma band and phase in the delta band relating most strongly to neuron spike activity.

Method

Computer electroencephalograph Neurovisor-BMM 40 produced and offered in Russia

In conventional scalp EEG, the recording is obtained by placing electrodes on the scalp with a conductive gel or paste, usually after preparing the scalp area by light abrasion to reduce impedance due to dead skin cells. Many systems use individual electrodes, each attached to its own wire. Some systems use caps or nets into which electrodes are embedded; this is particularly common when high-density arrays of electrodes are needed.

Electrode locations and names are specified by the International 10–20 system for most clinical and research applications (except when high-density arrays are used). This system ensures that the naming of electrodes is consistent across laboratories. In most clinical applications, 19 recording electrodes (plus ground and system reference) are used. A smaller number of electrodes are typically used when recording EEG from neonates. Additional electrodes can be added to the standard set-up when a clinical or research application demands increased spatial resolution for a particular area of the brain. High-density arrays (typically via cap or net) can contain up to 256 electrodes more-or-less evenly spaced around the scalp.

Each electrode is connected to one input of a differential amplifier (one amplifier per pair of electrodes); a common system reference electrode is connected to the other input of each differential amplifier. These amplifiers amplify the voltage between the active electrode and the reference (typically 1,000–100,000 times, or 60–100 dB of power gain). In analog EEG, the signal is then filtered (next paragraph), and the EEG signal is output as the deflection of pens as paper passes underneath. Most EEG systems these days, however, are digital, and the amplified signal is digitized via an analog-to-digital converter, after being passed through an anti-aliasing filter. Analog-to-digital sampling typically occurs at 256–512 Hz in clinical scalp EEG; sampling rates of up to 20 kHz are used in some research applications.
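
The two gain figures quoted above are consistent with each other: decibels follow 20·log10(G) for a voltage gain G, so 1,000× corresponds to 60 dB and 100,000× to 100 dB. A quick check:

```python
import math

# Decibels of voltage gain: dB = 20 * log10(G).
def voltage_gain_db(gain: float) -> float:
    return 20 * math.log10(gain)

print(voltage_gain_db(1_000))    # 60.0
print(voltage_gain_db(100_000))  # 100.0
```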

During the recording, a series of activation procedures may be used. These procedures may induce normal or abnormal EEG activity that might not otherwise be seen. These procedures include hyperventilation, photic stimulation (with a strobe light), eye closure, mental activity, sleep and sleep deprivation. During (inpatient) epilepsy monitoring, a patient's typical seizure medications may be withdrawn.

The digital EEG signal is stored electronically and can be filtered for display. Typical settings for the high-pass filter and a low-pass filter are 0.5–1 Hz and 35–70 Hz respectively. The high-pass filter typically filters out slow artifact, such as electrogalvanic signals and movement artifact, whereas the low-pass filter filters out high-frequency artifacts, such as electromyographic signals. An additional notch filter is typically used to remove artifact caused by electrical power lines (60 Hz in the United States and 50 Hz in many other countries).
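
The display filtering described above can be sketched with SciPy. The filter orders, Q factor, and test signal below are illustrative choices, not a clinical standard; a 0.5–35 Hz band-pass and a 60 Hz notch are applied to a synthetic trace containing alpha-band activity plus mains interference.

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfiltfilt, filtfilt

fs = 256                                   # Hz, a typical clinical sampling rate
sos = butter(4, [0.5, 35], btype="bandpass", output="sos", fs=fs)  # 0.5-35 Hz
b_n, a_n = iirnotch(60, Q=30, fs=fs)       # 60 Hz mains notch (US)

t = np.arange(0, 4, 1 / fs)
raw = (40e-6 * np.sin(2 * np.pi * 10 * t)       # 10 Hz alpha-like signal
       + 40e-6 * np.sin(2 * np.pi * 60 * t))    # mains interference

# Zero-phase filtering, as display filters should not shift waveforms in time.
clean = filtfilt(b_n, a_n, sosfiltfilt(sos, raw))

# The 60 Hz component is strongly attenuated; the 10 Hz component survives.
spectrum = np.abs(np.fft.rfft(clean))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(spectrum[freqs == 60][0] / spectrum[freqs == 10][0])  # << 1
```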

The EEG signals can be captured with open-source hardware such as OpenBCI, and the signal can be processed by freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox.

As part of an evaluation for epilepsy surgery, it may be necessary to insert electrodes near the surface of the brain, under the surface of the dura mater. This is accomplished via burr hole or craniotomy. This is referred to variously as "electrocorticography (ECoG)", "intracranial EEG (I-EEG)" or "subdural EEG (SD-EEG)". Depth electrodes may also be placed into brain structures such as the amygdala or hippocampus, which are common epileptic foci and may not be "seen" clearly by scalp EEG. The electrocorticographic signal is processed in the same manner as digital scalp EEG (above), with a couple of caveats. ECoG is typically recorded at higher sampling rates than scalp EEG because of the requirements of the Nyquist theorem; the subdural signal has a higher predominance of high-frequency components. Also, many of the artifacts that affect scalp EEG do not impact ECoG, and therefore display filtering is often not needed.

A typical adult human EEG signal is about 10 μV to 100 μV in amplitude when measured from the scalp.

Since an EEG voltage signal represents a difference between the voltages at two electrodes, the display of the EEG for the reading encephalographer may be set up in one of several ways. The representation of the EEG channels is referred to as a montage.

Sequential montage
Each channel (i.e., waveform) represents the difference between two adjacent electrodes. The entire montage consists of a series of these channels. For example, the channel "Fp1-F3" represents the difference in voltage between the Fp1 electrode and the F3 electrode. The next channel in the montage, "F3-C3", represents the voltage difference between F3 and C3, and so on through the entire array of electrodes.
Referential montage
Each channel represents the difference between a certain electrode and a designated reference electrode. There is no standard position for this reference; it is, however, at a different position than the "recording" electrodes. Midline positions such as Cz, Oz, or Pz are often used as the online reference because they do not amplify the signal in one hemisphere over the other. Other popular offline references are:
  • REST reference: an offline computational reference at infinity, where the potential is zero. REST (reference electrode standardization technique) uses the equivalent sources inside the brain implied by a set of scalp recordings to transform recordings made against any online or offline non-zero reference (average, linked ears, etc.) into new recordings with infinity zero as the standardized reference.
  • "Linked ears": a physical or mathematical average of electrodes attached to both earlobes or mastoids.
Average reference montage
The outputs of all of the amplifiers are summed and averaged, and this averaged signal is used as the common reference for each channel.
Laplacian montage
Each channel represents the difference between an electrode and a weighted average of the surrounding electrodes.

When analog (paper) EEGs are used, the technologist switches between montages during the recording in order to highlight or better characterize certain features of the EEG. With digital EEG, all signals are typically digitized and stored in a particular (usually referential) montage; since any montage can be constructed mathematically from any other, the EEG can be viewed by the electroencephalographer in any display montage that is desired.
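
The claim that any montage can be constructed mathematically from a stored referential recording follows from simple channel algebra: every displayed channel is a linear combination of the stored ones. A minimal sketch with illustrative (random) data; the channel names are taken from the 10–20 examples above.

```python
import numpy as np

# Data stored against a common reference; (channels, samples).
rng = np.random.default_rng(2)
names = ["Fp1", "F3", "C3"]
ref_data = rng.standard_normal((3, 1000))

# Sequential (bipolar) channel Fp1-F3 = (Fp1 - ref) - (F3 - ref):
# the common reference cancels out of the subtraction.
fp1_f3 = ref_data[0] - ref_data[1]

# Average reference montage: subtract the mean across electrodes.
avg_ref = ref_data - ref_data.mean(axis=0, keepdims=True)

# Laplacian-style channel: electrode minus a weighted average of neighbours
# (weights here are an illustrative equal split between two neighbours).
c3_lap = ref_data[2] - 0.5 * (ref_data[0] + ref_data[1])

print(fp1_f3.shape, avg_ref.shape, c3_lap.shape)  # (1000,) (3, 1000) (1000,)
```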

The EEG is read by a clinical neurophysiologist or neurologist (depending on local custom and law regarding medical specialities), optimally one who has specific training in the interpretation of EEGs for clinical purposes. This is done by visual inspection of the waveforms, called graphoelements. The use of computer signal processing of the EEG – so-called quantitative electroencephalography – is somewhat controversial when used for clinical purposes (although there are many research uses).

Dry EEG electrodes

In the early 1990s Babak Taheri, at University of California, Davis demonstrated the first single and also multichannel dry active electrode arrays using micro-machining. The single channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode. The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio.

In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to the quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.

In 2018, a functional dry electrode composed of a polydimethylsiloxane elastomer filled with conductive carbon nanofibers was reported. This research was conducted at the U.S. Army Research Laboratory. EEG technology often involves applying a gel to the scalp, which facilitates a strong signal-to-noise ratio; this results in more reproducible and reliable experimental results. Since patients dislike having their hair filled with gel, and the lengthy setup requires trained staff on hand, utilizing EEG outside the laboratory setting can be difficult. Additionally, the performance of wet electrode sensors has been observed to degrade after a span of hours. Therefore, research has been directed toward developing dry and semi-dry EEG bioelectronic interfaces.

Dry electrode signals depend on mechanical contact, so it can be difficult to get a usable signal because of the impedance between the skin and the electrode. Some EEG systems attempt to circumvent this issue by applying a saline solution. Others have a semi-dry nature and release small amounts of gel upon contact with the scalp. Another solution uses spring-loaded pin setups, which may be uncomfortable. They may also be dangerous if used in a situation where a patient could bump their head, since the pins could become lodged after an impact.

Currently, headsets are available incorporating dry electrodes with up to 30 channels. Such designs are able to compensate for some of the signal quality degradation related to high impedances by optimizing pre-amplification, shielding and supporting mechanics.

Limitations

EEG has several limitations. Most important is its poor spatial resolution. EEG is most sensitive to a particular set of post-synaptic potentials: those generated in superficial layers of the cortex, on the crests of gyri directly abutting the skull and radial to the skull. Dendrites that are deeper in the cortex, inside sulci, in midline or deep structures (such as the cingulate gyrus or hippocampus), or producing currents that are tangential to the skull make far less contribution to the EEG signal.

EEG recordings do not directly capture axonal action potentials. An action potential can be accurately represented as a current quadrupole, meaning that its field decreases more rapidly than the field produced by the current dipole of post-synaptic potentials. In addition, since EEG represents averages over thousands of neurons, a large population of cells in synchronous activity is necessary to cause a significant deflection in the recording. Action potentials are very fast, so the chances of field summation are slim. However, neural backpropagation, as a typically longer dendritic current dipole, can be picked up by EEG electrodes and is a reliable indication of the occurrence of neural output.

Not only do EEGs capture dendritic currents almost exclusively, as opposed to axonal currents, they also favor populations of parallel dendrites transmitting current in the same direction at the same time. Pyramidal neurons of cortical layers II/III and V extend apical dendrites to layer I. Currents moving up or down these processes underlie most of the signals produced by electroencephalography.

Therefore, EEG provides information heavily biased toward particular neuron types and generally should not be used to make claims about global brain activity. The meninges, cerebrospinal fluid, and skull "smear" the EEG signal, obscuring its intracranial source.

It is mathematically impossible to reconstruct a unique intracranial current source for a given EEG signal, as some currents produce potentials that cancel each other out. This is referred to as the inverse problem. However, much work has been done to produce remarkably good estimates of, at least, a localized electric dipole that represents the recorded currents.
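
The forward relationship between sources and scalp potentials is linear, which is what makes regularized estimates possible despite the non-uniqueness. As an illustration only (not the method of any particular package), here is a minimal minimum-norm sketch: the lead field, source amplitudes, and noise level are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lead field: maps 3 dipole source amplitudes to 8 scalp electrodes.
# In practice the lead field comes from a head model (BEM/FEM); here it is
# random purely for illustration.
n_sensors, n_sources = 8, 3
L = rng.standard_normal((n_sensors, n_sources))

true_sources = np.array([1.5, -0.5, 0.8])
v_scalp = L @ true_sources + 0.01 * rng.standard_normal(n_sensors)  # noisy EEG

# Tikhonov-regularized minimum-norm estimate:
#   j_hat = L^T (L L^T + lambda I)^-1 v
lam = 1e-3
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), v_scalp)
```

Regularization (the `lam` term) picks one source configuration among the infinitely many consistent with the scalp data, which is why such estimates are "remarkably good" approximations rather than exact reconstructions.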

EEG vis-à-vis fMRI, fNIRS, fUS and PET

EEG has several strong points as a tool for exploring brain activity. EEGs can detect changes over milliseconds, which is excellent considering an action potential takes approximately 0.5–130 milliseconds to propagate across a single neuron, depending on the type of neuron. Other methods of looking at brain activity, such as PET, fMRI or fUS have time resolution between seconds and minutes. EEG measures the brain's electrical activity directly, while other methods record changes in blood flow (e.g., SPECT, fMRI, fUS) or metabolic activity (e.g., PET, NIRS), which are indirect markers of brain electrical activity.

EEG can be used simultaneously with fMRI or fUS so that high-temporal-resolution data can be recorded at the same time as high-spatial-resolution data. However, since the data derived from each occur over different time courses, the data sets do not necessarily represent exactly the same brain activity. There are technical difficulties associated with combining EEG and fMRI, including the need to remove the MRI gradient artifact present during MRI acquisition. Furthermore, currents can be induced in moving EEG electrode wires by the magnetic field of the MRI.

EEG can be used simultaneously with NIRS or fUS without major technical difficulties. There is no influence of these modalities on each other and a combined measurement can give useful information about electrical activity as well as hemodynamics at medium spatial resolution.

EEG vis-à-vis MEG

EEG reflects correlated synaptic activity caused by post-synaptic potentials of cortical neurons. The ionic currents involved in the generation of fast action potentials may not contribute greatly to the averaged field potentials representing the EEG. More specifically, the scalp electrical potentials that produce EEG are generally thought to be caused by the extracellular ionic currents caused by dendritic electrical activity, whereas the fields producing magnetoencephalographic signals are associated with intracellular ionic currents.

Normal activity

The EEG is typically described in terms of (1) rhythmic activity and (2) transients. The rhythmic activity is divided into bands by frequency. To some degree, these frequency bands are a matter of nomenclature (i.e., any rhythmic activity between 8–12 Hz can be described as "alpha"), but these designations arose because rhythmic activity within a certain frequency range was noted to have a certain distribution over the scalp or a certain biological significance. Frequency bands are usually extracted using spectral methods (for instance, Welch's method) as implemented, for example, in freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox. Computational processing of the EEG is often named quantitative electroencephalography (qEEG).
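
As a rough illustration of spectral band extraction, the sketch below applies Welch's method from SciPy to a synthetic signal. The sampling rate, band edges, and signal are assumptions chosen for the example, not values prescribed by EEGLAB or the Neurophysiological Biomarker Toolbox.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Synthetic "EEG": a 10 Hz alpha-like rhythm (30 uV) plus broadband noise.
rng = np.random.default_rng(1)
eeg = 30e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

# Welch power spectral density with 2-second segments.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

# Sum the PSD over conventional bands (band edges are a nomenclature choice).
bands = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 13), "beta": (13, 30)}
power = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power[name] = psd[mask].sum()

dominant = max(power, key=power.get)
```

In this synthetic example the alpha band dominates, mirroring the eyes-closed resting pattern described above.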

Most of the cerebral signal observed in the scalp EEG falls in the range of 1–20 Hz (activity below or above this range is likely to be artifactual, under standard clinical recording techniques). Waveforms are subdivided into bandwidths known as alpha, beta, theta, and delta to signify the majority of the EEG used in clinical practice.

Comparison of EEG bands

Delta (< 4 Hz) – frontally in adults, posteriorly in children; high-amplitude waves
  Normally:
  • adult slow-wave sleep
  • in babies
  • has been found during some continuous-attention tasks
  Pathologically:
  • subcortical lesions
  • diffuse lesions
  • metabolic encephalopathy, hydrocephalus
  • deep midline lesions

Theta (4–7 Hz) – found in locations not related to the task at hand
  Normally:
  • higher in young children
  • drowsiness in adults and teens
  • idling
  • associated with inhibition of elicited responses (has been found to spike in situations where a person is actively trying to repress a response or action)
  Pathologically:
  • focal subcortical lesions
  • metabolic encephalopathy
  • deep midline disorders
  • some instances of hydrocephalus

Alpha (8–12 Hz) – posterior regions of head, both sides, higher in amplitude on dominant side; central sites (C3–C4) at rest
  Normally:
  • relaxed/reflecting
  • closing the eyes
  • also associated with inhibition control, seemingly with the purpose of timing inhibitory activity in different locations across the brain
  Pathologically:
  • coma

Beta (13–30 Hz) – both sides, symmetrical distribution, most evident frontally; low-amplitude waves
  Normally:
  • range span: active calm → intense → stressed → mild obsessive
  • active thinking, focus, high alert, anxious

Gamma (> 32 Hz) – somatosensory cortex
  Normally:
  • displays during cross-modal sensory processing (perception that combines two different senses, such as sound and sight)
  • also shown during short-term memory matching of recognized objects, sounds, or tactile sensations
  Pathologically:
  • a decrease in gamma-band activity may be associated with cognitive decline, especially when related to the theta band; however, this has not been proven for use as a clinical diagnostic measurement

Mu (8–12 Hz) – sensorimotor cortex
  Normally:
  • shows rest-state motor neurons
  Pathologically:
  • mu suppression could indicate that motor mirror neurons are working; deficits in mu suppression, and thus in mirror neurons, might play a role in autism

The practice of using only whole numbers in the definitions comes from practical considerations in the days when only whole cycles could be counted on paper records. This leads to gaps in the definitions, as seen elsewhere on this page. The theoretical definitions have always been more carefully defined to include all frequencies. Unfortunately, there is no agreement in standard reference works on what these ranges should be; values for the upper end of alpha and lower end of beta include 12, 13, 14, and 15 Hz. If the threshold is taken as 14 Hz, then the slowest beta wave has about the same duration as the longest spike (70 ms), which makes this the most useful value.
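
One consequence of whole-number band edges is that software still has to commit to concrete boundaries. A minimal sketch, assuming the 14 Hz alpha/beta threshold discussed above and closing the gaps with half-open intervals (a design choice for the example, not a standard):

```python
def classify_band(freq_hz: float) -> str:
    """Map a frequency to a conventional EEG band name.

    Band edges are a nomenclature choice; here the alpha/beta boundary
    is taken at 14 Hz, one of several values used in reference works,
    and half-open intervals remove the gaps left by whole-number bands.
    """
    if freq_hz < 4:
        return "delta"
    if freq_hz < 8:
        return "theta"
    if freq_hz < 14:
        return "alpha"
    if freq_hz < 30:
        return "beta"
    return "gamma"
```

With a 14 Hz threshold, the slowest "beta" cycle lasts 1/14 s, or about 71 ms, matching the 70 ms spike duration mentioned above.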


Human EEG with prominent alpha-rhythm

Wave patterns

Delta waves
  • Delta is the frequency range up to 4 Hz. It tends to comprise the highest-amplitude and slowest waves. It is seen normally in adults in slow-wave sleep and in babies. It may occur focally with subcortical lesions and in general distribution with diffuse lesions, metabolic encephalopathy, hydrocephalus, or deep midline lesions. It is usually most prominent frontally in adults (e.g., FIRDA – frontal intermittent rhythmic delta) and posteriorly in children (e.g., OIRDA – occipital intermittent rhythmic delta).
Theta waves
  • Theta is the frequency range from 4 Hz to 7 Hz. Theta is seen normally in young children. It may be seen in drowsiness or arousal in older children and adults; it can also be seen in meditation. Excess theta for age represents abnormal activity. It can be seen as a focal disturbance in focal subcortical lesions, or in generalized distribution in diffuse disorders, metabolic encephalopathy, deep midline disorders, or some instances of hydrocephalus. Conversely, this range has also been associated with reports of relaxed, meditative, and creative states.
Alpha waves
  • Alpha is the frequency range from 8 Hz to 12 Hz. Hans Berger named the first rhythmic EEG activity he observed the "alpha wave". This was the "posterior basic rhythm" (also called the "posterior dominant rhythm" or the "posterior alpha rhythm"), seen in the posterior regions of the head on both sides, higher in amplitude on the dominant side. It emerges with closing of the eyes and with relaxation, and attenuates with eye opening or mental exertion. The posterior basic rhythm is actually slower than 8 Hz in young children (therefore technically in the theta range).
Sensorimotor rhythm aka mu rhythm
  • In addition to the posterior basic rhythm, there are other normal alpha rhythms, such as the mu rhythm (alpha activity in the contralateral sensory and motor cortical areas), which emerges when the hands and arms are idle, and the "third rhythm" (alpha activity in the temporal or frontal lobes). Alpha can be abnormal; for example, an EEG with diffuse alpha occurring in coma that is not responsive to external stimuli is referred to as "alpha coma".
Beta waves
  • Beta is the frequency range from 13 Hz to about 30 Hz. It is seen usually on both sides in symmetrical distribution and is most evident frontally. Beta activity is closely linked to motor behavior and is generally attenuated during active movements. Low-amplitude beta with multiple and varying frequencies is often associated with active, busy or anxious thinking and active concentration. Rhythmic beta with a dominant set of frequencies is associated with various pathologies, such as Dup15q syndrome, and drug effects, especially benzodiazepines. It may be absent or reduced in areas of cortical damage. It is the dominant rhythm in patients who are alert or anxious or who have their eyes open.
Gamma waves
  • Gamma is the frequency range approximately 30–100 Hz. Gamma rhythms are thought to represent binding of different populations of neurons together into a network for the purpose of carrying out a certain cognitive or motor function.
Mu waves
  • Mu is the 8–13 Hz range and partly overlaps with other frequencies. It reflects the synchronous firing of motor neurons in a resting state. Mu suppression is thought to reflect motor mirror neuron systems: when an action is observed, the pattern extinguishes, possibly because the normal and mirror neuronal systems "go out of sync" and interfere with one another.

"Ultra-slow" or "near-DC" activity is recorded using DC amplifiers in some research contexts. It is not typically recorded in a clinical context because the signal at these frequencies is susceptible to a number of artifacts.

Some features of the EEG are transient rather than rhythmic. Spikes and sharp waves may represent seizure activity or interictal activity in individuals with epilepsy or a predisposition toward epilepsy. Other transient features are normal: vertex waves and sleep spindles are seen in normal sleep.

There are types of activity that are statistically uncommon, but not associated with dysfunction or disease. These are often referred to as "normal variants". The mu rhythm is an example of a normal variant.

The normal electroencephalogram (EEG) varies by age. The prenatal and neonatal EEG are quite different from the adult EEG. Fetuses in the third trimester and newborns display two common brain activity patterns: "discontinuous" and "trace alternant". "Discontinuous" electrical activity refers to sharp bursts of electrical activity followed by low-frequency waves. "Trace alternant" describes sharp bursts followed by short high-amplitude intervals, and usually indicates quiet sleep in newborns. The EEG in childhood generally has slower frequency oscillations than the adult EEG.

The normal EEG also varies depending on state. The EEG is used along with other measurements (EOG, EMG) to define sleep stages in polysomnography. Stage I sleep (equivalent to drowsiness in some systems) appears on the EEG as drop-out of the posterior basic rhythm. There can be an increase in theta frequencies. Santamaria and Chiappa cataloged the variety of patterns associated with drowsiness. Stage II sleep is characterized by sleep spindles – transient runs of rhythmic activity in the 12–14 Hz range (sometimes referred to as the "sigma" band) that have a frontal-central maximum. Most of the activity in Stage II is in the 3–6 Hz range. Stage III and IV sleep are defined by the presence of delta frequencies and are often referred to collectively as "slow-wave sleep". Stages I–IV comprise non-REM (or "NREM") sleep. The EEG in REM (rapid eye movement) sleep appears somewhat similar to the awake EEG.

EEG under general anesthesia depends on the type of anesthetic employed. With halogenated anesthetics, such as halothane or intravenous agents, such as propofol, a rapid (alpha or low beta), nonreactive EEG pattern is seen over most of the scalp, especially anteriorly; in some older terminology this was known as a WAR (widespread anterior rapid) pattern, contrasted with a WAIS (widespread slow) pattern associated with high doses of opiates. Anesthetic effects on EEG signals are beginning to be understood at the level of drug actions on different kinds of synapses and the circuits that allow synchronized neuronal activity.

Artifacts

Main types of artifacts in human EEG

EEG is an extremely useful technique for studying brain activity, but the measured signal is always contaminated by artifacts, which can affect the analysis of the data. An artifact is any measured signal that does not originate within the brain. Although multiple algorithms exist for the removal of artifacts, the problem of how to deal with them remains an open question. Artifacts can arise from issues with the instrument, such as faulty electrodes, line noise, or high electrode impedance, or from the physiology of the subject being recorded. Physiological artifacts include eye blinks and movements, cardiac activity, and muscle activity, and these are more complicated to remove. Artifacts may bias the visual interpretation of EEG data, as some may mimic cognitive activity and thereby affect diagnoses of conditions such as Alzheimer's disease or sleep disorders. The removal of such artifacts from EEG data used in practical applications is therefore of the utmost importance.

Artifact removal

It is important to be able to distinguish artifacts from genuine brain activity in order to prevent incorrect interpretations of EEG data. General approaches for the removal of artifacts from the data are prevention, rejection, and cancellation. The goal of any approach is to develop methodology capable of identifying and removing artifacts without affecting the quality of the EEG signal. As artifact sources are quite different, the majority of researchers focus on developing algorithms that will identify and remove a single type of noise in the signal. Simple filtering using a notch filter is commonly employed to reject components at the 50/60 Hz mains frequency. However, such simple filters are not an appropriate choice for dealing with all artifacts, because for some the frequencies overlap with the EEG frequencies.
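
A mains notch of the kind described above can be sketched with SciPy's `iirnotch`; the sampling rate, quality factor, and synthetic signal below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)

# Synthetic recording: a 10 Hz alpha-like rhythm contaminated by 50 Hz mains noise.
clean = np.sin(2 * np.pi * 10 * t)
contaminated = clean + 0.8 * np.sin(2 * np.pi * 50 * t)

# Second-order IIR notch centered on the mains frequency.
b, a = iirnotch(50.0, Q=30.0, fs=fs)
filtered = filtfilt(b, a, contaminated)  # zero-phase filtering

# The mains component is strongly attenuated while 10 Hz passes through.
residual = filtered - clean
```

A narrow notch (high Q) works here because the contaminant sits at a single known frequency; as the text notes, artifacts whose spectra overlap the EEG bands cannot be removed this way without also removing brain signal.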

Regression algorithms have a moderate computational cost and are simple. They were the most popular correction method until the mid-1990s, when they were replaced by "blind source separation" methods. Regression algorithms work on the premise that each artifact is captured by one or more reference channels. Subtracting these reference channels from the contaminated channels, in either the time or frequency domain, after estimating the impact of the reference channels on the other channels, corrects the channels for the artifact. Although the requirement for reference channels ultimately led to this class of algorithm being replaced, it still represents the benchmark against which modern algorithms are evaluated. Blind source separation (BSS) algorithms employed to remove artifacts include principal component analysis (PCA) and independent component analysis (ICA), and several algorithms in this class have been successful at tackling most physiological artifacts.
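
The regression premise can be illustrated in a few lines: estimate the reference channel's gain on a contaminated channel by least squares, then subtract its contribution. All signals and the propagation gain below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Hypothetical signals: true cortical activity, plus an ocular artifact that
# leaks into the scalp channel scaled by an unknown propagation gain.
brain = rng.standard_normal(n)
eog = np.cumsum(rng.standard_normal(n)) / 10.0  # slow eye-movement reference
true_gain = 0.6
scalp = brain + true_gain * eog

# Least-squares estimate of the reference channel's gain on the scalp channel,
# followed by subtraction of its estimated contribution.
gain = np.dot(scalp, eog) / np.dot(eog, eog)
corrected = scalp - gain * eog
```

The limitation the text mentions is visible in the premise itself: the method needs a dedicated reference channel (here `eog`), and any brain activity picked up by that reference is subtracted along with the artifact.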

Physiological artifacts

Ocular artifacts

Ocular artifacts affect the EEG signal significantly. Eye movements change the electric fields surrounding the eyes, which distorts the electric field over the scalp and therefore distorts the recorded signal. Researchers disagree on the mechanism: some argue that ocular artifacts are, or may reasonably be described as, a single generator, while others argue that it is important to understand the potentially complicated underlying mechanisms. Three potential mechanisms have been proposed to explain the ocular artifact.

The first is corneo-retinal dipole movement: an electric dipole is formed between the cornea and retina, as the former is positively and the latter negatively charged. When the eye moves, so does this dipole, which affects the electric field over the scalp; this is the most standard view. The second mechanism is retinal dipole movement, which is similar to the first but holds that the potential difference, and hence the dipole, lies across the retina, with the cornea having little effect. The third mechanism is eyelid movement. The voltage around the eyes changes when the eyelid moves, even if the eyeball does not. The eyelid can be thought of as a sliding potential source, and the impact of blinking on the recorded EEG differs from that of eye movement.

Eyelid fluttering artifacts of a characteristic type were previously called Kappa rhythm (or Kappa waves). They are usually seen in the prefrontal leads, that is, just over the eyes, and sometimes appear with mental activity. They are usually in the theta (4–7 Hz) or alpha (7–14 Hz) range. They were so named because they were believed to originate from the brain; later study revealed they were generated by rapid fluttering of the eyelids, sometimes so minute that it was difficult to see. Being noise in the EEG reading, they should not technically be called a rhythm or wave. Current usage in electroencephalography therefore refers to the phenomenon as an eyelid fluttering artifact rather than a Kappa rhythm (or wave).

The propagation of the ocular artifact is affected by multiple factors, including the properties of the subject's skull, neuronal tissues, and skin, but the signal may be approximated as inversely proportional to the square of the distance from the eyes. The electrooculogram (EOG) consists of a series of electrodes measuring voltage changes close to the eye and is the most common tool for dealing with eye movement artifacts in the EEG signal.

Muscular artifacts

Another source of artifacts is muscle movement across the body. This class of artifact can be recorded by all electrodes on the scalp due to myogenic (muscle) activity. These artifacts have no single origin; they arise from functionally independent muscle groups, meaning the characteristics of the artifact are not constant. The observed patterns change depending on the subject's sex, the particular muscle tissue, and its degree of contraction. The frequency range of muscular artifacts is wide and overlaps with every classic EEG rhythm, but most of the power is concentrated between 20 and 300 Hz, making the gamma band particularly susceptible. Some muscle artifacts may have activity at frequencies as low as 2 Hz, so the delta and theta bands may also be affected. Muscular artifacts can affect sleep studies, as unconscious bruxism (grinding of teeth) or snoring can seriously impact the quality of the recorded EEG; recordings made of epilepsy patients may likewise be significantly affected.

Cardiac artifacts

The potential due to cardiac activity introduces electrocardiograph (ECG) errors in the EEG. Artifacts arising due to cardiac activity may be removed with the help of an ECG reference signal.

Other physiological artifacts

Glossokinetic artifacts are caused by the potential difference between the base and the tip of the tongue. Minor tongue movements can contaminate the EEG, especially in parkinsonian and tremor disorders.

Environmental artifacts

In addition to artifacts generated by the body, many artifacts originate from outside the body. Movement by the patient, or even just settling of the electrodes, may cause electrode pops, spikes originating from a momentary change in the impedance of a given electrode. Poor grounding of the EEG electrodes can cause significant 50 or 60 Hz artifact, depending on the local power system's frequency. A third source of possible interference can be the presence of an IV drip; such devices can cause rhythmic, fast, low-voltage bursts, which may be confused for spikes.

Abnormal activity

Abnormal activity can broadly be separated into epileptiform and non-epileptiform activity. It can also be separated into focal or diffuse.

Focal epileptiform discharges represent fast, synchronous potentials in a large number of neurons in a somewhat discrete area of the brain. These can occur as interictal activity, between seizures, and represent an area of cortical irritability that may be predisposed to producing epileptic seizures. Interictal discharges are not wholly reliable for determining whether a patient has epilepsy, nor where seizures might originate. (See focal epilepsy.)

Generalized epileptiform discharges often have an anterior maximum, but these are seen synchronously throughout the entire brain. They are strongly suggestive of a generalized epilepsy.

Focal non-epileptiform abnormal activity may occur over areas of the brain where there is focal damage of the cortex or white matter. It often consists of an increase in slow frequency rhythms and/or a loss of normal higher frequency rhythms. It may also appear as focal or unilateral decrease in amplitude of the EEG signal.

Diffuse non-epileptiform abnormal activity may manifest as diffuse abnormally slow rhythms or bilateral slowing of normal rhythms, such as the posterior basic rhythm (PBR).

Intracortical EEG electrodes and subdural electrodes can be used in tandem to discriminate artifact from epileptiform and other severe neurological events.

More advanced measures of abnormal EEG signals have also recently received attention as possible biomarkers for different disorders such as Alzheimer's disease.

Remote communication

Systems for decoding imagined speech from EEG have applications such as in brain–computer interfaces.

EEG diagnostics

The Department of Defense (DoD), Veterans Affairs (VA), and the U.S. Army Research Laboratory (ARL) collaborated on EEG diagnostics to detect mild to moderate traumatic brain injury (mTBI) in combat soldiers. Between 2000 and 2012, 75 percent of brain injuries sustained in U.S. military operations were classified as mTBI. In response, the DoD pursued new technologies capable of rapid, accurate, non-invasive, field-capable detection of mTBI.

Combat personnel often develop PTSD and mTBI together. Both conditions present with altered low-frequency brain wave oscillations: PTSD presents with decreases in low-frequency oscillations, whereas mTBI is linked to increases. Effective EEG diagnostics can help doctors accurately identify conditions and appropriately treat injuries in order to mitigate long-term effects.

Traditionally, clinical evaluation of EEGs involved visual inspection. Quantitative electroencephalography (qEEG) instead applies computerized algorithmic methods that analyze a specific region of the brain and transform the data into a meaningful "power spectrum" of the area. Accurately differentiating between mTBI and PTSD can significantly improve recovery outcomes for patients, especially since long-term changes in neural communication can persist after an initial mTBI incident.

Another common measurement made from EEG data is that of complexity measures such as Lempel-Ziv complexity, fractal dimension, and spectral flatness, which are associated with particular pathologies or pathology stages.
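
As an illustration, the sketch below uses an LZ78-style phrase count as a simplified stand-in for the Lempel-Ziv complexity mentioned above, applied after median binarization (a common first step); a rhythmic signal parses into fewer phrases than an irregular one. The signals are synthetic.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Number of distinct phrases in an LZ78-style parse of the sequence
    (a simplified stand-in for Lempel-Ziv complexity on binarized EEG)."""
    phrases, current = set(), ""
    for ch in bits:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def binarize(x: np.ndarray) -> str:
    """Threshold a signal at its median, the usual first step."""
    return "".join("1" if v > np.median(x) else "0" for v in x)

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 512)
regular = np.sin(2 * np.pi * 10 * t)    # rhythmic signal: low complexity
irregular = rng.standard_normal(512)    # noisy signal: high complexity

c_regular = lz_phrase_count(binarize(regular))
c_irregular = lz_phrase_count(binarize(irregular))
```

The intuition behind using such measures as biomarkers is exactly this contrast: more predictable signals compress into fewer phrases.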

Economics

Inexpensive EEG devices exist for the low-cost research and consumer markets. Recently, a few companies have miniaturized medical grade EEG technology to create versions accessible to the general public. Some of these companies have built commercial EEG devices retailing for less than US$100.

  • In 2004 OpenEEG released its ModularEEG as open source hardware. Compatible open source software includes a game for balancing a ball.
  • In 2007 NeuroSky released the first affordable consumer-based EEG along with the game NeuroBoy. It was also the first large-scale EEG device to use dry sensor technology.
  • In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography.
  • In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.
  • In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course. It is by far the best-selling consumer-based EEG to date.
  • In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.
  • In 2010, NeuroSky added a blink and electromyography function to the MindSet.
  • In 2011, NeuroSky released the MindWave, an EEG device designed for educational purposes and games. The MindWave won the Guinness Book of World Records award for "Heaviest machine moved using a brain control interface".
  • In 2012, a Japanese gadget project, neurowear, released Necomimi: a headset with motorized cat ears. The headset is a NeuroSky MindWave unit with two motors on the headband where a cat's ears might be. Slipcovers shaped like cat ears sit over the motors so that as the device registers emotional states the ears move accordingly: when the wearer is relaxed, the ears fall to the sides, and they perk up when the wearer is excited again.
  • In 2014, OpenBCI released an eponymous open source brain-computer interface after a successful Kickstarter campaign in 2013. The board, later renamed "Cyton", has 8 channels, expandable to 16 with the Daisy module. It supports EEG, EKG, and EMG. The Cyton board is based on the Texas Instruments ADS1299 IC and the Arduino or PIC microcontroller, and initially cost $399 before increasing in price to $999. It uses standard metal cup electrodes and conductive paste.
  • In 2015, Mind Solutions Inc released the smallest consumer BCI to date, the NeuroSync. This device functions as a dry sensor at a size no larger than a Bluetooth ear piece.
  • In 2015, the China-based company Macrotellect released BrainLink Pro and BrainLink Lite, consumer-grade EEG wearables offering 20 brain fitness enhancement apps on the Apple and Android app stores.
  • In 2021, BioSerenity released the Neuronaute and Icecap, a single-use disposable EEG headset that allows recordings of quality equivalent to traditional cup electrodes.

Future research

The EEG has been used for many purposes besides the conventional uses of clinical diagnosis and conventional cognitive neuroscience. An early use was during World War II by the U.S. Army Air Corps to screen out pilots in danger of having seizures; long-term EEG recordings in epilepsy patients are still used today for seizure prediction. Neurofeedback remains an important extension, and in its most advanced form is also attempted as the basis of brain computer interfaces. The EEG is also used quite extensively in the field of neuromarketing.

The EEG is altered by drugs that affect brain functions, the chemicals that are the basis for psychopharmacology. Berger's early experiments recorded the effects of drugs on EEG. The science of pharmaco-electroencephalography has developed methods to identify substances that systematically alter brain functions for therapeutic and recreational use.

Honda is attempting to develop a system to enable an operator to control its Asimo robot using EEG, a technology it eventually hopes to incorporate into its automobiles.

EEGs have been used as evidence in criminal trials in the Indian state of Maharashtra. Brain Electrical Oscillation Signature Profiling (BEOS), an EEG technique, was used in the trial of State of Maharashtra v. Sharma to show Sharma remembered using arsenic to poison her ex-fiancé, although the reliability and scientific basis of BEOS is disputed.

A great deal of research is currently being carried out to make EEG devices smaller, more portable, and easier to use. So-called "wearable EEG" is based on creating low-power wireless collection electronics and dry electrodes that do not require a conductive gel. Wearable EEG aims to provide small EEG devices that are present only on the head and can record EEG for days, weeks, or months at a time, as with ear-EEG. Such prolonged and easy-to-use monitoring could make a step change in the diagnosis of chronic conditions such as epilepsy and greatly improve end-user acceptance of BCI systems. Research is also being carried out on identifying specific solutions to increase the battery lifetime of wearable EEG devices through data reduction approaches.

In research, EEG is currently often used in combination with machine learning. EEG data are pre-processed and then passed to machine learning algorithms, which are trained to recognize conditions such as schizophrenia, epilepsy, or dementia. They are also increasingly used for seizure detection. Machine learning allows the data to be analyzed automatically. In the long run, this research is intended to build algorithms that support physicians in their clinical practice and to provide further insights into diseases. In this vein, complexity measures of EEG data are often calculated, such as Lempel-Ziv complexity, fractal dimension, and spectral flatness. It has been shown that combining or multiplying such measures can reveal previously hidden information in EEG data.
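
A typical pipeline of this kind extracts features, for example band powers, from epochs and trains a classifier on them. The sketch below is a toy version on synthetic data, with a nearest-centroid rule standing in for the machine-learning model; the two "strong alpha"/"weak alpha" classes, sampling rate, and amplitudes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n_epochs, n_samples = 128, 40, 256  # 2-second epochs (assumed)

def band_power(x: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of x between lo and hi Hz (simple FFT periodogram)."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

def make_epoch(alpha_amp: float) -> np.ndarray:
    """Synthetic epoch: a 10 Hz alpha rhythm of given amplitude plus noise."""
    t = np.arange(n_samples) / fs
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(n_samples)

# Two hypothetical classes: strong alpha (label 1) vs weak alpha (label 0).
X = np.array([[band_power(make_epoch(amp), 8, 13)]
              for amp in [2.0] * n_epochs + [0.2] * n_epochs])
y = np.array([1] * n_epochs + [0] * n_epochs)

# Nearest-centroid classifier on the single band-power feature.
centroids = {label: X[y == label].mean() for label in (0, 1)}
pred = np.array([min(centroids, key=lambda c: abs(x - centroids[c]))
                 for x in X[:, 0]])
accuracy = (pred == y).mean()
```

Real studies replace the synthetic epochs with recorded EEG, use many channels and features, and substitute a proper classifier with cross-validation, but the feature-extraction-then-classify structure is the same.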

EEG signals from musical performers were used to create instant compositions and one CD by the Brainwave Music Project, run at the Computer Music Center at Columbia University by Brad Garton and Dave Soldier. Similarly, an hour-long recording of the brainwaves of Ann Druyan was included on the Voyager Golden Record, launched on the Voyager probes in 1977, in case any extraterrestrial intelligence could decode her thoughts, which included what it was like to fall in love.

History

The first human EEG recording obtained by Hans Berger in 1924. The upper tracing is EEG, and the lower is a 10 Hz timing signal.
Hans Berger

In 1875, Richard Caton (1842–1926), a physician practicing in Liverpool, presented his findings about electrical phenomena of the exposed cerebral hemispheres of rabbits and monkeys in the British Medical Journal. In 1890, Polish physiologist Adolf Beck published an investigation of spontaneous electrical activity of the brain of rabbits and dogs that included rhythmic oscillations altered by light. Beck started experiments on the electrical brain activity of animals, placing electrodes directly on the surface of the brain to test for sensory stimulation. His observation of fluctuating brain activity led him to conclude the existence of brain waves.

In 1912, Ukrainian physiologist Vladimir Vladimirovich Pravdich-Neminsky published the first animal EEG and the evoked potential of a mammal (the dog). In 1914, Napoleon Cybulski and Jelenska-Macieszyna photographed EEG recordings of experimentally induced seizures.

German physiologist and psychiatrist Hans Berger (1873–1941) recorded the first human EEG in 1924. Expanding on work previously conducted on animals by Richard Caton and others, Berger also invented the electroencephalograph (giving the device its name), an invention described "as one of the most surprising, remarkable, and momentous developments in the history of clinical neurology". His discoveries were first confirmed by British scientists Edgar Douglas Adrian and B. H. C. Matthews in 1934 and developed by them.

In 1934, Fisher and Lowenbach first demonstrated epileptiform spikes. In 1935, Gibbs, Davis and Lennox described interictal spike waves and the three cycles/s pattern of clinical absence seizures, which began the field of clinical electroencephalography. Subsequently, in 1936 Gibbs and Jasper reported the interictal spike as the focal signature of epilepsy. The same year, the first EEG laboratory opened at Massachusetts General Hospital.

Franklin Offner (1911–1999), professor of biophysics at Northwestern University, developed a prototype of the EEG that incorporated a piezoelectric inkwriter called a Crystograph (the whole device was typically known as the Offner Dynograph).

In 1947, the American EEG Society was founded and the first International EEG Congress was held. In 1953, Aserinsky and Kleitman described REM sleep.

In the 1950s, William Grey Walter developed an adjunct to EEG called EEG topography, which allowed for the mapping of electrical activity across the surface of the brain. This enjoyed a brief period of popularity in the 1980s and seemed especially promising for psychiatry. It was never accepted by neurologists and remains primarily a research tool.

Chuck Kayser with electroencephalograph electrodes and a signal conditioner for use in Project Gemini, 1965

An electroencephalograph system manufactured by Beckman Instruments was used on at least one of the Project Gemini manned spaceflights (1965–1966) to monitor the brain waves of astronauts during the flight. It was one of many Beckman instruments specialized for and used by NASA.

The first instance of the use of EEG to control a physical object, a robot, was in 1988. The robot would follow a line or stop depending on the alpha activity of the subject. If the subject relaxed and closed their eyes, thereby increasing alpha activity, the robot would move; opening the eyes, which decreased alpha activity, caused the robot to stop along its trajectory.

In October 2018, scientists connected the brains of three people to experiment with the process of thought sharing. Five groups of three people participated in the experiment using EEG. The success rate of the experiment was 81%.

Metastability in the brain

In the field of computational neuroscience, the theory of metastability refers to the human brain's ability to integrate several functional parts and to produce neural oscillations in a cooperative and coordinated manner, providing the basis for conscious activity.

Metastability, a state in which signals (such as oscillatory waves) fall outside their natural equilibrium state but persist for an extended period of time, is a principle that describes the brain's ability to make sense out of seemingly random environmental cues. In the past 25 years, interest in metastability and the underlying framework of nonlinear dynamics has been fueled by advancements in the methods by which computers model brain activity.

Overview

EEG measures the gross electrical activity of the brain that can be observed on the surface of the skull. In the metastability theory, EEG outputs produce oscillations that can be described as having identifiable patterns that correlate with each other at certain frequencies. Each neuron in a neuronal network normally outputs a dynamical oscillatory waveform, but also has the ability to output a chaotic waveform. When neurons are integrated into the neural network by interfacing neurons with each other, the dynamical oscillations created by each neuron can be combined to form highly predictable EEG oscillations.

By identifying these correlations and the individual neurons that contribute to predictable EEG oscillations, scientists can determine which cortical domains are processing in parallel and which neuronal networks are intertwined. In many cases, metastability describes instances in which distal parts of the brain interact with each other to respond to environmental stimuli.

Frequency domains of metastability

It has been suggested that one integral facet of brain dynamics underlying conscious thought is the brain's ability to convert seemingly noisy or chaotic signals into predictable oscillatory patterns.

In EEG oscillations of neural networks, neighboring waveform frequencies are correlated on a logarithmic rather than a linear scale. As a result, the mean frequencies of the oscillatory bands cannot be linked linearly. Instead, phase transitions are linked according to their ability to couple with adjacent phase shifts, in a constant state of transition between unstable and stable phase synchronization. This phase synchronization forms the basis of metastable behavior in neural networks.

Metastable behavior occurs in the frequency domain known as the 1/f regime. This regime describes an environment in which a noisy signal (also known as pink noise) has been induced, where the amount of power the signal outputs over a certain bandwidth (its power spectral density) is inversely proportional to its frequency.

Noise at the 1/f regime can be found in many biological systems – for instance, in the output of a heartbeat in an ECG waveform—but serves a unique purpose for phase synchrony in neuronal networks. At the 1/f regime, the brain is in the critical state necessary for a conscious response to weak or chaotic environmental signals because it can shift the random signals into identifiable and predictable oscillatory waveforms. While often transient, these waveforms exist in a stable form long enough to contribute to what can be thought of as conscious response to environmental stimuli.

Theories of metastability

Oscillatory activity and coordination dynamics

The dynamical system model, which represents networks composed of integrated neural systems communicating with one another between unstable and stable phases, has become an increasingly popular theory underpinning the understanding of metastability. Coordination dynamics forms the basis for this dynamical system model by describing mathematical formulae and paradigms governing the coupling of environmental stimuli to their effectors.

History of coordination dynamics and the Haken-Kelso-Bunz (HKB) model

The so-named HKB model is one of the earliest and most respected theories describing coordination dynamics in the brain. In this model, the formation of neural networks can be partly described as self-organization, where individual neurons and small neuronal systems aggregate and coordinate either to adapt or respond to local stimuli or to divide labor and specialize in function.

Transition of parallel movement of index fingers to antiparallel, symmetric movement.

In the last 20 years, the HKB model has become a widely accepted theory for explaining how the coordinated movements and behaviors of individual neurons give rise to large, end-to-end neural networks. Originally, the model described a system in which spontaneous transitions observed in finger movements could be described as a series of in-phase and out-of-phase movements.

In the mid-1980s HKB model experiments, subjects were asked to wave one finger on each hand in two modes of direction: the first, known as out-of-phase, with both fingers moving in the same direction back and forth (as windshield wipers might move); and the second, known as in-phase, with both fingers coming together and moving toward and away from the midline of the body. To illustrate coordination dynamics, the subjects were asked to move their fingers out of phase with increasing speed until their fingers were moving as fast as possible. As movement approached the critical speed, the subjects' fingers were found to shift from out-of-phase (windshield-wiper-like) movement to in-phase (toward-the-midline) movement.

The HKB model, which has also been elucidated by several complex mathematical descriptors, is still a relatively simple but powerful way to describe seemingly-independent systems that come to reach synchrony just before a state of self-organized criticality.
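This finger-movement transition can be sketched with the standard HKB relative-phase equation, dφ/dt = −a sin φ − 2b sin 2φ, where φ is the relative phase between the two fingers and the ratio b/a shrinks as movement rate rises. Anti-phase coordination (φ ≈ π) is stable only while b/a > 1/4; below that, only in-phase (φ ≈ 0) survives. The simulation below (assuming NumPy; parameter values are illustrative, not fitted to data) adds a little noise so the transition can actually occur:

```python
import numpy as np

rng = np.random.default_rng(1)
a, dt = 1.0, 0.01
phi = np.pi - 0.1                      # start near anti-phase coordination
steps = 60000
trace = np.empty(steps)
for step in range(steps):
    b = 1.0 - 0.95 * step / steps      # b/a falls as movement rate rises
    dphi = -a * np.sin(phi) - 2 * b * np.sin(2 * phi)
    phi += dphi * dt + 0.01 * np.sqrt(dt) * rng.standard_normal()  # small noise
    phi = (phi + np.pi) % (2 * np.pi) - np.pi   # wrap phase to (-pi, pi]
    trace[step] = phi

# Early on (b/a > 1/4) the phase holds near +/-pi (anti-phase, "windshield
# wipers"); once b/a drops below 1/4 it collapses to 0 (in-phase, midline).
print(abs(abs(trace[10000]) - np.pi) < 0.3)   # True: still anti-phase
print(abs(trace[-1]) < 0.3)                   # True: ended in-phase
```

The abrupt jump of the relative phase from π to 0 as a single parameter is slowly varied is the model's account of the spontaneous out-of-phase to in-phase switch observed in the experiments.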

Evolution of cognitive coordination dynamics

In the last 10 years, the HKB model has been reconciled with advanced mathematical models and supercomputer-based computation to link rudimentary coordination dynamics to higher-order processes such as learning and memory.

The traditional EEG is still useful for investigating coordination between different parts of the brain. Gamma-wave activity at 40 Hz is a prominent example of the brain's ability to be modeled dynamically and a common example of coordination dynamics. Continuous study of these and other oscillations has led to an important conclusion: analyzing waves as having a common signal phase but different amplitudes raises the possibility that these different signals serve a synergistic function.

These waves have some unusual characteristics: they are virtually simultaneous and have a very short onset latency, implying that they operate faster than synaptic conduction would allow, and their recognizable patterns are sometimes interrupted by periods of randomness. The latter idiosyncrasy has served as the basis for assuming an interaction and transition between neural subsystems. Analysis of the activation and deactivation of regions of the cortex has shown a dynamic shift between dependence and interdependence, reflecting the brain's metastable nature as a function of a coordinated dynamical system.

fMRI, large-scale electrode arrays, and MEG expand upon the patterns seen in EEG by providing visual confirmation of coordinated dynamics. The MEG, which provides an improvement over EEG in spatiotemporal characterization, allows researchers to stimulate certain parts of the brain with environmental cues and observe the response in a holistic brain model. Additionally, MEG has a response time of about one millisecond, allowing for a virtually real-time investigation of the active turning-on and -off of selected parts of the brain in response to environmental cues and conscious tasks.

Social coordination dynamics and the phi complex

A developing field in coordination dynamics involves the theory of social coordination, which attempts to relate the DC to normal human development of complex social cues following certain patterns of interaction. This work is aimed at understanding how human social interaction is mediated by metastability of neural networks. fMRI and EEG are particularly useful in mapping thalamocortical response to social cues in experimental studies.

A new theory called the phi complex has been developed by J. A. Scott Kelso and fellow researchers at Florida Atlantic University to provide experimental support for the theory of social coordination dynamics. In Kelso's experiments, two subjects were separated by an opaque barrier and asked to wag their fingers; then the barrier was removed and the subjects were instructed to continue wagging their fingers as if no change had occurred. After a short period, the movements of the two subjects sometimes became coordinated and synchronized (but other times remained asynchronous). The link between EEG and conscious social interaction is described as Phi, one of several brain rhythms operating in the 10 Hz range. Phi consists of two components: one favoring solitary behavior and another favoring interactive (interpersonal) behavior. Further analysis of Phi may reveal the social and interpersonal implications of degenerative diseases such as schizophrenia, or may provide insight into common social relationships, such as the dynamics of alpha- and omega-males or the popular bystander effect, which describes how people diffuse personal responsibility in emergency situations depending on the number of other individuals present.

Dynamic core

A second theory of metastability involves a so-called dynamic core, a term that loosely describes the thalamocortical region believed to be the integration center of consciousness. The dynamic core hypothesis (DCH) reflects the use and disuse of interconnected neuronal networks during stimulation of this region. A computer model of 65,000 spiking neurons shows that neuronal groups in the cortex and thalamus interact in the form of synchronous oscillation. The interaction between distinct neuronal groups forms the dynamic core and may help explain the nature of conscious experience. A critical feature of the DCH is that, instead of treating transitions between neural integration and non-integration as binary (either one or the other, with nothing in between), the metastable nature of the dynamic core allows for a continuum of integration.

Neural Darwinism

One theory used to integrate the dynamic core with conscious thought involves a developing concept known as neural Darwinism. In this model, metastable interactions in the thalamocortical region cause a process of selectionism via re-entry (a phenomenon describing the overall reciprocity and interactivity between signals in distant parts of the brain through coupled signal latency). Neuronal selectivity involves mechanochemical events that take place pre- and post-natally whereby neuronal connections are influenced by environmental experiences. The modification of synaptic signals as it relates to the dynamic core provides further explanation for the DCH.

Despite growing evidence for the DCH, the ability to generate mathematical constructs to model and predict dynamic core behavior has been slow to progress. Continued development of stochastic processes designed to graph neuronal signals as chaotic and non-linear has provided some algorithmic basis for analyzing how chaotic environmental signals are coupled to enhance selectivity of neural outgrowth or coordination in the dynamic core.

Global workspace hypothesis

The global workspace hypothesis is another theory to elucidate metastability, and has existed in some form since 1983. This hypothesis also focuses on the phenomenon of re-entry, the ability of a routine or process to be used by multiple parts of the brain simultaneously. Both the DCH and global neuronal workspace (GNW) models involve re-entrance, but the GNW model elaborates on re-entrant connectivity between distant parts of the brain and long-range signal flow. Workspace neurons are similar anatomically but separated spatially from each other.

One interesting aspect of the GNW is that with sufficient intensity and length over which a signal travels, a small initiation signal can be compounded to activate an "ignition" of a critical spike-inducing state. This idea is analogous to a skier on the slope of a mountain, who, by disrupting a few blocks of ice with his skis, initiates a giant avalanche in his wake. To help prove the circuit-like amplification theory, research has shown that inducing lesions in long-distance connections corrupts performance in integrative models.

A popular experiment to demonstrate the global workspace hypothesis involves showing a subject a series of backward-masked visual words (e.g., "the dog sleeps quietly" is shown as "ylteiuq speels god eht") and then asking the subject to identify the forward "translation" of these words. Not only does fMRI detect activity in the word-recognition portion of the cortex, but it also often detects activity in the parietal and prefrontal cortices. In almost every experiment, conscious input in word and audition tasks shows a much wider use of integrated portions of the brain than identical unconscious input does. The wide distribution and constant signal transfer between different areas of the brain in experimental results is a common means of attempting to prove the neural workspace hypothesis. More studies are being conducted to determine precisely the correlation between conscious and unconscious task deliberation in the realm of the global workspace.

The operational architectonics theory of brain–mind

Although the concept of metastability has been around in neuroscience for some time, the specific interpretation of metastability in the context of brain operations of different complexity has been developed by Andrew and Alexander Fingelkurts within their model of Operational Architectonics of brain–mind functioning. Metastability is basically a theory of how global integrative and local segregative tendencies coexist in the brain. Operational Architectonics is centered on the fact that, in the metastable regime of brain functioning, individual parts of the brain exhibit tendencies to function autonomously at the same time as they exhibit tendencies for coordinated activity. In accordance with Operational Architectonics, the synchronized operations produced by distributed neuronal assemblies constitute metastable spatial-temporal patterns. They are metastable because intrinsic differences in activity between neuronal assemblies are sufficiently large that each does its own job (operation), while still retaining a tendency to be coordinated together in order to realize complex brain operations.

The future of metastability

In addition to studies investigating the effects of metastable interactions on traditional social function, much research will likely focus on determining the role of the coordinated dynamic system and the global workspace in the progression of debilitating diseases such as Alzheimer's disease, Parkinson's disease, stroke, and schizophrenia.

An interest in the effect of a traumatic or semi-traumatic brain injury (TBI) on the coordinated dynamical system has developed in the last five years as the number of TBI cases has risen from war-related injuries.

Metastability

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Metastability
A metastable state of weaker bond (1), a transitional "saddle" configuration (2) and a stable state of stronger bond (3).

In chemistry and physics, metastability is an intermediate energetic state within a dynamical system other than the system's state of least energy. A ball resting in a hollow on a slope is a simple example of metastability: if the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start it rolling down the slope. Bowling pins show similar metastability by either merely wobbling for a moment or tipping over completely. A common example of metastability in science is isomerisation. Higher-energy isomers are long-lived because they are prevented from rearranging to their preferred ground state by (possibly large) barriers in the potential energy.

During a metastable state of finite lifetime, all state-describing parameters reach and hold stationary values. In isolation:

  • the state of least energy is the only one the system will inhabit for an indefinite length of time, until more external energy is added to the system (unique "absolutely stable" state);
  • the system will spontaneously leave any other state (of higher energy) to eventually return (after a sequence of transitions) to the least energetic state.

The metastability concept originated in the physics of first-order phase transitions. It then acquired new meaning in the study of aggregated subatomic particles (in atomic nuclei or in atoms) or in molecules, macromolecules or clusters of atoms and molecules. Later, it was borrowed for the study of decision-making and information transmission systems.

Metastability is common in physics and chemistry – from an atom (many-body assembly) to statistical ensembles of molecules (viscous fluids, amorphous solids, liquid crystals, minerals, etc.) at molecular levels or as a whole (see Metastable states of matter and grain piles below). The abundance of states is more prevalent as the systems grow larger and/or if the forces of their mutual interaction are spatially less uniform or more diverse.

In dynamic systems (with feedback) like electronic circuits, signal trafficking, decisional, neural and immune systems, the time-invariance of the active or reactive patterns with respect to the external influences defines stability and metastability (see brain metastability below). In these systems, the equivalent of thermal fluctuations in molecular systems is the "white noise" that affects signal propagation and the decision-making.

Statistical physics and thermodynamics

Non-equilibrium thermodynamics is a branch of physics that studies the dynamics of statistical ensembles of molecules via unstable states. Being "stuck" in a thermodynamic trough without being at the lowest energy state is known as having kinetic stability or being kinetically persistent. The particular motion or kinetics of the atoms involved has resulted in getting stuck, despite there being preferable (lower-energy) alternatives.

States of matter

Metastable states of matter (also referred to as metastates) range from melting solids (or freezing liquids) and boiling liquids (or condensing gases) to sublimating solids, supercooled liquids, and superheated liquid-gas mixtures. Extremely pure, supercooled water stays liquid below 0 °C and remains so until applied vibrations or condensation-seed doping initiates crystallization centers. This is a common situation for the droplets of atmospheric clouds.

Condensed matter and macromolecules

Metastable phases are common in condensed matter and crystallography. This is the case for anatase, a metastable polymorph of titanium dioxide, which, despite commonly being the first phase to form in many synthesis processes due to its lower surface energy, is always metastable, with rutile being the most stable phase at all temperatures and pressures. As another example, diamond is a stable phase only at very high pressures, but is a metastable form of carbon at standard temperature and pressure. It can be converted to graphite (plus leftover kinetic energy), but only after overcoming an activation energy – an intervening hill. Martensite is a metastable phase used to control the hardness of most steel. Metastable polymorphs of silica are commonly observed. In some cases, such as in the allotropes of solid boron, acquiring a sample of the stable phase is difficult.

The bonds between the building blocks of polymers such as DNA, RNA, and proteins are also metastable. Adenosine triphosphate (ATP) is a highly metastable molecule, colloquially described as being "full of energy" that can be used in many ways in biology.

Generally speaking, emulsions/colloidal systems and glasses are metastable. The metastability of silica glass, for example, is characterised by lifetimes on the order of 10⁹⁸ years (as compared with the lifetime of the universe, which is thought to be around 13.787×10⁹ years).

Sandpiles are one system that can exhibit metastability if a steep slope or tunnel is present. Sand grains form a pile due to friction. It is possible for an entire large sand pile to reach a point where it is stable, but the addition of a single grain causes large parts of it to collapse.
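This single-grain collapse is captured by the Abelian sandpile model of Bak, Tang, and Wiesenfeld. The sketch below (assuming NumPy; grid size and heights are illustrative) prepares a pile in which every site is one grain below the toppling threshold, then adds a single grain and lets the resulting avalanche run:

```python
import numpy as np

def relax(grid, threshold=4):
    """Topple every site holding >= threshold grains, sending one grain to
    each of its four neighbours (grains crossing the edge are lost),
    and repeat until the whole pile is stable again."""
    while True:
        unstable = np.argwhere(grid >= threshold)
        if unstable.size == 0:
            return grid
        for r, c in unstable:
            grid[r, c] -= threshold
            for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                    grid[rr, cc] += 1

pile = np.full((11, 11), 3)   # every site just below the toppling threshold
pile[5, 5] += 1               # one extra grain in the middle...
relax(pile)                   # ...triggers an avalanche that reaches the edges
print(int(pile.sum()))        # fewer than the 364 grains added: some spilled off
```

The metastable point is the fully loaded pile: it is stable under no perturbation at all, yet a single additional grain reorganizes a large part of the system.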

The avalanche is a well-known problem with large piles of snow and ice crystals on steep slopes. In dry conditions, snow slopes act similarly to sandpiles. An entire mountainside of snow can suddenly slide due to the presence of a skier, or even a loud noise or vibration.

Quantum mechanics

Aggregated systems of subatomic particles described by quantum mechanics (quarks inside nucleons, nucleons inside atomic nuclei, electrons inside atoms, molecules, or atomic clusters) are found to have many distinguishable states. Of these, one (or a small degenerate set) is indefinitely stable: the ground state or global minimum.

All other states besides the ground state (or those degenerate with it) have higher energies. Of all these other states, the metastable states are the ones having lifetimes lasting at least 10² to 10³ times longer than the shortest-lived states of the set.

A metastable state is then long-lived (locally stable with respect to configurations of 'neighbouring' energies) but not eternal (as the global minimum is). Being excited – of an energy above the ground state – it will eventually decay to a more stable state, releasing energy. Indeed, above absolute zero, all states of a system have a non-zero probability to decay; that is, to spontaneously fall into another state (usually lower in energy). One mechanism for this to happen is through tunnelling.

Nuclear physics

Some energetic states of an atomic nucleus (having distinct spatial mass, charge, spin, and isospin distributions) are much longer-lived than others (nuclear isomers of the same isotope), e.g. technetium-99m. The isotope tantalum-180m, although a metastable excited state, is long-lived enough that it has never been observed to decay, with a half-life calculated to be at least 4.5×10¹⁶ years, over 3 million times the current age of the universe.

Atomic and molecular physics

Some atomic energy levels are metastable. Rydberg atoms are an example of metastable excited atomic states. Transitions from metastable excited levels are typically those forbidden by electric dipole selection rules. This means that any transitions from this level are relatively unlikely to occur. In a sense, an electron that happens to find itself in a metastable configuration is trapped there. Since transitions from a metastable state are not impossible (merely less likely), the electron will eventually decay to a less energetic state, typically by an electric quadrupole transition, or often by non-radiative de-excitation (e.g., collisional de-excitation).

This slow-decay property of a metastable state is apparent in phosphorescence, the kind of photoluminescence seen in glow-in-the-dark toys that can be charged by first being exposed to bright light. Whereas spontaneous emission in atoms has a typical timescale on the order of 10⁻⁸ seconds, the decay of metastable states can typically take milliseconds to minutes, and so light emitted in phosphorescence is usually both weak and long-lasting.
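The contrast in timescales can be made concrete with simple exponential decay, N(t) = N₀·exp(−t/τ). The lifetimes below are illustrative order-of-magnitude choices, not measured values for any particular material:

```python
import math

tau_allowed = 1e-8   # typical allowed (electric-dipole) transition, seconds
tau_meta = 1.0       # illustrative metastable (phosphorescent) lifetime, seconds
t = 1e-6             # one microsecond after the exciting light is switched off

frac_allowed = math.exp(-t / tau_allowed)  # fraction of excited atoms remaining
frac_meta = math.exp(-t / tau_meta)

print(frac_allowed)  # vanishingly small: ordinary fluorescence is already gone
print(frac_meta)     # essentially 1: the phosphorescent population still glows
```

After a mere microsecond the allowed-transition population has decayed through a hundred lifetimes, while the metastable population has barely begun to decay, which is why the glow persists for minutes.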

Chemistry

In chemical systems, a system of atoms or molecules involving a change in chemical bond can be in a metastable state, which lasts for a relatively long period of time. Molecular vibrations and thermal motion make chemical species at the energetic equivalent of the top of a round hill very short-lived. Metastable states that persist for many seconds (or years) are found in energetic valleys which are not the lowest possible valley (point 1 in illustration). A common type of metastability is isomerism.

The stability or metastability of a given chemical system depends on its environment, particularly temperature and pressure. The difference between producing a stable vs. metastable entity can have important consequences. For instance, having the wrong crystal polymorph can result in failure of a drug while in storage between manufacture and administration. The map of which state is the most stable as a function of pressure, temperature and/or composition is known as a phase diagram. In regions where a particular state is not the most stable, it may still be metastable. Reaction intermediates are relatively short-lived, and are usually thermodynamically unstable rather than metastable. The IUPAC recommends referring to these as transient rather than metastable.

Metastability is also used to refer to specific situations in mass spectrometry and spectrochemistry.

Electronic circuits

A digital circuit is expected to settle into one of a small number of stable digital states within a certain amount of time after an input change. However, if an input changes at the wrong moment, a digital circuit that employs feedback (even one as simple as a flip-flop) can enter a metastable state and take an unbounded length of time to finally settle into a fully stable digital state.
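One way to see why the settling time is unbounded is to model the latch's decision node as a toy bistable system, dx/dt = x − x³ (a deliberate simplification, not a real circuit model): the stable states sit at x = ±1, the metastable point at x = 0, and the time to commit grows without bound as the sampled voltage lands closer to 0.

```python
def settle_time(x0, dt=1e-3, tol=0.99):
    """Euler-integrate dx/dt = x - x**3 from x0 until |x| reaches tol,
    i.e. until the latch has committed to one of its stable states."""
    x, t = x0, 0.0
    while abs(x) < tol:
        x += (x - x ** 3) * dt
        t += dt
    return t

# Each time the initial offset shrinks by a factor of 1000, roughly
# ln(1000) ~ 6.9 additional time units are needed: the resolution time
# diverges logarithmically as the input approaches the decision threshold.
times = [settle_time(x0) for x0 in (1e-1, 1e-4, 1e-8)]
print([round(t, 1) for t in times])
```

Because an asynchronous input can land arbitrarily close to the threshold, no fixed waiting time guarantees resolution; synchronizer chains only make failure exponentially unlikely, not impossible.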

Computational neuroscience

Metastability in the brain is a phenomenon studied in computational neuroscience to elucidate how the human brain recognizes patterns. Here, the term metastability is used rather loosely. There is no lower-energy state, but there are semi-transient signals in the brain that persist for a while and differ from the usual equilibrium state.

Conflict resolution

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Conflict_resolution

Conflict resolution is conceptualized as the methods and processes involved in facilitating the peaceful ending of conflict and retribution. Committed group members attempt to resolve group conflicts by actively communicating information about their conflicting motives or ideologies to the rest of the group (e.g., intentions, reasons for holding certain beliefs) and by engaging in collective negotiation. Dimensions of resolution typically parallel the dimensions of conflict in the way the conflict is processed. Cognitive resolution is the way disputants understand and view the conflict, with beliefs, perspectives, understandings and attitudes. Emotional resolution is in the way disputants feel about a conflict, the emotional energy. Behavioral resolution is reflective of how the disputants act, their behavior. Ultimately, a wide range of methods and procedures for addressing conflict exist, including negotiation, mediation, mediation-arbitration, diplomacy, and creative peacebuilding.

Dispute resolution is conflict resolution limited to law, such as arbitration and litigation processes. The concept of conflict resolution can be thought to encompass the use of nonviolent resistance measures by conflicted parties in an attempt to promote effective resolution.

Theories and models

There is a plethora of theories and models linked to the concept of conflict resolution. A few of them are described below.

Conflict resolution curve

There are many examples of conflict resolution in history, and there has been a debate about approaches to conflict resolution: whether it should be forced or peaceful. Conflict resolution by peaceful means is generally perceived to be the better option. The conflict resolution curve is derived from an analytical model that offers a peaceful solution by motivating the conflicting entities. Forced resolution of conflict might provoke another conflict in the future.

The conflict resolution curve (CRC) separates conflict styles into two separate domains: the domain of competing entities and the domain of accommodating entities. There is a sort of agreement between targets and aggressors on this curve: their judgements of each other's badness relative to goodness are analogous on the CRC. The arrival of conflicting entities at negotiable points on the CRC is therefore important before peacebuilding. The CRC does not exist (i.e., is singular) in reality if the aggression of the aggressor is certain; under such circumstances the conflict might end in mutual destruction.

The curve explains why nonviolent struggles ultimately toppled repressive regimes and sometimes forced leaders to change the nature of governance. Also, this methodology has been applied to capture conflict styles on the Korean Peninsula and dynamics of negotiation processes.

Dual concern model

The dual concern model of conflict resolution is a conceptual perspective that assumes individuals' preferred method of dealing with conflict is based on two underlying themes or dimensions: concern for self (assertiveness) and concern for others (empathy).

According to the model, group members balance their concern for satisfying personal needs and interests with their concern for satisfying the needs and interests of others in different ways. The intersection of these two dimensions ultimately leads individuals towards exhibiting different styles of conflict resolution. The dual concern model identifies five conflict resolution styles or strategies that individuals may use depending on their dispositions toward pro-self or pro-social goals.

Avoidance conflict style

Characterized by joking, changing or avoiding the topic, or even denying that a problem exists, the conflict avoidance style is used when an individual has withdrawn in dealing with the other party, when one is uncomfortable with conflict, or due to cultural contexts. During conflict, these avoiders adopt a "wait and see" attitude, often allowing conflict to phase out on its own without any personal involvement. By neglecting to address high-conflict situations, avoiders risk allowing problems to fester or spin out of control.

Accommodating conflict style

In contrast, the yielding, "accommodating", smoothing, or suppression conflict styles are characterized by a high level of concern for others and a low level of concern for oneself. This passive pro-social approach emerges when individuals derive personal satisfaction from meeting the needs of others and have a general concern for maintaining stable, positive social relationships. When faced with conflict, individuals with an accommodating conflict style tend to give in to others' demands out of respect for the social relationship. In this spirit of yielding, individuals defer to others' input instead of advancing solutions of their own.

Competitive conflict style

The competitive, "fighting" or forcing conflict style maximizes individual assertiveness (i.e., concern for self) and minimizes empathy (i.e., concern for others). Groups consisting of competitive members generally enjoy seeking domination over others, and typically see conflict as a "win or lose" predicament. Fighters tend to force others to accept their personal views by employing competitive power tactics (arguments, insults, accusations or even violence) that foster intimidation.

Conciliation conflict style

The conciliation, "compromising", bargaining or negotiation conflict style is typical of individuals who possess an intermediate level of concern for both personal and others' outcomes. Compromisers value fairness and, in doing so, anticipate mutual give-and-take interactions. By accepting some demands put forth by others, compromisers believe this agreeableness will encourage others to meet them halfway, thus promoting conflict resolution. This conflict style can be considered an extension of both "yielding" and "cooperative" strategies.

Cooperation conflict style

Characterized by an active concern for both pro-social and pro-self behavior, the cooperation, integration, confrontation or problem-solving conflict style is typically used when an individual has elevated interests in their own outcomes as well as in the outcomes of others. During conflict, cooperators collaborate with others in an effort to find an amicable solution that satisfies all parties involved in the conflict. Individuals using this type of conflict style tend to be both highly assertive and highly empathetic. By seeing conflict as a creative opportunity, collaborators willingly invest time and resources into finding a "win-win" solution. According to the literature on conflict resolution, a cooperative conflict resolution style is recommended above all others. This resolution may be achieved by lowering the aggressor's guard while raising the ego.
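The dual concern model's mapping from the two dimensions to the five styles above can be illustrated with a short sketch. The numeric scale and the cut-off values below are illustrative assumptions for the example, not part of the model itself:

```python
# A minimal sketch of the dual concern model: mapping the two dimensions
# (concern for self, concern for others) to the five conflict styles.
# Scores and thresholds are assumed for illustration only.

def dual_concern_style(concern_self: float, concern_others: float) -> str:
    """Classify a conflict style from two scores in the range 0.0-1.0."""
    mid = 0.5  # assumed cut-off between "low" and "high" concern
    # Conciliation sits at an intermediate level on both dimensions.
    if abs(concern_self - mid) < 0.2 and abs(concern_others - mid) < 0.2:
        return "conciliation"
    if concern_self >= mid and concern_others >= mid:
        return "cooperation"     # high assertiveness, high empathy
    if concern_self >= mid:
        return "competitive"     # pro-self, low empathy
    if concern_others >= mid:
        return "accommodating"   # pro-social, low assertiveness
    return "avoidance"           # low concern on both dimensions
```

For example, `dual_concern_style(0.9, 0.1)` returns `"competitive"`, while `dual_concern_style(0.5, 0.5)` falls in the intermediate band and returns `"conciliation"`.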

Relational dialectics theory

Relational dialectics theory (RDT), introduced by Leslie Baxter and Barbara Montgomery (1988), explores the ways in which people in relationships use verbal communication to manage conflict and contradiction, as opposed to psychological approaches. The theory focuses on maintaining a relationship even through the contradictions that arise, and on how relationships are managed through coordinated talk. RDT assumes that relationships are composed of opposing tendencies, are constantly changing, and that tensions arise from intimate relationships.

The main concepts of RDT are:

  • Contradictions – The contrary has the characteristics of its opposite; people can seek to be in a relationship but still need their space.
  • Totality – Totality comes when the opposites unite; the relationship is balanced with contradictions, and only then does it reach totality.
  • Process – Comprehended through various social processes, which simultaneously continue within a relationship in a recurring manner.
  • Praxis – The relationship progresses with experience as both people interact and communicate effectively to meet their needs. Praxis is the concept of practicability in making decisions in a relationship despite opposing wants and needs.

Strategy of conflict

Strategy of conflict, by Thomas Schelling, is the study of negotiation during conflict and strategic behavior that results in the development of "conflict behavior". This idea is based largely on game theory. In "A Reorientation of Game Theory", Schelling discusses ways in which one can redirect the focus of a conflict in order to gain advantage over an opponent.

  • Conflict is a contest. Rational behavior, in this contest, is a matter of judgment and perception.
  • Strategy makes predictions using "rational behavior – behavior motivated by a serious calculation of advantages, a calculation that in turn is based on an explicit and internally consistent value system".
  • Cooperation is always temporary; interests will change.
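Schelling's game-theoretic framing of conflict can be illustrated with a minimal sketch. The two-move game and its payoff numbers below are hypothetical, chosen only to show how "rational behavior" can be analyzed as a search for outcomes neither side wants to abandon unilaterally:

```python
# Illustrative sketch of a game-theoretic view of conflict: two parties
# each choose "yield" or "fight". Payoffs are hypothetical numbers shaped
# like a chicken-style game, where mutual fighting is the worst outcome.

from itertools import product

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("yield", "yield"): (2, 2),   # compromise
    ("yield", "fight"): (1, 3),   # row concedes
    ("fight", "yield"): (3, 1),   # column concedes
    ("fight", "fight"): (0, 0),   # mutual destruction
}

def is_nash_equilibrium(row_move, col_move):
    """Neither player can gain by unilaterally switching moves."""
    row_pay, col_pay = payoffs[(row_move, col_move)]
    for alt in ("yield", "fight"):
        if payoffs[(alt, col_move)][0] > row_pay:
            return False
        if payoffs[(row_move, alt)][1] > col_pay:
            return False
    return True

equilibria = [cell for cell in product(("yield", "fight"), repeat=2)
              if is_nash_equilibrium(*cell)]
print(equilibria)  # the two asymmetric outcomes where one side concedes
```

In this toy game the stable outcomes are the two asymmetric ones, which echoes Schelling's point that redirecting the opponent's expectations about who will concede is itself a strategic move.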

Peace and conflict studies

Within peace and conflict studies a definition of conflict resolution is presented in Peter Wallensteen's book Understanding Conflict Resolution:

Conflict resolution is a social situation where the armed conflicting parties in a (voluntary) agreement resolve to live peacefully with – and/or dissolve – their basic incompatibilities and henceforth cease to use arms against one another.

The "conflicting parties" concerned in this definition are formally or informally organized groups engaged in intrastate or interstate conflict. 'Basic incompatibility' refers to a severe disagreement between at least two sides where their demands cannot be met by the same resources at the same time. The cost of conflict can be balanced by the price of unjust peace.

Conflict resolution mechanisms

One theory discussed within the field of peace and conflict studies is conflict resolution mechanisms: independent procedures in which the conflicting parties can have confidence. They can be formal or informal arrangements with the intention of resolving the conflict. In Understanding Conflict Resolution Wallensteen draws from the works of Lewis A. Coser, Johan Galtung and Thomas Schelling, and presents seven distinct theoretical mechanisms for conflict resolutions:

  1. A shift in priorities for one of the conflicting parties. While it is rare that a party completely changes its basic positions, it can display a shift in what it gives highest priority. In such an instance, new possibilities for conflict resolution may arise.
  2. The contested resource is divided. In essence, this means both conflicting parties display some extent of shift in priorities which then opens up for some form of "meeting the other side halfway" agreement.
  3. Horse-trading between the conflicting parties. This means that one side gets all of its demands met on one issue, while the other side gets all of its demands met on another issue.
  4. The parties decide to share control, and rule together over the contested resource. It could be permanent, or a temporary arrangement for a transition period that, when over, has led to a transcendence of the conflict.
  5. The parties agree to leave control to someone else. In this mechanism the primary parties agree, or accept, that a third party takes control over the contested resource.
  6. The parties resort to conflict resolution mechanisms, notably arbitration or other legal procedures. This means finding a procedure for resolving the conflict through some of the previously mentioned five ways, but with the added quality that it is done through a process outside of the parties' immediate control.
  7. Some issues can be left for later. The argument for this is that political conditions and popular attitudes can change, and some issues can gain from being delayed, as their significance may pale with time.

Intrastate and interstate

Moshe Dayan and Abdullah el Tell reach a ceasefire agreement during the 1948 Arab–Israeli War in Jerusalem on 30 November 1948

According to the definition of the conflict database Uppsala Conflict Data Program, war may occur between parties who contest an incompatibility. The nature of an incompatibility can be territorial or governmental, but a warring party must be a "government of a state or any opposition organization or alliance of organizations that uses armed force to promote its position in the incompatibility in an intrastate or an interstate armed conflict". Wars can conclude with a peace agreement, which is a "formal agreement... which addresses the disputed incompatibility, either by settling all or part of it, or by clearly outlining a process for how [...] to regulate the incompatibility." A ceasefire is another form of agreement made by warring parties; unlike a peace agreement, it only "regulates the conflict behaviour of warring parties", and does not resolve the issue that brought the parties to war in the first place.

Peacekeeping measures may be deployed to avoid violence in solving such incompatibilities. Beginning in the last century, political theorists have been developing the theory of a global peace system that relies upon broad social and political measures to avoid war in the interest of achieving world peace. The Blue Peace approach developed by Strategic Foresight Group facilitates cooperation between countries over shared water resources, thus reducing the risk of war and enabling sustainable development.

Conflict resolution is an expanding field of professional practice, both in the U.S. and around the world. The escalating costs of conflict have increased the use of third parties who may serve as conflict specialists to resolve conflicts. In fact, relief and development organizations have added peacebuilding specialists to their teams. Many major international non-governmental organizations have seen a growing need to hire practitioners trained in conflict analysis and resolution. Furthermore, this expansion has created a need for conflict resolution practitioners to work in a variety of settings, such as businesses, court systems, government agencies, nonprofit organizations, and educational institutions throughout the world. Democracy has a positive influence on conflict resolution.

In the workplace

According to the Cambridge dictionary, a basic definition of conflict is: "an active disagreement between people with opposing opinions or principles." Conflicts such as disagreements may occur at any moment, as a normal part of human interactions. The type of conflict and its severity may vary both in content and degree of seriousness; however, it is impossible to completely avoid it. In fact, conflict in itself is not necessarily a negative thing. When handled constructively, it can help people stand up for themselves and others, and learn how to work together to achieve a mutually satisfactory solution. But if conflict is handled poorly, it can cause anger, hurt, divisiveness, and more serious problems.

Since conflict cannot be completely avoided, the chances of experiencing it are especially high in complex social contexts in which significant diversities are at stake. For this reason in particular, conflict resolution becomes fundamental in ethnically diverse and multicultural work environments, in which not only "regular" work disagreements may occur, but in which different languages, worldviews, lifestyles, and ultimately value differences may also diverge.

Conflict resolution is the process by which two or more parties engaged in a disagreement, dispute, or debate reach an agreement resolving it. It involves a series of stages, actors, models, and approaches that may depend on the kind of confrontation at stake and the surrounding social and cultural context. However, some general actions and personal skills are useful when facing any conflict, independently of its nature: an open-minded orientation able to analyze the different points of view and perspectives involved, as well as the ability to empathize, listen carefully, and communicate clearly with all the parties involved. Sources of conflict are numerous, depending on the particular situation and the specific context, but some of the most common include:

  • Personal differences such as values, ethics, personality, age, education, gender, socioeconomic status, cultural background, temperament, health, religion, or political beliefs. Almost any social category that serves to differentiate people may become an object of conflict when it diverges negatively between people who do not share it.
  • Clashes of ideas, choices, or actions. Conflict occurs when people do not share common goals, or common ways to reach a particular objective (e.g., different work styles).
  • Competition and exclusion. Conflict also occurs when there is direct or indirect competition between people, or when someone feels excluded from a particular activity or by some people within the company.
  • Lack of communication or poor communication, which can cause a particular situation to be misunderstood and create potentially explosive interactions.

Fundamental strategies

Although different conflicts may require different ways to handle them, this is a list of fundamental strategies that may be implemented when handling a conflictive situation:

  1. Reaching agreement on rules and procedures: Establishing ground rules may include the following actions: a. Determining a site for the meeting; b. Setting a formal agenda; c. Determining who attends; d. Setting time limits; e. Setting procedural rules; f. Following specific "do(s) and don't(s)".
  2. Reducing tension and synchronizing the de-escalation of hostility: In highly emotional situations when people feel angry, upset, frustrated, it is important to implement the following actions: a. Separating the involved parties; b. Managing tensions – jokes as an instrument to give the opportunity for catharsis; c. Acknowledging others' feelings – actively listening to others; d. De-escalation by public statements by parties – about the concession, the commitments of the parties.
  3. Improving the accuracy of communication, particularly improving each party's understanding of the other's perception: a. Accurate understanding of the other's position; b. Role reversal, trying to adopt the other's position (empathetic attitudes); c. Imaging – describing how they see themselves, how the other parties appears to them, how they think the other parties will describe them and how the others see themselves.
  4. Controlling the number and size of issues in the discussion: a. Fractionate the negotiation – a method that divides a large conflict into smaller parts: 1. Reduce the number of parties on each side; 2. Control the number of substantive issues; 3. Search for different ways to divide big issues.
  5. Establishing common ground where parties can find a basis for agreement: a. Establishing common goals or superordinate goals; b. Establishing common enemies; c. Identifying common expectations; d. Managing time constraints and deadlines; e. Reframing the parties' view of each other; f. Build trust through the negotiation process.
  6. Enhancing the desirability of the options and alternatives that each party presents to the other: a. Giving the other party an acceptable proposal; b. Asking for a different decision; c. Sweeten the other rather than intensifying the threat; d. Elaborating objective or legitimate criteria to evaluate all possible solutions.

Approaches

Conflict is a common phenomenon in the workplace; as mentioned before, it can occur on the most varied grounds of diversity and under very different circumstances. However, it is usually a matter of interests, needs, priorities, goals, or values interfering with each other, and often a result of different perceptions more than actual differences. Conflicts may involve team members, departments, projects, organization and client, boss and subordinate, or organizational needs vs. personal needs, and they are usually immersed in complex relations of power that need to be understood and interpreted in order to define the most tailored way to manage the conflict. There are, nevertheless, some main approaches that may be applied when trying to resolve a conflict, which may lead to very different outcomes to be valued according to the particular situation and the available negotiation resources:

Forcing

When one of the conflict's parties firmly pursues his or her own concerns despite the resistance of the other(s). This may involve pushing one viewpoint at the expense of another or maintaining firm resistance to the counterpart's actions; it is also commonly known as "competing". Forcing may be appropriate when all other, less forceful methods do not work or are ineffective, or when someone needs to stand up for his or her own rights (or the represented group's or organization's rights) and resist aggression and pressure. It may also be considered a suitable option when a quick resolution is required and using force is justified (e.g., in a life-threatening situation, to stop an aggression), and as a very last resort to resolve a long-lasting conflict.

However, forcing may also negatively affect the relationship with the opponent in the long run; it may intensify the conflict if the opponent decides to react in the same way (even if that was not the original intention); it does not allow one to take productive advantage of the other side's position; and, last but not least, taking this approach may require a lot of energy and be exhausting for some individuals.

Win-win / collaborating

Collaboration involves an attempt to work with the other party involved in the conflict to find a win-win solution to the problem at hand, or at least to find a solution that most satisfies the concerns of both parties. The win-win approach sees conflict resolution as an opportunity to reach a mutually beneficial result; it includes identifying the underlying concerns of the opponents and finding an alternative that meets each party's concerns. From that point of view, it is the most desirable outcome when trying to solve a problem for all partners.

Collaborating may be the best solution when the consensus and commitment of other parties is important; when the conflict occurs in a collaborative, trustworthy environment; and when it is required to address the interests of multiple stakeholders. Above all, it is the most desirable outcome when a long-term relationship is important, so that people can continue to collaborate productively; collaborating is, in a few words, sharing responsibilities and mutual commitment. For the parties involved, the outcome of the conflict resolution is less stressful; however, the process of finding and establishing a win-win solution may be longer and considerably more involving.

It may require more effort and more time than some other methods; for the same reason, collaborating may not be practical when timing is crucial and a quick solution or fast response is required.

Compromising

Different from the win-win solution, in this outcome the conflict parties find a mutually acceptable solution which partially satisfies both parties. This can occur as both parties converse with one another and seek to understand the other's point of view. Compromising may be an optimal solution when the goals are moderately important and not worth the use of more assertive or more involving approaches. It may be useful when reaching temporary settlement on complex issues and as a first step when the involved parties do not know each other well or have not yet developed a high level of mutual trust. Compromising may be a faster way to solve things when time is a factor. The level of tensions can be lower as well, but the result of the conflict may be also less satisfactory.

If this method is not well managed, and the time factor becomes the most important one, the situation may result in both parties being unsatisfied with the outcome (i.e., a lose-lose situation). Moreover, it does not contribute to building trust in the long run, and it may require closer monitoring of the kind of partially satisfactory compromises reached.

Withdrawing

This technique consists of not addressing the conflict, postponing it, or simply withdrawing; for that reason, it is also known as avoiding. This outcome is suitable when the issue is trivial and not worth the effort, or when more important issues are pressing and one or both parties do not have time to deal with it. Withdrawing may also be a strategic response when it is not the right time or place to confront the issue, when more time is needed to think and collect information before acting, or when not responding may still bring some gains for at least some of the parties involved. Moreover, withdrawing may also be employed when one knows that the other party is totally engaged with hostility, and one does not want to (or cannot) invest further unreasonable effort.

Withdrawing may give the possibility of seeing things from a different perspective while gaining time and collecting further information; in particular, it is a low-stress approach when the conflict is a short one. However, not acting may be interpreted as agreement, and therefore it may lead to weakening or losing a previously gained position with one or more of the parties involved. Furthermore, using withdrawing as a strategy may require more time, skills, and experience, together with other actions that may need to be implemented.

Smoothing

Smoothing is accommodating the concerns of others first of all, rather than one's own concerns. This kind of strategy may be applied when the issue of the conflict is much more important to the counterpart, whereas for oneself it is not particularly relevant. It may also be applied when one accepts that one is wrong and, furthermore, there is no better option than continuing an unworthy competing-pushing situation. Just like withdrawing, smoothing may be an option for finding at least a temporary solution or obtaining more time and information; however, it is not an option when priority interests are at stake.

There is a high risk of being abused when choosing the smoothing option. It is therefore important to keep the right balance and not to give up one's own interests and necessities. Otherwise, confidence in one's ability, particularly against an aggressive opponent, may be seriously damaged, together with one's credibility with the other parties involved. Needless to say, in these cases a transition to a win-win solution in the future becomes particularly difficult.

Between organizations

Relationships between organizations, such as strategic alliances, buyer-supplier partnerships, organizational networks, or joint ventures are prone to conflict. Conflict resolution in inter-organizational relationships has attracted the attention of business and management scholars. They have related the forms of conflict (e.g., integrity-based vs. competence-based conflict) to the mode of conflict resolution and the negotiation and repair approaches used by organizations. They have also observed the role of important moderating factors such as the type of contractual arrangement, the level of trust between organizations, or the type of power asymmetry.

Other forms

Conflict management

Conflict management refers to the long-term management of intractable conflicts. It is the label for the variety of ways by which people handle grievances—standing up for what they consider to be right and against what they consider to be wrong. Those ways include such diverse phenomena as gossip, ridicule, lynching, terrorism, warfare, feuding, genocide, law, mediation, and avoidance. Which forms of conflict management will be used in any given situation can be somewhat predicted and explained by the social structure—or social geometry—of the case.

Conflict management is often considered to be distinct from conflict resolution. In order for actual conflict to occur, there should be an expression of exclusive patterns which explain why and how the conflict was expressed the way it was. Conflict is often connected to a previous issue. Resolution refers to resolving a dispute to the approval of one or both parties, whereas management is concerned with an ongoing process that may never have a resolution. Neither is considered the same as conflict transformation, which seeks to reframe the positions of the conflict parties.

Counseling

When personal conflict leads to frustration and loss of efficiency, counseling may prove helpful. Although few organizations can afford to have professional counselors on staff, given some training, managers may be able to perform this function. Nondirective counseling, or "listening with understanding", is little more than being a good listener—something often considered to be important in a manager.

Sometimes simply being able to express one's feelings to a concerned and understanding listener is enough to relieve frustration and make it possible for an individual to advance to a problem-solving frame of mind. The nondirective approach is one effective way for managers to deal with frustrated subordinates and coworkers.

There are other, more direct and more diagnostic, methods that could be used in appropriate circumstances. However, the great strength of the nondirective approach lies in its simplicity, its effectiveness, and that it deliberately avoids the manager-counselor's diagnosing and interpreting emotional problems, which would call for special psychological training. Listening to staff with sympathy and understanding is unlikely to escalate the problem, and is a widely used approach for helping people cope with problems that interfere with their effectiveness in the workplace.

Culture-based

The Reconciliation of Jacob and Esau (illustration from a Bible card published 1907 by the Providence Lithograph Company)

Conflict resolution as both a professional practice and academic field is highly sensitive to cultural practices. In Western cultural contexts, such as Canada and the United States, successful conflict resolution usually involves fostering communication among disputants, problem solving, and drafting agreements that meet underlying needs. In these situations, conflict resolvers often talk about finding a mutually satisfying ("win-win") solution for everyone involved.

In many non-Western cultural contexts, such as Afghanistan, Vietnam, and China, it is also important to find "win-win" solutions; however, the routes taken to find them may be very different. In these contexts, direct communication between disputants that explicitly addresses the issues at stake in the conflict can be perceived as very rude, making the conflict worse and delaying resolution. It can make sense to involve religious, tribal, or community leaders; communicate difficult truths through a third party; or make suggestions through stories. Intercultural conflicts are often the most difficult to resolve because the expectations of the disputants can be very different, and there is much occasion for misunderstanding.

In animals

Conflict resolution has also been studied in non-humans, including dogs, cats, monkeys, snakes, elephants, and primates. Aggression is more common among relatives and within a group than between groups. Instead of creating distance between the individuals, primates tend to be more intimate in the period after an aggressive incident. These intimacies consist of grooming and various forms of body contact. Stress responses, including increased heart rates, usually decrease after these reconciliatory signals. Different types of primates, as well as many other species who live in groups, display different types of conciliatory behavior. Resolving conflicts that threaten the interaction between individuals in a group is necessary for survival, giving it a strong evolutionary value. A further focus of this is among species that have stable social units, individual relationships, and the potential for intragroup aggression that may disrupt beneficial relationships. The role of these reunions in negotiating relationships is examined along with the susceptibility of these relationships to partner value asymmetries and biological market effects. These findings contradict previous existing theories about the general function of aggression, i.e. creating space between individuals (first proposed by Konrad Lorenz), which seems to be more the case in conflicts between groups than it is within groups.

In addition to research in primates, biologists are beginning to explore reconciliation in other animals. Until recently, the literature dealing with reconciliation in non-primates has consisted of anecdotal observations and very little quantitative data. Although peaceful post-conflict behavior had been documented going back to the 1960s, it was not until 1993 that Rowell made the first explicit mention of reconciliation in feral sheep. Reconciliation has since been documented in spotted hyenas, lions, bottlenose dolphins, dwarf mongoose, domestic goats, domestic dogs, and, recently, in red-necked wallabies.

Education

Universities worldwide offer programs of study pertaining to conflict research, analysis, and practice. Conrad Grebel University College at the University of Waterloo has the oldest-running peace and conflict studies (PACS) program in Canada. PACS can be taken as an Honors, 4-year general, or 3-year general major, joint major, minor, and diploma. Grebel also offers an interdisciplinary Master of Peace and Conflict Studies professional program. The Cornell University ILR School houses the Scheinman Institute on Conflict Resolution, which offers undergraduate, graduate, and professional training on conflict resolution. It also offers dispute resolution concentrations for its MILR, JD/MILR, MPS, and MS/PhD graduate degree programs. At the graduate level, Eastern Mennonite University's Center for Justice and Peacebuilding offers a Master of Arts in Conflict Transformation, a dual Master of Divinity/MA in Conflict Transformation degree, and several graduate certificates. EMU also offers an accelerated 5-year BA in Peacebuilding and Development/MA in Conflict Transformation. Additional graduate programs are offered at Columbia University, Georgetown University, Johns Hopkins University, Creighton University, the University of North Carolina at Greensboro, and Trinity College Dublin. George Mason University's Jimmy and Rosalynn Carter School for Peace and Conflict Resolution offers BA, BS, MS, and PhD degrees in Conflict Analysis and Resolution, as well as an undergraduate minor, graduate certificates, and joint degree programs. Nova Southeastern University also offers a PhD in Conflict Analysis & Resolution, in both online and on-campus formats.

Conflict resolution is a growing area of interest in UK pedagogy, with teachers and students both encouraged to learn about mechanisms that lead to aggressive action and those that lead to peaceful resolution. The University of Law, one of the oldest common law training institutions in the world, offers a legal-focused master's degree in conflict resolution as an LL.M. (Conflict resolution).

Tel Aviv University offers two graduate degree programs in the field of conflict resolution, including the English-language International Program in Conflict Resolution and Mediation, allowing students to learn in a geographic region which is the subject of much research on international conflict resolution.

The Nelson Mandela Center for Peace & Conflict Resolution at Jamia Millia Islamia University, New Delhi, is one of the first centers for peace and conflict resolution to be established at an Indian university. It offers a two-year full-time MA course in Conflict Analysis and Peace-Building, as well as a PhD in Conflict and Peace Studies.

In Sweden Linnaeus University, Lund University and Uppsala University offer programs on bachelor, master and/or doctoral level in Peace and Conflict Studies. Uppsala University also hosts its own Department of Peace and Conflict Research, among other things occupied with running the conflict database UCDP (Uppsala Conflict Data Program).
