
Saturday, September 30, 2023

Electrocardiography

From Wikipedia, the free encyclopedia
 
ECG of a heart in normal sinus rhythm
ICD-10-PCS: R94.31
ICD-9-CM: 89.52
MeSH: D004562
MedlinePlus: 003868
Real-time monitoring of the heart in an intensive care unit in a German hospital (2015); the monitoring screen above the patient displays an electrocardiogram and values of various cardiac parameters such as heart rate and blood pressure

Electrocardiography is the process of producing an electrocardiogram (ECG or EKG), a recording of the heart's electrical activity through repeated cardiac cycles. It is an electrogram of the heart which is a graph of voltage versus time of the electrical activity of the heart using electrodes placed on the skin. These electrodes detect the small electrical changes that are a consequence of cardiac muscle depolarization followed by repolarization during each cardiac cycle (heartbeat). Changes in the normal ECG pattern occur in numerous cardiac abnormalities, including cardiac rhythm disturbances (such as atrial fibrillation and ventricular tachycardia), inadequate coronary artery blood flow (such as myocardial ischemia and myocardial infarction), and electrolyte disturbances (such as hypokalemia and hyperkalemia).

Traditionally, "ECG" usually means a 12-lead ECG taken while lying down, as discussed below. However, other devices can also record the heart's electrical activity, such as the Holter monitor, and some models of smartwatch are capable of recording an ECG. ECG signals can also be recorded in other contexts with other devices.

In a conventional 12-lead ECG, ten electrodes are placed on the patient's limbs and on the surface of the chest. The overall magnitude of the heart's electrical potential is then measured from twelve different angles ("leads") and is recorded over a period of time (usually ten seconds). In this way, the overall magnitude and direction of the heart's electrical depolarization is captured at each moment throughout the cardiac cycle.

There are three main components to an ECG: the P wave, which represents depolarization of the atria; the QRS complex, which represents depolarization of the ventricles; and the T wave, which represents repolarization of the ventricles.

During each heartbeat, a healthy heart has an orderly progression of depolarization that starts with pacemaker cells in the sinoatrial node, spreads throughout the atrium, and passes through the atrioventricular node down into the bundle of His and into the Purkinje fibers, spreading down and to the left throughout the ventricles. This orderly pattern of depolarization gives rise to the characteristic ECG tracing. To the trained clinician, an ECG conveys a large amount of information about the structure of the heart and the function of its electrical conduction system. Among other things, an ECG can be used to measure the rate and rhythm of heartbeats, the size and position of the heart chambers, the presence of any damage to the heart's muscle cells or conduction system, the effects of heart drugs, and the function of implanted pacemakers.

Medical uses

Normal 12-lead ECG
A 12-lead ECG of a 26-year-old male with an incomplete right bundle branch block (RBBB)

The overall goal of performing an ECG is to obtain information about the electrical functioning of the heart. Medical uses for this information are varied and often need to be combined with knowledge of the structure of the heart and physical examination signs to be interpreted. Indications for performing an ECG include suspected cardiac rhythm disturbances, myocardial ischemia or infarction, and electrolyte disturbances.

ECGs can be recorded as short intermittent tracings or as continuous ECG monitoring. Continuous monitoring is used for critically ill patients, patients undergoing general anesthesia, and patients with an infrequently occurring cardiac arrhythmia that would be unlikely to be captured on a conventional ten-second ECG. Continuous monitoring can be conducted using Holter monitors, internal and external defibrillators and pacemakers, and/or biotelemetry.

Screening

A patient undergoing an ECG

For adults, evidence does not support the use of ECGs among those without symptoms or at low risk of cardiovascular disease as an effort for prevention. This is because an ECG may falsely indicate the existence of a problem, leading to misdiagnosis, the recommendation of invasive procedures, and overtreatment. However, persons employed in certain critical occupations, such as aircraft pilots, may be required to have an ECG as part of their routine health evaluations. Hypertrophic cardiomyopathy screening may also be considered in adolescents as part of a sports physical out of concern for sudden cardiac death.

Electrocardiograph machines

An EKG electrode

Electrocardiograms are recorded by machines that consist of a set of electrodes connected to a central unit. Early ECG machines were constructed with analog electronics, where the signal drove a motor to print the signal onto paper. Today, electrocardiographs use analog-to-digital converters to convert the heart's electrical activity to a digital signal. Many ECG machines are now portable and commonly include a screen, keyboard, and printer on a small wheeled cart. Recent advances in electrocardiography include even smaller devices for inclusion in fitness trackers and smartwatches. These smaller devices often rely on only two electrodes to record a single lead (lead I). Portable twelve-lead devices powered by batteries are also available.

Recording an ECG is a safe and painless procedure. The machines are powered by mains power but they are designed with several safety features including an earthed (ground) lead. Other features include:

  • Defibrillation protection: any ECG used in healthcare may be attached to a person who requires defibrillation and the ECG needs to protect itself from this source of energy.
  • Electrostatic discharge is similar to defibrillation discharge and requires voltage protection up to 18,000 volts.
  • Additionally, circuitry called the right leg driver can be used to reduce common-mode interference (typically the 50 or 60 Hz mains power).
  • ECG voltages measured across the body are very small. This low voltage necessitates a low noise circuit, instrumentation amplifiers, and electromagnetic shielding.
  • Simultaneous lead recordings: earlier designs recorded each lead sequentially, but current models record multiple leads simultaneously.

Most modern ECG machines include automated interpretation algorithms. This analysis calculates features such as the PR interval, QT interval, corrected QT (QTc) interval, PR axis, QRS axis, rhythm and more. The results from these automated algorithms are considered "preliminary" until verified and/or modified by expert interpretation. Despite recent advances, computer misinterpretation remains a significant problem and can result in clinical mismanagement.

Cardiac monitors

Besides the standard electrocardiograph machine, there are other devices capable of recording ECG signals. Portable devices have existed since the Holter monitor was produced in 1962. Traditionally, these monitors have used electrodes with patches on the skin to record the ECG, but new devices can stick to the chest as a single patch without need for wires, developed by Zio (Zio XT), TZ Medical (Trident), Philips (BioTel) and BardyDx (CAM) among many others. Implantable devices such as the artificial cardiac pacemaker and implantable cardioverter-defibrillator are capable of measuring a "far field" signal between the leads in the heart and the implanted battery/generator that resembles an ECG signal (technically, the signal recorded in the heart is called an electrogram, which is interpreted differently). The Holter monitor's successor, the implantable loop recorder, performs the same function in an implantable device with batteries that last on the order of years.

Additionally, various Arduino kits with ECG sensor modules, as well as smartwatch devices such as the 4th-generation Apple Watch, Samsung Galaxy Watch 4, and newer devices, are capable of recording an ECG signal.

Electrodes and leads

Proper placement of the limb electrodes. The limb electrodes can be far down on the limbs or close to the hips/shoulders as long as they are placed symmetrically.
Placement of the precordial electrodes

Electrodes are the actual conductive pads attached to the body surface. Any pair of electrodes can measure the electrical potential difference between the two corresponding locations of attachment. Such a pair forms a lead. However, "leads" can also be formed between a physical electrode and a virtual electrode, known as Wilson's central terminal (WCT), whose potential is defined as the average potential measured by three limb electrodes that are attached to the right arm, the left arm, and the left foot, respectively.

Commonly, 10 electrodes attached to the body are used to form 12 ECG leads, with each lead measuring a specific electrical potential difference (as listed in the table below).

Leads are broken down into three types: limb; augmented limb; and precordial or chest. The 12-lead ECG has a total of three limb leads and three augmented limb leads arranged like spokes of a wheel in the coronal plane (vertical), and six precordial leads or chest leads that lie on the perpendicular transverse plane (horizontal).

In medical settings, the term leads is also sometimes used to refer to the electrodes themselves, although this is technically incorrect.

The 10 electrodes in a 12-lead ECG are listed below.

Electrode name Electrode placement
RA On the right arm, avoiding thick muscle.
LA In the same location where RA was placed, but on the left arm.
RL On the right leg, lower end of inner aspect of calf muscle. (Avoid bony prominences)
LL In the same location where RL was placed, but on the left leg.
V1 In the fourth intercostal space (between ribs 4 and 5) just to the right of the sternum (breastbone)
V2 In the fourth intercostal space (between ribs 4 and 5) just to the left of the sternum.
V3 Between leads V2 and V4.
V4 In the fifth intercostal space (between ribs 5 and 6) in the mid-clavicular line.
V5 Horizontally even with V4, in the left anterior axillary line.
V6 Horizontally even with V4 and V5 in the mid-axillary line.

Two types of electrodes in common use are a flat paper-thin sticker and a self-adhesive circular pad. The former are typically used in a single ECG recording while the latter are for continuous recordings as they stick longer. Each electrode consists of an electrically conductive electrolyte gel and a silver/silver chloride conductor. The gel typically contains potassium chloride – sometimes silver chloride as well – to permit electron conduction from the skin to the wire and to the electrocardiogram.

The common virtual electrode, known as Wilson's central terminal (VW), is produced by averaging the measurements from the electrodes RA, LA, and LL to give an average potential of the body:

VW = (RA + LA + LL) / 3

In a 12-lead ECG, all leads except the limb leads are assumed to be unipolar (aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6). The measurement of a voltage requires two contacts, so, electrically, each unipolar lead is measured between the common lead (negative) and the unipolar electrode (positive). This averaged common lead and the abstract unipolar lead concept make the system harder to understand, and matters are complicated further by loose usage of "lead" and "electrode". In fact, instead of being a constant reference, VW has a value that fluctuates throughout the cardiac cycle. It also does not truly represent the center-of-heart potential, due to the body tissues the signals travel through.
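As an illustration of the averaging described above, the following sketch computes Wilson's central terminal and a unipolar chest lead from instantaneous electrode potentials; the numeric values are made up purely for illustration:

```python
# Sketch: Wilson's central terminal (WCT) and a unipolar precordial lead
# from instantaneous electrode potentials in mV (illustrative numbers).

def wilson_central_terminal(ra, la, ll):
    """Average of the right-arm, left-arm and left-leg potentials."""
    return (ra + la + ll) / 3

def precordial_lead(v_electrode, ra, la, ll):
    """Unipolar lead: chest electrode (positive) minus WCT (negative)."""
    return v_electrode - wilson_central_terminal(ra, la, ll)

ra, la, ll = 0.1, 0.3, 0.5                    # hypothetical limb potentials
wct = wilson_central_terminal(ra, la, ll)     # about 0.3 mV here
v2 = precordial_lead(0.8, ra, la, ll)         # hypothetical V2 reading
```

Note that `wct` is recomputed at every instant in a real machine, which is why VW fluctuates throughout the cardiac cycle rather than being a fixed reference.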

Limb leads

The limb leads and augmented limb leads (Wilson's central terminal is used as the negative pole for the latter in this representation)

Leads I, II and III are called the limb leads. The electrodes that form these signals are located on the limbs – one on each arm and one on the left leg. The limb leads form the points of what is known as Einthoven's triangle.

  • Lead I is the voltage between the (positive) left arm (LA) electrode and right arm (RA) electrode: I = LA − RA
  • Lead II is the voltage between the (positive) left leg (LL) electrode and the right arm (RA) electrode: II = LL − RA
  • Lead III is the voltage between the (positive) left leg (LL) electrode and the left arm (LA) electrode: III = LL − LA
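The three limb-lead definitions can be sketched in code; with hypothetical electrode potentials, Einthoven's law (lead II = lead I + lead III) follows directly from the definitions:

```python
# Sketch: limb leads as potential differences between electrodes,
# using made-up instantaneous potentials in mV.

def limb_leads(ra, la, ll):
    lead_i = la - ra    # Lead I: left arm minus right arm
    lead_ii = ll - ra   # Lead II: left leg minus right arm
    lead_iii = ll - la  # Lead III: left leg minus left arm
    return lead_i, lead_ii, lead_iii

i, ii, iii = limb_leads(ra=0.1, la=0.3, ll=0.6)
assert abs(ii - (i + iii)) < 1e-9  # Einthoven's law holds by construction
```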

Augmented limb leads

Leads aVR, aVL, and aVF are the augmented limb leads. They are derived from the same three electrodes as leads I, II, and III, but they use Goldberger's central terminal as their negative pole. Goldberger's central terminal is a combination of inputs from two limb electrodes, with a different combination for each augmented lead. It is referred to immediately below as "the negative pole".

  • Lead augmented vector right (aVR) has the positive electrode on the right arm. The negative pole is a combination of the left arm electrode and the left leg electrode: aVR = RA − (LA + LL)/2
  • Lead augmented vector left (aVL) has the positive electrode on the left arm. The negative pole is a combination of the right arm electrode and the left leg electrode: aVL = LA − (RA + LL)/2
  • Lead augmented vector foot (aVF) has the positive electrode on the left leg. The negative pole is a combination of the right arm electrode and the left arm electrode: aVF = LL − (RA + LA)/2

Together with leads I, II, and III, augmented limb leads aVR, aVL, and aVF form the basis of the hexaxial reference system, which is used to calculate the heart's electrical axis in the frontal plane.

Older versions of these leads (VR, VL, VF) used Wilson's central terminal as the negative pole, but their amplitude was too small for the thick lines of old ECG machines. The Goldberger terminals scale up (augment) the Wilson results by 50%, at the cost of physical correctness, since the three augmented leads do not share the same negative pole.
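A small sketch, using made-up electrode potentials, shows both the augmented-lead definitions and the 50% augmentation relative to the older Wilson-referenced leads:

```python
# Sketch: augmented limb leads from Goldberger's central terminal, and
# their 50% "augmentation" over the Wilson-referenced VR, VL, VF leads.
# Potentials are illustrative numbers in mV.

def augmented_leads(ra, la, ll):
    avr = ra - (la + ll) / 2
    avl = la - (ra + ll) / 2
    avf = ll - (ra + la) / 2
    return avr, avl, avf

ra, la, ll = 0.1, 0.3, 0.6
wct = (ra + la + ll) / 3            # Wilson's central terminal
avr, avl, avf = augmented_leads(ra, la, ll)

# Each augmented lead equals 1.5x the Wilson-referenced version:
assert abs(avr - 1.5 * (ra - wct)) < 1e-9
assert abs(avl - 1.5 * (la - wct)) < 1e-9
assert abs(avf - 1.5 * (ll - wct)) < 1e-9
```

The identity in the assertions is simple algebra: RA − (LA + LL)/2 is exactly 1.5 × (RA − (RA + LA + LL)/3).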

Precordial leads

The precordial leads lie in the transverse (horizontal) plane, perpendicular to the other six leads. The six precordial electrodes act as the positive poles for the six corresponding precordial leads: (V1, V2, V3, V4, V5, and V6). Wilson's central terminal is used as the negative pole. Recently, unipolar precordial leads have been used to create bipolar precordial leads that explore the right to left axis in the horizontal plane.

Specialized leads

Additional electrodes may rarely be placed to generate other leads for specific diagnostic purposes. Right-sided precordial leads may be used to better study pathology of the right ventricle or for dextrocardia (and are denoted with an R, e.g., V5R). Posterior leads (V7 to V9) may be used to demonstrate the presence of a posterior myocardial infarction. The Lewis lead or S5-lead (requiring an electrode at the right sternal border in the second intercostal space) can be used to better detect atrial activity in relation to that of the ventricles.

An esophageal lead can be inserted into the part of the esophagus where the distance to the posterior wall of the left atrium is only approximately 5–6 mm (which remains constant in people of different ages and weights). An esophageal lead allows more accurate differentiation between certain cardiac arrhythmias, particularly atrial flutter, AV nodal reentrant tachycardia and orthodromic atrioventricular reentrant tachycardia. It can also be used to evaluate the risk in people with Wolff-Parkinson-White syndrome, as well as to terminate supraventricular tachycardia caused by re-entry.

An intracardiac electrogram (ICEG) is essentially an ECG with some added intracardiac leads (that is, inside the heart). The standard ECG leads (external leads) are I, II, III, aVL, V1, and V6. Two to four intracardiac leads are added via cardiac catheterization. The word "electrogram" (EGM) without further specification usually means an intracardiac electrogram.

Lead locations on an ECG report

A standard 12-lead ECG report (an electrocardiogram) shows a 2.5 second tracing of each of the twelve leads. The tracings are most commonly arranged in a grid of four columns and three rows. The first column is the limb leads (I, II, and III), the second column is the augmented limb leads (aVR, aVL, and aVF), and the last two columns are the precordial leads (V1 to V6). Additionally, a rhythm strip may be included as a fourth or fifth row.

The timing across the page is continuous, and the rows are not tracings of the 12 leads for the same time period. In other words, if the output were traced by needles on paper, each row would switch which lead it is tracing as the paper is pulled under the needle. For example, the top row would first trace lead I, then switch to lead aVR, then to V1, and then to V4; none of these four lead tracings are from the same time period, as they are traced in sequence through time.

Contiguity of leads

Diagram showing the contiguous leads in the same color in the standard 12-lead layout

Each of the 12 ECG leads records the electrical activity of the heart from a different angle, and therefore aligns with a different anatomical area of the heart. Two leads that look at neighboring anatomical areas are said to be contiguous.

Category Leads Activity
Inferior leads Leads II, III and aVF Look at electrical activity from the vantage point of the inferior surface (diaphragmatic surface of heart)
Lateral leads I, aVL, V5 and V6 Look at the electrical activity from the vantage point of the lateral wall of left ventricle
Septal leads V1 and V2 Look at electrical activity from the vantage point of the septal surface of the heart (interventricular septum)
Anterior leads V3 and V4 Look at electrical activity from the vantage point of the anterior wall of the right and left ventricles (Sternocostal surface of heart)

In addition, any two precordial leads next to one another are considered to be contiguous. For example, though V4 is an anterior lead and V5 is a lateral lead, they are contiguous because they are next to one another.

Electrophysiology

The study of the conduction system of the heart is called cardiac electrophysiology (EP). An EP study is performed via a right-sided cardiac catheterization: a wire with an electrode at its tip is inserted into the right heart chambers from a peripheral vein, and placed in various positions in close proximity to the conduction system so that the electrical activity of that system can be recorded. Standard catheter positions for an EP study include "high right atrium" or hRA near the sinus node, a "His" across the septal wall of the tricuspid valve to measure bundle of His, a "coronary sinus" into the coronary sinus, and a "right ventricle" in the apex of the right ventricle.

Interpretation

Interpretation of the ECG is fundamentally about understanding the electrical conduction system of the heart. Normal conduction starts and propagates in a predictable pattern, and deviation from this pattern can be a normal variation or be pathological. An ECG does not equate with the mechanical pumping activity of the heart; for example, pulseless electrical activity produces an ECG rhythm that should pump blood, but no pulse is felt (this constitutes a medical emergency in which CPR should be performed). Ventricular fibrillation produces an ECG but is too dysfunctional to produce a life-sustaining cardiac output. Certain rhythms are known to have good cardiac output and some are known to have poor cardiac output. Ultimately, an echocardiogram or other anatomical imaging modality is useful in assessing the mechanical function of the heart.

Like all medical tests, what constitutes "normal" is based on population studies. A heart rate between 60 and 100 beats per minute (bpm) is considered normal since data show this to be the usual resting heart rate.

Theory

QRS is upright in a lead when its axis is aligned with that lead's vector
Schematic representation of a normal ECG

Interpretation of the ECG is ultimately that of pattern recognition. In order to understand the patterns found, it is helpful to understand the theory of what ECGs represent. The theory is rooted in electromagnetics and boils down to the four following points:

  • depolarization of the heart toward the positive electrode produces a positive deflection
  • depolarization of the heart away from the positive electrode produces a negative deflection
  • repolarization of the heart toward the positive electrode produces a negative deflection
  • repolarization of the heart away from the positive electrode produces a positive deflection

Thus, the overall direction of depolarization and repolarization produces positive or negative deflection on each lead's trace. For example, depolarizing from right to left would produce a positive deflection in lead I because the two vectors point in the same direction. In contrast, that same depolarization would produce minimal deflection in V1 and V2 because the vectors are perpendicular, and this phenomenon is called isoelectric.
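The relationship between depolarization direction and deflection can be modeled as the projection of the instantaneous heart vector onto a lead's axis. A minimal sketch in the frontal plane, using the hexaxial convention (lead I at 0°, positive angles toward the feet) and illustrative values:

```python
import math

# Sketch: the deflection a lead records, modeled as the component of the
# depolarization vector along that lead's axis in the frontal plane.

def deflection(magnitude, vector_angle_deg, lead_angle_deg):
    """Projection of the heart vector onto a lead's axis."""
    return magnitude * math.cos(math.radians(vector_angle_deg - lead_angle_deg))

# Depolarization traveling right-to-left lies along lead I (0 degrees):
assert deflection(1.0, 0, 0) > 0           # positive deflection in lead I
# The same vector is perpendicular to a lead at 90 degrees (e.g., aVF),
# giving essentially no net deflection (isoelectric):
assert abs(deflection(1.0, 0, 90)) < 1e-9
```

The text's V1/V2 example lives in the horizontal plane rather than the frontal plane, but the same perpendicular-vector reasoning applies there.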

Normal rhythm produces four entities – a P wave, a QRS complex, a T wave, and a U wave – that each have a fairly unique pattern.

  • The P wave represents atrial depolarization.
  • The QRS complex represents ventricular depolarization.
  • The T wave represents ventricular repolarization.
  • The U wave represents papillary muscle repolarization.

Changes in the structure of the heart and its surroundings (including blood composition) change the patterns of these four entities.

The U wave is not typically seen, and its absence is generally ignored. Atrial repolarization is typically hidden in the much more prominent QRS complex and normally cannot be seen without additional, specialized electrodes.

Background grid

ECGs are normally printed on a grid. The horizontal axis represents time and the vertical axis represents voltage. The standard values on this grid are shown in the adjacent image at 25 mm/s:

  • A small box is 1 mm × 1 mm and represents 0.1 mV × 0.04 seconds.
  • A large box is 5 mm × 5 mm and represents 0.5 mV × 0.20 seconds.

The "large" box is represented by a heavier line weight than the small boxes.
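The grid values above reduce to simple arithmetic; for instance, at 25 mm/s one large box is 0.20 s, so the heart rate can be estimated as 300 divided by the number of large boxes between successive R waves:

```python
# Sketch: converting grid boxes to time and voltage at the standard
# 25 mm/s, 10 mm/mV calibration. Because one large box is 0.20 s, the
# rate works out to 300 divided by the large boxes between R waves.

SMALL_BOX_S = 0.04   # one 1 mm box horizontally, at 25 mm/s
LARGE_BOX_S = 0.20   # five small boxes
SMALL_BOX_MV = 0.1   # one 1 mm box vertically

def seconds(small_boxes):
    """Duration represented by a horizontal run of small boxes."""
    return small_boxes * SMALL_BOX_S

def rate_from_large_boxes(n):
    """Heart rate (bpm) when R waves are n large boxes apart."""
    return 300.0 / n  # 60 s/min divided by (n * 0.20 s)

assert seconds(5) == LARGE_BOX_S
assert rate_from_large_boxes(4) == 75.0
```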

Measuring time and voltage with ECG graph paper

The standard printing speed in the United States is 25 mm per sec (5 big boxes per second), but in other countries it can be 50 mm per sec. Faster speeds such as 100 and 200 mm per sec are used during electrophysiology studies.

Not all aspects of an ECG rely on precise recordings or on a known scaling of amplitude or time. For example, determining whether the tracing is a sinus rhythm requires only feature recognition and matching, not measurement of amplitudes or times (i.e., the scale of the grid is irrelevant). By contrast, the voltage criteria for left ventricular hypertrophy require knowing the grid scale.

Rate and rhythm

In a normal heart, the heart rate is the rate at which the sinoatrial node depolarizes, since it is the source of depolarization of the heart. Heart rate, like other vital signs such as blood pressure and respiratory rate, changes with age. In adults, a normal heart rate is between 60 and 100 bpm (normocardic), whereas it is higher in children. A heart rate below normal is called "bradycardia" (<60 in adults) and a rate above normal is called "tachycardia" (>100 in adults). A complication arises when the atria and ventricles are not in synchrony and the "heart rate" must be specified as atrial or ventricular (e.g., the ventricular rate in ventricular fibrillation is 300–600 bpm, whereas the atrial rate can be normal [60–100] or faster [100–150]).
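A minimal sketch of the adult definitions above, deriving the rate from an RR interval and applying the 60–100 bpm limits:

```python
# Sketch: adult heart-rate classification using the thresholds in the
# text, plus the rate implied by the interval between successive beats.

def rate_from_rr(rr_seconds):
    """Beats per minute from the RR interval (seconds between beats)."""
    return 60.0 / rr_seconds

def classify_adult_rate(bpm):
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal"

assert rate_from_rr(0.75) == 80.0
assert classify_adult_rate(45) == "bradycardia"
assert classify_adult_rate(80) == "normal"
assert classify_adult_rate(130) == "tachycardia"
```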

In normal resting hearts, the physiologic rhythm of the heart is normal sinus rhythm (NSR). Normal sinus rhythm produces the prototypical pattern of P wave, QRS complex, and T wave. Generally, deviation from normal sinus rhythm is considered a cardiac arrhythmia. Thus, the first question in interpreting an ECG is whether or not there is a sinus rhythm. A criterion for sinus rhythm is that P waves and QRS complexes appear 1-to-1, thus implying that the P wave causes the QRS complex.

Once sinus rhythm is established, or not, the second question is the rate. For a sinus rhythm, this is either the rate of P waves or QRS complexes since they are 1-to-1. If the rate is too fast, then it is sinus tachycardia, and if it is too slow, then it is sinus bradycardia.

If it is not a sinus rhythm, then determining the rhythm is necessary before proceeding with further interpretation, and many arrhythmias can be identified by characteristic findings.

Determination of rate and rhythm is necessary in order to make sense of further interpretation.

Axis

Diagram showing how the polarity of the QRS complex in leads I, II, and III can be used to estimate the heart's electrical axis in the frontal plane.

The heart has several axes, but the most common by far is the axis of the QRS complex (references to "the axis" imply the QRS axis). Each axis can be computationally determined to result in a number representing degrees of deviation from zero, or it can be categorized into a few types.

The QRS axis is the general direction of the ventricular depolarization wavefront (or mean electrical vector) in the frontal plane. It is often sufficient to classify the axis as one of three types: normal, left deviated, or right deviated. Population data shows that a normal QRS axis is from −30° to 105°, with 0° being along lead I and positive being inferior and negative being superior (best understood graphically as the hexaxial reference system). Beyond +105° is right axis deviation and beyond −30° is left axis deviation (the third quadrant of −90° to −180° is very rare and is an indeterminate axis). A shortcut for determining if the QRS axis is normal is if the QRS complex is mostly positive in lead I and lead II (or lead I and aVF if +90° is the upper limit of normal).

The normal QRS axis is generally down and to the left, following the anatomical orientation of the heart within the chest. An abnormal axis suggests a change in the physical shape and orientation of the heart or a defect in its conduction system that causes the ventricles to depolarize in an abnormal way.

Classification Angle Notes
Normal −30° to 105° Normal
Left axis deviation −30° to −90° May indicate left ventricular hypertrophy, left anterior fascicular block, or an old inferior STEMI
Right axis deviation +105° to +180° May indicate right ventricular hypertrophy, left posterior fascicular block, or an old lateral STEMI
Indeterminate axis +180° to −90° Rarely seen; considered an 'electrical no-man's land'

The extent of a normal axis can be +90° or 105° depending on the source.
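The classification in the table can be sketched directly, here taking +105° as the upper limit of normal:

```python
# Sketch: categorizing a computed QRS axis (degrees in the frontal
# plane, normalized to (-180, 180]) using the ranges from the table.

def classify_axis(angle_deg):
    if -30 <= angle_deg <= 105:
        return "normal"
    if -90 <= angle_deg < -30:
        return "left axis deviation"
    if 105 < angle_deg <= 180:
        return "right axis deviation"
    return "indeterminate"  # the rare -90 to -180 quadrant

assert classify_axis(60) == "normal"
assert classify_axis(-45) == "left axis deviation"
assert classify_axis(120) == "right axis deviation"
assert classify_axis(-120) == "indeterminate"
```

Sources that use +90° as the upper limit of normal would simply change the first boundary.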

Amplitudes and intervals

Animation of a normal ECG wave

All of the waves on an ECG tracing and the intervals between them have a predictable time duration, a range of acceptable amplitudes (voltages), and a typical morphology. Any deviation from the normal tracing is potentially pathological and therefore of clinical significance.

For ease of measuring the amplitudes and intervals, an ECG is printed on graph paper at a standard scale: each 1 mm (one small box on the standard 25mm/s ECG paper) represents 40 milliseconds of time on the x-axis, and 0.1 millivolts on the y-axis.

Feature Description Pathology Duration
P wave The P wave represents depolarization of the atria. Atrial depolarization spreads from the SA node towards the AV node, and from the right atrium to the left atrium. The P wave is typically upright in most leads except for aVR; an unusual P wave axis (inverted in other leads) can indicate an ectopic atrial pacemaker. If the P wave is of unusually long duration, it may represent atrial enlargement. Typically a large right atrium gives a tall, peaked P wave while a large left atrium gives a two-humped bifid P wave. <80 ms
PR interval The PR interval is measured from the beginning of the P wave to the beginning of the QRS complex. This interval reflects the time the electrical impulse takes to travel from the sinus node through the AV node. A PR interval shorter than 120 ms suggests that the electrical impulse is bypassing the AV node, as in Wolff-Parkinson-White syndrome. A PR interval consistently longer than 200 ms diagnoses first degree atrioventricular block. The PR segment (the portion of the tracing after the P wave and before the QRS complex) is typically completely flat, but may be depressed in pericarditis. 120 to 200 ms
QRS complex The QRS complex represents the rapid depolarization of the right and left ventricles. The ventricles have a large muscle mass compared to the atria, so the QRS complex usually has a much larger amplitude than the P wave. If the QRS complex is wide (longer than 120 ms) it suggests disruption of the heart's conduction system, such as in LBBB, RBBB, or ventricular rhythms such as ventricular tachycardia. Metabolic issues such as severe hyperkalemia, or tricyclic antidepressant overdose can also widen the QRS complex. An unusually tall QRS complex may represent left ventricular hypertrophy while a very low-amplitude QRS complex may represent a pericardial effusion or infiltrative myocardial disease. 80 to 100 ms
J-point The J-point is the point at which the QRS complex finishes and the ST segment begins. The J-point may be elevated as a normal variant. The appearance of a separate J wave or Osborn wave at the J-point is pathognomonic of hypothermia or hypercalcemia.
ST segment The ST segment connects the QRS complex and the T wave; it represents the period when the ventricles are depolarized. It is usually isoelectric, but may be depressed or elevated with myocardial infarction or ischemia. ST depression can also be caused by LVH or digoxin. ST elevation can also be caused by pericarditis, Brugada syndrome, or can be a normal variant (J-point elevation).
T wave The T wave represents the repolarization of the ventricles. It is generally upright in all leads except aVR and lead V1. Inverted T waves can be a sign of myocardial ischemia, left ventricular hypertrophy, high intracranial pressure, or metabolic abnormalities. Peaked T waves can be a sign of hyperkalemia or very early myocardial infarction. 160 ms
Corrected QT interval (QTc) The QT interval is measured from the beginning of the QRS complex to the end of the T wave. Acceptable ranges vary with heart rate, so it must be corrected to the QTc by dividing by the square root of the RR interval. A prolonged QTc interval is a risk factor for ventricular tachyarrhythmias and sudden death. Long QT can arise as a genetic syndrome, or as a side effect of certain medications. An unusually short QTc can be seen in severe hypercalcemia. <440 ms
U wave The U wave is hypothesized to be caused by the repolarization of the interventricular septum. It normally has a low amplitude, and even more often is completely absent. A very prominent U wave can be a sign of hypokalemia, hypercalcemia or hyperthyroidism.
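The QTc correction described in the table (dividing the QT interval by the square root of the RR interval, in seconds) is Bazett's formula; a minimal sketch:

```python
import math

# Sketch: Bazett's rate correction for the QT interval.
# QT is in ms; RR is the interval between beats in seconds.

def qtc_bazett(qt_ms, rr_seconds):
    return qt_ms / math.sqrt(rr_seconds)

# At 60 bpm (RR = 1 s) the correction is a no-op:
assert qtc_bazett(400, 1.0) == 400.0
# At faster rates (RR < 1 s) the same measured QT corrects upward:
assert qtc_bazett(400, 0.6) > 400
```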

Limb leads and electrical conduction through the heart

Formation of limb waveforms during a pulse

The animation shown to the right illustrates how the path of electrical conduction gives rise to the ECG waves in the limb leads. Recall that a positive current (as created by depolarization of cardiac cells) traveling towards the positive electrode and away from the negative electrode creates a positive deflection on the ECG. Likewise, a positive current traveling away from the positive electrode and towards the negative electrode creates a negative deflection on the ECG. The red arrow represents the overall direction of travel of the depolarization. The magnitude of the red arrow is proportional to the amount of tissue being depolarized at that instant. The red arrow is simultaneously shown on the axis of each of the 3 limb leads. Both the direction and the magnitude of the red arrow's projection onto the axis of each limb lead are shown with blue arrows. The direction and magnitude of the blue arrows are then what theoretically determine the deflections on the ECG. For example, as a blue arrow on the axis for lead I moves from the negative electrode, to the right, towards the positive electrode, the ECG line rises, creating an upward wave. As the blue arrow on the axis for lead I moves to the left, a downward wave is created. The greater the magnitude of the blue arrow, the greater the deflection on the ECG for that particular limb lead.

Frames 1–3 depict the depolarization being generated in and spreading through the sinoatrial (SA) node. The SA node is too small for its depolarization to be detected on most ECGs. Frames 4–10 depict the depolarization traveling through the atria, towards the atrioventricular (AV) node. During frame 7, the depolarization is traveling through the largest amount of tissue in the atria, which creates the highest point in the P wave. Frames 11–12 depict the depolarization traveling through the AV node. Like the SA node, the AV node is too small for the depolarization of its tissue to be detected on most ECGs. This creates the flat PR segment.

Frame 13 depicts, in simplified fashion, the depolarization as it starts to travel down the interventricular septum, through the bundle of His and the bundle branches. After the bundle of His, the conduction system splits into the left bundle branch and the right bundle branch, both of which conduct action potentials at about 1 m/s. However, the action potential starts traveling down the left bundle branch about 5 milliseconds before it starts traveling down the right bundle branch, as depicted by frame 13. This causes the depolarization of the interventricular septum to spread from left to right, as depicted by the red arrow in frame 14. In some cases, this gives rise to a negative deflection after the PR interval, creating a Q wave such as the one seen in lead I in the animation to the right. Depending on the mean electrical axis of the heart, this phenomenon can result in a Q wave in lead II as well.

Following depolarization of the interventricular septum, the depolarization travels towards the apex of the heart. This is depicted by frames 15–17 and results in a positive deflection on all three limb leads, which creates the R wave. Frames 18–21 then depict the depolarization as it travels throughout both ventricles from the apex of the heart, following the action potential in the Purkinje fibers. This phenomenon creates a negative deflection in all three limb leads, forming the S wave on the ECG. Repolarization of the atria occurs at the same time as the generation of the QRS complex, but it is not detected by the ECG since the tissue mass of the ventricles is so much larger than that of the atria. Ventricular contraction occurs between ventricular depolarization and repolarization. During this time, there is no movement of charge, so no deflection is created on the ECG. This results in the flat ST segment after the S wave.

Frames 24–28 in the animation depict repolarization of the ventricles. The epicardium is the first layer of the ventricles to repolarize, followed by the myocardium. The endocardium is the last layer to repolarize. The plateau phase of depolarization has been shown to last longer in endocardial cells than in epicardial cells. This causes repolarization to start from the apex of the heart and move upwards. Since repolarization is the spread of negative current as membrane potentials decrease back down to the resting membrane potential, the red arrow in the animation is pointing in the direction opposite of the repolarization. This therefore creates a positive deflection in the ECG, and creates the T wave.

Ischemia and infarction

Ischemia or non-ST elevation myocardial infarctions (non-STEMIs) may manifest as ST depression or inversion of T waves. They may also affect the high-frequency band of the QRS.

ST elevation myocardial infarctions (STEMIs) have different characteristic ECG findings based on the amount of time elapsed since the MI first occurred. The earliest sign is hyperacute T waves, peaked T waves due to local hyperkalemia in ischemic myocardium. This then progresses over a period of minutes to elevations of the ST segment by at least 1 mm. Over a period of hours, a pathologic Q wave may appear and the T wave will invert. Over a period of days the ST elevation will resolve. Pathologic Q waves generally will remain permanently.

The coronary artery that has been occluded can be identified in an STEMI based on the location of ST elevation. The left anterior descending (LAD) artery supplies the anterior wall of the heart and therefore causes ST elevations in anterior leads (V1 and V2). The left circumflex artery (LCx) supplies the lateral aspect of the heart and therefore causes ST elevations in lateral leads (I, aVL and V6). The right coronary artery (RCA) usually supplies the inferior aspect of the heart and therefore causes ST elevations in inferior leads (II, III and aVF).

Artifacts

An ECG tracing is affected by patient motion; some rhythmic motions (such as shivering or tremors) can create the illusion of cardiac arrhythmia. Artifacts are distorted signals caused by secondary internal or external sources, such as muscle movement or interference from an electrical device.

Distortion poses significant challenges to healthcare providers, who employ various techniques and strategies to safely recognize these false signals. Accurately separating the ECG artifact from the true ECG signal can have a significant impact on patient outcomes and legal liabilities.

Improper lead placement (for example, reversing two of the limb leads) has been estimated to occur in 0.4% to 4% of all ECG recordings, and has resulted in improper diagnosis and treatment including unnecessary use of thrombolytic therapy.

A Method for Interpretation

Whitbread, a consultant nurse and paramedic, suggests ten rules of the normal ECG, deviation from which is likely to indicate pathology. These have since been extended to 15 rules for 12-lead (and 15- or 18-lead) interpretation.

Rule 1: All waves in aVR are negative.

Rule 2: The ST segment (J point) starts on the isoelectric line (except in V1 and V2, where it may be elevated by no more than 1 mm).

Rule 3: The PR interval should be 0.12–0.2 seconds long.

Rule 4: The QRS complex should not exceed 0.11–0.12 seconds.

Rule 5: The QRS and T waves tend to have the same general direction in the limb leads.

Rule 6: The R wave in the precordial (chest) leads grows from V1 to at least V4 where it may or may not decline again.

Rule 7: The QRS is mainly upright in I and II.

Rule 8: The P wave is upright in I, II and V2 to V6.

Rule 9: There is no Q wave or only a small q (<0.04 seconds in width) in I, II and V2 to V6.

Rule 10: The T wave is upright in I, II and V2 to V6. The end of the T wave should not drop below the isoelectric baseline.

Rule 11: Does the deepest S wave in V1 plus the tallest R wave in V5 or V6 equal >35 mm?

Rule 12: Is there an Epsilon wave?

Rule 13: Is there a J wave?

Rule 14: Is there a Delta wave?

Rule 15: Are there any patterns representing an occlusive myocardial infarction (OMI)?
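Several of the quantitative rules above (for example rules 3, 4 and 11) lend themselves to simple automated checks. A hypothetical sketch, with thresholds taken directly from the rules:

```python
def check_intervals(pr_s, qrs_s):
    """Flag deviations from rules 3 and 4 (PR 0.12-0.2 s, QRS <= 0.12 s)."""
    flags = []
    if not 0.12 <= pr_s <= 0.20:
        flags.append("PR interval outside 0.12-0.20 s")
    if qrs_s > 0.12:
        flags.append("QRS wider than 0.12 s")
    return flags

def sokolow_lyon_lvh(s_v1_mm, r_v5_v6_mm):
    """Rule 11: deepest S in V1 plus tallest R in V5/V6 greater than 35 mm
    (the Sokolow-Lyon voltage criterion for left ventricular hypertrophy)."""
    return s_v1_mm + r_v5_v6_mm > 35

print(check_intervals(0.24, 0.10))  # ['PR interval outside 0.12-0.20 s']
print(sokolow_lyon_lvh(12, 28))     # True
```

A long PR (as in the first call) suggests first-degree AV block; rule 11's voltage sum is the Sokolow–Lyon criterion mentioned in many ECG textbooks. Real interpretation, of course, weighs all the rules together in clinical context.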

Diagnosis

Numerous diagnoses and findings can be made based upon electrocardiography, and many are discussed above. Overall, the diagnoses are made based on the patterns. For example, an "irregularly irregular" QRS complex without P waves is the hallmark of atrial fibrillation; however, other findings can be present as well, such as a bundle branch block that alters the shape of the QRS complexes. ECGs can be interpreted in isolation but should be applied – like all diagnostic tests – in the context of the patient. For example, an observation of peaked T waves is not sufficient to diagnose hyperkalemia; such a diagnosis should be verified by measuring the blood potassium level. Conversely, a discovery of hyperkalemia should be followed by an ECG for manifestations such as peaked T waves, widened QRS complexes, and loss of P waves. The following is an organized list of possible ECG-based diagnoses.

Diagnoses fall into broad categories: rhythm disturbances or arrhythmias; heart block and conduction problems; electrolyte disturbances and intoxication; ischemia and infarction; and structural disease.

History

An early commercial ECG device (1911)
ECG from 1957
  • In 1872, Alexander Muirhead is reported to have attached wires to the wrist of a patient with fever to obtain an electronic record of their heartbeat.
  • In 1882, John Burdon-Sanderson working with frogs, was the first to appreciate that the interval between variations in potential was not electrically quiescent and coined the term "isoelectric interval" for this period.
  • In 1887, Augustus Waller invented an ECG machine consisting of a Lippmann capillary electrometer fixed to a projector. The trace from the heartbeat was projected onto a photographic plate that was itself fixed to a toy train. This allowed a heartbeat to be recorded in real time.
  • In 1895, Willem Einthoven assigned the letters P, Q, R, S, and T to the deflections in the theoretical waveform he created using equations which corrected the actual waveform obtained by the capillary electrometer to compensate for the imprecision of that instrument. Using letters different from A, B, C, and D (the letters used for the capillary electrometer's waveform) facilitated comparison when the uncorrected and corrected lines were drawn on the same graph. Einthoven probably chose the initial letter P to follow the example set by Descartes in geometry. When a more precise waveform was obtained using the string galvanometer, which matched the corrected capillary electrometer waveform, he continued to use the letters P, Q, R, S, and T, and these letters are still in use today. Einthoven also described the electrocardiographic features of a number of cardiovascular disorders.
  • In 1897, the string galvanometer was invented by the French engineer Clément Ader.
  • In 1901, Einthoven, working in Leiden, the Netherlands, used the string galvanometer: the first practical ECG. This device was much more sensitive than the capillary electrometer Waller used.
  • In 1924, Einthoven was awarded the Nobel Prize in Medicine for his pioneering work in developing the ECG.
  • By 1927, General Electric had developed a portable apparatus that could produce electrocardiograms without the use of the string galvanometer. This device instead combined amplifier tubes similar to those used in a radio with an internal lamp and a moving mirror that directed the tracing of the electric pulses onto film.
  • In 1937, Taro Takemi invented a new portable electrocardiograph machine.
  • In 1942, Emanuel Goldberger increased the voltage of Wilson's unipolar leads by 50% and created the augmented limb leads aVR, aVL and aVF. Added to Einthoven's three limb leads and the six chest leads, these produce the 12-lead electrocardiogram that is used today.
  • In the late 1940s, Rune Elmqvist invented an ink jet printer in which thin jets of ink were deflected by the heart's electrical potentials, giving good frequency response and direct recording of the ECG on paper. The device, called the Mingograf, was sold by Siemens Elema until the 1990s.

Etymology

The word is derived from the Greek electro, meaning related to electrical activity; kardia, meaning heart; and graph, meaning "to write".

Tropical cyclones and climate change

North Atlantic tropical cyclone activity according to the Power Dissipation Index, 1949–2015. Sea surface temperature has been plotted alongside the PDI to show how they compare. The lines have been smoothed using a five-year weighted average, plotted at the middle year.

Climate change can affect tropical cyclones in a variety of ways: an intensification of rainfall and wind speed, a decrease in overall frequency, an increase in the frequency of very intense storms and a poleward extension of where the cyclones reach maximum intensity are among the possible consequences of human-induced climate change. Tropical cyclones use warm, moist air as their source of energy or "fuel". As climate change is warming ocean temperatures, there is potentially more of this fuel available.

Between 1979 and 2017, there was a global increase in the proportion of tropical cyclones of Category 3 and higher on the Saffir–Simpson scale. The trend was most clear in the North Atlantic and in the Southern Indian Ocean. In the North Pacific, tropical cyclones have been moving poleward into colder waters and there was no increase in intensity over this period. With 2 °C (3.6 °F) warming, a greater percentage (+13%) of tropical cyclones are expected to reach Category 4 and 5 strength. A 2019 study indicates that climate change has been driving the observed trend of rapid intensification of tropical cyclones in the Atlantic basin. Rapidly intensifying cyclones are hard to forecast and therefore pose additional risk to coastal communities.

Warmer air can hold more water vapor: the theoretical maximum water vapor content is given by the Clausius–Clapeyron relation, which yields ≈7% increase in water vapor in the atmosphere per 1 °C (1.8 °F) warming. All models that were assessed in a 2019 review paper show a future increase of rainfall rates. Additional sea level rise will increase storm surge levels. It is plausible that extreme wind waves see an increase as a consequence of changes in tropical cyclones, further exacerbating storm surge dangers to coastal communities. The compounding effects from floods, storm surge, and terrestrial flooding (rivers) are projected to increase due to global warming.

There is currently no consensus on how climate change will affect the overall frequency of tropical cyclones. A majority of climate models show a decreased frequency in future projections. For instance, a 2020 paper comparing nine high-resolution climate models found robust decreases in frequency in the Southern Indian Ocean and the Southern Hemisphere more generally, while finding mixed signals for Northern Hemisphere tropical cyclones. Observations have shown little change in the overall frequency of tropical cyclones worldwide, with increased frequency in the North Atlantic and central Pacific, and significant decreases in the southern Indian Ocean and western North Pacific. There has been a poleward expansion of the latitude at which the maximum intensity of tropical cyclones occurs, which may be associated with climate change. In the North Pacific, there may also have been an eastward expansion. Between 1949 and 2016, there was a slowdown in tropical cyclone translation speeds. It is still unclear to what extent this can be attributed to climate change: climate models do not all show this feature.

Background

A tropical cyclone is a rapidly rotating storm system characterized by a low-pressure center, a closed low-level atmospheric circulation, strong winds and a spiral arrangement of thunderstorms that produce heavy rain or squalls. The majority of these systems form each year in one of seven tropical cyclone basins, which are monitored by a variety of meteorological services and warning centres.

The factors that determine tropical cyclone activity are relatively well understood: warmer sea surface temperatures are favourable to tropical cyclones, as are an unstable and moist mid-troposphere, while vertical wind shear suppresses them. All of these factors will change under climate change, but it is not always clear which factor dominates.

Tropical cyclones are known as hurricanes in the Atlantic Ocean and the northeastern Pacific Ocean, typhoons in the northwestern Pacific Ocean, and cyclones in the southern Pacific or the Indian Ocean. Fundamentally, they are all the same type of storm.

Data and models

Global ocean heat content in the top 700 m of the ocean.
North Atlantic tropical cyclone activity according to the Accumulated Cyclone Energy Index, 1950–2020.

Measurement

Based on satellite imagery, the Dvorak technique is the primary method used to estimate tropical cyclone intensity globally.

The Potential Intensity (PI) of tropical cyclones can be computed from observed data, primarily derived from vertical profiles of temperature, humidity and sea surface temperatures (SSTs). The convective available potential energy (CAPE) was computed from radiosonde stations in parts of the tropics from 1958 to 1997, but is considered to be of poor quality. The Power Dissipation Index (PDI) represents the total power dissipation for the North Atlantic and western North Pacific, and is strongly correlated with tropical SSTs. Various tropical cyclone scales exist to classify a system.
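The PDI is commonly defined as the integral of the cube of a storm's maximum sustained wind speed over its lifetime, summed over all storms in a basin and season. A rough single-storm sketch, assuming 6-hourly best-track wind speeds in m/s:

```python
def power_dissipation_index(vmax_series_ms, dt_s=6 * 3600):
    """Approximate PDI for one storm: the integral of vmax^3 dt,
    discretized over 6-hourly best-track maximum wind speeds (m/s)."""
    return sum(v ** 3 * dt_s for v in vmax_series_ms)

# Toy storm: winds ramp up to hurricane strength and back down
track = [18, 25, 33, 45, 50, 42, 30, 20]
print(f"PDI ~ {power_dissipation_index(track):.2e}")
```

Because the wind speed is cubed, the PDI is dominated by the most intense phases of the strongest storms, which is why it tracks SST-driven intensity changes more closely than simple storm counts do.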

Historical record

Since the satellite era, which began around 1970, trends in the connection between storms and sea surface temperatures are considered robust. Agreement exists that there were active storm periods in the more distant past, but the sea surface temperature-related Power Dissipation Index was not as high. Paleotempestology is the study of past tropical cyclone activity by means of geological proxies (such as flood sediments) or historical documentary records, such as shipwrecks or tree-ring anomalies. As of 2019, paleoclimate studies are not yet sufficiently consistent to draw conclusions for wider regions, but they do provide some useful information about specific locations.

Modelling tropical cyclones

Climate models are used to study expected future changes in cyclonic activity. Lower-resolution climate models cannot represent convection directly, and instead use parametrizations to approximate the smaller scale processes. This poses difficulties for tropical cyclones, as convection is an essential part of tropical cyclone physics.

Higher-resolution global models and regional climate models are more computationally intensive to run, making it difficult to simulate enough tropical cyclones for robust statistical analysis. However, with growing advancements in technology, climate models have improved their ability to simulate tropical cyclone frequency and intensity.

One challenge that scientists face when modeling is determining whether the recent changes in tropical cyclones are associated with anthropogenic forcing, or if these changes are still within their natural variability. This is most apparent when examining tropical cyclones at longer temporal resolutions. One study found a decreasing trend in tropical storms along the eastern Australian coast over a century-long historical record.

Changes in tropical cyclones

1970 Bhola cyclone before landfall. It became the deadliest tropical cyclone ever recorded with more than 300,000 casualties.

Climate change may affect tropical cyclones in a variety of ways: an intensification of rainfall and wind speed, a decrease in overall frequency, an increase in frequency of very intense storms and a poleward extension of where the cyclones reach maximum intensity are among the possible consequences of human-induced climate change.

Rainfall

Warmer air can hold more water vapor: the theoretical maximum water vapor content is given by the Clausius–Clapeyron relation, which yields ≈7% increase in water vapor in the atmosphere per 1 °C warming. All models that were assessed in a 2019 review paper show a future increase of rainfall rates, which is the rain that falls per hour. The World Meteorological Organization stated in 2017 that the quantity of rainfall from Hurricane Harvey had very likely been increased by climate change.
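The ≈7% per °C Clausius–Clapeyron scaling can be compounded for larger warmings. A small sketch, treating the 7% figure as a fixed exponential rate (an approximation; the true rate varies slightly with temperature):

```python
def cc_water_vapor_increase(delta_t_c, rate_per_c=0.07):
    """Fractional increase in saturation water vapor for a given warming,
    compounding the ~7 %/degC Clausius-Clapeyron scaling."""
    return (1 + rate_per_c) ** delta_t_c - 1

for dt in (1, 2, 4):
    print(f"+{dt} degC -> +{cc_water_vapor_increase(dt):.0%} water vapor")
```

Note that observed increases in cyclone rainfall rates can exceed this thermodynamic baseline, because storm dynamics (e.g. stronger moisture convergence) add to the Clausius–Clapeyron effect.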

A tropical cyclone's rainfall area (in contrast to rate) is primarily controlled by its environmental sea surface temperature (SST) – relative to the tropical mean SST, called the relative sea surface temperature. Rainfall will expand outwards as the relative SST increases, associated with an expansion of a storm wind field. The largest tropical cyclones are observed in the western North Pacific tropics, where the largest values of relative SST and mid-tropospheric relative humidity are located. Assuming that ocean temperatures rise uniformly, a warming climate is not likely to impact rainfall area.

Intensity

The 20-year average of the number of annual Category 4 and 5 hurricanes in the Atlantic region has approximately doubled since the year 2000.

Tropical cyclones use warm, moist air as their source of energy or "fuel". As climate change is warming ocean temperatures, there is potentially more of this fuel available. A study published in 2012 suggests that SSTs may be valuable as a proxy to measure potential intensity (PI) of tropical cyclones, as cyclones are sensitive to ocean basin temperatures. Between 1979 and 2017, there was a global increase in the proportion of tropical cyclones of Category 3 and higher on the Saffir–Simpson scale, which are cyclones with wind speeds over 178 km/h. The trend was most clear in the North Atlantic and in the Southern Indian Ocean. In the North Pacific, tropical cyclones have been moving poleward into colder waters and there was no increase in intensity over this period. With 2 °C warming, a greater percentage (+13%) of tropical cyclones are expected to reach Category 4 and 5 strength. A study of 2020's storms of at least tropical storm strength concluded that human-induced climate change increased extreme 3-hourly storm rainfall rates by 10% and extreme 3-day accumulated rainfall amounts by 5%; for hurricane-strength storms the figures increased to 11% and 8%.

Climate change has likely been driving the observed trend of rapid intensification of tropical cyclones in the Atlantic basin, with the proportion of storms undergoing intensification nearly doubling over the years 1982 to 2009. Rapidly intensifying cyclones are hard to forecast and pose additional risk to coastal communities. Storms have also begun to decay more slowly once they make landfall, threatening areas further inland than in the past. The 2020 Atlantic hurricane season was exceptionally active and broke numerous records for frequency and intensity of storms.

North Atlantic tropical storms and hurricanes, by classification (hurricane categories 1–3; tropical storm or tropical depression).

Frequency

There is no consensus on how climate change will affect the overall frequency of tropical cyclones. A majority of climate models show a decreased frequency in future projections. For instance, a 2020 paper comparing nine high-resolution climate models found robust decreases in frequency in the Southern Indian Ocean and the Southern Hemisphere more generally, while finding mixed signals for Northern Hemisphere tropical cyclones. Observations have shown little change in the overall frequency of tropical cyclones worldwide.

A study published in 2015 concluded that there would be more tropical cyclones in a cooler climate, and that tropical cyclone genesis is possible with sea surface temperatures below 26 °C. With warmer sea surface temperatures, especially in the Southern Hemisphere, in tandem with increased levels of carbon dioxide, it is likely tropical cyclone frequency will be reduced in the future.

Research conducted by Murakami et al. following the 2015 hurricane season in the eastern and central Pacific Ocean, where a record number of tropical cyclones and three simultaneous Category 4 hurricanes occurred, concludes that greenhouse gas forcing enhances subtropical Pacific warming, which they project will increase the frequency of extremely active tropical cyclones in this area.

Storm tracks

There has been a poleward expansion of the latitude at which the maximum intensity of tropical cyclones occurs, which may be associated with climate change. In the North Pacific, there may also be an eastward expansion. Between 1949 and 2016, there was a slowdown in tropical cyclone translation speeds. It is still unclear to what extent this can be attributed to climate change: climate models do not all show this feature.

Storm surges and flood hazards

Additional sea level rise will increase storm surge levels. It is plausible that extreme wind waves see an increase as a consequence of changes in tropical cyclones, further exacerbating storm surge dangers to coastal communities. Between 1923 and 2008, storm surge incidents along the US Atlantic coast showed a positive trend. A 2017 study looked at compounding effects from floods, storm surge, and terrestrial flooding (rivers), and projects an increase due to climate change. However, scientists are still uncertain whether recent increases of storm surges are a response to anthropogenic climate change.

Tropical cyclones in different basins

Six tropical cyclones swirl over two basins on September 16, 2020.

Hurricanes

Studies conducted in 2008 and 2016 looked at the duration of the Atlantic hurricane season and found that it may be getting longer, particularly south of 30°N and east of 75°W, with a tendency toward more early- and late-season storms correlated with warming sea surface temperatures. However, uncertainty remains high: one study found no trend and another mixed results.

A 2011 study linked increased activity of intense hurricanes in the North Atlantic with a northward shift and amplification of convective activities from the African easterly waves (AEWs). In addition to cyclone intensity, both size and translation speed have been shown to be substantial contributors to the impacts resulting from hurricane passage. A 2014 study investigated the response of AEWs to high emissions scenarios, and found increases in regional temperature gradients, convergence and uplift along the Intertropical Front of Africa, resulting in strengthening of the African easterly waves, affecting the climate over West Africa and the larger Atlantic basin.

A 2017 study concluded that the 2015 highly active hurricane season could not be attributed solely to a strong El Niño event. Instead, subtropical warming was an important factor as well, a feature more common as a consequence of climate change. A 2019 study found that increasing evaporation and the larger capability of the atmosphere to hold water vapor linked to climate change, already increased the amount of rainfall from hurricanes Katrina, Irma and Maria by 4 to 9 percent. Future increases of up to 30% were projected.

A 2018 study found no significant trends in landfalling hurricane frequency nor intensity for the continental United States since 1900. Furthermore, growth in coastal populations and regional wealth served as the overwhelming drivers of observed increases in hurricane-related damage.

Typhoons

Research based on records from Japan and Hawaii indicates that typhoons in the north-west Pacific intensified by 12–15% on average since 1977. The proportion of the strongest observed typhoons doubled, or in some regions tripled, and the intensification is most pronounced in landfalling systems. This uptick in storm intensity affects coastal populations in China, Japan, Korea and the Philippines, and has been attributed to warming ocean waters. The authors noted that it is not yet clear to what extent global warming caused the increased water temperatures, but the observations are consistent with what the IPCC projects for warming of sea surface temperatures. Vertical wind shear has shown decreasing trends in and around China, creating more favourable conditions for intense tropical cyclones. This is mainly a response to the weakening of the East Asian summer monsoon, itself a consequence of global warming.

Risk management and adaptation

The increase of tropical storms carries several risks: such storms can directly or indirectly cause injuries or death. The most effective strategy for managing these risks has been the development of early warning systems. A further policy that would mitigate flooding risk is reforestation of inland areas, to strengthen the soil of affected communities and reduce coastal inundation. It is also recommended that local schools, churches, and other community infrastructure be permanently equipped to serve as cyclone shelters. Focusing resources on immediate relief to those affected may divert attention from more long-term solutions; this is further exacerbated in lower-income communities and countries, as they suffer most from the consequences of tropical cyclones.

Pacific region

Specific national and supranational decisions have already been made and are being implemented. The Framework for Resilient Development in the Pacific (FRDP) has been instituted to strengthen and better coordinate disaster response and climate change adaptation among nations and communities in the region. Specific nations such as Tonga and the Cook Islands in the Southern Pacific have, under this regime, developed a Joint National Action Plan on Climate Change and Disaster Risk Management (JNAP) to coordinate and execute responses to the rising risk of climate change. These countries have identified the most vulnerable areas of their nations, generated national and supranational policies to be implemented, and provided specific goals and timelines to achieve them. These actions include reforestation, building levees and dams, creating early warning systems, reinforcing existing communication infrastructure, finding new sources of fresh water, promoting and subsidizing the proliferation of renewable energy, improving irrigation techniques to promote sustainable agriculture, increasing public education efforts on sustainable measures, and lobbying internationally for the increased use of renewable energy sources.

United States

The number of $1 billion Atlantic hurricanes almost doubled from the 1980s to the 2010s, and inflation-adjusted costs have increased more than elevenfold. The increases have been attributed to climate change and to greater numbers of people moving to coastal areas.

In the United States, several initiatives have been taken to better prepare for the strengthening of hurricanes, such as preparing local emergency shelters, building sand dunes and levees, and reforestation initiatives. Despite better modeling capabilities of hurricanes, property damage has increased dramatically. The National Flood Insurance Program incentivizes people to rebuild houses in flood-prone areas, and thereby hampers adaptation to increased risk from hurricanes and sea level rise. Due to wind shear and storm surge, a building with a weak building envelope is subject to more damage. Risk assessment using climate models helps determine the structural integrity of residential buildings in hurricane-prone areas.

Some ecosystems, such as marshes, mangroves, and coral reefs, can serve as a natural obstacle to coastal erosion, storm surges, and wind damage caused by hurricanes. These natural habitats are seen to be more cost-effective as they serve as a carbon sink and support biodiversity of a region. Although there is substantial evidence of natural habitats being the more beneficial barrier for tropical cyclones, built defenses are often the primary solution for government agencies and decision makers.  A study published in 2015, which assessed the feasibility of natural, engineered, and hybrid risk-mitigation to tropical cyclones in Freeport, Texas, found that incorporating natural ecosystems into risk-mitigation plans could reduce flood heights and ease the cost of built defenses in the future.

Media and public perception

The destruction from early 21st century Atlantic Ocean hurricanes, such as Hurricanes Katrina, Wilma, and Sandy, caused a substantial upsurge in interest in the subject of climate change and hurricanes by news media and the wider public, and concerns that global climatic change may have played a significant role in those events. Polling of populations affected by hurricanes found that 39 percent of Americans believed climate change helped to fuel the intensity of hurricanes in 2005, rising to 55 percent in September 2017.

After Typhoon Meranti in 2016, risk perception in China did not measurably increase. However, there was a clear rise in support for personal and community action against climate change. In Taiwan, people who had lived through a typhoon did not express more anxiety about climate change, though the survey did find a positive correlation between anxiety about typhoons and anxiety about climate change.

Jet stream

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Jet_stream

The polar jet stream can travel at speeds greater than 180 km/h (110 mph). Here, the fastest winds are coloured red; slower winds are blue.
Clouds along a jet stream over Canada.

Jet streams are fast flowing, narrow, meandering air currents in the atmospheres of the Earth, Venus, Jupiter, Saturn, Uranus, and Neptune. On Earth, the main jet streams are located near the altitude of the tropopause and are westerly winds (flowing west to east). Jet streams may start, stop, split into two or more parts, combine into one stream, or flow in various directions including opposite to the direction of the remainder of the jet.

Overview

The strongest jet streams are the polar jets around the polar vortices, at 9–12 km (5.6–7.5 mi; 30,000–39,000 ft) above sea level, and the higher altitude and somewhat weaker subtropical jets at 10–16 km (6.2–9.9 mi; 33,000–52,000 ft). The Northern Hemisphere and the Southern Hemisphere each have a polar jet and a subtropical jet. The northern hemisphere polar jet flows over the middle to northern latitudes of North America, Europe, and Asia and their intervening oceans, while the southern hemisphere polar jet mostly circles Antarctica, both all year round.

Jet streams are the product of two factors: the atmospheric heating by solar radiation that produces the large-scale polar, Ferrel, and Hadley circulation cells, and the action of the Coriolis force acting on those moving masses. The Coriolis force is caused by the planet's rotation on its axis. On other planets, internal heat rather than solar heating drives their jet streams. The polar jet stream forms near the interface of the polar and Ferrel circulation cells; the subtropical jet forms near the boundary of the Ferrel and Hadley circulation cells.

Other jet streams also exist. During the Northern Hemisphere summer, easterly jets can form in tropical regions, typically where dry air encounters more humid air at high altitudes. Low-level jets also are typical of various regions such as the central United States. There are also jet streams in the thermosphere.

Meteorologists use the location of some of the jet streams as an aid in weather forecasting. The main commercial relevance of the jet streams is in air travel, as flight time can be dramatically affected by flying with or against the flow. Airlines often work to fly with the jet stream to obtain significant fuel-cost and time savings. Dynamic North Atlantic Tracks are one example of how airlines and air traffic control work together to accommodate the jet stream and winds aloft for the maximum benefit of airlines and other users. Clear-air turbulence, a potential hazard to aircraft passenger safety, is often found in a jet stream's vicinity, but it does not create a substantial alteration of flight times.

Discovery

The first indications of this phenomenon came from American professor Elias Loomis in the 1800s, when he proposed a powerful air current in the upper air blowing west to east across the United States as an explanation for the behaviour of major storms. After the 1883 eruption of the Krakatoa volcano, weather watchers tracked and mapped the effects on the sky over several years. They labelled the phenomenon the "equatorial smoke stream". In the 1920s Japanese meteorologist Wasaburo Oishi detected the jet stream from a site near Mount Fuji. He tracked pilot balloons ("pibals"), used to measure wind speed and direction, as they rose in the air. Oishi's work largely went unnoticed outside Japan because it was published in Esperanto. American pilot Wiley Post, the first man to fly around the world solo in 1933, is often given some credit for discovery of jet streams. Post invented a pressurized suit that let him fly above 6,200 metres (20,300 ft). In the year before his death, Post made several attempts at a high-altitude transcontinental flight, and noticed that at times his ground speed greatly exceeded his air speed.

German meteorologist Heinrich Seilkopf is credited with coining a special term, Strahlströmung (literally "jet current"), for the phenomenon in 1939. Many sources credit real understanding of the nature of jet streams to regular and repeated flight-path traversals during World War II. Flyers consistently noticed westerly tailwinds in excess of 160 km/h (100 mph) in flights, for example, from the US to the UK. Similarly in 1944 a team of American meteorologists in Guam, including Reid Bryson, had enough observations to forecast very high west winds that would slow World War II bombers travelling to Japan.

Description

General configuration of the polar and subtropical jet streams
Cross section of the subtropical and polar jet streams by latitude

Polar jet streams are typically located near the 250 hPa (about 1/4 atmosphere) pressure level, or seven to twelve kilometres (23,000 to 39,000 ft) above sea level, while the weaker subtropical jet streams are much higher, between 10 and 16 kilometres (33,000 and 52,000 ft). Jet streams wander laterally dramatically and change in altitude. The jet streams form near breaks in the tropopause, at the transitions between the polar, Ferrel, and Hadley circulation cells; the circulation of those cells, with the Coriolis force acting on the moving air masses, drives the jet streams. The polar jets, at lower altitude and often intruding into mid-latitudes, strongly affect weather and aviation. The polar jet stream is most commonly found between latitudes 30° and 60° (closer to 60°), while the subtropical jet streams are located close to latitude 30°. These two jets merge at some locations and times, while at other times they are well separated. The northern polar jet stream is said to "follow the sun" as it slowly migrates northward as that hemisphere warms, and southward again as it cools.
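The correspondence between the 250 hPa pressure level and the altitude range quoted above can be checked with the isothermal barometric formula. This is only a rough sketch: the scale height used here is an assumed round number, and the real atmosphere is not isothermal.

```python
import math

# Sketch: relate a pressure level to an altitude using the isothermal
# barometric formula p(z) = p0 * exp(-z / H). The scale height H is an
# assumed representative value, not a measured constant.
p0 = 1013.25   # mean sea-level pressure, hPa
H = 7.4e3      # atmospheric scale height, m (assumed)

def pressure_altitude(p_hpa):
    """Altitude (m) at which pressure falls to p_hpa, isothermal case."""
    return -H * math.log(p_hpa / p0)

# The 250 hPa level comes out near 10 km, inside the 7-12 km range
# quoted for the polar jets.
print(f"250 hPa level: ~{pressure_altitude(250) / 1e3:.1f} km")
```

The result (roughly 10 km) is consistent with the "about 1/4 atmosphere" figure in the text, since 250 hPa is close to a quarter of sea-level pressure.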

The width of a jet stream is typically a few hundred kilometres or miles and its vertical thickness often less than five kilometres (16,000 feet).

Jet streams are typically continuous over long distances, but discontinuities are also common. The path of the jet typically has a meandering shape, and these meanders themselves propagate eastward, at lower speeds than that of the actual wind within the flow. Each large meander, or wave, within the jet stream is known as a Rossby wave (planetary wave). Rossby waves are caused by changes in the Coriolis effect with latitude. Shortwave troughs are smaller-scale waves, 1,000 to 4,000 kilometres (600–2,500 mi) long, superimposed on the Rossby waves; they move through the flow pattern around the large-scale, or longwave, "ridges" and "troughs" within the Rossby waves. Jet streams can split into two when they encounter an upper-level low that diverts a portion of the jet stream under its base, while the remainder of the jet moves by to its north.

The wind speeds are greatest where temperature differences between air masses are greatest, and often exceed 92 km/h (50 kn; 57 mph). Speeds of 400 km/h (220 kn; 250 mph) have been measured.

The jet stream moves from west to east, bringing changes of weather. Meteorologists now understand that the path of jet streams affects cyclonic storm systems at lower levels in the atmosphere, and so knowledge of their course has become an important part of weather forecasting. For example, in 2007 and 2012, Britain experienced severe flooding as a result of the polar jet staying south for the summer.

Cause

Highly idealised depiction of the global circulation. The upper-level jets tend to flow latitudinally along the cell boundaries.

In general, winds are strongest immediately under the tropopause (except locally, during tornadoes, tropical cyclones or other anomalous situations). If two air masses of different temperatures or densities meet, the resulting pressure difference caused by the density difference (which ultimately causes wind) is highest within the transition zone. The wind does not flow directly from the hot to the cold area, but is deflected by the Coriolis effect and flows along the boundary of the two air masses.

All these facts are consequences of the thermal wind relation. The balance of forces acting on an atmospheric air parcel in the vertical direction is primarily between the gravitational force acting on the mass of the parcel and the buoyancy force, or the difference in pressure between the top and bottom surfaces of the parcel. Any imbalance between these forces results in the acceleration of the parcel in the direction of the imbalance: upward if the buoyant force exceeds the weight, and downward if the weight exceeds the buoyancy force. The balance in the vertical direction is referred to as hydrostatic. Beyond the tropics, the dominant forces act in the horizontal direction, and the primary struggle is between the Coriolis force and the pressure gradient force. Balance between these two forces is referred to as geostrophic. Given both hydrostatic and geostrophic balance, one can derive the thermal wind relation: the vertical gradient of the horizontal wind is proportional to the horizontal temperature gradient. If two air masses in the northern hemisphere, one cold and dense to the north and the other hot and less dense to the south, are separated by a vertical boundary, and that boundary is removed, the difference in densities will result in the cold air mass slipping under the hotter and less dense air mass. The Coriolis effect will then cause poleward-moving mass to deviate to the east, while equatorward-moving mass will deviate toward the west. The general trend in the atmosphere is for temperatures to decrease in the poleward direction. As a result, winds develop an eastward component and that component grows with altitude. Therefore, the strong eastward-moving jet streams are in part a simple consequence of the fact that the Equator is warmer than the north and south poles.
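The thermal wind relation described above can be put to a rough numerical test. The sketch below uses assumed mid-latitude values (a 30 K temperature drop over 3,000 km, a 250 K mean temperature, a 10 km deep layer) to estimate the wind shear du/dz ≈ -(g / (f T)) · dT/dy and the jet speed it implies; it is an order-of-magnitude illustration, not a forecast model.

```python
import math

# Thermal wind sketch: vertical shear of the zonal wind from the
# meridional temperature gradient,  du/dz ≈ -(g / (f * T)) * dT/dy.
# All numbers below are assumed, representative mid-latitude values.
g = 9.81                        # gravitational acceleration, m/s^2
omega = 7.292e-5                # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(45))  # Coriolis parameter at 45 deg
T = 250.0                       # mean layer temperature, K

# Temperature assumed to fall 30 K over 3,000 km going poleward:
dT_dy = -30.0 / 3.0e6           # K/m (negative: colder to the north)

du_dz = -(g / (f * T)) * dT_dy  # vertical wind shear, s^-1

# Integrate the shear over a 10 km deep layer, starting from calm
# surface winds, to get the implied jet speed aloft:
u_jet = du_dz * 10e3            # m/s
print(f"shear: {du_dz * 1e3:.2f} (m/s)/km, implied jet: {u_jet:.0f} m/s")
```

The implied jet speed comes out near 40 m/s (about 140 km/h), which is the right order of magnitude for the polar jet, so the "eastward winds growing with altitude" argument in the text is quantitatively plausible.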

Polar jet stream

The thermal wind relation does not explain why the winds are organized into tight jets, rather than distributed more broadly over the hemisphere. One factor that contributes to the creation of a concentrated polar jet is the undercutting of sub-tropical air masses by the more dense polar air masses at the polar front. This causes a sharp north-south pressure (south-north potential vorticity) gradient in the horizontal plane, an effect which is most significant during double Rossby wave breaking events. At high altitudes, lack of friction allows air to respond freely to the steep pressure gradient with low pressure at high altitude over the pole. This results in the formation of planetary wind circulations that experience a strong Coriolis deflection and thus can be considered 'quasi-geostrophic'. The polar front jet stream is closely linked to the frontogenesis process in midlatitudes, as the acceleration/deceleration of the air flow induces areas of low/high pressure respectively, which link to the formation of cyclones and anticyclones along the polar front in a relatively narrow region.

Subtropical jet

A second factor which contributes to a concentrated jet is more applicable to the subtropical jet which forms at the poleward limit of the tropical Hadley cell, and to first order this circulation is symmetric with respect to longitude. Tropical air rises to the tropopause, and moves poleward before sinking; this is the Hadley cell circulation. As it does so it tends to conserve angular momentum, since friction with the ground is slight. Air masses that begin moving poleward are deflected eastward by the Coriolis force (true for either hemisphere), which for poleward moving air implies an increased westerly component of the winds (note that deflection is leftward in the southern hemisphere).
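The angular-momentum argument above can be made concrete. If air at rest over the Equator conserves its angular momentum while moving poleward aloft, its westerly wind at latitude φ is u = Ω a sin²φ / cos φ. The sketch below evaluates this idealised formula; real subtropical jets are considerably weaker because friction and eddy mixing, neglected here, drain momentum along the way.

```python
import math

# Idealised zonal wind for air that starts at rest over the Equator and
# conserves angular momentum while moving poleward in the Hadley cell:
#   u(lat) = omega * a * sin^2(lat) / cos(lat)
# This neglects friction and eddies, so it overestimates the real jet.
omega = 7.292e-5   # Earth's rotation rate, rad/s
a = 6.371e6        # Earth's mean radius, m

def zonal_wind(lat_deg):
    """Westerly wind (m/s) implied by angular-momentum conservation."""
    lat = math.radians(lat_deg)
    return omega * a * math.sin(lat) ** 2 / math.cos(lat)

for lat in (10, 20, 30):
    print(f"{lat:2d} deg: {zonal_wind(lat):6.1f} m/s")
```

At 30° latitude, near the poleward edge of the Hadley cell where the subtropical jet sits, the formula gives roughly 130 m/s, illustrating why westerlies concentrate there even though observed jet speeds are lower.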

Other planets

Jupiter's distinctive cloud bands

Jupiter's atmosphere has multiple jet streams, caused by the convection cells that form the familiar banded color structure; on Jupiter, these convection cells are driven by internal heating. The factors that control the number of jet streams in a planetary atmosphere are an active area of research in dynamical meteorology. In models, as one increases the planetary radius while holding all other parameters fixed, the number of jet streams decreases.

Effects

Hurricane protection

Hurricane Flossie over Hawaii in 2007. Note the large band of moisture that developed East of Hawaii Island that came from the hurricane.

The subtropical jet stream rounding the base of the mid-oceanic upper trough is thought to be one of the reasons most of the Hawaiian Islands have been resistant to the long list of Hawaii hurricanes that have approached. For example, when Hurricane Flossie (2007) approached and dissipated just before reaching landfall, the U.S. National Oceanic and Atmospheric Administration (NOAA) cited vertical wind shear, as evidenced in the photo.

Uses

On Earth, the northern polar jet stream is the most important one for aviation and weather forecasting, as it is much stronger and at a much lower altitude than the subtropical jet streams and also covers many countries in the Northern Hemisphere, while the southern polar jet stream mostly circles Antarctica and sometimes the southern tip of South America. Thus, the term jet stream in these contexts usually implies the northern polar jet stream.

Aviation

Flights between Tokyo and Los Angeles using the jet stream eastbound and a great circle route westbound.

The location of the jet stream is extremely important for aviation. Commercial use of the jet stream began on 18 November 1952, when Pan Am flew from Tokyo to Honolulu at an altitude of 7,600 metres (24,900 ft). It cut the trip time by over one-third, from 18 to 11.5 hours. Not only does it cut time off the flight, it also nets fuel savings for the airline industry. Within North America, the time needed to fly east across the continent can be decreased by about 30 minutes if an airplane can fly with the jet stream, or increased by more than that amount if it must fly west against it.
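The arithmetic behind these time savings is simple: ground speed is airspeed plus a tailwind or minus a headwind. The sketch below uses assumed, illustrative numbers (a 4,000 km leg, 900 km/h cruise airspeed, 150 km/h jet-stream wind), not the figures from the Pan Am flight.

```python
# Sketch: flight time with and against a jet-stream wind.
# All values are illustrative assumptions, not data from the article.
distance = 4000.0   # km, hypothetical transcontinental leg
airspeed = 900.0    # km/h, typical jet cruise airspeed (assumed)
jet_wind = 150.0    # km/h, assumed jet-stream wind component

eastbound = distance / (airspeed + jet_wind)   # flying with the jet
westbound = distance / (airspeed - jet_wind)   # flying against it

print(f"eastbound: {eastbound:.2f} h, westbound: {westbound:.2f} h")
print(f"difference: {(westbound - eastbound) * 60:.0f} min")
```

Even a modest wind component produces a difference of well over an hour on a leg this long, which is why eastbound and westbound schedules over the same route differ.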

Associated with jet streams is a phenomenon known as clear-air turbulence (CAT), caused by the vertical and horizontal wind shear that jet streams produce. CAT is strongest on the cold-air side of the jet, next to and just under the axis of the jet. Clear-air turbulence can cause aircraft to plunge and so presents a passenger-safety hazard that has caused fatal accidents, such as the death of one passenger on United Airlines Flight 826.

Possible future power generation

Scientists are investigating ways to harness the wind energy within the jet stream. According to one estimate of the potential wind energy in the jet stream, only one percent would be needed to meet the world's current energy needs. In the late 2000s it was estimated that the required technology would reportedly take 10–20 years to develop. There are two major but divergent scientific articles about jet stream power. Archer & Caldeira claim that the Earth's jet streams could generate a total power of 1700 terawatts (TW) and that the climatic impact of harnessing this amount would be negligible. However, Miller, Gans, & Kleidon claim that the jet streams could generate a total power of only 7.5 TW and that the climatic impact would be catastrophic.
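A back-of-the-envelope calculation shows why the jet stream attracts such interest as an energy source. The kinetic energy flux through a surface perpendicular to the wind is ½ρv³. The density, wind speed, and world demand figures below are assumed round numbers for illustration; the cited studies disagree by orders of magnitude on how much of this flux could actually be extracted without climatic side effects.

```python
# Back-of-the-envelope jet-stream power density, 0.5 * rho * v^3.
# Values are illustrative assumptions, not figures from the studies.
rho = 0.4      # air density near 10 km altitude, kg/m^3 (assumed)
v = 40.0       # representative jet-stream wind speed, m/s (assumed)

power_density = 0.5 * rho * v ** 3   # kinetic energy flux, W/m^2
print(f"power density: {power_density / 1e3:.1f} kW/m^2")

# Cross-sectional area that would intercept the world's ~18 TW primary
# power demand, ignoring extraction efficiency and climatic feedbacks:
world_demand = 18e12                 # W (assumed round figure)
area = world_demand / power_density  # m^2
print(f"intercept area: {area / 1e6:.0f} km^2")
```

Because power scales with the cube of wind speed, the flux at jet-stream altitude is tens of times that of surface winds, which explains the very large raw-resource estimates; it does not settle the dispute over how much is safely extractable.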

Unpowered aerial attack

Near the end of World War II, from late 1944 until early 1945, the Japanese Fu-Go balloon bomb, a type of fire balloon, was designed as a cheap weapon intended to make use of the jet stream over the Pacific Ocean to reach the west coast of Canada and the United States. Relatively ineffective as weapons, they were used in one of the few attacks on North America during World War II, causing six deaths and a small amount of damage. However, the Japanese were world leaders in biological weapons research at this time. The Japanese Imperial Army's Noborito Institute cultivated anthrax and plague Yersinia pestis; furthermore, it produced enough cowpox viruses to infect the entire United States. The deployment of these biological weapons on fire balloons was planned in 1944. Emperor Hirohito did not permit deployment of biological weapons on the basis of a report by President Staff Officer Umezu on 25 October 1944. Consequently, biological warfare using Fu-Go balloons was not implemented.

Changes due to climate cycles

Effects of ENSO

Impact of El Niño and La Niña on North America

El Niño-Southern Oscillation (ENSO) influences the average location of upper-level jet streams, and leads to cyclical variations in precipitation and temperature across North America, as well as affecting tropical cyclone development across the eastern Pacific and Atlantic basins. Combined with the Pacific Decadal Oscillation, ENSO can also impact cold season rainfall in Europe. Changes in ENSO also change the location of the jet stream over South America, which partially affects precipitation distribution over the continent.

El Niño

During El Niño events, increased precipitation is expected in California due to a more southerly, zonal storm track. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger-than-normal, and more southerly, polar jet stream. Snowfall is greater than average across the southern Rockies and Sierra Nevada mountain range, and is well below normal across the Upper Midwest and Great Lakes states. The northern tier of the lower 48 exhibits above-normal temperatures during the fall and winter, while the Gulf coast experiences below-normal temperatures during the winter season. The subtropical jet stream across the deep tropics of the Northern Hemisphere is enhanced due to increased convection in the equatorial Pacific, which decreases tropical cyclogenesis within the Atlantic tropics below normal and increases tropical cyclone activity across the eastern Pacific. In the Southern Hemisphere, the subtropical jet stream is displaced equatorward, or north, of its normal position, which diverts frontal systems and thunderstorm complexes away from central portions of the continent.

La Niña

Across North America during La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track and jet stream. The storm track shifts far enough northward to bring wetter than normal conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. Snowfall is above normal across the Pacific Northwest and western Great Lakes. Across the North Atlantic, the jet stream is stronger than normal, which directs stronger systems with increased precipitation towards Europe.

Dust Bowl

Evidence suggests the jet stream was at least partly responsible for the widespread drought conditions during the 1930s Dust Bowl in the Midwest United States. Normally, the jet stream flows east over the Gulf of Mexico and turns northward pulling up moisture and dumping rain onto the Great Plains. During the Dust Bowl, the jet stream weakened and changed course traveling farther south than normal. This starved the Great Plains and other areas of the Midwest of rainfall, causing extraordinary drought conditions.

Longer-term climatic changes

Meanders (Rossby Waves) of the Northern Hemisphere's polar jet stream developing (a), (b); then finally detaching a "drop" of cold air (c). Orange: warmer masses of air; pink: jet stream.

Since the early 2000s, climate models have consistently indicated that global warming will gradually push jet streams poleward. In 2008, this was supported by observational evidence showing that from 1979 to 2001 the northern jet stream moved northward at an average rate of 2.01 kilometres (1.25 mi) per year, with a similar trend in the Southern Hemisphere jet stream. Climate scientists have hypothesized that the jet stream will also gradually weaken as a result of global warming. Trends such as Arctic sea ice decline, reduced snow cover, evapotranspiration patterns, and other weather anomalies have caused the Arctic to heat up faster than other parts of the globe, in what is known as Arctic amplification. In 2021–2022, it was found that since 1979, the warming within the Arctic Circle has been nearly four times faster than the global average, with some hotspots in the Barents Sea area warming up to seven times faster than the global average. While the Arctic remains one of the coldest places on Earth today, the temperature gradient between it and the warmer parts of the globe will continue to diminish with every decade of global warming as a result of this amplification. If this gradient has a strong influence on the jet stream, then the jet stream will eventually become weaker and more variable in its course, which would allow more cold air from the polar vortex to leak into the mid-latitudes and slow the progression of Rossby waves, leading to more persistent and more extreme weather.

The hypothesis above is closely associated with Jennifer Francis, who first proposed it in a 2012 paper co-authored with Stephen J. Vavrus. While some paleoclimate reconstructions had suggested as early as 1997 that the polar vortex becomes more variable and causes more unstable weather during periods of warming, this was contradicted by climate modelling, with PMIP2 simulations finding in 2010 that the Arctic oscillation was much weaker and more negative during the Last Glacial Maximum, suggesting that warmer periods have a stronger positive-phase AO, and thus less frequent leaks of polar vortex air. However, a 2012 review in the Journal of the Atmospheric Sciences noted that "there [has been] a significant change in the vortex mean state over the twenty-first century, resulting in a weaker, more disturbed vortex", which contradicted the modelling results but fit the Francis–Vavrus hypothesis. Additionally, a 2013 study noted that the then-current CMIP5 models tended to strongly underestimate winter blocking trends, and other 2012 research had suggested a connection between declining Arctic sea ice and heavy snowfall during midlatitude winters.

In 2013, further research from Francis connected reductions in Arctic sea ice to extreme summer weather in the northern mid-latitudes, while other research from that year identified potential linkages between Arctic sea ice trends and more extreme rainfall in the European summer. At the time, it was also suggested that this connection between Arctic amplification and jet stream patterns was involved in the formation of Hurricane Sandy and played a role in the early 2014 North American cold wave. In 2015, Francis's next study concluded that highly amplified jet-stream patterns had occurred more frequently over the preceding two decades, and hence that continued heat-trapping emissions favour the increased formation of extreme events caused by prolonged weather conditions.

Studies published in 2017 and 2018 identified stalling patterns of Rossby waves in the northern hemisphere jet stream as the culprit behind other almost stationary extreme weather events, such as the 2018 European heatwave, the 2003 European heat wave, the 2010 Russian heat wave, and the 2010 Pakistan floods, and suggested that these patterns were all connected to Arctic amplification. Further work from Francis and Vavrus that year suggested that amplified Arctic warming is observed as stronger in the lower atmosphere because the expansion of warming air raises pressure levels aloft, which decreases the poleward geopotential height gradient. Because these gradients drive the west-to-east winds through the thermal wind relationship, declining wind speeds are usually found south of areas where geopotential height increases. In 2017, Francis explained her findings to Scientific American: "A lot more water vapor is being transported northward by big swings in the jet stream. That's important because water vapor is a greenhouse gas just like carbon dioxide and methane. It traps heat in the atmosphere. That vapor also condenses as droplets we know as clouds, which themselves trap more heat. The vapor is a big part of the amplification story—a big reason the Arctic is warming faster than anywhere else."

In a 2017 study conducted by climatologist Dr. Judah Cohen and several of his research associates, Cohen wrote that "[the] shift in polar vortex states can account for most of the recent winter cooling trends over Eurasian midlatitudes". A 2018 paper from Vavrus and others linked Arctic amplification to more persistent hot-dry extremes during the midlatitude summers, as well as the midlatitude winter continental cooling. Another 2017 paper estimated that when the Arctic experiences anomalous warming, primary production in the North America goes down by between 1% and 4% on average, with some states suffering up to 20% losses. A 2021 study found that a stratospheric polar vortex disruption is linked with extreme cold winter weather across parts of Asia and North America, including the February 2021 North American cold wave. Another 2021 study identified a connection between the Arctic sea ice loss and the increased size of wildfires in the Western United States.

However, because these are considered short-term observations, there is considerable uncertainty in the conclusions. Climatology observations require several decades to definitively distinguish various forms of natural variability from climate trends. This point was stressed by reviews in 2013 and 2017. A study in 2014 concluded that Arctic amplification had significantly decreased cold-season temperature variability over the Northern Hemisphere in recent decades. Cold Arctic air intrudes into the warmer lower latitudes more rapidly today during autumn and winter, a trend projected to continue in the future except during summer, calling into question whether winters will bring more cold extremes. A 2019 analysis of a data set collected from 35,182 weather stations worldwide, including 9,116 whose records go beyond 50 years, found a sharp decrease in northern midlatitude cold waves since the 1980s.

Moreover, a range of long-term observational data collected during the 2010s and published in the 2020s suggests that the intensification of Arctic amplification since the early 2010s was not linked to significant changes in midlatitude atmospheric patterns. State-of-the-art modelling research in PAMIP (Polar Amplification Model Intercomparison Project) improved upon the 2010 findings of PMIP2: it did find that sea ice decline would weaken the jet stream and increase the probability of atmospheric blocking, but the connection was very minor, and typically insignificant next to interannual variability. In 2022, a follow-up study found that while the PAMIP average had likely underestimated the weakening caused by sea ice decline by 1.2 to 3 times, even the corrected connection still amounts to only 10% of the jet stream's natural variability.

Additionally, a 2021 study found that while jet streams had indeed slowly moved polewards since 1960 as was predicted by models, they did not weaken, in spite of a small increase in waviness. A 2022 re-analysis of the aircraft observational data collected over 2002–2020 suggested that the North Atlantic jet stream had actually strengthened. Finally, a 2021 study was able to reconstruct jet stream patterns over the past 1,250 years based on Greenland ice cores, and found that all of the recently observed changes remain within range of natural variability: the earliest likely time of divergence is in 2060, under the Representative Concentration Pathway 8.5 which implies continually accelerating greenhouse gas emissions.

Other upper-level jets

Polar night jet

The polar-night jet stream forms mainly during the winter months when the nights are much longer, hence polar nights, in their respective hemispheres at around 60° latitude. The polar night jet moves at a greater height (about 24,000 metres (80,000 ft)) than it does during the summer. During these dark months the air high over the poles becomes much colder than the air over the Equator. This difference in temperature gives rise to extreme air pressure differences in the stratosphere, which, when combined with the Coriolis effect, create the polar night jets, that race eastward at an altitude of about 48 kilometres (30 mi). The polar vortex is circled by the polar night jet. The warmer air can only move along the edge of the polar vortex, but not enter it. Within the vortex, the cold polar air becomes increasingly cold with neither warmer air from lower latitudes nor energy from the Sun entering during the polar night.

Low-level jets

There are wind maxima at lower levels of the atmosphere that are also referred to as jets.

Barrier jet

A barrier jet in the low levels forms just upstream of mountain chains, with the mountains forcing the jet to be oriented parallel to the mountains. The mountain barrier increases the strength of the low level wind by 45 percent. In the North American Great Plains a southerly low-level jet helps fuel overnight thunderstorm activity during the warm season, normally in the form of mesoscale convective systems which form during the overnight hours. A similar phenomenon develops across Australia, which pulls moisture poleward from the Coral Sea towards cut-off lows which form mainly across southwestern portions of the continent.

Coastal jet

Coastal low-level jets are related to a sharp contrast between high temperatures over land and lower temperatures over the sea and play an important role in coastal weather, giving rise to strong coast parallel winds. Most coastal jets are associated with the oceanic high-pressure systems and thermal low over land. These jets are mainly located along cold eastern boundary marine currents, in upwelling regions offshore California, Peru-Chile, Benguela, Portugal, Canary and West Australia, and offshore Yemen—Oman.

Valley exit jet

A valley exit jet is a strong, down-valley, elevated air current that emerges above the intersection of the valley and its adjacent plain. These winds frequently reach speeds of up to 20 m/s (72 km/h; 45 mph) at heights of 40–200 m (130–660 ft) above the ground. Surface winds below the jet tend to be substantially weaker, even when they are strong enough to sway vegetation.

Valley exit jets are likely to be found in valley regions that exhibit diurnal mountain wind systems, such as those of the dry mountain ranges of the US. Deep valleys that terminate abruptly at a plain are more impacted by these factors than are those that gradually become shallower as downvalley distance increases.

Africa

There are several important low-level jets in Africa. Numerous low-level jets form in the Sahara, and are important for the raising of dust off the desert surface. This includes a low-level jet in Chad, which is responsible for dust emission from the Bodélé Depression, the world's most important single source of dust emission. The Somali Jet, which forms off the East African coast is an important component of the global Hadley circulation, and supplies water vapour to the Asian Monsoon. Easterly low-level jets forming in valleys within the East African Rift System help account for the low rainfall in East Africa and support high rainfall in the Congo Basin rainforest. The formation of the thermal low over northern Africa leads to a low-level westerly jet stream from June into October, which provides the moist inflow to the West African monsoon.

While not technically a low-level jet, the mid-level African easterly jet (at 3,000–4,000 m above the surface) is also an important climate feature in Africa. It occurs during the Northern Hemisphere summer between 10°N and 20°N, above the Sahel region of West Africa. The mid-level African easterly jet is considered to play a crucial role in the West African monsoon, and helps form the tropical waves which move across the tropical Atlantic and eastern Pacific oceans during the warm season.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...