
Friday, February 6, 2015

Lightning


From Wikipedia, the free encyclopedia
A lightning flash during a thunderstorm

High-speed, slow-motion lightning video captured at 6,200 frames per second.
Four-second video of a lightning strike, Island in the Sky, Canyonlands National Park, Utah, United States.

Lightning is a sudden electrostatic discharge during an electric storm between electrically charged regions of a cloud (called intra-cloud lightning or IC), between that cloud and another cloud (CC lightning), or between a cloud and the ground (CG lightning). The charged regions within the atmosphere temporarily equalize themselves through a lightning flash, commonly referred to as a strike if it hits an object on the ground. Lightning is always accompanied by thunder, but distant lightning may be seen when it is too far away for its thunder to be heard.

General considerations

On Earth, the lightning frequency is approximately 40–50 times a second or nearly 1.4 billion flashes per year[1] and the average duration is 30 microseconds.[2] Many factors affect the frequency, distribution, strength and physical properties of a "typical" lightning flash in a particular region of the world. These factors include ground elevation, latitude, prevailing wind currents, relative humidity, proximity to warm and cold bodies of water, etc. To a certain degree, the ratio between IC, CC and CG lightning may also vary by season in middle latitudes. Because human beings are terrestrial and most of their possessions are on the Earth, where lightning can damage or destroy them, CG lightning is the most studied and best understood of the three types, even though IC and CC are more common types of lightning. Lightning's relative unpredictability limits a complete explanation of how or why it occurs, even after hundreds of years of scientific investigation.
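As a quick consistency check (an illustrative calculation, not part of the source), a global rate in the cited 40–50 flashes-per-second range does work out to roughly 1.4 billion flashes per year:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

def annual_flashes(flashes_per_second):
    """Convert a global flash rate into an annual total."""
    return flashes_per_second * SECONDS_PER_YEAR

# Midpoint of the cited 40-50 flashes per second
print(annual_flashes(45) / 1e9)  # ~1.42 billion flashes per year
```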

A typical cloud-to-ground lightning flash culminates in the formation of an electrically conducting plasma channel through the air in excess of 5 kilometres (3.1 mi) long, from within the cloud to the ground's surface. The actual discharge is the final stage of a very complex process.[3] At its peak, a typical thunderstorm produces three or more strikes to the Earth per minute.[4] Lightning primarily occurs when warm air is mixed with colder air masses, resulting in atmospheric disturbances necessary for polarizing the atmosphere.[citation needed] However, it can also occur during dust storms, forest fires, tornadoes, volcanic eruptions, and even in the cold of winter, where the lightning is known as thundersnow.[5][6] Hurricanes typically generate some lightning, mainly in the rainbands as much as 160 kilometres (99 mi) from the center.[7][8][9]

The science of lightning is called fulminology, and the fear of lightning is called astraphobia.

General properties


World map showing frequency of lightning strikes, in flashes per km² per year (equal-area projection), from combined 1995–2003 data from the Optical Transient Detector and 1998–2003 data from the Lightning Imaging Sensor.

Lightning is not distributed evenly around the planet, as seen in the image on the right.

About 70% of lightning occurs over land in the tropics, where atmospheric convection is greatest. Convection arises both from the mixing of warmer and colder air masses and from differences in moisture concentration, and lightning generally occurs at the boundaries between them. The flow of warm ocean currents past drier land masses, such as the Gulf Stream, partially explains the elevated frequency of lightning in the Southeast United States. Over the vast stretches of the world's oceans, where land masses are small or absent, these atmospheric contrasts are weaker, so lightning is notably less frequent there than over larger landforms. The North and South Poles see few thunderstorms and therefore have the least lightning.

In general, cloud-to-ground (CG) lightning flashes account for only 25% of all total lightning flashes worldwide. Since the base of a thunderstorm is usually negatively charged, this is where most CG lightning originates. This region is typically at the elevation where freezing occurs within the cloud. Freezing, combined with collisions between ice and water, appears to be a critical part of the initial charge development and separation process. During wind-driven collisions, ice crystals tend to develop a positive charge, while a heavier, slushy mixture of ice and water (called graupel) develops a negative charge. Updrafts within a storm cloud separate the lighter ice crystals from the heavier graupel, causing the top region of the cloud to accumulate a positive space charge while the lower level accumulates a negative space charge.

Lightning in Belfort, France

Because the charge concentrated within the cloud must exceed the insulating properties of the air, and the charge required increases in proportion to the distance between the cloud and the ground, the proportion of CG strikes (versus cloud-to-cloud (CC) or in-cloud (IC) discharges) becomes greater when the cloud is closer to the ground. In the tropics, where the freezing level is generally higher in the atmosphere, only 10% of lightning flashes are CG. At the latitude of Norway (around 60° North), where the freezing elevation is lower, 50% of lightning is CG.[10][11]

Lightning is usually produced by cumulonimbus clouds, which have bases that are typically 1–2 km (0.6–1.2 mi) above the ground and tops up to 15 km (9.3 mi) in height.

On Earth, the place where lightning occurs most often is near the small village of Kifuka in the mountains of the eastern Democratic Republic of the Congo,[12] where the elevation is around 975 m (3,200 ft). On average, this region receives 158 lightning strikes per 1 square kilometer (0.39 sq mi) per year.[13] Other lightning hotspots include Catatumbo lightning in Venezuela, Singapore,[14] Teresina in northern Brazil,[15] and "Lightning Alley" in Central Florida.[16][17]

Establishing conditions necessary for lightning

In order for an electrostatic discharge to occur, two things are necessary: 1) a sufficiently high electric potential between two regions of space must exist; and 2) a high-resistance medium must obstruct the free, unimpeded equalization of the opposite charges.
  1. It is well understood that during a thunderstorm there is charge separation and aggregation in certain regions of the cloud; however the exact processes by which this occurs are not fully understood;[18]
    Main article: thunderstorm
  2. The atmosphere provides the electrical insulation, or barrier, that prevents free equalization between charged regions of opposite polarity. This is overcome by "lightning", a complex process referred to as the lightning "flash".
Establishing the electric field in CG lightning
As a thundercloud moves over the surface of the Earth, an equal electric charge, but of opposite polarity, is induced on the Earth's surface underneath the cloud. The induced positive surface charge, when measured against a fixed point, will be small as the thundercloud approaches, increasing as the center of the storm arrives and dropping as the thundercloud passes. The referential value of the induced surface charge could be roughly represented as a bell curve.
The oppositely charged regions create an electric field within the air between them. This electric field varies in relation to the strength of the surface charge on the base of the thundercloud – the greater the accumulated charge, the higher the electrical field.

View of lightning from an airplane flying above a system.

Lightning flashes and strikes

The best studied and understood form of lightning is cloud to ground (CG). Although more common, intracloud (IC) and cloud to cloud (CC) flashes are very difficult to study because there are no "physical" points to monitor inside the clouds. Also, given the very low probability that lightning will strike the same point repeatedly and consistently, scientific inquiry is difficult at best even in areas of high CG frequency. Since flash propagation is similar among all forms of lightning, the best means of describing the process is through an examination of the most studied form, cloud to ground.

Downward leader formation for negative CG lightning


A downward leader travels towards earth, branching as it goes.

Lightning strike caused by the connection of two leaders, positive shown in blue and negative in red

In a process not well understood, a channel of ionized air, called a "leader", is initiated from a charged region in the thundercloud. Leaders are electrically conductive channels of partially ionized gas that travel away from a region of dense charge. Negative leaders propagate away from densely charged regions of negative charge, and positive leaders propagate from positively charged regions.

The positively and negatively charged leaders proceed in opposite directions, positive upwards within the cloud and negative towards the earth. Both ionic channels proceed, in their respective directions, in a number of successive spurts. Each leader "pools" ions at the leading tips, shooting out one or more new leaders, momentarily pooling again to concentrate charged ions, then shooting out another leader.

Leaders often split, forming branches in a tree-like pattern.[19] In addition, negative leaders travel in a discontinuous fashion. The resulting jerky movement of these "stepped leader(s)" can be readily observed in slow-motion videos of negative leaders as they head toward ground prior to a negative CG lightning strike. The negative leaders continue to propagate and split as they head downward, often speeding up as they get closer to the Earth's surface.

About 90% of ionic channel lengths between "pools" are approximately 45 m (148 ft) in length.[20] The establishment of the ionic channel takes a comparatively long amount of time (hundreds of milliseconds) in comparison to resulting discharge which occurs within a few microseconds. The electric current needed to establish the channel, measured in the tens or hundreds of amperes, is dwarfed by subsequent currents during the actual discharge.

Initiation of the outward leaders is not well understood. The electric field strength within the thundercloud is not typically large enough to initiate this process by itself.[21] Many hypotheses have been proposed. One theory postulates that showers of relativistic electrons are created by cosmic rays and are then accelerated to higher velocities via a process called runaway breakdown. As these relativistic electrons collide and ionize neutral air molecules, they initiate leader formation. Another theory invokes locally enhanced electric fields being formed near elongated water droplets or ice crystals.[22] Percolation theory, especially for the case of biased percolation,[23] [clarification needed] describes random connectivity phenomena, which produce an evolution of connected structures similar to that of lightning strikes.

Upward streamers

When a stepped leader approaches the ground, the presence of opposite charges on the ground enhances the strength of the electric field. The electric field is strongest on grounded objects whose tops are closest to the base of the thundercloud, such as trees and tall buildings. If the electric field is strong enough, a positively charged ionic channel, called a positive or upward streamer, can develop from these points. This was first theorized by Heinz Kasemir.[24][25]

As negatively charged leaders approach, increasing the localized electric field strength, grounded objects already experiencing corona discharge exceed a threshold and form upward streamers.

Attachment

Once a downward leader connects to an available upward leader, a process referred to as attachment, a low-resistance path is formed and discharge may occur. Photographs have been taken on which unattached streamers are clearly visible. The unattached downward leaders are also visible in branched lightning, none of which are connected to the earth, although it may appear they are.[26]

Discharge

Return stroke

Once a conductive channel bridges the ionized air between the negative charges in the cloud and the positive surface charges below, a massive electrical discharge follows. Neutralization of positive surface charges occurs first. An enormous current of positive charges races up the ionic channel towards the thundercloud. This is the 'return stroke' and it is the most luminous and noticeable part of the lightning discharge.

High-speed photography showing different parts of a lightning flash during the discharge process as seen in Toulouse, France.

The positive charges in the ground region surrounding the lightning strike are neutralized within microseconds as they race inward to the strike point, up the plasma channel, and back to the cloud. A huge surge of current creates large radial voltage differences along the surface of the ground. Called step potentials, they are responsible for more injuries and deaths than the strike itself.[citation needed] Electricity follows the path of least resistance. A portion of the return stroke current will often preferentially flow through one leg and out another, electrocuting an unlucky human or animal standing near the point where the lightning strikes.
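The step-potential hazard can be sketched with a simple model: if the stroke current spreads hemispherically through uniform soil, the surface potential at distance r from the strike point is ρI/(2πr), and the voltage across one stride is the difference between two such terms. The soil resistivity and stride length below are illustrative assumptions, not figures from the text:

```python
import math

def step_voltage(current_a, soil_resistivity_ohm_m, distance_m, stride_m=0.7):
    """Voltage between two feet one stride apart, assuming the stroke
    current spreads hemispherically from the strike point through
    uniform soil (surface potential = rho * I / (2 * pi * r))."""
    k = soil_resistivity_ohm_m * current_a / (2 * math.pi)
    return k * (1 / distance_m - 1 / (distance_m + stride_m))

# 30 kA stroke, 100 ohm-m soil, standing 10 m from the strike point:
# a few kilovolts appear across a single stride
print(round(step_voltage(30e3, 100.0, 10.0)), "V")
```

Even this idealized model shows why the hazard falls off quickly with distance: doubling the distance from the strike point cuts the stride voltage by roughly a factor of four.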

The electrical current of the return stroke averages 30 kiloamperes for a typical negative CG flash, often referred to as "negative CG" lightning. In some cases, a positive ground to cloud (GC) lightning flash may originate from a positively charged region on the ground below a storm. These discharges normally originate from the tops of very tall structures, such as communications antennas. The rate at which the return stroke current travels has been found to be around 1×108 m/s.[27]

The massive flow of electrical current occurring during the return stroke combined with the rate at which it occurs (measured in microseconds) rapidly superheats the completed leader channel, forming a highly electrically-conductive plasma channel. The core temperature of the plasma during the return stroke may exceed 50,000 K, causing it to brilliantly radiate with a blue-white color. Once the electrical current stops flowing, the channel cools and dissipates over tens or hundreds of milliseconds, often disappearing as fragmented patches of glowing gas. The nearly instantaneous heating during the return stroke causes the air to explosively expand, producing a powerful shock wave that is heard as thunder.

Re-strike

High-speed videos (examined frame-by-frame) show that most negative CG lightning flashes are made up of 3 or 4 individual strokes, though there may be as many as 30.[28]

Each re-strike is separated by a relatively large amount of time, typically 40 to 50 milliseconds, as other charged regions in the cloud are discharged in subsequent strokes. Re-strikes often cause a noticeable "strobe light" effect.[29]

Each successive stroke is preceded by intermediate dart leader strokes that have a faster rise time but lower amplitude than the initial return stroke. Each subsequent stroke usually re-uses the discharge channel taken by the previous one, but the channel may be offset from its previous position as wind displaces the hot channel.[30]

Transient currents during the flash

The electrical current within a typical negative CG lightning discharge rises very quickly to its peak value in 1–10 microseconds, then decays more slowly over 50–200 microseconds. The transient nature of the current within a lightning flash results in several phenomena that need to be addressed in the effective protection of ground-based structures. Rapidly changing currents tend to travel on the surface of a conductor, an effect called the skin effect, unlike direct currents, which "flow through" the entire conductor like water through a hose. Hence, conductors used in the protection of facilities tend to be multi-stranded, woven from many small wires, which increases the ratio of surface area to cross-sectional area.
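The skin effect can be quantified with the standard skin-depth formula δ = √(ρ / (π f μ)). The copper resistivity and the 1 MHz frequency below are illustrative assumptions standing in for the spectral content of a microsecond-scale current pulse; they are not figures from the source:

```python
import math

MU_0 = 4e-7 * math.pi  # permeability of free space, H/m

def skin_depth(resistivity_ohm_m, frequency_hz, mu_r=1.0):
    """Depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(resistivity_ohm_m / (math.pi * frequency_hz * mu_r * MU_0))

# Copper (1.68e-8 ohm-m) at 1 MHz: current is confined to a layer
# only tens of micrometres thick, so surface area dominates
print(skin_depth(1.68e-8, 1e6) * 1e6, "micrometres")
```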

The rapidly changing currents also create electromagnetic pulses (EMPs) that radiate outward from the ionic channel. This is a characteristic of all electrical sparks. The radiated pulses rapidly weaken as their distance from the origin increases. However, if they pass over conductive elements, such as electrical wires, communication lines, or metallic pipes, they may induce a current which travels outward to its termination. This is the "surge" that, more often than not, results in the destruction of delicate electronics, electrical appliances, or electric motors. Devices known as surge protectors (SPDs) or transient voltage surge suppressors (TVSSs) attached to these conductors can detect the lightning flash's transient current and, through an alteration of their physical properties, route the spike to an attached earthing ground, thereby protecting the equipment from damage.

Types

There are three primary types of lightning, defined by what is at the "ends" of a flash channel: intracloud (IC), which occurs within a single thundercloud unit; cloud to cloud (CC), which starts and ends between two different "functional" thundercloud units; and cloud to ground (CG), which primarily originates in the thundercloud and terminates on the Earth's surface, but may also occur in the reverse direction, that is, ground to cloud. There are variations of each type, such as "positive" versus "negative" CG flashes, each with measurable physical characteristics. Different common names used to describe a particular lightning event may refer to the same or different events.

Cloud to ground (CG)

Cloud-to-ground (CG) lightning is a discharge between a thundercloud and the ground. It is the best known and third most common type of lightning, and the best understood of all forms, because it terminates on a physical object, namely the Earth, and lends itself to measurement by instruments. Of the three primary types, it poses the greatest threat to life and property since it terminates on, or "strikes", the Earth. CG lightning is usually negative in polarity and is usually initiated by a stepped leader moving down from the cloud.
  • Ground-to-cloud lightning is an artificially initiated, or triggered, category of CG flashes. Triggered lightning originates from tall, positively-charged structures on the ground, such as towers on mountains that have been inductively charged by the negative cloud layer above.[31]
  • Positive and negative lightning
CG lightning can occur with both positive and negative polarity. The polarity is that of the charge in the region that originated the lightning leaders. An average bolt of negative lightning carries an electric current of 30,000 amperes (30 kA), and transfers 15 coulombs of electric charge and 500 megajoules of energy. Large bolts of lightning can carry up to 120 kA and 350 coulombs.[32]
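The averages above are mutually consistent, which can be verified with two one-line relations (E = QV and Q = It); the derived voltage and duration are back-of-the-envelope values, not source figures:

```python
charge_c = 15.0    # coulombs transferred (cited average)
energy_j = 500e6   # joules (cited average)
current_a = 30e3   # amperes (cited average)

avg_voltage_v = energy_j / charge_c          # E = Q * V  ->  V = E / Q
effective_duration_s = charge_c / current_a  # Q = I * t  ->  t = Q / I

print(avg_voltage_v / 1e6, "MV")         # ~33 MV effective potential
print(effective_duration_s * 1e3, "ms")  # 0.5 ms of effective current flow
```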

Anvil-to-ground (Bolt from the blue) lightning strike.
Unlike the far more common "negative" lightning, positive lightning originates from the positively charged top of the clouds (generally anvil clouds) rather than the lower portion of the storm. Leaders form in the anvil of the cumulonimbus and may travel horizontally for several miles before veering towards the ground. A positive lightning bolt can strike anywhere within several miles of the anvil of the thunderstorm, often in areas experiencing clear or only slightly cloudy skies; they are also known as "bolts from the blue" for this reason. Positive lightning typically makes up less than 5% of all lightning strikes.[33]
Because of the much greater distance to ground, the positively-charged region can develop considerably larger levels of charge and voltages than the negative charge regions in the lower part of the cloud. Positive lightning bolts are considerably hotter and longer than negative lightning. They can develop six to ten times the amount of charge and voltage of a negative bolt and the discharge current may last ten times longer.[34] A bolt of positive lightning may carry an electric current of 300 kA and the potential at the top of the cloud may exceed a billion volts — about 10 times that of negative lightning.[35] During a positive lightning strike, huge quantities of extremely low frequency (ELF) and very low frequency (VLF) radio waves are generated.[36]
As a result of their greater power, as well as lack of warning, positive lightning strikes are considerably more dangerous. At the present time, aircraft are not designed to withstand such strikes, since their existence was unknown at the time standards were set, and the dangers unappreciated until the destruction of a glider in 1999.[37] The standard in force at the time of the crash, Advisory Circular AC 20-53A, was replaced by Advisory Circular AC 20-53B in 2006,[38] however it is unclear whether adequate protection against positive lightning was incorporated.[39][40]
Aircraft operating in U.S. airspace have been required to be equipped with static discharge wicks. Although their primary function is to mitigate radio interference due to static buildup through friction with the air, in the event of a lightning strike, a plane is designed to conduct the excess electricity through its skin and structure to the wicks to be safely discharged back into the atmosphere. These measures, however, may be insufficient for positive lightning.[41]
Positive lightning has also been shown to trigger the occurrence of Upper-atmospheric lightning between the tops of clouds and the ionosphere. Positive lightning tends to occur more frequently in winter storms, as with thundersnow, and in the dissipation stage of a thunderstorm.[42]

Cloud to cloud (CC) and Intra-Cloud (IC)


Branching of cloud to cloud lightning, New Delhi, India

Multiple paths of cloud-to-cloud lightning, Swifts Creek, Australia.

Cloud-to-cloud lightning, Victoria, Australia.

Lightning discharges may occur between areas of cloud without contacting the ground. When it occurs between two separate clouds it is known as inter-cloud lightning, and when it occurs between areas of differing electric potential within a single cloud it is known as intra-cloud lightning. Intra-cloud lightning is the most frequently occurring type.[42]

Intra-cloud lightning most commonly occurs between the upper anvil portion and lower reaches of a given thunderstorm. This lightning can sometimes be observed at great distances at night as so-called "heat lightning". In such instances, the observer may see only a flash of light without hearing any thunder. The "heat" portion of the term is a folk association between locally experienced warmth and the distant lightning flashes.

Another term used for cloud–cloud or cloud–cloud–ground lightning is "Anvil Crawler", owing to the habit of the charge typically originating from beneath or within the anvil and scrambling through the upper cloud layers of a thunderstorm, normally generating multiple branch strokes which are dramatic to witness. These are usually seen as a thunderstorm passes over the observer or begins to decay. The most vivid crawler behavior occurs in well-developed thunderstorms that feature extensive rear anvil shearing.

Anvil Crawler over Lake Wright Patman south of Redwater, Texas, on the backside of a large area of rain associated with a cold front.

Observational variations

  • Ball lightning may be an atmospheric electrical phenomenon, the physical nature of which is still controversial. The term refers to reports of luminous, usually spherical objects which vary from pea-sized to several meters in diameter.[43] It is sometimes associated with thunderstorms, but unlike lightning flashes, which last only a fraction of a second, ball lightning reportedly lasts many seconds. Ball lightning has been described by eyewitnesses but rarely recorded by meteorologists.[44][45] Scientific data on natural ball lightning is scarce owing to its infrequency and unpredictability. The presumption of its existence is based on reported public sightings, and has therefore produced somewhat inconsistent findings.
  • Bead lightning is the decaying stage of a lightning channel in which the luminosity of the channel breaks up into segments. Nearly every lightning discharge will exhibit beading as the channel cools immediately after a return stroke, sometimes referred to as the lightning's 'bead-out' stage. 'Bead lightning' is more properly a stage of a normal lightning discharge rather than a type of lightning in itself. Beading of a lightning channel is usually a small-scale feature, and therefore is often only apparent when the observer/camera is close to the lightning.[46]
  • Forked lightning is cloud-to-ground lightning that exhibits branching of its path.
  • Heat lightning is a lightning flash that appears to produce no discernible thunder because it occurs too far away for the thunder to be heard. The sound waves dissipate before they reach the observer.[48]
  • Ribbon lightning occurs in thunderstorms with high cross winds and multiple return strokes. The wind will blow each successive return stroke slightly to one side of the previous return stroke, causing a ribbon effect.[citation needed]
  • Rocket lightning is a form of cloud discharge, generally horizontal and at cloud base, with a luminous channel appearing to advance through the air with visually resolvable speed, often intermittently.[49]
  • Sheet lightning is cloud-to-cloud lightning that exhibits a diffuse brightening of the surface of a cloud, caused by the actual discharge path being hidden or too far away. The lightning itself cannot be seen by the spectator, so it appears as only a flash, or a sheet of light. The lightning may be too far away to discern individual flashes.
  • Smooth channel lightning is a positive cloud-to-ground lightning strike in which the forward stroke originates from the ground and travels upwards to the cloud. The smooth channel is the lower section of the lightning channel; the channel branches higher up, but the branching is not visible because it occurs inside the cloud.[citation needed] Large supercells generate tremendous areas of positively charged cloud material (a thick anvil), and wind shear prevents the excessive negative strokes seen in "normal" thunderstorms. Downdrafts, such as the forward flank downdraft (FFD), bring the positively charged cloud material closer to the ground, where such lightning occurs.[citation needed]
  • Staccato lightning is a short-duration cloud-to-ground (CG) stroke that (often but not always) appears as a single very bright flash and often has considerable branching.[50] These strokes are often found in the visual vault area near the mesocyclone of rotating thunderstorms and coincide with intensification of thunderstorm updrafts. A similar cloud-to-cloud strike consisting of a brief flash over a small area, appearing like a blip, also occurs in a similar area of rotating updrafts.[51]

    This CG near New Boston, Texas was of very short duration, exhibited highly branched channels and was very bright, indicating that it was staccato lightning.
  • Superbolts are bolts of lightning around a hundred times brighter than normal. On Earth, one in a million lightning strikes is a superbolt.[citation needed]
  • Sympathetic lightning is the tendency of lightning to be loosely coordinated across long distances. Discharges can appear in clusters when viewed from space.
  • Clear-air lightning describes lightning that occurs with no apparent cloud close enough to have produced it. In the U.S. and Canadian Rockies, a thunderstorm can be in an adjacent valley and not observable, either visually or audibly, from the valley where the bolt strikes. European and Asian mountainous areas experience similar events. In areas such as sounds, large lakes, or open plains, a strike can also occur while the storm cell is still on the near horizon (within 26 kilometres (16 mi)); because the storm is so far away, the strike is referred to as clear-air lightning.[citation needed]

Effects

Lightning strike

Objects struck by lightning experience heat and magnetic forces of great magnitude. The heat created by lightning currents traveling through a tree may vaporize its sap, causing a steam explosion that bursts the trunk. As lightning travels through sandy soil, the soil surrounding the plasma channel may melt, forming tubular structures called fulgurites. Even though roughly 90 percent of people struck by lightning survive,[52] humans or animals struck by lightning may suffer severe injury due to internal organ and nervous system damage. Buildings or tall structures hit by lightning may be damaged as the lightning seeks unintended paths to ground. By safely conducting a lightning strike to ground, a lightning protection system can greatly reduce the probability of severe property damage. Lightning also oxidizes nitrogen in the air into nitrates which are deposited by rain and can fertilize plant growth.[53][54]

Thunder

Because the electrostatic discharge of terrestrial lightning superheats the air to plasma temperatures along the length of the discharge channel in a short duration, kinetic theory dictates that gaseous molecules undergo a rapid increase in pressure and thus expand outward from the lightning, creating a shock wave audible as thunder. Since the sound waves propagate not from a single point source but along the length of the lightning's path, the varying distances between the observer and points along that path can generate a rolling or rumbling effect. Perception of the sonic characteristics is further complicated by factors such as the irregular and possibly branching geometry of the lightning channel, by acoustic echoing from terrain, and by the typically multiple-stroke character of the lightning strike.

Light travels at about 300,000,000 m/s, while sound travels through air at about 340 m/s, so an observer can approximate the distance to the strike by timing the interval between the visible lightning and the audible thunder it generates. A lightning flash preceding its thunder by five seconds would be about one mile (1.6 km) distant (5 × 340 m); a flash preceding thunder by three seconds is about one kilometer (0.62 mi) distant (3 × 340 m). Consequently, a lightning strike observed at very close range is accompanied by a sudden clap of thunder with almost no perceptible time lapse, possibly accompanied by the smell of ozone (O3).
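The flash-to-bang estimate described above reduces to a one-line calculation (light's travel time of a few microseconds is ignored, and the speed of sound is rounded as in the text):

```python
SPEED_OF_SOUND_M_S = 340.0  # approximate speed of sound in air near the surface

def distance_to_strike_m(delay_seconds):
    """Distance implied by the gap between seeing the flash and hearing thunder."""
    return delay_seconds * SPEED_OF_SOUND_M_S

print(distance_to_strike_m(5) / 1609.344)  # ~1.06 miles for a 5-second delay
print(distance_to_strike_m(3) / 1000)      # ~1.02 km for a 3-second delay
```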

High-energy radiation

The production of X-rays by a bolt of lightning was theoretically predicted as early as 1925[55] but no evidence was found until 2001/2002,[56][57][58] when researchers at the New Mexico Institute of Mining and Technology detected X-ray emissions from an induced lightning strike along a grounded wire trailed behind a rocket shot into a storm cloud. In the same year University of Florida and Florida Tech researchers used an array of electric field and X-ray detectors at a lightning research facility in North Florida to confirm that natural lightning makes X-rays in large quantities during the propagation of stepped leaders. The cause of the X-ray emissions is still a matter for research, as the temperature of lightning is too low to account for the X-rays observed.[59][60]

A number of observations by space-based telescopes have revealed even higher energy gamma ray emissions, the so-called terrestrial gamma-ray flashes (TGFs). These observations pose a challenge to current theories of lightning, especially with the recent discovery of the clear signatures of antimatter produced in lightning.[61]

Volcanic


Volcanic material thrust high into the atmosphere can trigger lightning.

Volcanic activity produces lightning-friendly conditions in multiple ways. The enormous quantity of pulverized material and gases explosively ejected into the atmosphere creates a dense plume of highly charged particles, establishing ideal conditions for lightning. The ash density and constant motion within the volcanic plume continually produce electrostatic ionization, resulting in very powerful and very frequent flashes as the plume attempts to neutralize itself. Because of its extensive solid material (ash) content, unlike the water-rich charge-generating zones of a normal thundercloud, this is often called a dirty thunderstorm.
  • Powerful and frequent flashes have been witnessed in volcanic plumes as far back as the 79 AD eruption of Vesuvius, recorded by Pliny the Younger.[62]
  • Likewise, vapors and ash originating from vents on the volcano's flanks may produce more localized and smaller flashes upwards of 2.9 km long.
  • Small, short duration sparks, recently documented near newly extruded magma, attest to the material being highly charged prior to even entering the atmosphere.[63]

Extraterrestrial lightning

Lightning has been observed within the atmospheres of other planets, such as Venus, Jupiter and Saturn. Although rare on Earth, superbolts appear to be common on Jupiter.

Lightning on Venus has been a controversial subject despite decades of study. During the Soviet Venera and U.S. Pioneer missions of the 1970s and 1980s, signals suggesting that lightning may be present in the upper atmosphere were detected.[64] Although the Cassini–Huygens fly-by of Venus in 1999 detected no signs of lightning, the observation window lasted mere hours. Radio pulses recorded by the spacecraft Venus Express (which began orbiting Venus in April 2006) have been confirmed to originate from lightning on Venus.

Human related

  • Airplane contrails have also been observed to influence lightning to a small degree. The water-vapor-dense contrail of an airplane may provide a lower-resistance pathway through the atmosphere, influencing the establishment of an ionic pathway for a lightning flash to follow.[65]
  • Rocket exhaust plumes provided a pathway for lightning when it was witnessed striking the Apollo 12 rocket shortly after takeoff.
  • Thermonuclear explosions, by providing extra material for electrical conduction and a very turbulent localized atmosphere, have been seen triggering lightning flashes within the mushroom cloud. In addition, intense gamma radiation from large nuclear explosions may develop intensely charged regions in the surrounding air through Compton scattering. These intensely charged space-charge regions create multiple clear-air lightning discharges shortly after the device detonates.[66]

Scientific study

Properties

Thunder is heard as a rolling, gradually dissipating rumble because the sound from different portions of a long stroke arrives at slightly different times.[67]

When the local electric field exceeds the dielectric strength of damp air (about 3 million volts per meter), electrical discharge results in a strike, often followed by commensurate discharges branching from the same path. (See image, right.) The mechanisms that cause charges to build up to the point of lightning are still a matter of scientific investigation.[68][69] Lightning may be caused by the circulation of warm, moisture-filled air through electric fields.[70] Ice or water particles then accumulate charge, as in a Van de Graaff generator.[71]
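As a rough illustration of the 3 million volts per metre figure, the potential needed to break down a uniform air gap scales linearly with the gap length. Real thunderstorm fields are highly non-uniform, so this is only a back-of-envelope sketch:

```python
# Back-of-envelope: potential required to break down a uniform air gap,
# using the quoted dielectric strength of damp air (~3 MV/m).
# Actual thunderstorm fields are far from uniform; illustrative only.
BREAKDOWN_FIELD_V_PER_M = 3e6

def breakdown_voltage(gap_m: float) -> float:
    """Voltage needed to exceed the breakdown field across a uniform gap of gap_m metres."""
    return BREAKDOWN_FIELD_V_PER_M * gap_m

print(breakdown_voltage(0.01))  # roughly 30,000 V (~30 kV) for a 1 cm spark gap
print(breakdown_voltage(1.0))   # ~3 million volts across a single metre
```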

Researchers at the University of Florida found that the final one-dimensional speeds of 10 flashes observed were between 1.0×10⁵ and 1.4×10⁶ m/s, with an average of 4.4×10⁵ m/s.[72]

Detection and monitoring


A lightning strike counter in a museum.

The earliest detector invented to warn of the approach of a thunderstorm was the lightning bell; Benjamin Franklin installed one such device in his house.[73][74] The detector was based on an electrostatic device called the "electric chimes", invented by Andrew Gordon in 1742.

Lightning discharges generate a wide range of electromagnetic radiations, including radio-frequency pulses. The times at which a pulse from a given lightning discharge arrives at several receivers can be used to locate the source of the discharge. The United States federal government has constructed a nation-wide grid of such lightning detectors, allowing lightning discharges to be tracked in real time throughout the continental U.S.[75][76]
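The multi-receiver locating idea can be sketched as a toy two-dimensional calculation: pick the candidate point whose predicted arrival-time differences best match the measured ones. Real detection networks use far more sophisticated solvers; the brute-force grid search, the receiver positions and the function name below are illustrative assumptions only.

```python
# Toy 2-D lightning locator: find the grid point whose predicted
# arrival-time differences (relative to receiver 0) best match the
# measured ones. Illustrative only; real networks use proper solvers.
C = 3.0e8  # propagation speed of the radio pulse, m/s

def locate(receivers, arrival_times, grid_step=1000.0, extent=50000.0):
    """receivers: list of (x, y) in metres; arrival_times: seconds, same order."""
    best, best_err = None, float("inf")
    steps = int(2 * extent / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            x = -extent + i * grid_step
            y = -extent + j * grid_step
            d0 = ((x - receivers[0][0]) ** 2 + (y - receivers[0][1]) ** 2) ** 0.5
            err = 0.0
            for (rx, ry), t in zip(receivers[1:], arrival_times[1:]):
                d = ((x - rx) ** 2 + (y - ry) ** 2) ** 0.5
                predicted_dt = (d - d0) / C        # time difference vs. receiver 0
                measured_dt = t - arrival_times[0]
                err += (predicted_dt - measured_dt) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulated strike at a known point; arrival times from exact geometry.
receivers = [(-40000.0, -40000.0), (40000.0, -40000.0),
             (-40000.0, 40000.0), (40000.0, 40000.0)]
source = (10000.0, -5000.0)
times = [((source[0] - rx) ** 2 + (source[1] - ry) ** 2) ** 0.5 / C
         for rx, ry in receivers]
print(locate(receivers, times))  # recovers a grid point at the true source
```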

The Earth–ionosphere waveguide traps electromagnetic VLF and ELF waves. Electromagnetic pulses transmitted by lightning strikes propagate within that waveguide. The waveguide is dispersive, meaning that the group velocity of these waves depends on frequency. The difference in the group time delay of a lightning pulse at adjacent frequencies is proportional to the distance between transmitter and receiver; together with direction-finding methods, this allows lightning strikes to be located at distances of up to 10,000 km from their origin. Moreover, the eigenfrequencies of the Earth–ionosphere waveguide, the Schumann resonances at about 7.5 Hz, are used to determine global thunderstorm activity.[77]

In addition to ground-based lightning detection, several instruments aboard satellites have been constructed to observe lightning distribution. These include the Optical Transient Detector (OTD), aboard the OrbView-1 satellite launched on April 3, 1995, and the subsequent Lightning Imaging Sensor (LIS) aboard TRMM launched on November 28, 1997.[78][79][80]

Artificially triggered

  • Rocket-triggered lightning can be "triggered" by launching specially designed rockets trailing spools of wire into thunderstorms. The wire unwinds as the rocket ascends, creating an elevated ground that can attract descending leaders. If a leader attaches, the wire provides a low-resistance pathway and a lightning flash occurs. The wire is vaporized by the return current flow, creating a straight lightning plasma channel in its place. This method allows scientific research on lightning to proceed in a more controlled and predictable manner.[81]
The International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, Florida typically uses rocket triggered lightning in their research studies.
  • Laser-triggered
Since the 1970s,[82][83][84][85][86][87] researchers have attempted to trigger lightning strikes by means of infrared or ultraviolet lasers, which create a channel of ionized gas through which the lightning would be conducted to ground. Such triggering of lightning is intended to protect rocket launching pads, electric power facilities, and other sensitive targets.[88][89][90][91][92]
In New Mexico, U.S., scientists tested a new terawatt laser that provoked electrical activity in storm clouds. They fired ultra-fast pulses from an extremely powerful laser, sending several terawatts into the clouds to call down electrical discharges in storm clouds over the region. The laser beams create channels of ionized molecules known as "filaments", which lead electricity through the clouds before the lightning strikes, playing the role of lightning rods. The researchers generated filaments that were too short-lived to trigger a real lightning strike; nevertheless, a boost in electrical activity within the clouds was registered. According to the French and German scientists who ran the experiment, fast pulses sent from the laser may eventually be able to provoke lightning strikes on demand.[93] Statistical analysis showed that their laser pulses indeed enhanced electrical activity in the thundercloud where they were aimed; in effect, they generated small local discharges at the positions of the plasma channels.[94]

Physical manifestations


Lightning-induced remanent magnetization (LIRM) mapped during a magnetic field gradient survey of an archaeological site located in Wyoming, United States.

Lightning-induced magnetism

The movement of electrical charges produces a magnetic field (see electromagnetism). The intense currents of a lightning discharge create a fleeting but very strong magnetic field. Where the lightning current path passes through rock, soil, or metal, these materials can become permanently magnetized. This effect is known as lightning-induced remanent magnetism, or LIRM. These currents follow the least resistive path, often horizontally near the surface,[95][96] but sometimes vertically, where faults, ore bodies, or ground water offer a less resistive path.[97] One theory suggests that lodestones, natural magnets encountered in ancient times, were created in this manner.[98]

Lightning-induced magnetic anomalies can be mapped in the ground,[99][100] and analysis of magnetized materials can confirm lightning was the source of the magnetization[101] and provide an estimate of the peak current of the lightning discharge.[102]

Solar wind and cosmic rays

Some high-energy cosmic rays produced by distant supernovae, as well as solar particles from the solar wind, enter the atmosphere and electrify the air, creating pathways for lightning bolts.[103]

In culture


In many cultures, lightning has been viewed as part of a deity or as a deity in its own right. These include the Greek god Zeus, the Aztec god Tlaloc, the Mayan god K, Slavic mythology's Perun, the Baltic Pērkons/Perkūnas, Thor in Norse mythology, Ukko in Finnish mythology, the Hindu god Indra, and the Shinto god Raijin. In the traditional religion of the African Bantu tribes, lightning is a sign of the ire of the gods. Verses in the Jewish religion and in Islam also ascribe supernatural importance to lightning. In Christianity, the Second Coming of Jesus is compared to lightning.[Matthew 24:27][Luke 17:24]

The expression "Lightning never strikes twice (in the same place)" is similar to "Opportunity never knocks twice" in the vein of a "once in a lifetime" opportunity, i.e., something that is generally considered improbable. Lightning occurs frequently and more so in specific areas. Since various factors alter the probability of strikes at any given location, repeat lightning strikes have a very low probability (but are not impossible).[104][105] Similarly, "A bolt from the blue" refers to something totally unexpected.

Some political parties use lightning flashes as a symbol of power, such as the People's Action Party in Singapore, the British Union of Fascists during the 1930s, and the National States' Rights Party in the United States during the 1950s.[106] The Schutzstaffel, the paramilitary wing of the Nazi Party, used the Sig rune in their logo which symbolizes lightning. The German word Blitzkrieg, which means "lightning war", was a major offensive strategy of the German army during World War II.

In French and Italian, the expression for "Love at first sight" is Coup de foudre and Colpo di fulmine, respectively, which literally translated means "lightning strike". Some European languages have a separate word for lightning which strikes the ground (as opposed to lightning in general); often it is a cognate of the English word "rays". The name of New Zealand's most celebrated thoroughbred horse, Phar Lap, derives from the shared Zhuang and Thai word for lightning.[107]

The bolt of lightning in heraldry is called a thunderbolt and is shown as a zigzag with non-pointed ends. This symbol usually represents power and speed.

The lightning bolt is used to represent the instantaneous communication capabilities of electrically powered telegraphs and radios. It was a commonly used motif in Art Deco design, especially the zig-zag Art Deco design of the late 1920s.[108] The lightning bolt is a common insignia for military communications units throughout the world. A lightning bolt is also the NATO symbol for a signal asset.

Why scientists adjust temperature records, and how you can too


What does Paraguay have to do with the global temperature record? dany13/Flickr, CC BY
An article in The Australian today has once again raised the question of why scientists, in trying to estimate how the global and regional surface temperatures of Earth may have changed over the past century or so, “adjust” the raw temperature data.

It is important to note, first off, that no data have been “altered” or destroyed in this process – all the raw data remain available for investigation by anyone who has the inclination (as I’ll show below).

But this process can lead to large adjustments to the raw data, and in at least some instances the adjusted data can suggest long-term warming even when the raw data indicate cooling.

This appears to have happened at the Paraguay stations mentioned in the article – the raw temperature recordings suggest cooling over decades, whereas warming appears after the raw data have been adjusted by NASA and NOAA.

The figure below shows this at one station in Paraguay. I have obtained the data from this station (raw and adjusted) from Berkeley Earth, an independent group who have, quite separately from NASA and NOAA, checked global temperature data for these so-called “inhomogeneities” and adjusted the raw data themselves.

Their results provide an independent check for the NASA and NOAA groups doing this adjustment. The raw data (blue line) at this station suggest cooling, whereas the “adjusted” data (pink line) indicate warming.
Berkeley Earth, Author provided

The problem with thermometers

So why do scientists “adjust” the raw data – why don’t they simply accept that the raw data are the best estimate of how the temperature has changed over decades?

The underlying problem is that whether or not a specific thermometer reading is a good estimate of the air temperature depends on how the thermometer is exposed.

Take a thermometer and attach it to a wall, and then compare the temperature you read from that thermometer with the reading from an identical thermometer in a modern Stevenson Screen located nearby.
 
This shows the thermometer screens at the Adelaide Observatory in 1888. Charles Todd established a very long experiment (it ran well into the 20th century) to compare temperature observations in the three different exposures illustrated here. His data show that the summer daytime temperatures measured in the typical 19th century thermometer exposure, the open stand shown on the right, were biased warm compared with the typical 20th century exposure in the Stevenson Screen shown on the left. So simply comparing the raw data from the 19th century with data from the 20th century would be misleading.
On a warm summer day the thermometer on the wall will usually record higher temperatures than the one in the Stevenson Screen. As well, any trees around the observing site, or buildings or roads or car parks, as well as many other factors, can all affect the recorded temperature.

Because nearly all long-term records of temperature anywhere in the world have been affected by such factors, for instance as a rural station or airport gets surrounded by suburbs and roads, the scientific thing to do is to make sure you take these factors into account when trying to get a picture of how the world may have warmed.

How to adjust data scientifically

It would be nice if we had a complete record of all the changes to all the temperature recording sites around the world, listing in forensic detail when and where stations were moved (even a few metres can make a difference), when trees around the site were planted or removed, when car parks nearby were built, and all the buildings for tens of metres around the site.

And we would need all these details stretching back over many decades.

Unfortunately, no station exists with such comprehensive information for the last century or so. But even if these “metadata” did exist, we could not just use the raw data at a single station alone to work out how much to adjust the raw data for changes in exposure and location.

So scientists identify other “comparison” stations (as many as they can find) with which to compare the raw data at the “target” station (such as the station in the graph from Paraguay). These comparison stations are selected because their temperature variations from year-to-year generally track the changes at the “target” station.

If, however, the target station temperatures change suddenly and that change is not matched by similar changes in all the comparison stations, it is reasonable to conclude that something has happened to corrupt the raw data at the target station. The relationship between temperatures at the target and comparison stations is then used to adjust the raw data at the target station.
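The comparison-station logic can be caricatured in a few lines of Python: form the target-minus-neighbours difference series, find the largest step change in it, and shift the earlier segment to remove the step. This toy sketch is not the actual NASA, NOAA or Berkeley Earth algorithm (those use much more careful statistical tests); the function name and the single-breakpoint assumption are mine.

```python
# Toy homogenization sketch: detect one step change in the target station
# relative to the mean of its comparison stations, and remove it.
# Not any agency's real algorithm; a single-breakpoint illustration only.

def adjust_target(target, comparisons):
    """target: list of yearly values; comparisons: list of lists of equal length."""
    n = len(target)
    neighbour_mean = [sum(c[i] for c in comparisons) / len(comparisons)
                      for i in range(n)]
    diff = [t - m for t, m in zip(target, neighbour_mean)]

    # Find the breakpoint where the mean of the difference series shifts most.
    best_k, best_shift = None, 0.0
    for k in range(1, n):
        before = sum(diff[:k]) / k
        after = sum(diff[k:]) / (n - k)
        if abs(after - before) > abs(best_shift):
            best_k, best_shift = k, after - before

    # Shift the pre-breakpoint segment so the two sides line up.
    return [v + best_shift if i < best_k else v for i, v in enumerate(target)]

# A station whose first five readings are biased 1 degree low (e.g. after a
# move), while two comparison stations recorded no change:
print(adjust_target([-1.0] * 5 + [0.0] * 5, [[0.0] * 10, [0.0] * 10]))
```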

The details of the way this adjustment process is done vary between the groups who do it. The result of the adjustments made by Berkeley Earth for a station in Paraguay is shown in the figure above.

Both the raw and adjusted data show warming over the past forty or so years, but before the mid-1960s the data are quite different. Even looking at the raw data alone, a scientist would worry that some change in exposure has corrupted the data, because of the sudden large drop in temperature.
But the Berkeley Earth scientists have objectively adjusted for this drop, through their comparisons with other stations in the region. Their adjustments remove the sudden drop in the mid-1960s, and indicate that temperatures in the region have been warming for more than the 40 years shown by the raw data.

Do it yourself

I encourage anyone who worries about the sort of adjustments made by NASA or NOAA, or in Australia by the Bureau of Meteorology, to go to the Berkeley Earth website, look at their independent results, and perhaps even do some calculations themselves to check what these other groups have done.

But don’t just think that the raw data will tell you much more than that the way the thermometer has been exposed has changed, or a car park has been built nearby, or a suburb now surrounds what once was a rural station.

You need to do the science and adjust for these corrupting factors, if you really want to work out how global and regional temperatures have changed. I’ve never been to Paraguay and I know almost nothing about the station whose data are in the graphs above. But scientists around the world have made these data available so we can do this work from our desktops.

I think this is great fun, but then I’m a nerdy meteorologist, so I would think that, wouldn’t I?

Logic


From Wikipedia, the free encyclopedia

Logic (from the Ancient Greek: λογική, logike)[1] is the use and study of valid reasoning.[2][3] The study of logic features most prominently in the subjects of philosophy, mathematics, and computer science.

Logic was studied in several ancient civilizations, including India,[4] China,[5] Persia and Greece. In the West, logic was established as a formal discipline by Aristotle, who gave it a fundamental place in philosophy. The study of logic was part of the classical trivium, which also included grammar and rhetoric. Logic was further extended by Al-Farabi, who categorized it into two separate groups (idea and proof). Later, Avicenna revived the study of logic and developed the relationship between temporalis and the implication. In the East, logic was developed by Buddhists and Jains.

Logic is often divided into three parts: inductive reasoning, abductive reasoning, and deductive reasoning.

The study of logic

The concept of logical form is central to logic, it being held that the validity of an argument is determined by its logical form, not by its content. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logics.
  • Informal logic is the study of natural language arguments. The study of fallacies is an especially important branch of informal logic. The dialogues of Plato[6] are good examples of informal logic.
  • Formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as a particular application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known formal study of logic. Modern formal logic follows and expands on Aristotle.[7] In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language.
  • Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference.[8][9] Symbolic logic is often divided into two branches: propositional logic and predicate logic.
  • Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, set theory, and recursion theory.

Logical form

Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference. If one considers the notion of form too philosophically loaded, one could say that formalizing simply means translating English sentences into the language of logic.
This is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form and complexity that makes their use in inference impractical. It requires, first, ignoring those grammatical features irrelevant to logic (such as gender and declension, if the argument is in Latin), replacing conjunctions irrelevant to logic (such as "but") with logical conjunctions like "and" and replacing ambiguous, or alternative logical expressions ("any", "every", etc.) with expressions of a standard type (such as "all", or the universal quantifier ∀).

Second, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all As are Bs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on.

That the concept of form is fundamental to logic was already recognized in ancient times. Aristotle uses variable letters to represent valid inferences in Prior Analytics, leading Jan Łukasiewicz to say that the introduction of variables was "one of Aristotle's greatest inventions".[10] According to the followers of Aristotle (such as Ammonius), only the logical principles stated in schematic terms belong to logic, not those given in concrete terms. The concrete terms "man", "mortal", etc., are analogous to the substitution values of the schematic placeholders A, B, C, which were called the "matter" (Greek hyle) of the inference.

The fundamental difference between modern formal logic and traditional, or Aristotelian logic, lies in their differing analysis of the logical form of the sentences they treat.
  • In the traditional view, the form of the sentence consists of (1) a subject (e.g., "man") plus a sign of quantity ("all" or "some" or "no"); (2) the copula, which is of the form "is" or "is not"; (3) a predicate (e.g., "mortal"). Thus: all men are mortal. The logical constants such as "all", "no" and so on, plus sentential connectives such as "and" and "or" were called "syncategorematic" terms (from the Greek kategorei – to predicate, and syn – together with). This is a fixed scheme, where each judgment has an identified quantity and copula, determining the logical form of the sentence.
  • According to the modern view, the fundamental form of a simple sentence is given by a recursive schema, involving logical connectives, such as a quantifier with its bound variable, which are joined by juxtaposition to other sentences, which in turn may have logical structure.
  • The modern view is more complex, since a single judgement of Aristotle's system involves two or more logical connectives. For example, the sentence "All men are mortal" involves, in term logic, two non-logical terms "is a man" (here M) and "is mortal" (here D): the sentence is given by the judgement A(M,D). In predicate logic, the sentence involves the same two non-logical concepts, here analyzed as m(x) and d(x), and the sentence is given by ∀x. (m(x) → d(x)), involving the logical connectives for universal quantification and implication.
  • But equally, the modern view is more powerful. Medieval logicians recognized the problem of multiple generality, where Aristotelian logic is unable to satisfactorily render such sentences as "Some guys have all the luck", because both quantities "all" and "some" may be relevant in an inference, but the fixed scheme that Aristotle used allows only one to govern the inference. Just as linguists recognize recursive structure in natural languages, it appears that logic needs recursive structure.
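The modern reading of "all men are mortal" can also be checked mechanically over a finite domain: a universally quantified implication is true exactly when no element satisfies the antecedent while failing the consequent. The domain and predicate names below are made up purely for illustration:

```python
# Evaluating the predicate-logic form "for all x, m(x) implies d(x)"
# over a small, made-up finite domain. Names are illustrative only.
domain = ["socrates", "plato", "fido"]
is_man = {"socrates": True, "plato": True, "fido": False}.get
is_mortal = {"socrates": True, "plato": True, "fido": True}.get

# ∀x. (m(x) → d(x)): material implication p → q is (not p) or q.
all_men_are_mortal = all((not is_man(x)) or is_mortal(x) for x in domain)
print(all_men_are_mortal)  # True
```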

Deductive and inductive reasoning, and abductive inference

Deductive reasoning concerns what follows necessarily from given premises (if a, then b). However, inductive reasoning, the process of deriving a reliable generalization from observations, has sometimes been included in the study of logic. Similarly, it is important to distinguish between deductive validity and inductive validity (called "cogency"). An inference is deductively valid if and only if there is no possible situation in which all the premises are true but the conclusion false. An inductive argument can be neither valid nor invalid; its premises give only some degree of probability, but not certainty, to its conclusion.

The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. Inductive validity on the other hand requires us to define a reliable generalization of some set of observations. The task of providing this definition may be approached in various ways, some less formal than others; some of these definitions may use mathematical models of probability. For the most part this discussion of logic deals only with deductive logic.
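For propositional arguments, the semantic definition of deductive validity (no possible situation makes every premise true and the conclusion false) can be tested directly by enumerating all truth assignments. A small sketch; the helper name and lambda encoding are illustrative:

```python
# Deductive validity by brute force: an argument is valid iff no truth
# assignment makes every premise true while making the conclusion false.
from itertools import product

def is_valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample situation
    return True

# Modus ponens: from "a" and "a implies b", infer "b" -- valid.
premises = [lambda e: e["a"], lambda e: (not e["a"]) or e["b"]]
print(is_valid(premises, lambda e: e["b"], ["a", "b"]))   # True

# Affirming the consequent: from "b" and "a implies b", infer "a" -- invalid.
premises2 = [lambda e: e["b"], lambda e: (not e["a"]) or e["b"]]
print(is_valid(premises2, lambda e: e["a"], ["a", "b"]))  # False
```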

Abduction[11] is a form of logical inference that goes from observation to a hypothesis that accounts for the reliable data (observation) and seeks to explain relevant evidence. The American philosopher Charles Sanders Peirce (1839–1914) first introduced the term as "guessing".[12] Peirce said that to abduce a hypothetical explanation a from an observed surprising circumstance b is to surmise that a may be true because then b would be a matter of course.[13] Thus, to abduce a from b involves determining that a is sufficient (or nearly sufficient), but not necessary, for b.

Consistency, validity, soundness, and completeness

Among the important properties that logical systems can have:
  • Consistency, which means that no theorem of the system contradicts another.[14]
  • Validity, which means that the system's rules of proof never allow a false inference from true premises. A logical system has the property of soundness when the logical system has the property of validity and uses only premises that prove true (or, in the case of axioms, are true by definition).[14]
  • Completeness, which means that if a formula is true, it can be proven (if it is true, it is a theorem of the system).
  • Soundness, a term with multiple separate meanings, which creates a bit of confusion throughout the literature. Most commonly, soundness refers to logical systems, meaning that if some formula can be proven in a system, then it is true in the relevant model/structure (if A is a theorem, it is true). This is the converse of completeness. A distinct, peripheral use of soundness refers to arguments, meaning that the premises of a valid argument are true in the actual world.
Some logical systems do not have all four properties. As an example, Kurt Gödel's incompleteness theorems show that sufficiently complex formal systems of arithmetic cannot be both consistent and complete;[9] however, first-order predicate logics not extended by specific axioms to be arithmetic formal systems with equality can be both complete and consistent.[15]

Rival conceptions of logic

Logic arose (see below) from a concern with correctness of argumentation. Modern logicians usually wish to ensure that logic studies just those arguments that arise from appropriately general forms of inference. For example, Thomas Hofweber writes in the Stanford Encyclopedia of Philosophy that logic "does not, however, cover good reasoning as a whole. That is the job of the theory of rationality. Rather it deals with inferences whose validity can be traced back to the formal features of the representations that are involved in that inference, be they linguistic, mental, or other representations".[16]
By contrast, Immanuel Kant argued that logic should be conceived as the science of judgement, an idea taken up in Gottlob Frege's logical and philosophical work. But Frege's work is ambiguous in the sense that it is both concerned with the "laws of thought" as well as with the "laws of truth", i.e. it both treats logic in the context of a theory of the mind, and treats logic as the study of abstract formal structures.

History

Aristotle, 384–322 BCE.

In Europe, logic was first developed by Aristotle.[17] Aristotelian logic became widely accepted in science and mathematics and remained in wide use in the West until the early 19th century.[18] Aristotle's system of logic was responsible for the introduction of hypothetical syllogism,[19] temporal modal logic,[20][21] and inductive logic,[22] as well as influential terms such as terms, predicables, syllogisms and propositions. In Europe during the later medieval period, major efforts were made to show that Aristotle's ideas were compatible with Christian faith. During the High Middle Ages, logic became a main focus of philosophers, who would engage in critical logical analyses of philosophical arguments, often using variations of the methodology of scholasticism. In 1323, William of Ockham's influential Summa Logicae was released. By the 18th century, the structured approach to arguments had degenerated and fallen out of favour, as depicted in Holberg's satirical play Erasmus Montanus.

The Chinese logical philosopher Gongsun Long (c. 325–250 BCE) proposed the paradox "One and one cannot become two, since neither becomes two."[23] In China, the tradition of scholarly investigation into logic, however, was repressed by the Qin dynasty following the legalist philosophy of Han Feizi.

In India, innovations in the scholastic school, called Nyaya, continued from ancient times into the early 18th century with the Navya-Nyaya school. By the 16th century, it developed theories resembling modern logic, such as Gottlob Frege's "distinction between sense and reference of proper names" and his "definition of number", as well as the theory of "restrictive conditions for universals" anticipating some of the developments in modern set theory.[24] Since 1824, Indian logic attracted the attention of many Western scholars, and has had an influence on important 19th-century logicians such as Charles Babbage, Augustus De Morgan, and George Boole.[25] In the 20th century, Western philosophers like Stanislaw Schayer and Klaus Glashoff have explored Indian logic more extensively.

The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of symbolic logic (now called mathematical logic). In 1854, George Boole published An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, introducing symbolic logic and the principles of what is now known as Boolean logic. In 1879, Gottlob Frege published Begriffsschrift, which inaugurated modern logic with the invention of quantifier notation. From 1910 to 1913, Alfred North Whitehead and Bertrand Russell published Principia Mathematica[8] on the foundations of mathematics, attempting to derive mathematical truths from axioms and inference rules in symbolic logic. In 1931, Gödel raised serious problems with the foundationalist program and logic ceased to focus on such issues.

The development of logic since Frege, Russell, and Wittgenstein had a profound influence on the practice of philosophy and the perceived nature of philosophical problems (see Analytic philosophy), and Philosophy of mathematics. Logic, especially sentential logic, is implemented in computer logic circuits and is fundamental to computer science. Logic is commonly taught by university philosophy departments, often as a compulsory discipline.

Types of logic

Syllogistic logic

The Organon was Aristotle's body of work on logic; the Prior Analytics constitutes the first explicit work in formal logic, introducing the syllogistic.[26] Syllogistic logic, also known as term logic, analyses judgements into propositions consisting of two terms that are related by one of a fixed number of relations, and expresses inferences by means of syllogisms: two premise propositions sharing a common term, together with a conclusion relating the two terms that the premises do not share.
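
The syllogistic pattern can be illustrated extensionally. The following Python sketch (the individuals and predicate names are invented for illustration) models the classic "Barbara" form, "All M are P; all S are M; therefore all S are P", as transitivity of set inclusion:

```python
# Barbara: All M are P; all S are M; therefore all S are P.
# Modelled extensionally: each term denotes a set of individuals.
mortals = {"Socrates", "Plato", "Fido"}   # P: things that are mortal
humans  = {"Socrates", "Plato"}           # M: things that are human
greeks  = {"Socrates", "Plato"}           # S: things that are Greek

major_premise = humans <= mortals   # All humans are mortal
minor_premise = greeks <= humans    # All Greeks are human
conclusion    = greeks <= mortals   # All Greeks are mortal

# If both premises hold, the conclusion must hold,
# because set inclusion is transitive.
assert major_premise and minor_premise
assert conclusion
```

On this reading, the validity of the form does not depend on which individuals the terms happen to denote, only on the inclusion relations between them.
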
Aristotle's work was regarded in classical times and from medieval times in Europe and the Middle East as the very picture of a fully worked out system. However, it was not alone: the Stoics proposed a system of propositional logic that was studied by medieval logicians. Also, the problem of multiple generality was recognized in medieval times. Nonetheless, problems with syllogistic logic were not seen as being in need of revolutionary solutions.

Today, some academics hold that Aristotle's system has little more than historical value (though there is some current interest in extending term logics), having been made obsolete by the advent of propositional logic and the predicate calculus. Others use Aristotle in argumentation theory to help develop and critically question the argumentation schemes that are used in artificial intelligence and legal arguments.

Propositional logic (sentential logic)

A propositional calculus or logic (also a sentential calculus) is a formal system in which formulae representing propositions can be formed by combining atomic propositions using logical connectives, and in which a system of formal proof rules establishes certain formulae as "theorems".
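
As an illustrative sketch (not part of the formal proof-theoretic apparatus the article describes), the semantic side of a propositional calculus can be checked by brute force in Python: a formula that comes out true under every assignment of truth values to its atoms is a tautology, and in a sound and complete calculus these coincide with the theorems.

```python
from itertools import product

def implies(p, q):
    """Classical material implication."""
    return (not p) or q

def is_tautology(formula, num_atoms):
    """True iff the formula holds under every truth assignment."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_atoms))

# Two classical theorems:
assert is_tautology(lambda p: p or not p, 1)              # excluded middle
assert is_tautology(lambda p, q: implies(p and q, p), 2)  # conjunction elimination
# Not a theorem: affirming the consequent
assert not is_tautology(lambda p, q: implies(implies(p, q) and q, p), 2)
```

This exhaustive check is feasible only because propositional logic is decidable; each formula has finitely many atoms and hence finitely many assignments.
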

Predicate logic

Predicate logic is the generic term for symbolic formal systems such as first-order logic, second-order logic, many-sorted logic, and infinitary logic.

Predicate logic provides an account of quantifiers general enough to express a wide set of arguments occurring in natural language. Aristotelian syllogistic logic specifies a small number of forms that the relevant part of the involved judgements may take. Predicate logic allows sentences to be analysed into subject and argument in several additional ways—allowing predicate logic to solve the problem of multiple generality that had perplexed medieval logicians.
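
The problem of multiple generality can be made concrete over a finite domain, where quantifiers reduce to `all()` and `any()`. In this Python sketch the two-place relation `loves` is an arbitrary invented example; the point is that the two quantifier orders for "everyone loves someone" come apart:

```python
# Multiple generality: quantifier order matters.
domain = [0, 1, 2]
loves = lambda x, y: y == (x + 1) % 3   # hypothetical two-place predicate

# "Everyone loves someone": for all x there exists y with loves(x, y)
forall_exists = all(any(loves(x, y) for y in domain) for x in domain)

# "Someone is loved by everyone": there exists y loved by all x
exists_forall = any(all(loves(x, y) for x in domain) for y in domain)

assert forall_exists        # true for this relation
assert not exists_forall    # false: no single y is loved by all
```

Syllogistic logic, lacking a notation for nested quantifiers, cannot cleanly distinguish these two readings; predicate logic represents them as distinct formulae.
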

The development of predicate logic is usually attributed to Gottlob Frege, who is also credited as one of the founders of analytical philosophy, but the formulation of predicate logic most often used today is the first-order logic presented in Principles of Mathematical Logic by David Hilbert and Wilhelm Ackermann in 1928. The analytical generality of predicate logic allowed the formalization of mathematics, drove the investigation of set theory, and allowed the development of Alfred Tarski's approach to model theory. It provides the foundation of modern mathematical logic.

Frege's original system of predicate logic was second-order, rather than first-order. Second-order logic is most prominently defended (against the criticism of Willard Van Orman Quine and others) by George Boolos and Stewart Shapiro.

Modal logic

In languages, modality deals with the phenomenon that sub-parts of a sentence may have their semantics modified by special verbs or modal particles. For example, "We go to the games" can be modified to give "We should go to the games", and "We can go to the games" and perhaps "We will go to the games". More abstractly, we might say that modality affects the circumstances in which we take an assertion to be satisfied.
Aristotle's logic is in large part concerned with the theory of non-modalized logic. Although there are passages in his work, such as the famous sea-battle argument in De Interpretatione § 9, that are now seen as anticipations of modal logic and its connection with potentiality and time, the earliest formal system of modal logic was developed by Avicenna, who ultimately developed a theory of "temporally modalized" syllogistic.[27]

While the study of necessity and possibility remained important to philosophers, little logical innovation happened until the landmark investigations of Clarence Irving Lewis in 1918, who formulated a family of rival axiomatizations of the alethic modalities. His work unleashed a torrent of new work on the topic, expanding the kinds of modality treated to include deontic logic and epistemic logic. The seminal work of Arthur Prior applied the same formal language to treat temporal logic and paved the way for the marriage of the two subjects. Saul Kripke discovered (contemporaneously with rivals) his theory of frame semantics, which revolutionized the formal technology available to modal logicians and gave a new graph-theoretic way of looking at modality that has driven many applications in computational linguistics and computer science, such as dynamic logic.
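
The core of Kripke's frame semantics can be sketched in a few lines of Python (the worlds, accessibility relation, and valuation below are an invented toy model): a frame is a set of worlds with an accessibility relation, and "necessarily p" holds at a world exactly when p holds at every world accessible from it.

```python
# A toy Kripke model: worlds, an accessibility relation, and a valuation.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
true_at = {"p": {"w2", "w3"}}   # the proposition p holds at w2 and w3

def box(prop, world):
    """Necessity: prop is true at every world accessible from here."""
    return all(v in true_at[prop] for v in access[world])

def diamond(prop, world):
    """Possibility: prop is true at some accessible world."""
    return any(v in true_at[prop] for v in access[world])

assert box("p", "w1")          # every world seen from w1 satisfies p
assert not diamond("p", "w3")  # w3 sees no worlds, so possibility fails
assert box("p", "w3")          # ...while necessity holds vacuously
```

Constraints on the accessibility relation (reflexivity, transitivity, and so on) then correspond to the different modal axiom systems, which is what gives the semantics its graph-theoretic character.
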

Informal reasoning

The motivation for the study of logic in ancient times was clear: to learn to distinguish good from bad arguments, and so to become more effective in argument and oratory, and perhaps also to become a better person. Half of the works of Aristotle's Organon treat inference as it occurs in an informal setting, side by side with the development of the syllogistic, and in the Aristotelian school these informal works on logic were seen as complementary to Aristotle's treatment of rhetoric.
This ancient motivation is still alive, although it no longer takes centre stage in the picture of logic; typically dialectical logic forms the heart of a course in critical thinking, a compulsory course at many universities.

Argumentation theory is the study and research of informal logic, fallacies, and critical questions as they relate to everyday and practical situations. Specific types of dialogue can be analyzed and questioned to reveal premises, conclusions, and fallacies. Argumentation theory is now applied in artificial intelligence and law.

Mathematical logic

Mathematical logic comprises two distinct areas of research: the first is the application of the techniques of formal logic to mathematics and mathematical reasoning; the second, in the other direction, is the application of mathematical techniques to the representation and analysis of formal logic.[28]

The earliest use of mathematics and geometry in relation to logic and philosophy goes back to the ancient Greeks such as Euclid, Plato, and Aristotle.[29] Many other ancient and medieval philosophers applied mathematical ideas and methods to their philosophical claims.[30]

One of the boldest attempts to apply logic to mathematics was undoubtedly the logicism pioneered by philosopher-logicians such as Gottlob Frege and Bertrand Russell: the idea was that mathematical theories were logical tautologies, and the programme was to show this by means of a reduction of mathematics to logic.[8] The various attempts to carry this out met with a series of failures, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems.

Both the statement of Hilbert's program and its refutation by Gödel depended upon their work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory.[31] Despite the negative nature of the incompleteness theorems, Gödel's completeness theorem, a result in model theory and another application of mathematics to logic, can be understood as showing how close logicism came to being true: every rigorously defined mathematical theory can be exactly captured by a first-order logical theory; Frege's proof calculus is enough to describe the whole of mathematics, though not equivalent to it. Thus we see how complementary the two areas of mathematical logic have been.[citation needed]

If proof theory and model theory have been the foundation of mathematical logic, they have been but two of the four pillars of the subject. Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic, from Cantor's theorem, through the status of the Axiom of Choice and the question of the independence of the continuum hypothesis, to the modern debate on large cardinal axioms.

Recursion theory captures the idea of computation in logical and arithmetic terms; its most classical achievements are the undecidability of the Entscheidungsproblem by Alan Turing, and his presentation of the Church–Turing thesis.[32] Today recursion theory is mostly concerned with the more refined problem of complexity classes—when is a problem efficiently solvable?—and the classification of degrees of unsolvability.[33]

Philosophical logic

Philosophical logic deals with formal descriptions of ordinary, non-specialist ("natural") language. Most philosophers assume that the bulk of everyday reasoning can be captured in logic if a method or methods to translate ordinary language into that logic can be found. Philosophical logic is essentially a continuation of the traditional discipline called "logic" before the invention of mathematical logic. Philosophical logic has a much greater concern with the connection between natural language and logic. As a result, philosophical logicians have contributed a great deal to the development of non-standard logics (e.g. free logics, tense logics) as well as various extensions of classical logic (e.g. modal logics) and non-standard semantics for such logics (e.g. Kripke's supervaluationism in the semantics of logic).
Logic and the philosophy of language are closely related. Philosophy of language concerns the study of how our language engages and interacts with our thinking. Logic has an immediate impact on other areas of study: studying logic and the relationship between logic and ordinary speech can help a person better structure their own arguments and critique the arguments of others. Many popular arguments are filled with errors because so many people are untrained in logic and unaware of how to formulate an argument correctly.[citation needed]

Computational logic

Logic cut to the heart of computer science as it emerged as a discipline: Alan Turing's work on the Entscheidungsproblem followed from Kurt Gödel's work on the incompleteness theorems. The notion of the general-purpose computer that came from this work was of fundamental importance to the designers of computer machinery in the 1940s.

In the 1950s and 1960s, researchers predicted that when human knowledge could be expressed using logic with mathematical notation, it would be possible to create a machine that reasons, or artificial intelligence. This was more difficult than expected because of the complexity of human reasoning. In logic programming, a program consists of a set of axioms and rules. Logic programming systems such as Prolog compute the consequences of the axioms and rules in order to answer a query.
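
The axioms-and-rules idea can be sketched with a naive forward-chaining loop in Python. Note that this is only an illustration of computing consequences: Prolog itself answers queries by backward chaining with resolution over Horn clauses, and the facts and rule names below are invented for the example.

```python
# A toy forward-chaining consequence engine.
# A rule is (body, head): if every atom in body is derived, derive head.
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)", "greek(socrates)"}, "famous(socrates)"),
]
facts = {"human(socrates)", "greek(socrates)"}

def consequences(facts, rules):
    """Repeatedly apply rules until no new facts are derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

assert "famous(socrates)" in consequences(facts, rules)
```

Answering a query then amounts to checking whether it belongs to the set of derivable consequences, which is the declarative reading of a logic program.
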

Today, logic is extensively applied in artificial intelligence and computer science, and these fields provide a rich source of problems in formal and informal logic. Argumentation theory is one good example of how logic is being applied to artificial intelligence. The ACM Computing Classification System, in particular, treats logic as foundational to several of its categories.
Furthermore, computers can be used as tools for logicians. For example, in symbolic logic and mathematical logic, proofs by humans can be computer-assisted. Using automated theorem proving the machines can find and check proofs, as well as work with proofs too lengthy to write out by hand.

Bivalence and the law of the excluded middle; non-classical logics

The logics discussed above are all "bivalent" or "two-valued"; that is, they are most naturally understood as dividing propositions into true and false propositions. Non-classical logics are those systems that reject bivalence.

Hegel developed his own dialectic logic that extended Kant's transcendental logic but also brought it back to ground by assuring us that "neither in heaven nor in earth, neither in the world of mind nor of nature, is there anywhere such an abstract 'either–or' as the understanding maintains. Whatever exists is concrete, with difference and opposition in itself".[34]

In 1910, Nicolai A. Vasiliev extended the law of excluded middle and the law of contradiction and proposed the law of excluded fourth and logic tolerant to contradiction.[35] In the early 20th century Jan Łukasiewicz investigated the extension of the traditional true/false values to include a third value, "possible", so inventing ternary logic, the first multi-valued logic.[citation needed]
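
Łukasiewicz's three-valued tables can be rendered numerically, encoding false, possible, and true as 0, 0.5, and 1; his negation is 1 − p and his implication is min(1, 1 − p + q). A short Python sketch of this encoding:

```python
# Lukasiewicz three-valued logic: 0 = false, 0.5 = possible, 1 = true.
def neg(p):
    return 1 - p

def impl(p, q):
    return min(1, 1 - p + q)

POSSIBLE = 0.5
# Excluded middle is no longer a theorem: p or not-p, with "or" as max,
# takes the value 0.5 when p itself is merely possible.
disjunction = max(POSSIBLE, neg(POSSIBLE))
assert disjunction == 0.5             # neither fully true nor fully false
assert impl(POSSIBLE, POSSIBLE) == 1  # but p -> p remains fully true
```

On the classical values 0 and 1 these connectives agree with the usual two-valued tables, so the system is a genuine extension rather than a replacement.
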

Logics such as fuzzy logic have since been devised with an infinite number of "degrees of truth", represented by a real number between 0 and 1.[36]

Intuitionistic logic was proposed by L. E. J. Brouwer as the correct logic for reasoning about mathematics, based upon his rejection of the law of the excluded middle as part of his intuitionism. Brouwer rejected formalization in mathematics, but his student Arend Heyting studied intuitionistic logic formally, as did Gerhard Gentzen. Intuitionistic logic is of great interest to computer scientists, as it is a constructive logic and can be applied to extracting verified programs from proofs.

Modal logic is not truth-conditional, and so it has often been proposed as a non-classical logic. However, modal logic is normally formalized with the principle of the excluded middle, and its relational semantics is bivalent, so this inclusion is disputable.

"Is logic empirical?"

What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled "Is logic empirical?"[37] Hilary Putnam, building on a suggestion of W. V. Quine, argued that in general the facts of propositional logic have a similar epistemological status as facts about the physical universe, for example as the laws of mechanics or of general relativity, and in particular that what physicists have learned about quantum mechanics provides a compelling case for abandoning certain familiar principles of classical logic: if we want to be realists about the physical phenomena described by quantum theory, then we should abandon the principle of distributivity, substituting for classical logic the quantum logic proposed by Garrett Birkhoff and John von Neumann.[38]
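
The principle at stake can be stated concretely. The distributive law says that p ∧ (q ∨ r) is equivalent to (p ∧ q) ∨ (p ∧ r); a brute-force Python check confirms that it holds under every classical truth assignment, and it is precisely this law that the Birkhoff–von Neumann quantum logic gives up.

```python
from itertools import product

# The distributive law Putnam's argument targets:
#   p and (q or r)  ==  (p and q) or (p and r)
def distributes(p, q, r):
    return (p and (q or r)) == ((p and q) or (p and r))

# Classically, the equivalence holds under all eight assignments.
assert all(distributes(p, q, r) for p, q, r in product([True, False], repeat=3))
```

In quantum logic the propositions form a non-distributive lattice of Hilbert-space subspaces, so no such assignment-by-assignment verification is available there.
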

Another paper of the same name by Sir Michael Dummett argues that Putnam's desire for realism mandates the law of distributivity.[39] Distributivity of logic is essential for the realist's understanding of how propositions are true of the world in just the same way as he has argued the principle of bivalence is. In this way, the question, "Is logic empirical?" can be seen to lead naturally into the fundamental controversy in metaphysics on realism versus anti-realism.

Implication: strict or material?

The notion of implication formalized in classical logic does not comfortably translate into natural language by means of "if ... then ...", due to a number of problems called the paradoxes of material implication.

The first class of paradoxes involves counterfactuals, such as If the moon is made of green cheese, then 2+2=5, which are puzzling because natural language does not support the principle of explosion. Eliminating this class of paradoxes was the reason for C. I. Lewis's formulation of strict implication, which eventually led to more radically revisionist logics such as relevance logic.

The second class of paradoxes involves redundant premises, falsely suggesting that we know the succedent because of the antecedent: thus "if that man gets elected, granny will die" is materially true since granny is mortal, regardless of the man's election prospects. Such sentences violate the Gricean maxim of relevance, and can be modelled by logics that reject the principle of monotonicity of entailment, such as relevance logic.
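
Both classes of paradox fall out mechanically from the classical truth function for "if ... then ...", which is true whenever its antecedent is false or its consequent is true. A Python sketch, reusing the article's two examples:

```python
def implies(p, q):
    """Classical material implication."""
    return (not p) or q

# First class: a false antecedent makes the conditional vacuously true.
moon_is_cheese = False
assert implies(moon_is_cheese, 2 + 2 == 5)   # "if the moon is cheese, 2+2=5"

# Second class: a true consequent makes the conditional true regardless
# of whether the antecedent is relevant to it.
granny_is_mortal = True
man_gets_elected = False
assert implies(man_gets_elected, granny_is_mortal)
assert implies(not man_gets_elected, granny_is_mortal)
```

Strict and relevance logics both block these results, the first by requiring the implication to hold necessarily and the second by requiring the antecedent to be genuinely used in deriving the consequent.
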

Tolerating the impossible

Hegel was deeply critical of any simplified notion of the Law of Non-Contradiction. His critique was based on Leibniz's idea that this law of logic also requires a sufficient ground specifying from what point of view (or time) one says that something cannot contradict itself. A building, for example, both moves and does not move; the ground for the first is our solar system, and for the second the earth. In Hegelian dialectic, the law of non-contradiction, of identity, itself relies upon difference and so is not independently assertable.

Closely related to questions arising from the paradoxes of implication comes the suggestion that logic ought to tolerate inconsistency. Relevance logic and paraconsistent logic are the most important approaches here, though the concerns are different: a key consequence of classical logic and some of its rivals, such as intuitionistic logic, is that they respect the principle of explosion, which means that the logic collapses if it is capable of deriving a contradiction. Graham Priest, the main proponent of dialetheism, has argued for paraconsistency on the grounds that there are, in fact, true contradictions.[40]

Rejection of logical truth

The philosophical vein of various kinds of skepticism contains many kinds of doubt and rejection of the various bases on which logic rests, such as the idea of logical form, correct inference, or meaning, typically leading to the conclusion that there are no logical truths. Observe that this is opposite to the usual views in philosophical skepticism, where logic directs skeptical enquiry to doubt received wisdoms, as in the work of Sextus Empiricus.

Friedrich Nietzsche provides a strong example of the rejection of the usual basis of logic: his radical rejection of idealization led him to reject truth as a "... mobile army of metaphors, metonyms, and anthropomorphisms—in short ... metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins."[41] His rejection of truth did not lead him to reject the idea of either inference or logic completely, but rather suggested that "logic [came] into existence in man's head [out] of illogic, whose realm originally must have been immense. Innumerable beings who made inferences in a way different from ours perished".[42] Thus there is the idea that logical inference has a use as a tool for human survival, but that its existence does not support the existence of truth, nor does it have a reality beyond the instrumental: "Logic, too, also rests on assumptions that do not correspond to anything in the real world".[43]

This position held by Nietzsche, however, has come under extreme scrutiny for several reasons. He fails to demonstrate the validity of his claims and merely asserts them rhetorically. However, since he is criticising the established criteria of validity, this does not necessarily undermine his position, for one could argue that the demonstration of validity provided in the name of logic was just as rhetorically based. Some philosophers, such as Jürgen Habermas, claim his position is self-refuting and accuse Nietzsche of not even having a coherent perspective, let alone a theory of knowledge.[44] Again, it is unclear whether this is a decisive critique, for the criteria of coherency and consistent theory are exactly what is under question. Georg Lukács, in his book The Destruction of Reason, asserts that, "Were we to study Nietzsche's statements in this area from a logico-philosophical angle, we would be confronted by a dizzy chaos of the most lurid assertions, arbitrary and violently incompatible."[45] Still, in this respect his "theory" would be a much better depiction of a confused and chaotic reality than any consistent and compatible theory. Bertrand Russell, in A History of Western Philosophy, described Nietzsche's claims as irrational, writing that "He is fond of expressing himself paradoxically and with a view to shocking conventional readers."[46]
