
Sunday, August 4, 2019

Earthquake engineering

From Wikipedia, the free encyclopedia
 
Earthquake engineering is an interdisciplinary branch of engineering that designs and analyzes structures, such as buildings and bridges, with earthquakes in mind. Its overall goal is to make such structures more resistant to earthquakes. An earthquake (or seismic) engineer aims to construct structures that will not be damaged in minor shaking and will avoid serious damage or collapse in a major earthquake. Earthquake engineering is the scientific field concerned with protecting society, the natural environment, and the man-made environment from earthquakes by limiting the seismic risk to socio-economically acceptable levels. Traditionally, it has been narrowly defined as the study of the behavior of structures and geo-structures subject to seismic loading; it is considered a subset of structural engineering, geotechnical engineering, mechanical engineering, chemical engineering, applied physics, etc. However, the tremendous costs experienced in recent earthquakes have led to an expansion of its scope to encompass disciplines from the wider field of civil engineering, mechanical engineering, nuclear engineering, and from the social sciences, especially sociology, political science, economics, and finance.

The main objectives of earthquake engineering are:
  • Foresee the potential consequences of strong earthquakes on urban areas and civil infrastructure.
  • Design, construct and maintain structures that perform under earthquake exposure up to expectations and in compliance with building codes.
A properly engineered structure does not necessarily have to be extremely strong or expensive. It has to be properly designed to withstand the seismic effects while sustaining an acceptable level of damage.

Shake-table crash testing of a regular building model (left) and a base-isolated building model (right) at UCSD

Seismic loading


Seismic loading means application of an earthquake-generated excitation on a structure (or geo-structure). It happens at contact surfaces of a structure either with the ground, with adjacent structures, or with gravity waves from a tsunami. The loading that is expected at a given location on the Earth's surface is estimated by engineering seismology. It is related to the seismic hazard of the location.

Seismic performance

Earthquake or seismic performance defines a structure's ability to sustain its main functions, such as its safety and serviceability, at and after a particular earthquake exposure. A structure is normally considered safe if it does not endanger the lives and well-being of those in or around it by partially or completely collapsing. A structure may be considered serviceable if it is able to fulfill the operational functions for which it was designed.

Basic concepts of earthquake engineering, implemented in the major building codes, assume that a building should survive a rare, very severe earthquake by sustaining significant damage but without globally collapsing. On the other hand, it should remain operational during more frequent, less severe seismic events.

Seismic performance assessment

Engineers need to know the quantified level of the actual or anticipated seismic performance associated with the direct damage to an individual building subject to a specified ground shaking. Such an assessment may be performed either experimentally or analytically.

Experimental assessment

Experimental evaluations are expensive tests that are typically done by placing a (scaled) model of the structure on a shake-table that simulates the earth shaking and observing its behavior. Such kinds of experiments were first performed more than a century ago. Only recently has it become possible to perform 1:1 scale testing on full structures. 

Due to the costly nature of such tests, they tend to be used mainly for understanding the seismic behavior of structures, validating models and verifying analysis methods. Thus, once properly validated, computational models and numerical procedures tend to carry the major burden for the seismic performance assessment of structures.

Analytical/Numerical assessment

Snapshot from a shake-table video of destructive testing of a six-story non-ductile concrete building
 
Seismic performance assessment or seismic structural analysis is a powerful tool of earthquake engineering which utilizes detailed modelling of the structure together with methods of structural analysis to gain a better understanding of seismic performance of building and non-building structures. The technique as a formal concept is a relatively recent development. 

In general, seismic structural analysis is based on the methods of structural dynamics. For decades, the most prominent instrument of seismic analysis has been the earthquake response spectrum method, which also contributed to the concepts embodied in today's building codes.

However, such methods are good only for linear elastic systems, being largely unable to model the structural behavior when damage (i.e., non-linearity) appears. Numerical step-by-step integration proved to be a more effective method of analysis for multi-degree-of-freedom structural systems with significant non-linearity under a transient process of ground motion excitation. Use of the finite element method is one of the most common approaches for analyzing non-linear soil-structure interaction computer models.
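
A minimal sketch, in Python, of the step-by-step integration idea applied to a linear single-degree-of-freedom oscillator using the Newmark average-acceleration scheme may help make this concrete. The ground motion, period and damping below are placeholder values; a real analysis would use recorded accelerograms and a full multi-degree-of-freedom (and possibly non-linear) model.

    import numpy as np

    def newmark_sdof(ag, dt, period=0.5, damping=0.05, beta=0.25, gamma=0.5):
        """Average-acceleration Newmark integration of a linear SDOF oscillator
        subjected to ground acceleration ag (m/s^2) sampled at time step dt (s)."""
        m = 1.0                                # unit mass; response scales linearly
        wn = 2.0 * np.pi / period              # natural circular frequency
        k = m * wn ** 2                        # stiffness
        c = 2.0 * damping * m * wn             # viscous damping coefficient

        u, v = 0.0, 0.0
        a = (-m * ag[0] - c * v - k * u) / m   # initial acceleration
        keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
        u_hist = np.zeros(len(ag))

        for i in range(1, len(ag)):
            p = (-m * ag[i]
                 + m * (u / (beta * dt ** 2) + v / (beta * dt) + (1.0 / (2.0 * beta) - 1.0) * a)
                 + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                        + dt * (gamma / (2.0 * beta) - 1.0) * a))
            u_new = p / keff
            v_new = (gamma / (beta * dt) * (u_new - u)
                     + (1.0 - gamma / beta) * v
                     + dt * (1.0 - gamma / (2.0 * beta)) * a)
            a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                     - (1.0 / (2.0 * beta) - 1.0) * a)
            u, v, a = u_new, v_new, a_new
            u_hist[i] = u
        return u_hist

    # Placeholder "ground motion": a short sine pulse standing in for a recorded accelerogram.
    dt = 0.01
    ag = 3.0 * np.sin(2.0 * np.pi * 2.0 * np.arange(0.0, 2.0, dt))  # m/s^2
    print("peak displacement (m):", float(np.max(np.abs(newmark_sdof(ag, dt)))))

Repeating such a run over a range of oscillator periods and recording the peak responses is, in essence, how a response spectrum is constructed.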

Basically, numerical analysis is conducted in order to evaluate the seismic performance of buildings. Performance evaluations are generally carried out by using nonlinear static pushover analysis or nonlinear time-history analysis. In such analyses, it is essential to achieve accurate non-linear modeling of structural components such as beams, columns, beam-column joints, shear walls, etc. Thus, experimental results play an important role in determining the modeling parameters of individual components, especially those that are subject to significant non-linear deformations. The individual components are then assembled to create a full non-linear model of the structure, which is analyzed to evaluate the performance of the building.
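
Where the paragraph above mentions non-linear component models calibrated to experiments, a minimal sketch of one of the simplest such idealizations, an elastic-perfectly-plastic spring that remembers its plastic offset, may be illustrative. The stiffness and yield force below are placeholder values that would normally come from test data.

    def epp_spring(k=2.0e4, fy=150.0):
        """Elastic-perfectly-plastic spring (stiffness k in kN/m, yield force fy in kN).
        Returns a function that maps an imposed displacement (m) to a restoring
        force (kN), remembering the accumulated plastic offset between calls."""
        state = {"up": 0.0}                    # plastic (permanent) displacement
        def force(u):
            f_trial = k * (u - state["up"])    # elastic trial force
            if f_trial > fy:                   # yielding in one direction
                state["up"] = u - fy / k
                return fy
            if f_trial < -fy:                  # yielding in the other direction
                state["up"] = u + fy / k
                return -fy
            return f_trial
        return force

    # Trace a cyclic displacement history (placeholder values, metres) through the spring.
    spring = epp_spring()
    for u in (0.002, 0.010, 0.0, -0.010, 0.0, 0.010):
        print(f"u = {u:+.3f} m  ->  F = {spring(u):+7.1f} kN")

The loop traces a hysteresis loop: once the spring yields, unloading follows the elastic stiffness from the shifted origin, which is the basic mechanism by which such models dissipate energy.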

The capabilities of the structural analysis software are a major consideration in the above process, as they restrict the possible component models, the analysis methods available and, most importantly, the numerical robustness. The latter becomes a major consideration for structures that venture into the non-linear range and approach global or local collapse, as the numerical solution becomes increasingly unstable and thus difficult to reach. There are several commercially available finite element analysis packages, such as CSI-SAP2000 and CSI-PERFORM-3D, MTR/SASSI, Scia Engineer-ECtools, ABAQUS, and Ansys, all of which can be used for the seismic performance evaluation of buildings. Moreover, there are research-based finite element analysis platforms such as OpenSees, MASTODON (which is based on the MOOSE Framework), RUAUMOKO and the older DRAIN-2D/3D, several of which are now open source.

Research for earthquake engineering

 
Research for earthquake engineering means both field and analytical investigation or experimentation intended for discovery and scientific explanation of earthquake engineering related facts, revision of conventional concepts in the light of new findings, and practical application of the developed theories. 

The National Science Foundation (NSF) is the main United States government agency that supports fundamental research and education in all fields of earthquake engineering. In particular, it focuses on experimental, analytical and computational research on design and performance enhancement of structural systems.

E-Defense Shake Table
 
The Earthquake Engineering Research Institute (EERI) is a leader in dissemination of earthquake engineering research related information both in the U.S. and globally. 

A definitive list of earthquake engineering research related shaking tables around the world may be found in Experimental Facilities for Earthquake Engineering Simulation Worldwide. The most prominent of them is now the E-Defense Shake Table in Japan.

Major U.S. research programs

Large High Performance Outdoor Shake Table, UCSD, NEES network
The NSF Hazard Mitigation and Structural Engineering program (HMSE) supports research on new technologies for improving the behavior and response of structural systems subject to earthquake hazards; fundamental research on safety and reliability of constructed systems; innovative developments in analysis and model-based simulation of structural behavior and response, including soil-structure interaction; design concepts that improve structure performance and flexibility; and application of new control techniques for structural systems.

NSF also supports the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), which advances knowledge discovery and innovation for earthquake and tsunami loss reduction of the nation's civil infrastructure and develops new experimental simulation techniques and instrumentation.

The NEES network features 14 geographically-distributed, shared-use laboratories that support several types of experimental work: geotechnical centrifuge research, shake-table tests, large-scale structural testing, tsunami wave basin experiments, and field site research. Participating universities include: Cornell University; Lehigh University; Oregon State University; Rensselaer Polytechnic Institute; University at Buffalo, State University of New York; University of California, Berkeley; University of California, Davis; University of California, Los Angeles; University of California, San Diego; University of California, Santa Barbara; University of Illinois, Urbana-Champaign; University of Minnesota; University of Nevada, Reno; and the University of Texas, Austin.

NEES at Buffalo testing facility
 
The equipment sites (labs) and a central data repository are connected to the global earthquake engineering community via the NEEShub website. The NEES website is powered by HUBzero software developed at Purdue University for nanoHUB specifically to help the scientific community share resources and collaborate. The cyberinfrastructure, connected via Internet2, provides interactive simulation tools, a simulation tool development area, a curated central data repository, animated presentations, user support, telepresence, mechanism for uploading and sharing resources, and statistics about users and usage patterns.

This cyberinfrastructure allows researchers to: securely store, organize and share data within a standardized framework in a central location; remotely observe and participate in experiments through the use of synchronized real-time data and video; collaborate with colleagues to facilitate the planning, performance, analysis, and publication of research experiments; and conduct computational and hybrid simulations that may combine the results of multiple distributed experiments and link physical experiments with computer simulations to enable the investigation of overall system performance. 

These resources jointly provide the means for collaboration and discovery to improve the seismic design and performance of civil and mechanical infrastructure systems.

Earthquake simulation

The very first earthquake simulations were performed by statically applying some horizontal inertia forces based on scaled peak ground accelerations to a mathematical model of a building. With the further development of computational technologies, static approaches began to give way to dynamic ones. 
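
As a rough sketch of that equivalent-static idea, the fragment below takes a base shear as an assumed fraction of the building weight and distributes it over the stories in proportion to story weight times height. The coefficient, weights and heights are placeholders; real code procedures (for example, the ASCE 7 equivalent lateral force method) add height exponents and many modifiers.

    # Placeholder story weights (kN) and heights above the base (m) for a 4-story frame.
    weights = [3000.0, 3000.0, 3000.0, 2500.0]
    heights = [3.5, 7.0, 10.5, 14.0]
    Cs = 0.12                                  # assumed seismic response coefficient

    V = Cs * sum(weights)                      # static base shear
    den = sum(w * h for w, h in zip(weights, heights))
    story_forces = [V * w * h / den for w, h in zip(weights, heights)]

    for level, F in enumerate(story_forces, start=1):
        print(f"level {level}: lateral force = {F:7.1f} kN")
    print(f"base shear = {V:7.1f} kN")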

Dynamic experiments on building and non-building structures may be physical, like shake-table testing, or virtual. In both cases, to verify a structure's expected seismic performance, some researchers prefer to deal with so-called "real time-histories", though these cannot be "real" for a hypothetical earthquake specified by either a building code or by some particular research requirements. Therefore, there is a strong incentive to use an earthquake simulation, which is a seismic input that possesses only the essential features of a real event.

Sometimes earthquake simulation is understood as a re-creation of local effects of a strong earth shaking.

Structure simulation

Concurrent experiments with two building models which are kinematically equivalent to a real prototype.
 
Theoretical or experimental evaluation of anticipated seismic performance mostly requires a structure simulation which is based on the concept of structural likeness or similarity. Similarity is some degree of analogy or resemblance between two or more objects. The notion of similarity rests either on exact or approximate repetitions of patterns in the compared items.

In general, a building model is said to have similarity with the real object if the two share geometric similarity, kinematic similarity and dynamic similarity. The most vivid and effective type of similarity is the kinematic one. Kinematic similarity exists when the paths and velocities of moving particles of a model and its prototype are similar. 

The ultimate level of kinematic similarity is kinematic equivalence when, in the case of earthquake engineering, the time-histories of lateral displacement at each story of the model and its prototype would be the same.
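
A small sketch of how such similitude is used in practice, assuming a reduced-scale model built to be shaken at prototype accelerations (acceleration scale of 1), from which the time, velocity and frequency scale factors follow; the 1:4 scale and 2.0 s period are placeholder values.

    import math

    def kinematic_scale_factors(length_scale):
        """Model-to-prototype scale factors for a reduced-scale model shaken at
        prototype accelerations (acceleration scale = 1)."""
        lam = length_scale
        return {
            "length/displacement": lam,
            "acceleration": 1.0,
            "time": math.sqrt(lam),            # follows from a = d / t**2 with a-scale = 1
            "velocity": math.sqrt(lam),        # v = a * t
            "frequency": 1.0 / math.sqrt(lam),
        }

    # Example: a 1:4 model (length scale 0.25) of a prototype with a 2.0 s fundamental period.
    factors = kinematic_scale_factors(0.25)
    print(factors)
    print("model period ≈", 2.0 * factors["time"], "s")   # about 1.0 s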

Seismic vibration control

Seismic vibration control is a set of technical means aimed to mitigate seismic impacts in building and non-building structures. All seismic vibration control devices may be classified as passive, active or hybrid where:
  • passive control devices have no feedback capability between them, structural elements and the ground;
  • active control devices incorporate real-time recording instrumentation on the ground integrated with earthquake input processing equipment and actuators within the structure;
  • hybrid control devices have combined features of active and passive control systems.
When seismic waves reach the base of a building and start to penetrate it, their energy flow density, due to reflections, is reduced dramatically, usually by up to 90%. However, the remaining portions of the incident waves during a major earthquake still bear a huge devastating potential.

After the seismic waves enter a superstructure, there are a number of ways to control them in order to mitigate their damaging effect and improve the building's seismic performance.
Mausoleum of Cyrus, the oldest base-isolated structure in the world
 
Mass damper devices, abbreviated correspondingly as TMD for the tuned (passive), AMD for the active, and HMD for the hybrid mass dampers, have been studied and installed in high-rise buildings, predominantly in Japan, for a quarter of a century.

However, there is quite another approach: partial suppression of the seismic energy flow into the superstructure, known as seismic or base isolation.

For this, some pads are inserted into or under all major load-carrying elements in the base of the building, which should substantially decouple the superstructure from its substructure resting on the shaking ground.

The first evidence of earthquake protection by using the principle of base isolation was discovered in Pasargadae, a city in ancient Persia, now Iran, and dates back to the 6th century BCE. Below, there are some samples of seismic vibration control technologies of today.

Dry-stone walls control

Dry-stone walls of Machu Picchu Temple of the Sun, Peru

People of Inca civilization were masters of the polished 'dry-stone walls', called ashlar, where blocks of stone were cut to fit together tightly without any mortar. The Incas were among the best stonemasons the world has ever seen and many junctions in their masonry were so perfect that even blades of grass could not fit between the stones. 

Peru is a highly seismic land, and for centuries mortar-free construction proved to be more earthquake-resistant than construction using mortar. The stones of the dry-stone walls built by the Incas could move slightly and resettle without the walls collapsing, a passive structural control technique employing both the principle of energy dissipation (Coulomb damping) and that of suppressing resonant amplifications.

Tuned mass damper

Tuned mass damper in Taipei 101, one of the world's tallest skyscrapers
 
Typically, tuned mass dampers are huge concrete blocks mounted in skyscrapers or other structures that move in opposition to the resonant-frequency oscillations of the structure by means of some sort of spring mechanism.

The Taipei 101 skyscraper needs to withstand typhoon winds and earthquake tremors common in this area of Asia/Pacific. For this purpose, a steel pendulum weighing 660 metric tonnes that serves as a tuned mass damper was designed and installed atop the structure. Suspended from the 92nd to the 88th floor, the pendulum sways to decrease resonant amplifications of lateral displacements in the building caused by earthquakes and strong gusts.
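
For a pendulum-type damper like this, the swing period at small amplitudes is governed mainly by the cable length, T ≈ 2π√(L/g), so tuning to a target building period fixes the length. The sketch below inverts that relation and also lists the classical Den Hartog tuning rules for a simple spring-mounted tuned mass damper; the 6.0 s period and 1% mass ratio are placeholders, not Taipei 101 design values.

    import math

    g = 9.81  # m/s^2

    def pendulum_length(period):
        """Cable length giving a small-swing pendulum period T = 2*pi*sqrt(L/g)."""
        return g * (period / (2.0 * math.pi)) ** 2

    def den_hartog_tuning(mass_ratio):
        """Classical Den Hartog optimum for a spring-mounted TMD attached to an
        idealized undamped primary structure: returns the damper-to-structure
        frequency ratio and the damper damping ratio."""
        mu = mass_ratio
        f_opt = 1.0 / (1.0 + mu)
        zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
        return f_opt, zeta_opt

    # Placeholder values: a 6.0 s target period and a 1% damper-to-structure mass ratio.
    print("pendulum length for a 6.0 s period ≈", round(pendulum_length(6.0), 1), "m")
    f_opt, zeta_opt = den_hartog_tuning(0.01)
    print("frequency ratio ≈", round(f_opt, 3), " damper damping ratio ≈", round(zeta_opt, 3))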

Hysteretic dampers

A hysteretic damper is intended to provide better and more reliable seismic performance than that of a conventional structure by increasing the dissipation of seismic input energy. The major groups of hysteretic dampers used for this purpose are the following (idealized force laws for two of them are sketched after the list):
  • Fluid viscous dampers (FVDs)
Viscous dampers have the benefit of being a supplemental damping system. They have an oval hysteretic loop, and the damping is velocity dependent. While some minor maintenance is potentially required, viscous dampers generally do not need to be replaced after an earthquake. While more expensive than other damping technologies, they can be used for both seismic and wind loads and are the most commonly used hysteretic damper.
  • Friction dampers (FDs)
Friction dampers tend to be available in two major types, linear and rotational, and dissipate energy as heat. The damper operates on the principle of a Coulomb damper. Depending on the design, friction dampers can experience stick-slip phenomena and cold welding. The main disadvantage is that friction surfaces can wear over time, and for this reason they are not recommended for dissipating wind loads. When used in seismic applications wear is not a problem and there is no required maintenance. They have a rectangular hysteretic loop, and as long as the building is sufficiently elastic they tend to settle back to their original positions after an earthquake.
  • Metallic yielding dampers (MYDs)
Metallic yielding dampers, as the name implies, yield in order to absorb the earthquake's energy. This type of damper absorbs a large amount of energy; however, they must be replaced after an earthquake and may prevent the building from settling back to its original position.
  • Viscoelastic dampers (VEDs)
Viscoelastic dampers are useful in that they can be used for both wind and seismic applications, but they are usually limited to small displacements. There is some concern as to the reliability of the technology, as some brands have been banned from use in buildings in the United States.
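
As a rough illustration of the force laws behind two of the damper families above, the sketch below evaluates an idealized fluid viscous damper, whose force grows with a power of velocity, and an idealized Coulomb friction damper, whose force has constant magnitude while sliding. All coefficients are placeholder values.

    import math

    def viscous_damper_force(velocity, C=500.0, alpha=0.5):
        """Idealized fluid viscous damper: F = C * sign(v) * |v|**alpha (velocity-dependent)."""
        return math.copysign(C * abs(velocity) ** alpha, velocity)

    def friction_damper_force(velocity, mu=0.3, normal_force=2000.0):
        """Idealized Coulomb friction damper: constant-magnitude force while sliding.
        The value returned is the force developed in the damper, which resists the
        relative motion across it; the crude zero check below ignores the stick phase."""
        if velocity == 0.0:
            return 0.0
        return math.copysign(mu * normal_force, velocity)

    for v in (-0.4, -0.1, 0.1, 0.4):           # sliding velocities in m/s (placeholders)
        print(f"v = {v:+.2f} m/s   viscous: {viscous_damper_force(v):+8.1f} kN"
              f"   friction: {friction_damper_force(v):+8.1f} kN")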

Base isolation

Base isolation seeks to prevent the kinetic energy of the earthquake from being transferred into elastic energy in the building. These technologies do so by isolating the structure from the ground, thus enabling it to move somewhat independently. The degree to which the energy is transferred into the structure and how the energy is dissipated will vary depending on the technology used.
  • Lead rubber bearing
LRB being tested at the UCSD Caltrans-SRMD facility
 
Lead rubber bearing, or LRB, is a type of base isolation employing heavy damping. It was invented by Bill Robinson, a New Zealander.

A heavy damping mechanism incorporated in vibration control technologies, and particularly in base isolation devices, is often considered a valuable means of suppressing vibrations and thus enhancing a building's seismic performance. However, for rather pliant systems such as base-isolated structures, with a relatively low bearing stiffness but high damping, the so-called "damping force" may turn out to be the main pushing force during a strong earthquake. The video shows a lead rubber bearing being tested at the UCSD Caltrans-SRMD facility. The bearing is made of rubber with a lead core. It was a uniaxial test in which the bearing was also under a full structure load. Many buildings and bridges, both in New Zealand and elsewhere, are protected with lead dampers and lead and rubber bearings. Te Papa Tongarewa, the national museum of New Zealand, and the New Zealand Parliament Buildings have been fitted with the bearings. Both are in Wellington, which sits on an active fault.
  • Springs-with-damper base isolator
Springs-with-damper close-up
 
A springs-with-damper base isolator installed under a three-story town-house in Santa Monica, California is shown in the photo, taken prior to the 1994 Northridge earthquake. It is a base isolation device conceptually similar to the lead rubber bearing.

One of two three-story town-houses like this, which was well instrumented for recording of both vertical and horizontal accelerations on its floors and the ground, survived severe shaking during the Northridge earthquake and left valuable recorded information for further study.
  • Simple roller bearing
Simple roller bearing is a base isolation device which is intended for protection of various building and non-building structures against potentially damaging lateral impacts of strong earthquakes. 

This metallic bearing support may be adapted, with certain precautions, as a seismic isolator to skyscrapers and buildings on soft ground. Recently, it has been employed under the name of metallic roller bearing for a housing complex (17 stories) in Tokyo, Japan.
  • Friction pendulum bearing
FPB shake-table testing

Friction pendulum bearing (FPB) is another name for the friction pendulum system (FPS). It is based on three pillars:
  • articulated friction slider;
  • spherical concave sliding surface;
  • enclosing cylinder for lateral displacement restraint.
The snapshot above is from a shake-table test of an FPB system supporting a rigid building model; the isolated period of such a bearing is estimated in the sketch below.
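
One convenient property of the spherical sliding surface is that, neglecting friction, the isolated period depends on the radius of curvature rather than on the supported weight, roughly T = 2π√(R/g). A minimal sketch with placeholder radii:

    import math

    g = 9.81  # m/s^2

    def fps_period(radius):
        """Sliding period of a friction pendulum bearing with spherical radius R,
        T = 2*pi*sqrt(R/g); independent of the supported weight (friction neglected)."""
        return 2.0 * math.pi * math.sqrt(radius / g)

    for R in (1.0, 2.0, 4.0):                  # radii of curvature in metres (placeholders)
        print(f"R = {R:.1f} m  ->  isolated period ≈ {fps_period(R):.2f} s")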

Seismic design

Seismic design is based on authorized engineering procedures, principles and criteria meant to design or retrofit structures subject to earthquake exposure. Those criteria are only as sound as the contemporary state of knowledge about the earthquake engineering of structures. Therefore, a building design that exactly follows seismic code regulations does not guarantee safety against collapse or serious damage.

The price of poor seismic design may be enormous. Nevertheless, seismic design has always been a trial and error process whether it was based on physical laws or on empirical knowledge of the structural performance of different shapes and materials. 

 
San Francisco after the 1906 earthquake and fire
 
To practice seismic design, seismic analysis or seismic evaluation of new and existing civil engineering projects, an engineer should normally pass an examination on Seismic Principles, which, in the State of California, includes:
  • Seismic Data and Seismic Design Criteria
  • Seismic Characteristics of Engineered Systems
  • Seismic Forces
  • Seismic Analysis Procedures
  • Seismic Detailing and Construction Quality Control
To build up complex structural systems, seismic design largely uses the same relatively small number of basic structural elements (to say nothing of vibration control devices) as any non-seismic design project. 

Normally, according to building codes, structures are designed to "withstand" the largest earthquake of a certain probability that is likely to occur at their location. This means the loss of life should be minimized by preventing collapse of the buildings. 
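
That "certain probability" is usually stated as a probability of exceedance over the design life, which maps to a mean return period if earthquake occurrence is treated as a Poisson process. The sketch below shows the relationship for the 10%-in-50-years basis used in many codes; the numbers are illustrative.

    import math

    def return_period(p_exceedance, design_life_years):
        """Mean return period implied by a probability of exceedance over a design
        life, assuming Poisson (memoryless) earthquake occurrence."""
        return -design_life_years / math.log(1.0 - p_exceedance)

    def exceedance_probability(return_period_years, design_life_years):
        """Inverse relation: probability of at least one exceedance in the design life."""
        return 1.0 - math.exp(-design_life_years / return_period_years)

    print("return period for 10% in 50 years ≈", round(return_period(0.10, 50)), "years")
    print("probability for a 475-year event in 50 years ≈",
          round(exceedance_probability(475, 50), 3))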

Seismic design is carried out by understanding the possible failure modes of a structure and providing the structure with appropriate strength, stiffness, ductility, and configuration to ensure those modes cannot occur.

Seismic design requirements

Seismic design requirements depend on the type of the structure, the locality of the project and its authorities, which stipulate applicable seismic design codes and criteria. For instance, the California Department of Transportation's requirements, called the Seismic Design Criteria (SDC) and aimed at the design of new bridges in California, incorporate an innovative performance-based approach to seismic design.

The most significant feature in the SDC design philosophy is a shift from a force-based assessment of seismic demand to a displacement-based assessment of demand and capacity. Thus, the newly adopted displacement approach is based on comparing the elastic displacement demand to the inelastic displacement capacity of the primary structural components while ensuring a minimum level of inelastic capacity at all potential plastic hinge locations. 
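
A minimal sketch of a displacement-based check in that spirit: an estimated displacement demand is compared with the displacement capacity of a component, and the implied ductility demand is reported. The numbers are placeholders, not SDC values.

    def displacement_check(demand, yield_disp, capacity):
        """Compare a displacement demand against the yield and ultimate displacement
        of a component; returns the demand/capacity ratio, the ductility demand,
        and whether the check passes."""
        dc_ratio = demand / capacity
        ductility_demand = demand / yield_disp
        return dc_ratio, ductility_demand, dc_ratio <= 1.0

    # Placeholder values (metres) for a single bridge column.
    dc, mu, ok = displacement_check(demand=0.15, yield_disp=0.05, capacity=0.20)
    print(f"D/C = {dc:.2f}, ductility demand = {mu:.1f}, acceptable = {ok}")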

In addition to the designed structure itself, seismic design requirements may include ground stabilization underneath the structure: sometimes, heavily shaken ground breaks up, which leads to collapse of the structure sitting upon it. The following topics should be of primary concern: liquefaction; dynamic lateral earth pressures on retaining walls; seismic slope stability; earthquake-induced settlement.

Nuclear facilities should not jeopardise their safety in case of earthquakes or other hostile external events. Therefore, their seismic design is based on criteria far more stringent than those applying to non-nuclear facilities. The Fukushima I nuclear accidents and damage to other nuclear facilities that followed the 2011 Tōhoku earthquake and tsunami have, however, drawn attention to ongoing concerns over Japanese nuclear seismic design standards and caused many other governments to re-evaluate their nuclear programs. Doubt has also been expressed over the seismic evaluation and design of certain other plants, including the Fessenheim Nuclear Power Plant in France.

Failure modes

Failure mode is the manner by which an earthquake induced failure is observed. It, generally, describes the way the failure occurs. Though costly and time consuming, learning from each real earthquake failure remains a routine recipe for advancement in seismic design methods. Below, some typical modes of earthquake-generated failures are presented.

Typical damage to unreinforced masonry buildings at earthquakes
 
The lack of reinforcement coupled with poor mortar and inadequate roof-to-wall ties can result in substantial damage to an unreinforced masonry building. Severely cracked or leaning walls are some of the most common earthquake damage. Also hazardous is the damage that may occur between the walls and roof or floor diaphragms. Separation between the framing and the walls can jeopardize the vertical support of roof and floor systems. 

Soft story collapse due to inadequate shear strength at ground level, Loma Prieta earthquake
 
Soft story effect. Absence of adequate stiffness on the ground level caused damage to this structure. A close examination of the image reveals that the rough board siding, once covered by a brick veneer, has been completely dismantled from the studwall. Only the rigidity of the floor above combined with the support on the two hidden sides by continuous walls, not penetrated with large doors as on the street sides, is preventing full collapse of the structure. 

 
Soil liquefaction. Where the soil consists of loose granular deposits that tend to compact and develop excess pore water pressure of sufficient magnitude, liquefaction of those loose saturated deposits may result in non-uniform settlement and tilting of structures. This caused major damage to thousands of buildings in Niigata, Japan during the 1964 earthquake.

Car smashed by landslide rock, 2008 Sichuan earthquake
 
Landslide rock fall. A landslide is a geological phenomenon which includes a wide range of ground movement, including rock falls. Typically, the action of gravity is the primary driving force for a landslide to occur though in this case there was another contributing factor which affected the original slope stability: the landslide required an earthquake trigger before being released.

Effects of pounding against adjacent building, Loma Prieta
 
Pounding against adjacent building. This is a photograph of the collapsed five-story tower, St. Joseph's Seminary, Los Altos, California which resulted in one fatality. During Loma Prieta earthquake, the tower pounded against the independently vibrating adjacent building behind. A possibility of pounding depends on both buildings' lateral displacements which should be accurately estimated and accounted for. 

Effects of completely shattered joints of concrete frame, Northridge
 
In the Northridge earthquake, the Kaiser Permanente concrete frame office building had joints completely shattered, revealing inadequate confinement steel, which resulted in the second story collapse. In the transverse direction, composite end shear walls, consisting of two wythes of brick and a layer of shotcrete that carried the lateral load, peeled apart because of inadequate through-ties and failed.
Shifting from foundation, Whittier

Sliding off its foundation: a relatively rigid residential building structure slid during the 1987 Whittier Narrows earthquake. The magnitude 5.9 earthquake pounded the Garvey West Apartment building in Monterey Park, California, and shifted its superstructure about 10 inches to the east on its foundation.
 
Earthquake damage in Pichilemu.
 
If a superstructure is not mounted on a base isolation system, its shifting on the foundation should be prevented.

Insufficient shear reinforcement let main rebars buckle, Northridge
 
A reinforced concrete column burst in the Northridge earthquake due to insufficient shear reinforcement, which allowed the main reinforcement to buckle outwards. The deck unseated at the hinge and failed in shear. As a result, the La Cienega-Venice underpass section of the 10 Freeway collapsed.

Support-columns and upper deck failure, Loma Prieta earthquake
 
Loma Prieta earthquake: side view of reinforced concrete support-columns failure which triggered the upper deck collapse onto the lower deck of the two-level Cypress viaduct of Interstate Highway 880, Oakland, CA. 

Failure of retaining wall due to ground movement, Loma Prieta
 
Retaining wall failure at Loma Prieta earthquake in Santa Cruz Mountains area: prominent northwest-trending extensional cracks up to 12 cm (4.7 in) wide in the concrete spillway to Austrian Dam, the north abutment

Lateral spreading mode of ground failure, Loma Prieta

Ground shaking triggered soil liquefaction in a subsurface layer of sand, producing differential lateral and vertical movement in an overlying carapace of unliquified sand and silt. This mode of ground failure, termed lateral spreading, is a principal cause of liquefaction-related earthquake damage.

Beams and pier columns diagonal cracking, 2008 Sichuan earthquake
 
Severely damaged building of Agriculture Development Bank of China after 2008 Sichuan earthquake: most of the beams and pier columns are sheared. Large diagonal cracks in masonry and veneer are due to in-plane loads while abrupt settlement of the right end of the building should be attributed to a landfill which may be hazardous even without any earthquake.

Tsunami strikes Ao Nang, Thailand
 
Twofold tsunami impact: hydraulic pressure of sea waves and inundation. The Indian Ocean earthquake of December 26, 2004, with its epicenter off the west coast of Sumatra, Indonesia, triggered a series of devastating tsunamis, killing more than 230,000 people in eleven countries by inundating surrounding coastal communities with huge waves up to 30 meters (100 feet) high.

Earthquake-resistant construction

Earthquake construction means implementation of seismic design to enable building and non-building structures to live through the anticipated earthquake exposure up to expectations and in compliance with the applicable building codes.

Construction of Pearl River Tower X-bracing to resist lateral forces of earthquakes and winds
 
Design and construction are intimately related. To achieve good workmanship, detailing of the members and their connections should be as simple as possible. As with any construction in general, earthquake construction is a process that consists of the building, retrofitting or assembling of infrastructure given the construction materials available.

The destabilizing action of an earthquake on constructions may be direct (seismic motion of the ground) or indirect (earthquake-induced landslides, soil liquefaction and waves of tsunami).

A structure might have all the appearances of stability, yet offer nothing but danger when an earthquake occurs. The crucial fact is that, for safety, earthquake-resistant construction techniques are as important as quality control and using correct materials. An earthquake contractor should be registered in the state/province/country of the project location (depending on local regulations), bonded and insured.

To minimize possible losses, the construction process should be organized keeping in mind that an earthquake may strike at any time prior to the end of construction.

Each construction project requires a qualified team of professionals who understand the basic features of seismic performance of different structures as well as construction management.

Adobe structures

Partially collapsed adobe building in Westmorland, California
 
Around thirty percent of the world's population lives or works in earth-made construction. Adobe, a type of mud brick, is one of the oldest and most widely used building materials. The use of adobe is very common in some of the world's most hazard-prone regions, traditionally across Latin America, Africa, the Indian subcontinent and other parts of Asia, the Middle East and Southern Europe.

Adobe buildings are considered very vulnerable in strong quakes. However, multiple ways of seismic strengthening of new and existing adobe buildings are available.

Key factors for the improved seismic performance of adobe construction are:
  • Quality of construction.
  • Compact, box-type layout.
  • Seismic reinforcement.

Limestone and sandstone structures

Base-isolated City and County Building, Salt Lake City, Utah
 
Limestone is very common in architecture, especially in North America and Europe. Many landmarks across the world are made of limestone. Many medieval churches and castles in Europe are made of limestone and sandstone masonry. They are long-lasting materials, but their rather heavy weight is not beneficial for adequate seismic performance.

Application of modern technology to seismic retrofitting can enhance the survivability of unreinforced masonry structures. As an example, from 1973 to 1989, the Salt Lake City and County Building in Utah was exhaustively renovated and repaired with an emphasis on preserving historical accuracy in appearance. This was done in concert with a seismic upgrade that placed the weak sandstone structure on base isolation foundation to better protect it from earthquake damage.

Timber frame structures

Anne Hvide's House, Denmark (1560)
 
Timber framing dates back thousands of years and has been used in many parts of the world during various periods, such as in ancient Japan, in Europe and in medieval England, in localities where timber was in good supply and building stone and the skills to work it were not.

The use of timber framing in buildings provides a complete skeletal frame that offers some structural benefits, as the timber frame, if properly engineered, lends itself to better seismic survivability.

Light-frame structures

A two-story wooden-frame for a residential building structure
 
Light-frame structures usually gain seismic resistance from rigid plywood shear walls and wood structural panel diaphragms. Special provisions for seismic load-resisting systems for all engineered wood structures require consideration of diaphragm ratios, horizontal and vertical diaphragm shears, and connector/fastener values. In addition, collectors, or drag struts, to distribute shear along a diaphragm length are required.
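
As a rough illustration of a diaphragm shear calculation of the kind mentioned above, the sketch below idealizes a flexible, simple-span diaphragm as a deep beam spanning between two shear walls. The load, span and depth are placeholder values, and real designs follow the applicable wood design standards.

    def simple_span_diaphragm(w, span, depth):
        """Deep-beam idealization of a flexible, simple-span diaphragm carrying a
        uniform in-plane seismic load w (kN/m) between two shear walls 'span' (m)
        apart, with 'depth' (m) measured parallel to those walls."""
        end_shear = w * span / 2.0                   # reaction delivered to each wall
        unit_shear = end_shear / depth               # shear demand per metre of diaphragm
        chord_force = w * span ** 2 / (8.0 * depth)  # tension/compression in the chords
        return unit_shear, chord_force

    # Placeholder numbers: 4 kN/m seismic load, 18 m span, 10 m deep diaphragm.
    v, T = simple_span_diaphragm(4.0, 18.0, 10.0)
    print(f"unit shear ≈ {v:.2f} kN/m, chord force ≈ {T:.1f} kN")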

Reinforced masonry structures

Reinforced hollow masonry wall
 
A construction system where steel reinforcement is embedded in the mortar joints of masonry or placed in holes that are then filled with concrete or grout is called reinforced masonry.

The devastating 1933 Long Beach earthquake revealed that masonry construction should be improved immediately. The California State Code then made reinforced masonry mandatory.

There are various practices and techniques to achieve reinforced masonry. The most common type is reinforced hollow unit masonry. The effectiveness of both vertical and horizontal reinforcement strongly depends on the type and quality of the masonry, i.e., the masonry units and mortar.

To achieve a ductile behavior of masonry, it is necessary that the shear strength of the wall is greater than the flexural strength.

Reinforced concrete structures

Stressed Ribbon pedestrian bridge over the Rogue River, Grants Pass, Oregon
 
Prestressed concrete cable-stayed bridge over Yangtze river
 
Reinforced concrete is concrete in which steel reinforcement bars (rebars) or fibers have been incorporated to strengthen a material that would otherwise be brittle. It can be used to produce beams, columns, floors or bridges. 

Prestressed concrete is a kind of reinforced concrete used for overcoming concrete's natural weakness in tension. It can be applied to beams, floors or bridges with a longer span than is practical with ordinary reinforced concrete. Prestressing tendons (generally of high tensile steel cable or rods) are used to provide a clamping load which produces a compressive stress that offsets the tensile stress that the concrete compression member would, otherwise, experience due to a bending load. 

To prevent catastrophic collapse in response to earth shaking (in the interest of life safety), a traditional reinforced concrete frame should have ductile joints. Depending upon the methods used and the imposed seismic forces, such buildings may be immediately usable, require extensive repair, or may have to be demolished.

Prestressed structures

A prestressed structure is one whose overall integrity, stability and security depend, primarily, on prestressing. Prestressing means the intentional creation of permanent stresses in a structure for the purpose of improving its performance under various service conditions.

Naturally pre-compressed exterior wall of Colosseum, Rome
 
There are the following basic types of prestressing:
  • Pre-compression (mostly with the structure's own weight)
  • Pretensioning with high-strength embedded tendons
  • Post-tensioning with high-strength bonded or unbonded tendons
Today, the concept of prestressed structure is widely engaged in design of buildings, underground structures, TV towers, power stations, floating storage and offshore facilities, nuclear reactor vessels, and numerous kinds of bridge systems.

The beneficial idea of prestressing was apparently familiar to the architects of ancient Rome; see, for example, the tall attic wall of the Colosseum working as a stabilizing device for the wall piers beneath.

Steel structures

Collapsed section of the San Francisco–Oakland Bay Bridge in response to Loma Prieta earthquake
 
Steel structures are considered mostly earthquake resistant but some failures have occurred. A great number of welded steel moment-resisting frame buildings, which looked earthquake-proof, surprisingly experienced brittle behavior and were hazardously damaged in the 1994 Northridge earthquake. After that, the Federal Emergency Management Agency (FEMA) initiated development of repair techniques and new design approaches to minimize damage to steel moment frame buildings in future earthquakes.

For structural steel seismic design based on the Load and Resistance Factor Design (LRFD) approach, it is very important to assess the ability of a structure to develop and maintain its bearing resistance in the inelastic range. A measure of this ability is ductility, which may be observed in a material itself, in a structural element, or in a whole structure.

As a consequence of the Northridge earthquake experience, the American Institute of Steel Construction has introduced AISC 358 "Pre-Qualified Connections for Special and Intermediate Steel Moment Frames." The AISC Seismic Design Provisions require that all steel moment-resisting frames employ either connections contained in AISC 358 or connections that have been subjected to pre-qualifying cyclic testing.

Prediction of earthquake losses

Earthquake loss estimation is usually defined as a Damage Ratio (DR) which is a ratio of the earthquake damage repair cost to the total value of a building. Probable Maximum Loss (PML) is a common term used for earthquake loss estimation, but it lacks a precise definition. In 1999, ASTM E2026 'Standard Guide for the Estimation of Building Damageability in Earthquakes' was produced in order to standardize the nomenclature for seismic loss estimation, as well as establish guidelines as to the review process and qualifications of the reviewer.

Earthquake loss estimations are also referred to as Seismic Risk Assessments. The risk assessment process generally involves determining the probability of various ground motions coupled with the vulnerability or damage of the building under those ground motions. The results are defined as a percent of building replacement value.
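
A minimal sketch of that idea: each ground-motion level is assigned an annual probability and an expected damage ratio, and the contributions are combined into an expected annual loss expressed against replacement value. All probabilities, damage ratios and values below are made up for illustration.

    # Hypothetical hazard/vulnerability table: (annual probability of that shaking
    # level occurring, expected damage ratio given that it occurs). Illustrative only.
    scenarios = [
        (0.0200, 0.02),   # frequent, light shaking
        (0.0040, 0.15),   # rare, moderate shaking
        (0.0005, 0.60),   # very rare, severe shaking
    ]

    replacement_value = 10_000_000     # building replacement value (placeholder)

    expected_annual_loss = sum(p * dr for p, dr in scenarios) * replacement_value
    print(f"expected annual loss ≈ {expected_annual_loss:,.0f} "
          f"({100.0 * expected_annual_loss / replacement_value:.2f}% of value per year)")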

Earthquake prediction

From Wikipedia, the free encyclopedia

Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits, and particularly "the determination of parameters for the next strong earthquake to occur in a region". Earthquake prediction is sometimes distinguished from earthquake forecasting, which can be defined as the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades. Prediction can be further distinguished from earthquake warning systems, which, upon detection of an earthquake, provide a real-time warning of seconds to neighboring regions that might be affected.

In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether it was even possible. Demonstrably successful predictions of large earthquakes have not occurred and the few claims of success are controversial. For example, the most famous claim of a successful prediction is that alleged for the 1975 Haicheng earthquake. A later study said that there was no valid short-term prediction. Extensive searches have reported many possible earthquake precursors, but, so far, such precursors have not been reliably identified across significant spatial and temporal scales. While part of the scientific community holds that, taking into account non-seismic precursors and given enough resources to study them extensively, prediction might be possible, most scientists are pessimistic and some maintain that earthquake prediction is inherently impossible.

Evaluating earthquake predictions

Predictions are deemed significant if they can be shown to be successful beyond random chance. Therefore, methods of statistical hypothesis testing are used to determine the probability that an earthquake such as is predicted would happen anyway (the null hypothesis). The predictions are then evaluated by testing whether they correlate with actual earthquakes better than the null hypothesis.

In many instances, however, the statistical nature of earthquake occurrence is not simply homogeneous. Clustering occurs in both space and time. In southern California about 6% of M ≥ 3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km." In central Italy 9.5% of M≥3.0 earthquakes are followed by a larger event within 48 hours and 30 km. While such statistics are not satisfactory for purposes of prediction (giving ten to twenty false alarms for each successful prediction) they will skew the results of any analysis that assumes that earthquakes occur randomly in time, for example, as realized from a Poisson process. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes; "far better than 'chance'".
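
A small sketch of the kind of null-hypothesis calculation involved: under a Poisson model with an assumed background rate, the chance that a qualifying earthquake would fall inside a prediction window anyway can be computed directly. The rate and window length below are placeholders.

    import math

    def chance_success_probability(annual_rate, window_days):
        """Probability that at least one qualifying earthquake occurs in the
        prediction window purely by chance, assuming a Poisson process."""
        expected_events = annual_rate * window_days / 365.25
        return 1.0 - math.exp(-expected_events)

    # Placeholder background rate: 2 qualifying earthquakes per year in the target
    # zone, evaluated for a 5-day prediction window.
    p = chance_success_probability(annual_rate=2.0, window_days=5.0)
    print(f"probability of a 'successful' prediction by chance ≈ {p:.3f}")

As the paragraph above notes, clustering in space and time violates this simple Poisson assumption, which is why naive chance estimates can understate how often predictions "succeed" for trivial reasons.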

The Dilemma: To Alarm? or Not to Alarm?
 
As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to give warning of a major earthquake that does occur, or at least an adequate evaluation of the hazard, can result in legal liability, or even political purging. For example, it has been reported that members of the Chinese Academy of Sciences were purged for "having ignored scientific predictions of the disastrous Tangshan earthquake of summer 1976" (Wade 1977). Following the L'Aquila earthquake of 2009, seven scientists and technicians in Italy were convicted of manslaughter, but not so much for failing to predict the 2009 L'Aquila earthquake (where some 300 people died) as for giving undue assurance to the populace – one victim called it "anaesthetizing" – that there would not be a serious earthquake, and therefore no need to take precautions. But warning of an earthquake that does not occur also incurs a cost: not only the cost of the emergency measures themselves, but of civil and economic disruption. False alarms, including alarms that are canceled, also undermine the credibility, and thereby the effectiveness, of future warnings. In 1999 it was reported (Saegusa 1999) that China was introducing "tough regulations intended to stamp out ‘false’ earthquake warnings, in order to prevent panic and mass evacuation of cities triggered by forecasts of major tremors." This was prompted by "more than 30 unofficial earthquake warnings ... in the past three years, none of which has been accurate." The acceptable trade-off between missed quakes and false alarms depends on the societal valuation of these outcomes. The rate of occurrence of both must be considered when evaluating any prediction method.

In a 1997 study of the cost-benefit ratio of earthquake prediction research in Greece, Stathis Stiros suggested that even a (hypothetical) excellent prediction method would be of questionable social utility, because "organized evacuation of urban centers is unlikely to be successfully accomplished", while "panic and other undesirable side-effects can also be anticipated." He found that earthquakes kill less than ten people per year in Greece (on average), and that most of those fatalities occurred in large buildings with identifiable structural issues. Therefore, Stiros stated that it would be much more cost-effective to focus efforts on identifying and upgrading unsafe buildings. Since the death toll on Greek highways is more than 2300 per year on average, he argued that more lives would also be saved if Greece's entire budget for earthquake prediction had been used for street and highway safety instead.

Prediction methods

Earthquake prediction is an immature science: it has not yet led to a successful prediction of an earthquake from first physical principles. Research into methods of prediction therefore focuses on empirical analysis, with two general approaches: either identifying distinctive precursors to earthquakes, or identifying some kind of geophysical trend or pattern in seismicity that might precede a large earthquake. Precursor methods are pursued largely because of their potential utility for short-term earthquake prediction or forecasting, while 'trend' methods are generally thought to be useful for forecasting, long-term prediction (10- to 100-year time scale) or intermediate-term prediction (1- to 10-year time scale).

Precursors

An earthquake precursor is an anomalous phenomenon that might give effective warning of an impending earthquake. Reports of these – though generally recognized as such only after the event – number in the thousands, some dating back to antiquity. There have been around 400 reports of possible precursors in scientific literature, of roughly twenty different types, running the gamut from aeronomy to zoology. None have been found to be reliable for the purposes of earthquake prediction.

In the early 1990s, the IASPEI solicited nominations for a Preliminary List of Significant Precursors. Forty nominations were made, of which five were selected as possible significant precursors, with two of those based on a single observation each.

After a critical review of the scientific literature, the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 that there was "considerable room for methodological improvements in this type of research." In particular, many cases of reported precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for a rigorous statistical evaluation. Published results are biased towards positive results, and so the rate of false negatives (earthquake but no precursory signal) is unclear.

Animal behavior

For centuries there have been anecdotal accounts of anomalous animal behavior preceding and associated with earthquakes. In cases where animals display unusual behavior some tens of seconds prior to a quake, it has been suggested they are responding to the P-wave. These travel through the ground about twice as fast as the S-waves that cause most severe shaking. They predict not the earthquake itself — that has already happened — but only the imminent arrival of the more destructive S-waves. 

It has also been suggested that unusual behavior hours or even days beforehand could be triggered by foreshock activity at magnitudes that most people do not notice. Another confounding factor of accounts of unusual phenomena is skewing due to "flashbulb memories": otherwise unremarkable details become more memorable and more significant when associated with an emotionally powerful event such as an earthquake. A study that attempted to control for these kinds of factors found an increase in unusual animal behavior (possibly triggered by foreshocks) in one case, but not in four other cases of seemingly similar earthquakes.

Dilatancy–diffusion

In the 1970s the dilatancy–diffusion hypothesis was highly regarded as providing a physical basis for various phenomena seen as possible earthquake precursors. It was based on "solid and repeatable evidence" from laboratory experiments that highly stressed crystalline rock experienced a change in volume, or dilatancy, which causes changes in other characteristics, such as seismic velocity and electrical resistivity, and even large-scale uplifts of topography. It was believed this happened in a 'preparatory phase' just prior to the earthquake, and that suitable monitoring could therefore warn of an impending quake.

Detection of variations in the relative velocities of the primary and secondary seismic waves – expressed as Vp/Vs – as they passed through a certain zone was the basis for predicting the 1973 Blue Mountain Lake (NY) and 1974 Riverside (CA) quakes. Although these predictions were informal and even trivial, their apparent success was seen as confirmation of both dilatancy and the existence of a preparatory process, leading to what were subsequently called "wildly over-optimistic statements" that successful earthquake prediction "appears to be on the verge of practical reality."

However, many studies questioned these results, and the hypothesis eventually languished. Subsequent study showed it "failed for several reasons, largely associated with the validity of the assumptions on which it was based", including the assumption that laboratory results can be scaled up to the real world. Another factor was the bias of retrospective selection of criteria. Other studies have shown dilatancy to be so negligible that Main et al. 2012 concluded: "The concept of a large-scale 'preparation zone' indicating the likely magnitude of a future event, remains as ethereal as the ether that went undetected in the Michelson–Morley experiment."

Changes in Vp/Vs

Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as Vp/Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a likely breakthrough when Russian seismologists reported observing such changes (later discounted) in the region of a subsequent earthquake. This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly.
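
For reference, in an isotropic elastic solid the ratio depends only on Poisson's ratio, Vp/Vs = sqrt(2(1 − ν)/(1 − 2ν)), which gives about 1.73 for ν = 0.25; the dilatancy idea is that cracking changes the effective elastic constants and hence the ratio. A short sketch:

    import math

    def vp_over_vs(poisson_ratio):
        """Vp/Vs ratio for an isotropic elastic solid as a function of Poisson's ratio."""
        nu = poisson_ratio
        return math.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))

    for nu in (0.20, 0.25, 0.30):
        print(f"Poisson's ratio {nu:.2f}  ->  Vp/Vs ≈ {vp_over_vs(nu):.3f}")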

Study of this phenomenon near Blue Mountain Lake in New York State led to a successful albeit informal prediction in 1973, and it was credited for predicting the 1974 Riverside (CA) quake. However, additional successes have not followed, and it has been suggested that these predictions were flukes. A Vp/Vs anomaly was the basis of a 1976 prediction of an M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur. Other studies relying on quarry blasts (more precise, and repeatable) found no such variations, while an analysis of two earthquakes in California found that the variations reported were more likely caused by other factors, including retrospective selection of data. Geller (1997) noted that reports of significant velocity changes have ceased since about 1980.

Radon emissions

Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock.

Radon is useful as a potential earthquake predictor because it is radioactive and thus easily detected, and its short half-life (3.8 days) makes radon levels sensitive to short-term fluctuations. A 2009 review found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. But as the ICEF found in its review, the earthquakes with which these changes are supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation". Another review concluded that in some cases changes in radon levels preceded an earthquake, but a correlation is not yet firmly established.
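
To illustrate why the short half-life matters: radon decays by half every 3.8 days, so a concentration spike fades within a couple of weeks and recent anomalies stand out against the background. A minimal decay sketch with an arbitrary starting concentration:

    HALF_LIFE_DAYS = 3.8   # radon-222 half-life, as cited in the text

    def remaining_fraction(days):
        """Fraction of an initial radon concentration left after 'days' of decay."""
        return 0.5 ** (days / HALF_LIFE_DAYS)

    for d in (1.0, 3.8, 7.6, 14.0):
        print(f"after {d:4.1f} days: {100.0 * remaining_fraction(d):5.1f}% remains")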

Electromagnetic anomalies

Observations of electromagnetic disturbances and their attribution to the earthquake failure process go back as far as the Great Lisbon earthquake of 1755, but practically all such observations prior to the mid-1960s are invalid because the instruments used were sensitive to physical movement. Since then, various anomalous electrical, electric-resistive, and magnetic phenomena have been attributed to precursory stress and strain changes that precede earthquakes, raising hopes for finding a reliable earthquake precursor. While a handful of researchers have gained much attention with theories of how such phenomena might be generated, or with claims of having observed such phenomena prior to an earthquake, no such phenomenon has been shown to be an actual precursor.

A 2011 review found the "most convincing" electromagnetic precursors to be ULF magnetic anomalies, such as the Corralitos event (discussed below) recorded before the 1989 Loma Prieta earthquake. However, it is now believed that observation was a system malfunction. Study of the closely monitored 2004 Parkfield earthquake found no evidence of precursory electromagnetic signals of any type; further study showed that earthquakes with magnitudes less than 5 do not produce significant transient signals. The International Commission on Earthquake Forecasting for Civil Protection (ICEF) considered the search for useful precursors to have been unsuccessful.
* VAN seismic electric signals
The most touted, and most criticized, claim of an electromagnetic precursor is the VAN method of physics professors Panayiotis Varotsos, Kessar Alexopoulos and Konstantine Nomicos (VAN) of the University of Athens. In a 1981 paper they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes of magnitude larger than 2.8 within all of Greece up to seven hours beforehand.

In 1984 they claimed there was a "one-to-one correspondence" between SES and earthquakes – that is, that "every sizable EQ is preceded by an SES and inversely every SES is always followed by an EQ the magnitude and the epicenter of which can be reliably predicted" – the SES appearing between 6 and 115 hours before the earthquake. As proof of their method they claimed a series of successful predictions.

Although their report was "saluted by some as a major breakthrough" – one enthusiastic supporter (Uyeda) was reported as saying "VAN is the biggest invention since the time of Archimedes" – among seismologists it was greeted by a "wave of generalized skepticism". In 1996 a paper VAN submitted to the journal Geophysical Research Letters was given an unprecedented public peer-review by a broad group of reviewers, with the paper and reviews published in a special issue; the majority of reviewers found the methods of VAN to be flawed. Additional criticism was raised the same year in a public debate between some of the principals.

A primary criticism was that the method is geophysically implausible and scientifically unsound. Additional objections included the demonstrable falsity of the claimed one-to-one relationship of earthquakes and SES, the unlikelihood of a precursory process generating signals stronger than any observed from the actual earthquakes, and the very strong likelihood that the signals were man-made. Further work in Greece has tracked SES-like "anomalous transient electric signals" back to specific human sources, and found that such signals are not excluded by the criteria used by VAN to identify SES.

The validity of the VAN method, and therefore the predictive significance of SES, was based primarily on the empirical claim of demonstrated predictive success. Numerous weaknesses have been uncovered in the VAN methodology, and in 2011 the ICEF concluded that the prediction capability claimed by VAN could not be validated. Most seismologists consider VAN to have been "resoundingly debunked".
* Corralitos anomaly
Probably the most celebrated seismo-electromagnetic event ever, and one of the most frequently cited examples of a possible earthquake precursor, is the 1989 Corralitos anomaly. In the month prior to the 1989 Loma Prieta earthquake, measurements of the earth's magnetic field at ultra-low frequencies by a magnetometer in Corralitos, California, just 7 km from the epicenter of the impending earthquake, started showing anomalous increases in amplitude. Just three hours before the quake the measurements soared to about thirty times the normal level, with amplitudes tapering off after the quake. Such amplitudes had not been seen in two years of operation, nor in a similar instrument located 54 km away. To many people such apparent locality in time and space suggested an association with the earthquake.

Additional magnetometers were subsequently deployed across northern and southern California, but after ten years, and several large earthquakes, similar signals have not been observed. More recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance or, even more simply, to sensor-system malfunction.
* Freund physics
In his investigations of crystalline physics, Friedemann Freund found that water molecules embedded in rock can dissociate into ions if the rock is under intense stress. The resulting charge carriers can generate battery currents under certain conditions. Freund suggested that perhaps these currents could be responsible for earthquake precursors such as electromagnetic radiation, earthquake lights and disturbances of the plasma in the ionosphere. The study of such currents and interactions is known as "Freund physics".

Most seismologists reject Freund's suggestion that stress-generated signals can be detected and put to use as precursors, for a number of reasons. First, it is believed that stress does not accumulate rapidly before a major earthquake, and thus there is no reason to expect large currents to be rapidly generated. Secondly, seismologists have extensively searched for statistically reliable electrical precursors, using sophisticated instrumentation, and have not identified any such precursors. And thirdly, water in the earth's crust would cause any generated currents to be absorbed before reaching the surface.

Trends

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or patterns that lead to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; such approaches are therefore sometimes called statistical methods. These approaches also tend to be more probabilistic and to have longer time windows, and so merge into earthquake forecasting.

Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other) the earth's crust will bend or deform. According to the elastic rebound theory of Reid (1910), eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves. The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake), the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long running earthquake cycle.
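
The arithmetic behind the earthquake cycle can be sketched in a few lines: dividing the slip released in one event by the long-term plate slip rate gives a rough recurrence interval. The values below are assumptions chosen only to illustrate the orders of magnitude involved, not measurements of any particular fault.

# Back-of-envelope recurrence estimate implied by elastic rebound.
slip_per_event_m = 4.0        # assumed average displacement in one large earthquake (m)
plate_rate_mm_per_yr = 35.0   # assumed long-term relative plate motion (mm/yr)

recurrence_yr = slip_per_event_m / (plate_rate_mm_per_yr / 1000.0)
print(f"approximate recurrence interval: {recurrence_yr:.0f} years")  # ~114 years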

Characteristic earthquakes

The most studied earthquake faults (such as the Nankai megathrust, the Wasatch fault, and the San Andreas fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments. As the lengths and other properties of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals. For a given fault segment, identifying these characteristic earthquakes and estimating their recurrence interval (or, conversely, their return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard. UCERF3 is a notable example of such a forecast, prepared for the state of California. Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to the frequency observed to date.
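
As a simple illustration of how a return period translates into a forecast probability, the sketch below uses a memoryless (Poisson) model, the convention commonly applied to flood and cyclone return periods; characteristic-earthquake forecasts such as UCERF3 instead use quasi-periodic renewal models. The 150-year return period is an assumed figure, not taken from any particular fault.

import math

return_period_yr = 150.0  # assumed return period of the characteristic earthquake
window_yr = 30.0          # forecast window of interest

# Under a Poisson model, P(at least one event in the window) = 1 - exp(-t/T).
p = 1.0 - math.exp(-window_yr / return_period_yr)
print(f"P(event within {window_yr:.0f} years) ≈ {p:.0%}")  # ≈ 18%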

The idea of characteristic earthquakes was the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years. Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence interval). The appeal of such a method is that the prediction is derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common.
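
The quoted window can be reproduced from the figures above under the simple assumption of normally distributed intervals and a one-sided 95% bound (an assumption made here only to show the arithmetic; the published analysis was more involved).

mean_interval, sigma, last_event = 21.9, 3.1, 1966    # figures quoted above

expected = last_event + mean_interval                 # ≈ 1987.9, i.e. "around 1988"
latest_95 = expected + 1.645 * sigma                  # ≈ 1993.0, i.e. "before 1993"
print(round(expected, 1), round(latest_95, 1))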

The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself. Some studies have questioned the various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles). Other studies have considered whether other factors need to be considered, such as the age of the fault. Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.

Seismic gaps

At the contact where two tectonic plates slip past each other every section must eventually slip, as (in the long-term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest. This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991.

However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region"; statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well". Another study concluded that a long quiet period did not increase earthquake potential.

Seismicity patterns

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years. Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happened were skillfully predicted, or only the result of chance.
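
The flavor of such alarm-based schemes can be conveyed by a deliberately crude caricature: declare an alarm whenever the count of moderate earthquakes in a sliding window exceeds a threshold. This is not the actual M8 algorithm, whose pattern functions are far more elaborate; every number below is invented for illustration.

from collections import deque

def tip_alarm(event_years, window_years=5.0, threshold=20):
    """Declare a TIP-style alarm when 'threshold' events fall within the sliding window."""
    window = deque()
    for year in sorted(event_years):
        window.append(year)
        while window[0] < year - window_years:
            window.popleft()
        if len(window) >= threshold:
            return f"alarm declared at {year:.1f}"
    return "no alarm"

# Synthetic catalog: one event every 0.2 years trips the (arbitrary) threshold.
print(tip_alarm([1990 + 0.2 * i for i in range(100)]))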

M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP. In 1999, Keilis-Borok's group published a claim to have achieved statistically significant intermediate-term results using their M8 and MSc models for large earthquakes worldwide. However, Geller et al. are skeptical of prediction claims over any period shorter than 30 years. A widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs. A deep study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway). It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates [of] seismicity."

Accelerating moment release (AMR, "moment" being a measurement of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate. In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock. 

Following formulation by Bowman et al. (1998) into a testable hypothesis, and a number of positive reports, AMR seemed promising despite several problems. Known issues included that it was not detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep. But rigorous testing has shown that apparent AMR trends likely result from how the data fitting is done and from failing to account for spatiotemporal clustering of earthquakes. The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004.
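
A commonly cited parameterization in the AMR literature (not spelled out in the text above, so treated here as an assumption) models cumulative seismic release as a power law of the time remaining to failure, A + B(t_f − t)^m with m < 1, which produces exactly the steepening curve described. The sketch below uses invented parameter values purely to show the shape.

# Illustrative time-to-failure curve; all parameter values are invented.
A, B, m, t_f = 20.0, -5.0, 0.3, 2004.0   # t_f = assumed failure time

for t in (1990, 1995, 2000, 2002, 2003, 2003.9):
    cumulative = A + B * (t_f - t) ** m
    print(t, round(cumulative, 2))        # values climb ever more steeply toward t_f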

Notable predictions

These are predictions, or claims of predictions, that are notable either scientifically or because of public notoriety, and claim a scientific or quasi-scientific basis. As many predictions are held confidentially, or published in obscure locations, and become notable only when they are claimed, there may be a selection bias in that hits get more attention than misses. The predictions listed here are discussed in Hough's book and Geller's paper.

1975: Haicheng, China

The M 7.3 1975 Haicheng earthquake is the most widely cited "success" of earthquake prediction. Study of seismic activity in the region led the Chinese authorities to issue a medium-term prediction in June 1974. The political authorities therefore ordered various measures taken, including enforced evacuation of homes, construction of "simple outdoor structures", and showing of movies out-of-doors. The quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes. However, the "effective preventative measures taken" were said to have kept the death toll under 300 in an area with population of about 1.6 million, where otherwise tens of thousands of fatalities might have been expected.

However, although a major earthquake occurred, there has been some skepticism about the narrative of measures taken on the basis of a timely prediction. This event occurred during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right wing deviationists". Recordkeeping was disordered, making it difficult to verify details, including whether there was any ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line") has not been specified. The evacuation may have been spontaneous, following the strong (M 4.7) foreshock that occurred the day before.

A 2006 study that had access to an extensive range of records found that the predictions were flawed. "In particular, there was no official short-term prediction, although such a prediction was made by individual scientists." Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". They estimated that 2,041 lives were lost. That more did not die was attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local style of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end up with practical failure."

1981: Lima, Peru (Brady)

In 1976 Brian Brady, a physicist then at the U.S. Bureau of Mines, where he had studied how rocks fracture, "concluded a series of four articles on the theory of earthquakes with the deduction that strain building in the subduction zone [off-shore of Peru] might result in an earthquake of large magnitude within a period of seven to fourteen years from mid November 1974." In an internal memo written in June 1978 he narrowed the time window to "October to November, 1981", with a main shock in the range of 9.2±0.2. In a 1980 memo he was reported as specifying "mid-September 1980". This was discussed at a scientific seminar in San Juan, Argentina, in October 1980, where Brady's colleague, W. Spence, presented a paper. Brady and Spence then met with government officials from the U.S. and Peru on 29 October, and "forecast a series of large magnitude earthquakes in the second half of 1981." This prediction became widely known in Peru, following what the U.S. embassy described as "sensational first page headlines carried in most Lima dailies" on January 26, 1981.

On 27 January 1981, after reviewing the Brady-Spence prediction, the U.S. National Earthquake Prediction Evaluation Council (NEPEC) announced it was "unconvinced of the scientific validity" of the prediction, and had been "shown nothing in the observed seismicity data, or in the theory insofar as presented, that lends substance to the predicted times, locations, and magnitudes of the earthquakes." It went on to say that while there was a probability of major earthquakes at the predicted times, that probability was low, and recommended that "the prediction not be given serious consideration."

Unfazed, Brady subsequently revised his forecast, stating there would be at least three earthquakes on or about July 6, August 18 and September 24, 1981, leading one USGS official to complain: "If he is allowed to continue to play this game ... he will eventually get a hit and his theories will be considered valid by many."

On June 28 (the date most widely taken as that of the first predicted earthquake), it was reported that "the population of Lima passed a quiet Sunday". The headline in one Peruvian newspaper read: "NO PASO NADA" ("Nothing happened").

In July Brady formally withdrew his prediction on the grounds that prerequisite seismic activity had not occurred. Economic losses due to reduced tourism during this episode have been roughly estimated at one hundred million dollars.

1985–1993: Parkfield, U.S. (Bakun-Lindh)

The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever. It was based on an observation that the Parkfield segment of the San Andreas Fault breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966. More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred. The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world", in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent. In the words of the Economist: "never has an ambush been more carefully laid for such an event."

1993 came, and passed, without fulfillment. Eventually there was an M 6.0 earthquake on the Parkfield segment of the fault, on 28 September 2004, but without forewarning or obvious precursors. While the experiment in catching an earthquake is considered by many scientists to have been successful, the prediction was unsuccessful in that the eventual event was a decade late.

1983–1995: Greece (VAN)

In 1981, the "VAN" group, headed by Panayiotis Varotsos, said that they found a relationship between earthquakes and 'seismic electric signals' (SES). In 1984 they presented a table of 23 earthquakes from 19 January 1983 to 19 September 1983, of which they claimed to have successfully predicted 18 earthquakes. Other lists followed, such as their 1991 claim of predicting six out of seven earthquakes with Ms  ≥ 5.5 in the period of 1 April 1987 through 10 August 1989, or five out of seven earthquakes with Ms  ≥ 5.3 in the overlapping period of 15 May 1988 to 10 August 1989, In 1996 they published a "Summary of all Predictions issued from January 1st, 1987 to June 15, 1995", amounting to 94 predictions. Matching this against a list of "All earthquakes with MS(ATH)" and within geographical bounds including most of Greece they come up with a list of 14 earthquakes they should have predicted. Here they claim ten successes, for a success rate of 70%, but also a false alarm rate of 89%.

The VAN predictions have been criticized on various grounds, including being geophysically implausible, "vague and ambiguous", that "VAN’s ‘predictions’ never specify the windows, and never state an unambiguous expiration date [and thus] VAN are not making earthquake predictions in the first place", failing to satisfy prediction criteria, and retroactive adjustment of parameters. It has also been objected that no one "can confidently state, except in the most general terms, what the VAN hypothesis is, because the authors of it have nowhere presented a thorough formulation of it."
 
A critical review of 14 cases where VAN claimed 10 successes showed only one case where an earthquake occurred within the prediction parameters. The VAN predictions not only fail to do better than chance, but show "a much better association with the events which occurred before them", according to Mulargia and Gasperini. Other early reviews found that the VAN results, when evaluated by definite parameters, were statistically significant. Both positive and negative views on VAN predictions from this period were summarized in the 1996 book "A Critical Review of VAN" edited by Sir James Lighthill and in a debate issue presented by the journal Geophysical Research Letters that was focused on the statistical significance of the VAN method. VAN had the opportunity to reply to their critics in those review publications. In 2011, the ICEF reviewed the 1996 debate, and concluded that the optimistic SES prediction capability claimed by VAN could not be validated.

A crucial issue is the large and often indeterminate parameters of the predictions, such that some critics say these are not predictions, and should not be recognized as such. Much of the controversy with VAN arises from this failure to adequately specify these parameters. Some of their telegrams include predictions of two distinct earthquake events, such as (typically) one earthquake predicted at 300 km "N.W" of Athens, and another at 240 km "W", "with magnitutes [sic] 5,3 and 5,8", with no time limit.

VAN has disputed the 'pessimistic' conclusions of their critics, but the critics have not relented. It was suggested that VAN failed to account for clustering of earthquakes, or that they interpreted their data differently during periods of greater seismic activity.

VAN has been criticized on several occasions for causing public panic and widespread unrest. This has been exacerbated by the broadness of their predictions, which cover large areas of Greece (up to 240 kilometers across, and often pairs of areas), much larger than the areas actually affected by earthquakes of the magnitudes predicted (usually several tens of kilometers across). Magnitudes are similarly broad: a predicted magnitude of "6.0" represents a range from a benign magnitude 5.3 to a broadly destructive 6.7. Coupled with indeterminate time windows of a month or more, such predictions "cannot be practically utilized" to determine an appropriate level of preparedness, whether to curtail usual societal functioning, or even to issue public warnings. As an instance of the quandary public officials face: in 1995 Professor Varotsos reportedly filed a complaint with the public prosecutor accusing government officials of negligence in not responding to his supposed prediction of an earthquake. A government official was quoted as saying "VAN's prediction was not of any use" in that it covered two-thirds of the area of Greece.
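
The practical width of such a magnitude window can be quantified with the standard Gutenberg–Richter energy relation, in which radiated seismic energy scales roughly as 10^(1.5M); the two ends of the window quoted above therefore differ by more than a factor of a hundred in energy.

m_low, m_high = 5.3, 6.7   # the magnitude window quoted above

energy_ratio = 10 ** (1.5 * (m_high - m_low))
print(f"an M {m_high} event releases ~{energy_ratio:.0f}x the energy of M {m_low}")  # ~126x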

1989: Loma Prieta, U.S.

The 1989 Loma Prieta earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay Area of California. The U.S. Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "forecast" this earthquake in a report the previous year. USGS staff subsequently claimed this quake had been "anticipated"; various other claims of prediction have also been made.

Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (In this case no distinction is made between a forecast, which is limited to a probabilistic estimate of an earthquake happening over some time period, and a more specific prediction.) None of these forecasts can be rigorously tested due to lack of specificity, and where a forecast does bracket the correct time and location, the window was so broad (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Predictions that came close (but given a probability of only 30%) had ten- or twenty-year windows.

One debated prediction came from the M8 algorithm used by Keilis-Borok and associates in four forecasts. The first of these forecasts missed both magnitude (M 7.5) and time (a five-year window from 1 January 1984, to 31 December 1988). They did get the location, by including most of California and half of Nevada. A subsequent revision, presented to the NEPEC, extended the time window to 1 July 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year time window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended to 1990, and so included Loma Prieta.

When discussing success or failure of prediction for the Loma Prieta earthquake, some scientists argue that it did not occur on the San Andreas fault (the focus of most of the forecasts), and involved dip-slip (vertical) movement rather than strike-slip (horizontal) movement, and so was not predicted.

Other scientists argue that it did occur in the San Andreas fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct. Hough states that "most seismologists" do not believe this quake was predicted "per se". In a strict sense there were no predictions, only forecasts, which were only partially successful. 

Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.

1990: New Madrid, U.S. (Browning)

Iben Browning (a scientist with a Ph.D. degree in zoology and training as a biophysicist, but no experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses. He supported the idea (scientifically unproven) that volcanoes and earthquakes are more likely to be triggered when the tidal force of the sun and the moon coincide to exert maximum stress on the earth's crust (syzygy). Having calculated when these tidal forces maximize, Browning then "projected" what areas were most at risk for a large earthquake. An area he mentioned frequently was the New Madrid Seismic Zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811–12, which he coupled with the date of 3 December 1990. 

Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events) the Loma Prieta earthquake of 17 October 1989. The National Earthquake Prediction Evaluation Council (NEPEC) formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued 18 October 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake. A transcript of his talk in San Francisco on 10 October showed he had said: "there will probably be several earthquakes around the world, Richter 6+, and there may be a volcano or two" – which, on a global scale, is about average for a week – with no mention of any earthquake in California.

Though the AHWG report disproved both Browning's claims of prior success and the basis of his "projection", it made little impact, coming as it did after a year of continued claims of a successful prediction. Browning's prediction received the support of geophysicist David Stewart, and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in the news media. Nothing happened on 3 December, and Browning died of a heart attack seven months later.

2004 & 2005: Southern California, U.S. (Keilis-Borok)

The M8 algorithm (developed under the leadership of Vladimir Keilis-Borok at UCLA) gained respect by the apparently successful predictions of the 2003 San Simeon and Hokkaido earthquakes. Great interest was therefore generated by the prediction in early 2004 of a M ≥ 6.4 earthquake to occur somewhere within an area of southern California of approximately 12,000 sq. miles, on or before 5 September 2004. In evaluating this prediction the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no "special public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state." The predicted earthquake did not occur. 

A very similar prediction was made for an earthquake on or before 14 August 2005, in approximately the same area of southern California. The CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled. This prediction also failed.

2009: L'Aquila, Italy (Giuliani)

At 03:32 on 6 April 2009, the Abruzzo region of central Italy was rocked by a magnitude 6.3 earthquake. In the city of L'Aquila and the surrounding area, around 60,000 buildings collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless. Around the same time, it was reported that Giampaolo Giuliani had predicted the earthquake, had tried to warn the public, but had been muzzled by the Italian government.

Giampaolo Giuliani was a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he had for some years been monitoring radon using instruments he had designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any scientific work. He had been interviewed on 24 March by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He said that this swarm was normal and would diminish by the end of March. On 30 March, L'Aquila was struck by a magnitude 4.0 temblor, the largest to date.

On 27 March Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours, and an earthquake M~2.3 occurred. On 29 March he made a second prediction. He telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" – or even "catastrophic" – earthquake within 6 to 24 hours. Loudspeaker vans were used to warn the inhabitants of Sulmona to evacuate, with consequential panic. No quake ensued and Giuliani was cited for inciting public alarm and enjoined from making future public predictions.

After the L'Aquila event Giuliani claimed that he had found alarming rises in radon levels just hours before. He said he had warned relatives, friends and colleagues on the evening before the earthquake hit. He was subsequently interviewed by the International Commission on Earthquake Forecasting for Civil Protection, which found that Giuliani had not transmitted a valid prediction of the mainshock to the civil authorities before its occurrence.

Difficulty or impossibility

As the preceding examples show, the record of earthquake prediction has been disappointing. The optimism of the 1970s that routine prediction of earthquakes would come "soon", perhaps within ten years, was coming up disappointingly short by the 1990s, and many scientists began wondering why. By 1997 it was being positively stated that earthquakes cannot be predicted, which led to a notable debate in 1999 on whether prediction of individual earthquakes is a realistic scientific goal.

Earthquake prediction may have failed only because it is "fiendishly difficult" and still beyond the current competency of science. Despite the confident announcement four decades ago that seismology was "on the verge" of making reliable predictions, there may yet be an underestimation of the difficulties. As early as 1978 it was reported that earthquake rupture might be complicated by "heterogeneous distribution of mechanical properties along the fault", and in 1986 that geometrical irregularities in the fault surface "appear to exert major controls on the starting and stopping of ruptures". Another study attributed significant differences in fault behavior to the maturity of the fault. These kinds of complexities are not reflected in current prediction methods.

Seismology may even yet lack an adequate grasp of its most central concept, elastic rebound theory. A simulation that explored assumptions regarding the distribution of slip found results "not in agreement with the classical view of the elastic rebound theory". (This was attributed to details of fault heterogeneity not accounted for in the theory.)

Earthquake prediction may be intrinsically impossible. It has been argued that the Earth is in a state of self-organized criticality "where any small earthquake has some probability of cascading into a large event". It has also been argued on decision-theoretic grounds that "prediction of major earthquakes is, in any practical sense, impossible."

That earthquake prediction might be intrinsically impossible has been strongly disputed. But the best disproof of impossibility – effective earthquake prediction – has yet to be demonstrated.
