
Saturday, November 21, 2020

Earth Simulator

From Wikipedia, the free encyclopedia
 
[Images: Earth Simulator (ES), original version; Earth Simulator interconnection rack; Earth Simulator processing rack; Earth Simulator arithmetic processing module; Earth Simulator 2 (ES2); Earth Simulator 3 (ES3)]

The Earth Simulator (ES) (地球シミュレータ, Chikyū Shimyurēta), developed under the Japanese government's "Earth Simulator Project" initiative, was a highly parallel vector supercomputer system for running global climate models to evaluate the effects of global warming and problems in solid earth geophysics. The system was developed for the Japan Aerospace Exploration Agency, the Japan Atomic Energy Research Institute, and the Japan Marine Science and Technology Center (JAMSTEC), starting in 1997.

Construction started in October 1999, and the site officially opened on 11 March 2002. The project cost 60 billion yen.

Built by NEC, ES was based on their SX-6 architecture. It consisted of 640 nodes with eight vector processors and 16 gigabytes of computer memory at each node, for a total of 5120 processors and 10 terabytes of memory. Two nodes were installed per 1 metre × 1.4 metre × 2 metre cabinet. Each cabinet consumed 20 kW of power. The system had 700 terabytes of disk storage (450 for the system and 250 for the users) and 1.6 petabytes of mass storage in tape drives. It was able to run holistic simulations of global climate in both the atmosphere and the oceans down to a resolution of 10 km. Its performance on the LINPACK benchmark was 35.86 TFLOPS, which was almost five times faster than the previous fastest supercomputer, ASCI White. As of 2020, comparable performance can be achieved by using 4 Nvidia A100 GPUs, each with 9.746 FP64 TFlops.

ES was the fastest supercomputer in the world from 2002 to 2004. Its capacity was surpassed by IBM's Blue Gene/L prototype on 29 September 2004.

ES was replaced by the Earth Simulator 2 (ES2) in March 2009. ES2 is an NEC SX-9/E system with a quarter as many nodes, each with 12.8 times the performance (3.2 times the clock speed and four times the processing resources per node), for a peak performance of 131 TFLOPS. With a delivered LINPACK performance of 122.4 TFLOPS, ES2 was the most efficient supercomputer in the world at that point. In November 2010, NEC announced that ES2 topped the Global FFT, one of the measures of the HPC Challenge Awards, with a performance of 11.876 TFLOPS.

ES2 was replaced by the Earth Simulator 3 (ES3) in March 2015. ES3 is an NEC SX-ACE system with 5120 nodes and a performance of 1.3 PFLOPS.

ES3, from 2017 to 2018, ran alongside Gyoukou, a supercomputer with immersion cooling that can achieve up to 19 PFLOPS.

System overview

Hardware

The Earth Simulator (ES for short) was developed as a national project by three governmental agencies: the National Space Development Agency of Japan (NASDA), the Japan Atomic Energy Research Institute (JAERI), and the Japan Marine Science and Technology Center (JAMSTEC). The ES is housed in the Earth Simulator Building (approx. 50 m × 65 m × 17 m). The Earth Simulator 2 (ES2) uses 160 nodes of NEC's SX-9E. The upgrade of the Earth Simulator was completed in March 2015; the Earth Simulator 3 (ES3) system uses 5120 nodes of NEC's SX-ACE.

System configuration

The ES is a highly parallel, distributed-memory vector supercomputer system consisting of 160 processor nodes connected by a fat-tree network. Each processor node is a shared-memory system with 8 vector-type arithmetic processors (APs) and 128 GB of main memory. The peak performance of each arithmetic processor is 102.4 GFLOPS, so the ES as a whole comprises 1,280 arithmetic processors with 20 TB of main memory and a theoretical peak performance of 131 TFLOPS.

Construction of CPU

Each CPU consists of a 4-way super-scalar unit (SU), a vector unit (VU), and a main memory access control unit on a single LSI chip. The CPU operates at a clock frequency of 3.2 GHz. Each VU has 72 vector registers, each with 256 vector elements, along with 8 sets of six different types of vector pipelines: addition/shifting, multiplication, division, logical operations, masking, and load/store. Pipelines of the same type work together on a single vector instruction, while pipelines of different types can operate concurrently.
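
The kind of loop these pipelines accelerate is a simple elementwise update. The sketch below is generic C, not NEC-specific code; it only illustrates the sort of loop a vectorizing compiler maps onto the vector unit, processing the iterations in 256-element chunks (the vector register length quoted above).

  /* A vectorizable loop: the multiply and add pipelines can overlap while
   * the load/store pipeline streams x and y through the vector registers. */
  void daxpy(int n, double a, const double *x, double *y)
  {
      for (int i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }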

Processor Node (PN)

The processor node is composed of 8 CPUs and 10 memory modules.

Interconnection Network (IN)

The remote access control unit (RCU) of each node is directly connected to the crossbar switches and controls inter-node data communication at a 64 GB/s bidirectional transfer rate for both sending and receiving data. The total bandwidth of the inter-node network is therefore about 10 TB/s.
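
As a quick consistency check, the totals quoted in this section follow from the per-node figures (assuming they are simple products, which matches the numbers given):

  \[ 160 \times 8 = 1280 \ \text{APs}, \qquad 1280 \times 102.4\ \text{GFLOPS} = 131.072\ \text{TFLOPS} \]
  \[ 160 \times 128\ \text{GB} = 20.48\ \text{TB}, \qquad 160 \times 64\ \text{GB/s} \approx 10\ \text{TB/s} \]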

Processor Node (PN) Cabinet

Each cabinet houses two processor nodes; a node consists of a power supply unit, 8 memory modules, and a PCI box with 8 CPU modules.

Software

The following describes the software technologies used in the operating system, job scheduling, and programming environment of ES2.

Operating system

The operating system running on ES, "Earth Simulator Operating System", is a custom version of NEC's SUPER-UX used for the NEC SX supercomputers that make up ES.

Mass storage file system

If a large parallel job running on 640 PNs reads from or writes to a single disk installed in one PN, each PN accesses that disk in sequence and performance degrades severely. Local I/O, in which each PN reads from and writes to its own disk, solves this problem, but managing such a large number of partial files is very laborious. ES therefore adopts staging together with a Global File System (GFS) that offers high-speed I/O performance.

Job scheduling

ES is basically a batch-job system, and Network Queuing System II (NQSII) is used to manage batch jobs. ES has two types of queues: the S batch queue is designed for single-node batch jobs, and the L batch queue is for multi-node batch jobs. The S batch queue is intended for pre-runs and post-runs of large-scale batch jobs (preparing initial data, processing simulation results, and other tasks), while the L batch queue is for production runs. Users choose the appropriate queue for their job. Batch jobs are scheduled according to the following two strategies:

  1. The nodes allocated to a batch job are used exclusively for that batch job.
  2. The batch job is scheduled based on elapsed time instead of CPU time.

Strategy (1) makes it possible to estimate the job termination time and to allocate nodes to subsequent batch jobs in advance. Strategy (2) contributes to efficient job execution: a job can use its nodes exclusively, and the processes on each node can run simultaneously, so large-scale parallel programs execute efficiently. PNs of the L-system are prohibited from accessing the user disk in order to ensure sufficient disk I/O performance; therefore, the files used by a batch job are copied from the user disk to the work disk before job execution. This process is called "stage-in." Hiding this staging time is important for job scheduling. The main steps of job scheduling are summarized as follows:

  1. Node Allocation
  2. Stage-in (copies files from the user disk to the work disk automatically)
  3. Job Escalation (rescheduling for the earlier estimated start time if possible)
  4. Job Execution
  5. Stage-out (copies files from the work disk to the user disk automatically)

When a new batch job is submitted, the scheduler searches for available nodes (step 1). After the nodes and an estimated start time are allocated to the batch job, the stage-in process starts (step 2). The job then waits until the estimated start time once stage-in has finished. If the scheduler finds an earlier start time than the estimated one, it allocates the new start time to the batch job; this process is called "Job Escalation" (step 3). When the estimated start time arrives, the scheduler executes the batch job (step 4). The scheduler terminates the batch job and starts the stage-out process after job execution finishes or the declared elapsed time is exceeded (step 5). To execute a batch job, the user logs into the login server and submits the batch script to ES, then waits until the job execution is done. During that time, the user can check the state of the batch job using a conventional web browser or user commands. Node scheduling, file staging, and other processing are handled automatically by the system according to the batch script.
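
The five steps above can be read as a simple state machine. The C sketch below is a conceptual illustration only; the real NQSII scheduler is far more elaborate, and this merely traces the order in which a hypothetical job moves through the staging workflow.

  #include <stdio.h>

  /* Hypothetical job states mirroring the five scheduling steps above. */
  enum job_state { ALLOCATED, STAGING_IN, WAITING, RUNNING, STAGING_OUT, DONE };

  static enum job_state next_state(enum job_state s)
  {
      switch (s) {
      case ALLOCATED:   return STAGING_IN;   /* step 2: stage-in                  */
      case STAGING_IN:  return WAITING;      /* wait; step 3 may move the start
                                                time earlier (job escalation)     */
      case WAITING:     return RUNNING;      /* step 4: job execution             */
      case RUNNING:     return STAGING_OUT;  /* step 5: stage-out                 */
      default:          return DONE;
      }
  }

  int main(void)
  {
      enum job_state s = ALLOCATED;          /* step 1: node allocation           */
      while (s != DONE) {
          printf("state %d\n", s);
          s = next_state(s);
      }
      return 0;
  }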

Programming environment

Programming model in ES

The ES hardware has a 3-level hierarchy of parallelism: vector processing within an AP, shared-memory parallel processing within a PN, and parallel processing among PNs via the IN. To bring out the full performance of ES, programs must exploit all three levels. The hierarchy can be used in two ways, called hybrid and flat parallelization. In hybrid parallelization, inter-node parallelism is expressed with HPF or MPI and intra-node parallelism with microtasking or OpenMP, so the programmer must handle the hierarchical parallelism explicitly. In flat parallelization, both inter- and intra-node parallelism are expressed with HPF or MPI, and no such hierarchical treatment is needed. Generally speaking, hybrid parallelization is superior to flat parallelization in performance, while flat parallelization is easier to program. The MPI libraries and HPF runtimes are optimized to perform as well as possible in both styles.
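
A minimal sketch of the hybrid style in C, assuming an MPI library with thread support and an OpenMP-capable compiler (generic illustration code, not the ES vendor libraries): MPI expresses the inter-node level and the OpenMP directive expresses the intra-node level. In the flat style, one would instead start one MPI rank per AP and drop the directive.

  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided, rank, nprocs;

      /* One MPI process per node (inter-node level). */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      double local_sum = 0.0, global_sum = 0.0;

      /* OpenMP threads inside the process (intra-node level). */
      #pragma omp parallel for reduction(+:local_sum)
      for (int i = 0; i < 1000000; i++)
          local_sum += 1.0 / (1.0 + i + rank);   /* stand-in for real work */

      /* Combine the per-node results across nodes. */
      MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("global sum over %d processes = %f\n", nprocs, global_sum);

      MPI_Finalize();
      return 0;
  }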

Languages

Compilers for Fortran 90, C and C++ are available. All of them have advanced capabilities for automatic vectorization and microtasking. Microtasking is a kind of multitasking originally provided for Cray supercomputers, and it is also used for intra-node parallelization on ES. Microtasking can be controlled by inserting directives into source programs or by using the compiler's automatic parallelization. (OpenMP is also available in Fortran 90 and C++ for intra-node parallelization.)

Parallelization

Message Passing Interface (MPI)

MPI is a message passing library based on the MPI-1 and MPI-2 standards and provides high-speed communication capability that fully exploits the features of the internode crossbar switch (IXS) and shared memory. It can be used for both intra- and inter-node parallelization. An MPI process is assigned to an AP in flat parallelization, or to a PN containing microtasks or OpenMP threads in hybrid parallelization. The MPI libraries are designed and carefully optimized to achieve the highest communication performance on the ES architecture in both parallelization styles.

High Performance Fortran (HPF)

The principal users of ES are expected to be natural scientists who are not necessarily familiar with parallel programming, or who may even dislike it. Accordingly, a higher-level parallel language is in great demand. HPF/SX provides easy and efficient parallel programming on ES to meet this demand. It supports the HPF 2.0 specification, its approved extensions, HPF/JA, and some extensions unique to ES.

Tools

-Integrated development environment (PSUITE)

PSUITE is an integrated development environment that brings together various tools for developing programs that run on SUPER-UX. Because its tools can be driven from a GUI and work in coordination with one another, programs can be developed more efficiently and easily than with earlier development methods.

-Debug Support

SUPER-UX provides strong debugging facilities to support program development.

Facilities

Features of the Earth Simulator building

Protection from natural disasters

The Earth Simulator Center has several special features that help to protect the computer from natural disasters or occurrences. A wire nest hangs over the building which helps to protect from lightning. The nest itself uses high-voltage shielded cables to release lightning current into the ground. A special light propagation system utilizes halogen lamps, installed outside of the shielded machine room walls, to prevent any magnetic interference from reaching the computers. The building is constructed on a seismic isolation system, composed of rubber supports, that protect the building during earthquakes.

Lightning protection system

Three basic features:

  • Four poles on both sides of the Earth Simulator Building form a wire nest that protects the building from lightning strikes.
  • A special high-voltage shielded cable is used as the inductive wire that releases lightning current into the earth.
  • Ground plates are laid about 10 meters away from the building.

Illumination

  • Lighting: light propagation system inside tubes (255 mm diameter, 44 m (49 yd) length, 19 tubes)
  • Light source: 1 kW halogen lamps
  • Illumination: 300 lx on average at the floor
The light sources are installed outside the shielded machine room walls.

Seismic isolation system

11 isolators (1 ft high, 3.3 ft in diameter, 20-layered rubber supports beneath the bottom of the ES building)

Performance

LINPACK

The new Earth Simulator system, which began operation in March 2009, achieved sustained performance of 122.4 TFLOPS and computing efficiency (*2) of 93.38% on the LINPACK Benchmark (*1).

  • 1. LINPACK Benchmark

The LINPACK Benchmark is a measure of a computer's performance and is used as a standard benchmark to rank computer systems in the TOP500 project. LINPACK is a program for performing numerical linear algebra on computers.

  • 2. Computing efficiency

Computing efficiency is the ratio of sustained performance to peak computing performance. Here, it is the ratio of 122.4 TFLOPS to 131.072 TFLOPS.
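
Written out, the quoted efficiency is simply

  \[ \text{efficiency} = \frac{R_{\text{max}}}{R_{\text{peak}}} = \frac{122.4\ \text{TFLOPS}}{131.072\ \text{TFLOPS}} \approx 0.9338 = 93.38\% \]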

Computational performance of WRF on Earth Simulator

WRF (Weather Research and Forecasting Model) is a mesoscale meteorological simulation code developed through a collaboration among US institutions, including NCAR (National Center for Atmospheric Research) and NCEP (National Centers for Environmental Prediction). JAMSTEC optimized WRFV2 on the renewed Earth Simulator (ES2) in 2009 and measured its computational performance. As a result, it was successfully demonstrated that WRFV2 can run on ES2 with outstanding, sustained performance.

The numerical meteorological simulation was conducted using WRF on the Earth Simulator for the Earth's hemisphere with the Nature Run model condition. The model's spatial resolution is 4486 by 4486 grid points horizontally, with a grid spacing of 5 km and 101 vertical levels. Mostly adiabatic conditions were applied, with a time integration step of 6 seconds. Very high performance was achieved on the Earth Simulator for high-resolution WRF: while the number of CPU cores used was only 1% of that of the world's fastest class system, Jaguar (Cray XT5) at Oak Ridge National Laboratory, the sustained performance obtained on the Earth Simulator was almost 50% of that measured on Jaguar. The ratio of sustained to peak performance on the Earth Simulator, 22.2%, was also a record high.

General circulation model

From Wikipedia, the free encyclopedia

Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. To "run" a model, scientists divide the planet into a 3-dimensional grid, apply the basic equations, and evaluate the results. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points.
 
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5).

A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.

GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.

Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.

Terminology

The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modelling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.

History

In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model. Following Phillips's work, several groups began working to create GCMs. The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined. In 1996, efforts began to model soil and vegetation types. Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements. The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.

Atmospheric and oceanic models

Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model.

Structure

Three-dimensional (more properly four-dimensional) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.

A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections.

Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs). They may include atmospheric chemistry.

AGCMs consist of a dynamical core which integrates the equations of fluid motion, typically for:

  • surface pressure
  • horizontal components of velocity in layers
  • temperature and water vapor in layers
  • radiation, split into solar/short wave and terrestrial/infrared/long wave
  • parameters for:

A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds.
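
As a concrete illustration of such a diagnostic step (using, purely for simplicity, the assumption of an isothermal layer of temperature T with gas constant R), hydrostatic balance and the ideal gas law give

  \[ \frac{\partial p}{\partial z} = -\rho g, \qquad \rho = \frac{p}{R T} \quad\Longrightarrow\quad p(z) = p_s \exp\!\left(-\frac{g z}{R T}\right), \]

so a predicted surface pressure p_s and predicted temperatures are enough to diagnose the pressure at any height.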

OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.

AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself.

Grid

The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude/longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution are more often used. The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 × 72 for some variables), and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 degrees in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 × 0.83 degrees respectively. These resolutions are lower than is typically used for weather forecasting. Ocean resolutions tend to be higher; for example, HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
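
The quoted figure for HadCM3 can be checked directly from the grid dimensions:

  \[ 96 \times 73 \times 19 \times 4 = 532{,}608 \approx 5 \times 10^{5} \ \text{"basic" variables.} \]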

For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.

Flux buffering

Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models that each used an implicit flux from the other component different than that component could produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might be unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use them. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections.

Convection

Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameters. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used, although a variety of different schemes are now in use. Clouds are also typically handled with a parameter, for a similar lack of scale. Limited understanding of clouds has limited the success of this strategy, but not due to some inherent shortcoming of the method.

Software

Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 2-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.

Projections

Projected annual mean surface air temperature from 1970-2100, based on SRES emissions scenario A1B, using the NOAA GFDL CM2.1 climate model (credit: NOAA Geophysical Fluid Dynamics Laboratory).

Coupled AOGCMs use transient climate simulations to project/predict climate changes under various scenarios. These can be idealised scenarios (most commonly, CO2 emissions increasing at 1%/yr) or based on recent history (usually the "IS92a" or more recently the SRES scenarios). Which scenarios are most realistic remains uncertain.
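
For the idealised 1%/yr case, the implied CO2 doubling time follows from compound growth:

  \[ 1.01^{t} = 2 \quad\Longrightarrow\quad t = \frac{\ln 2}{\ln 1.01} \approx 70\ \text{years}. \]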

The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which emissions increased at 1% per year. Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.

Future scenarios do not include unknown events – for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect.

Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels.

Emissions scenarios

In the 21st century, changes in global mean temperature are projected to vary across the world
Projected change in annual mean surface air temperature from the late 20th century to the middle 21st century, based on SRES emissions scenario A1B (credit: NOAA Geophysical Fluid Dynamics Laboratory).

For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–1999) of 1.8 °C to 4.0 °C. Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C.

In 2008 a study made climate projections using several emission scenarios. In a scenario where global emissions start to decrease by 2010 and then declined at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely.

Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C.

Model accuracy

[Images: SST errors in HadCM3; North American precipitation from various models; temperature predictions from some climate models assuming the SRES A2 emissions scenario]

AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes in Earth system models, such as the carbon cycle, so as to better model feedbacks. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings.

Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century.

A debate over how to reconcile climate model predictions that upper-air (tropospheric) warming should be greater than surface warming with observations, some of which appeared to show otherwise, was resolved in favour of the models following data revisions.

Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface. In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate.

Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models.

In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either indicates progress is required in order to monitor and predict such changes.

The precise magnitude of future changes in climate is still uncertain; for the end of the 21st century (2071 to 2100), under SRES scenario A2, the change in global average SAT from AOGCMs compared with 1961 to 1990 is +3.0 °C (5.4 °F), with a range of +1.3 to +4.5 °C (+2.3 to +8.1 °F).

The IPCC's Fifth Assessment Report asserted "very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period". However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.

Relation to weather forecasting

The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct.

Most weather forecasting is done on the basis of interpreting numerical model results. Since forecasts are typically a few days or a week and sea surface temperatures change relatively slowly, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast – typically these are taken from the output of a previous forecast, blended with observations. Weather predictions are required at higher temporal resolutions than climate projections, often sub-hourly compared to monthly or yearly averages for climate. However, because weather forecasts only cover around 10 days, the models can also be run at higher vertical and horizontal resolutions than climate models. Currently the ECMWF runs at 9 km (5.6 mi) resolution as opposed to the 100-to-200 km (62-to-124 mi) scale used by typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a mesoscale model with an 11 km (6.8 mi) resolution covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models instead of grid models. Spectral models are often used for global models because some computations in modeling can be performed faster, thus reducing run times.

Computations

Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice.

All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.

The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.

Three-dimensional (or more properly, four-dimensional, since time is also considered) GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parametrisations for processes such as convection that occur on scales too small to be resolved directly.

Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models.

Models range in complexity:

  • A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy (a minimal numerical sketch of this case follows the list)
  • This can be expanded vertically (radiative-convective models), or horizontally
  • Finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
  • Box models treat flows across and within ocean basins.
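
As a minimal sketch of the first item in this list (a single-point radiant heat transfer model), the C program below balances absorbed solar radiation against black-body emission. The solar constant and albedo values are standard approximate figures, assumed here only for illustration.

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      const double S      = 1361.0;     /* solar constant, W/m^2 (approx.)  */
      const double albedo = 0.30;       /* planetary albedo (approx.)       */
      const double sigma  = 5.670e-8;   /* Stefan-Boltzmann constant        */

      /* Absorbed flux averaged over the sphere: (1 - albedo) * S / 4.      */
      double absorbed = (1.0 - albedo) * S / 4.0;

      /* Equilibrium ("effective") temperature from sigma * T^4 = absorbed. */
      double T_eff = pow(absorbed / sigma, 0.25);

      printf("absorbed flux          = %.1f W/m^2\n", absorbed);
      printf("effective temperature  = %.1f K (about 255 K)\n", T_eff);
      printf("observed mean ~288 K; the ~33 K gap is the greenhouse effect\n");
      return 0;
  }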

Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.

Comparison with other climate models

Earth-system models of intermediate complexity (EMICs)

The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and a time step of half a day. Its oceanic submodel is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.

Radiative-convective models (RCM)

One-dimensional, radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s.

Earth system models

GCMs can form part of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon chemistry transport model may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the effects of climate change on the ozone hole to be studied.

Climatology

From Wikipedia, the free encyclopedia

Climatology is the scientific study of the climate.

Climatology (from Greek κλίμα, klima, "place, zone"; and -λογία, -logia) or climate science is the scientific study of climate, scientifically defined as weather conditions averaged over a period of time. This modern field of study is regarded as a branch of the atmospheric sciences and a subfield of physical geography, which is one of the Earth sciences. Climatology now includes aspects of oceanography and biogeochemistry.

The main methods employed by climatologists are the analysis of observations and modelling the physical laws that determine the climate. The main topics of research are the study of climate variability, mechanisms of climate changes and modern climate change. Basic knowledge of climate can be used within shorter term weather forecasting, for instance about climatic cycles such as the El Niño–Southern Oscillation (ENSO), the Madden–Julian oscillation (MJO), the North Atlantic oscillation (NAO), the Arctic oscillation (AO), the Pacific decadal oscillation (PDO), and the Interdecadal Pacific Oscillation (IPO).

Climate models are used for a variety of purposes, from study of the dynamics of the weather and climate system to projections of future climate. Weather refers to the condition of the atmosphere over a short period of time, while climate describes atmospheric conditions over an extended to indefinite period of time.

History

The Greeks began the formal study of climate; in fact the word climate is derived from the Greek word klima, meaning "slope," referring to the slope or inclination of the Earth's axis. Arguably the most influential classic text on climate was On Airs, Water and Places written by Hippocrates around 400 BCE. This work commented on the effect of climate on human health and cultural differences between Asia and Europe. This idea that climate determines which countries excel, known as climatic determinism, remained influential throughout history. Chinese scientist Shen Kuo (1031–1095) inferred that climates naturally shifted over an enormous span of time, after observing petrified bamboos found underground near Yanzhou (modern day Yan'an, Shaanxi province), a dry-climate area unsuitable for the growth of bamboo.

The invention of the thermometer and the barometer during the Scientific Revolution allowed for systematic recordkeeping, that began as early as 1640–1642 in England. Early climate researchers include Edmund Halley, who published a map of the trade winds in 1686 after a voyage to the southern hemisphere. Benjamin Franklin (1706–1790) first mapped the course of the Gulf Stream for use in sending mail from the United States to Europe. Francis Galton (1822–1911) invented the term anticyclone. Helmut Landsberg (1906–1985) fostered the use of statistical analysis in climatology, which led to its evolution into a physical science.

In the early 20th century, climatology was mostly focused on the description of regional climates. This descriptive climatology was mainly an applied science, giving farmers and other interested people statistics about what the normal weather was and how great the chances of extreme events were. To do this, climatologists had to define a climate normal, or an average of weather and weather extremes over a period of typically 30 years.

Around the middle of the 20th century, many assumptions in meteorology and climatology considered climate to be roughly constant. While scientists knew of past climate change such as the ice ages, the concept of climate as unchanging was useful in the development of a general theory of what determines climate. This started to change in the decades that followed, and while the history of climate change science started earlier, climate change only became one of the main topics of study for climatologists in the 1970s and onward.

Subfields

Map of the average temperature over 30 years. Data sets formed from the long-term average of historical weather parameters are sometimes called a "climatology".

Various subfields of climatology study different aspects of the climate. There are different categorizations of the fields in climatology. The American Meteorological Society for instance identifies descriptive climatology, scientific climatology and applied climatology as the three subcategories of climatology, a categorization based on the complexity and the purpose of the research. Applied climatologists apply their expertise to different industries such as manufacturing and agriculture.

Paleoclimatology seeks to reconstruct and understand past climates by examining records such as ice cores and tree rings (dendroclimatology). Paleotempestology uses these same records to help determine hurricane frequency over millennia. Historical climatology is the study of climate as related to human history and thus focuses only on the last few thousand years.

Boundary-layer climatology is concerned with exchanges of water, energy and momentum near the surface. Further identified subfields are physical climatology, dynamic climatology, tornado climatology, regional climatology, bioclimatology, and synoptic climatology. The study of the hydrological cycle over long time scales (hydroclimatology) is further subdivided into the subfields of snow climatology and hail climatology.

Methods

The study of contemporary climates incorporates meteorological data accumulated over many years, such as records of rainfall, temperature and atmospheric composition. Knowledge of the atmosphere and its dynamics is also embodied in models, either statistical or mathematical, which help by integrating different observations and testing how they fit together. Modeling is used for understanding past, present and potential future climates.

Climate research is made difficult by the large scale, long time periods, and complex processes which govern climate. Climate is governed by physical laws which can be expressed as differential equations. These equations are coupled and nonlinear, so that approximate solutions are obtained by using numerical methods to create global climate models. Climate is sometimes modeled as a stochastic process but this is generally accepted as an approximation to processes that are otherwise too complicated to analyze.

Climate data

The collection of long records of climate variables is essential for the study of climate. Climatology deals with the aggregate data that meteorology has collected. Scientists use both direct and indirect observations of the climate, from Earth-observing satellites and scientific instrumentation such as a global network of thermometers to prehistoric ice extracted from glaciers. As measuring technology changes over time, records of data cannot be compared directly. As cities are generally warmer than the surrounding areas, urbanization has made it necessary to constantly correct data for this urban heat island effect.

Models

Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes, from study of the dynamics of the weather and climate system to projections of future climate. All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the earth with outgoing energy as long wave (infrared) electromagnetic radiation from the earth. Any imbalance results in a change in the average temperature of the earth. Most climate models include the radiative effects of greenhouse gases such as carbon dioxide. These models predict an upward trend in the surface temperatures, as well as a more rapid increase in temperature at higher latitudes.

Models can range from relatively simple to complex:

  • A simple radiant heat transfer model that treats the earth as a single point and averages outgoing energy
  • this can be expanded vertically (radiative-convective models), or horizontally
  • Coupled atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
  • Earth system models further include the biosphere.

Topics of research

Topics that climatologists study fall roughly into three categories: climate variability, mechanisms of climate change and modern climate change.

Climatological processes

Various factors impact the average state of the atmosphere at a particular location. For instance, midlatitudes will have a pronounced seasonal cycle in temperature, whereas tropical regions show little variation in temperature over the year. Another major control on climate is continentality: the distance to major water bodies such as oceans. Oceans act as a moderating factor, so that land close to them typically has mild winters and moderate summers. The atmosphere interacts with other spheres of the climate system, with winds generating ocean currents that transport heat around the globe.

Climate classification

Classification is an important aspect of many sciences as a tool for simplifying complicated processes. Different climate classifications have been developed over the centuries, with the first ones in Ancient Greece. How climates are classified depends on the application. A wind energy producer will require different information (wind) in the classification than somebody interested in agriculture, for whom precipitation and temperature are more important. The most widely used classification, the Köppen climate classification, was developed in the late nineteenth century and is based on vegetation. It uses monthly temperature and precipitation data.

Climate variability

El Niño impacts

There are different modes of variability: recurring patterns of temperature or other climate variables. They are quantified with different indices. Much in the way the Dow Jones Industrial Average, which is based on the stock prices of 30 companies, is used to represent the fluctuations in the stock market as a whole, climate indices are used to represent the essential elements of climate. Climate indices are generally devised with the twin objectives of simplicity and completeness, and each index typically represents the status and timing of the climate factor it represents. By their very nature, indices are simple, and combine many details into a generalized, overall description of the atmosphere or ocean which can be used to characterize the factors which impact the global climate system.
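
To make the idea concrete, the C sketch below builds a toy index: it takes a fabricated series of regional-mean anomalies and standardizes it by its own mean and standard deviation. Real indices such as the SOI or Niño 3.4 each have their own precise definitions; this only illustrates how many details are condensed into one number per time step.

  #include <stdio.h>
  #include <math.h>

  #define NMONTHS 12

  int main(void)
  {
      /* Fabricated regional-mean anomalies for one year (degrees C). */
      double regional_mean[NMONTHS] =
          { 0.1, 0.3, 0.2, 0.5, 0.9, 1.2, 1.4, 1.1, 0.8, 0.4, 0.2, 0.0 };

      /* Mean and standard deviation of the series. */
      double mean = 0.0, var = 0.0;
      for (int t = 0; t < NMONTHS; t++) mean += regional_mean[t];
      mean /= NMONTHS;
      for (int t = 0; t < NMONTHS; t++) {
          double d = regional_mean[t] - mean;
          var += d * d;
      }
      double sd = sqrt(var / NMONTHS);

      /* The "index" is the standardized anomaly for each month. */
      for (int t = 0; t < NMONTHS; t++)
          printf("month %2d  index = %+.2f\n", t + 1,
                 (regional_mean[t] - mean) / sd);
      return 0;
  }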

El Niño–Southern Oscillation (ENSO) is a coupled ocean-atmosphere phenomenon in the Pacific Ocean responsible for most of the global variability in temperature, and has a cycle of between two and seven years. The North Atlantic oscillation is a mode of variability that is mainly contained in the lower atmosphere, the troposphere. The layer of atmosphere above, the stratosphere, is also capable of creating its own variability, most importantly in the Madden–Julian oscillation (MJO), which has a cycle of approximately 30 to 60 days. The Interdecadal Pacific Oscillation can create changes in the Pacific Ocean and lower atmosphere on decadal time scales.

Climatic change

Climate change occurs when changes in Earth's climate system result in new weather patterns that remain in place for an extended period of time. This length of time can range from a few decades to millions of years. The climate system receives nearly all of its energy from the sun. The climate system also gives off energy to outer space. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling. Climate change also influences the average sea level.

Modern climate change is driven by human emissions of greenhouse gases from the burning of fossil fuels, which drive up global mean surface temperatures. Rising temperatures are only one aspect of modern climate change, though, which also includes observed changes in precipitation, storm tracks and cloudiness. Warmer temperatures are driving further changes in the climate system, such as the widespread melting of glaciers, sea level rise and shifts in flora and fauna.

Differences with meteorology

In contrast to meteorology, which focuses on short term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes in long-term average weather patterns, in relation to atmospheric conditions. Climatologists study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and can help predict future climate change.

Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere.

Use in weather forecasting

A more complicated way of making a forecast, the analog technique, requires remembering a previous weather event which is expected to be mimicked by an upcoming event. What makes it a difficult technique to use is that there is rarely a perfect analog for an event in the future. Some call this type of forecasting pattern recognition; it remains a useful method of observing rainfall over data voids such as oceans, given knowledge of how satellite imagery relates to precipitation rates over land, as well as for forecasting precipitation amounts and distribution in the future. A variation on this theme, used in medium-range forecasting, is known as teleconnections, where systems in other locations are used to help pin down the location of a system within the surrounding regime. One method of using teleconnections is through climate indices such as ENSO-related phenomena.

Climate as complex networks

From Wikipedia, the free encyclopedia

The field of complex networks has emerged as an important area of science for generating novel insights into the nature of complex systems. The application of network theory to climate science is a young and emerging field. To identify and analyze patterns in global climate, scientists model climate data as complex networks.

Unlike most real-world networks where nodes and edges are well defined, in climate networks, nodes are identified as the sites in a spatial grid of the underlying global climate data set, which can be represented at various resolutions. Two nodes are connected by an edge depending on the degree of statistical similarity (that may be related to dependence) between the corresponding pairs of time-series taken from climate records. The climate network approach enables novel insights into the dynamics of the climate system over different spatial and temporal scales.

Construction of climate networks

Depending upon the choice of nodes and/or edges, climate networks may take many different forms, shapes, sizes and complexities. Tsonis et al. introduced the field of complex networks to climate. In their model, the nodes of the network were grid points of a single variable (the 500 hPa field) from NCEP/NCAR Reanalysis datasets. In order to estimate the edges between nodes, the correlation coefficient at zero time lag between all possible pairs of nodes was computed. A pair of nodes is considered connected if their correlation coefficient is above a threshold of 0.5.
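
A minimal sketch of this construction in C, assuming tiny fabricated records in place of real reanalysis data: each pair of node time series is correlated at zero lag, and a link is created when the correlation exceeds the 0.5 threshold described above.

  #include <stdio.h>
  #include <math.h>

  #define N_NODES   4      /* grid points (nodes); tiny for illustration  */
  #define N_STEPS   6      /* time steps per record                       */
  #define THRESHOLD 0.5    /* correlation threshold used by Tsonis et al. */

  /* Pearson correlation at zero time lag between two time series. */
  static double pearson(const double *x, const double *y, int n)
  {
      double mx = 0, my = 0, sxy = 0, sxx = 0, syy = 0;
      for (int t = 0; t < n; t++) { mx += x[t]; my += y[t]; }
      mx /= n; my /= n;
      for (int t = 0; t < n; t++) {
          sxy += (x[t] - mx) * (y[t] - my);
          sxx += (x[t] - mx) * (x[t] - mx);
          syy += (y[t] - my) * (y[t] - my);
      }
      return sxy / sqrt(sxx * syy);
  }

  int main(void)
  {
      /* Fabricated records standing in for, e.g., 500 hPa values at 4 sites. */
      double series[N_NODES][N_STEPS] = {
          { 1.0, 2.0, 3.0, 2.5, 1.5, 1.0 },
          { 1.1, 2.1, 2.9, 2.4, 1.6, 0.9 },
          { 3.0, 1.0, 2.0, 0.5, 2.5, 3.5 },
          { 0.2, 0.4, 0.3, 0.5, 0.1, 0.2 },
      };
      int adj[N_NODES][N_NODES] = { 0 };

      /* Connect two nodes when the correlation exceeds the threshold. */
      for (int i = 0; i < N_NODES; i++)
          for (int j = i + 1; j < N_NODES; j++) {
              double r = pearson(series[i], series[j], N_STEPS);
              if (r > THRESHOLD)
                  adj[i][j] = adj[j][i] = 1;
              printf("r(%d,%d) = %+.2f  link = %d\n", i, j, r, adj[i][j]);
          }
      return 0;
  }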

Havlin's team introduced a weighted-links method that considers (i) the time delay of the link, (ii) the maximum of the cross-correlation at that time delay, and (iii) the level of noise in the cross-correlation function.

Steinhaeuser and colleagues introduced multivariate networks in climate by constructing networks from several climate variables separately and capturing their interaction in a multivariate predictive model. Their studies demonstrated that, in the context of climate, extracting predictors based on cluster attributes yields informative precursors that improve predictive skill.

Kawale et al. presented a graph-based approach to finding dipoles in pressure data. Given the importance of teleconnections, this methodology has the potential to provide significant insights.

Imme et al. introduced a new type of network construction in climate based on temporal probabilistic graphical models, which provides an alternative viewpoint by focusing on information flow within the network over time.

Applications of climate networks

Climate networks enable insights into the dynamics of the climate system over many spatial scales. The local degree centrality and related measures have been used to identify super-nodes and to associate them with known dynamical interrelations in the atmosphere, called teleconnection patterns. It was observed that climate networks possess “small world” properties owing to long-range spatial connections.

The temperatures in different zones in the world do not show significant changes due to El Niño except when measured in a restricted area in the Pacific Ocean. Yamasaki et al. found, in contrast, that the dynamics of a climate network based on the same temperature records in various geographical zones in the world is significantly influenced by El Niño. During El Niño many links of the network are broken, and the number of surviving links comprises a specific and sensitive measure for El Niño events. While during non-El Niño periods these links which represent correlations between temperatures in different sites are more stable, fast fluctuations of the correlations observed during El Niño periods cause the links to break.

Moreover, Gozolchiani et al. presented the structure and evolution of the climate network in different geographical zones and found that the network responds in a unique way to El Niño events. They found that when El Niño events begin, the El Niño basin loses almost all dependence on its surroundings and becomes autonomous. The formation of an autonomous basin is the missing link for understanding the seemingly contradictory phenomena of the previously noted weakening of interdependencies in the climate network during El Niño and the known impact of the anomalies inside the El Niño basin on the global climate system.

Steinhaeuser et al. applied complex networks to explore the multivariate and multi-scale dependence in climate data. Findings of the group suggested a close similarity of observed dependence patterns in multiple variables over multiple time and spatial scales.

Tsonis and Roebber investigated the coupling architecture of the climate network. It was found that the overall network emerges from intertwined subnetworks. One subnetwork operates at higher altitudes and the other in the tropics, while the equatorial subnetwork acts as an agent linking the two hemispheres. Though both subnetworks possess the small-world property, they differ significantly from each other in network properties such as degree distribution.

Donges et al. applied climate networks to physical and nonlinear dynamical interpretations of the climate. The team used a measure of node centrality, betweenness centrality (BC), to demonstrate wave-like structures in the BC fields of climate networks constructed from monthly averaged reanalysis data and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature (SAT) data.

The pattern of local daily fluctuations of climate fields such as temperature and geopotential height is not stable and is hard to predict. Surprisingly, Berezin et al. found that the observed relations between such fluctuations in different geographical regions yield a very robust network pattern that remains highly stable over time.

Ludescher et al. found evidence that a large-scale cooperative mode, linking the El Niño basin (equatorial Pacific corridor) and the rest of the ocean, builds up in the calendar year before the warming event. On this basis, they developed an efficient 12-month forecasting scheme for El Niño events. The global impact of El Niño was studied using climate networks by Jing-fang et al.

The connectivity pattern of networks based on ground-level temperature records shows a dense stripe of links in the extratropics of the southern hemisphere. Wang et al. showed that statistical categorization of these links yields a clear association with the pattern of atmospheric Rossby waves, one of the major mechanisms associated with the weather system and with planetary-scale energy transport. It is shown that alternating densities of negative and positive links are arranged at half Rossby wave distances of around 3,500, 7,000, and 10,000 km and are aligned with the expected direction of energy flow, the distribution of time delays, and the seasonality of these waves. In addition, long-distance links associated with Rossby waves are the most dominant links in the climate network.

Different definitions of links in climate networks may lead to considerably different network topologies. Utilizing detrended fluctuation analysis, shuffled surrogates, and a separate analysis of maritime and continental records, Guez et al. found that one of the major influences on the structure of climate networks is the existence of strong autocorrelations in the records, which may introduce spurious links. This explains why different methods can lead to different climate network topologies.
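The effect of autocorrelation on spurious links can be illustrated directly. In the hedged sketch below, pairs of independent but strongly autocorrelated AR(1) series show a much wider spread of sample cross-correlations than their shuffled surrogates; the AR(1) model, its parameter, and the series length are illustrative assumptions, not the authors' analysis.

# Demonstration: strong autocorrelation widens the null distribution of
# cross-correlations between independent series, which is how spurious links
# can enter a network. Shuffling destroys the autocorrelation.
import numpy as np

rng = np.random.default_rng(7)

def ar1(n, phi):
    """Generate an AR(1) series x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n, trials = 500, 500
corr_ar, corr_shuffled = [], []
for _ in range(trials):
    a, b = ar1(n, 0.95), ar1(n, 0.95)            # independent but strongly autocorrelated
    corr_ar.append(np.corrcoef(a, b)[0, 1])
    rng.shuffle(a); rng.shuffle(b)               # shuffled surrogates: same values, no memory
    corr_shuffled.append(np.corrcoef(a, b)[0, 1])

print("std of correlations, autocorrelated pairs:", round(np.std(corr_ar), 3))
print("std of correlations, shuffled surrogates:", round(np.std(corr_shuffled), 3))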

Teleconnection path

Teleconnections play an important role in climate dynamics. A climate network method was developed to identify the direct paths of teleconnections across the globe.

Teleconnections are spatial patterns in the atmosphere that link weather and climate anomalies over large distances across the globe. Teleconnections are persistent, lasting for one to two weeks and often much longer, and they are recurrent, as similar patterns tend to occur repeatedly. The presence of teleconnections is associated with changes in temperature, wind, precipitation, and other atmospheric variables of greatest societal interest.

Computational issues and challenges

There are numerous computational challenges that arise at various stages of network construction and analysis in the field of climate networks:

  1. Calculating the pair-wise correlations between all grid points is a non-trivial task (a vectorized sketch follows this list).
  2. The computational demands of network construction grow with the resolution of the spatial grid.
  3. Generating predictive models from the data poses additional challenges.
  4. Including lag and lead effects over space and time is a non-trivial task.
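For challenge 1, a common remedy is to standardize every series and obtain the full correlation matrix from a single matrix product rather than a double loop over grid-point pairs. The sketch below is a minimal illustration under an assumed grid size and record length, not a production implementation.

# Hedged sketch for challenge 1: all pair-wise correlations in one vectorized step.
# Standardizing each series makes the correlation matrix equal to X X^T / T.
import numpy as np

rng = np.random.default_rng(3)
n_points, n_steps = 1000, 365                    # hypothetical grid points and daily steps
fields = rng.standard_normal((n_points, n_steps))

anom = fields - fields.mean(axis=1, keepdims=True)   # remove each series' mean
anom /= anom.std(axis=1, keepdims=True)              # scale to unit variance
corr = anom @ anom.T / n_steps                       # n_points x n_points correlation matrix

print(corr.shape, "max off-diagonal correlation:",
      round(np.abs(corr - np.eye(n_points)).max(), 3))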

Catatonia

From Wikipedia, the free encyclopedia

Catatonia
Other names: Catatonic syndrome
A patient in catatonic stupor
Specialty: Psychiatry

Catatonia is a state of psycho-motor immobility and behavioral abnormality. It was first described in 1874 by Karl Ludwig Kahlbaum as Die Katatonie oder das Spannungsirresein (Catatonia or Tension Insanity).

Though catatonia has historically been related to schizophrenia (catatonic schizophrenia), it is now known that catatonic symptoms are nonspecific and may be observed in other mental disorders and neurological conditions. In the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), catatonia is not recognized as a separate disorder, but is associated with such psychiatric conditions as schizophrenia (catatonic type), bipolar disorder, post-traumatic stress disorder, depression, narcolepsy, drug abuse, and overdose. It may also be seen in many medical disorders, including infections (such as encephalitis), autoimmune disorders, meningitis, focal neurological lesions (including strokes), alcohol withdrawal, abrupt or overly rapid benzodiazepine withdrawal, cerebrovascular disease, neoplasms, head injury, and some metabolic conditions (homocystinuria, diabetic ketoacidosis, hepatic encephalopathy, and hypercalcaemia).

It can be an adverse reaction to prescribed medication and is similar to encephalitis lethargica and neuroleptic malignant syndrome. There are a variety of treatments available. Benzodiazepines are a first-line treatment strategy. Electroconvulsive therapy is sometimes used. There is growing evidence of the effectiveness of the NMDA receptor antagonists amantadine and memantine for benzodiazepine-resistant catatonia. Antipsychotics are sometimes employed, but they can worsen symptoms and have serious adverse effects.

Signs and symptoms

Catatonia can be stuporous or excited. Stuporous catatonia is characterized by immobility during which patients may show reduced responsiveness to the environment (stupor), rigid poses (posturing), an inability to speak (mutism), and waxy flexibility (in which they maintain positions after being placed in them by someone else). Mutism may be partial and patients may repeat meaningless phrases (verbigeration) or speak only to repeat what someone else says (echolalia). People with stuporous catatonia may also show purposeless, repetitive movements (stereotypy). Excited catatonia is characterized by bizarre, non–goal-directed hyperactivity and impulsiveness.

Catatonia can occur in various psychiatric disorders, including major depressive disorder, bipolar disorder, schizophrenia, schizoaffective disorder, schizophreniform disorder, brief psychotic disorder, and substance-induced psychotic disorder. It appears as the Kahlbaum syndrome (motionless catatonia), malignant catatonia (neuroleptic malignant syndrome, toxic serotonin syndrome), and excited forms (delirious mania, catatonic excitement, oneirophrenia). It also is related to autism spectrum disorders.

Diagnosis

According to the DSM-5, "Catatonia Associated with Another Mental Disorder (Catatonia Specifier)" (code 293.89 [F06.1]) is diagnosed if the clinical picture is dominated by at least three of the following:

  • stupor: no psycho-motor activity; not actively relating to environment
  • catalepsy: passive induction of a posture held against gravity
  • waxy flexibility: allowing positioning by examiner and maintaining that position
  • mutism: no, or very little, verbal response (exclude if known aphasia)
  • negativism: opposition or no response to instructions or external stimuli
  • posturing: spontaneous and active maintenance of a posture against gravity
  • mannerisms that are odd, circumstantial caricatures of normal actions
  • stereotypy: repetitive, abnormally frequent, non-goal-directed movements
  • agitation, not influenced by external stimuli
  • grimacing: keeping a fixed facial expression
  • echolalia: mimicking another's speech
  • echopraxia: mimicking another's movements.

For other disorders, the additional code 293.89 [F06.1] is used to indicate the presence of comorbid catatonia.

If catatonic symptoms are present but do not form the catatonic syndrome, a medication- or substance-induced aetiology should first be considered.

Subtypes

Although catatonia can be divided into various subtypes, the natural history of catatonia is often fluctuant and different states can exist within the same individual.

  • Stupor is a motionless state in which one is oblivious of, or does not react to, external stimuli. Motor activity is almost non-existent. People in this state make little or no eye contact with others and may be mute and rigid. One may remain in one position for a long period of time and then move directly into another position immediately afterwards.
  • Catatonic excitement is a state of constant purposeless agitation and excitation. People in this state are extremely hyperactive and may have delusions and hallucinations. Catatonic excitement is commonly cited as one of the most dangerous mental states in psychiatry.
  • Malignant catatonia is an acute onset of excitement, fever, autonomic instability, and delirium and may be fatal.

Rating scale

Various rating scales for catatonia have been developed; the most commonly used is the Bush-Francis Catatonia Rating Scale (BFCRS). A diagnosis can be supported by the lorazepam challenge or the zolpidem challenge. Although proven useful in the past, barbiturates are no longer commonly used in psychiatry, leaving benzodiazepines or ECT as the usual options.

Treatment

Initial treatment is aimed at providing symptomatic relief. Benzodiazepines are the first line of treatment, and high doses are often required. A test dose of intramuscular lorazepam will often result in marked improvement within half an hour. In France, zolpidem has also been used in diagnosis, and response may occur within the same time period. Ultimately the underlying cause needs to be treated.

Electroconvulsive therapy (ECT) is an effective treatment for catatonia; however, it has been pointed out that further high-quality randomized controlled trials are needed to evaluate the efficacy, tolerability, and protocols of ECT in catatonia.

Antipsychotics should be used with care, as they can worsen catatonia and can cause neuroleptic malignant syndrome, a dangerous condition that can mimic catatonia and requires immediate discontinuation of the antipsychotic.

Excessive glutamate activity is believed to be involved in catatonia; when first-line treatment options fail, NMDA antagonists such as amantadine or memantine may be used. Amantadine may have an increased incidence of tolerance with prolonged use and can cause psychosis, due to its additional effects on the dopamine system. Memantine has a more targeted pharmacological profile for the glutamate system, a reduced incidence of psychosis, and may therefore be preferred for individuals who cannot tolerate amantadine. Topiramate is another treatment option for resistant catatonia; it produces its therapeutic effects through glutamate antagonism via modulation of AMPA receptors.
