
Tuesday, May 21, 2024

Sea level rise

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Sea_level_rise
The global average sea level has risen about 250 millimetres (9.8 in) since 1880.

Between 1901 and 2018, average global sea level rose by 15–25 cm (6–10 in), an average of 1–2 mm (0.039–0.079 in) per year. This rate accelerated to 4.62 mm (0.182 in)/yr for the decade 2013–2022. Climate change due to human activities is the main cause. Between 1993 and 2018, thermal expansion of water accounted for 42% of sea level rise. Melting temperate glaciers accounted for 21%, while polar glaciers in Greenland accounted for 15% and those in Antarctica for 8%.

Sea level rise lags behind changes in the Earth's temperature, and sea level rise will therefore continue to accelerate between now and 2050 in response to warming that has already happened. What happens after that depends on human greenhouse gas emissions. Sea level rise would slow down between 2050 and 2100 if there are very deep cuts in emissions. It could then reach slightly over 30 cm (1 ft) from now by 2100. With high emissions it would accelerate, and could rise by 1.01 m (3+1/3 ft) or even 1.6 m (5+1/3 ft) by then. In the long run, sea level rise would amount to 2–3 m (7–10 ft) over the next 2000 years if warming amounts to 1.5 °C (2.7 °F). It would be 19–22 metres (62–72 ft) if warming peaks at 5 °C (9.0 °F).

Rising seas affect every coastal and island population on Earth. This can be through flooding, higher storm surges, king tides, and tsunamis. There are many knock-on effects. They lead to loss of coastal ecosystems like mangroves. Crop yields may fall because of increasing salt levels in irrigation water. Damage to ports disrupts sea trade. The sea level rise projected by 2050 will expose places currently inhabited by tens of millions of people to annual flooding. Without a sharp reduction in greenhouse gas emissions, this may increase to hundreds of millions in the latter decades of the century. Areas not directly exposed to rising sea levels could be vulnerable to large-scale migration and economic disruption.

Local factors like tidal range or land subsidence will greatly affect the severity of impacts. The varying resilience and adaptive capacity of ecosystems and countries will also result in more or less pronounced impacts. For instance, sea level rise in the United States (particularly along the US East Coast) is likely to be 2 to 3 times greater than the global average by the end of the century. Yet, of the 20 countries with the greatest exposure to sea level rise, 12 are in Asia, including Indonesia, Bangladesh and the Philippines. The greatest impact on human populations in the near term will occur in the low-lying Caribbean and Pacific islands. Sea level rise will make many of them uninhabitable later this century.

Societies can adapt to sea level rise in multiple ways: managed retreat, accommodating coastal change, or protecting against sea level rise. Protection can rely on hard construction practices like seawalls or on soft approaches such as dune rehabilitation and beach nourishment. Sometimes these adaptation strategies go hand in hand. At other times choices must be made among different strategies. A managed retreat strategy is difficult if an area's population is increasing rapidly. This is a particularly acute problem for Africa. Poorer nations may also struggle to implement the same approaches to adapt to sea level rise as richer states. Sea level rise at some locations may be compounded by other environmental issues, such as subsidence in so-called sinking cities. Coastal ecosystems typically adapt to rising sea levels by moving inland, but natural or artificial barriers may make that impossible.

Observations

Between 1901 and 2018, the global mean sea level rose by about 20 cm (7.9 in). More precise data gathered from satellite radar measurements found a rise of 7.5 cm (3.0 in) from 1993 to 2017 (average of 2.9 mm (0.11 in)/yr). This accelerated to 4.62 mm (0.182 in)/yr for 2013–2022.
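
These figures imply the average rates quoted above: total rise divided by elapsed time. A minimal sketch in Python (note that the quoted 2.9 mm/yr comes from fitting a trend to the full satellite record, so simple endpoint division gives a slightly different number):

    # Back-of-the-envelope check of the average rates quoted above.
    def average_rate_mm_per_year(total_rise_mm, start_year, end_year):
        """Average rate of sea level rise over a period, in mm/yr."""
        return total_rise_mm / (end_year - start_year)

    print(average_rate_mm_per_year(200, 1901, 2018))  # ~1.7 mm/yr for 20 cm over 1901-2018
    print(average_rate_mm_per_year(75, 1993, 2017))   # ~3.1 mm/yr for 7.5 cm over 1993-2017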

Regional variations

Sea level rise is not uniform around the globe. Some land masses are moving up or down as a consequence of subsidence (land sinking or settling) or post-glacial rebound (land rising as melting ice reduces weight). Therefore, local relative sea level rise may be higher or lower than the global average. Changing ice masses also affect the distribution of sea water around the globe through gravity.

When a glacier or ice sheet melts, it loses mass. This reduces its gravitational pull. In some places near current and former glaciers and ice sheets, this has caused water levels to drop. At the same time water levels will increase more than average further away from the ice sheet. Thus ice loss in Greenland affects regional sea level differently than the equivalent loss in Antarctica. On the other hand, the Atlantic is warming at a faster pace than the Pacific. This has consequences for Europe and the U.S. East Coast. The East Coast sea level is rising at 3–4 times the global average. Scientists have linked extreme regional sea level rise on the US Northeast Coast to the downturn of the Atlantic meridional overturning circulation (AMOC).

Many ports, urban conglomerations, and agricultural regions stand on river deltas. Here land subsidence contributes to much higher relative sea level rise. Unsustainable extraction of groundwater and oil and gas is one cause. Levees and other flood management practices are another. They prevent sediments from accumulating. These would otherwise compensate for the natural settling of deltaic soils. Estimates for total human-caused subsidence in the Rhine-Meuse-Scheldt delta (Netherlands) are 3–4 m (10–13 ft), over 3 m (10 ft) in urban areas of the Mississippi River Delta (New Orleans), and over 9 m (30 ft) in the Sacramento–San Joaquin River Delta. On the other hand, relative sea level around the Hudson Bay in Canada and the northern Baltic is falling due to post-glacial isostatic rebound.

Projections

A comparison of SLR in six parts of the US. The Gulf Coast and East Coast see the most SLR, whereas the West Coast sees the least.
NOAA predicts different levels of sea level rise through 2050 for several US coastlines.

There are two complementary ways to model sea level rise (SLR) and project the future. The first uses process-based modeling. This combines all relevant and well-understood physical processes in a global physical model. This approach calculates the contributions of ice sheets with an ice-sheet model and computes rising sea temperature and expansion with a general circulation model. The processes are imperfectly understood, but this approach has the advantage of predicting non-linearities and long delays in the response, which studies of the recent past will miss.

The other approach employs semi-empirical techniques. These use historical geological data to determine likely sea level responses to a warming world, and some basic physical modeling. These semi-empirical sea level models rely on statistical techniques. They use relationships between observed past contributions to global mean sea level and temperature. Scientists developed this type of modeling because most physical models in previous Intergovernmental Panel on Climate Change (IPCC) literature assessments had underestimated the amount of sea level rise compared to 20th century observations.

Projections for the 21st century

Historical sea level reconstruction and projections up to 2100 published in 2017 by the U.S. Global Change Research Program. RCPs are different scenarios for future concentrations of greenhouse gases.

The IPCC is the largest and most influential scientific organization on climate change, and since 1990 it has provided several plausible scenarios of 21st-century sea level rise in each of its major reports. The differences between scenarios are mainly due to uncertainty about future greenhouse gas emissions, which depend on future economic developments and on political action that is hard to predict. Each scenario provides an estimate for sea level rise as a range with a lower and upper limit to reflect the unknowns. The scenarios in the 2013–2014 Fifth Assessment Report (AR5) were called Representative Concentration Pathways (RCPs), and the scenarios in the IPCC Sixth Assessment Report (AR6) are known as Shared Socioeconomic Pathways (SSPs). A large difference between the two was the addition of SSP1-1.9 to AR6, which represents meeting the most ambitious Paris climate agreement goal of 1.5 °C (2.7 °F). In that case, the likely range of sea level rise by 2100 is 28–55 cm (11–21+1/2 in).

The lowest scenario in AR5, RCP2.6, would see greenhouse gas emissions low enough to meet the goal of limiting warming by 2100 to 2 °C (3.6 °F). It shows sea level rise in 2100 of about 44 cm (17 in), with a range of 28–61 cm (11–24 in). The "moderate" scenario, in which CO2 emissions take a decade or two to peak and atmospheric concentrations do not plateau until the 2070s, is called RCP4.5. Its likely range of sea level rise is 36–71 cm (14–28 in). Under the highest scenario, the RCP8.5 pathway, sea level would rise between 52 and 98 cm (20+1/2 and 38+1/2 in). AR6 had equivalents for both scenarios, but it estimated larger sea level rise under each. In AR6, the SSP1-2.6 pathway results in a range of 32–62 cm (12+1/2–24+1/2 in) by 2100. The "moderate" SSP2-4.5 results in a 44–76 cm (17+1/2–30 in) range by 2100, and SSP5-8.5 leads to 65–101 cm (25+1/2–40 in).

A set of older (2007–2012) projections of sea level rise. There was a wide range of estimates.
Sea level rise projections for the years 2030, 2050 and 2100 from 2007 to 2012

Further, AR5 was criticized by multiple researchers for excluding detailed estimates of the impact of "low-confidence" processes like marine ice sheet and marine ice cliff instability, which can substantially accelerate ice loss and potentially add "tens of centimeters" to sea level rise within this century. AR6 includes a version of SSP5-8.5 where these processes take place; in that case, sea level rise of up to 1.6 m (5+1/3 ft) by 2100 could not be ruled out. The general increase of projections in AR6 came about because observed ice-sheet erosion in Greenland and Antarctica had matched the upper end of the AR5 projections by 2020, and because the AR5 projections were found to be too slow compared with an extrapolation of observed sea level rise trends, whereas the subsequent reports had improved in this regard.

Notably, some scientists believe that ice sheet processes may accelerate sea level rise even at temperatures below the highest possible scenario, though not as much. For instance, a 2017 study from University of Melbourne researchers suggested that these processes would increase RCP2.6 sea level rise by about one quarter, increase RCP4.5 sea level rise by one half, and practically double RCP8.5 sea level rise. A 2016 study led by Jim Hansen hypothesized that the collapse of vulnerable ice sheet sections could lead to near-term exponential acceleration of sea level rise, with a doubling time of 10, 20, or 40 years. Such acceleration would lead to multi-metre sea level rise in 50, 100, or 200 years, respectively, but it remains a minority view amongst the scientific community.

For comparison, a major scientific survey of 106 experts in 2020 found that even when accounting for instability processes, they estimated a median sea level rise of 45 cm (17+1/2 in) by 2100 for RCP2.6, with a 5%–95% range of 21–82 cm (8+1/2–32+1/2 in). For RCP8.5, the experts estimated a median of 93 cm (36+1/2 in) by 2100 and a 5%–95% range of 45–165 cm (17+1/2–65 in). Similarly, NOAA in 2022 suggested that there is a 50% probability of 0.5 m (19+1/2 in) of sea level rise by 2100 under 2 °C (3.6 °F) of warming, which increases to >80% and >99% under 3–5 °C (5.4–9.0 °F). A 2019 elicitation of 22 ice sheet experts suggested a median SLR of 30 cm (12 in) by 2050 and 70 cm (27+1/2 in) by 2100 in the low-emission scenario, and medians of 34 cm (13+1/2 in) by 2050 and 110 cm (43+1/2 in) by 2100 in a high-emission scenario. They also estimated a small chance of sea levels exceeding 1 metre by 2100 even in the low-emission scenario, and of going beyond 2 metres in the high-emission scenario; the latter would cause the displacement of 187 million people.

Post-2100 sea level rise

If countries cut greenhouse gas emissions significantly (lowest trace), sea level rise by 2100 will be limited to 0.3 to 0.6 meters (1–2 feet). However, in a worst-case scenario (top trace), sea levels could rise 5 meters (16 feet) by the year 2300.
A map showing major SLR impact in south-east Asia, Northern Europe and the East Coast of the US
Map of the Earth with a long-term 6-metre (20 ft) sea level rise represented in red (uniform distribution, actual sea level rise will vary regionally and local adaptation measures will also have an effect on local sea levels).

Even if the temperature stabilizes, significant sea-level rise (SLR) will continue for centuries, consistent with paleo records of sea level rise. This is due to the high level of inertia in the carbon cycle and the climate system, owing to factors such as the slow diffusion of heat into the deep ocean, which lengthens the climate response time. After 500 years, sea level rise from thermal expansion alone may have reached only half of its eventual level. Models suggest this may lie within ranges of 0.5–2 m (1+1/2–6+1/2 ft). Additionally, tipping points of the Greenland and Antarctic ice sheets are likely to play a larger role over such timescales. Ice loss from Antarctica is likely to dominate very long-term SLR, especially if the warming exceeds 2 °C (3.6 °F). Continued carbon dioxide emissions from fossil fuel sources could cause additional tens of metres of sea level rise over the next millennia. The available fossil fuel on Earth is enough to melt the entire Antarctic ice sheet, causing about 58 m (190 ft) of sea level rise.

Based on research into multimillennial sea level rise, AR6 provides medium-agreement estimates for the amount of sea level rise over the next 2,000 years, depending on the peak of global warming. These project that:

  • At a warming peak of 1.5 °C (2.7 °F), global sea levels would rise 2–3 m (6+1/2–10 ft)
  • At a warming peak of 2 °C (3.6 °F), sea levels would rise 2–6 m (6+1/2–19+1/2 ft)
  • At a warming peak of 5 °C (9.0 °F), sea levels would rise 19–22 m (62+1/2–72 ft)

Sea levels would continue to rise for several thousand years after emissions cease, due to the slow response of the climate system to heat. The same estimates on a timescale of 10,000 years project that:

  • At a warming peak of 1.5 °C (2.7 °F), global sea levels would rise 6–7 m (19+1/2–23 ft)
  • At a warming peak of 2 °C (3.6 °F), sea levels would rise 8–13 m (26–42+1/2 ft)
  • At a warming peak of 5 °C (9.0 °F), sea levels would rise 28–37 m (92–121+1/2 ft)

With better models and observational records, several studies have attempted to project SLR for the centuries immediately after 2100, though this remains largely speculative. An April 2019 expert elicitation asked 22 experts about total sea level rise projections for the years 2200 and 2300 under its high, 5 °C warming scenario. It ended up with 90% confidence intervals of −10 cm (−4 in) to 740 cm (24+1/2 ft) and −9 cm (−3+1/2 in) to 970 cm (32 ft), respectively. Negative values represent the extremely low-probability possibility of very large gains in ice sheet surface mass balance due to a climate change-induced increase in precipitation. An elicitation of 106 experts led by Stefan Rahmstorf also included 2300 for RCP2.6 and RCP8.5. The former had a median of 118 cm (46+1/2 in) and a 5%–95% range of 24–311 cm (9+1/2–122+1/2 in). The latter had a median of 329 cm (129+1/2 in) and a 5%–95% range of 88–783 cm (34+1/2–308+1/2 in).

By 2021, AR6 was also able to provide estimates for sea level rise in 2150 alongside the 2100 estimates for the first time. It showed that keeping warming at 1.5 °C under the SSP1-1.9 scenario would result in sea level rise by 2150 in the 17–83% range of 37–86 cm (14+1/2–34 in). In the SSP1-2.6 pathway the range would be 46–99 cm (18–39 in), for SSP2-4.5 it would be 66–133 cm (26–52+1/2 in), and for SSP5-8.5 it would be 98–188 cm (38+1/2–74 in). It stated that the "low-confidence, high-impact" scenario projected 0.63–1.60 m (2–5 ft) of mean sea level rise by 2100, and that by 2150, the total sea level rise in this scenario would be in the range of 0.98–4.82 m (3–16 ft). AR6 also provided lower-confidence estimates for year-2300 sea level rise under SSP1-2.6 and SSP5-8.5 with various impact assumptions. In the best case, SSP1-2.6 with no ice sheet acceleration after 2100, the estimate was only 0.8–2.0 metres (2.6–6.6 ft). In the worst estimated scenario, SSP5-8.5 with marine ice cliff instability, the projected range for total sea level rise was 9.5–16.2 metres (31–53 ft) by the year 2300.

A 2018 paper estimated that sea level rise in 2300 would increase by a median of 20 cm (8 in) for every five years that the peaking of CO2 emissions is delayed, with a 5% likelihood of a 1 m (3+1/2 ft) increase for the same delay. The same estimate found that if temperatures stabilized below 2 °C (3.6 °F), 2300 sea level rise would still exceed 1.5 m (5 ft). Early net zero and slowly falling temperatures could limit it to 70–120 cm (27+1/2–47 in).

Measurements

Variations in the amount of water in the oceans, changes in its volume, or varying land elevation compared to the sea surface can all drive sea level changes. Over a consistent time period, assessments can attribute contributions to sea level rise and provide early indications of changes in trajectory, which helps to inform adaptation plans. The different techniques used to measure changes in sea level do not measure exactly the same thing. Tide gauges can only measure relative sea level, while satellites can also measure absolute sea level changes. To get precise measurements for sea level, researchers studying the ice and oceans factor in ongoing deformations of the solid Earth. They look in particular at landmasses still rising from the retreat of past ice masses, and at the Earth's gravity and rotation.

Satellites

Jason-1 continued the sea surface measurements started by TOPEX/Poseidon. It was followed by the Ocean Surface Topography Mission on Jason-2, and by Jason-3.

Since the launch of TOPEX/Poseidon in 1992, an overlapping series of altimetric satellites has been continuously recording the sea level and its changes. These satellites can measure the hills and valleys in the sea caused by currents and detect trends in their height. To measure the distance to the sea surface, the satellites send a microwave pulse towards Earth and record the time it takes to return after reflecting off the ocean's surface. Microwave radiometers correct the additional delay caused by water vapor in the atmosphere. Combining these data with the location of the spacecraft determines the sea-surface height to within a few centimetres. These satellite measurements have estimated rates of sea level rise for 1993–2017 at 3.0 ± 0.4 millimetres (1/8 ± 1/64 in) per year.
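
The ranging principle is simple even though the operational processing is not: the satellite-to-surface distance is half the round-trip time multiplied by the speed of light, and sea-surface height follows by subtracting that distance from the satellite's known orbital height. A minimal sketch, with the atmospheric and instrument corrections described above omitted, and the ~1,336 km orbit of the TOPEX/Poseidon-class missions used for illustration:

    # Illustrative radar altimeter range calculation (corrections omitted).
    C = 299_792_458.0  # speed of light, m/s

    def altimeter_range_m(round_trip_time_s):
        """Satellite-to-sea-surface distance from the pulse round-trip time."""
        return C * round_trip_time_s / 2.0

    # A pulse from ~1,336 km altitude returns after roughly 8.9 milliseconds.
    t = 2 * 1_336_000 / C
    print(altimeter_range_m(t))  # ~1,336,000 m
    # Sea-surface height = satellite orbital height - this range.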

Satellites are useful for measuring regional variations in sea level. An example is the substantial rise between 1993 and 2012 in the western tropical Pacific. This sharp rise has been linked to increasing trade winds. These occur when the Pacific Decadal Oscillation (PDO) and the El Niño–Southern Oscillation (ENSO) change from one state to the other. The PDO is a basin-wide climate pattern consisting of two phases, each commonly lasting 10 to 30 years. The ENSO has a shorter period of 2 to 7 years.

Tide gauges

Between 1993 and 2018, the mean sea level has risen across most of the world ocean (blue colors).

The global network of tide gauges is the other important source of sea-level observations. Compared to the satellite record, this record has major spatial gaps but covers a much longer period. Coverage of tide gauges started mainly in the Northern Hemisphere, and data for the Southern Hemisphere remained scarce up to the 1970s. The longest-running sea-level measurements, NAP or Amsterdam Ordnance Datum, were established in 1675 in Amsterdam. Records are also extensive in Australia, and include measurements by Thomas Lempriere, an amateur meteorologist, beginning in 1837. Lempriere established a sea-level benchmark on a small cliff on the Isle of the Dead near the Port Arthur convict settlement in 1841.

Together with satellite data for the period after 1992, this network established that global mean sea level rose 19.5 cm (7.7 in) between 1870 and 2004 at an average rate of about 1.44 mm/yr. (For the 20th century the average is 1.7 mm/yr.) By 2018, data collected by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) had shown that the global mean sea level was rising by 3.2 mm (1/8 in) per year, double the average 20th-century rate. The 2023 World Meteorological Organization report found further acceleration to 4.62 mm/yr over the 2013–2022 period. These observations help to check and verify predictions from climate change simulations.

Regional differences are also visible in the tide gauge data. Some are caused by local sea level differences. Others are due to vertical land movements. In Europe, only some land areas are rising while the others are sinking. Since 1970, most tidal stations have measured higher seas. However sea levels along the northern Baltic Sea have dropped due to post-glacial rebound.

Past sea level rise

Changes in sea levels since the end of the last glacial episode

An understanding of past sea level is an important guide to where current changes in sea level will end up. In the recent geological past, thermal expansion from increased temperatures and changes in land ice were the dominant causes of sea level rise. The last time the Earth was 2 °C (3.6 °F) warmer than pre-industrial temperatures was 120,000 years ago, when warming due to Milankovitch cycles (changes in the amount of sunlight due to slow changes in the Earth's orbit) caused the Eemian interglacial. Sea levels during that warmer interglacial were at least 5 m (16 ft) higher than now. The Eemian warming was sustained over a period of thousands of years, and the size of the rise in sea level implies a large contribution from the Antarctic and Greenland ice sheets. Around three million years ago, atmospheric carbon dioxide levels of about 400 parts per million (similar to the 2000s) had increased temperatures by 2–3 °C (3.6–5.4 °F). This temperature increase eventually melted one third of Antarctica's ice sheet, causing sea levels to rise 20 metres above preindustrial levels.

Since the Last Glacial Maximum, about 20,000 years ago, sea level has risen by more than 125 metres (410 ft). Rates vary from less than 1 mm/year during the pre-industrial era to 40+ mm/year when major ice sheets over Canada and Eurasia melted. Meltwater pulses are periods of fast sea level rise caused by the rapid disintegration of these ice sheets. The rate of sea level rise started to slow down about 8,200 years before today. Sea level was almost constant for the last 2,500 years. The recent trend of rising sea level started at the end of the 19th or beginning of the 20th century.

Causes

A graph showing ice loss from sea ice, ice shelves and land ice. Land ice loss contributes to SLR.
Earth lost 28 trillion tonnes of ice between 1994 and 2017: ice sheets and glaciers raised the global sea level by 34.6 ± 3.1 mm. The rate of ice loss has risen by 57% since the 1990s, from 0.8 to 1.2 trillion tonnes per year.

The three main reasons warming causes global sea level to rise are the expansion of oceans due to heating, water inflow from melting ice sheets and water inflow from glaciers. Glacier retreat and ocean expansion have dominated sea level rise since the start of the 20th century. Some of the losses from glaciers are offset when precipitation falls as snow, accumulates and over time forms glacial ice. If precipitation, surface processes and ice loss at the edge balance each other, sea level remains the same. Because this precipitation began as water vapor that evaporated from the ocean surface, effects of climate change on the water cycle can even increase ice build-up. However, this effect is not enough to fully offset ice losses, and sea level rise continues to accelerate.

The contributions of the two large ice sheets, in Greenland and Antarctica, are likely to increase in the 21st century. They store most of the land ice (~99.5%) and have a sea-level equivalent (SLE) of 7.4 m (24 ft 3 in) for Greenland and 58.3 m (191 ft 3 in) for Antarctica. Thus, melting of all the ice on Earth would result in about 70 m (229 ft 8 in) of sea level rise, although this would require at least 10,000 years and up to 10 °C (18 °F) of global warming.
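
These sea-level equivalents rest on a simple conversion: spreading a given mass of meltwater over the ocean's surface area (~3.618 × 10^8 km²) shows that roughly 362 gigatonnes of ice raise global mean sea level by about 1 mm. A minimal sketch, assuming a fixed ocean area and uniform distribution:

    # Convert ice mass loss in gigatonnes to global mean sea level rise in mm.
    OCEAN_AREA_M2 = 3.618e14   # ~3.618e8 km^2 of ocean surface
    WATER_DENSITY = 1000.0     # kg/m^3 (fresh meltwater)

    def slr_mm_from_gigatonnes(mass_gt):
        volume_m3 = mass_gt * 1e12 / WATER_DENSITY  # 1 Gt = 1e12 kg
        return volume_m3 / OCEAN_AREA_M2 * 1000.0   # metres -> mm

    print(slr_mm_from_gigatonnes(362))   # ~1.0 mm: the usual rule of thumb
    print(slr_mm_from_gigatonnes(3902))  # ~10.8 mm: Greenland's 1992-2018 loss (see below)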

Ocean heating

There has been an increase in ocean heat content during recent decades as the oceans absorb most of the excess heat created by human-induced global warming.

The oceans store more than 90% of the extra heat added to the climate system by Earth's energy imbalance and act as a buffer against its effects. This means that the same amount of heat that would increase the average world ocean temperature by 0.01 °C (0.018 °F) would increase atmospheric temperature by approximately 10 °C (18 °F). So a small change in the mean temperature of the ocean represents a very large change in the total heat content of the climate system. Winds and currents move heat into deeper parts of the ocean. Some of it reaches depths of more than 2,000 m (6,600 ft).
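
The 0.01 °C versus 10 °C comparison follows from the very different heat capacities of the ocean and the atmosphere. A rough sketch using round textbook values (the masses and specific heats below are approximations, not figures from the article):

    # Rough ocean-vs-atmosphere heat capacity comparison (round values).
    OCEAN_MASS = 1.4e21   # kg, approximate mass of the world ocean
    ATMOS_MASS = 5.1e18   # kg, approximate mass of the atmosphere
    CP_SEAWATER = 4000.0  # J/(kg K), approximate specific heat of seawater
    CP_AIR = 1005.0       # J/(kg K), approximate specific heat of air

    heat = OCEAN_MASS * CP_SEAWATER * 0.01  # warm the whole ocean by 0.01 C
    dT_atm = heat / (ATMOS_MASS * CP_AIR)   # same heat applied to the atmosphere
    print(dT_atm)                           # ~11 C, consistent with the ~10 C above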

When the ocean gains heat, the water expands and sea level rises. Warmer water and water under great pressure (due to depth) expand more than cooler water and water under less pressure. Consequently, cold Arctic Ocean water will expand less than warm tropical water. Different climate models present slightly different patterns of ocean heating. So their projections do not agree fully on how much ocean heating contributes to sea level rise.
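
The expansion itself can be sketched with the volumetric thermal expansion coefficient of seawater: a warmed layer of thickness h rises by roughly alpha × h × delta-T. Since the coefficient varies strongly with temperature and pressure (part of why models disagree regionally), the value below is a mid-range assumption, and the decade of warming used is illustrative:

    # Thermosteric sea level rise from warming a single ocean layer:
    # delta_h ~= alpha * layer_thickness * delta_T
    ALPHA = 2.0e-4  # per K, assumed mid-range expansion coefficient of seawater

    def thermosteric_rise_mm(layer_thickness_m, warming_K, alpha=ALPHA):
        return alpha * layer_thickness_m * warming_K * 1000.0

    # Warming the top 700 m by 0.1 K (roughly a decade of recent upper-ocean warming)
    print(thermosteric_rise_mm(700, 0.1))  # ~14 mm, i.e. about 1.4 mm/yr over a decade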

Antarctic ice loss

Processes around an Antarctic ice shelf
The Ross Ice Shelf is Antarctica's largest. It is about the size of France and up to several hundred metres thick.

The large volume of ice on the Antarctic continent stores around 60% of the world's fresh water, or 90% if groundwater is excluded. Antarctica is experiencing ice loss from coastal glaciers in West Antarctica and some glaciers of East Antarctica, while gaining mass from increased snow build-up inland, particularly in the East. This produces opposing trends. There are different satellite methods for measuring ice mass and change, and combining them helps to reconcile the differences, though variations between studies remain. In 2018, a systematic review estimated average annual ice loss of 43 billion tonnes (Gt) across the entire continent between 1992 and 2002. This tripled to an annual average of 220 Gt from 2012 to 2017. However, a 2021 analysis of data from four different research satellite systems (Envisat, European Remote-Sensing Satellite, GRACE and GRACE-FO, and ICESat) indicated an annual mass loss of only about 12 Gt from 2012 to 2016, due to greater ice gain in East Antarctica than estimated earlier.

In the future, West Antarctica at least is expected to continue losing mass, and the likely future losses of sea ice and ice shelves, which block warmer currents from direct contact with the ice sheet, could accelerate declines even in East Antarctica. Altogether, Antarctica is the source of the largest uncertainty for future sea level projections. In 2019, the SROCC assessed several studies attempting to estimate 2300 sea level rise caused by ice loss in Antarctica alone, arriving at projected estimates of 0.07–0.37 metres (0.23–1.21 ft) for the low-emission RCP2.6 scenario and 0.60–2.89 metres (2.0–9.5 ft) for the high-emission RCP8.5 scenario. However, the report noted the wide range of estimates and gave low confidence to these projections, citing "deep uncertainty" in scientists' ability to estimate the full long-term damage to Antarctic ice, especially under scenarios of very high emissions.

East Antarctica

The world's largest potential source of sea level rise is the East Antarctic Ice Sheet (EAIS). It is 2.2 km thick on average and holds enough ice to raise global sea levels by 53.3 m (174 ft 10 in). Its great thickness and high elevation make it more stable than the other ice sheets. As of the early 2020s, most studies show that it is still gaining mass. Some analyses have suggested it began to lose mass in the 2000s, but these over-extrapolated observed losses onto poorly observed areas; a more complete observational record shows continued mass gain.

Aerial view of ice flows at Denman Glacier, one of the less stable glaciers in East Antarctica

In spite of the net mass gain, some East Antarctic glaciers, such as Denman Glacier and Totten Glacier, have lost ice in recent decades due to ocean warming and declining structural support from the local sea ice. Totten Glacier is particularly important because it stabilizes the Aurora Subglacial Basin. Subglacial basins like the Aurora and Wilkes Basins are major ice reservoirs, together holding as much ice as all of West Antarctica, and they are more vulnerable than the rest of East Antarctica. Their collective tipping point probably lies at around 3 °C (5.4 °F) of global warming, though it may be as high as 6 °C (11 °F) or as low as 2 °C (3.6 °F). Once this tipping point is crossed, the collapse of these subglacial basins could take place over as little as 500 or as much as 10,000 years, with a median timeline of 2,000 years. Depending on how many subglacial basins are vulnerable, this would cause sea level rise of between 1.4 m (4 ft 7 in) and 6.4 m (21 ft 0 in).

On the other hand, the whole EAIS would not definitely collapse until global warming reaches 7.5 °C (13.5 °F), with a range between 5 °C (9.0 °F) and 10 °C (18 °F), and it would take at least 10,000 years to disappear. Some scientists have estimated that warming would have to reach at least 6 °C (11 °F) to melt two thirds of its volume.

West Antarctica

Thwaites Glacier, with its vulnerable bedrock topography visible.

East Antarctica contains the largest potential source of sea level rise, but the West Antarctic ice sheet (WAIS) is substantially more vulnerable. Temperatures in West Antarctica have increased significantly, unlike in East Antarctica and the Antarctic Peninsula; the trend is between 0.08 °C (0.14 °F) and 0.96 °C (1.73 °F) per decade between 1976 and 2012. Satellite observations recorded a substantial increase in WAIS melting from 1992 to 2017, resulting in 7.6 ± 3.9 mm (19/64 ± 5/32 in) of Antarctic sea level rise. Outflow glaciers in the Amundsen Sea Embayment played a disproportionate role.

Scientists estimated in 2021 that the median increase in sea level rise from Antarctica by 2100 is ~11 cm (5 in), with no difference between scenarios, because increased warming would intensify the water cycle and increase snowfall accumulation over the EAIS at about the same rate as it would increase ice loss from the WAIS. However, most of the bedrock underlying the WAIS lies well below sea level, and the ice sheet has to be buttressed by the Thwaites and Pine Island glaciers. If these glaciers were to collapse, the entire ice sheet would as well. Their disappearance would take at least several centuries, but is considered almost inevitable, as their bedrock topography deepens inland, leaving them ever more exposed to warm water.

The contribution of these glaciers to global sea levels has already accelerated since the beginning of the 21st century. The Thwaites Glacier now accounts for 4% of global sea level rise, and could start to lose even more ice if the Thwaites Ice Shelf fails, potentially in the mid-2020s. This follows from the marine ice sheet instability hypothesis, in which warm water intrudes between the seafloor and the base of the ice sheet once the ice is no longer heavy enough to displace the water flow, causing accelerated melting and collapse.

Other hard-to-model processes include hydrofracturing, in which meltwater collects atop the ice sheet, pools into fractures and forces them open, and changes in ocean circulation at smaller scales. A combination of these processes could cause the WAIS to contribute up to 41 cm (16 in) to sea level rise by 2100 under the low-emission scenario and up to 57 cm (22 in) under the highest-emission one.

The melting of all the ice in West Antarctica would increase total sea level rise to 4.3 m (14 ft 1 in). However, mountain ice caps not in contact with water are less vulnerable than the majority of the ice sheet, which is located below sea level. The collapse of that portion would cause ~3.3 m (10 ft 10 in) of sea level rise. This collapse is now considered practically inevitable, as it appears to have already occurred during the Eemian period 125,000 years ago, when temperatures were similar to those of the early 21st century. The disappearance would take an estimated 2,000 years; the absolute minimum for the loss of West Antarctic ice is 500 years and the potential maximum is 13,000 years.

The only way to stop ice loss from West Antarctica once triggered is by lowering the global temperature to 1 °C (1.8 °F) below the preindustrial level. This would be 2 °C (3.6 °F) below the temperature of 2020. Other researchers suggested that a climate engineering intervention to stabilize the ice sheet's glaciers may delay its loss by centuries and give more time to adapt. However this is an uncertain proposal, and would end up as one of the most expensive projects ever attempted.

Isostatic rebound

Research from 2021 indicates that isostatic rebound after the loss of the main portion of the West Antarctic ice sheet would ultimately add another 1.02 m (3 ft 4 in) to global sea levels. This effect would start to increase sea levels before 2100, but it would take 1,000 years to cause 83 cm (2 ft 9 in) of the rise. At that point, West Antarctica itself would be 610 m (2,001 ft 4 in) higher than now. Estimates of isostatic rebound after the loss of East Antarctica's subglacial basins suggest increases of between 8 cm (3.1 in) and 57 cm (1 ft 10 in).

Greenland ice sheet loss

Greenland 2007 melt, measured as the difference between the number of days on which melting occurred in 2007 compared to the average annual melting days from 1988 to 2006

Most ice on Greenland is part of the Greenland ice sheet, which is 3 km (10,000 ft) thick at its thickest point. The rest of Greenland's ice forms isolated glaciers and ice caps. Average annual ice loss in Greenland more than doubled in the early 21st century compared to the 20th century, and its contribution to sea level rise correspondingly increased from 0.07 mm per year between 1992 and 1997 to 0.68 mm per year between 2012 and 2017. Total ice loss from the Greenland ice sheet between 1992 and 2018 amounted to 3,902 gigatonnes (Gt) of ice, equivalent to an SLR contribution of 10.8 mm. The contribution for the 2012–2016 period was equivalent to 37% of sea level rise from land ice sources (excluding thermal expansion). This observed rate of ice sheet melting is at the higher end of predictions from past IPCC assessment reports.

In 2021, AR6 estimated that by 2100, the melting of the Greenland ice sheet would most likely add around 6 cm (2+1/2 in) to sea levels under the low-emission scenario and 13 cm (5 in) under the high-emission scenario. The first scenario, SSP1-2.6, largely fulfils the Paris Agreement goals, while the other, SSP5-8.5, has emissions accelerating throughout the century. Uncertainty about ice sheet dynamics affects both pathways. In the best case, the ice sheet under SSP1-2.6 gains enough mass by 2100 through surface mass balance feedbacks to reduce sea levels by 2 cm (1 in); in the worst case, it adds 15 cm (6 in). For SSP5-8.5, the best case is adding 5 cm (2 in) to sea levels, and the worst case is adding 23 cm (9 in).

Trends of Greenland ice loss between 2002 and 2019

Greenland's peripheral glaciers and ice caps crossed an irreversible tipping point around 1997, and sea level rise from their loss is now unstoppable. No matter how temperatures change in the future, the warming of 2000–2019 has already damaged the ice sheet enough for it to eventually lose ~3.3% of its volume, committing the world to 27 cm (10+1/2 in) of future sea level rise. At a certain level of global warming, the Greenland ice sheet will melt almost completely. Ice cores show this happened at least once during the last million years, when temperatures were at most 2.5 °C (4.5 °F) warmer than preindustrial.

Research in 2012 suggested that the ice sheet's tipping point lay between 0.8 °C (1.4 °F) and 3.2 °C (5.8 °F) of warming; 2023 modelling has narrowed the threshold to a range of 1.7–2.3 °C (3.1–4.1 °F). If temperatures reach or exceed that level, reducing the global temperature to 1.5 °C (2.7 °F) above pre-industrial levels or lower could still prevent the loss of the entire ice sheet. One way to do this in theory would be large-scale carbon dioxide removal, but even then Greenland would suffer greater losses and contribute more sea level rise than if the threshold had not been breached in the first place. Otherwise, the ice sheet would take between 10,000 and 15,000 years to disintegrate entirely once the tipping point was crossed, with 10,000 years the most likely estimate. If climate change continues along its worst trajectory and temperatures keep rising quickly over multiple centuries, it would take only 1,000 years.

Mountain glacier loss

Based on national pledges to reduce greenhouse gas emissions, global mean temperature is projected to increase by 2.7 °C (4.9 °F), which would cause the loss of about half of Earth's glaciers by 2100, causing a sea level rise of 115 ± 40 mm.

There are roughly 200,000 glaciers on Earth, spread across all continents. Less than 1% of glacier ice is in mountain glaciers, compared to 99% in Greenland and Antarctica. However, their small size also makes mountain glaciers more vulnerable to melting than the larger ice sheets. This means they have made a disproportionate contribution to historical sea level rise and are set to contribute a smaller, but still significant, fraction of sea level rise in the 21st century. Observational and modelling studies of mass loss from glaciers and ice caps show they contributed 0.2–0.4 mm per year to sea level rise, averaged over the 20th century. The contribution for the 2012–2016 period, 0.63 mm of sea level rise per year, was nearly as large as that of Greenland, equivalent to 34% of sea level rise from land ice sources. Glaciers contributed around 40% to sea level rise during the 20th century, with estimates for the 21st century of around 30%.

In 2023, a Science paper estimated that at 1.5 °C (2.7 °F), one quarter of mountain glacier mass would be lost by 2100, and nearly half at 4 °C (7.2 °F), contributing ~9 cm (3+1/2 in) and ~15 cm (6 in) to sea level rise, respectively. Because glacier mass is disproportionately concentrated in the most resilient glaciers, this would in practice remove 49–83% of glacier formations. It further estimated that the current likely trajectory of 2.7 °C (4.9 °F) would result in an SLR contribution of ~11 cm (4+1/2 in) by 2100. Mountain glaciers are even more vulnerable over the longer term. In 2022, another Science paper estimated that almost no mountain glaciers could survive once warming crosses 2 °C (3.6 °F), and their complete loss is largely inevitable around 3 °C (5.4 °F). There is even a possibility of complete loss after 2100 at just 1.5 °C (2.7 °F). This loss could happen as early as 50 years after the tipping point is crossed, although 200 years is the most likely value and the maximum is around 1,000 years.

Sea ice loss

Sea ice loss contributes very slightly to global sea level rise. If the meltwater from ice floating in the sea were exactly the same as sea water, then, according to Archimedes' principle, no rise would occur. However, melted sea ice contains less dissolved salt than sea water and is therefore less dense, with a slightly greater volume per unit of mass. If all floating ice shelves and icebergs were to melt, sea level would only rise by about 4 cm (1+1/2 in).
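
The ~4 cm figure can be reproduced from Archimedes' principle: floating ice displaces seawater equal to its own mass, but the fresh meltwater, being less dense, occupies slightly more volume than the seawater it displaced. A minimal sketch with typical densities; the total floating-ice mass used below is an assumed round figure for illustration, not a value from the article:

    # Net sea level rise when floating ice melts (Archimedes' principle).
    RHO_SEAWATER = 1025.0    # kg/m^3
    RHO_FRESHWATER = 1000.0  # kg/m^3
    OCEAN_AREA_M2 = 3.618e14

    def floating_ice_melt_rise_mm(ice_mass_kg):
        displaced = ice_mass_kg / RHO_SEAWATER  # volume the floating ice already displaces
        melted = ice_mass_kg / RHO_FRESHWATER   # volume of the resulting fresh meltwater
        return (melted - displaced) / OCEAN_AREA_M2 * 1000.0

    # Assume ~660,000 Gt of floating ice shelves and icebergs (illustrative figure).
    print(floating_ice_melt_rise_mm(6.6e17))  # ~45 mm, i.e. roughly the 4 cm stated above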

Changes to land water storage

Trends in land water storage from GRACE observations in gigatons per year, April 2002 to November 2014 (glaciers and ice sheets are excluded).

Human activity impacts how much water is stored on land. Dams retain large quantities of water, which is stored on land rather than flowing into the sea, though the total quantity stored will vary from time to time. On the other hand, humans extract water from lakes, wetlands and underground reservoirs for food production. This often causes subsidence. Furthermore, the hydrological cycle is influenced by climate change and deforestation. This can increase or reduce contributions to sea level rise. In the 20th century, these processes roughly balanced, but dam building has slowed down and is expected to stay low for the 21st century.

Water redistribution caused by irrigation between 1993 and 2010 shifted Earth's rotational pole by 78.48 centimetres (30.90 in). The associated groundwater depletion was equivalent to a global sea level rise of 6.24 millimetres (0.246 in).

Impacts

On people and societies

High tide flooding, also called tidal flooding, has become much more common in the past seven decades.

Sea-level rise has many impacts. They include higher and more frequent high-tide and storm-surge flooding and increased coastal erosion. Other impacts are inhibition of primary production processes, more extensive coastal inundation, and changes in surface water quality and groundwater. These can lead to a greater loss of property and coastal habitats, loss of life during floods and loss of cultural resources. There are also impacts on agriculture and aquaculture. There can also be loss of tourism, recreation, and transport-related functions. Land use changes such as urbanisation or deforestation of low-lying coastal zones exacerbate coastal flooding impacts. Regions already vulnerable to rising sea level also struggle with coastal flooding. This washes away land and alters the landscape.

Changes in emissions are likely to have only a small effect on the extent of sea level rise by 2050, so projected sea level rise could put tens of millions of people at risk by then. Scientists estimate that 2050 levels of sea level rise would place about 150 million people under the water line during high tide and about 300 million in places flooded every year. This projection is based on the distribution of population in 2010, and does not take into account the effects of population growth and human migration. These figures are 40 million and 50 million more, respectively, than the numbers at risk in 2010. By 2100, there would be another 40 million people under the water line during high tide if sea level rise remains low, or 80 million for a high estimate of median sea level rise. Ice sheet processes under the highest emission scenario would result in sea level rise of well over one metre (3+1/4 ft) by 2100, possibly over two metres (6+1/2 ft). This could result in as many as 520 million additional people ending up under the water line during high tide and 640 million in places flooded every year, compared to the 2010 population distribution.

Major cities threatened by sea level rise. The cities indicated are under threat from even a small sea level rise of 1.6 feet (49 cm) compared to the level in 2010. Even moderate projections indicate that such a rise will have occurred by 2060.

Over the longer term, coastal areas are particularly vulnerable to rising sea levels. They are also vulnerable to changes in the frequency and intensity of storms, increased precipitation, and rising ocean temperatures. Ten percent of the world's population live in coastal areas that are less than 10 metres (33 ft) above sea level. Two thirds of the world's cities with over five million people are located in these low-lying coastal areas. About 600 million people live directly on the coast around the world. Cities such as Miami, Rio de Janeiro, Osaka and Shanghai will be especially vulnerable later in the century under warming of 3 °C (5.4 °F), which is close to the current trajectory. LiDAR-based research established in 2021 that 267 million people worldwide lived on land less than 2 m (6+1/2 ft) above sea level. With a 1 m (3+1/2 ft) sea level rise and zero population growth, that could increase to 410 million people.

Potential disruption of sea trade and migrations could impact people living further inland. United Nations Secretary-General António Guterres warned in 2023 that sea level rise risks causing human migrations on a "biblical scale". Sea level rise will inevitably affect ports, but there is limited research on this topic. There is insufficient knowledge about the investments necessary to protect ports currently in use, including protecting current facilities up to the point at which it becomes more reasonable to build new ports elsewhere. Some coastal regions are rich agricultural lands, whose loss to the sea could cause food shortages. This is a particularly acute issue for river deltas such as the Nile Delta in Egypt and the Red River and Mekong deltas in Vietnam, where saltwater intrusion into the soil and irrigation water has a disproportionate effect.

On ecosystems

Bramble Cay melomys, the first known mammal species to go extinct due to sea level rise.

Flooding and soil/water salinization threaten the habitats of coastal plants, birds, and freshwater/estuarine fish when seawater reaches inland. When coastal forest areas become inundated with saltwater to the point that no trees can survive, the resulting habitats are called ghost forests. Starting around 2050, some nesting sites in Florida, Cuba, Ecuador and the island of Sint Eustatius for leatherback, loggerhead, hawksbill, green and olive ridley turtles are expected to be flooded, and the proportion will increase over time. In 2016, the Bramble Cay islet in the Great Barrier Reef was inundated, flooding the habitat of a rodent named the Bramble Cay melomys. It was officially declared extinct in 2019.

An example of mangrove pneumatophores.

Some ecosystems can move inland with the high-water mark, but natural or artificial barriers prevent many from migrating. This coastal narrowing is sometimes called 'coastal squeeze' when it involves human-made barriers, and it could result in the loss of habitats such as mudflats and tidal marshes. Mangrove ecosystems on the mudflats of tropical coasts nurture high biodiversity. They are particularly vulnerable because mangrove plants rely on breathing roots, or pneumatophores, which will be submerged if the rate of sea level rise is too rapid for the mangroves to migrate upward. This would result in the loss of the ecosystem. Both mangroves and tidal marshes protect against storm surges, waves and tsunamis, so their loss makes the effects of sea level rise worse. Human activities such as dam building may restrict sediment supplies to wetlands, preventing natural adaptation processes. The loss of some tidal marshes is unavoidable as a consequence.

Corals are important for bird and fish life. They need to grow vertically to remain close to the sea surface in order to get enough energy from sunlight. So far, corals have been able to keep up their vertical growth with the rising seas, but they might not be able to do so in the future.

Adaptation

Oosterscheldekering, the largest barrier of the Dutch Delta Works.

Cutting greenhouse gas emissions can slow and stabilize the rate of sea level rise after 2050. This would greatly reduce its costs and damages, but cannot stop it outright. So climate change adaptation to sea level rise is inevitable. The simplest approach is to stop development in vulnerable areas and ultimately move people and infrastructure away from them. Such retreat from sea level rise often results in the loss of livelihoods. The displacement of newly impoverished people could burden their new homes and accelerate social tensions.

It is possible to avoid or at least delay the retreat from sea level rise with enhanced protections. These include dams, levees or improved natural defenses. Other options include updating building standards to reduce damage from floods, adding storm water valves to address more frequent and severe flooding at high tide, or cultivating crops more tolerant of saltwater in the soil, even at an increased cost. These options divide into hard and soft adaptation. Hard adaptation generally involves large-scale changes to human societies and ecological systems, often including the construction of capital-intensive infrastructure. Soft adaptation involves strengthening natural defenses and local community adaptation, usually with simple, modular and locally owned technology. The two types of adaptation may be complementary or mutually exclusive. Adaptation options often require significant investment, but the costs of doing nothing are far greater. One example involves adaptation against flooding: effective adaptation measures could reduce future annual flooding costs in 136 of the world's largest coastal cities from $1 trillion by 2050 without adaptation to a little over $60 billion annually, at an adaptation cost of about $50 billion per year. Some experts argue that, in the case of very high sea level rise, retreat from the coast would have a lower impact on the GDP of India and Southeast Asia than attempting to protect every coastline.
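
The flood-cost figures above amount to a simple cost-benefit comparison; a minimal sketch using only the numbers quoted in this paragraph (all values annual, for 136 large coastal cities by 2050):

    # Cost-benefit comparison for coastal flood adaptation, using the figures above.
    damages_without_adaptation = 1000e9  # ~$1 trillion/yr in flood damages by 2050
    residual_damages = 60e9              # ~$60 billion/yr in damages with adaptation
    adaptation_cost = 50e9               # ~$50 billion/yr spent on adaptation measures

    net_saving = damages_without_adaptation - (residual_damages + adaptation_cost)
    print(f"Net annual saving: ${net_saving / 1e9:.0f} billion")  # ~$890 billion/yr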

Planning for future sea level rise as used in the United Kingdom.

To be successful, adaptation must anticipate sea level rise well ahead of time. As of 2023, the global state of adaptation planning is mixed. A survey of 253 planners from 49 countries found that 98% are aware of sea level rise projections, but 26% have not yet formally integrated them into their policy documents. Only around a third of respondents from Asian and South American countries have done so, compared with 50% in Africa and over 75% in Europe, Australasia and North America. Some 56% of all surveyed planners have plans which account for 2050 and 2100 sea level rise, but 53% use only a single projection rather than a range of two or three projections. Just 14% use four projections, including one for "extreme" or "high-end" sea level rise. Another study found that over 75% of regional sea level rise assessments from the Western and Northeastern United States included at least three estimates, usually RCP2.6, RCP4.5 and RCP8.5, and sometimes extreme scenarios. But 88% of projections from the American South had only a single estimate. Similarly, no assessment from the South went beyond 2100, while 14 assessments from the West went up to 2150 and three from the Northeast went to 2200. Some 56% of all localities were also found to underestimate the upper end of sea level rise relative to the IPCC Sixth Assessment Report.

By region

Africa

A man looking out over the beach from a building destroyed by high tides in Chorkor, a suburb of Accra. Sunny day flooding caused by sea level rise increases coastal erosion that destroys housing, infrastructure and natural ecosystems. A number of communities in coastal Ghana are already experiencing the changing tides.

In Africa, future population growth amplifies risks from sea level rise. Some 54.2 million people lived in the highly exposed low elevation coastal zones (LECZ) around 2000. This number will effectively double to around 110 million people by 2030, and then reach 185 to 230 million people by 2060. By then, the average regional sea level rise will be around 21 cm, with little difference between climate change scenarios. By 2100, Egypt, Mozambique and Tanzania are likely to have the largest number of people affected by annual flooding among all African countries, and under RCP8.5, 10 important cultural sites would be at risk of flooding and erosion by the end of the century.

In the near term, some of the largest displacement is projected to occur in the East Africa region, where at least 750,000 people are likely to be displaced from the coasts between 2020 and 2050. By 2050, 12 major African cities would collectively sustain cumulative damages of US$65 billion under the "moderate" climate change scenario RCP4.5, and between US$86.5 billion and US$137.5 billion on average under higher-emission scenarios; in the worst case, these damages could effectively triple. In all of these estimates, around half of the damages would occur in the Egyptian city of Alexandria, where hundreds of thousands of people in low-lying areas may already need relocation in the coming decade. Across sub-Saharan Africa as a whole, damage from sea level rise could reach 2–4% of GDP by 2050, although this depends on the extent of future economic growth and climate change adaptation.

Asia

Matsukawaura Lagoon, located in Fukushima Prefecture of Honshu Island
2010 estimates of population exposure to sea level rise in Bangladesh

Asia has the largest population at risk from sea level rise due to its dense coastal populations. As of 2022, some 63 million people in East and South Asia were already at risk from a 100-year flood, largely due to inadequate coastal protection in many countries. Bangladesh, China, India, Indonesia, Japan, Pakistan, the Philippines, Thailand and Vietnam alone account for 70% of people exposed to sea level rise during the 21st century. Sea level rise in Bangladesh is likely to displace 0.9–2.1 million people by 2050. It may also force the relocation of up to one third of the country's power plants as early as 2030, and many of the remaining plants would have to deal with the increased salinity of their cooling water. Nations like Bangladesh, Vietnam and China with extensive rice production on the coast are already seeing adverse impacts from saltwater intrusion.

Modelling results predict that Asia will suffer direct economic damages of US$167.6 billion at 0.47 metres of sea level rise, rising to US$272.3 billion at 1.12 metres and US$338.1 billion at 1.75 metres. There is an additional indirect impact of US$8.5 billion, US$24 billion or US$15 billion from population displacement at those respective levels. China, India, the Republic of Korea, Japan, Indonesia and Russia would experience the largest economic losses. Of the 20 coastal cities expected to see the highest flood losses by 2050, 13 are in Asia. Nine of these are so-called sinking cities, where subsidence (typically caused by unsustainable groundwater extraction in the past) would compound sea level rise: Bangkok, Guangzhou, Ho Chi Minh City, Jakarta, Kolkata, Nagoya, Tianjin, Xiamen and Zhanjiang.

By 2050, Guangzhou would see 0.2 metres of sea level rise and estimated annual economic losses of US$254 million, the highest in the world. In Shanghai, coastal inundation currently costs about 0.03% of local GDP, but this would increase to 0.8% by 2100 even under the "moderate" RCP4.5 scenario in the absence of adaptation. The city of Jakarta is sinking so much (up to 28 cm (11 in) per year between 1982 and 2010 in some areas) that in 2019 the government committed to relocating the capital of Indonesia to another city.

Australasia

King's Beach at Caloundra

In Australia, erosion and flooding of Queensland's Sunshine Coast beaches is likely to intensify by 60% by 2030, with a big impact on tourism in the absence of adaptation. Adaptation costs for sea level rise would be three times higher under the high-emission RCP8.5 scenario than under the low-emission RCP2.6 scenario. Sea level rise of 0.2–0.3 metres is likely by 2050; under these conditions, what is currently a 100-year flood would occur every year in the New Zealand cities of Wellington and Christchurch. With 0.5 m of sea level rise, a current 100-year flood in Australia would occur several times a year, and in New Zealand it would expose buildings with a collective worth of NZ$12.75 billion to new 100-year floods. A metre or so of sea level rise would threaten assets in New Zealand worth NZ$25.5 billion, with a disproportionate impact on Māori-owned holdings and cultural heritage objects. Australian assets worth AU$164–226 billion, including many unsealed roads and railway lines, would also be at risk. This amounts to a 111% rise in Australia's inundation costs between 2020 and 2100.

Central and South America

An aerial view of São Paulo's Port of Santos

By 2100, coastal flooding and erosion will affect at least 3–4 million people in South America. Many people live in low-lying areas exposed to sea level rise, including 6% of the population of Venezuela, 56% of the population of Guyana and 68% of the population of Suriname. In Guyana, much of the capital Georgetown is already below sea level. In Brazil, the coastal ecoregion of Caatinga is responsible for 99% of the country's shrimp production; a combination of sea level rise, ocean warming and ocean acidification threatens its unique ecosystems. Extreme wave or wind behavior disrupted the port complex of Santa Catarina 76 times in one six-year period in the 2010s, with a US$25,000–50,000 loss for each idle day. In the Port of Santos, storm surges were three times more frequent between 2000 and 2016 than between 1928 and 1999.

Europe

Beach nourishment in progress in Barcelona.

Many sandy coastlines in Europe are vulnerable to erosion due to sea level rise. In Spain, the Costa del Maresme is likely to retreat by 16 meters by 2050 relative to 2010, and by as much as 52 meters by 2100 under RCP8.5. Other vulnerable coastlines include the Tyrrhenian Sea coast of Italy's Calabria region, the Barra-Vagueira coast in Portugal and Nørlev Strand in Denmark.

In France, an estimated 8,000–10,000 people would be forced to migrate away from the coasts by 2080. The Italian city of Venice, located on islands, is highly vulnerable to flooding and has already spent $6 billion on a barrier system. A quarter of the German state of Schleswig-Holstein, inhabited by over 350,000 people, is at low elevation and has been vulnerable to flooding since preindustrial times, and many levees already exist. Because of its complex geography, the authorities there chose a flexible mix of hard and soft measures to cope with sea level rise of over 1 meter per century. In the United Kingdom, sea level at the end of the century would increase by 53 to 115 centimeters at the mouth of the River Thames and by 30 to 90 centimeters at Edinburgh. The UK has divided its coast into 22 areas, each covered by a Shoreline Management Plan. These are subdivided into 2,000 management units working across three periods of 0–20, 20–50 and 50–100 years.

The Netherlands sits partially below sea level and is subsiding. It has responded by extending its Delta Works program. In its 2008 report, the Delta Commission said that the country must plan for a rise in the North Sea of up to 1.3 m (4 ft 3 in) by 2100 and of 2–4 m (7–13 ft) by 2200. It advised annual spending of between €1.0 and €1.5 billion on measures such as broadening coastal dunes and strengthening sea and river dikes. Worst-case evacuation plans were also drawn up.

North America

Tidal flooding in Miami during a king tide (October 17, 2016). The risk of tidal flooding increases with sea level rise.

As of 2017, around 95 million Americans lived on the coast; the figures for Canada and Mexico were 6.5 million and 19 million. Chronic nuisance flooding and king tide flooding are already a growing problem in the highly vulnerable state of Florida, and the US East Coast is also vulnerable. On average, the number of days with tidal flooding in the US doubled over the years 2000–2020, reaching 3–7 days per year. In some areas the increase was much stronger: fourfold in the Southeast Atlantic and elevenfold in the Western Gulf. By 2030 the average is expected to be 7–15 days, reaching 25–75 days by 2050. US coastal cities have responded with beach nourishment, also known as beach replenishment, which trucks in mined sand, in addition to other adaptation measures such as zoning, restrictions on state funding, and building code standards. Along an estimated 15% of the US coastline, the majority of local groundwater levels are already below sea level, placing those groundwater reservoirs at risk of seawater intrusion, which renders fresh water unusable once the seawater concentration exceeds 2–3%. Damage is also widespread in Canada and will affect major cities like Halifax as well as more remote locations like Lennox Island, where the Mi'kmaq community is already considering relocation due to widespread coastal erosion. In Mexico, damage from sea level rise to tourism hotspots like Cancun, Isla Mujeres, Playa del Carmen, Puerto Morelos and Cozumel could amount to US$1.4–2.3 billion. The increase in storm surge due to sea level rise is also a problem: because of this effect, Hurricane Sandy caused an additional US$8 billion in damage and affected 36,000 more houses and 71,000 more people.

In the future, the northern Gulf of Mexico, Atlantic Canada and the Pacific coast of Mexico would experience the greatest sea level rise. By 2030, flooding along the US Gulf Coast could cause economic losses of up to US$176 billion; using nature-based solutions like wetland restoration and oyster reef restoration could avoid around US$50 billion of this. By 2050, coastal flooding in the US is likely to rise tenfold to four "moderate" flooding events per year, even without storms or heavy rainfall. In New York City, the current 100-year flood would occur once every 19–68 years by 2050 and once every 4–60 years by 2080. By 2050, 20 million people in the greater New York City area would be at risk, because 40% of existing water treatment facilities would be compromised and 60% of power plants would need relocation. By 2100, sea level rise of 0.9 m (3 ft) and 1.8 m (6 ft) would threaten 4.2 and 13.1 million people in the US, respectively. In California alone, 2 m (6½ ft) of sea level rise could affect 600,000 people and threaten over US$150 billion in property with inundation, potentially over 6% of the state's GDP. In North Carolina, a meter of sea level rise would inundate 42% of the Albemarle-Pamlico Peninsula, costing up to US$14 billion. In nine southeast US states, the same level of sea level rise would claim up to 13,000 historical and archaeological sites, including over 1,000 sites eligible for inclusion in the National Register of Historic Places.

Island nations

Malé, the capital island of Maldives.

Small island states are nations with populations on atolls and other low islands. Atolls on average reach only 0.9–1.8 m (3–6 ft) above sea level. These are the places most vulnerable to the coastal erosion, flooding and salt intrusion into soils and freshwater caused by sea level rise. Sea level rise may make an island uninhabitable before it is completely flooded. Children in small island states already face hampered access to food and water, and suffer increased rates of mental and social disorders as a result of these stresses. At current rates, sea level rise would be high enough to make the Maldives uninhabitable by 2100. Five of the Solomon Islands have already disappeared due to the combined effects of sea level rise and stronger trade winds pushing water into the Western Pacific.

Surface area change of islands in the Central Pacific and Solomon Islands

Adaptation to sea level rise is costly for small island nations because a large portion of their population lives in areas at risk. Nations like the Maldives, Kiribati and Tuvalu already have to consider controlled international migration of their populations in response to rising seas; the alternative of uncontrolled migration threatens to worsen the humanitarian crisis of climate refugees. In 2014, Kiribati purchased 20 square kilometers of land (about 2.5% of its current area) on the Fijian island of Vanua Levu to relocate its population once its own islands are lost to the sea.

Fiji also suffers from sea level rise, but is in a comparatively safer position. Its residents continue to rely on local adaptation, such as moving further inland and increasing the sediment supply to combat erosion, instead of relocating entirely. Fiji has also issued a green bond of $50 million to invest in green initiatives and fund adaptation efforts, and is restoring coral reefs and mangroves to protect against flooding and erosion, which it sees as a more cost-efficient alternative to building sea walls. The nations of Palau and Tonga are taking similar steps. Even when an island is not threatened with complete disappearance from flooding, tourism and local economies may end up devastated. For instance, a sea level rise of 1.0 m (3 ft 3 in) would cause the partial or complete inundation of 29% of coastal resorts in the Caribbean, and a further 49–60% of coastal resorts would be at risk from the resulting coastal erosion.

Post-scarcity

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Post-scarcity
 
Post-scarcity is a theoretical economic situation in which most goods can be produced in great abundance with minimal human labor needed, so that they become available to all very cheaply or even freely.

Post-scarcity does not mean that scarcity has been eliminated for all goods and services but that all people can easily have their basic survival needs met along with some significant proportion of their desires for goods and services. Writers on the topic often emphasize that some commodities will remain scarce in a post-scarcity society.

Models

Speculative technology

Futurists who speak of "post-scarcity" suggest economies based on advances in automated manufacturing technologies, often including the idea of self-replicating machines and the adoption of the division of labour, which in theory could produce nearly all goods in abundance, given adequate raw materials and energy.

More speculative forms of nanotechnology, such as molecular assemblers or nanofactories, which do not currently exist, raise the possibility of devices that can automatically manufacture any specified goods given the correct instructions and the necessary raw materials and energy; many nanotechnology enthusiasts have suggested such devices will usher in a post-scarcity world.

In the nearer term, the increasing automation of physical labor using robots is often discussed as a means of creating a post-scarcity economy.

Increasingly versatile forms of rapid prototyping machines, and a hypothetical self-replicating version of such a machine known as a RepRap, have also been predicted to help create the abundance of goods needed for a post-scarcity economy. Advocates of self-replicating machines, such as Adrian Bowyer, the creator of the RepRap project, argue that once a self-replicating machine is designed, anyone who owns one can make more copies to sell (and is also free to ask a lower price than other sellers). Market competition would then naturally drive the cost of such machines down to the bare minimum needed to make a profit, in this case just above the cost of the physical materials and energy fed into the machine as input, and the same should go for any other goods the machine can build.

Even with fully automated production, limitations on the number of goods produced would arise from the availability of raw materials and energy, as well as from the ecological damage associated with manufacturing technologies. Advocates of technological abundance often argue for more extensive use of renewable energy and greater recycling in order to prevent future drops in the availability of energy and raw materials, and to reduce ecological damage. Solar energy in particular is often emphasized, as the cost of solar panels continues to drop (and could drop far more with automated production by self-replicating machines), and advocates point out that the total solar power striking the Earth's surface annually exceeds our civilization's current annual power usage by a factor of thousands.

Advocates also sometimes argue that the energy and raw materials available could be greatly expanded by looking to resources beyond the Earth. For example, asteroid mining is sometimes discussed as a way of greatly reducing scarcity for many useful metals such as nickel. While early asteroid mining might involve crewed missions, advocates hope that eventually humanity could have automated mining done by self-replicating machines. If this were done, the only capital expenditure would be a single self-replicating unit (whether robotic or nanotechnological), after which the units could replicate themselves at no further cost, limited only by the raw materials available to build more.

Social

A World Future Society report looked at how capitalism has historically taken advantage of scarcity. Increased resource scarcity leads to rising and fluctuating prices, which in turn drive advances in technologies that use resources more efficiently, so that costs fall considerably, almost to zero. The report thus claims that, following an increase in scarcity from now on, the world will enter a post-scarcity age between 2050 and 2075.

Murray Bookchin's 1971 essay collection Post-Scarcity Anarchism outlines an economy based on social ecology, libertarian municipalism, and an abundance of fundamental resources, arguing that post-industrial societies have the potential to be developed into post-scarcity societies. Such development would enable "the fulfillment of the social and cultural potentialities latent in a technology of abundance".

Bookchin claims that the expanded production made possible by the technological advances of the twentieth century was pursued for market profit and at the expense of human needs and ecological sustainability. He argues that the accumulation of capital can no longer be considered a prerequisite for liberation, and that the notion that obstructions such as the state, social hierarchy, and vanguard political parties are necessary in the struggle for freedom of the working classes can be dispelled as a myth.

Marxism

Karl Marx, in a section of his Grundrisse that came to be known as the "Fragment on Machines", argued that the transition to a post-capitalist society combined with advances in automation would allow for significant reductions in labor needed to produce necessary goods, eventually reaching a point where all people would have significant amounts of leisure time to pursue science, the arts, and creative activities; a state some commentators later labeled as "post-scarcity". Marx argued that capitalism—the dynamic of economic growth based on capital accumulation—depends on exploiting the surplus labor of workers, but a post-capitalist society would allow for:

The free development of individualities, and hence not the reduction of necessary labour time so as to posit surplus labour, but rather the general reduction of the necessary labour of society to a minimum, which then corresponds to the artistic, scientific etc. development of the individuals in the time set free, and with the means created, for all of them.

Marx's concept of a post-capitalist communist society involves the free distribution of goods made possible by the abundance provided by automation. The fully developed communist economic system is postulated to develop from a preceding socialist system. Marx held the view that socialism—a system based on social ownership of the means of production—would enable progress toward fully developed communism by further advancing productive technology. Under socialism, with its increasing levels of automation, an increasing proportion of goods would be distributed freely.

Marx did not believe in the elimination of most physical labor through technological advancements alone in a capitalist society, because he believed capitalism contained within it certain tendencies which countered increasing automation and prevented it from developing beyond a limited point, so that manual industrial labor could not be eliminated until the overthrow of capitalism. Some commentators on Marx have argued that at the time he wrote the Grundrisse, he thought that the collapse of capitalism due to advancing automation was inevitable despite these counter-tendencies, but that by the time of his major work Capital: Critique of Political Economy he had abandoned this view, and came to believe that capitalism could continually renew itself unless overthrown.

Fiction

Literature

  • The novella The Midas Plague by Frederik Pohl describes a world of cheap energy, in which robots are overproducing the commodities enjoyed by humankind. The lower-class "poor" must spend their lives in frantic consumption, trying to keep up with the robots' extravagant production, while the upper-class "rich" can live lives of simplicity.
  • The Mars trilogy by Kim Stanley Robinson charts the terraforming of Mars as a human colony and the establishment of a post-scarcity society.
  • The Culture novels by Iain M. Banks are centered on a post-scarcity economy where technology is advanced to such a degree that all production is automated, and there is no use for money or property (aside from personal possessions with sentimental value). People in the Culture are free to pursue their own interests in an open and socially-permissive society.
    • The society depicted in the Culture novels has been described by some commentators as "communist-bloc" or "anarcho-communist". Banks' close friend and fellow science fiction writer Ken MacLeod has said that The Culture can be seen as a realization of Marx's communism, but adds that "however friendly he was to the radical left, Iain had little interest in relating the long-range possibility of utopia to radical politics in the here and now. As he saw it, what mattered was to keep the utopian possibility open by continuing technological progress, especially space development, and in the meantime to support whatever policies and politics in the real world were rational and humane."
  • The Rapture of the Nerds by Cory Doctorow and Charles Stross takes place in a post-scarcity society and involves "disruptive" technology. The title is a derogatory term for the technological singularity coined by SF author Ken MacLeod.
  • Con Blomberg's 1959 short story Sales Talk depicts a post-scarcity society in which society incentivizes consumption to reduce the burden of overproduction. To further reduce production, virtual reality is used to fulfill people's need to create.
  • Cory Doctorow's novel Walkaway presents a modern take on the idea of post-scarcity. With the advent of 3D printing – and especially the ability to use these to fabricate even better fabricators – and with machines that can search for and reprocess waste or discarded materials, the protagonists no longer have need of regular society for the basic essentials of life, such as food, clothing and shelter.


Ecological civilization

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Ecological_civilization

Ecological civilization is a hypothetical concept describing the proposed final goal of social and environmental reform within a given society. It implies that the changes required in response to global climate disruption and social injustices are so extensive as to require another form of human civilization, one based on ecological principles.

Conceptualization

Broadly construed, ecological civilization involves a synthesis of economic, educational, political, agricultural, and other societal changes toward sustainability.

Although the term was first coined in the 1980s, it did not see widespread use until 2007, when "ecological civilization" became an explicit goal of the Chinese Communist Party (CCP). In April 2014, the United Nations Alliance of Civilizations and the International Ecological Safety Collaborative Organization founded a sub-committee on ecological civilization. Proponents of ecological civilization agree with Pope Francis, who writes, "We are faced not with two separate crises, one environmental and the other social, but rather with one complex crisis which is both social and environmental. Strategies for a solution demand an integrated approach to combating poverty, restoring dignity to the excluded, and at the same time protecting nature." As such, ecological civilization emphasizes the need for major environmental and social reforms that are both long-term and systemic in orientation.

China views ecological civilization as linked to the development of the Belt and Road Initiative, where sometimes the term "Green Silk Road" is used. China's view of ecological civilization is focused on cities, under the view that any solution for the climate crisis must focus on cities because that is where most people live, most energy is consumed, and most carbon emissions are generated. China has designated ecological civilization pilot cities, including Guiyang.

History

In 1984, environment experts in the Soviet Union proposed the term "ecological civilization" in an article entitled "Ways of Training Individual Ecological Civilization under Mature Socialist Conditions", published in Scientific Communism (Moscow), vol. 2.

Three years later, the concept of ecological civilization (Chinese: 生态文明; pinyin: shēngtài wénmíng) was picked up in China, and was first used by Qianji Ye (1909–2017), an agricultural economist, in 1987. Professor Ye defined ecological civilization by drawing from the ecological sciences and environmental philosophy.

Beginning in 1998, the CCP began to shift from a focus on pure developmentalism towards eco-developmentalism. Responding both to scientific evidence on the environment and increasing public pressure, the CCP began to re-formulate its ideology to recognize that the developmentalist approach during reform and opening up was not sustainable. The CCP began to use the terminology of environmental culture (huanjing wenhua) and ecological civilization.

The first time the phrase "ecological civilization" was used as a technical term in an English-language book was in 1995, by Roy Morrison in his book Ecological Democracy. Both "ecological civilization" and "constructive postmodernism" have been associated with the process philosophy of Alfred North Whitehead. David Ray Griffin, a process philosopher and professor at Claremont School of Theology, first used the term "constructive postmodernism" in his 1989 book, Varieties of Postmodern Theology. A more secular treatment flowing out of Whitehead's process philosophy has come from the Australian environmental philosopher Arran Gare in his book The Philosophical Foundations of Ecological Civilization: A Manifesto for the Future.

The term is found more extensively in Chinese discussions beginning in 2007. In 2012, the Chinese Communist Party (CCP) included the goal of achieving an ecological civilization in its constitution, and it also featured in its five-year plan. The 18th National Congress of the Chinese Communist Party in 2012 made ecological civilization one of the country's five national development goals. It emphasized a rural development approach of "Ecology, Productivity, Livability". Ecological civilization gained further prominence in China after it was incorporated into Xi Jinping's approach to the Chinese Dream. In the Chinese context, the term generally presupposes the framework of a "constructive postmodernism", as opposed to an extension of modernist practices or a "deconstructive postmodernism", which stems from the deconstruction of Jacques Derrida.

The largest international conference held on the theme of "ecological civilization" (Seizing an Alternative: Toward an Ecological Civilization) took place at Pomona College in June 2015, bringing together roughly 2,000 participants from around the world and featuring such leaders in the environmental movement as Bill McKibben, Vandana Shiva, John B. Cobb, Jr., Wes Jackson, and Sheri Liao. It was held in conjunction with the 9th International Forum on Ecological Civilization, an annual conference series in Claremont, CA, established in 2006. Out of the Seizing an Alternative conference, Philip Clayton and Wm. Andrew Schwartz co-founded the Institute for Ecological Civilization (EcoCiv) and co-authored the book What Is Ecological Civilization: Crisis, Hope, and the Future of the Planet, published in 2019.

Since 2015, the Chinese discussion of ecological civilization has increasingly been associated with an "organic" form of Marxism. "Organic Marxism" was first used by Philip Clayton and Justin Heinzekehr in their 2014 book, Organic Marxism: An Alternative to Capitalism and Ecological Catastrophe. The book, translated into Chinese and published by the People's Press in 2015, describes ecological civilization as an orienting goal for the global ecological movement.

Since 2017, Chinese universities and regional governments have been establishing centers for the study of Xi Jinping Thought on ecological civilization; at least 18 such centers had been established as of 2021.

In 2018, the Constitution of the People's Republic of China was amended to include the concept of ecological civilization building, as part of amendments that emphasized environmental conservation and the scientific outlook on development.

The 20th National Congress of the Chinese Communist Party further highlighted ecological civilization as a core developmental goal of the CCP.

Monday, May 20, 2024

Computer-aided software engineering

From Wikipedia, the free encyclopedia
Example of a CASE tool

Computer-aided software engineering (CASE) is a domain of software tools used to design and implement applications. CASE tools are similar to and are partly inspired by computer-aided design (CAD) tools used for designing hardware products. CASE tools are intended to help develop high-quality, defect-free, and maintainable software. CASE software was often associated with methods for the development of information systems together with automated tools that could be used in the software development process.

History

The Information System Design and Optimization System (ISDOS) project, started in 1968 at the University of Michigan, initiated a great deal of interest in the whole concept of using computer systems to help analysts in the very difficult process of analysing requirements and developing systems. Several papers by Daniel Teichroew fired a whole generation of enthusiasts with the potential of automated systems development. His Problem Statement Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool although it predated the term.

Another major thread emerged as a logical extension to the data dictionary of a database. By extending the range of metadata held, the attributes of an application could be held within a dictionary and used at runtime. This "active dictionary" became the precursor to the more modern model-driven engineering capability. However, the active dictionary did not provide a graphical representation of any of the metadata. It was the linking of the concept of a dictionary holding analysts' metadata, as derived from the use of an integrated set of techniques, together with the graphical representation of such data that gave rise to the earlier versions of CASE.

The next entrant into the market was Excelerator from Index Technology in Cambridge, Massachusetts. While DesignAid ran on Convergent Technologies and later Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. Although at the time of launch, and for several years afterward, the IBM platform did not support networking or a centralized database as the Convergent Technologies or Burroughs machines did, the allure of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator was a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instruments' CA Gen and Andersen Consulting's FOUNDATION toolset (DESIGN/1, INSTALL/1, FCP).

CASE tools were at their peak in the early 1990s. According to PC Magazine in January 1990, over 100 companies were offering nearly 200 different CASE tools. At the time, IBM had proposed AD/Cycle, an alliance of software vendors centered on IBM's software repository using IBM DB2 on the mainframe and OS/2:

The application development tools can be from several sources: from IBM, from vendors, and from the customers themselves. IBM has entered into relationships with Bachman Information Systems, Index Technology Corporation, and Knowledgeware wherein selected products from these vendors will be marketed through an IBM complementary marketing program to provide offerings that will help to achieve complete life-cycle coverage.

With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening the market for the mainstream CASE tools of today. Many of the leaders of the CASE market of the early 1990s ended up being purchased by Computer Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett Management Systems (LBMS). The other trend that led to the evolution of CASE tools was the rise of object-oriented methods and tools. Most of the various tool vendors added some support for object-oriented methods and tools. In addition, new products arose that were designed from the bottom up to support the object-oriented approach. Andersen developed its project Eagle as an alternative to Foundation. Several of the thought leaders in object-oriented development each developed their own methodology and CASE tool set: Jacobson, Rumbaugh, Booch, etc. Eventually, these diverse tool sets and methods were consolidated via standards led by the Object Management Group (OMG). The OMG's Unified Modeling Language (UML) is currently widely accepted as the industry standard for object-oriented modeling.

CASE software

Tools

CASE tools support specific tasks in the software development life-cycle. They can be divided into the following categories:

  1. Business and analysis modeling: Graphical modeling tools. E.g., E/R modeling, object modeling, etc.
  2. Development: Design and construction phases of the life-cycle. Debugging environments. E.g., IISE LKO.
  3. Verification and validation: Analyze code and specifications for correctness, performance, etc.
  4. Configuration management: Control the check-in and check-out of repository objects and files. E.g., SCCS, IISE.
  5. Metrics and measurement: Analyze code for complexity, modularity (e.g., no "go to's"), performance, etc.
  6. Project management: Manage project plans, task assignments, scheduling.

Another common way to distinguish CASE tools is the distinction between Upper CASE and Lower CASE. Upper CASE tools support business and analysis modeling, using traditional diagrammatic languages such as ER diagrams, data flow diagrams, structure charts, decision trees, decision tables, etc. Lower CASE tools support development activities, such as physical design, debugging, construction, testing, component integration, maintenance, and reverse engineering. All other activities span the entire life-cycle and apply equally to upper and lower CASE.

Workbenches

Workbenches integrate two or more CASE tools and support specific software-process activities. Hence they achieve:

  • A homogeneous and consistent interface (presentation integration)
  • Seamless integration of tools and toolchains (control and data integration)

An example workbench is Microsoft's Visual Basic programming environment. It incorporates several development tools: a GUI builder, a smart code editor, a debugger, etc. Most commercial CASE products tended to be such workbenches that seamlessly integrated two or more tools. Workbenches can also be classified in the same manner as tools: as focusing on analysis, development, verification, etc., as well as on upper CASE, lower CASE, or processes such as configuration management that span the complete life-cycle.

Environments

An environment is a collection of CASE tools or workbenches that attempts to support the complete software process. This contrasts with tools that focus on one specific task or a specific part of the life-cycle. CASE environments are classified by Fuggetta as follows:

  1. Toolkits: Loosely coupled collections of tools. These typically build on operating system workbenches such as the Unix Programmer's Workbench or the VMS VAX set. They typically perform integration via piping or some other basic mechanism to share data and pass control. The strength of easy integration is also one of the drawbacks. Simple passing of parameters via technologies such as shell scripting can't provide the kind of sophisticated integration that a common repository database can.
  2. Fourth generation: These environments are also known as 4GL (fourth-generation language) environments, because the early environments were designed around specific languages such as Visual Basic. They were the first environments to provide deep integration of multiple tools. Typically these environments were focused on specific types of applications, for example user-interface-driven applications that performed standard atomic transactions against a relational database. Examples are Informix 4GL and Focus.
  3. Language-centered: Environments based on a single, often object-oriented, language, such as the Symbolics Lisp Genera environment or VisualWorks Smalltalk from ParcPlace. In these environments all the operating system resources were objects in the object-oriented language. This provides powerful debugging and graphical opportunities, but the code developed is mostly limited to the specific language. For this reason these environments were mostly a niche within CASE; their use was mostly for prototyping and R&D projects. A common core idea for these environments was the model–view–controller user interface, which facilitated keeping multiple presentations of the same design consistent with the underlying model. The MVC architecture was adopted by the other types of CASE environments as well as by many of the applications that were built with them.
  4. Integrated: These environments are an example of what most IT people tend to think of first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion. These environments attempt to cover the complete life-cycle from analysis to maintenance and provide an integrated database repository for storing all artifacts of the software process. The integrated software repository was the defining feature for these kinds of tools. They provided multiple different design models as well as support for code in heterogeneous languages. One of the main goals for these types of environments was "round trip engineering": being able to make changes at the design level and have those automatically be reflected in the code and vice versa. These environments were also typically associated with a particular methodology for software development. For example, the FOUNDATION CASE suite from Andersen was closely tied to the Andersen Method/1 methodology.
  5. Process-centered: This is the most ambitious type of integration. These environments attempt to not just formally specify the analysis and design objects of the software process but the actual process itself and to use that formal process to control and guide software projects. Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia. These environments were by definition tied to some methodology since the software process itself is part of the environment and can control many aspects of tool invocation.

In practice, the distinction between workbenches and environments was flexible. Visual Basic for example was a programming workbench but was also considered a 4GL environment by many. The features that distinguished workbenches from environments were deep integration via a shared repository or common language and some kind of methodology (integrated and process-centered environments) or domain (4GL) specificity.

Major CASE risk factors

Some of the most significant risk factors for organizations adopting CASE technology include:

  • Inadequate standardization: Organizations usually have to tailor and adopt methodologies and tools to their specific requirements. Doing so may require significant effort to integrate both divergent technologies as well as divergent methods. For example, before the adoption of the UML standard the diagram conventions and methods for designing object-oriented models were vastly different among followers of Jacobson, Booch, and Rumbaugh.
  • Unrealistic expectations: The proponents of CASE technology—especially vendors marketing expensive tool sets—often hype expectations that the new approach will be a silver bullet that solves all problems. In reality no such technology can do that and if organizations approach CASE with unrealistic expectations they will inevitably be disappointed.
  • Inadequate training: As with any new technology, CASE requires time to train people in how to use the tools and to get up to speed with them. CASE projects can fail if practitioners are not given adequate time for training or if the first project attempted with the new technology is itself highly mission critical and fraught with risk.
  • Inadequate process control: CASE provides significant new capabilities to utilize new types of tools in innovative ways. Without the proper process guidance and controls these new capabilities can cause significant new problems as well.

Structured programming

From Wikipedia, the free encyclopedia

Structured programming is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by making extensive use of the structured control flow constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines.

It emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming languages, with the latter including support for block structures. Contributing factors to its popularity and widespread acceptance, at first in academia and later among practitioners, include the discovery of what is now known as the structured program theorem in 1966, and the publication of the influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer scientist Edsger W. Dijkstra, who coined the term "structured programming".

Structured programming is most frequently used with deviations that allow for clearer programs in some particular cases, such as when exception handling has to be performed.

Elements

Control structures

Following the structured program theorem, all programs are seen as composed of three control structures:

  • "Sequence"; ordered statements or subroutines executed in sequence.
  • "Selection"; one or a number of statements is executed depending on the state of the program. This is usually expressed with keywords such as if..then..else..endif. The conditional statement should have at least one true condition and each condition should have one exit point at max.
  • "Iteration"; a statement or block is executed until the program reaches a certain state, or operations have been applied to every element of a collection. This is usually expressed with keywords such as while, repeat, for or do..until. Often it is recommended that each loop should only have one entry point (and in the original structural programming, also only one exit point, and a few languages enforce this).
Graphical representation of the three basic patterns — sequence, selection, and repetition — using NS diagrams (blue) and flow charts (green).
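
For illustration, a minimal C++ sketch of all three structures in a single function (the values and names are illustrative only):

#include <iostream>
#include <vector>

// Sequence: statements run in order. Selection: the if/else branches on
// program state. Iteration: the loop repeats for every element.
int main() {
    std::vector<int> values = {4, -2, 7};
    int sum = 0;                               // sequence
    for (int v : values) {                     // iteration
        if (v >= 0) {                          // selection
            sum += v;
        } else {
            std::cout << "skipping " << v << '\n';
        }
    }
    std::cout << "sum: " << sum << '\n';       // prints "sum: 11"
}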

Subroutines

Subroutines are callable units such as procedures, functions, methods, or subprograms; they allow a sequence of statements to be referred to by a single statement.

Blocks

Blocks are used to enable groups of statements to be treated as if they were one statement. Block-structured languages have a syntax for enclosing structures in some formal way, such as an if-statement bracketed by if..fi as in ALGOL 68, a code section bracketed by BEGIN..END as in PL/I and Pascal, whitespace indentation as in Python, or the curly braces {...} of C and many later languages.

Structured programming languages

It is possible to do structured programming in any programming language, though it is preferable to use something like a procedural programming language. Some of the languages initially used for structured programming include ALGOL, Pascal, PL/I, Ada and RPL, but most new procedural programming languages since that time have included features to encourage structured programming, and some have deliberately left out features – notably GOTO – in an effort to make unstructured programming more difficult.

Structured programming (sometimes known as modular programming) enforces a logical structure on the program being written to make it more efficient and easier to understand and modify.

History

Theoretical foundation

The structured program theorem provides the theoretical basis of structured programming. It states that three ways of combining programs—sequencing, selection, and iteration—are sufficient to express any computable function. This observation did not originate with the structured programming movement; these structures are sufficient to describe the instruction cycle of a central processing unit, as well as the operation of a Turing machine. Therefore, a processor is always executing a "structured program" in this sense, even if the instructions it reads from memory are not part of a structured program. However, authors usually credit the result to a 1966 paper by Böhm and Jacopini, possibly because Dijkstra cited this paper himself. The structured program theorem does not address how to write and analyze a usefully structured program. These issues were addressed during the late 1960s and early 1970s, with major contributions by Dijkstra, Robert W. Floyd, Tony Hoare, Ole-Johan Dahl, and David Gries.
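
As a sketch of what the theorem claims (not a proof), the goto-based routine below can be rewritten using only iteration and selection, here with a variable standing in for the program counter; both functions are illustrative:

#include <iostream>

// Unstructured: a conditional backward jump.
void with_goto(int n) {
top:
    if (n <= 0) return;
    std::cout << n-- << '\n';
    goto top;
}

// Structured equivalent: one while loop (iteration) plus an if (selection),
// with a simulated program counter replacing the jump.
void structured(int n) {
    int pc = 0;
    while (pc == 0) {
        if (n <= 0) pc = 1;        // takes the place of the conditional jump
        else std::cout << n-- << '\n';
    }
}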

Debate

P. J. Plauger, an early adopter of structured programming, described his reaction to the structured program theorem:

Us converts waved this interesting bit of news under the noses of the unreconstructed assembly-language programmers who kept trotting forth twisty bits of logic and saying, 'I betcha can't structure this.' Neither the proof by Böhm and Jacopini nor our repeated successes at writing structured code brought them around one day sooner than they were ready to convince themselves.

Donald Knuth accepted the principle that programs must be written with provability in mind, but he disagreed with abolishing the GOTO statement, and as of 2018 has continued to use it in his programs. In his 1974 paper, "Structured Programming with Goto Statements", he gave examples where he believed that a direct jump leads to clearer and more efficient code without sacrificing provability. Knuth proposed a looser structural constraint: It should be possible to draw a program's flow chart with all forward branches on the left, all backward branches on the right, and no branches crossing each other. Many of those knowledgeable in compilers and graph theory have advocated allowing only reducible flow graphs.
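
A minimal C++ sketch of the kind of case such arguments cite: exiting a doubly nested search loop, where the structured alternative to the direct jump is an extra flag tested in both loop conditions (the function and names are illustrative):

#include <cstddef>
#include <vector>

// Returns the index of the first element that pairs with a later element
// to reach the target sum, or -1 if no such pair exists.
int first_pair_index(const std::vector<int>& a, int target) {
    int result = -1;
    for (std::size_t i = 0; i < a.size(); ++i) {
        for (std::size_t j = i + 1; j < a.size(); ++j) {
            if (a[i] + a[j] == target) {
                result = static_cast<int>(i);
                goto done;                 // one direct exit from both loops
            }
        }
    }
done:
    return result;
}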

Structured programming theorists gained a major ally in the 1970s after IBM researcher Harlan Mills applied his interpretation of structured programming theory to the development of an indexing system for The New York Times research file. The project was a great engineering success, and managers at other companies cited it in support of adopting structured programming, although Dijkstra criticized the ways that Mills's interpretation differed from the published work.

As late as 1987 it was still possible to raise the question of structured programming in a computer science journal. Frank Rubin did so in that year with an open letter titled "'GOTO Considered Harmful' Considered Harmful". Numerous objections followed, including a response from Dijkstra that sharply criticized both Rubin and the concessions other writers made when responding to him.

Outcome

By the end of the 20th century, nearly all computer scientists were convinced that it is useful to learn and apply the concepts of structured programming. High-level programming languages that originally lacked programming structures, such as FORTRAN, COBOL, and BASIC, now have them.

Common deviations

While goto has now largely been replaced by the structured constructs of selection (if/then/else) and repetition (while and for), few languages are purely structured. The most common deviation, found in many languages, is the use of a return statement for early exit from a subroutine. This results in multiple exit points, instead of the single exit point required by structured programming. There are other constructions to handle cases that are awkward in purely structured programming.

Early exit

The most common deviation from structured programming is early exit from a function or loop. At the level of functions, this is a return statement. At the level of loops, this is a break statement (terminate the loop) or continue statement (terminate the current iteration, proceed with next iteration). In structured programming, these can be replicated by adding additional branches or tests, but for returns from nested code this can add significant complexity. C is an early and prominent example of these constructs. Some newer languages also have "labeled breaks", which allow breaking out of more than just the innermost loop. Exceptions also allow early exit, but have further consequences, and thus are treated below.
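
A minimal C++ sketch of all three early exits in one function (the function and its logic are illustrative only):

#include <vector>

// return leaves the function, break abandons the loop, and continue skips
// to the next iteration.
int first_positive(const std::vector<int>& xs) {
    for (int x : xs) {
        if (x == 0) continue;      // skip zeros, keep scanning
        if (x < 0) break;          // negative value: stop scanning early
        return x;                  // early exit: found a positive value
    }
    return -1;                     // fall-through exit: nothing found
}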

Multiple exits can arise for a variety of reasons, most often either that the subroutine has no more work to do (if returning a value, it has completed the calculation), or has encountered "exceptional" circumstances that prevent it from continuing, hence needing exception handling.

The most common problem in early exit is that cleanup or final statements are not executed – for example, allocated memory is not deallocated, or open files are not closed, causing memory leaks or resource leaks. These must be done at each return site, which is brittle and can easily result in bugs. For instance, in later development, a return statement could be overlooked by a developer, and an action that should be performed at the end of a subroutine (e.g., a trace statement) might not be performed in all cases. Languages without a return statement, such as standard Pascal and Seed7, do not have this problem.

Most modern languages provide language-level support to prevent such leaks; see the detailed discussion at resource management. Most commonly this is done via unwind protection, which ensures that certain code is guaranteed to be run when execution exits a block; this is a structured alternative to having a cleanup block and a goto. It is most often known as try...finally and considered part of exception handling. Where a function has multiple return statements, introducing try...finally without any use of exceptions might look strange. Various techniques exist to encapsulate resource management. An alternative approach, found primarily in C++, is Resource Acquisition Is Initialization (RAII), which uses normal stack unwinding (variable deallocation) at function exit to call destructors on local variables to deallocate resources.
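
A minimal RAII sketch, assuming C++ and the C standard I/O functions; the File class here is illustrative, not a library type:

#include <cstdio>

// The destructor releases the file on every exit path (early return, normal
// return, or exception unwinding), so the cleanup cannot be forgotten at
// individual return sites.
class File {
    std::FILE* f;
public:
    explicit File(const char* path) : f(std::fopen(path, "r")) {}
    ~File() { if (f) std::fclose(f); }     // the cleanup lives here, once
    File(const File&) = delete;            // one owner, one close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f; }
};

bool first_byte(const char* path, int& out) {
    File file(path);                   // resource acquired
    if (!file.get()) return false;     // early exit: destructor still runs
    out = std::fgetc(file.get());
    return out != EOF;                 // normal exit: destructor runs here too
}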

Kent Beck, Martin Fowler and co-authors have argued in their refactoring books that nested conditionals may be harder to understand than a certain type of flatter structure using multiple exits predicated by guard clauses. Their 2009 book flatly states that "one exit point is really not a useful rule. Clarity is the key principle: If the method is clearer with one exit point, use one exit point; otherwise don’t". They offer a cookbook solution for transforming a function consisting only of nested conditionals into a sequence of guarded return (or throw) statements, followed by a single unguarded block, which is intended to contain the code for the common case, while the guarded statements are supposed to deal with the less common ones (or with errors). Herb Sutter and Andrei Alexandrescu also argue in their 2004 C++ tips book that the single-exit point is an obsolete requirement.
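
A sketch of that transformation, modeled loosely on the kind of payment example refactoring books use (the conditions and amounts are placeholders):

// Nested conditionals bury the common case at maximum depth...
double pay_amount_nested(bool dead, bool separated, bool retired) {
    double result;
    if (dead) {
        result = 0.0;
    } else {
        if (separated) {
            result = 10.0;
        } else {
            if (retired) {
                result = 12.0;
            } else {
                result = 100.0;          // the common case
            }
        }
    }
    return result;
}

// ...while guarded returns dispatch the special cases up front, leaving a
// single unguarded block for the common case.
double pay_amount_guarded(bool dead, bool separated, bool retired) {
    if (dead) return 0.0;
    if (separated) return 10.0;
    if (retired) return 12.0;
    return 100.0;                        // the common case, now flat
}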

In his 2004 textbook, David Watt writes that "single-entry multi-exit control flows are often desirable". Using Tennent's framework notion of sequencer, Watt uniformly describes the control flow constructs found in contemporary programming languages and attempts to explain why certain types of sequencers are preferable to others in the context of multi-exit control flows. Watt writes that unrestricted gotos (jump sequencers) are bad because the destination of the jump is not self-explanatory to the reader of a program until the reader finds and examines the actual label or address that is the target of the jump. In contrast, Watt argues that the conceptual intent of a return sequencer is clear from its own context, without having to examine its destination. Watt writes that a class of sequencers known as escape sequencers, defined as a "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. Watt also notes that while jump sequencers (gotos) have been somewhat restricted in languages like C, where the target must be inside the local block or an encompassing outer block, that restriction alone is not sufficient to make the intent of gotos in C self-describing and so they can still produce "spaghetti code". Watt also examines how exception sequencers differ from escape and jump sequencers; this is explained in the next section of this article.

In contrast to the above, Bertrand Meyer wrote in his 2009 textbook that instructions like break and continue "are just the old goto in sheep's clothing" and strongly advised against their use.

Exception handling

Based on the coding error from the Ariane 501 disaster, software developer Jim Bonang argues that any exceptions thrown from a function violate the single-exit paradigm, and proposes that all inter-procedural exceptions should be forbidden. Bonang proposes that all single-exit conforming C++ should be written along the lines of:

// Sketch of Bonang's proposed pattern. MyCheck2() and SomeInternalException
// are assumed to be defined elsewhere; the throw() specification is the
// pre-C++11 way of declaring that the function itself throws nothing
// (modern C++ would spell this noexcept).
bool MyCheck1() throw() {
  bool success = false;
  try {
    // Do something that may throw exceptions.
    if (!MyCheck2()) {
      throw SomeInternalException();
    }
    // Other code similar to the above.
    success = true;
  } catch (...) {
    // All exceptions caught and logged.
  }
  return success;
}

Peter Ritchie also notes that, in principle, even a single throw right before the return in a function constitutes a violation of the single-exit principle, but argues that Dijkstra's rules were written in a time before exception handling became a paradigm in programming languages, so he proposes to allow any number of throw points in addition to a single return point. He notes that solutions that wrap exceptions for the sake of creating a single-exit have higher nesting depth and thus are more difficult to comprehend, and even accuses those who propose to apply such solutions to programming languages that support exceptions of engaging in cargo cult thinking.

David Watt also analyzes exception handling in the framework of sequencers (introduced in this article in the previous section on early exits). Watt notes that an abnormal situation (generally exemplified by arithmetic overflows or input/output failures like file not found) is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program, and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" He notes that in contrast to status-flag testing, exceptions have the opposite default behavior, causing the program to terminate unless the programmer explicitly deals with the exception in some way, possibly by adding code to willfully ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers (discussed in the previous section) are not as suitable as a dedicated exception sequencer with the semantics discussed above.

The textbook by Louden and Lambert emphasizes that exception handling differs from structured programming constructs like while loops because the transfer of control "is set up at a different point in the program than that where the actual transfer takes place. At the point where the transfer actually occurs, there may be no syntactic indication that control will in fact be transferred." Computer science professor Arvind Kumar Bansal also notes that in languages which implement exception handling, even control structures like for, which have the single-exit property in absence of exceptions, no longer have it in presence of exceptions, because an exception can prematurely cause an early exit in any part of the control structure; for instance if init() throws an exception in for (init(); check(); increm()), then the usual exit point after check() is not reached. Citing multiple prior studies by others (1999–2004) and their own results, Westley Weimer and George Necula wrote that a significant problem with exceptions is that they "create hidden control-flow paths that are difficult for programmers to reason about".
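
A minimal C++ sketch of the for-loop point above; init() and check() are illustrative names, and init() always throws, so the loop's usual exit point after a failed check() is never reached:

#include <iostream>
#include <stdexcept>

int init() { throw std::runtime_error("init failed"); }
bool check(int i) { return i < 3; }

int main() {
    try {
        for (int i = init(); check(i); ++i) {
            std::cout << i << '\n';          // never runs
        }
        std::cout << "normal exit\n";        // never reached either
    } catch (const std::exception& e) {
        std::cout << "left the loop early: " << e.what() << '\n';
    }
}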

The necessity to limit code to single-exit points appears in some contemporary programming environments focused on parallel computing, such as OpenMP. The various parallel constructs from OpenMP, like parallel do, do not allow early exits from inside to the outside of the parallel construct; this restriction includes all manner of exits, from break to C++ exceptions, but all of these are permitted inside the parallel construct if the jump target is also inside it.
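
A minimal OpenMP/C++ sketch of this restriction, assuming a compiler with OpenMP enabled (e.g. -fopenmp); the loop body is illustrative:

#include <cstdio>
#include <omp.h>

int main() {
    #pragma omp parallel for
    for (int i = 0; i < 8; ++i) {
        if (i % 2 != 0)
            continue;        // allowed: the target stays inside the construct
        // break;            // not allowed: would exit the parallel loop
        std::printf("i=%d on thread %d\n", i, omp_get_thread_num());
    }
}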

Multiple entry

More rarely, subprograms allow multiple entry. This is most commonly only re-entry into a coroutine (or generator/semicoroutine), where a subprogram yields control (and possibly a value), but can then be resumed where it left off. There are a number of common uses of such programming, notably for streams (particularly input/output), state machines, and concurrency. From a code execution point of view, yielding from a coroutine is closer to structured programming than returning from a subroutine, as the subprogram has not actually terminated, and will continue when called again – it is not an early exit. However, coroutines mean that multiple subprograms have execution state – rather than a single call stack of subroutines – and thus introduce a different form of complexity.
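
A hand-rolled C++ sketch of that idea, with the saved execution state made explicit as a struct field; real coroutines and generators automate exactly this bookkeeping:

// next() resumes from the state saved by the previous call rather than
// starting at the top, which is what distinguishes re-entry into a
// coroutine from an ordinary subroutine call.
struct TwoSteps {
    int state = 0;
    bool next(int& out) {
        switch (state) {
        case 0: state = 1; out = 10; return true;   // first entry point
        case 1: state = 2; out = 20; return true;   // resumed entry point
        default: return false;                      // terminated
        }
    }
};

// Usage: TwoSteps g; int v; while (g.next(v)) { /* consume v */ }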

It is very rare for subprograms to allow entry to an arbitrary position in the subprogram, as in this case the program state (such as variable values) is uninitialized or ambiguous, and this is very similar to a goto.

State machines

Some programs, particularly parsers and communications protocols, have a number of states that follow each other in a way that is not easily reduced to the basic structures, and some programmers implement the state-changes with a jump to the new state. This type of state-switching is often used in the Linux kernel.

However, it is possible to structure these systems by making each state-change a separate subprogram and using a variable to indicate the active state (see trampoline). Alternatively, these can be implemented via coroutines, which dispense with the trampoline.
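
A minimal C++ sketch of that structured alternative: each state change is a separate subprogram returning the next state, and a single loop (the trampoline) dispatches on a state variable (the state names are illustrative):

#include <iostream>

enum class State { Start, Header, Body, Done };

State on_start()  { std::cout << "start\n";  return State::Header; }
State on_header() { std::cout << "header\n"; return State::Body; }
State on_body()   { std::cout << "body\n";   return State::Done; }

int main() {
    State s = State::Start;
    while (s != State::Done) {                // the trampoline
        switch (s) {
        case State::Start:  s = on_start();  break;
        case State::Header: s = on_header(); break;
        case State::Body:   s = on_body();   break;
        default:            s = State::Done; break;
        }
    }
}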
