Thursday, May 1, 2025

Materials science

From Wikipedia, the free encyclopedia
A diamond cuboctahedron showing seven crystallographic planes, imaged with scanning electron microscopy
Six classes of conventional engineering materials.

Materials science is an interdisciplinary field concerned with researching and discovering materials. Materials engineering is an engineering field concerned with finding uses for materials in other fields and industries.

The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.

Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.

Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.

History

A late Bronze Age sword or dagger blade

The material of choice of a given era is often a defining point. Phases such as the Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building the space vehicles that enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.

Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. Due to the expanded knowledge of the link between atomic and molecular processes and the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The most prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena.

Fundamentals

The materials paradigm represented in the form of a tetrahedron

A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are myriad materials around us; they can be found in everything from buildings and vehicles to spacecraft. New and advanced materials being developed include nanomaterials, biomaterials, and energy materials, to name a few.

The basis of materials science is studying the interplay between the structure of materials, the processing methods used to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to the microstructure and the macroscopic features introduced during processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.

Structure

Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.

Structure is studied at the following levels.

Atomic structure

Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.

Bonding

To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.

Crystallography
Crystal structure of a perovskite with the chemical formula ABX₃

Crystallography is the science that examines the arrangement of atoms in crystalline solids, and it is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material is the unit cell, the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials have parallelepiped or hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure.

Physical properties are often controlled by crystalline defects, so an understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (edge and screw), vacancies, and self-interstitials, spanning point, linear, planar, and three-dimensional types of defects.

Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit a regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
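As a minimal, self-contained illustration of the unit-cell idea (not drawn from the article itself), the short Python sketch below counts the atoms belonging to a face-centred cubic (FCC) unit cell and computes its atomic packing fraction from the cell geometry.

```python
import math

# Toy illustration: counting atoms in a face-centred cubic (FCC) unit cell
# and computing its atomic packing fraction.
# Corner atoms are shared by 8 cells, face atoms by 2.
corner_atoms = 8 * (1 / 8)
face_atoms = 6 * (1 / 2)
atoms_per_cell = corner_atoms + face_atoms   # 4 for FCC

# In FCC, atoms touch along the face diagonal: 4r = a * sqrt(2).
a = 1.0                      # lattice parameter (arbitrary units)
r = a * math.sqrt(2) / 4     # atomic radius implied by the geometry

packing_fraction = atoms_per_cell * (4 / 3) * math.pi * r**3 / a**3
print(atoms_per_cell)               # 4.0
print(round(packing_fraction, 4))   # ~0.7405
```

The result, roughly 0.74, is the densest packing achievable with equal spheres, which is one reason many metals adopt the FCC or the equally dense hexagonal close-packed structure.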

Nanostructure

Buckminsterfullerene nanostructure

Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.

Nanostructure deals with objects and structures that are in the 1–100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This gives rise to many interesting electrical, magnetic, optical, and mechanical properties.

In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.

Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.

Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.

Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) are often used synonymously, although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.

Microstructure

Microstructure of pearlite

Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.

The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
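The Hall–Petch relationship mentioned above is a standard way to quantify one such structure-property link; in its usual textbook form (stated here for illustration, not as this article's own derivation) the yield strength rises as the average grain size shrinks:

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}
```

Here \sigma_y is the yield strength, \sigma_0 the friction stress resisting dislocation motion, k_y a material-specific strengthening coefficient, and d the average grain diameter.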

Macrostructure

Macrostructure is the appearance of a material at the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.

Properties

Materials exhibit myriad properties, including mechanical, chemical, electrical, thermal, optical, and magnetic properties.

The properties of a material determine its usability and hence its engineering application.

Processing

Synthesis and processing involves the creation of a material with the desired micro-nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.

Thermodynamics

A phase diagram for a binary system displaying a eutectic point

Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.

The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.
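As a brief refresher (standard relations, not specific to this article), the Gibbs free energy ties these macroscopic variables together and governs phase equilibrium:

```latex
G = H - TS, \qquad \Delta G = \Delta H - T\,\Delta S
```

A process at constant temperature and pressure is favored when \Delta G < 0, and two phases \alpha and \beta coexist in equilibrium when their molar Gibbs free energies (chemical potentials) are equal, G^{\alpha} = G^{\beta}; a phase diagram can be read as a map of which phase minimizes G at each composition and temperature.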

Kinetics

Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from a non-equilibrium to an equilibrium state) due to the application of a certain field. It details the rate of various processes evolving in materials, including changes in shape, size, composition, and structure. Diffusion is important in the study of kinetics, as it is the most common mechanism by which materials undergo change. Kinetics is essential in the processing of materials because, among other things, it details how the microstructure changes with the application of heat.
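Diffusion is usually described by a thermally activated (Arrhenius) rate law; the Python sketch below shows how strongly a diffusion coefficient of the form D = D0 * exp(-Q / (R*T)) depends on temperature. The D0 and Q values are illustrative placeholders, not data from the article.

```python
import math

# Sketch of the standard Arrhenius form for a thermally activated
# diffusion coefficient, D = D0 * exp(-Q / (R * T)).
R = 8.314           # gas constant, J/(mol K)

def diffusion_coefficient(D0, Q, T):
    """D0 in m^2/s, activation energy Q in J/mol, temperature T in K."""
    return D0 * math.exp(-Q / (R * T))

# Example: a hypothetical interstitial diffuser (assumed values)
D0 = 2.0e-5         # pre-exponential factor, m^2/s
Q = 1.4e5           # activation energy, J/mol
for T in (300.0, 800.0, 1200.0):
    print(T, diffusion_coefficient(D0, Q, T))
```

Raising the temperature from 300 K to 1200 K increases D by many orders of magnitude, which is why heat treatment is such an effective lever on microstructure.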

Research

Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas.

Nanomaterials

A scanning electron microscopy image of carbon nanotube bundles

Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ meter), but is usually 1–100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.

Biomaterials

The iridescent nacre inside a nautilus shell

A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.

Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.

Electronic, optical, and magnetic

Negative index metamaterial

Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.

Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
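To make the doping sensitivity concrete, here is a rough back-of-the-envelope sketch (assuming textbook room-temperature values for silicon, full donor ionization, and the mass-action law n*p = ni^2) of how conductivity scales with donor concentration; the numbers are approximate and purely illustrative.

```python
# Rough sketch of how doping sets conductivity in an extrinsic semiconductor:
# sigma = q * (n * mu_n + p * mu_p).
q = 1.602e-19               # elementary charge, C
ni = 1.0e16                 # intrinsic carrier concentration of Si, m^-3 (approx.)
mu_n, mu_p = 0.135, 0.048   # electron / hole mobilities, m^2/(V s) (approx.)

def conductivity(Nd):
    """Conductivity (S/m) of n-type Si with donor concentration Nd (m^-3),
    assuming full ionization and n * p = ni**2."""
    n = Nd if Nd > 0 else ni
    p = ni**2 / n
    return q * (n * mu_n + p * mu_p)

for Nd in (0, 1e21, 1e23):   # undoped, lightly doped, heavily doped
    print(Nd, conductivity(Nd))
```

Even a modest donor concentration changes the conductivity by several orders of magnitude, which is the practical basis of semiconductor device engineering.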

This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.

Computational materials science

With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
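As a toy example of the kind of statistical simulation mentioned above (a generic Metropolis Monte Carlo step on a 2D Ising lattice, not the method of any particular study), the following sketch flips spins according to the Metropolis acceptance rule:

```python
import random, math

# Minimal Metropolis Monte Carlo sketch on a 2D Ising lattice.
L, T, steps = 16, 2.0, 20000          # lattice size, temperature (J/kB), MC attempts
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def delta_E(i, j):
    """Energy change if spin (i, j) flips, with periodic boundaries."""
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2 * s * nb

for _ in range(steps):
    i, j = random.randrange(L), random.randrange(L)
    dE = delta_E(i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1              # accept the flip (Metropolis criterion)

magnetization = abs(sum(map(sum, spins))) / L**2
print(magnetization)
```

The same accept/reject logic, with more realistic energy models, underlies Monte Carlo studies of ordering, segregation, and phase transformations in real materials.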

Industry

Beverage containers of all three materials types: ceramic (glass), metal (aluminum), and polymer (plastic).

Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).

Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on tenths and hundredths of a weight percent of the carbon and other alloying elements they contain. Thus, the extraction and purification methods used to produce iron in a blast furnace can affect the quality of the steel that is produced.

Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass.

Ceramics and glasses

Si₃N₄ ceramic bearing parts

Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO₂ (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch-resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components.

Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher-density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are composites of ceramic particles bound in a metal phase. The wear resistance of tools is derived from cemented carbides, with a metal phase of cobalt or nickel typically added to modify properties.

Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries.

Composites

A 6 μm diameter carbon filament (running from bottom left to top right) sitting atop the much larger human hair

Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases.

Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.

Other examples can be seen in the "plastic" casings of television sets, cell phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers or dispersants, depending on their purpose.

Polymers

The repeating unit of the polymer polypropylene
Expanded polystyrene polymer packaging

Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.

Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.

Polycarbonate would normally be considered an engineering plastic (other examples include PEEK and ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.

Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.

The dividing lines between the various types of plastics are based not on the material itself but on its properties and applications. For example, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.

Metal alloys

Wire rope made from steel alloy

The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value.

Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel are related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys, such as Invar-type iron-nickel alloys, exhibit very low thermal expansion, so their dimensions remain nearly unchanged across a range of temperatures. Cast iron is defined as an iron-carbon alloy with more than 2.00%, but less than 6.67%, carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels.
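The carbon-content boundaries above can be captured in a short helper; the further split of steels into low/medium/high carbon at 0.3% and 0.6% follows a common textbook convention and is an assumption added here for illustration.

```python
# Illustration of the carbon-content boundaries stated above; the 0.3 % and
# 0.6 % cut-offs for low/medium/high carbon steel are common textbook
# conventions, not values taken from the article.
def classify_iron_carbon(carbon_wt_pct):
    if carbon_wt_pct < 0.01:
        return "commercially pure iron"
    if carbon_wt_pct <= 2.00:
        if carbon_wt_pct < 0.30:
            return "low-carbon steel"
        if carbon_wt_pct < 0.60:
            return "medium-carbon steel"
        return "high-carbon steel"
    if carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the usual iron-carbon classification"

for c in (0.05, 0.45, 1.0, 3.5):
    print(c, classify_iron_carbon(c))
```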

Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.

Semiconductors

A semiconductor is a material that has a resistivity between that of a conductor and an insulator. Modern-day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. The electronic properties of a semiconductor can be greatly altered by intentionally introducing impurities, in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number of devices, from a few to millions, manufactured and interconnected on a single semiconductor substrate.

Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride and have various applications.

Relation with other fields

Google Ngram Viewer-diagram visualizing the search terms for complex matter terminology (1940–2018). Green: "materials science", red: "condensed matter physics" and blue: "solid state physics".

Materials science evolved, starting from the 1950s because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.

The field of materials science and engineering is important both from a scientific perspective and for applications. Materials are of the utmost importance for engineers (and other applied fields), because the use of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.

Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid-state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and they are themselves areas in which materials physicists work.

The field is inherently interdisciplinary, and materials scientists and engineers must be aware of, and make use of, the methods of the physicist, chemist, and engineer. Conversely, fields such as the life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches. Thus, there remain close relationships with these fields, and many physicists, chemists, and engineers find themselves working in materials science due to the significant overlaps between the fields.

Emerging technologies

Each entry below lists the emerging technology, its development status, the technologies it could potentially marginalize, its potential applications, and related articles where given.

  • Aerogel – Status: hypothetical, experiments, diffusion, early uses. Potentially marginalized technologies: traditional insulation, glass. Potential applications: improved insulation, insulative glass if it can be made clear, sleeves for oil pipelines, aerospace, high-heat and extreme-cold applications.
  • Amorphous metal – Status: experiments. Potentially marginalized technologies: Kevlar. Potential applications: armor.
  • Conductive polymers – Status: research, experiments, prototypes. Potentially marginalized technologies: conductors. Potential applications: lighter and cheaper wires, antistatic materials, organic solar cells.
  • Femtotechnology, picotechnology – Status: hypothetical. Potentially marginalized technologies: present nuclear technologies. Potential applications: new materials; nuclear weapons, power.
  • Fullerene – Status: experiments, diffusion. Potentially marginalized technologies: synthetic diamond and carbon nanotubes (Buckypaper). Potential applications: programmable matter.
  • Graphene – Status: hypothetical, experiments, diffusion, early uses. Potentially marginalized technologies: silicon-based integrated circuits. Potential applications: components with higher strength-to-weight ratios, transistors that operate at higher frequency, lower-cost display screens in mobile devices, storing hydrogen for fuel-cell-powered cars, filtration systems, longer-lasting and faster-charging batteries, sensors to diagnose diseases. Related article: Potential applications of graphene.
  • High-temperature superconductivity – Status: cryogenic receiver front-end (CRFE) RF and microwave filter systems for mobile phone base stations; prototypes in dry ice; hypothetical and experimental for higher temperatures. Potentially marginalized technologies: copper wire, semiconductor integrated circuits. Potential applications: no-loss conductors, frictionless bearings, magnetic levitation, lossless high-capacity accumulators, electric cars, heat-free integrated circuits and processors.
  • LiTraCon – Status: experiments, already used to make Europe Gate. Potentially marginalized technologies: glass. Potential applications: building skyscrapers, towers, and sculptures like Europe Gate.
  • Metamaterials – Status: hypothetical, experiments, diffusion. Potentially marginalized technologies: classical optics. Potential applications: microscopes, cameras, metamaterial cloaking, cloaking devices.
  • Metal foam – Status: research, commercialization. Potentially marginalized technologies: hulls. Potential applications: space colonies, floating cities.
  • Multi-function structures[43] – Status: hypothetical, experiments, some prototypes, few commercial. Potentially marginalized technologies: composite materials. Potential applications: wide range, e.g., self-health monitoring, self-healing materials, morphing.
  • Nanomaterials: carbon nanotubes – Status: hypothetical, experiments, diffusion, early uses. Potentially marginalized technologies: structural steel and aluminium. Potential applications: stronger, lighter materials, the space elevator. Related articles: Potential applications of carbon nanotubes, carbon fiber.
  • Programmable matter – Status: hypothetical, experiments. Potentially marginalized technologies: coatings, catalysts. Potential applications: wide range, e.g., claytronics, synthetic biology.
  • Quantum dots – Status: research, experiments, prototypes. Potentially marginalized technologies: LCD, LED. Potential applications: quantum dot lasers, future use as programmable matter in display technologies (TV, projection), optical data communications (high-speed data transmission), medicine (laser scalpel).
  • Silicene – Status: hypothetical, research. Potential applications: field-effect transistors.

Subdisciplines

The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers, and composites. There are additionally broadly applicable, materials-independent endeavors, as well as relatively broad focuses across materials on specific phenomena and techniques.


Seismic tomography

From Wikipedia, the free encyclopedia

Seismic tomography or seismotomography is a technique for imaging the subsurface of the Earth using seismic waves. The properties of seismic waves are modified by the material through which they travel. By comparing the differences in seismic waves recorded at different locations, it is possible to create a model of the subsurface structure. Most commonly, these seismic waves are generated by earthquakes or man-made sources such as explosions. Different types of waves, including P, S, Rayleigh, and Love waves, can be used for tomographic images, though each comes with its own benefits and downsides and is used depending on the geologic setting, seismometer coverage, distance from nearby earthquakes, and required resolution. The model created by tomographic imaging is almost always a seismic velocity model, and features within this model may be interpreted as structural, thermal, or compositional variations. Geoscientists apply seismic tomography to a wide variety of settings in which the subsurface structure is of interest, ranging in scale from whole-Earth structure to the upper few meters below the surface.

Theory

Tomography is solved as an inverse problem. Seismic data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth was of uniform composition, but structural, chemical, and thermal variations affect the properties of seismic waves, most importantly their velocity, leading to the reflection and refraction of these waves. The location and magnitude of variations in the subsurface can be calculated by the inversion process, although solutions to tomographic inversions are non-unique. Most commonly, only the travel time of the seismic waves is considered in the inversion. However, advances in modeling techniques and computing power have allowed different parts, or the entirety, of the measured seismic waveform to be fit during the inversion.
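In the common ray-theoretical travel-time formulation (one standard way of writing the inversion described above, not necessarily the exact parameterization used in any given study), each observed travel time is an integral of slowness along the ray path, and the residual relative to a reference model is linearized in the velocity perturbation:

```latex
t = \int_{\mathrm{ray}} \frac{ds}{v(\mathbf{x})},
\qquad
\delta t \;\approx\; -\int_{\mathrm{ray}} \frac{\delta v(\mathbf{x})}{v_0(\mathbf{x})^{2}}\, ds
```

Here v_0 is the velocity of the reference Earth model and \delta v the sought perturbation; collecting many residuals \delta t from crossing ray paths yields the linear system that the inversion solves, typically with damping or smoothing because the problem is non-unique.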

Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of travel-time difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source.

History

In the early 20th century, seismologists first used travel time variations in seismic waves from earthquakes to make discoveries such as the existence of the Moho and the depth to the outer core. While these findings shared some underlying principles with seismic tomography, modern tomography itself was not developed until the 1970s with the expansion of global seismic networks. Networks like the World-Wide Standardized Seismograph Network were initially motivated by underground nuclear tests, but quickly showed the benefits of their accessible, standardized datasets for geoscience. These developments occurred concurrently with advancements in modeling techniques and computing power that were required to solve large inverse problems and generate theoretical seismograms, which are required to test the accuracy of a model. As early as 1972, researchers successfully used some of the underlying principles of modern seismic tomography to search for fast and slow areas in the subsurface.

The first widely cited publication that largely resembles modern seismic tomography was published in 1976 and used local earthquakes to determine the 3D velocity structure beneath Southern California. The following year, P wave delay times were used to create 2D velocity maps of the whole Earth at several depth ranges, representing an early 3D model. The first model using iterative techniques, which improve upon an initial model in small steps and are required when there are a large number of unknowns, was done in 1984. The model was made possible by iterating upon the first radially anisotropic Earth model, created in 1981. A radially anisotropic Earth model describes changes in material properties, specifically seismic velocity, along a radial path through the Earth, and assumes this profile is valid for every path from the core to the surface. This 1984 study was also the first to apply the term "tomography" to seismology, as the term had originated in the medical field with X-ray tomography.

Seismic tomography has continued to improve in the past several decades since its initial conception. The development of adjoint inversions, which are able to combine several different types of seismic data into a single inversion, helps negate some of the trade-offs associated with any individual data type. Historically, seismic waves have been modeled as 1D rays, a method referred to as "ray theory" that is relatively simple to model and can usually fit travel-time data well. However, recorded seismic waveforms contain much more information than just travel time and are affected by a much wider path than is assumed by ray theory. Methods like the finite-frequency method attempt to account for this within the framework of ray theory. More recently, the development of "full waveform" or "waveform" tomography has abandoned ray theory entirely. This method models seismic wave propagation in its full complexity and can yield more accurate images of the subsurface. Originally these inversions were developed in exploration seismology in the 1980s and 1990s and were too computationally complex for global and regional scale studies, but the development of numerical modeling methods to simulate seismic waves has allowed waveform tomography to become more common.

Process

Seismic tomography uses seismic records to create 2D and 3D models of the subsurface through an inverse problem that minimizes the difference between the created model and the observed seismic data. Various methods are used to resolve anomalies in the crust, lithosphere, mantle, and core based on the availability of data and types of seismic waves that pass through the region. Longer wavelengths penetrate deeper into the Earth, but seismic waves are not sensitive to features significantly smaller than their wavelength and therefore provide a lower resolution. Different methods also make different assumptions, which can have a large effect on the image created. For example, commonly used tomographic methods work by iteratively improving an initial input model, and thus can produce unrealistic results if the initial model is unreasonable.
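The sketch below is a deliberately tiny, synthetic illustration of the linearized, iterative style of inversion described above: travel-time residuals d are related to cell-by-cell slowness perturbations m through a ray-path matrix G (d = G m), and a damped least-squares step recovers m. All names, sizes, and values are made up for the example.

```python
import numpy as np

# Toy linearized travel-time tomography: invert travel-time residuals d for
# slowness perturbations m on a small grid, given a ray-path matrix G whose
# entries are path lengths of each ray in each cell (d = G m).
rng = np.random.default_rng(0)

n_rays, n_cells = 40, 25
G = rng.random((n_rays, n_cells))        # fake ray-path lengths per cell
m_true = np.zeros(n_cells)
m_true[12] = 0.05                        # one "slow" cell as the target anomaly
d = G @ m_true + 0.001 * rng.standard_normal(n_rays)   # noisy residuals

# Damped least squares: minimize ||G m - d||^2 + eps^2 ||m||^2
eps = 0.1
m_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_cells), G.T @ d)

print(np.round(m_est.reshape(5, 5), 3))  # the anomaly should appear near cell 12
```

Real inversions differ mainly in scale (millions of cells and observations), in how G is computed (ray tracing or full waveform sensitivity kernels), and in the regularization chosen, but the structure of the problem is the same.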

P wave data are used in most local models and global models in areas with sufficient earthquake and seismograph density. S and surface wave data are used in global models when this coverage is not sufficient, such as in ocean basins and away from subduction zones. First-arrival times are the most widely used, but models utilizing reflected and refracted phases are used in more complex models, such as those imaging the core. Differential traveltimes between wave phases or types are also used.

Local tomography

Local tomographic models are often based on a temporary seismic array targeting specific areas, unless in a seismically active region with extensive permanent network coverage. These allow for the imaging of the crust and upper mantle.

  • Diffraction and wave equation tomography use the full waveform, rather than just the first arrival times. The inversion of the amplitudes and phases of all arrivals provides more detailed density information than transmission traveltime alone. Despite the theoretical appeal, these methods are not widely employed because of the computing expense and difficult inversions.
  • Reflection tomography originated with exploration geophysics. It uses an artificial source to resolve small-scale features at crustal depths. Wide-angle tomography is similar, but with a wide source to receiver offset. This allows for the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together.
  • Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Given the proximity between source and receivers, a precise earthquake focus location must be known. This requires the simultaneous iteration of both structure and focus locations in model calculations.
  • Teleseismic tomography uses waves from distant earthquakes that deflect upwards to a local seismic array. The models can reach depths similar to the array aperture, typically to depths for imaging the crust and lithosphere (a few hundred kilometers). The waves travel near 30° from vertical, creating a vertical distortion to compact features.

Regional or global tomography

Simplified and interpreted P and S wave velocity variations in the mantle across southern North America showing the subducted Farallon plate.

Regional to global scale tomographic models are generally based on long wavelengths. Various models have better agreement with each other than local models due to the large feature size they image, such as subducted slabs and superplumes. The trade off from whole mantle to whole Earth coverage is the coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes). Although often used to image different parts of the subsurface, P and S wave derived models broadly agree where there is image overlap. These models use data from both permanent seismic stations and supplementary temporary arrays.

  • First arrival traveltime P wave data are used to generate the highest resolution tomographic images of the mantle. These models are limited to regions with sufficient seismograph coverage and earthquake density, therefore cannot be used for areas such as inactive plate interiors and ocean basins without seismic networks. Other phases of P waves are used to image the deeper mantle and core.
  • In areas with limited seismograph or earthquake coverage, multiple phases of S waves can be used for tomographic models. These are of lower resolution than P wave models, due to the distances involved and fewer bounce-phase data available. S waves can also be used in conjunction with P waves for differential arrival time models.
  • Surface waves can be used for tomography of the crust and upper mantle where no body wave (P and S) data are available. Both Rayleigh and Love waves can be used. The low frequency waves lead to low resolution models, therefore these models have difficulty with crustal structure. Free oscillations, or normal mode seismology, are the long wavelength, low frequency movements of the surface of the Earth which can be thought of as a type of surface wave. The frequencies of these oscillations can be obtained through Fourier transformation of seismic data. The models based on this method are of broad scale, but have the advantage of relatively uniform data coverage as compared to data sourced directly from earthquakes.
  • Attenuation tomography attempts to extract the anelastic signal from the elastic-dominated waveform of seismic waves. Generally, it is assumed that seismic waves behave elastically, meaning individual rock particles that are displaced by the seismic wave eventually return to their original position. However, a comparatively small amount of permanent deformation does occur, which adds up to significant energy loss over large distances. This anelastic behavior is called attenuation, and in certain conditions can become just as important as the elastic response. It has been shown that the contribution of anelasticity to seismic velocity is highly sensitive to temperature, so attenuation tomography can help determine if a velocity feature is caused by a thermal or chemical variation, which can be ambiguous when assuming a purely elastic response.
  • Ambient noise tomography uses random seismic waves generated by oceanic and atmospheric disturbances to recover the velocities of surface waves. Assuming ambient seismic noise is equal in amplitude and frequency content from all directions, cross-correlating the ambient noise recorded at two seismometers for the same time period should produce only seismic energy that travels from one station to the other (see the sketch after this list). This allows one station to be treated as a "virtual source" of surface waves sent to the other station, the "virtual receiver". These surface waves are sensitive to the seismic velocity of the Earth at different depths depending on their period. A major advantage of this method is that it does not require an earthquake or man-made source. A disadvantage of the method is that an individual cross-correlation can be quite noisy due to the complexity of the real ambient noise field. Thus, many individual correlations over a shorter time period, typically one day, need to be created and averaged to improve the signal-to-noise ratio. While this has often required very large amounts of seismic data recorded over multiple years, more recent studies have successfully used much shorter time periods to create tomographic images with ambient noise.
  • Waveforms are usually modeled as rays because ray theory is significantly less complex to model than the full seismic wave equations. However, seismic waves are affected by the material properties of a wide area surrounding the ray path, not just the material through which the ray passes directly. The finite-frequency effect is the effect this surrounding medium has on a seismic record. Finite-frequency tomography accounts for this in determining both travel time and amplitude anomalies, increasing image resolution. This has the ability to resolve much larger variations (i.e., 10–30%) in material properties.
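The cross-correlation step referenced in the ambient noise item above can be sketched in a few lines; the synthetic "records" here are white noise plus a small time-shifted common signal, so the correlation peak simply recovers the assumed propagation delay between the two stations.

```python
import numpy as np

# Sketch of the cross-correlation step behind ambient noise tomography:
# correlating the same stretch of noise recorded at two stations emphasizes
# energy travelling between them. The records below are synthetic.
rng = np.random.default_rng(1)
fs = 10.0                                  # sampling rate, Hz (assumed)
n = 6000
common = rng.standard_normal(n)            # energy travelling station A -> B
lag_samples = 25                           # 2.5 s propagation delay (assumed)

rec_a = common + 0.5 * rng.standard_normal(n)
rec_b = np.roll(common, lag_samples) + 0.5 * rng.standard_normal(n)

xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = (np.arange(xcorr.size) - (n - 1)) / fs
print("peak at lag (s):", lags[np.argmax(xcorr)])   # ~ +2.5 s
```

In practice many daily correlations are stacked, and the surface-wave dispersion measured from the stacked correlation feeds the tomographic inversion.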

Applications

Seismic tomography can resolve anisotropy, anelasticity, density, and bulk sound velocity. Variations in these parameters may be a result of thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes. Larger scale features that can be imaged with tomography include the high velocities beneath continental shields and low velocities under ocean spreading centers.

Hotspots

The African large low-shear-velocity province (superplume)

The mantle plume hypothesis proposes that areas of volcanism not readily explained by plate tectonics, called hotspots, are a result of thermal upwelling within the mantle. Some researchers have proposed an upper mantle source above the 660 km discontinuity for these plumes, while others propose a much deeper source, possibly at the core-mantle boundary.

While the source of mantle plumes has been highly debated since they were first proposed in the 1970s, most modern studies argue in favor of mantle plumes originating at or near the core-mantle boundary. This is in large part due to tomographic images that reveal both the plumes themselves as well as large low-velocity zones in the deep mantle that likely contribute to the formation of mantle plumes. These large low-shear-velocity provinces as well as smaller ultra-low velocity zones have been consistently observed across many tomographic models of the deep Earth.

Subduction zones

Subducting plates are colder than the mantle into which they are moving. This creates a fast anomaly that is visible in tomographic images. Tomographic images have been made of most subduction zones around the world and have provided insight into the geometries of the crust and upper mantle in these areas. These images have revealed that subducting plates vary widely in how steeply they move into the mantle. Tomographic images have also seen features such as deeper portions of the subducting plate tearing off from the upper portion.

Other applications

Tomography can be used to image faults to better understand their seismic hazard. This can be done by imaging the fault itself, by seeing differences in seismic velocity across the fault boundary, or by determining near-surface velocity structure, which can have a large impact on the amplitude of ground shaking during an earthquake due to site amplification effects. Near-surface velocity structure from tomographic images can also be useful for other hazards, such as monitoring landslides for changes in near-surface moisture content, which affects both seismic velocity and the potential for future landslides.

Tomographic images of volcanoes have yielded new insights into properties of the underlying magmatic system. These images have most commonly been used to estimate the depth and volume of magma stored in the crust, but have also been used to constrain properties such as the geometry, temperature, or chemistry of the magma. It is important to note that both lab experiments and tomographic imaging studies have shown that recovering these properties from seismic velocity alone can be difficult due to the complexity of seismic wave propagation through focused zones of hot, potentially melted rocks.

While primitive compared with tomography on Earth, seismic tomography has been proposed for other bodies in the Solar System and successfully used on the Moon. Data collected from four seismometers placed by the Apollo missions have been used many times to create 1-D velocity profiles for the Moon, and less commonly 3-D tomographic models. Tomography relies on having multiple seismometers, but tomography-adjacent methods for constraining Earth structure have also been used on other planets. While on Earth these methods are often used in combination with seismic tomography models to better constrain the locations of subsurface features, they can still provide useful information about the interiors of other planetary bodies when only a single seismometer is available. For example, data gathered by the SEIS (Seismic Experiment for Interior Structure) instrument on the InSight lander has been used to detect the Martian core.

Limitations

Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered. Temporary seismic networks have helped improve tomographic models in regions of particular interest, but typically only collect data for months to a few years. The uneven distribution of earthquakes biases tomographic models towards seismically active regions. Methods that do not rely on earthquakes such as active source surveys or ambient noise tomography have helped image areas with little to no seismicity, though these both have their own limitations as compared to earthquake-based tomography.

The type of seismic wave used in a model limits the resolution it can achieve. Longer wavelengths are able to penetrate deeper into the Earth, but can only be used to resolve large features. Finer resolution can be achieved with surface waves, with the trade-off that they cannot be used in models deeper than the crust and upper mantle. The disparity between wavelength and feature scale causes anomalies to appear with reduced magnitude and size in images. P and S wave models respond differently to the types of anomalies. Models based solely on the wave that arrives first naturally prefer faster pathways, causing models based on these data to have lower resolution of slow (often hot) features. This can prove to be a significant issue in areas such as volcanoes, where rocks are much hotter than their surroundings and oftentimes partially melted. Shallow models must also consider the significant lateral velocity variations in continental crust.

Because seismometers have only been deployed in large numbers since the late-20th century, tomography is only capable of viewing changes in velocity structure over decades. For example, tectonic plates only move at millimeters per year, so the total amount of change in geologic structure due to plate tectonics since the development of seismic tomography is several orders of magnitude lower than the finest resolution possible with modern seismic networks. However, seismic tomography has still been used to view near-surface velocity structure changes at time scales of years to months.

Tomographic solutions are non-unique. Although statistical methods can be used to assess the validity of a model, some uncertainty always remains unresolved, which makes it difficult to compare the validity of different model results.
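As a toy illustration of non-uniqueness (my own sketch, not an example from the article), consider a single travel-time measurement along a ray crossing two model cells: infinitely many slowness models fit the datum exactly, and a damped least-squares inversion simply selects one of them, the minimum-norm solution.

```python
import numpy as np

# One travel time t = L1*s1 + L2*s2 measured along a ray that spends
# 10 km in each of two cells with unknown slownesses s1, s2 (s/km).
G = np.array([[10.0, 10.0]])   # ray lengths through the two cells (km)
t_obs = np.array([4.0])        # observed travel time (s)

# Damped / least-squares inversions pick the minimum-norm model...
s_min_norm, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
print("minimum-norm model:", s_min_norm)   # [0.2, 0.2]

# ...but many other models fit the single datum exactly.
for s1 in (0.10, 0.20, 0.30):
    s2 = (t_obs[0] - G[0, 0] * s1) / G[0, 1]
    print(f"s1={s1:.2f}, s2={s2:.2f} -> predicted t = {G[0, 0]*s1 + G[0, 1]*s2:.1f} s")
```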

Computing power limits the amount of seismic data, the number of unknowns, the mesh size, and the number of iterations in tomographic models. This is of particular importance in ocean basins, which, because of limited network coverage and low earthquake density, require more complex processing of distant data. Shallow oceanic models also require a smaller model mesh size because of the thinner crust.

Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This can make equal changes appear to differ in magnitude because of how colors are perceived; for example, the change from orange to red reads as more subtle than the change from blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images.
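A minimal sketch of this effect (assuming matplotlib is available; the anomaly field below is synthetic and purely illustrative) is to plot the same data with a rainbow colormap and with a perceptually uniform one, and compare how equal steps in the data read visually:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "velocity anomaly" field, purely for illustration.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
anomaly = np.exp(-(x**2 + y**2)) - 0.5 * np.exp(-((x - 1.5)**2 + y**2) / 0.5)

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, cmap in zip(axes, ["jet", "viridis"]):   # rainbow vs perceptually uniform
    im = ax.imshow(anomaly, cmap=cmap, vmin=-1, vmax=1)
    ax.set_title(cmap)
    fig.colorbar(im, ax=ax, label="relative anomaly")
plt.show()
```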

Environmental resource management

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Environmental_resource_management
The shrinking Aral Sea, whose waters were diverted for irrigation, is an example of poor water resource management.

Environmental resource management or environmental management is the management of the interaction and impact of human societies on the environment. It is not, as the phrase might suggest, the management of the environment itself. Environmental resource management aims to ensure that ecosystem services are protected and maintained for future human generations, and to maintain ecosystem integrity by weighing ethical, economic, and scientific (ecological) variables. It tries to identify the factors that create conflict between meeting needs and protecting resources. It is thus linked to environmental protection, resource management, sustainability, integrated landscape management, natural resource management, fisheries management, forest management, wildlife management, environmental management systems, and others.

Significance

Environmental resource management is an issue of increasing concern, as reflected in its prevalence in several texts influencing global sociopolitical frameworks such as the Brundtland Commission's Our Common Future, which highlighted the integrated nature of the environment and international development, and the Worldwatch Institute's annual State of the World reports.

The environment shapes the character of people, animals, plants, and places around the Earth, affecting behaviour, religion, culture, and economic practices.

Scope

Improved agricultural practices such as these terraces in northwest Iowa can serve to preserve soil and improve water quality.

Environmental resource management can be viewed from a variety of perspectives. It involves the management of all components of the biophysical environment, both living (biotic) and non-living (abiotic), and the relationships among all living species and their habitats. It also involves the relationships of the human environment, such as the social, cultural, and economic environment, with the biophysical environment. The essential aspects of environmental resource management are ethical, economic, social, and technological; these underlie its principles and guide decision-making.

The concepts of environmental determinism, probabilism, and possibilism are significant in environmental resource management.

Environmental resource management covers many areas in science, including geography, biology, social sciences, political sciences, public policy, ecology, physics, chemistry, sociology, psychology, and physiology. Environmental resource management as a practice and discourse (across these areas) is also the object of study in the social sciences.

Aspects

Ethical

Environmental resource management strategies are intrinsically driven by conceptions of human-nature relationships. Ethical aspects involve the cultural and social issues relating to the environment, and dealing with changes to it. "All human activities take place in the context of certain types of relationships between society and the bio-physical world (the rest of nature)," and so, there is a great significance in understanding the ethical values of different groups around the world. Broadly speaking, two schools of thought exist in environmental ethics: Anthropocentrism and Ecocentrism, each influencing a broad spectrum of environmental resource management styles along a continuum. These styles perceive "...different evidence, imperatives, and problems, and prescribe different solutions, strategies, technologies, roles for economic sectors, culture, governments, and ethics, etc."

Anthropocentrism

Anthropocentrism, "an inclination to evaluate reality exclusively in terms of human values," is an ethic reflected in the major interpretations of Western religions and the dominant economic paradigms of the industrialised world. Anthropocentrism looks at nature as existing solely for the benefit of humans, and as a commodity to use for the good of humanity and to improve human quality of life. Anthropocentric environmental resource management is therefore not the conservation of the environment solely for the environment's sake, but rather the conservation of the environment, and ecosystem structure, for humans' sake.

Ecocentrism

Ecocentrists believe in the intrinsic value of nature while maintaining that human beings must use and even exploit nature to survive and live. It is this fine ethical line that ecocentrists navigate between fair use and abuse. At an extreme of the ethical scale, ecocentrism includes philosophies such as ecofeminism and deep ecology, which evolved as a reaction to dominant anthropocentric paradigms. "In its current form, it is an attempt to synthesize many old and some new philosophical attitudes about the relationship between nature and human activity, with particular emphasis on ethical, social, and spiritual aspects that have been downplayed in the dominant economic worldview."

Economics

Main article: Economics

A water harvesting system collects rainwater from the Rock of Gibraltar into pipes that lead to tanks excavated inside the rock.

The economy functions within and is dependent upon goods and services provided by natural ecosystems. The role of the environment is recognized in both classical economics and neoclassical economics theories, yet the environment was a lower priority in economic policies from 1950 to 1980 due to emphasis from policy makers on economic growth. With the prevalence of environmental problems, many economists embraced the notion that, "If environmental sustainability must coexist for economic sustainability, then the overall system must [permit] identification of an equilibrium between the environment and the economy." As such, economic policy makers began to incorporate the functions of the natural environment – or natural capital – particularly as a sink for wastes and for the provision of raw materials and amenities.

Debate continues among economists as to how to account for natural capital, specifically whether resources can be replaced through knowledge and technology, or whether the environment is a closed system that cannot be replenished and is finite. Economic models influence environmental resource management, in that management policies reflect beliefs about natural capital scarcity. For someone who believes natural capital is infinite and easily substituted, environmental management is irrelevant to the economy. For example, economic paradigms based on neoclassical models of closed economic systems are primarily concerned with resource scarcity and thus prescribe legalizing the environment as an economic externality for an environmental resource management strategy. This approach has often been termed 'Command-and-control'. Colby has identified trends in the development of economic paradigms, among them, a shift towards more ecological economics since the 1990s.

Ecology

A diagram showing the juvenile fish bypass system, which allows young salmon and steelhead to safely pass the Rocky Reach Hydro Project in Washington
Fencing separates big game from vehicles along the Quebec Autoroute 73 in Canada.

There are many definitions of the field of science commonly called ecology. A typical one is "the branch of biology dealing with the relations and interactions between organisms and their environment, including other organisms." "The pairing of significant uncertainty about the behaviour and response of ecological systems with urgent calls for near-term action constitutes a difficult reality, and a common lament" for many environmental resource managers. Scientific analysis of the environment deals with several dimensions of ecological uncertainty. These include: structural uncertainty resulting from the misidentification, or lack of information pertaining to the relationships between ecological variables; parameter uncertainty referring to "uncertainty associated with parameter values that are not known precisely but can be assessed and reported in terms of the likelihood…of experiencing a defined range of outcomes"; and stochastic uncertainty stemming from chance or unrelated factors. Adaptive management is considered a useful framework for dealing with situations of high levels of uncertainty though it is not without its detractors.

A common scientific concept and impetus behind environmental resource management is carrying capacity. Simply put, carrying capacity refers to the maximum number of organisms a particular resource can sustain. The concept of carrying capacity, whilst understood by many cultures over history, has its roots in Malthusian theory. An example is visible in the EU Water Framework Directive. However, "it is argued that Western scientific knowledge ... is often insufficient to deal with the full complexity of the interplay of variables in environmental resource management." These concerns have recently been addressed by a shift in environmental resource management approaches to incorporate different knowledge systems, including traditional knowledge, reflected in approaches such as adaptive co-management, community-based natural resource management, and transitions management, among others.
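Carrying capacity is usually formalized with the logistic growth model, in which a population N grows at an intrinsic rate r until it levels off at the carrying capacity K, following dN/dt = r*N*(1 - N/K). The sketch below is my own illustration; r, K, the initial population, and the time step are assumed values, not figures from the article.

```python
# Assumed, illustrative parameters: growth rate r, carrying capacity K,
# initial population N, and a 1-year Euler time step.

r = 0.1      # intrinsic growth rate per year
K = 1000.0   # carrying capacity of the resource
N = 10.0     # initial population
dt = 1.0     # time step in years

for decade_start in range(0, 101, 10):
    print(f"year {decade_start:3d}: N ~ {N:7.1f}")
    for _ in range(10):
        N += r * N * (1 - N / K) * dt   # dN/dt = r*N*(1 - N/K)
```

The printed population rises quickly at first and then saturates near K, which is the behaviour the carrying-capacity concept is meant to capture.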

Sustainability

Sustainability in environmental resource management involves managing economic, social, and ecological systems both within and outside an organizational entity so that it can sustain itself and the system it exists in. In this context, sustainability implies that rather than competing for endless growth on a finite planet, development improves quality of life without necessarily consuming more resources. Sustainably managing environmental resources requires organizational change that instills sustainability values, projects those values outward from all levels, and reinforces them among surrounding stakeholders. The result should be a symbiotic relationship between the sustaining organization, the community, and the environment.

Many drivers compel environmental resource management to take sustainability issues into account. Today's economic paradigms do not protect the natural environment, yet they deepen human dependency on biodiversity and ecosystem services. Ecologically, massive environmental degradation and climate change threaten the stability of the ecological systems that humanity depends on. Socially, an increasing gap between rich and poor and the global North–South divide denies many access to basic human needs, rights, and education, leading to further environmental destruction. The planet's unstable condition is caused by many anthropogenic sources. As an exceptionally powerful contributing factor to social and environmental change, the modern organisation has the potential to apply environmental resource management with sustainability principles to achieve highly effective outcomes. To achieve sustainable development with environmental resource management, an organisation should work within sustainability principles, including social and environmental accountability; long-term planning; a strong, shared vision; a holistic focus; devolved and consensus decision making; broad stakeholder engagement and justice; transparency measures; trust; and flexibility.

Current paradigm shifts

To adjust to today's environment of rapid social and ecological change, some organizations have begun to experiment with new tools and concepts. Those that are more traditional and stick to hierarchical decision making have difficulty dealing with the demand for lateral decision making that supports effective participation. Whether it is a matter of ethics or simply of strategic advantage, organizations are internalizing sustainability principles. Some of the world's largest and most profitable corporations are shifting to sustainable environmental resource management: Ford, Toyota, BMW, Honda, Shell, DuPont, Statoil, Swiss Re, Hewlett-Packard, and Unilever, among others. An extensive study by the Boston Consulting Group, surveying 1,560 business leaders from diverse regions, job positions, levels of expertise in sustainability, industries, and sizes of organizations, revealed the many benefits of sustainable practice as well as its viability.

Although the sustainability of environmental resource management has improved, corporate sustainability, for one, has yet to reach the majority of global companies operating in the markets. The three major barriers preventing organizations from shifting towards sustainable practice with environmental resource management are not understanding what sustainability is; having difficulty modeling an economically viable case for the switch; and having a flawed execution plan, or lacking one altogether. Therefore, the most important part of shifting an organization to adopt sustainability in environmental resource management is to create a shared vision and understanding of what sustainability means for that particular organization and to clarify the business case.

Stakeholders

Public sector

A conservation project in North Carolina involving the search for bog turtles was conducted by the United States Fish and Wildlife Service and the North Carolina Wildlife Resources Commission and its volunteers.

The public sector comprises the general government sector plus all public corporations including the central bank. In environmental resource management the public sector is responsible for administering natural resource management and implementing environmental protection legislation. The traditional role of the public sector in environmental resource management is to provide professional judgement through skilled technicians on behalf of the public. With the increase of intractable environmental problems, the public sector has been led to examine alternative paradigms for managing environmental resources. This has resulted in the public sector working collaboratively with other sectors (including other governments, private and civil) to encourage sustainable natural resource management behaviours.

Private sector

The private sector comprises private corporations and non-profit institutions serving households. The private sector's traditional role in environmental resource management is the recovery of natural resources. Such private sector recovery groups include mining (minerals and petroleum), forestry, and fishery organisations. Environmental resource management undertaken by the private sector varies depending on the resource type, whether renewable or non-renewable and whether privately owned or held in common (see also Tragedy of the Commons). Environmental managers from the private sector also need skills to manage collaboration within a dynamic social and political environment.

Civil society

Civil society comprises associations in which societies voluntarily organise themselves and which represent a wide range of interests and ties. These can include community-based organisations, indigenous peoples' organisations, and non-government organisations (NGOs). Functioning through strong public pressure, civil society can exercise its legal rights against the implementation of resource management plans, particularly land management plans. The aim of civil society in environmental resource management is to be included in the decision-making process by means of public participation. Public participation can be an effective strategy for invoking a sense of social responsibility for natural resources.

Tools

As with all management functions, effective management tools, standards, and systems are required. An environmental management standard or system or protocol attempts to reduce environmental impact as measured by some objective criteria. The ISO 14001 standard is the most widely used standard for environmental risk management and is closely aligned to the European Eco-Management and Audit Scheme (EMAS). As a common auditing standard, the ISO 19011 standard explains how to combine this with quality management.

Other environmental management systems (EMS) tend to be based on the ISO 14001 standard and many extend it in various ways:

  • The Green Dragon Environmental Management Standard is a five-level EMS designed for smaller organisations for whom ISO 14001 may be too onerous and for larger organisations who wish to implement ISO 14001 in a more manageable step-by-step approach,
  • BS 8555 is a phased standard that can help smaller companies move to ISO 14001 in six manageable steps,
  • The Natural Step focuses on basic sustainability criteria and helps focus engineering on reducing use of materials or energy use that is unsustainable in the long term,
  • Natural Capitalism advises using accounting reform and a general biomimicry and industrial ecology approach to do the same thing,
  • The US Environmental Protection Agency has many further terms and standards that it defines as appropriate to large-scale EMS,
  • The UN and the World Bank have encouraged adopting a "natural capital" measurement and management framework.

Other strategies exist that rely on making simple distinctions rather than building top-down management "systems" using performance audits and full cost accounting. For instance, Ecological Intelligent Design divides products into consumables, service products or durables, and unsaleables – toxic products that no one should buy, or that in many cases buyers do not realize they are buying. By eliminating the unsaleables from the comprehensive outcome of any purchase, better environmental resource management is achieved without systems.

Another example that diverges from top-down management is the implementation of community-based co-management systems of governance. An example is community-based subsistence fishing areas, such as those implemented in Ha'ena, Hawaii. Community-based systems of governance allow the communities that interact most directly with a resource, and that are most deeply affected by its overexploitation, to make the decisions regarding its management, thus empowering local communities and managing resources more effectively.

Recent successful cases have put forward the notion of integrated management. It takes a wider approach and stresses the importance of interdisciplinary assessment. It is an interesting notion that may not be adaptable to all cases.

Case Study: Kissidougou, Guinea (Fairhead, Leach)

In Kissidougou, Guinea, the dry season brings open grass fires that defoliate the few trees in the savanna. Villages within this savanna are surrounded by “islands” of forest, which provide sites for forts, hiding, and rituals, protection from wind and fire, and shade for crops. According to scholars and researchers in the region during the late 19th and 20th centuries, there was a steady decline in tree cover. This led colonial Guinea to implement policies including the switch from upland to swamp farming; bush-fire control; protection of certain species and land; and tree planting in villages. These policies were carried out in the form of permits, fines, and military repression.

But Kissidougou villagers claim that their ancestors established these forest islands. Many maps and letters document France’s occupation of Guinea as well as Kissidougou’s past landscape: from the 1780s to the 1860s, “the whole country [was] prairie.” James Fairhead and Melissa Leach, both environmental anthropologists at the University of Sussex, claim the state’s environmental analysis “casts into question the relationships between society, demography, and environment.” With this, they reframed the state’s narratives: local land use can be both vegetation-enriching and vegetation-degrading; the combined effects of local resource management are greater than the sum of their parts; and there is evidence that increased population correlates with an increase in forest cover. Fairhead and Leach support enabling policy and socioeconomic conditions in which local resource management groups can act effectively. In Kissidougou, there is evidence that local powers and community efforts created the forest islands that shape the savanna’s landscape.

Why is there anything at all?

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Why_is_there_anything_at_all ...