Tuesday, February 1, 2022

Uncertainty quantification

From Wikipedia, the free encyclopedia

Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.

Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.

Sources

Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:

Parameter uncertainty
This comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods. Some examples of this are the local free-fall acceleration in a falling object experiment, various material properties in a finite element analysis for engineering, and multiplier uncertainty in the context of macroeconomic policy optimization.
Parametric variability
This comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance.
Structural uncertainty
Also known as model inadequacy, model bias, or model discrepancy, this comes from the lack of knowledge of the underlying physics in the problem. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is when modeling the process of a falling object using the free-fall model; the model itself is inaccurate since there always exists air friction. In this case, even if there is no unknown parameter in the model, a discrepancy is still expected between the model and true physics.
Algorithmic uncertainty
Also known as numerical uncertainty, or discrete uncertainty. This type comes from numerical errors and numerical approximations per implementation of the computer model. Most models are too complicated to solve exactly. For example, the finite element method or finite difference method may be used to approximate the solution of a partial differential equation (which introduces numerical errors). Other examples are numerical integration and infinite sum truncation that are necessary approximations in numerical implementation.
Experimental uncertainty
Also known as observation error, this comes from the variability of experimental measurements. Experimental uncertainty is inevitable and can be noticed by repeating a measurement many times using exactly the same settings for all inputs/variables.
Interpolation uncertainty
This comes from a lack of available data collected from computer model simulations and/or experimental measurements. For other input settings that don't have simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses.

Aleatoric and epistemic

Uncertainty is sometimes classified into two categories, prominently seen in medical applications.

Aleatoric uncertainty
Aleatoric uncertainty is also known as statistical uncertainty, and is representative of unknowns that differ each time we run the same experiment. For example, arrows shot with a mechanical bow that exactly duplicates each launch (the same acceleration, altitude, direction and final velocity) will not all impact the same point on the target, due to random and complicated vibrations of the arrow shaft, the knowledge of which cannot be determined sufficiently to eliminate the resulting scatter of impact points. The argument here is obviously in the definition of "cannot". Just because we cannot measure sufficiently with our currently available measurement devices does not necessarily preclude the existence of such information, which would move this uncertainty into the category below. Aleatoric is derived from the Latin alea, or dice, referring to a game of chance.
Epistemic uncertainty
Epistemic uncertainty is also known as systematic uncertainty, and is due to things one could in principle know but does not in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden. An example of a source of this uncertainty would be the drag in an experiment designed to measure the acceleration of gravity near the earth's surface. The commonly used gravitational acceleration of 9.8 m/s² ignores the effects of air resistance, but the air resistance for the object could be measured and incorporated into the experiment to reduce the resulting uncertainty in the calculation of the gravitational acceleration.
Combined occurrence and interaction of aleatoric and epistemic uncertainty
Aleatoric and epistemic uncertainty can also occur simultaneously in a single term, e.g. when experimental parameters show aleatoric uncertainty and those experimental parameters are input to a computer simulation. If a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is then learnt from computer experiments for the uncertainty quantification, this surrogate exhibits epistemic uncertainty that depends on or interacts with the aleatoric uncertainty of the experimental parameters. Such an uncertainty cannot solely be classified as aleatoric or epistemic any more, but is a more general inferential uncertainty.

In real-life applications, both kinds of uncertainties are present. Uncertainty quantification intends to explicitly express both types of uncertainty separately. The quantification for the aleatoric uncertainties can be relatively straightforward, where traditional (frequentist) probability is the most basic form. Techniques such as the Monte Carlo method are frequently used. A probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions. To evaluate epistemic uncertainties, efforts are made to understand the (lack of) knowledge of the system, process or mechanism. Epistemic uncertainty is generally understood through the lens of Bayesian probability, where probabilities are interpreted as indicating how certain a rational person could be regarding a specific claim.

Mathematical perspective

In mathematics, uncertainty is often characterized in terms of a probability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what a random sample drawn from a probability distribution will be.
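
A minimal numerical sketch of this distinction (the distributions and values below are illustrative assumptions, not taken from the article): epistemic uncertainty is represented as not knowing the true mean of a distribution, aleatoric uncertainty as the scatter of samples drawn from it.

import numpy as np

rng = np.random.default_rng(0)

# Epistemic uncertainty: we are unsure which distribution is the right one.
# Here this is expressed as several candidate values for an unknown mean,
# e.g. draws from a prior belief about that mean.
candidate_means = rng.normal(loc=10.0, scale=2.0, size=5)

# Aleatoric uncertainty: even for a fixed, fully known distribution,
# individual random samples still scatter around its mean.
for mu in candidate_means:
    samples = rng.normal(loc=mu, scale=0.5, size=3)
    print(f"assumed mean {mu:6.2f} -> samples {np.round(samples, 2)}")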

Uncertainty versus variability

Technical professionals are often asked to estimate "ranges" for uncertain quantities. It is important that they distinguish whether they are being asked for variability ranges or uncertainty ranges. Likewise, it is important for modelers to know if they are building models of variability or uncertainty, and their relationship, if any.

Types of problems

There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty (where the various sources of uncertainty are propagated through the model to predict the overall uncertainty in the system response) and the other is the inverse assessment of model uncertainty and parameter uncertainty (where the model parameters are calibrated simultaneously using test data). There has been a proliferation of research on the former problem and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in designing robust systems.

Forward

Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs from the parametric variability listed in the sources of uncertainty. The targets of uncertainty propagation analysis can be (a minimal Monte Carlo sketch follows the list):

  • To evaluate low-order moments of the outputs, i.e. mean and variance.
  • To evaluate the reliability of the outputs. This is especially useful in reliability engineering where outputs of a system are usually closely related to the performance of the system.
  • To assess the complete probability distribution of the outputs. This is useful in the scenario of utility optimization where the complete distribution is used to calculate the utility.
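
A minimal Monte Carlo sketch of these three targets, using a hypothetical response y = x1² + x2 with assumed input distributions (all distributions, sample sizes and thresholds here are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)

# Assumed uncertain inputs (illustrative choices)
x1 = rng.normal(loc=1.0, scale=0.2, size=100_000)
x2 = rng.uniform(low=0.0, high=1.0, size=100_000)

# Hypothetical system response
y = x1**2 + x2

# Target 1: low-order moments of the output
print("mean:", y.mean(), "variance:", y.var())

# Target 2: reliability, e.g. the probability that the response stays
# below a hypothetical limit of 2.5
print("P(y < 2.5):", np.mean(y < 2.5))

# Target 3: the complete output distribution, summarized here by quantiles
print("5%/50%/95% quantiles:", np.quantile(y, [0.05, 0.5, 0.95]))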

Inverse

Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is called bias correction), and estimates the values of unknown parameters in the model if there are any (which is called parameter calibration or simply calibration). Generally this is a much more difficult problem than forward uncertainty propagation; however it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification:

The outcome of bias correction, including an updated model (prediction mean) and prediction confidence interval.

Bias correction only

Bias correction quantifies the model inadequacy, i.e. the discrepancy between the experiment and the mathematical model. The general model updating formula for bias correction is:

y^e(x) = y^m(x) + δ(x) + ε

where y^e(x) denotes the experimental measurements as a function of several input variables x, y^m(x) denotes the computer model (mathematical model) response, δ(x) denotes the additive discrepancy function (aka bias function), and ε denotes the experimental uncertainty. The objective is to estimate the discrepancy function δ(x), and as a by-product, the resulting updated model is y^m(x) + δ(x). A prediction confidence interval is provided with the updated model as the quantification of the uncertainty.

Parameter calibration only

Parameter calibration estimates the values of one or more unknown parameters in a mathematical model. The general model updating formulation for calibration is:

y^e(x) = y^m(x, θ*) + ε

where y^m(x, θ) denotes the computer model response that depends on several unknown model parameters θ, and θ* denotes the true values of the unknown parameters in the course of experiments. The objective is to either estimate θ*, or to come up with a probability distribution of θ* that encompasses the best knowledge of the true parameter values.
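
As a minimal illustration of calibration only (no discrepancy term), the sketch below estimates an unknown parameter θ of a hypothetical computer model y^m(x, θ) = θ·x from synthetic experimental data by least squares; every function and number here is an assumption made for the example.

import numpy as np

rng = np.random.default_rng(2)

def computer_model(x, theta):
    # Hypothetical computer model response y^m(x, theta)
    return theta * x

# Synthetic "experimental" data generated with a true theta of 2.0
# plus observation noise (the experimental uncertainty term)
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = computer_model(x_obs, 2.0) + rng.normal(scale=0.05, size=x_obs.size)

# Least-squares calibration: pick the theta that minimizes the misfit
thetas = np.linspace(0.0, 4.0, 401)
misfit = [np.sum((y_obs - computer_model(x_obs, t))**2) for t in thetas]
theta_hat = thetas[int(np.argmin(misfit))]
print("calibrated theta:", theta_hat)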

Bias correction and parameter calibration

It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines the two together:

y^e(x) = y^m(x, θ*) + δ(x) + ε

It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve.

Selective methodologies

Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems.

Forward propagation

Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically six categories of probabilistic approaches for uncertainty propagation:

  • Simulation-based methods: Monte Carlo simulations, importance sampling, adaptive sampling, etc.
  • General surrogate-based methods: In a non-intrusive approach, a surrogate model is learnt in order to replace the experiment or the simulation with a cheap and fast approximation. Surrogate-based methods can also be employed in a fully Bayesian fashion. This approach has proven particularly powerful when the cost of sampling, e.g. computationally expensive simulations, is prohibitively high.
  • Local expansion-based methods: Taylor series, perturbation method, etc. These methods have advantages when dealing with relatively small input variability and outputs that do not exhibit high nonlinearity. These linear or linearized methods are detailed in the article Uncertainty propagation; a first-order sketch appears after this list.
  • Functional expansion-based methods: Neumann expansion, orthogonal or Karhunen–Loeve expansions (KLE), with polynomial chaos expansion (PCE) and wavelet expansions as special cases.
  • Most probable point (MPP)-based methods: first-order reliability method (FORM) and second-order reliability method (SORM).
  • Numerical integration-based methods: Full factorial numerical integration (FFNI) and dimension reduction (DR).
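
As an example of the local expansion-based category, the sketch below propagates input uncertainty through a hypothetical model with a first-order Taylor approximation; the model, means and standard deviations are assumptions, and the inputs are taken as independent.

import numpy as np

def g(x1, x2):
    # Hypothetical model whose output uncertainty we want
    return x1**2 * x2

mu = np.array([3.0, 2.0])      # assumed input means
sigma = np.array([0.1, 0.05])  # assumed input standard deviations

# First-order Taylor (local expansion) propagation about the means:
# mean(g) ~ g(mu), var(g) ~ sum_i (dg/dx_i)^2 * sigma_i^2
eps = 1e-6
grad = np.array([
    (g(mu[0] + eps, mu[1]) - g(mu[0] - eps, mu[1])) / (2 * eps),
    (g(mu[0], mu[1] + eps) - g(mu[0], mu[1] - eps)) / (2 * eps),
])
mean_approx = g(mu[0], mu[1])
std_approx = np.sqrt(np.sum((grad * sigma)**2))
print("approx mean:", mean_approx, "approx std:", std_approx)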

For non-probabilistic approaches, interval analysis, Fuzzy theory, possibility theory and evidence theory are among the most widely used.

The probabilistic approach is considered as the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory of decision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics. This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals.

Inverse uncertainty

Frequentist

In regression analysis and least squares problems, the standard error of parameter estimates is readily available, which can be expanded into a confidence interval.
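
A minimal sketch of this frequentist route, fitting an assumed linear model to synthetic data and expanding the standard errors of the estimates into approximate 95% confidence intervals (all data and values are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(3)

# Synthetic data from an assumed linear model y = a + b*x + noise
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Ordinary least squares for the parameters [a, b]
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standard errors of the estimates from the residual variance
dof = x.size - X.shape[1]
s2 = np.sum((y - X @ beta)**2) / dof
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

# Expand into approximate 95% confidence intervals (normal approximation)
for name, b, e in zip(["intercept", "slope"], beta, se):
    print(f"{name}: {b:.3f} +/- {1.96 * e:.3f}")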

Bayesian

Several methodologies for inverse uncertainty quantification exist under the Bayesian framework. The most complicated direction is to aim at solving problems with both bias correction and parameter calibration. The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations. Another common situation is that parameters derived from experiments are input to simulations. For computationally expensive simulations, a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is often necessary, defining an inverse problem for finding the surrogate model that best approximates the simulations.
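
A minimal sketch of Bayesian parameter calibration for a single parameter on a grid; the model, prior range, noise level and data are illustrative assumptions, and real applications typically use MCMC together with a surrogate in place of the expensive simulation.

import numpy as np

rng = np.random.default_rng(4)

def model(x, theta):
    # Hypothetical computer model, cheap enough to evaluate directly here
    return np.sin(theta * x)

# Synthetic experimental data with noise, generated with theta = 1.3
x_obs = np.linspace(0.0, 3.0, 15)
y_obs = model(x_obs, 1.3) + rng.normal(scale=0.1, size=x_obs.size)

# Grid posterior: uniform prior on [0, 3] times a Gaussian likelihood
thetas = np.linspace(0.0, 3.0, 601)
log_like = np.array([
    -0.5 * np.sum((y_obs - model(x_obs, t))**2) / 0.1**2 for t in thetas
])
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

post_mean = np.sum(thetas * weights)
post_std = np.sqrt(np.sum(weights * (thetas - post_mean)**2))
print("posterior mean of theta:", post_mean)
print("posterior std of theta:", post_std)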

Modular approach

An approach to inverse uncertainty quantification is the modular Bayesian approach. The modular Bayesian approach derives its name from its four-module procedure. In addition to the currently available data, a prior distribution of the unknown parameters should be assigned.

Module 1: Gaussian process modeling for the computer model

To address the issue from the lack of simulation results, the computer model is replaced with a Gaussian process (GP) model

y^m(x, θ) ~ GP(h^m(·)^T β^m, σ²_m R^m(·, ·))

where d is the dimension of the input variables x and r is the dimension of the unknown parameters θ, both of which enter the correlation function R^m. While the mean basis h^m is pre-defined, the coefficients β^m, the variance σ²_m and the correlation parameters of R^m, known as hyperparameters of the GP model, need to be estimated via maximum likelihood estimation (MLE). This module can be considered as a generalized kriging method.

Module 2: Gaussian process modeling for the discrepancy function

Similarly to the first module, the discrepancy function is replaced with a GP model

δ(x) ~ GP(h^δ(·)^T β^δ, σ²_δ R^δ(·, ·))

Together with the prior distribution of the unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates for the hyperparameters β^δ, σ²_δ and the correlation parameters of R^δ. At the same time, the hyperparameter estimates from Module 1 get updated as well.

Module 3: Posterior distribution of unknown parameters

Bayes' theorem is applied to calculate the posterior distribution of the unknown parameters:

p(θ | data, φ) ∝ p(data | θ, φ) p(θ)

where φ includes all the fixed hyperparameters from the previous modules.

Module 4: Prediction of the experimental response and discrepancy function
Full approach

The fully Bayesian approach requires that priors be assigned not only for the unknown parameters but also for the other hyperparameters. It proceeds in the following steps:

  1. Derive the joint posterior distribution of the unknown parameters θ and the hyperparameters φ;
  2. Integrate the hyperparameters φ out and obtain the marginal posterior distribution of θ. This single step accomplishes the calibration;
  3. Prediction of the experimental response and discrepancy function.

However, the approach has significant drawbacks:

  • For most cases, the joint posterior is a highly intractable function of the hyperparameters φ. Hence the integration becomes very troublesome. Moreover, if priors for the other hyperparameters are not carefully chosen, the complexity in numerical integration increases even more.
  • In the prediction stage, the prediction (which should at least include the expected value of system responses) also requires numerical integration. Markov chain Monte Carlo (MCMC) is often used for integration; however it is computationally expensive.

The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations.

Known issues

The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved:

  1. Dimensionality issue: The computational cost increases dramatically with the dimensionality of the problem, i.e. the number of input variables and/or the number of unknown parameters.
  2. Identifiability issue: Multiple combinations of unknown parameters and discrepancy function can yield the same experimental prediction. Hence different values of parameters cannot be distinguished/identified. This issue is circumvented in a Bayesian approach, where such combinations are averaged over.

Random events

When rolling one six-sided die, each outcome from one to six is equally probable, so an interval with 90% coverage probability must extend over the entire output range. When rolling 5 dice and observing the sum of the outcomes, an interval with 88.244% coverage probability spans only 46.15% of the range. The interval becomes narrower relative to the range as the number of dice increases. Real-life outcomes are influenced by numerous probabilistic events, and in most situations their combined effect can be predicted by a narrow interval of high coverage probability.
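
A small enumeration sketch of this effect; the specific interval is not stated above, so the central interval [12, 23] for the five-dice sum is an assumption here (it covers 12 of the 26 possible values, i.e. 46.15% of the range, with coverage probability close to the quoted value).

from itertools import product
from fractions import Fraction

def coverage(n_dice, lo, hi):
    # Exact probability that the sum of n_dice fair dice lies in [lo, hi]
    total = 6 ** n_dice
    hits = sum(1 for roll in product(range(1, 7), repeat=n_dice)
               if lo <= sum(roll) <= hi)
    return Fraction(hits, total)

# One die: only the full range 1..6 reaches high coverage
print("1 die, [1, 6]:", float(coverage(1, 1, 6)))

# Five dice: the sum ranges over 5..30, yet a much narrower central
# interval already captures most of the probability mass
print("5 dice, [12, 23]:", float(coverage(5, 12, 23)))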

Neuroconstructivism

From Wikipedia, the free encyclopedia

Neuroconstructivism is a theory that states that gene–gene interaction, gene–environment interaction and, crucially, ontogeny all play a vital role in how the brain progressively sculpts itself and how it gradually becomes specialized over developmental time.

Supporters of neuroconstructivism, such as Annette Karmiloff-Smith, argue against innate modularity of mind, the notion that a brain is composed of innate neural structures or modules which have distinct evolutionarily established functions. Instead, emphasis is put on innate domain relevant biases. These biases are understood as aiding learning and directing attention. Module-like structures are therefore the product of both experience and these innate biases. Neuroconstructivism can therefore be seen as a bridge between Jerry Fodor's psychological nativism and Jean Piaget's theory of cognitive development.

Development vs. innate modularity

Neuroconstructivism has arisen as a direct rebuttal against psychologists who argue for an innate modularity of the brain. Modularity of the brain would require a pre-specified pattern of synaptic connectivity within the cortical microcircuitry of a specific neural system. Instead, Annette Karmiloff-Smith has suggested that the microconnectivity of the brain emerges from the gradual process of ontogenetic development. Proponents of the modular theory might have been misled by the seemingly normal performances of individuals who exhibit a learning disability on tests. While it may appear that cognitive functioning may be impaired in only specified areas, this may be a functional flaw in the test. Many standardized tasks used to assess the extent of damage within the brain do not measure underlying causes, instead only showing the static end-state of complex processes. An alternative explanation to account for these normal test scores would be the ability of the individual to compensate using other brain regions that are not normally used for such a task. Such compensation could only have resulted from developmental neuroplasticity and the interaction between environment and brain functioning.

Different functions within the brain arise through development. Instead of having prespecified patterns of connectivity, neuroconstructivism suggests that there are "tiny regional differences in type, density, and orientation of neurons, in neurotransmitters, in firing thresholds, in rate of myelination, lamination, ratio of gray matter to white matter," etc. that lead to differing capabilities of neurons or brain regions to handle specific functions. For example, the ventral and dorsal streams only arise because of innate differences in processing speed of neurons, not an innate selection to be either ventral or dorsal by the respective neurons. Such a differentiation has been termed a domain-relevant approach to development.

This contrasts with the previous domain-general and domain-specific approaches. In the domain-general framework, differences in cognitive functioning are attributed to overarching differences in the neurons across the entire brain. The domain-specific approach, in contrast, argues for inherent, specific differences within the genes which directly control a person's development. While it cannot rule out domain-specificity, neuroconstructivism instead offers a developmental approach that focuses on change and emergent outcomes. Such change leads to domain-specificity in adult brains, but neuroconstructivism argues that the key component of the specificity occurred from the domain-general start state.

Every aspect of development is dynamic and interactive. Human intelligence may be more accurately defined by focusing on the plasticity of the brain and its interactions with the environment rather than inherent differences within the DNA structure. Dissociations seen in Williams syndrome or autism provide neuroscientists with a means of exploring different developmental trajectories.

Context dependence

Neuroconstructivism uses context to demonstrate the possible changes to the brain's neural connections. Starting with genes and incorporating progressively more context indicates some of the constraints involved in development. Instead of viewing the brain as independent of its current or previous environment, neuroconstructivism shows how context interacts with the brain to gradually form the specialized adult brain. In fact, by being built on preexisting representations, representations become increasingly context bound (rather than context free). This leads to "restrictions of fate" in which later learning is more restricted than earlier learning.

Genes

Previous theories have supposed that genes are static unchanging code for specific developmental outcomes. However, new research suggests that genes may be triggered by both environmental and behavioral influences. This probabilistic epigenesis view of development suggests that instead of following a predetermined path to expression, genes are modified by the behavior and environment of an organism. Furthermore, these modifications can then act on the environment, creating a causal circle in which genes influencing the environment are re-influenced by these changes in the environment.

Encellment

Cells do not develop in isolation. Even from a young age, neurons are influenced by the surrounding environment (e.g. other neurons). Over time, neurons interact either spontaneously or in response to some sensory stimulation to form neural networks. Competition between neurons plays a key role in establishing the exact pattern of connections. As a result, specific neural activation patterns may arise due to the underlying morphology and connection patterns within the specified neural structures. These may subsequently be modified by morphological change imposed by the current representations. Progressively more complex patterns may arise through manipulation of current neuronal structures by an organism's experience.

Enbrainment

While neurons are embedded within networks, these networks are further embedded within the brain as a whole. Neural networks do not work in isolation, such as in the modularity of mind perspective. Instead, different regions interact through feedback processes and top-down interactions, constraining and specifying the development of each region. For example, the primary visual cortex in blind individuals has been shown to process tactile information. The function of cortical areas emerges as a result of this sensory input and competition for cortical space. "This interactive specialization view implies that cortical regions might initially be non-specific in their responses but gradually narrow their responses as their functional specialization restricts them to a narrower set of circumstances."

Embodiment

The brain is further limited by its constraint within the body. The brain receives input from receptors on the body (e.g., somatosensory system, visual system, auditory system, etc.). These receptors provide the brain with a source of information. As a result, they manipulate the brain's neural activation patterns, and thus its structure, leading to constraining effects on the construction of representations in the mind. The sensory systems limit the possible information the brain can receive and therefore act as a filter. However, the brain may also interact with the environment through manipulation of the body (e.g., movement, changes in attention, etc.), thus manipulating the environment and the subsequent information received. Pro-activity while exploring the environment leads to altered experiences and consequently altered cognitive development.

Ensocialment

While a person may manipulate the environment, the specific environment in which the person develops has highly constraining effects on the possible neural representations exhibited through a restriction of the possible physical and social experiences. For example, if a child is raised without a mother, the child cannot change his/her responses or actions to generate a mother. S/he may only work within the specified constraints of the environment in which s/he is born.

The nature of representations

All of the above constraints interact to form cognitive representations in the brain. The main principle is context dependence, as shaping occurs through competition and cooperation. Competition leads to the specialization of developing components which then forms new representations. Cooperation, on the other hand, leads to combinations of existing mental representations that allow existing knowledge to be reused. Construction of representations also depends on the exploration of the environment by the individual. However, the experiences derived from this pro-activity constrain the range of possible adaptations within the mental representations. Such progressive specialization arises from the constraints of the past and current learning environment. To alter representations, the environment demands improvements through small additions to the current mental state. This leads to partial instead of fixed representations that are assumed to occur in adults. Neuroconstructivism argues such end products do not exist. The brain's plasticity leads to ever-changing mental representations through individual proactivity and environmental interactions. Such a viewpoint implies that any current mental representations are the optimal outcome for a specified environment. For example, in developmental disorders like autism, atypical development arises because of adaptations to multiple interacting constraints, the same as normal development. However, the constraints differ and thus result in a different end-product. This view directly contrasts previous theories which assumed that disorders arise from isolated failures of particular functional modules.

Environmental engineering science

From Wikipedia, the free encyclopedia
 
Students in Environmental Engineering Science typically combine scientific studies of the biosphere with mathematical, analytical and design tools found in the engineering fields

Environmental engineering science (EES) is a multidisciplinary field of engineering science that combines the biological, chemical and physical sciences with the field of engineering. This major traditionally requires the student to take basic engineering classes in fields such as thermodynamics, advanced math, computer modeling and simulation and technical classes in subjects such as statics, mechanics, hydrology, and fluid dynamics. As the student progresses, the upper division elective classes define a specific field of study for the student with a choice in a range of science, technology and engineering related classes.

Difference with related fields

Graduates of Environmental Engineering Science can go on to work on the technical aspects of designing a living roof like the one pictured here at the California Academy of Sciences

As a recently created program, environmental engineering science has not yet been incorporated into the terminology found among environmentally focused professionals. In the few engineering colleges that offer this major, the curriculum shares more classes in common with environmental engineering than it does with environmental science. Typically, EES students follow a course curriculum similar to that of environmental engineers until their fields diverge during the last year of college. The majority of environmental engineering students must take classes designed to connect their knowledge of the environment to modern building materials and construction methods. This is meant to direct the environmental engineer into a field where they will more than likely assist in building treatment facilities, preparing environmental impact assessments or helping to mitigate air pollution from specific point sources.

Meanwhile, the environmental engineering science student will choose a direction for their career. From the range of electives they have to choose from, these students can move into fields such as the design of nuclear storage facilities, bacterial bioreactors or environmental policy. These students combine the practical design background of an engineer with the detailed theory found in many of the biological and physical sciences.

Description at universities

Stanford University

The Civil and Environmental Engineering department at Stanford University provides the following description for their program in Environmental Engineering and Science: The Environmental Engineering and Science (EES) program focuses on the chemical and biological processes involved in water quality engineering, water and air pollution, remediation and hazardous substance control, human exposure to pollutants, environmental biotechnology, and environmental protection.

UC Berkeley

The College of Engineering at UC Berkeley defines Environmental Engineering Science, including the following:

This is a multidisciplinary field requiring an integration of physical, chemical and biological principles with engineering analysis for environmental protection and restoration. The program incorporates courses from many departments on campus to create a discipline that is rigorously based in science and engineering, while addressing a wide variety of environmental issues. Although an environmental engineering option exists within the civil engineering major, the engineering science curriculum provides a more broadly based foundation in the sciences than is possible in civil engineering

Massachusetts Institute of Technology

At MIT, the major is described in their curriculum, including the following:

The Bachelor of Science in Environmental Engineering Science emphasizes the fundamental physical, chemical, and biological processes necessary for understanding the interactions between man and the environment. Issues considered include the provision of clean and reliable water supplies, flood forecasting and protection, development of renewable and nonrenewable energy sources, causes and implications of climate change, and the impact of human activities on natural cycles

University of Florida

The College of Engineering at UF defines Environmental Engineering Science as follows:

The broad undergraduate environmental engineering curriculum of EES has earned the department a ranking as a leading undergraduate program. The ABET accredited engineering bachelor's degree is comprehensively based on physical, chemical, and biological principles to solve environmental problems affecting air, land, and water resources. An advising scheme including select faculty, led by the undergraduate coordinator, guides each student through the program.

The program educational objectives of the EES program at the University of Florida are to produce engineering practitioners and graduate students who 3-5 years after graduation:

  • Continue to learn, develop and apply their knowledge and skills to identify, prevent, and solve environmental problems.
  • Have careers that benefit society as a result of their educational experiences in science, engineering analysis and design, as well as in their social and cultural studies.
  • Communicate and work effectively in all work settings including those that are multidisciplinary.

Wet labs are required as part of the lower division curriculum

Lower division coursework

Lower division coursework in this field requires the student to take several laboratory-based classes in calculus-based physics, chemistry, biology, programming and analysis. This is intended to give the student background information in order to introduce them to the engineering fields and to prepare them for more technical information in their upper division coursework.

Upper division coursework

Students learn to integrate their math and statistics with software to perform analysis of physical systems like the Finite Element Analysis shown above

The upper division classes in Environmental Engineering Science prepare the student for work in the fields of engineering and science with coursework in subjects including the following:

Electives

Process engineering

On this track, students are introduced to the fundamental reaction mechanisms in the field of chemical and biochemical engineering.

Resource engineering

For this track, students take classes introducing them to ways to conserve natural resources. This can include classes in water chemistry, sanitation, combustion, air pollution and radioactive waste management.

Geoengineering

This examines geoengineering in detail.

Ecology

This prepares the students to use their engineering and scientific knowledge to address problems involving the interactions between plants, animals and the biosphere.

Biology

This includes further education about microbial, molecular and cell biology. Classes can include cell biology, virology, and microbial and plant biology.

Policy

This covers in more detail ways the environment can be protected through political means. This is done by introducing students to qualitative and quantitative tools in classes such as economics, sociology, political science and energy and resources.

Post graduation work

The multidisciplinary approach in Environmental Engineering Science gives the student expertise in technical fields related to their own personal interest. While some graduates choose to use this major to go to graduate school, students who choose to work often go into the fields of civil and environmental engineering, biotechnology, and research. In addition, the background in math, programming and writing gives students opportunities to pursue IT work and technical writing.

Environmental engineering

From Wikipedia, the free encyclopedia
 

Environmental engineering is a professional engineering discipline that encompasses broad scientific topics like chemistry, biology, ecology, geology, hydraulics, hydrology, microbiology, and mathematics to create solutions that will protect and also improve the health of living organisms and improve the quality of the environment. Environmental engineering is a sub-discipline of civil engineering and chemical engineering.

Environmental engineering is the application of scientific and engineering principles to improve and maintain the environment to:

  • protect human health,
  • protect nature's beneficial ecosystems,
  • and improve the environment-related quality of human life.

Environmental engineers devise solutions for wastewater management, water and air pollution control, recycling, waste disposal, and public health. They design municipal water supply and industrial wastewater treatment systems, and design plans to prevent waterborne diseases and improve sanitation in urban, rural and recreational areas. They evaluate hazardous-waste management systems to evaluate the severity of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. They implement environmental engineering law, as in assessing the environmental impact of proposed construction projects.

Environmental engineers study the effect of technological advances on the environment, addressing local and worldwide environmental issues such as acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources.

Most jurisdictions impose licensing and registration requirements for qualified environmental engineers.

Etymology

The word environmental has its root in the late 14th-century French word environ (verb), meaning to encircle or to encompass. The word environment was used by Carlyle in 1827 to refer to the aggregate of conditions in which a person or thing lives. The meaning shifted again in 1956 when it was used in the ecological sense, where Ecology is the branch of science dealing with the relationship of living things to their environment. 

The second part of the phrase environmental engineer originates from Latin roots and was used in 14th-century French as engignour, meaning a constructor of military engines such as trebuchets, harquebuses, longbows, cannons, catapults, ballistas, stirrups, armour, as well as other deadly or bellicose contraptions. The word engineer was not used to refer to public works until the 16th century, and it likely entered the popular vernacular as meaning a contriver of public works during John Smeaton's time.

History

Ancient civilizations

Environmental engineering is a name for work that has been done since early civilizations, as people learned to modify and control the environmental conditions to meet needs. As people recognized that their health was related to the quality of their environment, they built systems to improve it. The ancient Indus Valley Civilization (3300 B.C.E. to 1300 B.C.E.) had advanced control over their water resources. The public work structures found at various sites in the area include wells, public baths, water storage tanks, a drinking water system, and a city-wide sewage collection system. They also had an early canal irrigation system enabling large-scale agriculture.

From 4000 to 2000 B.C.E., many civilizations had drainage systems and some had sanitation facilities, including the Mesopotamian Empire, Mohenjo-Daro, Egypt, Crete, and the Orkney Islands in Scotland. The Greeks also had aqueducts and sewer systems that used rain and wastewater to irrigate and fertilize fields.

The first aqueduct in Rome was constructed in 312 B.C.E., and from there, they continued to construct aqueducts for irrigation and safe urban water supply during droughts. They also built an underground sewer system as early as the 7th century B.C.E. that fed into the Tiber River, draining marshes to create farmland as well as removing sewage from the city.

Modern era

Very little change was seen from the fall of Rome until the 19th century, when increasing efforts began to focus on public health. Modern environmental engineering began in London in the mid-19th century when Joseph Bazalgette designed the first major sewerage system following the Great Stink. The city's sewer system conveyed raw sewage to the River Thames, which also supplied the majority of the city's drinking water, leading to an outbreak of cholera. The introduction of drinking water treatment and sewage treatment in industrialized countries reduced waterborne diseases from leading causes of death to rarities.

The field emerged as a separate academic discipline during the middle of the 20th century in response to widespread public concern about water and air pollution and other environmental degradation. As society and technology grew more complex, they increasingly produced unintended effects on the natural environment. One example is the widespread application of the pesticide DDT to control agricultural pests in the years following World War II. The story of DDT as vividly told in Rachel Carson's Silent Spring (1962) is considered to be the birth of the modern environmental movement, which led to the modern field of "environmental engineering."

Education

Many universities offer environmental engineering programs through either the department of civil engineering or the department of chemical engineering, and some also include electronics projects to develop and balance environmental conditions. Environmental engineers in a civil engineering program often focus on hydrology, water resources management, bioremediation, and water and wastewater treatment plant design. Environmental engineers in a chemical engineering program tend to focus on environmental chemistry, advanced air and water treatment technologies, and separation processes. Some subdivisions of environmental engineering include natural resources engineering and agricultural engineering.

Courses for students fall into a few broad classes:

  • Mechanical engineering courses oriented towards designing machines and mechanical systems for environmental use such as water and wastewater treatment facilities, pumping stations, garbage segregation plants, and other mechanical facilities.
  • Environmental engineering or environmental systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment.
  • Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects of chemicals in the environment, including any mining processes, pollutants, and also biochemical processes.
  • Environmental technology courses oriented towards producing electronic or electrical graduates capable of developing devices and artifacts able to monitor, measure, model and control environmental impact, including monitoring and managing energy generation from renewable sources.

Curriculum

The following topics make up a typical curriculum in environmental engineering:

  1. Mass and Energy transfer
  2. Environmental chemistry
    1. Inorganic chemistry
    2. Organic Chemistry
    3. Nuclear Chemistry
  3. Growth models
    1. Resource consumption
    2. Population growth
    3. Economic growth
  4. Risk assessment
    1. Hazard identification
    2. Dose-response Assessment
    3. Exposure assessment
    4. Risk characterization
    5. Comparative risk analysis
  5. Water pollution
    1. Water resources and pollutants
    2. Oxygen demand
    3. Pollutant transport
    4. Water and waste water treatment
  6. Air pollution
    1. Industry, transportation, commercial and residential emissions
    2. Criteria and toxic air pollutants
    3. Pollution modelling (e.g. Atmospheric dispersion modeling)
    4. Pollution control
    5. Air pollution and meteorology
  7. Global change
    1. Greenhouse effect and global temperature
    2. Carbon, nitrogen, and oxygen cycle
    3. IPCC emissions scenarios
    4. Oceanic changes (ocean acidification, other effects of global warming on oceans) and changes in the stratosphere (see Physical impacts of climate change)
  8. Solid waste management and resource recovery
    1. Life cycle assessment
    2. Source reduction
    3. Collection and transfer operations
    4. Recycling
    5. Waste-to-energy conversion
    6. Landfill

Mass Balance

Consider a man-made chemical whose fate we wish to determine with respect to time, position, phase of matter, or flow of a liquid. We represent the measured change in concentration as a function of all the rates of change that affect that parcel of chemical matter:

dC/dt = Σ (rates in) − Σ (rates out)

That is, for some control volume, the change in concentration with respect to time (the independent variable) is equal to the sum of whatever changes are occurring into (+) and out of (−) that control volume. This is allowed for a few different reasons:

(1) Conservation of mass.

(2) Representation as an ordinary differential equation.

(3) A solution exists.

Although differential equations can be intimidating, this formula for the change in the concentration of a control volume per unit time is very versatile even without calculus. Take for instance the common scenario of a tank containing a volume of liquid with a contaminant at a certain concentration. Given that a first-order reaction −kC is taking place and that the tank is at steady state, the effluent concentration becomes an expression of the incoming (initial) concentration, the reaction constant k, and the hydraulic retention time (HRT), which is equal to the quotient of the volume of the tank by the flow: C_out = C_in / (1 + k · HRT).
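
A short sketch of this steady-state result for a completely mixed tank with a first-order decay reaction; the numerical values are illustrative assumptions.

def cstr_effluent(c_in, k, volume, flow):
    """Steady-state effluent concentration of a completely mixed tank
    with a first-order decay reaction (-k*C).

    Mass balance at steady state: 0 = Q*c_in - Q*c - k*c*V,
    which rearranges to c = c_in / (1 + k * HRT) with HRT = V / Q.
    """
    hrt = volume / flow  # hydraulic retention time
    return c_in / (1.0 + k * hrt)

# Assumed example: 100 mg/L inflow, k = 0.2 per hour,
# 50 m^3 tank, 10 m^3/h flow -> HRT = 5 h, effluent = 50 mg/L
print(cstr_effluent(c_in=100.0, k=0.2, volume=50.0, flow=10.0))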

Applications

Water supply and treatment

Environmental engineers evaluate the water balance within a watershed and determine the available water supply, the water needed for various uses in that watershed, and the seasonal cycles of water movement through the watershed, and they develop systems to store, treat, and convey water for various uses.

Water is treated to achieve water quality objectives for the end uses. In the case of a potable water supply, water is treated to minimize the risk of infectious disease transmission, the risk of non-infectious illness, and to create a palatable water flavor. Water distribution systems are designed and built to provide adequate water pressure and flow rates to meet various end-user needs such as domestic use, fire suppression, and irrigation.

Wastewater treatment

There are numerous wastewater treatment technologies. A wastewater treatment train can consist of a primary clarifier system to remove solid and floating materials, a secondary treatment system consisting of an aeration basin followed by flocculation and sedimentation or an activated sludge system and a secondary clarifier, a tertiary biological nitrogen removal system, and a final disinfection process. The aeration basin/activated sludge system removes organic material by growing bacteria (activated sludge). The secondary clarifier removes the activated sludge from the water. The tertiary system, although not always included due to costs, is becoming more prevalent to remove nitrogen and phosphorus and to disinfect the water before discharge to a surface water stream or ocean outfall.

Air pollution management

Scientists have developed air pollution dispersion models to evaluate the concentration of a pollutant at a receptor or the impact on overall air quality from vehicle exhausts and industrial flue gas stack emissions. To some extent, this field overlaps with efforts to decrease carbon dioxide and other greenhouse gas emissions from combustion processes.

Environmental impact assessment and mitigation

Water pollution

Environmental engineers apply scientific and engineering principles to evaluate if there are likely to be any adverse impacts to water quality, air quality, habitat quality, flora and fauna, agricultural capacity, traffic, ecology, and noise. If impacts are expected, they then develop mitigation measures to limit or prevent such impacts. An example of a mitigation measure would be the creation of wetlands in a nearby location to mitigate the filling in of wetlands necessary for a road development if it is not possible to reroute the road.

In the United States, the practice of environmental assessment was formally initiated on January 1, 1970, the effective date of the National Environmental Policy Act (NEPA). Since that time, more than 100 developing and developed nations either have planned specific analogous laws or have adopted procedures used elsewhere. NEPA is applicable to all federal agencies in the United States.

Regulatory agencies

Environmental Protection Agency

The U.S. Environmental Protection Agency (EPA) is one of the many agencies that work with environmental engineers to solve key issues. An important component of EPA's mission is to protect and improve air, water, and overall environmental quality in order to avoid or mitigate the consequences of harmful effects.

Two-state solution

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Two-state_solution A peace movement po...