
Friday, September 29, 2023

Model predictive control

From Wikipedia, the free encyclopedia

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as digital control, although there is research into achieving faster response times with specially designed analog circuitry.

Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.

Overview

The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.

MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.

MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.

While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
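
To make the superposition point concrete, here is a minimal sketch (the model, horizon, and input moves below are assumed for illustration, not taken from any particular plant): for a linear state-space model, the predicted response to two input moves applied together is exactly the sum of the responses predicted for each move alone.

    # Minimal sketch: superposition in a linear model x[k+1] = A x[k] + B u[k].
    # The response to two input moves applied together equals the sum of the
    # responses to each move applied alone.
    import numpy as np

    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])              # assumed process dynamics
    B = np.array([[0.0, 0.5],
                  [1.0, 0.0]])              # two manipulated (independent) variables

    def simulate(u_sequence, x0=np.zeros(2)):
        """Propagate the linear model over the horizon for a given input sequence."""
        x, trajectory = x0.copy(), []
        for u in u_sequence:
            x = A @ x + B @ u
            trajectory.append(x.copy())
        return np.array(trajectory)

    horizon = 10
    u1 = [np.array([1.0, 0.0])] * horizon       # move the first input only
    u2 = [np.array([0.0, -0.5])] * horizon      # move the second input only
    u_both = [a + b for a, b in zip(u1, u2)]    # both moves together

    print(np.allclose(simulate(u_both), simulate(u1) + simulate(u2)))   # True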

When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.

An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide a significant reduction in online computations while maintaining performance comparable to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on an exchange of information among controllers.

Theory behind MPC

A discrete MPC scheme.

MPC is based on iterative, finite-horizon optimization of a plant model. At time $t$ the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: $[t, t+T]$. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time $t+T$. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.
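
The receding-horizon loop can be sketched in a few lines of Python. This is a hedged toy example with an assumed unconstrained scalar plant, horizon, and weights, purely to illustrate the solve / apply-first-move / re-sample cycle:

    # Hedged sketch of the receding-horizon cycle for an assumed scalar plant:
    # at each sampling instant, re-minimize the finite-horizon cost from the newly
    # measured state and implement only the first move of the optimal sequence.
    import numpy as np
    from scipy.optimize import minimize

    a, b = 0.95, 0.5          # assumed plant: x[k+1] = a*x[k] + b*u[k]
    N, r = 10, 1.0            # prediction horizon and setpoint (assumed)

    def horizon_cost(u_seq, x0):
        x, J = x0, 0.0
        for u in u_seq:
            x = a * x + b * u
            J += (r - x) ** 2 + 0.1 * u ** 2    # tracking error plus input penalty
        return J

    x = 0.0                                      # current plant state
    for k in range(20):                          # closed-loop simulation
        res = minimize(horizon_cost, np.zeros(N), args=(x,))
        u_now = res.x[0]                         # only the first step is implemented
        x = a * x + b * u_now                    # plant moves on; then re-optimize
        print(f"k={k:2d}  u={u_now:+.3f}  x={x:.3f}")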

Principles of MPC

Model predictive control is a multivariable control algorithm that uses:

  • an internal dynamic model of the process
  • a cost function J over the receding horizon
  • an optimization algorithm minimizing the cost function J using the control input u

An example of a quadratic cost function for optimization is given by:

$J = \sum_{i=1}^{N} w_{x_i} (r_i - x_i)^2 + \sum_{i=1}^{N} w_{u_i} \Delta u_i^2$

without violating constraints (low/high limits), with

$x_i$: $i$-th controlled variable (e.g. measured temperature)
$r_i$: $i$-th reference variable (e.g. required temperature)
$u_i$: $i$-th manipulated variable (e.g. control valve)
$w_{x_i}$: weighting coefficient reflecting the relative importance of $x_i$
$w_{u_i}$: weighting coefficient penalizing relatively big changes in $u_i$

etc.
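
As a quick numeric illustration of this cost (all values below are invented, and the sums run over two controlled and two manipulated variables only):

    # Quick numeric illustration of the quadratic cost above (illustrative values).
    import numpy as np

    x   = np.array([20.0, 1.2])    # controlled variables (e.g. temperature, level)
    r   = np.array([22.0, 1.0])    # reference values
    du  = np.array([0.3, -0.1])    # changes in the manipulated variables
    w_x = np.array([1.0, 5.0])     # relative importance of each controlled variable
    w_u = np.array([0.5, 0.5])     # penalty on large input moves

    J = np.sum(w_x * (r - x) ** 2) + np.sum(w_u * du ** 2)
    print(J)                       # 4.25 for these numbers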

Nonlinear MPC

Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.

The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of its variants: direct single shooting, direct multiple shooting, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently with a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is exploited even further by path-following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead take only a few iterations towards the solution of the most current NMPC problem before proceeding to the next one, which is suitably initialized. Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method: optimum solutions are found by generating random samples that satisfy the constraints in the solution space and selecting the optimum among them based on the cost function.
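
A minimal single-shooting sketch of these ideas, with an assumed toy nonlinear model and tracking cost, and a generic quasi-Newton solver standing in for a tailored Newton-type scheme; note how each new problem is warm-started with the shifted previous solution:

    # Hedged single-shooting sketch with an assumed toy nonlinear model; each new
    # problem is warm-started with the shifted previous solution, as described above.
    import numpy as np
    from scipy.optimize import minimize

    def step(x, u):
        """Assumed nonlinear discrete-time model (illustrative only)."""
        return x + 0.1 * (-x ** 3 + u)

    N = 15                                        # prediction horizon

    def cost(u_seq, x0):
        x, J = x0, 0.0
        for u in u_seq:
            x = step(x, u)
            J += (x - 1.0) ** 2 + 0.01 * u ** 2   # track x = 1 with a small input penalty
        return J

    x, u_guess = 0.0, np.zeros(N)
    for k in range(25):
        res = minimize(cost, u_guess, args=(x,))          # quasi-Newton solve
        x = step(x, res.x[0])                             # apply the first move only
        u_guess = np.append(res.x[1:], res.x[-1])         # shifted warm start
        print(f"k={k:2d}  x={x:.3f}")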

While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time.

Explicit MPC

Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem formulated as an optimization problem is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA), hence the eMPC controller stores the coefficients of the PWA for each subset (control region) of the state space over which the PWA is constant, as well as the coefficients of some parametric representation of all the regions. For linear MPC every region turns out to be, geometrically, a convex polytope, commonly parameterized by the coefficients of its faces, which requires quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second a mere evaluation of the PWA using the coefficients stored for that region. If the total number of regions is small, the implementation of eMPC does not require significant computational resources (compared to online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is the exponential growth of the total number of control regions with respect to some key parameters of the controlled system, e.g., the number of states, which dramatically increases controller memory requirements and makes the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.
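
A hedged sketch of the online part of such a controller follows; the regions and gains are invented placeholders rather than the output of a real multi-parametric solver, but the two online steps (point location, then affine evaluation) are the ones described above.

    # Illustrative-only regions and gains. Each region i is {x : H_i x <= k_i} with
    # control law u = F_i x + g_i; online work is point location plus one affine evaluation.
    import numpy as np

    regions = [
        {"H": np.array([[ 1.0]]), "k": np.array([0.0]),   # region 1: x <= 0
         "F": np.array([[-0.4]]), "g": np.array([0.2])},
        {"H": np.array([[-1.0]]), "k": np.array([0.0]),   # region 2: x >= 0
         "F": np.array([[-0.9]]), "g": np.array([0.0])},
    ]

    def empc_control(x):
        for reg in regions:                        # step 1: find the region containing x
            if np.all(reg["H"] @ x <= reg["k"] + 1e-9):
                return reg["F"] @ x + reg["g"]     # step 2: evaluate its stored affine law
        raise ValueError("state outside the pre-computed feasible region")

    print(empc_control(np.array([0.5])))           # [-0.45]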

Robust MPC

Robust variants of model predictive control are able to account for set-bounded disturbances while still ensuring that state constraints are met. Some of the main approaches to robust MPC are given below.

  • Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance. This is the optimal solution to linear robust control problems; however, it carries a high computational cost. The basic idea behind the min/max MPC approach is to modify the on-line "min" optimization to a "min-max" problem, minimizing the worst case of the objective function, maximized over all possible plants from the uncertainty set (a minimal scenario-based sketch follows this list).
  • Constraint Tightening MPC. Here the state constraints are tightened by a given margin so that a trajectory can be guaranteed to be found under any evolution of the disturbance.
  • Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state. The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by disturbance with the feedback controller.
  • Multi-stage MPC. This uses a scenario-tree formulation, approximating the uncertainty space with a set of samples. The approach is non-conservative because it takes into account that measurement information is available at every time stage in the prediction, so the decisions at every stage can be different and can act as recourse to counteract the effects of uncertainties. The drawback of the approach, however, is that the size of the problem grows exponentially with the number of uncertainties and the prediction horizon.
  • Tube-enhanced multi-stage MPC. This approach synergizes multi-stage MPC and tube-based MPC. It provides high degrees of freedom to choose the desired trade-off between optimality and simplicity by the classification of uncertainties and the choice of control laws in the predictions.
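
As referenced in the min-max item above, here is a minimal scenario-based sketch of the worst-case optimization. The plant, weights, horizon, and sampled disturbance set are all assumed for illustration; a full min-max formulation reasons over the entire uncertainty set rather than a finite sample.

    # Scenario-based min-max sketch: minimize the worst-case horizon cost over a
    # finite sample of bounded disturbance sequences (all parameters assumed).
    import numpy as np
    from scipy.optimize import minimize

    a, b, N = 1.0, 1.0, 8
    rng = np.random.default_rng(0)
    scenarios = rng.uniform(-0.1, 0.1, size=(20, N))    # sampled bounded disturbances

    def horizon_cost(u_seq, x0, w_seq):
        x, J = x0, 0.0
        for u, w in zip(u_seq, w_seq):
            x = a * x + b * u + w
            J += x ** 2 + 0.1 * u ** 2
        return J

    def worst_case_cost(u_seq, x0):
        return max(horizon_cost(u_seq, x0, w) for w in scenarios)

    res = minimize(worst_case_cost, np.zeros(N), args=(1.0,), method="Nelder-Mead")
    print(res.x[0])      # first move of the min-max input sequence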

Commercially available MPC software

Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.

A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.

MPC vs. LQR

Model predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes for setting up optimization costs.

While a model predictive controller often looks at fixed-length, often progressively weighted, sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.

Due to these fundamental differences, LQR has better global stability properties, but MPC often has more locally optimal and complex performance.

The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR.

This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if the convexity and complexity of the problem space have been neglected.

EarthScope

From Wikipedia, the free encyclopedia
EarthScope logo

The EarthScope project was a National Science Foundation (NSF) funded earth science program that, from 2003 to 2018, used geological and geophysical techniques to explore the structure and evolution of the North American continent and to understand the processes controlling earthquakes and volcanoes. The project had three components: USArray, the Plate Boundary Observatory, and the San Andreas Fault Observatory at Depth. Organizations associated with the project included UNAVCO, the Incorporated Research Institutions for Seismology (IRIS), Stanford University, the United States Geological Survey (USGS) and National Aeronautics and Space Administration (NASA). Several international organizations also contributed to the initiative. EarthScope data are publicly accessible.

Observatories

There were three EarthScope observatories: the San Andreas Fault Observatory at Depth (SAFOD), the Plate Boundary Observatory (PBO), and the Seismic and Magnetotelluric Observatory (USArray). These observatories consist of boreholes into an active fault zone, global positioning system (GPS) receivers, tiltmeters, long-baseline laser strainmeters, borehole strainmeters, permanent and portable seismographs, and magnetotelluric stations. The various EarthScope components will provide integrated and highly accessible data on geochronology and thermochronology, petrology and geochemistry, structure and tectonics, surficial processes and geomorphology, geodynamic modeling, rock physics, and hydrogeology.

Seismic and Magnetotelluric Observatory (USArray)

USArray, managed by IRIS, is a 15-year program to place a dense network of permanent and portable seismographs across the continental United States. These seismographs record the seismic waves released by earthquakes that occur around the world. Seismic waves are indicators of how energy is released and distributed within the Earth. By analyzing the records of earthquakes obtained from this dense grid of seismometers, scientists can learn about Earth structure and dynamics and the physical processes controlling earthquakes and volcanoes. The goal of USArray is primarily to gain a better understanding of the structure and evolution of the continental crust, lithosphere, and mantle underneath North America.

The USArray is composed of four facilities: a Transportable Array, a Flexible Array, a Reference Network, and a Magnetotelluric Facility.

The Transportable Array is composed of 400 seismometers that are being deployed in a rolling grid across the United States over a period of 10 years. The stations are placed 70 km apart, and can map the upper 70 km of the Earth. After approximately two years, stations are moved east to the next site on the grid – unless adopted by an organization and made a permanent installation. Once the sweep across the United States is completed, over 2000 locations will have been occupied. The Array Network Facility is responsible for data collection from the Transportable Array stations.

The Flexible Array is composed of 291 broadband stations, 120 short period stations, and 1700 active source stations. The Flexible Array allows sites to be targeted in a more focused manner than the broad Transportable Array. Natural or artificially created seismic waves can be used to map structures in the Earth.

The Reference Network is composed of permanent seismic stations spaced about 300 km apart. The Reference Network provides a baseline for the Transportable Array and Flexible Array. EarthScope added and upgraded 39 stations to the already existing Advanced National Seismic System, which is part of the Reference Network.

The Magnetotelluric Facility is composed of seven permanent and 20 portable sensors that record electromagnetic fields. It is the electromagnetic equivalent of the seismic arrays. The portable sensors are moved in a rolling grid similar to the Transportable Array grid, but are only in place about a month before they are moved to the next location. A magnetotelluric station consists of a magnetometer, four electrodes, and a data recording unit that are buried in shallow holes. The electrodes are oriented north-south and east-west and are saturated in a salt solution to improve conductivity with the ground.

An EarthScope GPS Geosensor, a component of the Plate Boundary Observatory (PBO)

Plate Boundary Observatory (PBO)

The Plate Boundary Observatory (PBO) consists of a series of geodetic instruments, Global Positioning System (GPS) receivers and borehole strainmeters, that have been installed to help understand the boundary between the North American Plate and Pacific Plate. The PBO network includes several major observatory components: a network of 1100 permanent, continuously operating Global Positioning System (GPS) stations, many of which provide data at high rate and in real time, 78 borehole seismometers, 74 borehole strainmeters, 26 shallow borehole tiltmeters, and six long baseline laser strainmeters. These instruments are complemented by InSAR (interferometric synthetic aperture radar) and LiDAR (light detection and ranging) imagery and geochronology acquired as part of the GeoEarthScope initiative. PBO also includes comprehensive data products, data management and education and outreach efforts. These permanent networks are supplemented by a pool of portable GPS receivers that can be deployed by researchers as temporary networks to measure crustal motion at a specific target or in response to a geologic event. The Plate Boundary Observatory portion of EarthScope is operated by UNAVCO, Inc. UNAVCO is a non-profit, university-governed consortium that facilitates research and education using geodesy.

Schematic representation of the SAFOD main borehole and pilot hole

San Andreas Fault Observatory at Depth (SAFOD)

The San Andreas Fault Observatory at Depth (SAFOD) consists of a main borehole that cuts across the active San Andreas Fault at a depth of approximately 3 km and a pilot hole about 2 km southwest of San Andreas Fault. Data from the instruments installed in the holes, which consist of geophone sensors, data acquisition systems, and GPS clocks, as well as samples collected during drilling, will help to better understand the processes that control the behavior of the San Andreas Fault.

Data Products

Data collected from the various observatories are used to create different types of data products. Each data product addresses a different scientific problem.

P-Wave Tomography

Tomography is a method of producing a three-dimensional image of the internal structures of a solid object (such as the human body or the earth) by the observation and recording of differences in the effects on the passage of energy waves impinging on those structures. Here the energy waves are P-waves generated by earthquakes, and it is their velocities through the Earth that are recorded. The high quality data being collected by the permanent seismic stations of USArray and the Advanced National Seismic System (ANSS) will allow the creation of high resolution seismic imaging of the Earth's interior below the United States. Seismic tomography helps constrain mantle velocity structure and aids in the understanding of the chemical and geodynamic processes that are at work. With the use of the data collected by USArray and global travel-time data, a global tomography model of P-wave velocity heterogeneity in the mantle can be created. The range and resolution of this technique will allow investigation into the suite of problems that are of concern in the North American mantle lithosphere, including the nature of the major tectonic features. This method gives evidence for differences in thickness and in the velocity anomaly of the mantle lithosphere between the stable center of the continent and the more active western North America. These data are vital for the understanding of local lithosphere evolution and, when combined with additional global data, will allow the mantle to be imaged beyond the current extent of USArray.

Receiver Reference Models

The EarthScope Automated Receiver Survey (EARS) has created a prototype of a system that will be used to address several key elements of the production of EarthScope products. One of the prototype systems is the receiver reference model. It will provide crustal thickness and average crustal Vp/Vs ratios beneath USArray transportable array stations.

P-waves and S-waves from a seismograph

Ambient Seismic Noise

The main function of the Advanced National Seismic System (ANSS) and USArray is to provide high quality data for earthquake monitoring, source studies and Earth structure research. The utility of seismic data is greatly increased when noise levels (unwanted vibrations) are reduced; however, broadband seismograms will always contain a certain level of noise. The dominant sources of noise are either from the instrumentation itself or from ambient Earth vibrations. Normally, seismometer self noise will be well below the seismic noise level, and every station will have a characteristic noise pattern that can be calculated or observed. Sources of seismic noise within the Earth include the actions of human beings at or near the surface of the Earth, objects moved by wind with the movement being transferred to the ground, running water (river flow), surf, volcanic activity, and long period tilt due to thermal instabilities from poor station design.

A new approach to seismic noise studies will be introduced with the EarthScope project, in that there are no attempts to screen the continuous waveforms to eliminate body and surface waves from the naturally occurring earthquakes. Earthquake signals are not generally included in the processing of noise data, because they are generally low probability occurrences, even at low power levels. The two objectives behind the collection of the seismic noise data are to provide and document a standard method to calculate ambient seismic background noise, and to characterize the variation of ambient background seismic noise levels across the United States as a function of geography, season, and time of day. The new statistical approach will provide the ability to compute probability density functions (PDFs) to evaluate the full range of noise at a given seismic station, allowing the estimation of noise levels over a broad range of frequencies from 0.01 to 16 Hz (periods of 100 s to 0.0625 s). With the use of this new method it will be much easier to compare seismic noise characteristics between different networks in different regions.
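
A rough sketch of that statistical procedure is shown below, using synthetic data in place of real station waveforms; the sample rate, window length, segment length, and 2 dB power bins are all assumed choices for illustration, not EarthScope's actual processing parameters.

    # Sketch: estimate the power spectral density of many hour-long windows, then
    # histogram the dB levels at each frequency to form an empirical PDF per bin.
    import numpy as np
    from scipy.signal import welch

    fs = 40.0                                    # assumed sample rate (Hz)
    rng = np.random.default_rng(1)
    window_len = int(fs * 3600)                  # one-hour windows

    psds = []
    for _ in range(24):                          # one day of hourly windows
        trace = rng.normal(size=window_len)      # placeholder for recorded ground motion
        f, pxx = welch(trace, fs=fs, nperseg=2 ** 14)
        psds.append(10.0 * np.log10(pxx[1:]))    # power in dB, dropping the zero-frequency bin
    psds = np.array(psds)

    # Empirical probability density of noise power at each frequency (2 dB bins)
    bins = np.arange(-200.0, 0.0, 2.0)
    pdf = np.array([np.histogram(psds[:, i], bins=bins, density=True)[0]
                    for i in range(psds.shape[1])])
    print(pdf.shape)                             # (number of frequencies, number of power bins)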

Earthquake Ground Motion Animations

Seismometers of USArray transportable array record the passage of numerous seismic waves through a given point near the Earth's surface, and classically these seismograms are analyzed to deduce properties of the Earth's structure and the seismic source. Given a spatially dense set of seismic recordings, these signals can also be used to visualize the actual continuous seismic waves, providing new insights and interpretation techniques into complex wave propagation effects. Using signals recorded by the array of seismometers, the EarthScope project will be able to animate seismic waves as they sweep across the USArray transportable array for selected larger earthquakes. This will be able to illustrate the regional and teleseismic wave propagation phenomena. The seismic data collected from both permanent and transportable seismic stations will be used to provide these computer generated animations.

Regional Moment Tensors

The seismic moment tensor is one of the fundamental parameters of earthquakes that can be determined from seismic observations. It is directly related to earthquake fault orientation and rupture direction. The moment magnitude, Mw, derived from the moment tensor magnitude, is the most reliable quantity for comparing and measuring the size of an earthquake against other earthquake magnitudes. Moment tensors are used in a wide range of seismological research fields, such as earthquake statistics, earthquake scaling relationships, and stress inversion. Regional moment tensor solutions for moderate-to-large earthquakes in the U.S. will be created, with the appropriate software, from USArray transportable array and Advanced National Seismic System broadband seismic stations. Results are obtained in the time and the frequency domain. Waveform fit and amplitude-phase match figures are provided to allow users to evaluate moment tensor quality.
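
For reference, the moment magnitude mentioned above follows from the scalar seismic moment via a standard relation; a tiny illustration (the example moment value is arbitrary):

    # Mw is derived from the scalar seismic moment M0 (in newton-metres) via the
    # standard relation Mw = (2/3) * (log10(M0) - 9.1).
    import math

    def moment_magnitude(m0_newton_metres):
        return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

    print(round(moment_magnitude(1.0e20), 1))    # 7.3 for an assumed large event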

Geodetic Monitoring of the Western US and Hawaii

Global Positioning System (GPS) equipment and techniques provide a unique opportunity for earth scientists to study regional and local tectonic plate motions and conduct natural hazards monitoring. Cleaned network solutions from several GPS arrays have been merged into regional clusters in conjunction with the EarthScope project. The arrays include the Pacific Northwest Geodetic Array, EarthScope's Plate Boundary Observatory, the Western Canadian Deformation Array, and networks run by the US Geological Survey. The daily GPS measurements from ~1500 stations along the Pacific/North American plate boundary provide millimeter-scale accuracy and can be used to monitor the displacements of the Earth's crust. With the use of data modeling software and the recorded GPS data, it will be possible to quantify crustal deformation caused by plate tectonics, earthquakes, landslides and volcanic eruptions.

Time-dependent Strain

The goal is to provide models of time-dependent strain associated with a number of recent earthquakes and other geologic events as constrained by GPS data. With the use of InSAR (Interferometric Synthetic Aperture Radar), a remote-sensing technique, and PBO (Plate Boundary Observatory), a fixed array of GPS receivers and strainmeters, the EarthScope project will provide spatially continuous strain measurements over wide geographic areas with decimeter to centimeter resolution.

Global Strain Rate Map

The Global Strain Rate Map (GSRM) is a project of the International Lithosphere Program whose mission is to determine a globally self-consistent strain rate and velocity field model, consistent with geodetic and geologic field observations collected by GPS, seismometers, and strainmeters. GSRM is a digital model of the global velocity gradient tensor field associated with the accommodation of present-day crustal motions. The overall mission also includes: (1) contributions of global, regional, and local models by individual researchers; (2) archiving of existing data sets of geologic, geodetic, and seismic information that can contribute toward a greater understanding of strain phenomena; and (3) archiving of existing methods for modeling strain rates and strain transients. A completed global strain rate map will provide a large amount of information which will contribute to the understanding of continental dynamics and to the quantification of seismic hazards.

Science

There are seven topics that EarthScope will address with the use of the observatories.

Convergent Margin Processes

Oceanic-Continental convergent margin

Convergent margins, also known as convergent boundaries, are active regions of deformation between two or more tectonic plates colliding with one another. Convergent margins create areas of tectonic uplift, such as mountain ranges or volcanoes. EarthScope is focusing on the boundary between the Pacific Plate and the North American Plate in the western United States. EarthScope will provide GPS geodetic data, seismic images, detailed seismicity, magnetotelluric data, InSAR, stress field maps, digital elevation models, baseline geology, and paleoseismology for a better understanding of convergent margin processes.

A few questions hoping to be answered by EarthScope include:

  • What controls the lithospheric architecture?
  • What controls the locus of volcanism?
  • How do convergent margin processes contribute to growth of the continent through time?

Crustal Strain and Deformation

Crustal strain and deformation is the change in shape and volume of continental and oceanic crust caused by stress applied to rock through tectonic forces. An array of variables including composition, temperature, pressure, etc., determines how the crust will deform.

A few questions hoping to be answered by EarthScope include:

  • How do crust and mantle rheology vary with rock type and with depth?
  • How does lithospheric rheology change in the vicinity of a fault zone?
  • What is the distribution of stress in the lithosphere?

Continental Deformation

Continental deformation is driven by plate interactions through active tectonic processes such as continental transform systems with extensional, strike-slip, and contractional regimes. EarthScope will provide velocity field data, portable and continuous GPS data, fault-zone drilling and sampling, reflection seismology, modern seismicity, pre-Holocene seismicity, and magnetotelluric and potential field data for a better understanding of continental deformation.

A few questions hoping to be answered by EarthScope include:

  • What are the fundamental controls on deformation of the continent?
  • What is the strength profile(s) of the lithosphere?
  • What defines tectonic regimes within the continent?

Continent Structure and Evolution

Earth's continents are compositionally distinct from the oceanic crust. The continents record four billion years of geologic history, while the oceanic crust gets recycled about every 180 million years. Because of the age of continental crusts, the ancient structural evolution of the continents can be studied. Data from EarthScope will be used to find the mean seismic structure of the continental crust, associated mantle, and crust-mantle transition. Variability in that structure will also be studied. EarthScope will attempt to define continental lithosphere formation and continent structure and to identify the relationship between continental structure and deformation.

A few questions hoping to be answered by EarthScope include:

  • How does magmatism modify, enlarge, and deform continental lithosphere?
  • How are the crust and lithospheric mantle related?
  • What is the role of extension, orogenic collapse, and rifting in constructing the continents?

Faults and Earthquake Processes

EarthScope is acquiring 3D and 4D data that will give scientists a more detailed insight into faulting and earthquakes than ever before. This project is providing a much needed data upgrade from work done in previous years thanks to many technological advances. New data will enable an improved study and understanding of faults and earthquakes that will increase our knowledge of the complete earthquake process, allowing for the continued development of predictive models. Detailed information on internal fault zone architecture, crust and upper mantle structure, strain rates, and transitions between fault systems and deformation types, as well as heat flow, electromagnetic/magnetotelluric, and seismic waveform data, will all be made available.

A few questions hoping to be answered by EarthScope include:

  • How does strain accumulate and release at plate boundaries and within the North American plate?
  • How do earthquakes start, rupture, and stop?
  • What is the absolute strength of faults and the surrounding lithosphere?

The structure of the Earth

Deep Earth Structure

Through the use of seismology, scientists will be able to collect and evaluate data from the deepest parts of our planet, from the continental lithosphere down to the core. The relationship between lithospheric and upper mantle processes is not completely known, including upper mantle processes below the United States and their effects on the continental lithosphere. There are many issues of interest, such as determining the source of forces originating in the upper mantle and their effects on the continental lithosphere. Seismic data will also give scientists more understanding and insight into the lower mantle and the Earth's core, as well as activity at the core-mantle boundary.

A few questions hoping to be answered by EarthScope include:

  • How is evolution of the continents linked to processes in the upper mantle?
  • What is the level of heterogeneity in the mid-mantle?
  • What is the nature and heterogeneity of the lower mantle and core-mantle boundary?

Fluids and Magmas

EarthScope hopes to provide a better understanding of the physics of fluids and magmas in active volcanic systems in relation to the deep Earth and how the evolution of continental lithosphere is related to upper mantle processes. The basic idea of how the various melts are formed is known, but not the volumes and rates of magma production outside of Mid-ocean ridge basalts. EarthScope will provide seismic data and tomographic images of the mantle to better understand these processes.

A few questions hoping to be answered by EarthScope include:

  • Over what temporal and spatial scales do earthquake deformation and volcanic eruptions couple?
  • What controls eruption style?
  • What are the predictive signs of imminent volcanic eruption? What are the structural, rheological, and chemical controls on fluid flow in the crust?

Education and Outreach

The Education and Outreach Program is designed to integrate EarthScope into both the classroom and the community. The program must reach out to scientific educators and students as well as industry professionals (engineers, land/resource managers, technical application/data users), partners of the project (UNAVCO, IRIS, USGS, NASA, etc.), and the general public. To accomplish this, the EOP offers a wide array of educational workshops and seminars, directed at various audiences, to offer support on data interpretation and implementation of data products into the classroom. Their job is to make sure that everyone understands what EarthScope is, what it is doing in the community, and how to use the data it is producing. By generating new research opportunities for students in the scientific community, the program also hopes to expand recruitment for future generations of earth scientists.

Mission

"To use EarthScope data, products, and results to create a measurable and lasting change on the way that Earth science is taught and perceived in the United States."

Goals

  • Create a high-profile public identity for EarthScope that emphasizes the integrated nature of the scientific discoveries and the importance of EarthScope research initiatives.
  • Establish a sense of ownership among scientific, professional, and educational communities and the public so that a diverse group of individuals and organizations can and will make contributions to EarthScope.
  • Promote science literacy and understanding of EarthScope among all audiences through informal education venues.
  • Advance formal Earth science education by promoting inquiry-based classroom investigations that focus on understanding Earth and the interdisciplinary nature of EarthScope.
  • Encourage use of EarthScope data, discoveries, and new technology in resolving challenging problems and improving our quality of life.

EarthScope In the Classroom

Education and outreach will be developing tools for educators and students across the United States to interpret and apply this information for solving a wide range of scientific issues within the earth sciences. The project tailors its products to the specified needs and requests of educators.

K-12 Education

One tool that has already been put into action is the EarthScope Education and Outreach Bulletin. The bulletin, targeted for grades 5-8, summarizes a volcanic or tectonic event documented by EarthScope and puts it into an easily interpretable format, complete with diagrams and 3D models. They follow specific content standards based on what a child should be learning at those grade levels. Another is the EarthScope Voyager, Jr. which allows students to explore and visualize the various types of data that are being collected. In this interactive map, the user can add various types of base maps, features, and plate velocities. Educators have access to real time GPS data of plate movement and influences through the UNAVCO website.

University Level

EarthScope promises to produce a large amount of geological and geophysical data that will open the door for numerous research opportunities in the scientific community. As the USArray Big Foot project moves across the country, universities are adopting seismic stations near their areas. These stations are then monitored and maintained not only by the professors, but by their students as well. Scouting for future seismic station locations has created field work opportunities for students. The influx of data has already begun creating projects for undergraduate research, master's theses, and doctoral dissertations. A list of currently funded proposals can be found on the NSF website.

Legacy

Many applications for EarthScope data currently exist, as mentioned above, and many more will arise as more data become available. The EarthScope program is dedicated to determining the three dimensional structure of the North American continent. Future uses of the data that it produces might include hydrocarbon exploration, aquifer boundary establishment, remote sensing technique development, and earthquake risk assessment. Due to the open and free-to-the-public data portals that EarthScope and its partners maintain, the applications are limited only by the creativity of those who wish to sort through the gigabytes of data. Also, because of its scale, the program will undoubtedly be a topic of casual conversation for many people outside of the geologic community, and EarthScope will be discussed in political, educational, social, and scientific arenas.

Geologic Legacy

The multidisciplinary character of EarthScope will create stronger network connections between geologists of all types and from around the country. Building an Earth model of this scale requires a complex community effort, and this model is likely to be the first EarthScope legacy. Researchers analyzing the data will leave us with a greater scientific understanding of geologic resources in the Great Basin and of the evolution of the plate boundary on the North American west coast. Another geologic legacy desired by the initiative is to invigorate the Earth sciences community. Invigoration is self-perpetuating, as evidenced by participation from thousands of organizations from around the world and from all levels of students and researchers. This leads to a significantly heightened awareness within the general public, including the next cohort of prospective Earth scientists. With further evolution of the EarthScope project, there may even be opportunities to create new observatories with greater capabilities, including extending the USArray over the Gulf of Mexico and the Gulf of California. There is much promise for EarthScope tools and observatories, even after retirement, to be used by universities and professional geologists. These tools include the physical equipment, software invented to analyze the data, and other data and educational products initiated or inspired by EarthScope.

Political Legacy

The science produced by EarthScope and the researchers using its data products will guide lawmakers in environmental policy, hazard identification, and ultimately, federal funding of more large-scale projects like this one. Besides the three physical dimensions of North America's structure, a fourth dimension of the continent is being described through geochronology using EarthScope data. Improving understanding of the continent's geologic history will allow future generations to more efficiently manage and utilize geologic resources and live with geologic hazards. Environmental policy laws have been the subject of some controversy since the European settlement of North America. Specifically, water and mineral rights issues have been the focus of dispute. Representatives in Washington D.C. and the state capitals require guidance from authoritative science in drafting the soundest environmental laws for our country. The EarthScope research community is in a position to provide the most reliable course for government to take concerning environmental policy.

Hazard identification with EarthScope is an application already in use. In fact, the Federal Emergency Management Agency (FEMA) has awarded the Arizona Geological Survey and its partner universities funding to adopt and maintain eight Transportable Array stations. The stations will be used to update Arizona's earthquake risk assessment.

Social Legacy

For EarthScope to live up to its potential in the Earth sciences, the connections between the research and the education and outreach communities must continue to be cultivated. Enhanced public outreach to museums, the National Park System, and public schools will ensure that these forward-thinking connections are fostered. National media collaboration with high-profile outlets such as Discovery Channel, Science Channel, and National Geographic may secure a lasting legacy within the social consciousness of the world. Earth science has already been promoted as a vital modern discipline, especially in today's “green” culture, to which EarthScope is contributing. The size of the EarthScope project augments the growing public awareness of the broad structure of the planet on which we live.

Planetary coordinate system

From Wikipedia, the free encyclopedia

A planetary coordinate system (also referred to as planetographic, planetodetic, or planetocentric) is a generalization of the geographic, geodetic, and the geocentric coordinate systems for planets other than Earth. Similar coordinate systems are defined for other solid celestial bodies, such as in the selenographic coordinates for the Moon. The coordinate systems for almost all of the solid bodies in the Solar System were established by Merton E. Davies of the Rand Corporation, including Mercury, Venus, Mars, the four Galilean moons of Jupiter, and Triton, the largest moon of Neptune.

Longitude

The longitude systems of most of those bodies with observable rigid surfaces have been defined by references to a surface feature such as a crater. The north pole is that pole of rotation that lies on the north side of the invariable plane of the Solar System (near the ecliptic). The location of the prime meridian as well as the position of the body's north pole on the celestial sphere may vary with time due to precession of the axis of rotation of the planet (or satellite). If the position angle of the body's prime meridian increases with time, the body has a direct (or prograde) rotation; otherwise the rotation is said to be retrograde.

In the absence of other information, the axis of rotation is assumed to be normal to the mean orbital plane; Mercury and most of the satellites are in this category. For many of the satellites, it is assumed that the rotation period is equal to the mean orbital period. In the case of the giant planets, since their surface features are constantly changing and moving at various rates, the rotation of their magnetic fields is used as a reference instead. In the case of the Sun, even this criterion fails (because its magnetosphere is very complex and does not really rotate in a steady fashion), and an agreed-upon value for the rotation of its equator is used instead.

For planetographic longitude, west longitudes (i.e., longitudes measured positively to the west) are used when the rotation is prograde, and east longitudes (i.e., longitudes measured positively to the east) when the rotation is retrograde. In simpler terms, imagine a distant, non-orbiting observer viewing a planet as it rotates. Also suppose that this observer is within the plane of the planet's equator. A point on the Equator that passes directly in front of this observer later in time has a higher planetographic longitude than a point that did so earlier in time.

However, planetocentric longitude is always measured positively to the east, regardless of which way the planet rotates. East is defined as the counter-clockwise direction around the planet, as seen from above its north pole, and the north pole is whichever pole more closely aligns with the Earth's north pole. Longitudes traditionally have been written using "E" or "W" instead of "+" or "−" to indicate this polarity. For example, −91°, 91°W, +269° and 269°E all mean the same thing.
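
A small helper (illustrative only, not an IAU utility) expresses the equivalence in the previous sentence: −91°, 91°W, +269° and 269°E all name the same meridian once expressed as positive-east (planetocentric) longitude in the range [0°, 360°).

    # Normalize a longitude given either as a signed positive-east value or with an
    # explicit 'E'/'W' hemisphere label to a positive-east longitude in [0, 360).
    def to_east_longitude(value_deg, hemisphere=None):
        """hemisphere: 'E', 'W', or None for a signed, positive-east value."""
        lon = -value_deg if hemisphere == "W" else value_deg
        return lon % 360.0

    print(to_east_longitude(-91))        # 269.0
    print(to_east_longitude(91, "W"))    # 269.0
    print(to_east_longitude(269, "E"))   # 269.0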

The modern standard for maps of Mars (since about 2002) is to use planetocentric coordinates. Guided by the works of historical astronomers, Merton E. Davies established the meridian of Mars at Airy-0 crater. For Mercury, the only other planet with a solid surface visible from Earth, a thermocentric coordinate is used: the prime meridian runs through the point on the equator where the planet is hottest (due to the planet's rotation and orbit, the Sun briefly retrogrades at noon at this point during perihelion, giving it more sunlight). By convention, this meridian is defined as exactly twenty degrees of longitude east of Hun Kal.

Tidally-locked bodies have a natural reference longitude passing through the point nearest to their parent body: 0° the center of the primary-facing hemisphere, 90° the center of the leading hemisphere, 180° the center of the anti-primary hemisphere, and 270° the center of the trailing hemisphere. However, libration due to non-circular orbits or axial tilts causes this point to move around any fixed point on the celestial body like an analemma.

Latitude

Planetographic latitude and planetocentric latitude may be similarly defined. The zero latitude plane (Equator) can be defined as orthogonal to the mean axis of rotation (poles of astronomical bodies). The reference surfaces for some planets (such as Earth and Mars) are ellipsoids of revolution for which the equatorial radius is larger than the polar radius, such that they are oblate spheroids.
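
As a hedged illustration of how the two latitudes differ on such an oblate spheroid: on an ellipsoid of revolution with equatorial radius a and polar radius b, the planetocentric latitude of a point on the surface relates to its planetographic (geodetic) latitude by tan(psi) = (b/a)^2 tan(phi). Earth's WGS84 radii (quoted in the Flattening section below) serve as example numbers.

    # Convert planetographic (geodetic) latitude to planetocentric latitude for a
    # surface point on an oblate spheroid, using WGS84 radii as example values.
    import math

    a, b = 6378137.0, 6356752.3142        # WGS84 equatorial and polar radii (m)

    def planetocentric_from_planetographic(phi_deg):
        return math.degrees(math.atan((b / a) ** 2 * math.tan(math.radians(phi_deg))))

    print(planetocentric_from_planetographic(45.0))   # ~44.81 deg, slightly nearer the equator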

Altitude

Vertical position can be expressed with respect to a given vertical datum, by means of physical quantities analogous to the topographical geocentric distance (compared to a constant nominal Earth radius or the varying geocentric radius of the reference ellipsoid surface) or altitude/elevation (above and below the geoid).

The areoid (the geoid of Mars) has been measured using flight paths of satellite missions such as Mariner 9 and Viking. The main departures from the ellipsoid expected of an ideal fluid are from the Tharsis volcanic plateau, a continent-size region of elevated terrain, and its antipodes.

The selenoid (the geoid of the Moon) has been measured gravimetrically by the GRAIL twin satellites.

Ellipsoid of revolution (spheroid)

Reference ellipsoids are also useful for defining geodetic coordinates and mapping other planetary bodies including planets, their satellites, asteroids and comet nuclei. Some well observed bodies such as the Moon and Mars now have quite precise reference ellipsoids.

For rigid-surface nearly-spherical bodies, which includes all the rocky planets and many moons, ellipsoids are defined in terms of the axis of rotation and the mean surface height excluding any atmosphere. Mars is actually egg-shaped, with north and south polar radii that differ by approximately 6 km (4 miles); however, this difference is small enough that the average polar radius is used to define its ellipsoid. The Earth's Moon is effectively spherical, having almost no bulge at its equator. Where possible, a fixed observable surface feature is used when defining a reference meridian.

For gaseous planets like Jupiter, an effective surface for an ellipsoid is chosen as the equal-pressure boundary of one bar. Since they have no permanent observable features, the choices of prime meridians are made according to mathematical rules.

Flattening

Comparison of the rotation period (sped up 10 000 times, negative values denoting retrograde), flattening and axial tilt of the planets and the Moon (SVG animation)

For the WGS84 ellipsoid to model Earth, the defining values are

a (equatorial radius): 6 378 137.0 m
1/f (inverse flattening): 298.257 223 563

from which one derives

b (polar radius): 6 356 752.3142 m,

so that the difference of the major and minor semi-axes is 21.385 km (13 mi). This is only 0.335% of the major axis, so a representation of Earth on a computer screen could be sized as 300 pixels by 299 pixels. This is practically indistinguishable from a sphere shown as 300 pixels by 300 pixels. Thus illustrations typically greatly exaggerate the flattening to highlight the concept of any planet's oblateness.
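
A quick check of these figures: the polar semi-axis follows from the two WGS84 defining constants as b = a(1 − 1/298.257223563).

    # Derive the WGS84 polar radius and the semi-axis difference from the two
    # defining constants quoted above.
    a = 6378137.0               # equatorial radius (m)
    inv_f = 298.257223563       # inverse flattening
    b = a * (1.0 - 1.0 / inv_f)
    print(b)                    # ~6356752.314 m
    print((a - b) / 1000.0)     # ~21.385 km difference between the semi-axes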

Other f values in the Solar System are 1/16 for Jupiter, 1/10 for Saturn, and 1/900 for the Moon. The flattening of the Sun is about 9×10⁻⁶.

Origin of flattening

In 1687, Isaac Newton published the Principia in which he included a proof that a rotating self-gravitating fluid body in equilibrium takes the form of an oblate ellipsoid of revolution (a spheroid). The amount of flattening depends on the density and the balance of gravitational force and centrifugal force.

Equatorial bulge

Equatorial bulge of the Solar System's major celestial bodies

Body    | Equatorial diameter (km) | Polar diameter (km) | Equatorial bulge (km) | Flattening ratio | Rotation period (h) | Density (kg/m³) | Predicted flattening ratio | Deviation from prediction
Earth   | 12,756.2 | 12,713.6 | 42.6   | 1 : 299.4 | 23.936 | 5515 | 1 : 232   | −23%
Mars    | 6,792.4  | 6,752.4  | 40     | 1 : 170   | 24.632 | 3933 | 1 : 175   | +3%
Ceres   | 964.3    | 891.8    | 72.5   | 1 : 13.3  | 9.074  | 2162 | 1 : 13.1  | −2%
Jupiter | 142,984  | 133,708  | 9,276  | 1 : 15.41 | 9.925  | 1326 | 1 : 9.59  | −38%
Saturn  | 120,536  | 108,728  | 11,808 | 1 : 10.21 | 10.56  | 687  | 1 : 5.62  | −45%
Uranus  | 51,118   | 49,946   | 1,172  | 1 : 43.62 | 17.24  | 1270 | 1 : 27.71 | −36%
Neptune | 49,528   | 48,682   | 846    | 1 : 58.54 | 16.11  | 1638 | 1 : 31.22 | −47%

Generally any celestial body that is rotating (and that is sufficiently massive to draw itself into a spherical or near-spherical shape) will have an equatorial bulge matching its rotation rate. With an equatorial bulge of 11,808 km, Saturn is the planet with the largest equatorial bulge in our Solar System.
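
A simple first-order estimate, assuming a uniform-density fluid body in equilibrium, appears to reproduce the predicted-flattening column of the table above; real planets are centrally condensed, which is one reason for the deviations.

    # First-order flattening estimate for a rotating, uniform-density fluid body:
    # f ≈ 15*pi / (4*G*rho*T**2), evaluated with rotation periods and densities
    # from the table above.
    import math

    G = 6.674e-11                                   # gravitational constant (SI units)

    def predicted_inverse_flattening(period_hours, density_kg_m3):
        T = period_hours * 3600.0
        f = 15.0 * math.pi / (4.0 * G * density_kg_m3 * T ** 2)
        return 1.0 / f

    print(round(predicted_inverse_flattening(23.936, 5515)))   # ~232 for Earth
    print(round(predicted_inverse_flattening(24.632, 3933)))   # ~175 for Mars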

Equatorial ridges

Equatorial bulges should not be confused with equatorial ridges. Equatorial ridges are a feature of at least four of Saturn's moons: the large moon Iapetus and the tiny moons Atlas, Pan, and Daphnis. These ridges closely follow the moons' equators. The ridges appear to be unique to the Saturnian system, but it is uncertain whether the occurrences are related or a coincidence. The first three were discovered by the Cassini probe in 2005; the Daphnean ridge was discovered in 2017. The ridge on Iapetus is nearly 20 km wide, 13 km high and 1300 km long. The ridge on Atlas is proportionally even more remarkable given the moon's much smaller size, giving it a disk-like shape. Images of Pan show a structure similar to that of Atlas, while the one on Daphnis is less pronounced.

Triaxial ellipsoid

Small moons, asteroids, and comet nuclei frequently have irregular shapes. For some of these, such as Jupiter's Io, a scalene (triaxial) ellipsoid is a better fit than the oblate spheroid. For highly irregular bodies, the concept of a reference ellipsoid may have no useful value, so sometimes a spherical reference is used instead and points identified by planetocentric latitude and longitude. Even that can be problematic for non-convex bodies, such as Eros, in that latitude and longitude don't always uniquely identify a single surface location.

Smaller bodies (Io, Mimas, etc.) tend to be better approximated by triaxial ellipsoids; however, triaxial ellipsoids would render many computations more complicated, especially those related to map projections. Many projections would lose their elegant and popular properties. For this reason spherical reference surfaces are frequently used in mapping programs.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...