Friday, July 28, 2023

Action research

From Wikipedia, the free encyclopedia

Action research is a philosophy and methodology of research generally applied in the social sciences. It seeks transformative change through the simultaneous process of taking action and doing research, which are linked together by critical reflection. Kurt Lewin, then a professor at MIT, first coined the term "action research" in 1944. In his 1946 paper "Action Research and Minority Problems" he described action research as "a comparative research on the conditions and effects of various forms of social action and research leading to social action" that uses "a spiral of steps, each of which is composed of a circle of planning, action and fact-finding about the result of the action".

Process

Action research is an interactive inquiry process that balances problem-solving actions implemented in a collaborative context with data-driven collaborative analysis or research, with the aim of understanding underlying causes and enabling future predictions about personal and organizational change.

After seven decades of action research development, many methods have evolved that adjust the balance to focus more on the actions taken or more on the research that results from the reflective understanding of the actions. This tension exists between:

  1. those who are more driven either by the researcher's agenda or by participants;
  2. those who are motivated primarily by instrumental goal attainment or by the aim of personal, organizational or societal transformation; and
  3. 1st-, to 2nd-, to 3rd-person research, that is, my research on my own action, aimed primarily at personal change; our research on our group (family/team), aimed primarily at improving the group; and 'scholarly' research aimed primarily at theoretical generalization or large-scale change.

Action research challenges traditional social science by moving beyond reflective knowledge created by outside experts sampling variables, to an active moment-to-moment theorizing, data collecting and inquiry occurring in the midst of emergent structure. "Knowledge is always gained through action and for action. From this starting point, to question the validity of social knowledge is to question, not how to develop a reflective science about action, but how to develop genuinely well-informed action – how to conduct an action science". In this sense, engaging in action research is a form of problem-based investigation by practitioners into their practice, thus it is an empirical process. The goal is both to create and share knowledge in the social sciences.

Major theoretical approaches

Chris Argyris' action science

Chris Argyris' action science begins with the study of how human beings design their actions in difficult situations. Humans design their actions to achieve intended consequences and are governed by a set of environmental variables. How those governing variables are treated in designing actions is the key difference between single-loop and double-loop learning. When actions are designed to achieve the intended consequences and to suppress conflict about the governing variables, a single-loop learning cycle usually ensues.

On the other hand, when actions are taken not only to achieve the intended consequences, but also to openly inquire about conflict and to possibly transform the governing variables, both single- and double-loop learning cycles usually ensue. (Argyris applies single- and double-loop learning concepts not only to personal behaviors but also to organizational behaviors in his models.) This is different from experimental research in which environmental variables are controlled and researchers try to find out cause and effect in an isolated environment.

John Heron and Peter Reason's cooperative inquiry

Cooperative inquiry, also known as collaborative inquiry, was first proposed by John Heron in 1971 and later expanded with Peter Reason and Demi Brown. The major idea is to "research 'with' rather than 'on' people." It emphasizes the full involvement of all active participants, as co-researchers, in research decisions.

Cooperative inquiry creates a research cycle among 4 different types of knowledge: propositional (as in contemporary science), practical (the knowledge that comes with actually doing what you propose), experiential (the real-time feedback we get about our interaction with the larger world) and presentational (the artistic rehearsal process through which we craft new practices). At every cycle, the research process includes these four stages, with deepening experience and knowledge of the initial proposition, or of new propositions.

Paulo Freire's participatory action research

Participatory action research builds on the critical pedagogy put forward by Paulo Freire as a response to the traditional formal models of education where the "teacher" stands at the front and "imparts" information to the "students" who are passive recipients. This was further developed in "adult education" models throughout Latin America.

Orlando Fals-Borda (1925–2008), Colombian sociologist and political activist, was one of the principal promoters of participatory action research (IAP in Spanish) in Latin America. He published a "double history of the coast", a book that compares the official "history" and the unofficial "story" of the north coast of Colombia.

William Barry's living educational theory approach to action research

William Barry defined an approach to action research which focuses on creating ontological weight, an idea he adapted from the existential Christian philosopher Gabriel Marcel. Barry was influenced by Jean McNiff's and Jack Whitehead's phraseology of living theory action research, but was diametrically opposed to the validation process advocated by Whitehead, which demanded video "evidence" of "energy flowing values", and to Whitehead's atheistic ontological position, which influenced his conception of values in action research.

Barry explained that living educational theory (LET) is "a critical and transformational approach to action research. It confronts the researcher to challenge the status quo of their educational practice and to answer the question, 'How can I improve what I'm doing?' Researchers who use this approach must be willing to recognize and assume responsibility for being 'living contradictions' in their professional practice – thinking one way and acting in another. The mission of the LET action researcher is to overcome workplace norms and self-behavior which contradict the researcher's values and beliefs. The vision of the LET researcher is to make an original contribution to knowledge through generating an educational theory proven to improve the learning of people within a social learning space. The standard of judgment for theory validity is evidence of workplace reform, transformational growth of the researcher, and improved learning by the people researcher claimed to have influenced...".

Action research in organization development

Wendell L. French and Cecil Bell define organization development (OD) at one point as "organization improvement through action research". If one idea can be said to summarize OD's underlying philosophy, it would be action research as it was conceptualized by Kurt Lewin and later elaborated and expanded on by other behavioral scientists. Concerned with social change and, more particularly, with effective, permanent social change, Lewin believed that the motivation to change was strongly related to action: If people are active in decisions affecting them, they are more likely to adopt new ways. "Rational social management", he said, "proceeds in a spiral of steps, each of which is composed of a circle of planning, action and fact-finding about the result of action".[18]

Lewin's description of the process of change involves three steps:

  • Unfreezing: The client system becomes aware of its problems and of the need for change.
  • Changing: The situation is diagnosed and new models of behavior are explored and tested.
  • Refreezing: Application of new behavior is evaluated, and if reinforcing, adopted.

Figure 1: Systems model of action-research process

Figure 1 summarizes the steps and processes involved in planned change through action research. Action research is depicted as a cyclical process of change.

  1. The cycle begins with a series of planning actions initiated by the client and the change agent working together. The principal elements of this stage include a preliminary diagnosis, data gathering, feedback of results, and joint action planning. In the language of systems theory, this is the input phase, in which the client system becomes aware of problems as yet unidentified, realizes it may need outside help to effect changes, and shares with the consultant the process of problem diagnosis.
  2. The second stage of action research is the action, or transformation, phase. This stage includes actions relating to learning processes (perhaps in the form of role analysis) and to planning and executing behavioral changes in the client organization. As shown in Figure 1, feedback at this stage would move via Feedback Loop A and would have the effect of altering previous planning to bring the learning activities of the client system into better alignment with change objectives. Included in this stage is action-planning activity carried out jointly by the consultant and members of the client system. Following the workshop or learning sessions, these action steps are carried out on the job as part of the transformation stage.
  3. The third stage of action research is the output or results phase. This stage includes actual changes in behavior (if any) resulting from corrective action steps taken following the second stage. Data are again gathered from the client system so that progress can be determined and necessary adjustments in learning activities can be made. Minor adjustments of this nature can be made in learning activities via Feedback Loop B (see Figure 1).

Major adjustments and reevaluations would return the OD project to the first or planning stage for basic changes in the program. The action-research model shown in Figure 1 closely follows Lewin's repetitive cycle of planning, action, and measuring results. It also illustrates other aspects of Lewin's general model of change. As indicated in the diagram, the planning stage is a period of unfreezing, or problem awareness. The action stage is a period of changing, that is, trying out new forms of behavior in an effort to understand and cope with the system's problems. (There is inevitable overlap between the stages, since the boundaries are not clear-cut and cannot be in a continuous process).

The results stage is a period of refreezing, in which new behaviors are tried out on the job and, if successful and reinforcing, become a part of the system's repertoire of problem-solving behavior. Action research is problem centered, client centered, and action oriented. It involves the client system in a diagnostic, active-learning, problem-finding and problem-solving process.
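
Purely as an illustration (the function and argument names below are invented for this sketch, not drawn from the literature), the plan–act–evaluate cycle described above can be written as a loop with a feedback path back to planning:

```python
def action_research_cycle(diagnose, act, evaluate, max_cycles=5):
    """Iterate Lewin's planning -> action -> results loop until evaluation
    reports that no major readjustment is needed, or the cycle budget runs out."""
    for _ in range(max_cycles):
        plan = diagnose()        # input phase: diagnosis, data gathering, joint planning
        results = act(plan)      # transformation phase: carry out action steps
        if evaluate(results):    # output phase: gather data, measure progress
            return results       # "refreezing": the new behavior is adopted
    return None                  # major reevaluation: the project returns to planning

# Toy usage with stub functions standing in for real client/consultant work:
outcome = action_research_cycle(
    diagnose=lambda: {"problem": "low engagement"},
    act=lambda plan: {"change": "weekly feedback sessions"},
    evaluate=lambda results: True,
)
print(outcome)
```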

Worldwide expansion

Action research has become a significant methodology for intervention, development and change within groups and communities. It is promoted and implemented by many international development agencies and university programs, as well as local community organizations around the world, such as AERA and Claremont Lincoln in America, CARN in the United Kingdom, CCAR in Sweden, CLAYSS in Argentina, CARPED and PRIA in India, and ARNA in the Americas.

The Center for Collaborative Action Research makes available a set of twelve tutorials as a self-paced online course in learning how to do action research. It includes a free workbook that can be used online or printed.

Journal

The field is supported by a quarterly peer-reviewed academic journal, Action Research, founded in 2003 and edited by Hilary Bradbury.

Electric power system

From Wikipedia, the free encyclopedia
A steam turbine used to provide electric power

An electric power system is a network of electrical components deployed to supply, transfer, and use electric power. An example of a power system is the electrical grid that provides power to homes and industries within an extended area. The electrical grid can be broadly divided into the generators that supply the power, the transmission system that carries the power from the generating centers to the load centers, and the distribution system that feeds the power to nearby homes and industries.

Smaller power systems are also found in industry, hospitals, commercial buildings, and homes. A single-line diagram helps to represent the whole system. The majority of these systems rely upon three-phase AC power—the standard for large-scale power transmission and distribution across the modern world. Specialized power systems that do not always rely upon three-phase AC power are found in aircraft, electric rail systems, ocean liners, submarines, and automobiles.

History

A sketch of the Pearl Street Station

In 1881, two electricians built the world's first power system at Godalming in England. It was powered by two water wheels and produced an alternating current that in turn supplied seven Siemens arc lamps at 250 volts and 34 incandescent lamps at 40 volts. However, supply to the lamps was intermittent and in 1882 Thomas Edison and his company, Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station initially powered around 3,000 lamps for 59 customers. The power station generated direct current and operated at a single voltage. Direct current power could not be transformed easily or efficiently to the higher voltages necessary to minimize power loss during long-distance transmission, so the maximum economic distance between the generators and load was limited to around half a mile (800 m).

That same year in London, Lucien Gaulard and John Dixon Gibbs demonstrated the "secondary generator"—the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin where the transformer was used to light up 40 kilometers (25 miles) of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series so that active lamps would affect the brightness of other lamps further down the line.

In 1885, Ottó Titusz Bláthy working with Károly Zipernowsky and Miksa Déri perfected the secondary generator of Gaulard and Gibbs, providing it with a closed iron core and its present name: the "transformer". The three engineers went on to present a power system at the National General Exhibition of Budapest that implemented the parallel AC distribution system proposed by a British scientist in which several power transformers have their primary windings fed in parallel from a high-voltage distribution line. The system lit more than 1000 carbon filament lamps and operated successfully from May until November of that year.

Also in 1885 George Westinghouse, an American entrepreneur, obtained the patent rights to the Gaulard-Gibbs transformer and imported a number of them along with a Siemens generator, and set his engineers to experimenting with them in hopes of improving them for use in a commercial power system. In 1886, one of Westinghouse's engineers, William Stanley, independently recognized the problem with connecting transformers in series as opposed to parallel and also realized that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. Using this knowledge he built a multi-voltage transformer-based alternating-current power system serving multiple homes and businesses at Great Barrington, Massachusetts in 1886. The system was unreliable and short-lived, though, due primarily to generation issues. However, based on that system, Westinghouse would begin installing AC transformer systems in competition with the Edison Company later that year. In 1888, Westinghouse licensed Nikola Tesla's patents for a polyphase AC induction motor and transformer designs. Tesla consulted for a year at the Westinghouse Electric & Manufacturing Company, but it took a further four years for Westinghouse engineers to develop a workable polyphase motor and transmission system.

By 1889, the electric power industry was flourishing, and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe. These networks were effectively dedicated to providing electric lighting. During this time the rivalry between Thomas Edison and George Westinghouse's companies had grown into a propaganda campaign over which form of transmission (direct or alternating current) was superior, a series of events known as the "war of the currents". In 1891, Westinghouse installed the first major power system that was designed to drive a 100 horsepower (75 kW) synchronous electric motor, not just provide electric lighting, at Telluride, Colorado. On the other side of the Atlantic, Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown built the first long-distance (175 kilometers (109 miles)) high-voltage (15 kV, then a record) three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt, where power was used to light lamps and run a water pump. In the United States the AC/DC competition came to an end when Edison General Electric was taken over by their chief AC rival, the Thomson-Houston Electric Company, forming General Electric. In 1895, after a protracted decision-making process, alternating current was chosen as the transmission standard with Westinghouse building the Adams No. 1 generating station at Niagara Falls and General Electric building the three-phase alternating current power system to supply Buffalo at 11 kV.

Developments in power systems continued beyond the nineteenth century. In 1936 the first experimental high voltage direct current (HVDC) line using mercury arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by series-connected direct current generators and motors (the Thury system) although this suffered from serious reliability issues. The first solid-state metal diode suitable for general power uses was developed by Ernst Presser at TeKaDe in 1928. It consisted of a layer of selenium applied on an aluminum plate. In 1957, a General Electric research group developed the first thyristor suitable for use in power applications, starting a revolution in power electronics. In that same year, Siemens demonstrated a solid-state rectifier, but it was not until the early 1970s that solid-state devices became the standard in HVDC, when GE emerged as one of the top suppliers of thyristor-based HVDC. In 1979, a European consortium including Siemens, Brown Boveri & Cie and AEG realized the record HVDC link from Cabora Bassa to Johannesburg, extending more than 1,420 kilometers (880 miles) that carried 1.9 GW at 533 kV.

In recent times, many important developments have come from extending innovations in the information and communications technology (ICT) field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently, allowing for much better planning of power systems. Advances in information technology and telecommunication also allowed for effective remote control of a power system's switchgear and generators.

Basics of electric power

Animation of three-phase alternating current

Electric power is the product of two quantities: current and voltage. These two quantities can vary with respect to time (AC power) or can be kept at constant levels (DC power).
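
The product relationship can be written out directly; this is a minimal numeric sketch added for illustration, not part of the original article:

```python
def electric_power_w(voltage_v: float, current_a: float) -> float:
    """Electric power in watts: the product of voltage (volts) and current (amps)."""
    return voltage_v * current_a

# A DC load drawing 5 A from a 12 V battery dissipates 60 W.
print(electric_power_w(12.0, 5.0))  # → 60.0
```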

Most refrigerators, air conditioners, pumps and industrial machinery use AC power, whereas most computers and digital equipment use DC power (digital devices plugged into the mains typically have an internal or external power adapter to convert from AC to DC power). AC power has the advantage of being easy to transform between voltages and is able to be generated and utilised by brushless machinery. DC power remains the only practical choice in digital systems and can be more economical to transmit over long distances at very high voltages (see HVDC).

The ability to easily transform the voltage of AC power is important for two reasons. Firstly, power can be transmitted over long distances with less loss at higher voltages, so in power systems where generation is distant from the load, it is desirable to step up (increase) the voltage of power at the generation point and then step down (decrease) the voltage near the load. Secondly, it is often more economical to install turbines that produce higher voltages than would be used by most appliances, so the ability to easily transform voltages means this mismatch between voltages can be easily managed.
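
The loss argument can be checked with a small sketch. For a fixed power delivered through a line of fixed resistance, the current is I = P/V, so the resistive loss I²R falls with the square of the transmission voltage (the line values below are illustrative, not from the article):

```python
def line_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss for delivering a given power at a given transmission voltage.

    The line current is I = P / V, so the loss is (P / V)**2 * R.
    """
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# Delivering 1 MW over a line with 5 ohms of resistance:
print(line_loss_w(1e6, 11_000, 5.0))   # ≈ 41322 W lost at 11 kV
print(line_loss_w(1e6, 110_000, 5.0))  # ≈ 413 W lost at 110 kV: 100x less
```

Stepping the voltage up by a factor of ten cuts the loss by a factor of one hundred, which is why long transmission lines run at such high voltages.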

Solid-state devices, which are products of the semiconductor revolution, make it possible to transform DC power to different voltages, build brushless DC machines and convert between AC and DC power. Nevertheless, devices utilising solid-state technology are often more expensive than their traditional counterparts, so AC power remains in widespread use.

Components of power systems

Supplies

The majority of the world's power still comes from coal-fired power stations like this

All power systems have one or more sources of power. For some power systems, the source of power is external to the system but for others, it is part of the system itself—it is these internal power sources that are discussed in the remainder of this section. Direct current power can be supplied by batteries, fuel cells or photovoltaic cells. Alternating current power is typically supplied by a rotor that spins in a magnetic field in a device known as a turbo generator. A wide range of techniques has been used to spin a turbine's rotor, from steam heated using fossil fuel (including coal, gas and oil) or nuclear energy to falling water (hydroelectric power) and wind (wind power).

The speed at which the rotor spins in combination with the number of generator poles determines the frequency of the alternating current produced by the generator. All generators on a single synchronous system, for example, the national grid, rotate at sub-multiples of the same speed and so generate electric current at the same frequency. If the load on the system increases, the generators will require more torque to spin at that speed and, in a steam power station, more steam must be supplied to the turbines driving them. Thus the steam used and the fuel expended directly relate to the quantity of electrical energy supplied. An exception exists for generators incorporating power electronics such as gearless wind turbines or linked to a grid through an asynchronous tie such as an HVDC link; these can operate at frequencies independent of the power system frequency.
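
The speed-to-frequency relationship is the standard synchronous machine formula f = poles × rpm / 120, which can be sketched as follows (the machine figures are illustrative):

```python
def synchronous_frequency_hz(poles: int, speed_rpm: float) -> float:
    """Frequency of a synchronous generator: f = poles * rpm / 120.

    The factor 120 combines the rev/min-to-rev/s conversion (60)
    with the two poles per pole pair (2).
    """
    return poles * speed_rpm / 120.0

# A 2-pole turbo generator at 3000 rpm and a 4-pole machine at 1500 rpm
# both produce 50 Hz, so they can run on the same synchronous grid:
print(synchronous_frequency_hz(2, 3000))  # → 50.0
print(synchronous_frequency_hz(4, 1500))  # → 50.0
```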

Depending on how the poles are fed, alternating current generators can produce a variable number of phases of power. A higher number of phases leads to more efficient power system operation but also increases the infrastructure requirements of the system. Electricity grid systems connect multiple generators operating at the same frequency: the most common being three-phase at 50 or 60 Hz.

There are a range of design considerations for power supplies. These range from the obvious: How much power should the generator be able to supply? What is an acceptable length of time for starting the generator (some generators can take hours to start)? Is the availability of the power source acceptable (some renewables are only available when the sun is shining or the wind is blowing)? To the more technical: How should the generator start (some turbines act like a motor to bring themselves up to speed in which case they need an appropriate starting circuit)? What is the mechanical speed of operation for the turbine and consequently what are the number of poles required? What type of generator is suitable (synchronous or asynchronous) and what type of rotor (squirrel-cage rotor, wound rotor, salient pole rotor or cylindrical rotor)?

Loads

A toaster is a great example of a single-phase load that might appear in a residence. Toasters typically draw 2 to 10 amps at 110 to 260 volts, consuming around 600 to 1200 watts of power.

Power systems deliver energy to loads that perform a function. These loads range from household appliances to industrial machinery. Most loads expect a certain voltage and, for alternating current devices, a certain frequency and number of phases. The appliances found in residential settings, for example, will typically be single-phase operating at 50 or 60 Hz with a voltage between 110 and 260 volts (depending on national standards). An exception exists for larger centralized air conditioning systems as these are now often three-phase because this allows them to operate more efficiently. All electrical appliances also have a wattage rating, which specifies the amount of power the device consumes. At any one time, the net amount of power consumed by the loads on a power system must equal the net amount of power produced by the supplies less the power lost in transmission.

Making sure that the voltage, frequency and amount of power supplied to the loads is in line with expectations is one of the great challenges of power system engineering. However, it is not the only challenge: in addition to the power used by a load to do useful work (termed real power), many alternating current devices also use an additional amount of power because they cause the alternating voltage and alternating current to become slightly out-of-sync (termed reactive power). The reactive power, like the real power, must balance (that is, the reactive power produced on a system must equal the reactive power consumed) and can be supplied from the generators; however, it is often more economical to supply such power from capacitors (see "Capacitors and reactors" below for more details).
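
The real/reactive split is conventionally expressed as complex power S = P + jQ, where the magnitude of S (the apparent power) is what the supply actually has to carry. A small sketch with illustrative figures:

```python
def complex_power_va(real_w: float, reactive_var: float) -> complex:
    """Complex (apparent) power S = P + jQ, in volt-amperes."""
    return complex(real_w, reactive_var)

def power_factor(s: complex) -> float:
    """Ratio of real power to apparent power magnitude, P / |S|."""
    return s.real / abs(s)

# A motor doing 800 W of useful work while exchanging 600 var of
# reactive power loads the supply with 1000 VA, not 800:
s = complex_power_va(800.0, 600.0)
print(abs(s))           # → 1000.0
print(power_factor(s))  # → 0.8
```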

A final consideration with loads has to do with power quality. In addition to sustained overvoltages and undervoltages (voltage regulation issues) as well as sustained deviations from the system frequency (frequency regulation issues), power system loads can be adversely affected by a range of temporal issues. These include voltage sags, dips and swells, transient overvoltages, flicker, high-frequency noise, phase imbalance and poor power factor. Power quality issues occur when the power supply to a load deviates from the ideal. Power quality issues can be especially important when it comes to specialist industrial machinery or hospital equipment.

Conductors

Partially insulated medium-voltage conductors in California

Conductors carry power from the generators to the load. In a grid, conductors may be classified as belonging to the transmission system, which carries large amounts of power at high voltages (typically more than 69 kV) from the generating centres to the load centres, or the distribution system, which feeds smaller amounts of power at lower voltages (typically less than 69 kV) from the load centres to nearby homes and industry.
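
The classification above reduces to a one-line rule of thumb; note that the 69 kV threshold is the article's "typically", not a universal standard:

```python
def classify_grid_conductor(voltage_kv: float) -> str:
    """Classify a grid conductor as transmission or distribution by the ~69 kV convention."""
    return "transmission" if voltage_kv > 69 else "distribution"

print(classify_grid_conductor(230))  # → transmission
print(classify_grid_conductor(11))   # → distribution
```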

Choice of conductors is based on considerations such as cost, transmission losses and other desirable characteristics of the metal like tensile strength. Copper, with lower resistivity than aluminum, was once the conductor of choice for most power systems. However, aluminum has a lower cost for the same current carrying capacity and is now often the conductor of choice. Overhead line conductors may be reinforced with steel or aluminium alloys.

Conductors in exterior power systems may be placed overhead or underground. Overhead conductors are usually air insulated and supported on porcelain, glass or polymer insulators. Cables used for underground transmission or building wiring are insulated with cross-linked polyethylene or other flexible insulation. Conductors are often stranded to make them more flexible and therefore easier to install.

Conductors are typically rated for the maximum current that they can carry at a given temperature rise over ambient conditions. As current flow increases through a conductor it heats up. For insulated conductors, the rating is determined by the insulation. For bare conductors, the rating is determined by the point at which the sag of the conductors would become unacceptable.

Capacitors and reactors

A synchronous condenser installation at Templestowe substation, Melbourne, Victoria

The majority of the load in a typical AC power system is inductive; the current lags behind the voltage. Since the voltage and current are out-of-phase, this leads to the emergence of an "imaginary" form of power known as reactive power. Reactive power does no measurable work but is transmitted back and forth between the reactive power source and load every cycle. This reactive power can be provided by the generators themselves, but it is often cheaper to provide it through capacitors, hence capacitors are often placed near inductive loads (on-site if possible, otherwise at the nearest substation) to reduce current demand on the power system (i.e. increase the power factor).
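
Sizing such a capacitor bank follows from the standard relation Q_c = P(tan φ1 − tan φ2), where φ = arccos(power factor). A hedged sketch with illustrative numbers showing how correction reduces the line current:

```python
import math

def correction_var(real_w: float, pf_initial: float, pf_target: float) -> float:
    """Reactive power a shunt capacitor must supply to raise the power factor.

    Q_c = P * (tan(phi1) - tan(phi2)), with phi = arccos(power factor).
    """
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return real_w * (math.tan(phi1) - math.tan(phi2))

def line_current_a(real_w: float, voltage_v: float, pf: float) -> float:
    """Single-phase supply current: I = P / (V * pf)."""
    return real_w / (voltage_v * pf)

# Correcting a 10 kW load from 0.7 to 0.95 power factor at 230 V takes
# roughly 6.9 kvar of capacitance and cuts the drawn current from
# about 62 A to about 46 A:
print(round(correction_var(10_000, 0.7, 0.95)))
print(round(line_current_a(10_000, 230, 0.7), 1))
print(round(line_current_a(10_000, 230, 0.95), 1))
```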

Reactors consume reactive power and are used to regulate voltage on long transmission lines. In light load conditions, where the loading on transmission lines is well below the surge impedance loading, the efficiency of the power system may actually be improved by switching in reactors. Reactors installed in series in a power system also limit rushes of current flow, small reactors are therefore almost always installed in series with capacitors to limit the current rush associated with switching in a capacitor. Series reactors can also be used to limit fault currents.

Capacitors and reactors are switched by circuit breakers, which results in sizeable step changes of reactive power. A solution to this comes in the form of synchronous condensers, static VAR compensators and static synchronous compensators. Briefly, synchronous condensers are synchronous motors that spin freely to generate or absorb reactive power. Static VAR compensators work by switching in capacitors using thyristors as opposed to circuit breakers allowing capacitors to be switched-in and switched-out within a single cycle. This provides a far more refined response than circuit-breaker-switched capacitors. Static synchronous compensators take this a step further by achieving reactive power adjustments using only power electronics.

Power electronics

This external household AC to DC power adapter uses power electronics

Power electronics are semiconductor-based devices that are able to switch quantities of power ranging from a few hundred watts to several hundred megawatts. Despite their relatively simple function, their speed of operation (typically in the order of nanoseconds) means they are capable of a wide range of tasks that would be difficult or impossible with conventional technology. The classic function of power electronics is rectification, or the conversion of AC-to-DC power; power electronics are therefore found in almost every digital device that is supplied from an AC source, either as an adapter that plugs into the wall (see photo) or as a component internal to the device. High-powered power electronics can also be used to convert AC power to DC power for long-distance transmission in a system known as HVDC. HVDC is used because it proves to be more economical than similar high-voltage AC systems for very long distances (hundreds to thousands of kilometres). HVDC is also desirable for interconnects because it allows frequency independence, thus improving system stability. Power electronics are also essential for any power source that is required to produce an AC output but that by its nature produces a DC output. They are therefore used by photovoltaic installations.
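
As a concrete instance of rectification, the average output of an ideal full-wave rectifier fed with a sine wave is the textbook result 2·V_peak/π, which a quick numeric integration confirms (this sketch is illustrative and ignores device drops and filtering):

```python
import math

def full_wave_mean_analytic(v_peak: float) -> float:
    """Average of an ideal full-wave rectified sine: 2 * Vpeak / pi."""
    return 2.0 * v_peak / math.pi

def full_wave_mean_numeric(v_peak: float, samples: int = 100_000) -> float:
    """Midpoint-rule average of |Vpeak * sin(theta)| over one half cycle."""
    step = math.pi / samples
    total = sum(v_peak * math.sin((i + 0.5) * step) for i in range(samples))
    return total * step / math.pi

# For a 325 V peak (roughly European 230 V RMS mains), the ideal
# rectified mean is about 207 V; the two methods agree closely:
print(full_wave_mean_analytic(325.0))
print(full_wave_mean_numeric(325.0))
```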

Power electronics also feature in a wide range of more exotic uses. They are at the heart of all modern electric and hybrid vehicles, where they are used for both motor control and as part of the brushless DC motor. Power electronics are also found in practically all modern petrol-powered vehicles; this is because the car's batteries alone cannot supply ignition, air-conditioning, internal lighting, radio and dashboard displays for the life of the car. The batteries must therefore be recharged while driving, a feat that is typically accomplished using power electronics.

Some electric railway systems also use DC power and thus make use of power electronics to feed grid power to the locomotives and often for speed control of the locomotive's motor. In the middle of the twentieth century, rectifier locomotives were popular; these used power electronics to convert AC power from the railway network for use by a DC motor. Today most electric locomotives are supplied with AC power and run using AC motors, but still use power electronics to provide suitable motor control. The use of power electronics to assist with motor control and with starter circuits, in addition to rectification, is responsible for power electronics appearing in a wide range of industrial machinery. Power electronics even appear in modern residential air conditioners and are at the heart of the variable-speed wind turbine.

Protective devices

A multifunction digital protective relay typically installed at a substation to protect a distribution feeder

Power systems contain protective devices to prevent injury or damage during failures. The quintessential protective device is the fuse. When the current through a fuse exceeds a certain threshold, the fuse element melts, producing an arc across the resulting gap that is then extinguished, interrupting the circuit. Given that fuses can be built as the weak point of a system, they are ideal for protecting circuitry from damage. Fuses, however, have two problems. First, after they have functioned, fuses must be replaced, as they cannot be reset. This can prove inconvenient if the fuse is at a remote site or a spare fuse is not on hand. Second, fuses are typically inadequate as the sole safety device in most power systems, as they allow current flows well in excess of those that would prove lethal to a human or animal.

The first problem is resolved by the use of circuit breakers—devices that can be reset after they have broken current flow. In modern systems that use less than about 10 kW, miniature circuit breakers are typically used. These devices combine the mechanism that initiates the trip (by sensing excess current) as well as the mechanism that breaks the current flow in a single unit. Some miniature circuit breakers operate solely on the basis of electromagnetism. In these miniature circuit breakers, the current is run through a solenoid, and, in the event of excess current flow, the magnetic pull of the solenoid is sufficient to force open the circuit breaker's contacts (often indirectly through a tripping mechanism).

In higher powered applications, the protective relays that detect a fault and initiate a trip are separate from the circuit breaker. Early relays worked based upon electromagnetic principles similar to those mentioned in the previous paragraph; modern relays are application-specific computers that determine whether to trip based upon readings from the power system. Different relays will initiate trips depending upon different protection schemes. For example, an overcurrent relay might initiate a trip if the current on any phase exceeds a certain threshold, whereas a set of differential relays might initiate a trip if the sum of currents between them indicates there may be current leaking to earth. The circuit breakers in higher powered applications are different too. Air is typically no longer sufficient to quench the arc that forms when the contacts are forced open, so a variety of techniques are used. One of the most popular techniques is to keep the chamber enclosing the contacts flooded with sulfur hexafluoride (SF6), a non-toxic gas with sound arc-quenching properties. Other techniques are discussed in the reference.

The second problem, the inadequacy of fuses to act as the sole safety device in most power systems, is probably best resolved by the use of residual-current devices (RCDs). In any properly functioning electrical appliance, the current flowing into the appliance on the active line should equal the current flowing out of the appliance on the neutral line. A residual-current device works by monitoring the active and neutral lines and tripping the active line if it notices a difference. Residual-current devices require a separate neutral line for each phase and must be able to trip within a time frame before harm occurs. This is typically not a problem in most residential applications, where standard wiring provides an active and neutral line for each appliance (that is why power plugs always have at least two prongs) and the voltages are relatively low; however, these issues limit the effectiveness of RCDs in other applications, such as industry. Even with the installation of an RCD, exposure to electricity can still prove fatal.
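
The residual-current principle described above reduces to a simple comparison. A minimal sketch, with a hypothetical 30 mA trip threshold (a common residential value):

```python
def rcd_should_trip(active_current_a, neutral_current_a, threshold_a=0.030):
    """A residual-current device trips when the current leaving on the active
    line no longer matches the current returning on the neutral line,
    which suggests leakage to earth (possibly through a person)."""
    residual = abs(active_current_a - neutral_current_a)
    return residual > threshold_a

# A 10 A appliance with all current returning on neutral: no trip.
# The same appliance leaking 50 mA to earth: trip.
```

A real device performs this comparison continuously with a current transformer around both conductors and must open the circuit within tens of milliseconds of the imbalance appearing.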

SCADA systems

In large electric power systems, supervisory control and data acquisition (SCADA) is used for tasks such as switching on generators, controlling generator output and switching in or out system elements for maintenance. The first supervisory control systems implemented consisted of a panel of lamps and switches at a central console near the controlled plant. The lamps provided feedback on the state of the plant (the data acquisition function) and the switches allowed adjustments to the plant to be made (the supervisory control function). Today, SCADA systems are much more sophisticated and, due to advances in communication systems, the consoles controlling the plant no longer need to be near the plant itself. Instead, it is now common for plants to be controlled with equipment similar (if not identical) to a desktop computer. The ability to control such plants through computers has increased the need for security—there have already been reports of cyber-attacks on such systems causing significant disruptions to power systems.

Power systems in practice

Despite their common components, power systems vary widely both with respect to their design and how they operate. This section introduces some common power system types and briefly explains their operation.

Residential power systems

Residential dwellings almost always take supply from the low voltage distribution lines or cables that run past the dwelling. These operate at voltages of between 110 and 260 volts (phase-to-earth) depending upon national standards. A few decades ago small dwellings would be fed a single phase using a dedicated two-core service cable (one core for the active phase and one core for the neutral return). The active line would then be run through a main isolating switch in the fuse box and then split into one or more circuits to feed lighting and appliances inside the house. By convention, the lighting and appliance circuits are kept separate so the failure of an appliance does not leave the dwelling's occupants in the dark. All circuits would be fused with an appropriate fuse based upon the wire size used for that circuit. Circuits would have both an active and neutral wire with both the lighting and power sockets being connected in parallel. Sockets would also be provided with a protective earth. This would be made available to appliances to connect to any metallic casing. If this casing were to become live, the connection to earth would cause an RCD or fuse to trip, thus preventing the electrocution of any occupant who later handles the appliance. Earthing systems vary between regions, but in countries such as the United Kingdom and Australia both the protective earth and neutral line would be earthed together near the fuse box before the main isolating switch and the neutral earthed once again back at the distribution transformer.

There have been a number of minor changes over the years to the practice of residential wiring. Some of the most significant ways in which modern residential power systems in developed countries tend to vary from older ones include:

  • For convenience, miniature circuit breakers are now almost always used in the fuse box instead of fuses as these can easily be reset by occupants and, if of the thermomagnetic type, can respond more quickly to some types of fault.
  • For safety reasons, RCDs are now often installed on appliance circuits and, increasingly, even on lighting circuits.
  • Whereas residential air conditioners of the past might have been fed from a dedicated circuit attached to a single phase, larger centralised air conditioners that require three-phase power are now becoming common in some countries.
  • Protective earths are now run with lighting circuits to allow for metallic lamp holders to be earthed.
  • Increasingly residential power systems are incorporating microgenerators, most notably, photovoltaic cells.

Commercial power systems

Commercial power systems such as shopping centers or high-rise buildings are larger in scale than residential systems. Electrical designs for larger commercial systems are usually studied for load flow, short-circuit fault levels and voltage drop. The objectives of the studies are to assure proper equipment and conductor sizing, and to coordinate protective devices so that minimal disruption is caused when a fault is cleared. Large commercial installations will have an orderly system of sub-panels, separate from the main distribution board to allow for better system protection and more efficient electrical installation.

Typically one of the largest appliances connected to a commercial power system in hot climates is the HVAC unit, and ensuring this unit is adequately supplied is an important consideration in commercial power systems. Regulations for commercial establishments place other requirements on commercial systems that are not placed on residential systems. For example, in Australia, commercial systems must comply with AS 2293, the standard for emergency lighting, which requires emergency lighting be maintained for at least 90 minutes in the event of loss of mains supply. In the United States, the National Electrical Code requires commercial systems to be built with at least one 20 A sign outlet in order to light outdoor signage. Building code regulations may place special requirements on the electrical system for emergency lighting, evacuation, emergency power, smoke control and fire protection.

Power system management

Power system management varies depending upon the power system. Residential power systems and even automotive electrical systems are often run-to-fail. In aviation, the power system uses redundancy to ensure availability: on the Boeing 747-400 any of the four engines can provide power, and circuit breakers are checked as part of power-up (a tripped circuit breaker indicating a fault). Larger power systems require active management. In industrial plants or mining sites a single team might be responsible for fault management, augmentation and maintenance, whereas for the electric grid, management is divided amongst several specialised teams.

Fault management

Fault management involves monitoring the behaviour of the power system so as to identify and correct issues that affect the system's reliability. Fault management can be specific and reactive: for example, dispatching a team to restring conductor that has been brought down during a storm. Or, alternatively, can focus on systemic improvements: such as the installation of reclosers on sections of the system that are subject to frequent temporary disruptions (as might be caused by vegetation, lightning or wildlife).

Maintenance and augmentation

In addition to fault management, power systems may require maintenance or augmentation. As it is often neither economical nor practical for large parts of the system to be offline during this work, power systems are built with many switches. These switches allow the part of the system being worked on to be isolated while the rest of the system remains live. At high voltages, there are two switches of note: isolators and circuit breakers. Circuit breakers are load-breaking switches, whereas operating isolators under load would lead to unacceptable and dangerous arcing. In a typical planned outage, several circuit breakers are tripped to allow the isolators to be switched before the circuit breakers are again closed to reroute power around the isolated area. This allows work to be completed on the isolated area.

Frequency and voltage management

Beyond fault management and maintenance, one of the main difficulties in power systems is that the active power consumed plus losses must equal the active power produced. If load is reduced while generation inputs remain constant, the synchronous generators will spin faster and the system frequency will rise. The opposite occurs if load is increased. As such, the system frequency must be actively managed, primarily through switching dispatchable loads and generation on and off. Making sure the frequency is constant is usually the task of a system operator. Even with frequency maintained, the system operator can be kept occupied ensuring:

  1. equipment or customers on the system are being supplied with the required voltage
  2. reactive power transmission is minimised (leading to more efficient operation)
  3. teams are dispatched and the system is switched to mitigate any faults
  4. remote switching is undertaken to allow for system works
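
The link between power imbalance and frequency can be sketched with a first-order form of the swing equation. The parameter values here (system inertia constant H, MVA base) are illustrative assumptions, and governor response, load damping and losses are all ignored:

```python
def frequency_after(p_gen_mw, p_load_mw, seconds, f0=50.0,
                    inertia_h_s=5.0, s_base_mva=1000.0):
    """First-order estimate of system frequency after a sustained imbalance,
    from the aggregate swing equation:
        df/dt = f0 * (P_gen - P_load) / (2 * H * S_base)
    A generation shortfall makes the synchronous machines slow down, so the
    frequency falls; a surplus makes it rise."""
    df_dt = f0 * (p_gen_mw - p_load_mw) / (2 * inertia_h_s * s_base_mva)
    return f0 + df_dt * seconds
```

For example, a sustained 50 MW shortfall on this hypothetical 1000 MVA system pulls the frequency down by 0.25 Hz per second until the operator dispatches more generation or sheds load.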

Magnetohydrodynamics

From Wikipedia, the free encyclopedia
The plasma making up the Sun can be modeled as an MHD system

Magnetohydrodynamics (MHD; also called magneto-fluid dynamics or hydromagnetics) is a model of electrically conducting fluids that treats all interpenetrating particle species together as a single continuous medium. It is primarily concerned with the low-frequency, large-scale, magnetic behavior in plasmas and liquid metals and has applications in numerous fields including geophysics, astrophysics, and engineering.

The word magnetohydrodynamics is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement. The field of MHD was initiated by Hannes Alfvén, for which he received the Nobel Prize in Physics in 1970.

History

The MHD description of electrically conducting fluids was first developed by Hannes Alfvén in a 1942 paper published in Nature titled "Existence of Electromagnetic–Hydrodynamic Waves" which outlined his discovery of what are now referred to as Alfvén waves. Alfvén initially referred to these waves as "electromagnetic–hydrodynamic waves"; however, in a later paper he noted, "As the term 'electromagnetic–hydrodynamic waves' is somewhat complicated, it may be convenient to call this phenomenon 'magneto–hydrodynamic' waves."

Equations

In MHD, motion in the fluid is described using linear combinations of the mean motions of the individual species $\sigma$: the current density $\mathbf{J}$ and the center of mass velocity $\mathbf{v}$. In a given fluid, each species $\sigma$ has a number density $n_\sigma$, mass $m_\sigma$, electric charge $q_\sigma$, and a mean velocity $\mathbf{u}_\sigma$. The fluid's total mass density is then $\rho = \sum_\sigma m_\sigma n_\sigma$, and the motion of the fluid can be described by the current density expressed as

$$\mathbf{J} = \sum_\sigma n_\sigma q_\sigma \mathbf{u}_\sigma$$

and the center of mass velocity expressed as:

$$\mathbf{v} = \frac{1}{\rho} \sum_\sigma m_\sigma n_\sigma \mathbf{u}_\sigma$$
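
These species sums are straightforward to evaluate numerically. A minimal sketch for a hypothetical quasi-neutral electron-proton fluid, with velocities treated as one-dimensional for simplicity:

```python
# Each species: (name, number density n [m^-3], mass m [kg],
#                charge q [C], mean velocity u [m/s])
E = 1.602e-19  # elementary charge

species = [
    ("electrons", 1e18, 9.109e-31, -E, 1.0e4),
    ("protons",   1e18, 1.673e-27, +E, 0.0),
]

rho = sum(n * m for _, n, m, _, _ in species)          # total mass density
J = sum(n * q * u for _, n, _, q, u in species)        # current density
v = sum(n * m * u for _, n, m, _, u in species) / rho  # center-of-mass velocity
```

Because the protons carry almost all the mass, the center-of-mass velocity stays close to the proton velocity even though the fast-drifting electrons carry all the current.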

MHD can be described by a set of equations consisting of a continuity equation, an equation of motion, an equation of state, Ampère's law, Faraday's law, and Ohm's law. As with any fluid description of a kinetic system, a closure approximation must be applied to the highest moment of the particle distribution equation. This is often accomplished with approximations to the heat flux through a condition of adiabaticity or isothermality.

In the adiabatic limit, that is, the assumption of an isotropic pressure $p$ and isotropic temperature, a fluid with an adiabatic index $\gamma$, electrical resistivity $\eta$, magnetic field $\mathbf{B}$, and electric field $\mathbf{E}$ can be described by the continuity equation

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$$

the equation of state

$$\frac{d}{dt}\left(\frac{p}{\rho^\gamma}\right) = 0$$

the equation of motion

$$\rho \frac{d\mathbf{v}}{dt} = \mathbf{J} \times \mathbf{B} - \nabla p$$

the low-frequency Ampère's law

$$\mu_0 \mathbf{J} = \nabla \times \mathbf{B}$$

Faraday's law

$$\frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \mathbf{E}$$

and Ohm's law

$$\mathbf{E} + \mathbf{v} \times \mathbf{B} = \eta \mathbf{J}$$

Taking the curl of this equation and using Ampère's law and Faraday's law results in the induction equation,

$$\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta_m \nabla^2 \mathbf{B}$$

where $\eta_m = \eta / \mu_0$ is the magnetic diffusivity.
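
In the limit of a stationary fluid the induction equation reduces to a pure diffusion equation for the magnetic field, which can be integrated with a simple explicit finite-difference scheme. The grid, diffusivity and field profile below are illustrative assumptions:

```python
def diffuse(b, eta_m, dx, dt, steps):
    """Integrate dB/dt = eta_m * d2B/dx2 (the v = 0 limit of the induction
    equation) with an explicit scheme; endpoints are held fixed at zero.
    Stable while eta_m * dt / dx**2 <= 0.5."""
    b = list(b)
    for _ in range(steps):
        new = b[:]
        for i in range(1, len(b) - 1):
            new[i] = b[i] + eta_m * dt / dx ** 2 * (b[i + 1] - 2 * b[i] + b[i - 1])
        b = new
    return b

# A sharp field profile smooths out over time, as resistive diffusion predicts.
profile = [0.0] * 10 + [1.0] * 10 + [0.0] * 10
smoothed = diffuse(profile, eta_m=1.0, dx=1.0, dt=0.2, steps=50)
```

The sharp edges of the initial profile spread into the field-free regions while the peak decays, which is exactly the behaviour the ideal-MHD limit (next section) suppresses by setting the diffusion term to zero.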

In the equation of motion, the Lorentz force term $\mathbf{J} \times \mathbf{B}$ can be expanded using Ampère's law and a vector calculus identity to give

$$\mathbf{J} \times \mathbf{B} = \frac{(\mathbf{B} \cdot \nabla)\mathbf{B}}{\mu_0} - \nabla\left(\frac{B^2}{2\mu_0}\right)$$

where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force.

Ideal MHD

In view of the infinite conductivity, every motion (perpendicular to the field) of the liquid in relation to the lines of force is forbidden because it would give infinite eddy currents. Thus the matter of the liquid is "fastened" to the lines of force...

Hannes Alfvén, 1943

The simplest form of MHD, ideal MHD, assumes that the resistive term in Ohm's law is small relative to the other terms such that it can be taken to be equal to zero. This occurs in the limit of large magnetic Reynolds numbers during which magnetic induction dominates over magnetic diffusion at the velocity and length scales under consideration. Consequently, processes in ideal MHD that convert magnetic energy into kinetic energy, referred to as ideal processes, cannot generate heat and raise entropy.

A fundamental concept underlying ideal MHD is the frozen-in flux theorem which states that the bulk fluid and embedded magnetic field are constrained to move together such that one can be said to be "tied" or "frozen" to the other. Therefore, any two points that move with the bulk fluid velocity and lie on the same magnetic field line will continue to lie on the same field line even as the points are advected by fluid flows in the system. The connection between the fluid and magnetic field fixes the topology of the magnetic field in the fluid—for example, if a set of magnetic field lines are tied into a knot, then they will remain so as long as the fluid has negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store energy by moving the fluid or the source of the magnetic field. The energy can then become available if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored energy from the magnetic field.

Ideal MHD equations

In ideal MHD, the resistive term vanishes in Ohm's law giving the ideal Ohm's law,

$$\mathbf{E} + \mathbf{v} \times \mathbf{B} = 0$$

Similarly, the magnetic diffusion term in the induction equation vanishes giving the ideal induction equation,

$$\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B})$$

Applicability of ideal MHD to plasmas

Ideal MHD is only strictly applicable when:

  1. The plasma is strongly collisional, so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are therefore close to Maxwellian.
  2. The resistivity due to these collisions is small. In particular, the typical magnetic diffusion times over any scale length present in the system must be longer than any time scale of interest.
  3. The length scales of interest are much longer than the ion skin depth and the Larmor radius perpendicular to the field, and long enough along the field to ignore Landau damping; the time scales of interest are much longer than the ion gyration time (the system is smooth and slowly evolving).

Importance of resistivity

In an imperfectly conducting fluid the magnetic field can generally move through the fluid following a diffusion law with the resistivity of the plasma serving as a diffusion constant. This means that solutions to the ideal MHD equations are only applicable for a limited time for a region of a given size before diffusion becomes too important to ignore. One can estimate the diffusion time across a solar active region (from collisional resistivity) to be hundreds to thousands of years, much longer than the actual lifetime of a sunspot—so it would seem reasonable to ignore the resistivity. By contrast, a meter-sized volume of seawater has a magnetic diffusion time measured in milliseconds.
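
Such order-of-magnitude comparisons follow from a diffusion time of the form τ = μ₀L²/η. The sketch below assumes that bare formula and a rough seawater resistivity of 0.2 Ω·m; published estimates use definitions differing by large geometric factors and resistivity models, so mainly the scaling with L² should be taken from this:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]

def magnetic_diffusion_time_s(length_m, resistivity_ohm_m):
    """Order-of-magnitude magnetic diffusion time: tau = mu_0 * L^2 / eta.
    Exact prefactors depend on the geometry of the field and conductor."""
    return MU_0 * length_m ** 2 / resistivity_ohm_m

# Illustrative: metre-scale seawater (resistivity ~0.2 ohm*m) diffuses in
# well under a second; doubling the size quadruples the diffusion time.
tau_1m = magnetic_diffusion_time_s(1.0, 0.2)
tau_2m = magnetic_diffusion_time_s(2.0, 0.2)
```

The same L² scaling is what makes the estimate for a solar active region, with its enormous length scales and far lower resistivity, come out at centuries or longer.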

Even in physical systems—which are large and conductive enough that simple estimates of the Lundquist number suggest that the resistivity can be ignored—resistivity may still be important: many instabilities exist that can increase the effective resistivity of the plasma by factors of more than 10^9. The enhanced resistivity is usually the result of the formation of small-scale structure like current sheets or fine-scale magnetic turbulence, introducing small spatial scales into the system over which ideal MHD is broken and magnetic diffusion can occur quickly. When this happens, magnetic reconnection may occur in the plasma to release stored magnetic energy as waves, bulk mechanical acceleration of material, particle acceleration, and heat.

Magnetic reconnection in highly conductive systems is important because it concentrates energy in time and space, so that gentle forces applied to a plasma for long periods of time can cause violent explosions and bursts of radiation.

When the fluid cannot be considered as completely conductive, but the other conditions for ideal MHD are satisfied, it is possible to use an extended model called resistive MHD. This includes an extra term in Ohm's Law which models the collisional resistivity. Generally MHD computer simulations are at least somewhat resistive because their computational grid introduces a numerical resistivity.

Structures in MHD systems

Schematic view of the different current systems which shape the Earth's magnetosphere

In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional ribbons termed current sheets. These can divide the fluid into magnetic domains, inside of which the currents are relatively weak. Current sheets in the solar corona are thought to be between a few meters and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are thousands to hundreds of thousands of kilometers across). Another example is in the Earth's magnetosphere, where current sheets separate topologically distinct domains, isolating most of the Earth's ionosphere from the solar wind.

Waves

The wave modes derived using the MHD equations are called magnetohydrodynamic waves or MHD waves. There are three MHD wave modes that can be derived from the linearized ideal-MHD equations for a fluid with a uniform and constant magnetic field:

  • Alfvén waves
  • Slow magnetosonic waves
  • Fast magnetosonic waves
Phase velocity plotted with respect to θ for the cases vA > vs and vA < vs

These modes have phase velocities that are independent of the magnitude of the wavevector, so they experience no dispersion. The phase velocity depends on the angle between the wave vector $\mathbf{k}$ and the magnetic field $\mathbf{B}$. An MHD wave propagating at an arbitrary angle $\theta$ with respect to the time independent or bulk field $\mathbf{B}_0$ will satisfy the dispersion relation

$$\frac{\omega}{k} = v_A \cos\theta$$

where

$$v_A = \frac{B_0}{\sqrt{\mu_0 \rho}}$$

is the Alfvén speed. This branch corresponds to the shear Alfvén mode. Additionally the dispersion equation gives

$$\frac{\omega}{k} = \left( \frac{1}{2} \left[ \left(v_s^2 + v_A^2\right) \pm \sqrt{\left(v_s^2 + v_A^2\right)^2 - 4 v_s^2 v_A^2 \cos^2\theta} \right] \right)^{1/2}$$

where

$$v_s = \sqrt{\frac{\gamma p}{\rho}}$$

is the ideal gas speed of sound. The plus branch corresponds to the fast-MHD wave mode and the minus branch corresponds to the slow-MHD wave mode. A summary of the properties of these waves is provided:
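
The Alfvén and magnetosonic phase speeds follow directly from the standard ideal-MHD dispersion relation for any angle θ between k and B0. A minimal sketch:

```python
import math

def mhd_phase_speeds(v_a, v_s, theta):
    """Phase speeds (alfven, slow, fast) of the three ideal-MHD wave modes
    for Alfven speed v_a, sound speed v_s and angle theta between k and B0."""
    alfven = v_a * abs(math.cos(theta))
    s = v_a ** 2 + v_s ** 2
    disc = math.sqrt(s ** 2 - 4 * v_s ** 2 * v_a ** 2 * math.cos(theta) ** 2)
    fast = math.sqrt((s + disc) / 2)   # plus branch
    slow = math.sqrt((s - disc) / 2)   # minus branch
    return alfven, slow, fast

# Parallel propagation (theta = 0): fast mode travels at max(v_a, v_s) and
# slow at min(v_a, v_s); perpendicular (theta = pi/2): fast mode travels at
# sqrt(v_a**2 + v_s**2) while the slow and Alfven modes do not propagate.
```

Evaluating this over a sweep of θ reproduces the polar phase-velocity diagrams the stripped figure above illustrated.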

Mode | Type | Limiting phase speeds | Group velocity | Direction of energy flow
Alfvén wave | transversal; incompressible | $v_A \cos\theta$ | $v_A$, directed along $\mathbf{B}_0$ | strictly along $\mathbf{B}_0$
Fast magnetosonic wave | neither transversal nor longitudinal; compressional | $\max(v_A, v_s)$ parallel to $\mathbf{B}_0$; $\sqrt{v_A^2 + v_s^2}$ perpendicular | approx. equal to phase velocity | approx. isotropic
Slow magnetosonic wave | neither transversal nor longitudinal; compressional | $\min(v_A, v_s)$ parallel to $\mathbf{B}_0$; $0$ perpendicular | approx. confined near $\mathbf{B}_0$ | approx. along $\mathbf{B}_0$

The MHD oscillations will be damped if the fluid is not perfectly conducting but has a finite conductivity, or if viscous effects are present.

MHD waves and oscillations are a popular tool for the remote diagnostics of laboratory and astrophysical plasmas, for example, the corona of the Sun (Coronal seismology).

Extensions

Resistive
Resistive MHD describes magnetized fluids with finite electron diffusivity (η ≠ 0). This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth-Solar magnetic interactions.
Extended
Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor Radii in the particle gyromotion, and electron inertia.
Two-fluid
Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately. This description is more closely tied to Maxwell's equations as an evolution equation for the electric field exists.
Hall
In 1960, M. J. Lighthill criticized the applicability of ideal or resistive MHD theory for plasmas. It concerned the neglect of the "Hall current term" in Ohm's law, a frequent simplification made in magnetic fusion theory. Hall-magnetohydrodynamics (HMHD) takes into account this electric field description of magnetohydrodynamics, and Ohm's law takes the form
$$\mathbf{E} + \mathbf{v} \times \mathbf{B} = \eta \mathbf{J} + \frac{\mathbf{J} \times \mathbf{B}}{n_e e}$$
where $n_e$ is the electron number density and $e$ is the elementary charge. The most important difference is that in the absence of field line breaking, the magnetic field is tied to the electrons and not to the bulk fluid.
Electron MHD
Electron Magnetohydrodynamics (EMHD) describes small-scale plasmas in which electron motion is much faster than ion motion. The main effects are changes in conservation laws, additional resistivity, and the importance of electron inertia. Many effects of Electron MHD are similar to effects of the Two fluid MHD and the Hall MHD. EMHD is especially important for z-pinch, magnetic reconnection, ion thrusters, neutron stars, and plasma switches.
Collisionless
MHD is also often used for collisionless plasmas. In that case the MHD equations are derived from the Vlasov equation.
Reduced
By using a multiscale analysis the (resistive) MHD equations can be reduced to a set of four closed scalar equations. This allows for, amongst other things, more efficient numerical calculations.

Limitations

Importance of kinetic effects

Another limitation of MHD (and fluid theories in general) is that they depend on the assumption that the plasma is strongly collisional (this is the first criterion listed above), so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are Maxwellian. This is usually not the case in fusion, space and astrophysical plasmas. When this is not the case, or the interest is in smaller spatial scales, it may be necessary to use a kinetic model which properly accounts for the non-Maxwellian shape of the distribution function. However, because MHD is relatively simple and captures many of the important properties of plasma dynamics it is often qualitatively accurate and is therefore often the first model tried.

Effects which are essentially kinetic and not captured by fluid models include double layers, Landau damping, a wide range of instabilities, chemical separation in space plasmas and electron runaway. In the case of ultra-high intensity laser interactions, the incredibly short timescales of energy deposition mean that hydrodynamic codes fail to capture the essential physics.

Applications

Geophysics

Beneath the Earth's mantle lies the core, which is made up of two parts: the solid inner core and liquid outer core. Both have significant quantities of iron. The liquid outer core moves in the presence of the magnetic field, and eddies are set up in it due to the Coriolis effect. These eddies develop a magnetic field which boosts Earth's original magnetic field—a process which is self-sustaining and is called the geomagnetic dynamo.

Reversals of Earth's magnetic field

Based on the MHD equations, Gary Glatzmaier and Paul Roberts have made a supercomputer model of the Earth's interior. After running the simulations for thousands of years in virtual time, the changes in Earth's magnetic field can be studied. The simulation results are in good agreement with observations, as the simulations have correctly predicted that the Earth's magnetic field flips every few hundred thousand years. During the flips, the magnetic field does not vanish altogether—it just gets more complex.

Earthquakes

Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ultra low frequency (ULF) activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California, although a subsequent study indicates that this was little more than a sensor malfunction. On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake. Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes.

Space Physics

The study of space plasmas near Earth and throughout the Solar System is known as space physics. Areas researched within space physics encompass a large number of topics, ranging from the ionosphere to auroras, Earth's magnetosphere, the Solar wind, and coronal mass ejections.

MHD forms the framework for understanding how populations of plasma interact within the local geospace environment. Researchers have developed global models using MHD to simulate phenomena within Earth's magnetosphere, such as the location of Earth's magnetopause (the boundary between the Earth's magnetic field and the solar wind), the formation of the ring current, auroral electrojets, and geomagnetically induced currents.

One prominent use of global MHD models is in space weather forecasting. Intense solar storms have the potential to cause extensive damage to satellites and infrastructure, thus it is crucial that such events are detected early. The Space Weather Prediction Center (SWPC) runs MHD models to predict the arrival and impacts of space weather events at Earth.

Astrophysics

MHD applies to astrophysics, including stars, the interplanetary medium (space between the planets), and possibly within the interstellar medium (space between the stars) and jets. Most astrophysical systems are not in local thermal equilibrium, and therefore require an additional kinetic treatment to describe all the phenomena within the system (see Astrophysical plasma).

Sunspots are caused by the Sun's magnetic fields, as Joseph Larmor theorized in 1919. The solar wind is also governed by MHD. The differential solar rotation may be the long-term effect of magnetic drag at the poles of the Sun, an MHD phenomenon due to the Parker spiral shape assumed by the extended magnetic field of the Sun.

Previously, theories describing the formation of the Sun and planets could not explain how the Sun has 99.87% of the mass, yet only 0.54% of the angular momentum in the Solar System. In a closed system such as the cloud of gas and dust from which the Sun was formed, mass and angular momentum are both conserved. That conservation would imply that as the mass concentrated in the center of the cloud to form the Sun, it would spin faster, much like a skater pulling their arms in. The high speed of rotation predicted by early theories would have flung the proto-Sun apart before it could have formed. However, magnetohydrodynamic effects transfer the Sun's angular momentum into the outer solar system, slowing its rotation.

Breakdown of ideal MHD (in the form of magnetic reconnection) is known to be the likely cause of solar flares. The magnetic field in a solar active region over a sunspot can store energy that is released suddenly as a burst of motion, X-rays, and radiation when the main current sheet collapses, reconnecting the field.

Sensors

Magnetohydrodynamic sensors are used for precision measurement of angular velocities in inertial navigation systems, such as in aerospace engineering. Accuracy improves with the size of the sensor, and such sensors can survive harsh environments.

Engineering

MHD is related to engineering problems such as plasma confinement, liquid-metal cooling of nuclear reactors, and electromagnetic casting (among others).

A magnetohydrodynamic drive or MHD propulsor is a method for propelling seagoing vessels using only electric and magnetic fields, with no moving parts. The working principle is electrification of the propellant (gas or water), which is then accelerated by a magnetic field, pushing the vehicle in the opposite direction. Although some working prototypes exist, MHD drives remain impractical.
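The thrust comes from the Lorentz force on the current driven through the water: for current I crossing a field B along an electrode length L, F = I·L·B. A rough sketch with assumed, illustrative numbers (not specifications of any actual vessel):

```python
def mhd_thrust(current_a, electrode_len_m, b_tesla):
    """Lorentz force on a current-carrying channel of seawater:
    F = I * L * B (current perpendicular to the field)."""
    return current_a * electrode_len_m * b_tesla

# Assumed values: 2 kA driven across a 1 m channel in a 4 T
# superconducting field yields 8 kN of thrust.
thrust_n = mhd_thrust(2000, 1.0, 4.0)
```

The dependence on B is why practical designs lean on superconducting magnets: ordinary electromagnets cannot sustain strong enough fields over a large channel without prohibitive losses.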

The first prototype of this kind of propulsion was built and tested in 1965 by Stewart Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior-year undergraduate students to develop a submarine with this new propulsion system. In the early 1990s, the Ship & Ocean Foundation, based in Minato-ku, Tokyo, built an experimental boat, the Yamato-1, which used a magnetohydrodynamic drive incorporating a superconductor cooled by liquid helium, and could travel at 15 km/h.

MHD power generation fueled by potassium-seeded coal combustion gas showed potential for more efficient energy conversion (the absence of solid moving parts allows operation at higher temperatures), but failed due to cost-prohibitive technical difficulties. One major engineering problem was the failure of the wall of the primary coal combustion chamber due to abrasion.

In microfluidics, MHD is studied as a fluid pump for producing a continuous, nonpulsating flow in a complex microchannel design.

MHD can be implemented in the continuous casting process of metals to suppress instabilities and control the flow.

Industrial MHD problems can be modeled using the open-source software EOF-Library. Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting, and liquid metal stirring by rotating permanent magnets.

Magnetic drug targeting

An important task in cancer research is developing more precise methods for delivery of medicine to affected areas. One method involves the binding of medicine to biologically compatible magnetic particles (such as ferrofluids), which are guided to the target via careful placement of permanent magnets on the external body. Magnetohydrodynamic equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field.
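The guiding force on such a particle can be sketched with the standard magnetophoretic expression for a small magnetizable sphere in a field gradient. All parameter values below are assumed, illustrative inputs, not results from the literature:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetophoretic_force(radius_m, chi, b_tesla, grad_b_t_per_m):
    """Force on a small magnetizable sphere in a field gradient:
    F = V * chi / mu0 * B * dB/dx (linear-magnetization approximation)."""
    volume = 4.0 / 3.0 * math.pi * radius_m**3
    return volume * chi / MU0 * b_tesla * grad_b_t_per_m

# Assumed values: a 100 nm particle with susceptibility chi ~ 1 in a
# 0.5 T field with a 50 T/m gradient near a permanent magnet.
f_newton = magnetophoretic_force(100e-9, 1.0, 0.5, 50.0)
```

The force scales with particle volume and with the product B·∇B, which is why placement of the external magnets matters: the gradient, not just the field strength, does the steering.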
