
Tuesday, February 11, 2020

Grand Coulee

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Grand_Coulee
 
Grand Coulee, below Dry Falls. The layering effect of periodic basalt lava flows is visible.
Map showing the location of Grand Coulee in Washington state
Location: Washington state
Coordinates: 47.62°N 119.3075°W

Designated: 1965
Looking northward in the Grand Coulee.
 
Steamboat Rock in the Grand Coulee.
 
Part of the Grand Coulee has been dammed and filled with water as part of the Columbia Basin Project.
 
The Grand Coulee is an ancient river bed in the U.S. state of Washington. This National Natural Landmark stretches for about 60 miles (100 km) southwest from Grand Coulee Dam to Soap Lake and is bisected by Dry Falls into the Upper and Lower Grand Coulee.

Geological history

The Grand Coulee is part of the Columbia River Plateau. This area has underlying granite bedrock, formed deep in the Earth's crust 40 to 60 million years ago. The land periodically uplifted and subsided over millions of years, giving rise to some small mountains and, eventually, an inland sea.

From about 10 to 18 million years ago, a series of volcanic eruptions from the Grand Ronde Rift near the Idaho/Oregon/Washington/Montana border began to fill the inland sea with lava. In some places the volcanic basalt is 6,600 feet (2.0 km) thick. In other areas granite from the earlier mountains is still exposed. 

Starting about two million years ago, during the Pleistocene epoch, glaciation took place in the area. Large parts of northern North America were repeatedly covered with glacial ice sheets, at times reaching over 10,000 feet (3,000 m) in thickness. Periodic climate changes resulted in corresponding advances and retreats of ice. 

About 18,000 years ago a large finger of ice advanced into present-day Idaho, forming an ice dam at what is now Lake Pend Oreille. It blocked the Clark Fork River drainage, creating an enormous lake reaching far back into the mountain valleys of western Montana. As the lake deepened, the ice began to float. Leaks likely developed and enlarged, causing the dam to fail. The 500 cubic miles (2,100 km³) of water in Lake Missoula were released in just 48 hours: a torrential flood equivalent to ten times the combined flow of all the rivers in the world.
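
That comparison can be checked with back-of-the-envelope arithmetic, taking an assumed reference value of about 1.2 × 10⁶ m³/s for the combined discharge of all the world's rivers (a commonly cited approximation, not a figure from the text above). The average discharge of the outburst works out to

    Q = \frac{2{,}100\ \text{km}^3}{48\ \text{h}}
      = \frac{2.1 \times 10^{12}\ \text{m}^3}{1.728 \times 10^{5}\ \text{s}}
      \approx 1.2 \times 10^{7}\ \text{m}^3/\text{s},

which is indeed roughly ten times the assumed worldwide river total, consistent with the comparison above.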

This mass of water and ice, towering 2,000 feet (610 m) thick near the ice dam before release, flowed across the Columbia Basin, moving at speeds of up to 65 miles per hour (105 km/h). The deluge stripped away soil, cut deep canyons and carved out 50 cubic miles (210 km3) of earth, leaving behind areas of stark scabland. 

Over a period of nearly 2,500 years the cycle was repeated many times. Most of the displaced soil created new landforms, but some was carried far out into the Pacific Ocean. In Oregon's Willamette Valley, as far south as Eugene, the cataclysmic flood waters deposited fertile soil, and stranded icebergs left numerous boulders from as far away as Montana and Canada. At present-day Portland, the water measured 400 feet (120 m) deep. A canyon 200 feet (61 m) deep is carved into the far edge of the continental shelf. The web-like formation can be seen from space. Mountains of gravel as tall as 40-story buildings were left behind; boulders the size of small houses and weighing many tons were strewn about the landscape.

Grooves in the exposed granite bedrock are still visible in the area from the movement of glaciers, and numerous erratics are found in the elevated areas to the northwest of the coulee.

Early theories suggested that glaciers diverted the Columbia River into what became the Grand Coulee and that normal flows caused the erosion observed. In 1910 Joseph T. Pardee described a great Ice Age lake, "Glacial Lake Missoula", a glacier-dammed lake in northwest Montana with water up to 1,970 feet (600 m) deep, and in 1940 he reported his discovery that giant dunes 50 feet (15 m) high and 200–500 feet (61–152 m) apart had formed on the lake bed. In the 1920s J Harlen Bretz looked deeper into the landscape and put forth his theory of the dam breaches and massive glacial floods from Lake Missoula.

Dry Falls, south of Banks Lake and one of the largest waterfalls ever known, is an excellent example of the Channeled Scablands.

It is probable that humans witnessed, and fell victim to, the immense power of the Ice Age floods. Archeological records date human presence back to nearly the end of the Ice Age, but the raging torrents scoured the land of clear evidence, leaving open the question of who, if anyone, may have survived. With the end of the last glacial advance, the Columbia settled into its present course. The river bed is about 660 feet (200 m) below the Grand Coulee. Walls of the coulee reach 1,300 feet (400 m) in height.

Upper Coulee

Grand Coulee is the longest and deepest of eastern Washington's canyons. Its unusual characteristics include a floor that is lower at the head of the channel than at its outlet, and the widest and highest dry-falls cliff in the middle. It was created through the process of cataract recession, which at one stage involved a cataract twice as high as the existing Dry Falls.

Grand Coulee is two canyons, with an open basin in the middle. The Upper Coulee, filled by Banks Lake, is 25 miles (40 km) long with walls 800 to 900 feet (240 to 270 m) tall. It links to the Columbia River at Grand Coulee Dam and leads southward through the surrounding highlands. The entry to the coulee is 650 feet (200 m) above the Columbia. It began as the course of a glacial Columbia River. The Wisconsin ice sheet's Okanogan lobe extended southward across the Columbia River's pathway and onto the southern plateau, creating an ice dam. This dam backed up the waters of the Columbia into Glacial Lake Columbia and, later, during the Missoula Floods, forced those waters into eastern Washington, creating the Scablands.

The river at Grand Coulee found no existing valley and thus forged its own pathway across the divide, creating the Upper Coulee. The plateau is not level, but is marked with wrinkles and upfolds of the basalt. The diverted waters of the Columbia encountered the monoclinal flexure, a steep upward warping of 1,000 feet (300 m) toward the northwest. Lake Columbia topped the ridge at the higher side of the flexure. Encountering the steep slope of the monocline, the new river would have cascaded off the rim, 800 feet (240 m) down onto the broad plain where Coulee City and Dry Falls State Park now stand.

Waterfall Erosion

Formation of Grand Coulee
 
Upper Grand Coulee began as an 800-foot (240 m) cascade just north of Coulee City. As the rush of water eroded the surface, it steepened into a waterfall. The falls continued to erode backward (northward), creating the canyon. When the falls reached the divide into Lake Columbia, i.e. the preglacial Columbia Valley, they disappeared, leaving the elongated notch. Today, water from Lake Roosevelt is pumped 280 feet (85 m) up from Grand Coulee Dam into Banks Lake, which acts as an equalizing reservoir and irrigation water source.

Evidence of the waterfalls includes a plunge basin where the falls began, immediately south of Coulee City. It contains at least 300 feet (91 m) of gravel below the open floor of the surrounding land. The river above the falls was shallow and much wider than the gorge; thus it wrapped around the lip of the main falls, creating lateral falls. These flowed until the recession of the main falls denied them water. Northrup Canyon, in Steamboat Rock State Park, contains a dry cataract as wide as Niagara Falls and three times as high.[7] Steamboat Rock, 880 feet (270 m) high and 1 square mile (2.6 km²) in area, now stands as an isolated rise, but for a time it created two cataracts. When the falls passed north of Steamboat Rock, they found a granite base beneath the basalt flows. Granite, lacking the close vertical joints of basalt, resisted the erosion from the cataract's plunge. It remains as hills on the broad floor of the Coulee. Some gravel-bar deposits are visible along Route 155. They provide evidence of eddies in the lee of rock shoulders.

Lower Coulee

Dry Falls is at the head of the Lower Grand Coulee. The Great Cataract forms the divide between the upper and lower coulees. The Lower Coulee trends along the monoclinal flexure to Soap Lake, where the canyons end and the water flowed out into Quincy Basin. Quincy Basin is filled with the eroded gravels and silts from the Coulee. The Lower Coulee also created its own path across the plains. Evidence of this is found in the tilted flows visible at the Hogback islands in Lake Lenore and along Washington 17 from Dry Falls to Park Lake. Numerous canyons acted as a distribution system for the volume of water flowing out of the upper coulee. The distribution begins in the uncanyoned basin below Dry Falls and expanded to over 15 miles (24 km) before reaching Quincy Basin. One cataract (Unnamed Coulee) is 150 feet (46 m) high and had three alcoves over more than 1 mile (1.6 km). There is no channel, as the water arrived in a broad sheet. The gravel deposits of Quincy Basin represent only a third or a fourth of the estimated 11 cubic miles of rock excavated from the Grand Coulee and its other, smaller related coulees (Dry, Long Lake, Jasper, Lenore, and Unnamed). Most of the debris was carried on through and beyond Quincy Basin.

The Ephrata Fan is a gravel fan formed when floodwaters from the lower Grand Coulee entered the Quincy Basin during the formation of the Scablands.

Modern uses

The area surrounding the Grand Coulee is shrub-steppe habitat, with an average annual rainfall of less than twelve inches (300 mm). The Lower Grand Coulee contains Park, Blue, Alkali, Lenore, and Soap lakes. Until recently, the Upper Coulee was dry.

The Columbia Basin Project changed this in 1952, using the ancient river bed as an irrigation distribution network. The Upper Grand Coulee was dammed and turned into Banks Lake. The lake is filled by pumps from the Grand Coulee Dam and forms the first leg of a one-hundred-mile (160 km) irrigation system. Canals, siphons, and more dams are used throughout the Columbia Basin, supplying over 600,000 acres (240,000 ha) of farm land.

Water has turned the Upper Coulee and surrounding region into a haven for wildlife, including bald eagles. Recreation is a side benefit and includes several lakes, mineral springs, hunting and fishing, and water sports of all kinds. Sun Lakes and Steamboat Rock state parks are both found in the Grand Coulee. However, the lake has also flooded a large area of natural habitat and native hunting grounds, displacing local Native Americans.

Channeled Scablands

From Wikipedia, the free encyclopedia
 
Map of the Channeled Scablands, showing the Cordilleran Ice Sheet, the maximum extent of Glacial Lake Missoula (eastern) and Glacial Lake Columbia (western), and the areas swept by the Missoula and Columbia floods
 
Loess island remnant in the Scablands

The Channeled Scablands were at one time a relatively barren and soil-free region of interconnected relict and dry flood channels, coulees and cataracts, eroded into Palouse loess and the typically flat-lying basalt flows that remain after cataclysmic floods in the southeastern part of the U.S. state of Washington. The Scablands were scoured by more than 40 cataclysmic floods during the Last Glacial Maximum, and by innumerable older cataclysmic floods over the last two million years. These floods were unleashed each time a large glacial lake drained and swept across eastern Washington and down the Columbia River Plateau during the Pleistocene epoch. The last of the cataclysmic floods occurred between 18,200 and 14,000 years ago.

Geologist J Harlen Bretz defined "scablands" in a series of papers written in the 1920s as lowlands diversified by a multiplicity of irregular channels and rock basins eroded into basalt. Flood waters eroded the loess cover, cutting large anastomosing channels that exposed bare basalt and created butte-and-basin topography. The buttes range in height from 30 to 100 m, while the rock basins range from 10 m in width up to the 11 km long and 30 m deep Rock Lake. Bretz further stated, "The channels run uphill and downhill, they unite and they divide, they head on the back-slopes and cut through the summit; they could not be more erratically and impossibly designed."

The debate on the origin of the Scablands that ensued for four decades became one of the great controversies in the history of earth science. The Scablands are also important to planetary scientists as perhaps the best terrestrial analog of the Martian outflow channels.

History

Bretz conducted research and published many papers during the 1920s describing the Channeled Scablands. His theories of how they were formed required short but immense floods – 500 cubic miles (2,100 km³) – for which Bretz had no explanation. Bretz's theories met with vehement opposition from geologists of the day, who tried to explain the features with uniformitarian theories.

In 1925, J.T. Pardee first suggested to Bretz that the draining of a glacial lake could account for flows of the magnitude needed. Pardee continued his research over the next 30 years, collecting and analyzing evidence that eventually identified Lake Missoula as the source of the Missoula Floods and the creator of the Channeled Scablands.

Pardee's and Bretz's theories were accepted only after decades of painstaking work and fierce scientific debate. Research on open-channel hydraulics in the 1970s put Bretz's theories on solid scientific ground. In 1979 Bretz received the Penrose Medal, the highest medal of the Geological Society of America, in recognition of having developed one of the great ideas in the earth sciences.

Geology

Distinct geomorphological features include coulees, dry falls, streamlined hills and islands of remnant loess, gravel fans and bars, and giant current ripples.

The term scabland refers to an area that has experienced fluvial erosion resulting in the loss of loess and other soils, leaving the land barren. Erosional downcutting by rivers creates V-shaped valleys, while glaciers carve out U-shaped valleys. The Channeled Scablands, by contrast, have a rectangular cross section, with flat plateaus and steep canyon sides, and are spread over immense areas of eastern Washington. The morphology of the scablands is butte-and-basin. The area that encompasses the Scablands has been estimated at between 1,500 and 2,000 square miles (3,900 and 5,200 km²), though those estimates may still be too conservative.

They exhibit a unique drainage pattern that appears to have an entrance in the northeast and an exit in the southwest. The Cordilleran Ice Sheet dammed up Glacial Lake Missoula at the Purcell Trench Lobe. A series of floods occurring over the period of 18,000 to 13,000 years ago swept over the landscape when the ice dam broke. The eroded channels also show an anastomosing, or braided, appearance.

The presence of Middle and Early Pleistocene Missoula flood deposits has been documented within the Channeled Scablands as well as in other parts of the Columbia Basin, e.g. the Othello Channels, Columbia River Gorge, Quincy Basin, Pasco Basin, and the Walla Walla Valley. Based on the presence of multiple interglacial calcretes interbedded with glaciofluvial flood deposits, magnetostratigraphy, optically stimulated luminescence dating, and unconformity-truncated clastic dikes, it has been estimated that the oldest of these megafloods flowed through the Channeled Scablands sometime before 1.5 million years ago. Because of the fragmentary nature of older glaciofluvial deposits, which have been largely removed by subsequent Missoula floods, the exact number of older Missoula floods, known as Ancient Cataclysmic Floods, that occurred during the Pleistocene cannot be estimated with any confidence. As many as 100 separate, cataclysmic Ice Age floods may have occurred during the last glaciation. There have been at least 17 complete interglacial-glacial cycles since about 1.77 million years ago, and perhaps as many as 44 interglacial-glacial cycles since the beginning of the Pleistocene about 2.58 million years ago. Presuming a dozen (or more) floods were associated with each glaciation, the total number of cataclysmic Ice Age Missoula floods that flowed through the Channeled Scablands during the entire Pleistocene Epoch could number in the hundreds, perhaps exceeding a thousand Ancient Cataclysmic Floods.

There are also immense potholes and ripple marks, much larger than those found on ordinary rivers. When these features were first studied, no known theories could explain their origin. The giant current ripples are between 3 and 49 feet (1 and 15 m) high and are regularly spaced, relatively uniform hills. Vast volumes of flowing water would be required to produce ripple marks of this magnitude, as they are larger-scale versions of the ripple marks found on streambeds that are typically only centimeters high. Large potholes were formed by swirling vortexes of water called kolks scouring and plucking out the bedrock.

The Scablands are littered with large boulders, called glacial erratics, that were rafted on ice and deposited by the glacial outburst flooding. The lithology of an erratic usually does not match the surrounding rock type, as erratics were often carried very far from their origin.

Industrial control system

From Wikipedia, the free encyclopedia

Industrial control system (ICS) is a general term that encompasses several types of control systems and associated instrumentation used for industrial process control.

Such systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems with many thousands of field connections. Systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions which are used to control a process through the final control elements (FCEs), such as control valves.
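
As a minimal sketch of that PV/SP/FCE loop (in C, with the sensor, valve, and process response as hypothetical stand-ins for real field I/O):

    #include <stdio.h>

    /* Hypothetical stubs standing in for real field I/O. */
    static double pv_sim = 40.0;                      /* simulated flow measurement */
    static double read_flow_sensor(void) { return pv_sim; }
    static void drive_control_valve(double pct) {     /* final control element */
        printf("valve output: %.1f%%\n", pct);
        pv_sim += 0.1 * (pct - 50.0);                 /* crude process response */
    }

    /* One scan: compare the process variable (PV) with the setpoint (SP)
       and derive a command for the final control element (FCE). */
    static void control_scan(double sp, double gain) {
        double pv = read_flow_sensor();
        double mv = 50.0 + gain * (sp - pv);          /* proportional-only control */
        if (mv > 100.0) mv = 100.0;                   /* clamp to valve travel */
        if (mv < 0.0)   mv = 0.0;
        drive_control_valve(mv);
    }

    int main(void) {
        for (int i = 0; i < 10; i++)
            control_scan(50.0, 2.0);                  /* SP = 50 flow units */
        return 0;
    }

A real controller would normally add integral and derivative terms (PID), but the compare-and-command structure is the same.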

Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems or distributed control systems (DCS), together with programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.

Discrete controllers

Panel-mounted controllers with integral displays. The process value (PV) and setvalue (SV), or setpoint, are on the same scale for easy comparison. The controller output is shown as MV (manipulated variable), with a range of 0–100%.
 
A control loop using a discrete controller. Field signals are flow rate measurement from the sensor, and control output to the valve. A valve positioner ensures correct valve operation.
 
The simplest control systems are based around small discrete controllers, each with a single control loop. These are usually panel-mounted, which allows direct viewing of the front panel and provides a means of manual intervention by the operator, either to control the process manually or to change control setpoints. Originally these were pneumatic controllers, a few of which are still in use, but nearly all are now electronic.

Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point at which the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective.

Distributed control systems

Functional manufacturing control levels. DCS (including PLCs or RTUs) operate on level 1. Level 2 contains the SCADA software and computing platform.

A distributed control system (DCS) is a digital processor control system for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, DCS becomes more cost effective than discrete controllers. Additionally a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralised control rooms and local on-plant monitoring and control.

A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders and allows the control equipment to be networked and thereby located locally to equipment being controlled to reduce cabling.

A DCS typically uses custom-designed processors as controllers, and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.

The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.

The field inputs and outputs can be either continuously varying analog signals, e.g. a 4–20 mA current loop, or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch.
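
A minimal sketch of handling both signal types (the 0–500 L/min range is an illustrative assumption):

    #include <stdio.h>

    /* Scale a 4-20 mA current-loop reading to engineering units:
       4 mA maps to the bottom of the range, 20 mA to the top. */
    static double scale_current_loop(double ma, double lo, double hi) {
        return lo + (ma - 4.0) * (hi - lo) / 16.0;
    }

    int main(void) {
        /* Analog input: a flow transmitter ranged 0-500 L/min (illustrative). */
        printf("12 mA -> %.1f L/min\n", scale_current_loop(12.0, 0.0, 500.0));

        /* Two-state input: a relay contact is simply open or closed. */
        int pump_contact_closed = 1;
        printf("pump: %s\n", pump_contact_closed ? "running" : "stopped");
        return 0;
    }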

Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals. 

SCADA systems

Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller set point changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.

The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes spanning multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks.

The SCADA software operates on a supervisory level as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded. 
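
A minimal sketch of that division of labour (the shared registers are hypothetical stand-ins for a real protocol such as Modbus, and the cooling-loop numbers are invented for illustration):

    #include <stdio.h>

    /* Registers the SCADA master would read/write in the PLC or RTU. */
    static double sp = 20.0;   /* setpoint, changed by SCADA operators */
    static double pv = 25.0;   /* process value, maintained by the PLC */

    /* PLC/RTU side: the real-time control loop runs here. */
    static void plc_loop(void) {
        double cooling = 2.0 * (pv - sp);     /* proportional cooling demand */
        if (cooling < 0.0) cooling = 0.0;
        pv -= 0.1 * cooling;                  /* crude simulated process */
    }

    /* SCADA side: supervisory only - setpoint changes, alarms, display. */
    static void scada_poll(void) {
        if (pv > 30.0) printf("ALARM: high temperature %.1f\n", pv);
        printf("pv=%.2f sp=%.2f\n", pv, sp);
    }

    int main(void) {
        sp = 22.0;                            /* operator changes the setpoint */
        for (int i = 0; i < 5; i++) { plc_loop(); scada_poll(); }
        return 0;
    }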

Programmable logic controllers

Siemens Simatic S7-400 system in a rack, left-to-right: power supply unit (PSU), CPU, interface module (IM) and communication processor (CP).
 
PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory.

History

A pre-DCS era central control room. Whilst the controls are centralised in one place, they are still discrete and not integrated into one system.
 
A DCS control room where plant information and controls are displayed on computer graphics screens. The operators are seated as they can view and control any part of the process from their screens, whilst retaining a plant overview.

Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-manned central control room. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and consolidated overview of the process.

However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high speed electronic signalling networks and electronic graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised.

The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high speed networking and a full suite of displays and control racks.

While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled the networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. They were soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.

SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control center. Most RTU systems always had some capacity to handle local control while the master station is not available. However, over the years RTU systems have grown more and more capable of handling local control.

The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O and are sufficiently reliable that some SCADA systems actually manage closed loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed.

In 1993, with the release of IEC-1131 (later to become IEC 61131-3), the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs, and fit well into multiple form factors such as DIN-rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions.

Microcontroller

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Microcontroller
 
The die from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip
 
Two ATmega microcontrollers

A microcontroller (MCU for microcontroller unit) is a small computer on a single metal-oxide-semiconductor (MOS) integrated circuit chip. In modern terminology, it is similar to, but less sophisticated than, a system on a chip (SoC); a SoC may include a microcontroller as one of its components. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is also often included on chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications consisting of various discrete chips. 

Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. In the context of the internet of things, microcontrollers are an economical and popular means of data collection, sensing and actuating the physical world as edge devices.

Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz, for low power consumption (single-digit milliwatts or microwatts). They generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.

History


Background

The origins of both the microprocessor and the microcontroller can be traced back to the invention of the MOSFET (metal-oxide-semiconductor field-effect transistor), also known as the MOS transistor. It was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and first demonstrated in 1960. The same year, Atalla proposed the concept of the MOS integrated circuit, which was an integrated circuit chip fabricated from MOSFETs. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.

The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released on a single MOS LSI chip in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. It was followed by the 4-bit Intel 4040, the 8-bit Intel 8008, and the 8-bit Intel 8080. All of these processors required several external chips to implement a working system, including memory and peripheral interface chips. As a result, the total system cost was several hundred (1970s US) dollars, making it impossible to economically computerize small appliances. MOS Technology introduced sub-$100 microprocessors, the 6501 and 6502, with the chief aim of addressing this economic obstacle, but these microprocessors still required external support, memory, and peripheral chips which kept the total system cost in the hundreds of dollars. 

Development

One book credits TI engineers Gary Boone and Michael Cochran with the successful creation of the first microcontroller in 1971. The result of their work was the TMS 1000, which became commercially available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems.

During the early-to-mid-1970s, Japanese electronics manufacturers began producing microcontrollers for automobiles, including 4-bit MCUs for in-car entertainment, automatic wipers, electronic locks, and dashboards, and 8-bit MCUs for engine control.

Partly in response to the existence of the single-chip TMS 1000, Intel developed a computer system on a chip optimized for control applications, the Intel 8048, with commercial parts first shipping in 1977. It combined RAM and ROM on the same chip with a microprocessor. Among numerous applications, this chip would eventually find its way into over one billion PC keyboards. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the microcontroller division's budget by over 25%.

Most microcontrollers at this time had concurrent variants. One had EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light. These erasable chips were often used for prototyping. The other variant was either a mask programmed ROM or a PROM variant which was only programmable once. For the latter, sometimes the designation OTP was used, standing for "one-time programmable". In an OTP microcontroller, the PROM was usually of identical type as the EPROM, but the chip package had no quartz window; because there was no way to expose the EPROM to ultraviolet light, it could not be erased. Because the erasable versions required ceramic packages with quartz windows, they were significantly more expensive than the OTP versions, which could be made in lower-cost opaque plastic packages. For the erasable variants, quartz was required, instead of less expensive glass, for its transparency to ultraviolet light—to which glass is largely opaque—but the main cost differentiator was the ceramic package itself.

In 1993, the introduction of EEPROM memory allowed microcontrollers (beginning with the Microchip PIC16C84) to be electrically erased quickly without an expensive package as required for EPROM, allowing both rapid prototyping, and in-system programming. (EEPROM technology had been available prior to this time, but the earlier EEPROM was more expensive and less durable, making it unsuitable for low-cost mass-produced microcontrollers.) The same year, Atmel introduced the first microcontroller using Flash memory, a special type of EEPROM. Other companies rapidly followed suit, with both memory types. 

Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors. 

Volume and cost

In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.

Over two billion 8-bit microcontrollers were sold in 1997, and according to Semico, over four billion 8-bit microcontrollers were sold in 2006. More recently, Semico has claimed the MCU market grew 36.5% in 2010 and 12% in 2011.

A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has about 30 microcontrollers. They can also be found in many electrical devices such as washing machines, microwave ovens, and telephones.
Historically, the 8-bit segment has dominated the MCU market [..] 16-bit microcontrollers became the largest volume MCU category in 2011, overtaking 8-bit devices for the first time that year [..] IC Insights believes the makeup of the MCU market will undergo substantial changes in the next five years with 32-bit devices steadily grabbing a greater share of sales and unit volumes. By 2017, 32-bit MCUs are expected to account for 55% of microcontroller sales [..] In terms of unit volumes, 32-bit MCUs are expected account for 38% of microcontroller shipments in 2017, while 16-bit devices will represent 34% of the total, and 4-/8-bit designs are forecast to be 28% of units sold that year.
The 32-bit MCU market is expected to grow rapidly due to increasing demand for higher levels of precision in embedded-processing systems and the growth in connectivity using the Internet. [..] In the next few years, complex 32-bit MCUs are expected to account for over 25% of the processing power in vehicles.
— IC Insights, MCU Market on Migration Path to 32-bit and ARM-based Devices
Cost to manufacture can be under $0.10 per unit.

Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under 0.03 USD in 2018, and some 32-bit microcontrollers around US$1 for similar quantities.

In 2012, following a global crisis marked by a worst-ever annual sales decline and recovery, with the average sales price plunging 17% year-over-year (the biggest reduction since the 1980s), the average price for a microcontroller was US$0.88 ($0.69 for 4-/8-bit, $0.59 for 16-bit, $1.76 for 32-bit).

In 2012, worldwide sales of 8-bit microcontrollers were around $4 billion, while 4-bit microcontrollers also saw significant sales.

In 2015, 8-bit microcontrollers could be bought for $0.311 (1,000 units), 16-bit for $0.385 (1,000 units), and 32-bit for $0.378 (1,000 units, but at $0.35 for 5,000).

In 2018, 8-bit microcontrollers can be bought for $0.03, 16-bit for $0.393 (1,000 units, but at $0.563 for 100 or $0.349 for full reel of 2,000), and 32-bit for $0.503 (1,000 units, but at $0.466 for 5,000). A lower-priced 32-bit microcontroller, in units of one, can be had for $0.891.

By 2018, the low-priced microcontrollers from 2015 listed above had all become more expensive (comparing 2018 prices with 2015 prices for those specific units): the 8-bit microcontroller could be bought for $0.319 (1,000 units), 2.6% higher; the 16-bit one for $0.464 (1,000 units), 21% higher; and the 32-bit one for $0.503 (1,000 units, but $0.466 for 5,000), 33% higher.

A PIC 18F8720 microcontroller in an 80-pin TQFP package
 

Smallest computer

On 21 June 2018, the "world's smallest computer" was announced by the University of Michigan. The device is a "0.04 mm³ 16 nW wireless and batteryless sensor system with integrated Cortex-M0+ processor and optical communication for cellular temperature measurement." It "measures just 0.3 mm to a side—dwarfed by a grain of rice. [...] In addition to the RAM and photovoltaics, the new computing devices have processors and wireless transmitters and receivers. Because they are too small to have conventional radio antennae, they receive and transmit data with visible light. A base station provides light for power and programming, and it receives the data." The device is one-tenth the size of IBM's previously claimed world-record computer, announced in March 2018, which is "smaller than a grain of salt", has a million transistors, costs less than $0.10 to manufacture, and, combined with blockchain technology, is intended for logistics and "crypto-anchor" (digital fingerprint) applications.

Embedded design

A microcontroller can be considered a self-contained system with a processor, memory and peripherals and can be used as an embedded system. The majority of microcontrollers in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems. 

While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom liquid-crystal displays, radio-frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human-interaction devices of any kind.

Interrupts

Microcontrollers must provide real-time (predictable, though not necessarily fast) response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR, or "interrupt handler") which will perform any processing required based on the source of the interrupt, before returning to the original instruction sequence. Possible interrupt sources are device dependent, and often include events such as an internal timer overflow, completing an analog to digital conversion, a logic level change on an input such as from a button being pressed, and data received on a communication link. Where power consumption is important as in battery devices, interrupts may also wake a microcontroller from a low-power sleep state where the processor is halted until required to do something by a peripheral event. 
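
A minimal sketch of that pattern (generic C; the vector registration and sleep call are hypothetical, since the real syntax is vendor-specific):

    #include <stdint.h>

    static volatile uint8_t button_pressed = 0;  /* shared with the ISR */

    /* Interrupt service routine: do the minimum work and return quickly.
       On a real part this would be registered in the vector table. */
    void button_isr(void) {
        button_pressed = 1;                      /* just record the event */
    }

    int main(void) {
        for (;;) {
            if (button_pressed) {
                button_pressed = 0;
                /* handle the press here, outside interrupt context */
            }
            /* enter_sleep(); -- hypothetical: halt the CPU until the
               next peripheral event wakes it, saving power */
        }
    }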

Programs

Typically micro-controller programs must fit in the available on-chip memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to convert both high-level and assembly language codes into a compact machine code for storage in the micro-controller's memory. Depending on the device, the program memory may be permanent, read-only memory that can only be programmed at the factory, or it may be field-alterable flash or erasable read-only memory. 

Manufacturers have often produced special versions of their micro-controllers in order to help the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions have been rare, replaced by EEPROM and flash, which are easier to use (they can be erased electronically) and cheaper to manufacture.

Other versions may be available where the ROM is accessed as an external device rather than as internal memory; however, these are becoming rare due to the widespread availability of cheap microcontroller programmers.

The use of field-programmable devices on a micro controller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product. 

Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be economical. These "mask programmed" parts have the program laid down in the same way as the logic of the chip, at the same time.

A customized micro-controller incorporates a block of digital logic that can be personalized for additional processing capability, peripherals and interfaces adapted to the requirements of the application. One example is the AT91CAP from Atmel.

Other microcontroller features

Microcontrollers usually contain from several to dozens of general purpose input/output pins (GPIO). GPIO pins are software configurable to either an input or an output state. When GPIO pins are configured to an input state, they are often used to read sensors or external signals. Configured to the output state, GPIO pins can drive external devices such as LEDs or motors, often indirectly, through external power electronics. 
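
A minimal sketch of GPIO use (the register addresses and bit layout are hypothetical; the real values come from the specific part's datasheet):

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO registers. */
    #define GPIO_DIR  (*(volatile uint32_t *)0x40020000u)  /* 1 = output    */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40020004u)  /* output levels */
    #define GPIO_IN   (*(volatile uint32_t *)0x40020008u)  /* input levels  */

    #define LED_PIN     (1u << 0)
    #define SWITCH_PIN  (1u << 1)

    int main(void) {
        GPIO_DIR |= LED_PIN;            /* configure pin 0 as an output     */
        GPIO_DIR &= ~SWITCH_PIN;        /* configure pin 1 as an input      */
        for (;;) {
            if (GPIO_IN & SWITCH_PIN)   /* read the switch...               */
                GPIO_OUT |= LED_PIN;    /* ...and drive the LED accordingly */
            else
                GPIO_OUT &= ~LED_PIN;
        }
    }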

Many embedded systems need to read sensors that produce analog signals. This is the purpose of the analog-to-digital converter (ADC). Since processors are built to interpret and process digital data, i.e. 1s and 0s, they cannot do anything with the analog signals that may be sent to them by a device. So the analog-to-digital converter is used to convert the incoming data into a form that the processor can recognize. A less common feature on some microcontrollers is a digital-to-analog converter (DAC) that allows the processor to output analog signals or voltage levels.
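
A minimal sketch of turning a raw conversion back into a physical quantity (assuming, purely for illustration, a 10-bit ADC and a 3.3 V reference):

    #include <stdint.h>
    #include <stdio.h>

    /* Convert a raw ADC count (0-1023 for a 10-bit converter) to volts,
       assuming a 3.3 V reference - both values are illustrative. */
    static float adc_to_volts(uint16_t counts) {
        return (float)counts * 3.3f / 1023.0f;
    }

    int main(void) {
        printf("512 counts = %.2f V\n", adc_to_volts(512));  /* ~mid-scale */
        return 0;
    }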

In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the programmable interval timer (PIT). A PIT may either count down from some value to zero, or up to the capacity of the count register, overflowing to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner on, the heater on, etc.

A dedicated pulse-width modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using lots of CPU resources in tight timer loops.
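
A minimal sketch (the period and compare registers are hypothetical, though most MCU timer/PWM blocks follow this pattern); once configured, the hardware generates the waveform with no further CPU involvement:

    #include <stdint.h>

    /* Hypothetical PWM registers: a period register and a compare
       register whose ratio sets the duty cycle. */
    #define PWM_PERIOD  (*(volatile uint16_t *)0x40030000u)
    #define PWM_COMPARE (*(volatile uint16_t *)0x40030004u)

    /* Set the duty cycle in percent of the configured period. */
    static void pwm_set_duty(uint8_t percent) {
        PWM_COMPARE = (uint16_t)(((uint32_t)PWM_PERIOD * percent) / 100u);
    }

    int main(void) {
        PWM_PERIOD = 999;    /* e.g. 1 kHz from a 1 MHz timer clock */
        pwm_set_duty(25);    /* run a motor at quarter power */
        for (;;) { }         /* CPU is free for other work */
    }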

A universal asynchronous receiver/transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. Dedicated on-chip hardware also often includes capabilities to communicate with other devices (chips) in digital formats such as Inter-Integrated Circuit (I²C), Serial Peripheral Interface (SPI), Universal Serial Bus (USB), and Ethernet.
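
A minimal polled-transmit sketch (register names and addresses are hypothetical); the UART hardware handles start/stop bits and baud-rate timing, so the CPU only feeds it bytes:

    #include <stdint.h>

    /* Hypothetical UART registers; names and addresses vary by vendor. */
    #define UART_DATA   (*(volatile uint8_t *)0x40040000u)
    #define UART_STATUS (*(volatile uint8_t *)0x40040004u)
    #define TX_READY    (1u << 5)

    /* Wait for the transmit holding register to empty, then write. */
    static void uart_putc(char c) {
        while (!(UART_STATUS & TX_READY)) { }   /* spin until ready */
        UART_DATA = (uint8_t)c;
    }

    static void uart_puts(const char *s) {
        while (*s) uart_putc(*s++);
    }

    int main(void) {
        uart_puts("hello\r\n");
        for (;;) { }
    }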

Higher integration

Die of a PIC12C508 8-bit, fully static, EEPROM/EPROM/ROM-based CMOS microcontroller manufactured by Microchip Technology using a 1200 nanometre process
 
Die of a STM32F100C4T6B ARM Cortex-M3 microcontroller with 16 kilobytes flash memory, 24 MHz central processing unit (CPU), motor control and Consumer Electronics Control (CEC) functions. Manufactured by STMicroelectronics.

Micro-controllers may not implement an external address or data bus as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package.

Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU and external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board, in addition to tending to decrease the defect rate for the finished assembly.

A micro-controller is a single integrated circuit that commonly integrates the processor, memory and peripherals described above. This integration drastically reduces the number of chips and the amount of wiring and circuit-board space that would be needed to produce equivalent systems using separate chips. Furthermore, on low-pin-count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software. This allows a part to be used in a wider variety of applications than if pins had dedicated functions.

Micro-controllers have proved to be highly popular in embedded systems since their introduction in the 1970s. 

Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example: 12-bit instructions used with 8-bit data registers. 

The decision of which peripheral to integrate is often difficult. The microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.

Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose built for control applications. A micro-controller instruction set usually has many instructions intended for bit manipulation (bit-wise operations) to make control programs more compact. For example, a general purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a micro-controller could have a single instruction to provide that commonly required function.
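
For example (generic C; on a part with bit-addressable registers, such as the 8051 with its JB instruction, the test below can compile to a single bit-test-and-branch, where a general-purpose processor would typically need separate load, mask, and branch instructions):

    #include <stdint.h>

    #define MOTOR_FAULT (1u << 3)   /* fault flag in a status register */

    /* Test one bit and branch; on fault, clear the run bit. */
    static void check_status(volatile uint8_t *status, volatile uint8_t *ctrl) {
        if (*status & MOTOR_FAULT) {
            *ctrl &= (uint8_t)~(1u << 0);
        }
    }

    int main(void) {
        volatile uint8_t status = MOTOR_FAULT, ctrl = 0x01;
        check_status(&status, &ctrl);
        return (int)ctrl;           /* 0: run bit cleared on fault */
    }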

Microcontrollers traditionally do not have a math coprocessor, so floating-point arithmetic is performed by software. However, some recent designs do include an FPU and DSP-optimized features. An example would be Microchip's MIPS-based PIC32 line.

Programming environments

Microcontrollers were originally programmed only in assembly language, but various high-level programming languages, such as C, Python and JavaScript, are now also in common use to target microcontrollers and embedded systems. Compilers for general purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.

Microcontrollers with specialty hardware may require their own non-standard dialects of C, such as SDCC for the 8051, which can prevent the use of standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters may also contain nonstandard features, such as MicroPython, although a fork, CircuitPython, has sought to move hardware dependencies into libraries and keep the language closer to standard CPython.

Interpreter firmware is also available for some microcontrollers. For example, BASIC was available on the early Intel 8052, and BASIC and FORTH on the Zilog Z8, as well as on some modern devices. Typically these interpreters support interactive programming.

Simulators are available for some microcontrollers. These allow a developer to analyze how the microcontroller and its program should behave if they were using the actual part. A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. While most simulators are limited in being unable to simulate much other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.

Recent microcontrollers are often integrated with on-chip debug circuitry that, when accessed by an in-circuit emulator (ICE) via JTAG, allows debugging of the firmware with a debugger. A real-time ICE may allow viewing and/or manipulating of internal states while running. A tracing ICE can record the executed program and MCU states before and after a trigger point.

Types

As of 2008, there were several dozen microcontroller architectures and vendors. Many others exist, some of which are used in a very narrow range of applications or are more like application processors than microcontrollers. The microcontroller market is extremely fragmented, with numerous vendors, technologies, and markets. Note that many vendors sell, or have sold, multiple architectures.

Interrupt latency

In contrast to general-purpose computers, microcontrollers used in embedded systems often seek to optimize interrupt latency over instruction throughput. Issues include both reducing the latency and making it more predictable (to support real-time control).

When an electronic device causes an interrupt, during the context switch the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run. They must also be restored after that interrupt handler is finished. If there are more processor registers, this saving and restoring process may take more time, increasing the latency. (If an ISR does not require the use of some registers, it may simply leave them alone rather than saving and restoring them, so in that case those registers are not involved with the latency.) Ways to reduce such context/restore latency include having relatively few registers in their central processing units (undesirable because it slows down most non-interrupt processing substantially), or at least having the hardware not save them all (this fails if the software then needs to compensate by saving the rest "manually"). Another technique involves spending silicon gates on "shadow registers": One or more duplicate registers used only by the interrupt software, perhaps supporting a dedicated stack. 

Other factors affecting interrupt latency include:
  • Cycles needed to complete current CPU activities. To minimize those costs, microcontrollers tend to have short pipelines (often three instructions or less), small write buffers, and ensure that longer instructions are continuable or restartable. RISC design principles ensure that most instructions take the same number of cycles, helping avoid the need for most such continuation/restart logic.
  • The length of any critical section that needs to be interrupted. Entry to a critical section restricts concurrent data structure access. When a data structure must be accessed by an interrupt handler, the critical section must block that interrupt. Accordingly, interrupt latency is increased by however long that interrupt is blocked. When there are hard external constraints on system latency, developers often need tools to measure interrupt latencies and track down which critical sections cause slowdowns.
    • One common technique just blocks all interrupts for the duration of the critical section. This is easy to implement, but sometimes critical sections get uncomfortably long (see the sketch after this list).
    • A more complex technique just blocks the interrupts that may trigger access to that data structure. This is often based on interrupt priorities, which tend to not correspond well to the relevant system data structures. Accordingly, this technique is used mostly in very constrained environments.
    • Processors may have hardware support for some critical sections. Examples include supporting atomic access to bits or bytes within a word, or other atomic access primitives like the LDREX/STREX exclusive access primitives introduced in the ARMv6 architecture.
  • Interrupt nesting. Some microcontrollers allow higher priority interrupts to interrupt lower priority ones. This allows software to manage latency by giving time-critical interrupts higher priority (and thus lower and more predictable latency) than less-critical ones.
  • Trigger rate. When interrupts occur back-to-back, microcontrollers may avoid an extra context save/restore cycle by a form of tail call optimization.
Lower end microcontrollers tend to support fewer interrupt latency controls than higher end ones. 
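
A minimal sketch of the first technique above, blocking all interrupts around shared data (the interrupt-control calls are stubs; on ARM Cortex-M, for instance, they would be the CMSIS __disable_irq()/__enable_irq() intrinsics):

    #include <stdint.h>

    /* Stubs for the toolchain-specific interrupt controls. */
    static void disable_interrupts(void) { /* e.g. __disable_irq() */ }
    static void enable_interrupts(void)  { /* e.g. __enable_irq()  */ }

    static volatile uint32_t event_count;   /* also updated by an ISR */

    /* Critical section: the read-modify-write below must not interleave
       with the ISR. Every cycle spent here adds directly to the
       worst-case interrupt latency. */
    uint32_t read_and_clear_events(void) {
        disable_interrupts();
        uint32_t n = event_count;
        event_count = 0;
        enable_interrupts();
        return n;
    }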

Memory technology

Two different kinds of memory are commonly used with microcontrollers: a non-volatile memory for storing firmware, and a read-write memory for temporary data.

Data

From the earliest microcontrollers to today, six-transistor SRAM is almost always used as the read/write working memory, with a few more transistors per bit used in the register file. FRAM or MRAM could potentially replace it, as they are 4 to 10 times denser, which would make them more cost-effective.

In addition to the SRAM, some microcontrollers also have internal EEPROM for data storage; even those that do not have any (or do not have enough) are often connected to an external serial EEPROM chip (as in the BASIC Stamp) or an external serial flash memory chip.

A few recent microcontrollers beginning in 2003 have "self-programmable" flash memory.

Firmware

The earliest microcontrollers used mask ROM to store firmware. Later microcontrollers (such as the early versions of the Freescale 68HC11 and early PIC microcontrollers) had EPROM memory, which used a translucent window to allow erasure via UV light, while production versions had no such window, being OTP (one-time-programmable). Firmware updates were equivalent to replacing the microcontroller itself, thus many products were not upgradeable.

The Motorola MC68HC805 was the first microcontroller to use EEPROM to store its firmware. EEPROM microcontrollers became more popular in 1993, when Microchip introduced the PIC16C84 and Atmel introduced an 8051-core microcontroller that was the first to use NOR flash memory to store its firmware. Today's microcontrollers almost exclusively use flash memory, with a few models using FRAM and some ultra-low-cost parts still using OTP or mask ROM.

Behavioral modernity

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Beh...