
Wednesday, August 24, 2022

Distributed generation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Distributed_generation

Distributed generation, also distributed energy, on-site generation (OSG), or district/decentralized energy, is electrical generation and storage performed by a variety of small, grid-connected or distribution system-connected devices referred to as distributed energy resources (DER).

Conventional power stations, such as coal-fired, gas, and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular, and more flexible technologies that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance, they are referred to as hybrid power systems.

DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role for the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables the collection of energy from many sources and may lower environmental impacts and improve the security of supply.

One of the major issues with the integration of DER such as solar power and wind power is the uncertain nature of these electricity resources. This uncertainty can cause several problems in the distribution system: (i) it makes supply-demand relationships extremely complex and requires complicated optimization tools to balance the network; (ii) it puts higher pressure on the transmission network; and (iii) it may cause reverse power flow from the distribution system to the transmission system.

Microgrids are modern, localized, small-scale grids, contrary to the traditional, centralized electricity grid (macrogrid). Microgrids can disconnect from the centralized grid and operate autonomously, strengthen grid resilience, and help mitigate grid disturbances. They are typically low-voltage AC grids, often use diesel generators, and are installed by the community they serve. Microgrids increasingly employ a mixture of different distributed energy resources, such as solar hybrid power systems, which significantly reduce the amount of carbon emitted.

Overview

Historically, central plants have been an integral part of the electric grid, in which large generating facilities are specifically located either close to resources or otherwise located far from populated load centers. These, in turn, supply the traditional transmission and distribution (T&D) grid that distributes bulk power to load centers and from there to consumers. These were developed when the costs of transporting fuel and integrating generating technologies into populated areas far exceeded the cost of developing T&D facilities and tariffs. Central plants are usually designed to take advantage of available economies of scale in a site-specific manner, and are built as "one-off," custom projects.

These economies of scale began to fail in the late 1960s and, by the start of the 21st century, Central Plants could arguably no longer deliver competitively cheap and reliable electricity to more remote customers through the grid, because the plants had come to cost less than the grid and had become so reliable that nearly all power failures originated in the grid. Thus, the grid had become the main driver of remote customers’ power costs and power quality problems, which became more acute as digital equipment required extremely reliable electricity. Efficiency gains no longer come from increasing generating capacity, but from smaller units located closer to sites of demand.

For example, coal power plants are built away from cities to prevent their heavy air pollution from affecting the populace. In addition, such plants are often built near collieries to minimize the cost of transporting coal. Hydroelectric plants are by their nature limited to operating at sites with sufficient water flow.

Low pollution is a crucial advantage of combined cycle plants that burn natural gas. The low pollution permits the plants to be near enough to a city to provide district heating and cooling.

Distributed energy resources are mass-produced, small, and less site-specific. Their development arose out of:

  1. concerns over perceived externalized costs of central plant generation, particularly environmental concerns;
  2. the increasing age, deterioration, and capacity constraints upon T&D for bulk power;
  3. the increasing relative economy of mass production of smaller appliances over heavy manufacturing of larger units and on-site construction;
  4. higher relative prices for energy, along with higher overall complexity and total costs for regulatory oversight, tariff administration, and metering and billing.

Capital markets have come to realize that right-sized resources, for individual customers, distribution substations, or microgrids, are able to offer important but little-known economic advantages over central plants. Smaller units offered greater economies from mass-production than big ones could gain through unit size. The increased value of these resources, owing to improvements in financial risk, engineering flexibility, security, and environmental quality, can often more than offset their apparent cost disadvantages. Distributed generation (DG), vis-à-vis central plants, must be justified on a life-cycle basis. Unfortunately, many of the direct, and virtually all of the indirect, benefits of DG are not captured within traditional utility cash-flow accounting.

While the levelized cost of DG is typically more expensive than conventional, centralized sources on a kilowatt-hour basis, this does not consider negative aspects of conventional fuels. The additional premium for DG is rapidly declining as demand increases and technology progresses, and sufficient and reliable demand may bring economies of scale, innovation, competition, and more flexible financing, that could make DG clean energy part of a more diversified future.

DG reduces the amount of energy lost in transmitting electricity because the electricity is generated very near where it is used, perhaps even in the same building. This also reduces the size and number of power lines that must be constructed.

Typical DER systems in a feed-in tariff (FIT) scheme have low maintenance, low pollution and high efficiencies. In the past, these traits required dedicated operating engineers and large complex plants to reduce pollution. However, modern embedded systems can provide these traits with automated operation and renewable energy, such as solar, wind and geothermal. This lowers the minimum size at which a power plant can operate profitably.

Grid parity

Grid parity occurs when an alternative energy source can generate electricity at a levelized cost (LCOE) that is less than or equal to the end consumer's retail price. Reaching grid parity is considered to be the point at which an energy source becomes a contender for widespread development without subsidies or government support. Since the 2010s, grid parity for solar and wind has become a reality in a growing number of markets, including Australia, several European countries, and some states in the U.S.
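
A rough sense of grid parity can be had from a back-of-the-envelope LCOE comparison. The Python sketch below uses illustrative, assumed figures (system cost, output, discount rate, retail tariff), not data from any particular market:

```python
# Minimal LCOE sketch; all inputs are assumed, illustrative figures.

def lcoe(capex, annual_opex, annual_kwh, discount_rate, years):
    """Levelized cost of electricity: discounted lifetime costs
    divided by discounted lifetime energy."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_kwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Hypothetical 5 kW rooftop PV system (figures assumed for illustration).
cost_per_kwh = lcoe(capex=7_500,       # installed cost, $
                    annual_opex=75,    # maintenance, $/year
                    annual_kwh=6_500,  # annual output, kWh
                    discount_rate=0.05,
                    years=25)
retail_price = 0.18                    # assumed retail tariff, $/kWh
print(f"LCOE ${cost_per_kwh:.3f}/kWh, at grid parity: {cost_per_kwh <= retail_price}")
```

With these assumed numbers the LCOE comes out near $0.09/kWh, below the assumed tariff, which is what "grid parity" means in practice.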

Technologies

Distributed energy resource (DER) systems are small-scale power generation or storage technologies (typically in the range of 1 kW to 10,000 kW) used to provide an alternative to, or an enhancement of, the traditional electric power system. DER systems typically are characterized by high initial capital costs per kilowatt. DER systems can also serve as storage devices and are then often called distributed energy storage systems (DESS).

DER systems may include the following devices/technologies:

Cogeneration

Distributed cogeneration sources use steam turbines, natural gas-fired fuel cells, microturbines or reciprocating engines to turn generators. The hot exhaust is then used for space or water heating, or to drive an absorptive chiller for cooling such as air-conditioning. In addition to natural gas-based schemes, distributed energy projects can also include other renewable or low carbon fuels including biofuels, biogas, landfill gas, sewage gas, coal bed methane, syngas and associated petroleum gas.

Delta-ee consultants stated in 2013 that fuel cell micro combined heat and power, with 64% of global sales, passed conventional systems in sales in 2012. 20,000 units were sold in Japan in 2012 within the Ene Farm project. With a lifetime of around 60,000 hours for PEM fuel cell units, which shut down at night, this equates to an estimated lifetime of between ten and fifteen years, at a price of $22,600 before installation. For 2013, a state subsidy for 50,000 units was in place.

In addition, molten carbonate fuel cells and solid oxide fuel cells using natural gas, such as those from FuelCell Energy and the Bloom Energy Server, or waste-to-energy processes such as the Gate 5 Energy System, are used as distributed energy resources.

Solar power

Photovoltaics, by far the most important solar technology for distributed generation of solar power, uses solar cells assembled into solar panels to convert sunlight into electricity. It is a fast-growing technology doubling its worldwide installed capacity every couple of years. PV systems range from distributed, residential, and commercial rooftop or building integrated installations, to large, centralized utility-scale photovoltaic power stations.

The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its sunlight-to-electricity conversion efficiency, reduced the installation cost per watt as well as its energy payback time (EPBT) and levelized cost of electricity (LCOE), and reached grid parity in at least 19 different markets in 2014.

Like most renewable energy sources, and unlike coal and nuclear, solar PV is variable and non-dispatchable, but it has no fuel costs, no operating pollution, and greatly reduced mining-safety and operating-safety issues. It produces peak power around local noon each day and its capacity factor is around 20 percent.
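
The 20 percent capacity factor translates directly into expected annual output; the short calculation below uses an assumed 5 kW residential array as an example:

```python
# Annual energy implied by a ~20% capacity factor (assumed 5 kW array).
nameplate_kw = 5.0
capacity_factor = 0.20          # figure quoted above
hours_per_year = 8760
annual_kwh = nameplate_kw * hours_per_year * capacity_factor
print(f"{annual_kwh:.0f} kWh per year")   # 8760 kWh for this example
```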

Wind power

Wind turbines can be distributed energy resources or they can be built at utility scale. These have low maintenance and low pollution, but distributed wind, unlike utility-scale wind, has much higher costs than other sources of energy. As with solar, wind energy is variable and non-dispatchable. Wind towers and generators have substantial insurable liabilities caused by high winds, but good operating safety. Distributed generation from wind hybrid power systems combines wind power with other DER systems. One such example is the integration of wind turbines into solar hybrid power systems, as wind tends to complement solar because the peak operating times for each system occur at different times of the day and year.

Hydro power

Hydroelectricity is the most widely used form of renewable energy and its potential has already been explored to a large extent or is compromised due to issues such as environmental impacts on fisheries, and increased demand for recreational access. However, using modern 21st century technology, such as wave power, can make large amounts of new hydropower capacity available, with minor environmental impact.

Modular and scalable next-generation kinetic energy turbines can be deployed in arrays to serve needs at a residential, commercial, industrial, municipal or even regional scale. Microhydro kinetic generators require neither dams nor impoundments, as they utilize the kinetic energy of water motion, either waves or flow. No construction is needed on the shoreline or sea bed, which minimizes environmental impacts to habitats and simplifies the permitting process. Such power generation also has minimal environmental impact, and non-traditional microhydro applications can be tethered to existing construction such as docks, piers, bridge abutments, or similar structures.

Waste-to-energy

Municipal solid waste (MSW) and natural waste, such as sewage sludge, food waste and animal manure will decompose and discharge methane-containing gas that can be collected and used as fuel in gas turbines or micro turbines to produce electricity as a distributed energy resource. Additionally, a California-based company, Gate 5 Energy Partners, Inc. has developed a process that transforms natural waste materials, such as sewage sludge, into biofuel that can be combusted to power a steam turbine that produces power. This power can be used in lieu of grid-power at the waste source (such as a treatment plant, farm or dairy).

Energy storage

A distributed energy resource is not limited to the generation of electricity but may also include a device to store distributed energy (DE). Distributed energy storage system (DESS) applications include several types of battery, pumped hydro, compressed air, and thermal energy storage. Energy storage for commercial applications is accessible through programs such as energy storage as a service (ESaaS).

PV storage

Common rechargeable battery technologies used in today's PV systems include the valve-regulated lead-acid battery (lead–acid battery), nickel–cadmium and lithium-ion batteries. Compared to the other types, lead-acid batteries have a shorter lifetime and lower energy density. However, due to their high reliability, low self-discharge (4–6% per year) as well as low investment and maintenance costs, they are currently the predominant technology used in small-scale, residential PV systems, as lithium-ion batteries are still being developed and are about 3.5 times as expensive as lead-acid batteries. Furthermore, as storage devices for PV systems are stationary, the lower energy and power density and therefore higher weight of lead-acid batteries are not as critical as for electric vehicles.
However, lithium-ion batteries, such as the Tesla Powerwall, have the potential to replace lead-acid batteries in the near future, as they are being intensively developed and lower prices are expected due to economies of scale provided by large production facilities such as the Gigafactory 1. In addition, the Li-ion batteries of plug-in electric cars may serve as future storage devices; since most vehicles are parked an average of 95 percent of the time, their batteries could be used to let electricity flow from the car to the power lines and back. Other rechargeable batteries considered for distributed PV systems include sodium–sulfur and vanadium redox batteries, two prominent types of molten salt and flow battery, respectively.

Vehicle-to-grid

Future generations of electric vehicles may have the ability to deliver power from the battery to the grid (vehicle-to-grid) when needed. An electric vehicle network has the potential to serve as a DESS.

Flywheels

An advanced flywheel energy storage (FES) stores the electricity generated from distributed resources in the form of angular kinetic energy by accelerating a rotor (flywheel) to a very high speed of about 20,000 to over 50,000 rpm in a vacuum enclosure. Flywheels can respond quickly as they store and feed back electricity into the grid in a matter of seconds.
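
The stored energy follows E = ½Iω²; the sketch below evaluates it for an assumed rotor (mass, radius, and speed are illustrative, with the speed taken from the range quoted above):

```python
import math

# Flywheel kinetic energy, E = 0.5 * I * omega^2, for an assumed rotor.
mass_kg = 100.0                             # rotor mass (assumed)
radius_m = 0.25                             # rotor radius (assumed)
rpm = 40_000                                # within the 20,000-50,000 rpm range

inertia = 0.5 * mass_kg * radius_m ** 2     # solid-cylinder approximation, kg*m^2
omega = rpm * 2 * math.pi / 60              # angular speed, rad/s
energy_joules = 0.5 * inertia * omega ** 2

print(f"{energy_joules / 3.6e6:.1f} kWh stored")   # 1 kWh = 3.6e6 J
```

For this assumed rotor the result is roughly 7-8 kWh, which illustrates why flywheels suit short, fast bursts of power rather than bulk energy storage.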

Integration with the grid

For reasons of reliability, distributed generation resources would be interconnected to the same transmission grid as central stations. Various technical and economic issues occur in the integration of these resources into a grid. Technical problems arise in the areas of power quality, voltage stability, harmonics, reliability, protection, and control. The behavior of protective devices on the grid must be examined for all combinations of distributed and central station generation. A large-scale deployment of distributed generation may affect grid-wide functions such as frequency control and allocation of reserves. As a result, smart grid functions, virtual power plants, and grid energy storage such as power-to-gas stations are added to the grid. Conflicts occur between utilities and resource managing organizations.

Each distributed generation resource has its own integration issues. Solar PV and wind power both have intermittent and unpredictable generation, so they create many stability issues for voltage and frequency. These voltage issues affect mechanical grid equipment, such as load tap changers, which respond too often and wear out much more quickly than utilities anticipated. Also, without any form of energy storage, during times of high solar generation companies must rapidly increase other generation around the time of sunset to compensate for the loss of solar generation. This high ramp rate produces what the industry terms the duck curve, which is a major concern for grid operators in the future.

Storage can fix these issues if it can be implemented. Flywheels have been shown to provide excellent frequency regulation, and they are highly cyclable compared to batteries, meaning they maintain the same energy and power capability after a significant number of cycles (on the order of 10,000). Short-term-use batteries, at a large enough scale, can help to flatten the duck curve, prevent generator use fluctuation, and help maintain the voltage profile. However, cost is a major limiting factor for energy storage, as each technique is prohibitively expensive to produce at scale and comparatively not energy dense compared to liquid fossil fuels.

Finally, intelligent hybrid inverters are another method of aiding the integration of photovoltaics into distributed generation. Intelligent hybrid inverters store energy when there is more production than consumption; when consumption is high, they supply power, relieving the distribution system.
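
The evening ramp behind the duck curve can be shown with a short net-load calculation. The hourly demand and solar profiles below are assumed shapes chosen only to illustrate the effect, not measured data:

```python
# Net-load ("duck curve") sketch: demand minus solar over one day.
# Hourly figures are assumed, illustrative shapes, not measured data.
demand_mw = [700, 650, 620, 610, 620, 680, 780, 850, 880, 900, 910, 920,
             930, 920, 910, 920, 960, 1050, 1150, 1200, 1150, 1000, 850, 750]
solar_mw = [0, 0, 0, 0, 0, 0, 30, 120, 260, 380, 450, 480,
            470, 430, 350, 240, 120, 30, 0, 0, 0, 0, 0, 0]

net_load = [d - s for d, s in zip(demand_mw, solar_mw)]
ramps = [net_load[h + 1] - net_load[h] for h in range(23)]
worst = max(range(23), key=lambda h: ramps[h])
print(f"Steepest upward ramp: {ramps[worst]} MW/h between {worst}:00 and {worst + 1}:00")
```

The steepest upward ramp lands in the late afternoon, when solar output falls off while demand is still rising, which is exactly the part of the curve that concerns grid operators.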

Another approach does not demand grid integration: stand alone hybrid systems.

Mitigating voltage and frequency issues of DG integration

There have been some efforts to mitigate voltage and frequency issues due to increased implementation of DG. Most notably, IEEE 1547 sets the standard for interconnection and interoperability of distributed energy resources. IEEE 1547 sets specific curves signaling when to clear a fault as a function of the time after the disturbance and the magnitude of the voltage irregularity or frequency irregularity. Voltage issues also give legacy equipment the opportunity to perform new operations. Notably, inverters can regulate the voltage output of DGs. Changing inverter impedances can change voltage fluctuations of DG, meaning inverters have the ability to control DG voltage output. To reduce the effect of DG integration on mechanical grid equipment, transformers and load tap changers have the potential to implement specific tap operation vs. voltage operation curves mitigating the effect of voltage irregularities due to DG. That is, load tap changers respond to voltage fluctuations that last for a longer period than voltage fluctuations created from DG equipment.
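
As a rough illustration of how such curves are applied, the sketch below checks a voltage excursion against a band-and-clearing-time table. The thresholds and times are placeholders for illustration, not the values defined in IEEE 1547 itself:

```python
# Illustrative ride-through check; the bands and clearing times below are
# placeholders, not the actual IEEE 1547 tables.
TRIP_CURVE = [
    # (low p.u. voltage, high p.u. voltage, max seconds in band before tripping)
    (0.00, 0.50, 0.16),
    (0.50, 0.88, 2.00),
    (1.10, 1.20, 1.00),
    (1.20, 9.99, 0.16),
]

def must_trip(voltage_pu, seconds_in_condition):
    """Return True if the DER should disconnect for this voltage excursion."""
    for low, high, max_time in TRIP_CURVE:
        if low <= voltage_pu < high and seconds_in_condition > max_time:
            return True
    return False

print(must_trip(0.70, 0.5))   # sag, still riding through -> False
print(must_trip(0.70, 3.0))   # sustained sag -> True
```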

Stand alone hybrid systems

It is now possible to combine technologies such as photovoltaics, batteries and cogeneration to make stand alone distributed generation systems.

Recent work has shown that such systems have a low levelized cost of electricity.

Many authors now think that these technologies may enable mass-scale grid defection because consumers can produce electricity using off-grid systems primarily made up of solar photovoltaic technology. For example, the Rocky Mountain Institute has proposed that there may be wide-scale grid defection. This is backed up by studies in the Midwest.

Cost factors

Cogenerators are also more expensive per watt than central generators. They find favor because most buildings already burn fuels, and cogeneration can extract more value from the fuel. Local production has no electricity transmission losses on long-distance power lines or energy losses from the Joule effect in transformers, where in general 8–15% of the energy is lost (see also cost of electricity by source).
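
A quick calculation makes the 8–15% figure concrete; the generated energy below is an arbitrary illustrative amount:

```python
# How much of an illustrative 1,000 kWh survives 8-15% T&D losses.
generated_kwh = 1000.0
for loss in (0.08, 0.15):
    delivered = generated_kwh * (1 - loss)
    print(f"{loss:.0%} losses: {delivered:.0f} kWh delivered, "
          f"{generated_kwh - delivered:.0f} kWh avoided by generating on site")
```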

Some larger installations utilize combined cycle generation. Usually this consists of a gas turbine whose exhaust boils water for a steam turbine in a Rankine cycle. The condenser of the steam cycle provides the heat for space heating or an absorptive chiller. Combined cycle plants with cogeneration have the highest known thermal efficiencies, often exceeding 85%.
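
The quoted figure can be reproduced from component values: the steam cycle runs on the gas turbine's exhaust heat, and cogeneration captures much of what remains. The component efficiencies below are assumed, typical-order values, not data for any specific plant:

```python
# Rough energy-utilization estimate for a combined cycle plant with cogeneration.
# Component efficiencies are assumed, illustrative values.
eta_gas_turbine = 0.38      # fraction of fuel energy converted by the gas turbine
eta_steam_cycle = 0.30      # steam turbine efficiency on the recovered exhaust heat
heat_recovery = 0.80        # share of remaining heat captured for district heating

electric_eff = eta_gas_turbine + (1 - eta_gas_turbine) * eta_steam_cycle
useful_heat = (1 - electric_eff) * heat_recovery
total_utilization = electric_eff + useful_heat

print(f"Electric efficiency:    {electric_eff:.0%}")        # ~57%
print(f"Total fuel utilization: {total_utilization:.0%}")   # ~91%, consistent with >85%
```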

In countries with high pressure gas distribution, small turbines can be used to bring the gas pressure to domestic levels whilst extracting useful energy. If the UK were to implement this countrywide an additional 2-4 GWe would become available. (Note that the energy is already being generated elsewhere to provide the high initial gas pressure - this method simply distributes the energy via a different route.)
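
An order-of-magnitude sense of the recoverable energy comes from the ideal (isentropic) expansion work; the gas properties, pressures, and flow rate below are assumptions for illustration only:

```python
# Ideal work recoverable when letting gas down from a high-pressure main to
# near-domestic pressure through a turboexpander. All figures are assumed.
cp = 2200.0                 # J/(kg*K), approximate specific heat of natural gas
gamma = 1.3                 # approximate heat capacity ratio
t_in = 288.0                # K, inlet temperature (preheating is needed in practice)
p_in, p_out = 40e5, 1e5     # Pa, assumed inlet and outlet pressures

w_per_kg = cp * t_in * (1 - (p_out / p_in) ** ((gamma - 1) / gamma))  # J/kg, isentropic
mass_flow = 10.0            # kg/s through one hypothetical letdown station
print(f"~{w_per_kg * mass_flow / 1e6:.1f} MW ideal output per station")
```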

Microgrid

A microgrid is a localized grouping of electricity generation, energy storage, and loads that normally operates connected to a traditional centralized grid (macrogrid). This single point of common coupling with the macrogrid can be disconnected. The microgrid can then function autonomously. Generation and loads in a microgrid are usually interconnected at low voltage and it can operate in DC, AC, or the combination of both. From the point of view of the grid operator, a connected microgrid can be controlled as if it were one entity.

Microgrid generation resources can include stationary batteries, fuel cells, solar, wind, or other energy sources. The multiple dispersed generation sources and ability to isolate the microgrid from a larger network would provide highly reliable electric power. Produced heat from generation sources such as microturbines could be used for local process heating or space heating, allowing flexible trade off between the needs for heat and electric power.

Micro-grids were proposed in the wake of the July 2012 India blackout:

  • Small micro-grids covering 30–50 km radius
  • Small power stations of 5–10 MW to serve the micro-grids
  • Generate power locally to reduce dependence on long distance transmission lines and cut transmission losses.

Micro-grids have seen implementation in a number of communities over the world. For example, Tesla has implemented a solar micro-grid in the Samoan island of Ta'u, powering the entire island with solar energy. This localized production system has helped save over 380 cubic metres (100,000 US gal) of diesel fuel. It is also able to sustain the island for three whole days if the sun were not to shine at all during that period. This is a great example of how micro-grid systems can be implemented in communities to encourage renewable resource usage and localized production.

To plan and install microgrids correctly, engineering modelling is needed. Multiple simulation and optimization tools exist to model the economic and electric effects of microgrids. A widely used economic optimization tool is the Distributed Energy Resources Customer Adoption Model (DER-CAM) from Lawrence Berkeley National Laboratory. Another frequently used commercial economic modelling tool is HOMER Energy, originally designed by the National Renewable Energy Laboratory. There are also some power flow and electrical design tools guiding microgrid developers. The Pacific Northwest National Laboratory designed the publicly available GridLAB-D tool and the Electric Power Research Institute (EPRI) designed OpenDSS to simulate the distribution system (for microgrids). A professional integrated DER-CAM and OpenDSS version is available via BankableEnergy. A European tool that can be used for electrical, cooling, heating, and process heat demand simulation is EnergyPLAN from Aalborg University, Denmark.
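
The kind of question these tools answer can be sketched with a toy merit-order dispatch. This is not the DER-CAM or HOMER interface, and the costs, capacities, and loads are assumptions chosen for illustration:

```python
# Toy microgrid dispatch: meet each hour's load from the cheapest resources.
# Resource costs and capacities are assumed, illustrative values.
resources = [                       # (name, marginal cost $/kWh, capacity kW)
    ("solar PV", 0.00, 40.0),
    ("battery", 0.05, 25.0),
    ("diesel genset", 0.30, 60.0),
]

def dispatch(load_kw):
    """Greedy merit-order dispatch of a single hour of load."""
    plan, remaining = [], load_kw
    for name, cost, capacity in sorted(resources, key=lambda r: r[1]):
        used = min(capacity, remaining)
        if used > 0:
            plan.append((name, round(used, 1), round(used * cost, 2)))
            remaining -= used
    return plan, remaining           # remaining > 0 would mean unserved load

for hour_load in (30.0, 90.0):
    plan, unserved = dispatch(hour_load)
    print(f"{hour_load} kW load -> {plan}, unserved: {unserved}")
```

Real tools add power-flow constraints, multi-year economics, and storage state of charge, but the underlying trade-off between resource cost and capacity is the same.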

Communication in DER systems

  • IEC 61850-7-420 is published by IEC TC 57: Power systems management and associated information exchange. It is one of the IEC 61850 standards, some of which are core standards required for implementing smart grids. It uses communication services mapped to MMS as per the IEC 61850-8-1 standard.
  • OPC is also used for the communication between different entities of DER system.
  • The Institute of Electrical and Electronics Engineers (IEEE) 2030.7 microgrid controller standard relies on four blocks: a) device-level control (e.g. voltage and frequency control), b) local area control (e.g. data communication), c) supervisory (software) controller (e.g. forward-looking dispatch optimization of generation and load resources), and d) grid layer (e.g. communication with the utility).
  • A wide variety of complex control algorithms exist, making it difficult for small and residential distributed energy resource (DER) users to implement energy management and control systems. In particular, communication upgrades and data information systems can make them expensive. Thus, some projects try to simplify the control of DER via off-the-shelf products and make it usable for the mainstream (e.g. using a Raspberry Pi); a minimal control-loop sketch follows this list.
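
A minimal version of such off-the-shelf control can be a simple polling loop. The sketch below assumes hypothetical meter, inverter, and battery interfaces; the function names are placeholders, not a real device API:

```python
import time

# Minimal supervisory control loop of the kind a small off-the-shelf DER
# controller (e.g. on a Raspberry Pi) might run. The meter and inverter
# interfaces are hypothetical stand-ins, not a real device API.

def read_site_load_kw():       # stand-in for a metering interface
    return 4.2

def read_pv_output_kw():       # stand-in for an inverter interface
    return 3.1

def set_battery_power_kw(kw):  # positive = charge, negative = discharge
    print(f"battery setpoint: {kw:+.1f} kW")

def control_step():
    """Self-consumption strategy: charge on surplus PV, discharge on deficit."""
    surplus = read_pv_output_kw() - read_site_load_kw()
    set_battery_power_kw(surplus)

if __name__ == "__main__":
    for _ in range(3):         # in practice this loop would run continuously
        control_step()
        time.sleep(1)
```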

Legal requirements for distributed generation

In 2010 Colorado enacted a law requiring that by 2020, 3% of the power generated in Colorado utilize distributed generation of some sort.

On 11 October 2017, California Governor Jerry Brown signed into law a bill, SB 338, that makes utility companies plan "carbon-free alternatives to gas generation" in order to meet peak demand. The law requires utilities to evaluate issues such as energy storage, efficiency, and distributed energy resources.

Tuesday, August 23, 2022

Petroleum industry

From Wikipedia, the free encyclopedia
World oil reserves, 2013.

The petroleum industry, also known as the oil industry or the oil patch, includes the global processes of exploration, extraction, refining, transportation (often by oil tankers and pipelines), and marketing of petroleum products. The largest volume products of the industry are fuel oil and gasoline (petrol). Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, synthetic fragrances, and plastics. The industry is usually divided into three major components: upstream, midstream, and downstream. Upstream regards exploration and extraction of crude oil, midstream encompasses transportation and storage of crude, and downstream concerns refining crude oil into various end products.

Petroleum is vital to many industries, and is necessary for the maintenance of industrial civilization in its current configuration, making it a critical concern for many nations. Oil accounts for a large percentage of the world’s energy consumption, ranging from a low of 32% for Europe and Asia, to a high of 53% for the Middle East.

Other geographic regions' consumption patterns are as follows: South and Central America (44%), Africa (41%), and North America (40%). The world consumes 36 billion barrels (5.8 km³) of oil per year, with developed nations being the largest consumers. The United States consumed 18% of the oil produced in 2015. The production, distribution, refining, and retailing of petroleum taken as a whole represents the world's largest industry in terms of dollar value.
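
The volume figure can be checked with a unit conversion (one oil barrel is 42 US gallons, about 159 litres):

```python
# Converting 36 billion barrels per year to cubic kilometres.
barrels_per_year = 36e9
litres_per_barrel = 158.987                 # 42 US gallons
km3_per_year = barrels_per_year * litres_per_barrel / 1e12   # 1 km^3 = 1e12 L
print(f"{km3_per_year:.1f} km^3 per year")  # ~5.7 km^3, close to the cited 5.8 km^3
```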

The oil and gas industry spends only 0.4% of its net sales on research and development, which is the lowest share in comparison with a range of other industries.

Governments such as the United States government provide a heavy public subsidy to petroleum companies, with major tax breaks at various stages of oil exploration and extraction, including the costs of oil field leases and drilling equipment.

In recent years, enhanced oil recovery techniques — most notably multi-stage drilling and hydraulic fracturing ("fracking") — have moved to the forefront of the industry as this new technology plays a crucial and controversial role in new methods of oil extraction.

History

Oil Field in Baku, Azerbaijan, 1926

Prehistory

Natural oil spring in Korňa, Slovakia.

Petroleum is a naturally occurring liquid found in rock formations. It consists of a complex mixture of hydrocarbons of various molecular weights, plus other organic compounds. It is generally accepted that oil is formed mostly from the carbon rich remains of ancient plankton after exposure to heat and pressure in Earth's crust over hundreds of millions of years. Over time, the decayed residue was covered by layers of mud and silt, sinking further down into Earth’s crust and preserved there between hot and pressured layers, gradually transforming into oil reservoirs.

Early history

Petroleum in an unrefined state has been utilized by humans for over 5000 years. Oil in general has been used since early human history to keep fires ablaze and in warfare.

Its importance to the world economy, however, evolved slowly, with whale oil being used for lighting in the 19th century and wood and coal used for heating and cooking well into the 20th century. Even though the Industrial Revolution generated an increasing need for energy, this was initially met mainly by coal, and from other sources including whale oil. However, when it was discovered that kerosene could be extracted from crude oil and used as a lighting and heating fuel, the demand for petroleum increased greatly, and by the early twentieth century it had become the most valuable commodity traded on world markets.

Modern history

Oil wells in Boryslav
 
Galician oil wells
 
World crude oil production from wells (excludes surface-mined oil, such as from Canadian heavy oil sands), 1930-2012

Imperial Russia produced 3,500 tons of oil in 1825 and doubled its output by mid-century. After oil drilling began in the region of present-day Azerbaijan in 1846, in Baku, the Russian Empire built two large pipelines: the 833 km long pipeline to transport oil from the Caspian to the Black Sea port of Batum (Baku-Batum pipeline), completed in 1906, and the 162 km long pipeline to carry oil from Chechnya to the Caspian. The first drilled oil wells in Baku were built in 1871-1872 by Ivan Mirzoev, an Armenian businessman who is referred to as one of the 'founding fathers' of Baku's oil industry.

At the turn of the 20th century, Imperial Russia's output of oil, almost entirely from the Apsheron Peninsula, accounted for half of the world's production and dominated international markets. Nearly 200 small refineries operated in the suburbs of Baku by 1884. As a side effect of these early developments, the Apsheron Peninsula emerged as the world's "oldest legacy of oil pollution and environmental negligence". In 1846 Baku (Bibi-Heybat settlement) featured the first ever well drilled with percussion tools to a depth of 21 meters for oil exploration. In 1878 Ludvig Nobel and his Branobel company "revolutionized oil transport" by commissioning the first oil tanker and launching it on the Caspian Sea.

Samuel Kier established America's first oil refinery in Pittsburgh on Seventh Avenue near Grant Street in 1853. Ignacy Łukasiewicz built one of the first modern oil refineries near Jasło (then in the Austrian-dependent Kingdom of Galicia and Lodomeria in Central Europe), in present-day Poland, in 1854–56. Galician refineries were initially small, as demand for refined fuel was limited. The refined products were used in artificial asphalt, machine oil and lubricants, in addition to Łukasiewicz's kerosene lamp. As kerosene lamps gained popularity, the refining industry grew in the area.

The first commercial oil-well in Canada became operational in 1858 at Oil Springs, Ontario (then Canada West). Businessman James Miller Williams dug several wells between 1855 and 1858 before discovering a rich reserve of oil four metres below ground. Williams extracted 1.5 million litres of crude oil by 1860, refining much of it into kerosene-lamp oil. Some historians challenge Canada's claim to North America's first oil field, arguing that Pennsylvania's famous Drake Well was the continent's first. But there is evidence to support Williams, not least of which is that the Drake well did not come into production until August 28, 1859. The controversial point might be that Williams found oil above bedrock while Edwin Drake’s well located oil within a bedrock reservoir. The discovery at Oil Springs touched off an oil boom which brought hundreds of speculators and workers to the area. Canada's first gusher (flowing well) erupted on January 16, 1862, when local oil-man John Shaw struck oil at 158 feet (48 m). For a week the oil gushed unchecked at levels reported as high as 3,000 barrels per day.

The first modern oil-drilling in the United States began in West Virginia and Pennsylvania in the 1850s. Edwin Drake's 1859 well near Titusville, Pennsylvania, typically considered the first true modern oil well, touched off a major boom. In the first quarter of the 20th century, the United States overtook Russia as the world's largest oil producer. By the 1920s, oil fields had been established in many countries including Canada, Poland, Sweden, Ukraine, the United States, Peru and Venezuela.

The first successful oil tanker, the Zoroaster, was built in 1878 in Sweden, designed by Ludvig Nobel. It operated from Baku to Astrakhan. A number of new tanker designs were developed in the 1880s.

In the early 1930s the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the Gulf of Mexico. In 1937 Pure Oil Company (now part of Chevron Corporation) and its partner Superior Oil Company (now part of ExxonMobil Corporation) used a fixed platform to develop a field in 14 feet (4.3 m) of water, one mile (1.6 km) offshore of Calcasieu Parish, Louisiana. In early 1947 Superior Oil erected a drilling/production oil-platform in 20 ft (6.1 m) of water some 18 miles off Vermilion Parish, Louisiana. Kerr-McGee Oil Industries, as operator for partners Phillips Petroleum (ConocoPhillips) and Stanolind Oil & Gas (BP), completed its historic Ship Shoal Block 32 well in November 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's Gulf of Mexico well, Kermac No. 16, the first oil discovery drilled out of sight of land. Forty-four Gulf of Mexico exploratory wells discovered 11 oil and natural gas fields by the end of 1949.

During World War II (1939–1945) control of oil supply from Romania, Baku, the Middle East and the Dutch East Indies played a huge role in the events of the war and the ultimate victory of the Allies. The Anglo-Soviet invasion of Iran (1941) secured Allied control of oil-production in the Middle East. The expansion of Imperial Japan to the south aimed largely at accessing the oil-fields of the Dutch East Indies. Germany, cut off from sea-borne oil supplies by Allied blockade, failed in Operation Edelweiss to secure the Caucasus oil-fields for the Axis military in 1942, while Romania deprived the Wehrmacht of access to Ploesti oilfields - the largest in Europe - from August 1944. Cutting off the East Indies oil-supply (especially via submarine campaigns) considerably weakened Japan in the latter part of the war. After World War II ended in 1945, the countries of the Middle East took the lead in oil production from the United States. Important developments since World War II include deep-water drilling, the introduction of the drillship, and the growth of a global shipping network for petroleum - relying upon oil tankers and pipelines. In 1949 the first offshore oil-drilling at Oil Rocks (Neft Dashlari) in the Caspian Sea off Azerbaijan eventually resulted in a city built on pylons. In the 1960s and 1970s, multi-governmental organizations of oil–producing nations - OPEC and OAPEC - played a major role in setting petroleum prices and policy. Oil spills and their cleanup have become an issue of increasing political, environmental, and economic importance. New fields of hydrocarbon production developed in places such as Siberia, Sakhalin, Venezuela and North and West Africa.

With the advent of hydraulic fracturing and other horizontal drilling techniques, shale plays have seen an enormous uptick in production. Shale areas such as the Permian Basin and the Eagle Ford have become huge hotbeds of production for the largest oil corporations in the United States.

Structure

NIS refinery in Pančevo, Serbia

The American Petroleum Institute divides the petroleum industry into five sectors:

Upstream

Oil companies used to be classified by sales as "supermajors" (BP, Chevron, ExxonMobil, ConocoPhillips, Shell, Eni and TotalEnergies), "majors", and "independents" or "jobbers". In recent years, however, national oil companies (NOCs, as opposed to IOCs, international oil companies) have come to control the rights over the largest oil reserves; by this measure the top ten companies are all NOCs. The following table shows the ten largest national oil companies ranked by reserves and by production in 2012.

Top 10 largest world oil companies by reserves and production (2012)

Rank | Company (reserves) | Liquids reserves (10⁹ bbl) | Natural gas reserves (10¹² ft³) | Total reserves (10⁹ boe) | Company (production) | Output (millions bbl/day)¹
1  | Saudi Aramco (Saudi Arabia) | 260 | 254 | 303 | Saudi Aramco (Saudi Arabia) | 12.5
2  | NIOC (Iran) | 138 | 948 | 300 | NIOC (Iran) | 6.4
3  | QatarEnergy (Qatar) | 15 | 905 | 170 | ExxonMobil (United States) | 5.3
4  | INOC (Iraq) | 116 | 120 | 134 | PetroChina (China) | 4.4
5  | PDVSA (Venezuela) | 99 | 171 | 129 | BP (United Kingdom) | 4.1
6  | ADNOC (United Arab Emirates) | 92 | 199 | 126 | Royal Dutch Shell (Netherlands/United Kingdom) | 3.9
7  | Pemex (Mexico) | 102 | 56 | 111 | Pemex (Mexico) | 3.6
8  | NNPC (Nigeria) | 36 | 184 | 68 | Chevron (United States) | 3.5
9  | NOC (Libya) | 41 | 50 | 50 | Kuwait Petroleum Corporation (Kuwait) | 3.2
10 | Sonatrach (Algeria) | 12 | 159 | 39 | ADNOC (United Arab Emirates) | 2.9

¹ Total energy output, including natural gas (converted to bbl of oil) for companies producing both.

Most upstream work in the oil field or on an oil well is contracted out to drilling contractors and oil field service companies.

Aside from the NOCs which dominate the Upstream sector, there are many international companies that have a market share. For example:

Midstream

Midstream operations are sometimes classified within the downstream sector, but these operations compose a separate and discrete sector of the petroleum industry. Midstream operations and processes include the following:

  • Gathering: The gathering process employs narrow, low-pressure pipelines to connect oil- and gas-producing wells to larger, long-haul pipelines or processing facilities.
  • Processing/refining: Processing and refining operations turn crude oil and gas into marketable products. In the case of crude oil, these products include heating oil, gasoline for use in vehicles, jet fuel, and diesel oil. Oil refining processes include distillation, vacuum distillation, catalytic reforming, catalytic cracking, alkylation, isomerization and hydrotreating. Natural gas processing includes compression; glycol dehydration; amine treating; separating the product into pipeline-quality natural gas and a stream of mixed natural gas liquids; and fractionation, which separates the stream of mixed natural gas liquids into its components. The fractionation process yields ethane, propane, butane, isobutane, and natural gasoline.
  • Transportation: Oil and gas are transported to processing facilities, and from there to end users, by pipeline, tanker/barge, truck, and rail. Pipelines are the most economical transportation method and are most suited to movement across longer distances, for example, across continents. Tankers and barges are also employed for long-distance, often international transport. Rail and truck can also be used for longer distances but are most cost-effective for shorter routes.
  • Storage: Midstream service providers provide storage facilities at terminals throughout the oil and gas distribution systems. These facilities are most often located near refining and processing facilities and are connected to pipeline systems to facilitate shipment when product demand must be met. While petroleum products are held in storage tanks, natural gas tends to be stored in underground facilities, such as salt dome caverns and depleted reservoirs.
  • Technological applications: Midstream service providers apply technological solutions to improve efficiency during midstream processes. Technology can be used during compression of fuels to ease flow through pipelines; to better detect leaks in pipelines; and to automate communications for better pipeline and equipment monitoring.

While some upstream companies carry out certain midstream operations, the midstream sector is dominated by a number of companies that specialize in these services. Midstream companies include:

Environmental impact

Water pollution

Some petroleum industry operations have been responsible for water pollution through by-products of refining and oil spills. Though hydraulic fracturing has significantly increased natural gas extraction, there is some belief and evidence to support that consumable water has seen increased methane contamination due to this gas extraction. Leaks from underground tanks and abandoned refineries may also contaminate groundwater in surrounding areas. Hydrocarbons that comprise refined petroleum are resistant to biodegradation and have been found to remain present in contaminated soils for years. To hasten this process, bioremediation of petroleum hydrocarbon pollutants is often employed by means of aerobic degradation. More recently, other bioremediative methods have been explored, such as phytoremediation and thermal remediation.

Air pollution

The industry is the largest industrial source of emissions of volatile organic compounds (VOCs), a group of chemicals that contribute to the formation of ground-level ozone (smog). The combustion of fossil fuels produces greenhouse gases and other air pollutants as by-products. Pollutants include nitrogen oxides, sulphur dioxide, volatile organic compounds and heavy metals.

Researchers have discovered that the petrochemical industry can produce ground-level ozone pollution at higher amounts in winter than in summer.

Climate change

The greenhouse gases due to fossil fuels drive climate change. As early as 1959, at a symposium organised by the American Petroleum Institute for the centennial of the American oil industry, the physicist Edward Teller warned of the danger of global climate change. Teller explained that carbon dioxide "in the atmosphere causes a greenhouse effect" and that burning more fossil fuels could "melt the icecap and submerge New York".

The Intergovernmental Panel on Climate Change, founded by the United Nations in 1988, concludes that human-sourced greenhouse gases are responsible for most of the observed temperature increase since the middle of the twentieth century.

As a result of climate change concerns, many alternative energy enthusiasts have begun using other methods of energy such as solar and wind, among others. This recent view has some petroleum enthusiasts skeptical about the true future of the industry.
