
Saturday, September 3, 2022

Deep learning processor

From Wikipedia, the free encyclopedia

A deep learning processor (DLP), or deep learning accelerator, is an electronic circuit designed for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors range from mobile devices, such as the neural processing units (NPUs) in Huawei cellphones, to cloud computing servers, such as the tensor processing units (TPUs) in the Google Cloud Platform.

The goal of DLPs is to provide higher efficiency and performance for deep learning algorithms than general-purpose central processing units (CPUs) and graphics processing units (GPUs) can. Most DLPs employ a large number of computing components to exploit high data-level parallelism, a relatively large on-chip buffer/memory to exploit data-reuse patterns, and limited data-width operators that rely on the error resilience of deep learning. Deep learning processors differ from AI accelerators in that they are specialized for running learning algorithms, while AI accelerators are typically more specialized for inference. However, the two terms (DLP vs. AI accelerator) are not used rigorously, and there is often overlap between the two.
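
To illustrate the limited data-width point, the following Python sketch shows a simple symmetric INT8 quantization of floating-point weights, the kind of reduced-precision representation DLPs exploit. It is a minimal, hypothetical example, not modeled on any particular DLP's quantization scheme.

import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floating-point values from the INT8 codes."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print("max abs error:", np.max(np.abs(weights - dequantize(q, scale))))

Deep networks typically tolerate this small quantization error, which is why narrow operators can replace full-precision arithmetic on DLPs.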

History

The use of CPUs/GPUs

At the beginning, general-purpose CPUs were adopted to run deep learning algorithms. Later, GPUs were introduced to the domain of deep learning. For example, in 2012, Alex Krizhevsky used two GPUs to train a deep learning network, AlexNet, which won the ILSVRC-2012 competition. As interest in deep learning algorithms and DLPs grew, GPU manufacturers began to add deep learning related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library). For example, Nvidia released the Turing Tensor Core, a DLP-style unit, to accelerate deep learning processing.

The first DLP

To provide higher efficiency in performance and energy, domain-specific designs began to draw great attention. In 2014, Chen et al. proposed the first DLP in the world, DianNao (Chinese for "electric brain"), to accelerate deep neural networks. DianNao provides 452 Gop/s of peak performance (on the key operations of deep neural networks) in a footprint of only 3.02 mm² and 485 mW. Its successors (DaDianNao, ShiDianNao, PuDianNao) were later proposed by the same group, forming the DianNao family.

The blooming DLPs

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016 alone, three sessions, 15% of the accepted papers, were architecture designs for deep learning. Such efforts include Eyeriss (MIT), EIE (Stanford), Minerva (Harvard), and Stripes (University of Toronto) in academia, and the TPU (Google) and MLU (Cambricon) in industry. Several representative works are listed in Table 1.

Table 1. Typical DLPs
Year DLPs Institution Type Computation Memory Hierarchy Control Peak Performance
2014 DianNao ICT, CAS digital vector MACs scratchpad VLIW 452 Gops (16-bit)
DaDianNao ICT, CAS digital vector MACs scratchpad VLIW 5.58 Tops (16-bit)
2015 ShiDianNao ICT, CAS digital scalar MACs scratchpad VLIW 194 Gops (16-bit)
PuDianNao ICT, CAS digital vector MACs scratchpad VLIW 1,056 Gops (16-bit)
2016 DnnWeaver Georgia Tech digital vector MACs scratchpad - -
EIE Stanford digital scalar MACs scratchpad - 102 Gops (16-bit)
Eyeriss MIT digital scalar MACs scratchpad - 67.2 Gops (16-bit)
Prime UCSB hybrid Process-in-Memory ReRAM - -
2017 TPU Google digital scalar MACs scratchpad CISC 92 Tops (8-bit)
PipeLayer U of Pittsburgh hybrid Process-in-Memory ReRAM -
FlexFlow ICT, CAS digital scalar MACs scratchpad - 420 Gops
2018 MAERI Georgia Tech digital scalar MACs scratchpad -
PermDNN City University of New York digital vector MACs scratchpad - 614.4 Gops (16-bit)
2019 FPSA Tsinghua hybrid Process-in-Memory ReRAM -
Cambricon-F ICT, CAS digital vector MACs scratchpad FISA 14.9 Tops (F1, 16-bit), 956 Tops (F100, 16-bit)

DLP architecture

With the rapid evolution of deep learning algorithms and DLPs, many architectures have been explored. Roughly, DLPs can be classified into three categories based on their implementation: digital circuits, analog circuits, and hybrid circuits. As pure analog DLPs are rarely seen, we introduce the digital DLPs and hybrid DLPs.

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages data communication and computing flows.

Regarding the computation component: as most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply-accumulate) organization, with either vector MACs or scalar MACs. Compared with SIMD or SIMT in general-purpose processors, deep learning domain-specific parallelism is better exploited by these MAC-based organizations.

Regarding the memory hierarchy: because deep learning algorithms require high bandwidth to feed the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) together with dedicated on-chip data-reuse and data-exchange strategies to alleviate the pressure on memory bandwidth. For example, DianNao's 16 × 16-input vector MAC array requires 16 × 16 × 2 = 512 16-bit operands per cycle, i.e., nearly 1024 GB/s of bandwidth between the computation components and the buffers. With on-chip reuse, such bandwidth requirements are reduced drastically. Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, which provides more data-reuse opportunities by leveraging the relatively regular data-access patterns of deep learning algorithms.

Regarding the control logic: as deep learning algorithms keep evolving at a dramatic speed, DLPs have started to leverage dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon introduced the first deep learning domain-specific ISA, which can support more than ten different deep learning algorithms. The TPU likewise exposes five key instructions in its CISC-style ISA.
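
To make the bandwidth arithmetic above concrete, the Python sketch below recomputes the DianNao figure; the roughly 1 GHz clock is an assumption used only for illustration, not a published specification.

# Operand bandwidth needed by a 16 x 16-input vector-MAC array (DianNao-style).
lanes = 16              # number of vector MAC units
inputs_per_lane = 16    # inputs per vector MAC
operands_per_input = 2  # one input value and one weight per multiply
bits_per_operand = 16

operands_per_cycle = lanes * inputs_per_lane * operands_per_input  # 512
bytes_per_cycle = operands_per_cycle * bits_per_operand // 8       # 1024 bytes

clock_hz = 1e9  # assumed ~1 GHz clock for illustration
bandwidth_gb_s = bytes_per_cycle * clock_hz / 1e9
print(f"{operands_per_cycle} operands/cycle, ~{bandwidth_gb_s:.0f} GB/s")

The output, roughly 1024 GB/s, shows why on-chip reuse in scratchpad buffers is essential: no external memory system supplies that bandwidth cheaply.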

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in two main ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue; such architectures significantly shorten data paths and leverage much higher internal bandwidth, resulting in attractive performance improvements; 2) building highly efficient DNN engines by adopting computational devices. In 2013, HP Labs demonstrated the striking capability of using a ReRAM crossbar structure for computing. Inspired by this work, a large body of work has been proposed to explore new architectures and system designs based on ReRAM, phase-change memory, and so on.
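
The appeal of a ReRAM crossbar is that a matrix-vector product falls out of Ohm's and Kirchhoff's laws: weights are stored as cell conductances and inputs are applied as voltages, so each column current is a weighted sum computed "in memory". The Python sketch below is an idealized illustration of that principle; it ignores device non-linearity, noise, and ADC/DAC conversion.

import numpy as np

# Weights stored as a conductance matrix G (siemens), inputs applied as voltages V (volts).
G = np.array([[1.0, 0.5, 0.2],
              [0.3, 0.8, 0.1]])   # 2 output columns x 3 input rows (illustrative values)
V = np.array([0.4, 1.0, 0.6])

# Each column current is the sum of V_i * G_ij (Kirchhoff's current law),
# i.e. an analog multiply-accumulate performed inside the memory array.
I = G @ V
print(I)  # equivalent to a 2x3 weight matrix times a 3-element input vector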

GPUs and FPGAs

Besides DLPs, GPUs and FPGAs are also used as accelerators to speed up the execution of deep learning algorithms. For example, Summit, a supercomputer built by IBM for Oak Ridge National Laboratory, contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms. Microsoft built its deep learning platform on FPGAs in Azure to support real-time deep learning services. Table 2 compares DLPs with GPUs and FPGAs in terms of target, performance, energy efficiency, and flexibility.

Table 2. DLPs vs. GPUs vs. FPGAs

Target Performance Energy Efficiency Flexibility
DLPs deep learning high high domain-specific
FPGAs all low moderate general
GPUs matrix computation moderate low matrix applications

Atomically thin semiconductors for deep learning

Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). They used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements.

Integrated photonic tensor core

In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.

Benchmarks

Benchmarking has long served as the foundation for designing new hardware architectures, allowing both architects and practitioners to compare various architectures, identify their bottlenecks, and conduct the corresponding system/architectural optimizations. Table 3 lists several typical benchmarks for DLPs, dating from 2012, in chronological order.

Table 3. Benchmarks.
Year NN Benchmark Affiliations # of microbenchmarks # of component benchmarks # of application benchmarks
2012 BenchNN ICT, CAS N/A 12 N/A
2016 Fathom Harvard N/A 8 N/A
2017 BenchIP ICT, CAS 12 11 N/A
2017 DAWNBench Stanford 8 N/A N/A
2017 DeepBench Baidu 4 N/A N/A
2018 MLPerf Harvard, Intel, and Google, etc. N/A 7 N/A
2019 AIBench ICT, CAS and Alibaba, etc. 12 16 2
2019 NNBench-X UCSB N/A 10 N/A

Hubbert peak theory

From Wikipedia, the free encyclopedia

2004 U.S. government predictions for oil production other than in OPEC and the former Soviet Union

The Hubbert peak theory says that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production tends to follow a bell-shaped curve. It is one of the primary theories on peak oil.

Choosing a particular curve determines a point of maximum production based on discovery rates, production rates and cumulative production. Early in the curve (pre-peak), the production rate increases due to the discovery rate and the addition of infrastructure. Late in the curve (post-peak), production declines because of resource depletion.

The Hubbert peak theory is based on the observation that the amount of oil under the ground in any region is finite, therefore the rate of discovery which initially increases quickly must reach a maximum and decline. In the US, oil extraction followed the discovery curve after a time lag of 32 to 35 years. The theory is named after American geophysicist M. King Hubbert, who created a method of modeling the production curve given an assumed ultimate recovery volume.

Hubbert's peak

"Hubbert's peak" can refer to the peaking of production of a particular area, which has now been observed for many fields and regions.

Hubbert's peak was thought to have been achieved in the contiguous 48 United States (that is, excluding Alaska and Hawaii) in the early 1970s. Oil production peaked at 10.2 million barrels (1.62 million cubic metres) per day in 1970 and then declined over the subsequent 35 years in a pattern that closely followed the one predicted by Hubbert in the mid-1950s. However, beginning in the mid-2000s, advances in extraction technology, particularly those that led to the extraction of tight oil and unconventional oil, resulted in a large increase in U.S. oil production, establishing a pattern that deviated drastically from the model Hubbert predicted for the contiguous 48 states as a whole. In November 2017 the United States once again surpassed the 10 million barrel per day mark for the first time since 1970.

Peak oil as a proper noun, or "Hubbert's peak" applied more generally, refers to a predicted event: the peak of the entire planet's oil production. After peak oil, according to the Hubbert peak theory, the rate of oil production on Earth would enter a terminal decline. On the basis of his theory, in a paper he presented to the American Petroleum Institute in 1956, Hubbert correctly predicted that production of oil from conventional sources would peak in the continental United States around 1965–1970. Hubbert further predicted a worldwide peak "about half a century" after publication, at approximately 12 gigabarrels (Gb) a year in magnitude. In a 1976 TV interview Hubbert added that the actions of OPEC might flatten the global production curve, but this would only delay the peak for perhaps 10 years. The development of new technologies has provided access to large quantities of unconventional resources, and the resulting boost in production has largely discounted Hubbert's prediction.

Hubbert's theory

Hubbert curve

The standard Hubbert curve. For applications, the x and y scales are replaced by time and production scales.
 
U.S. Oil Production and Imports 1910 to 2012

In 1956, Hubbert proposed that fossil fuel production in a given region over time would follow a roughly bell-shaped curve without giving a precise formula; he later used the Hubbert curve, the derivative of the logistic curve, for estimating future production using past observed discoveries.

Hubbert assumed that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. At some point, a peak output is reached, and production begins declining until it approximates an exponential decline.

The Hubbert curve satisfies these constraints. Furthermore, it is symmetrical, with the peak of production reached when half of the fossil fuel that will ultimately be produced has been produced. It also has a single peak.

Given past oil discovery and production data, a Hubbert curve that attempts to approximate past discovery data may be constructed and used to provide estimates for future production. In particular, the date of peak oil production or the total amount of oil ultimately produced can be estimated that way. Cavallo defines the Hubbert curve used to predict the U.S. peak as the derivative of:

Q(t) = Q_max / (1 + a·e^(−bt))

where Q_max is the total resource available (the ultimate recovery of crude oil), Q(t) the cumulative production, and a and b are constants. The year of maximum annual production (peak) is:

t_max = (1/b)·ln(a)

at which point the cumulative production reaches half of the total available resource:

Q(t_max) = Q_max / 2

The Hubbert equation assumes that oil production is symmetrical about the peak. Others have used similar but non-symmetrical equations, which may provide a better fit to empirical production data.
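
These relations are easy to check numerically. The following Python sketch uses the cumulative logistic Q(t) above and its derivative (the annual production rate); the parameter values are made up purely for illustration.

import numpy as np

Q_max = 200.0      # ultimate recovery, e.g. in Gb (illustrative value)
a, b = 50.0, 0.07  # illustrative curve constants

def cumulative(t):
    """Cumulative production Q(t) = Q_max / (1 + a * exp(-b * t))."""
    return Q_max / (1.0 + a * np.exp(-b * t))

def annual(t):
    """Annual production, the derivative dQ/dt = b * Q * (1 - Q / Q_max)."""
    q = cumulative(t)
    return b * q * (1.0 - q / Q_max)

t_peak = np.log(a) / b                     # year of maximum production
print(t_peak, cumulative(t_peak) / Q_max)  # cumulative is half of Q_max at the peak
print(annual(t_peak))                      # peak rate equals b * Q_max / 4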

Use of multiple curves

The sum of multiple Hubbert curves, a technique not developed by Hubbert himself, may be used to model more complicated real-life scenarios. For example, when new technologies such as hydraulic fracturing open up formations that were not productive before, a new curve may be needed. Such technologies are limited in number, but they have a large impact on production, so a new curve must be added to the old one and the combined model reworked.
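
A self-contained Python sketch of this idea, with illustrative made-up parameters: two logistic-derivative curves, one for conventional production and a later, smaller one for a new extraction technology, are simply added.

import numpy as np

def hubbert_rate(t, q_max, a, b):
    """Annual production from a single Hubbert (logistic-derivative) curve."""
    q = q_max / (1.0 + a * np.exp(-b * t))
    return b * q * (1.0 - q / q_max)

t = np.arange(0, 120)
conventional = hubbert_rate(t, q_max=150.0, a=40.0, b=0.08)   # illustrative values
tight_oil = hubbert_rate(t - 60, q_max=40.0, a=30.0, b=0.15)  # later, smaller curve
total = conventional + tight_oil   # the combined production model
print(t[np.argmax(total)])         # year of the combined peak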

Reliability

Crude oil

Hubbert's upper-bound prediction for US crude oil production (1956), and actual lower-48 states production through 2016

Hubbert, in his 1956 paper, presented two scenarios for US crude oil production:

  • most likely estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 150 Giga-barrels (Gb) and a peak in 1965. The size of the ultimate resource was taken from a synthesis of estimates by well-known oil geologists and the US Geological Survey, which Hubbert judged to be the most likely case.
  • upper-bound estimate: a logistic curve with a logistic growth rate equal to 6% and ultimate resource equal to 200 Giga-barrels and a peak in 1970.

Hubbert's upper-bound estimate, which he regarded as optimistic, accurately predicted that US oil production would peak in 1970, although the actual peak was 17% higher than Hubbert's curve. Production declined, as Hubbert had predicted, and stayed within 10 percent of Hubbert's predicted value from 1974 through 1994; since then, actual production has been significantly greater than the Hubbert curve. The development of new technologies has provided access to large quantities of unconventional resources, and the boost of production has largely discounted Hubbert's prediction.

Hubbert's 1956 production curves depended on geological estimates of ultimate recoverable oil resources, but he was dissatisfied by the uncertainty this introduced, given that the various estimates ranged from 110 billion to 590 billion barrels for the US. Starting with his 1962 publication, he made his calculations, including that of ultimate recovery, based only on mathematical analysis of production rates, proved reserves, and new discoveries, independent of any geological estimates of future discoveries. He concluded that the ultimate recoverable oil resource of the contiguous 48 states was 170 billion barrels, with a production peak in 1966 or 1967. He considered that, because his model incorporated past technical advances, any future advances would occur at the same rate and were thus also incorporated. Hubbert continued to defend his figure of 170 billion barrels in his publications of 1965 and 1967, although by 1967 he had moved the peak forward slightly, to 1968 or 1969.

A post-hoc analysis of peaked oil wells, fields, regions and nations found that Hubbert's model was the "most widely useful" (providing the best fit to the data), though many areas studied had a sharper "peak" than predicted.

A 2007 study of oil depletion by the UK Energy Research Centre pointed out that there is no theoretical and no robust practical reason to assume that oil production will follow a logistic curve. Neither is there any reason to assume that the peak will occur when half the ultimate recoverable resource has been produced; in fact, empirical evidence appears to contradict this idea. An analysis of 55 post-peak countries found that the average peak occurred at 25 percent of the ultimate recovery.

Natural gas

Hubbert's 1962 prediction of US lower 48-state gas production, versus actual production through 2012

Hubbert also predicted that natural gas production would follow a logistic curve similar to that of oil. The graph shows actual gas production in blue compared to his predicted gas production for the United States in red, published in 1962.

Economics

Oil imports by country Pre-2006

Energy return on energy investment

The ratio of energy extracted to the energy expended in the process is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Should the EROEI drop to one, or equivalently should the net energy gain fall to zero, oil production would no longer be a net energy source.
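
In code form the relation is trivial, but it makes the break-even threshold explicit; the figures below are purely illustrative.

def eroei(energy_out, energy_in):
    """Energy Returned on Energy Invested."""
    return energy_out / energy_in

def net_energy_gain(energy_out, energy_in):
    return energy_out - energy_in

# Illustrative figures (arbitrary units): an EROEI of 1 means zero net energy gain.
print(eroei(100.0, 20.0), net_energy_gain(100.0, 20.0))  # 5.0, 80.0 -> net energy source
print(eroei(50.0, 50.0), net_energy_gain(50.0, 50.0))    # 1.0, 0.0  -> break-even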

There is a difference between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. The availability of cheap, but hard to transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca tar sands plants. Cheap natural gas has also led to ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods of measuring EROEI are still debated.

The assumption of inevitably declining volumes of oil and gas produced per unit of effort is contrary to recent experience in the US. In the United States, as of 2017, there had been an ongoing decade-long increase in the productivity of oil and gas drilling in all the major tight oil and gas plays. The US Energy Information Administration reports, for instance, that in the Bakken Shale production area of North Dakota, the volume of oil produced per day of drilling rig time in January 2017 was 4 times the volume per day of drilling five years earlier, in January 2012, and nearly 10 times the volume per day of drilling ten years earlier, in January 2007. In the Marcellus gas region of the northeast, the volume of gas produced per day of drilling time in January 2017 was 3 times the volume per day of drilling five years earlier, in January 2012, and 28 times the volume per day of drilling ten years earlier, in January 2007.

Growth-based economic models

World energy consumption & predictions, 2005–2035. Source: International Energy Outlook 2011.

Insofar as economic growth is driven by oil consumption growth, post-peak societies must adapt. Hubbert believed:

Our principal constraints are cultural. During the last two centuries we have known nothing but exponential growth and in parallel we have evolved what amounts to an exponential-growth culture, a culture so heavily dependent upon the continuance of exponential growth for its stability that it is incapable of reckoning with problems of non growth.

— M. King Hubbert, "Exponential Growth as a Transient Phenomenon in Human History"

Some economists describe the problem as uneconomic growth or a false economy. On the political right, Fred Iklé has warned about "conservatives addicted to the Utopia of Perpetual Growth". Brief oil supply interruptions in 1973 and 1979 markedly slowed, but did not stop, the growth of world GDP.

Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation.

David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their 2003 study Food, Land, Population and the U.S. Economy, placed the maximum U.S. population for a sustainable economy at 200 million (the actual population was approximately 290 million in 2003 and 329 million in 2019). To achieve a sustainable economy, the study says, world population would have to be reduced by two-thirds. Without population reduction, the study predicts an agricultural crisis beginning in 2020 and becoming critical by about 2050. The peaking of global oil production, along with the decline in regional natural gas production, may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as has never been experienced before.

Hubbert peaks

Although Hubbert peak theory receives most attention in relation to peak oil production, it has also been applied to other natural resources.

Natural gas

Doug Reynolds predicted in 2005 that the North American peak would occur in 2007. Bentley predicted a world "decline in conventional gas production from about 2020".

Coal

Although observers believe that peak coal is significantly further out than peak oil, Hubbert studied the specific example of anthracite in the US, a high-grade coal whose production peaked in the 1920s. Hubbert found that anthracite production matched a curve closely. Hubbert estimated recoverable coal reserves worldwide at 2.5 × 10^12 metric tons, peaking around the year 2150 (depending on usage).

More recent estimates suggest an earlier peak. Coal: Resources and Future Production, published on April 5, 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years. Reporting on this, Richard Heinberg also notes that the date of peak annual energetic extraction from coal is likely to come earlier than the date of the peak in the quantity of coal (tons per year) extracted, as the most energy-dense types of coal have been mined most extensively. A second study, The Future of Coal by B. Kavalov and S. D. Peteves of the Institute for Energy (IFE), prepared for the European Commission Joint Research Centre, reaches similar conclusions and states that "coal might not be so abundant, widely available and reliable as an energy source in the future".

Work by David Rutledge of Caltech predicts that the total of world coal production will amount to only about 450 gigatonnes. This implies that coal is running out faster than usually assumed.

Fissionable materials

In a 1956 paper, after a review of US fissionable reserves, Hubbert noted of nuclear power:

There is promise, however, provided mankind can solve its international problems and not destroy itself with nuclear weapons, and provided world population (which is now expanding at such a rate as to double in less than a century) can somehow be brought under control, that we may at last have found an energy supply adequate for our needs for at least the next few centuries of the "foreseeable future."

As of 2015, the identified resources of uranium are sufficient to provide more than 135 years of supply at the present rate of consumption. Technologies such as the thorium fuel cycle, reprocessing and fast breeders can, in theory, extend the life of uranium reserves from hundreds to thousands of years.

Caltech physics professor David Goodstein stated in 2004 that

... you would have to build 10,000 of the largest power plants that are feasible by engineering standards in order to replace the 10 terawatts of fossil fuel we're burning today ... that's a staggering amount and if you did that, the known reserves of uranium would last for 10 to 20 years at that burn rate. So, it's at best a bridging technology ... You can use the rest of the uranium to breed plutonium 239 then we'd have at least 100 times as much fuel to use. But that means you're making plutonium, which is an extremely dangerous thing to do in the dangerous world that we live in.

Helium

Helium production and storage in the United States, 1940–2014 (data from USGS)

Almost all helium on Earth is a result of radioactive decay of uranium and thorium. Helium is extracted by fractional distillation from natural gas, which contains up to 7% helium. The world's largest helium-rich natural gas fields are found in the United States, especially in the Hugoton and nearby gas fields in Kansas, Oklahoma, and Texas. The extracted helium is stored underground in the National Helium Reserve near Amarillo, Texas, the self-proclaimed "Helium Capital of the World". Helium production is expected to decline along with natural gas production in these areas.

Helium, the second-lightest chemical element, rises to the upper layers of Earth's atmosphere, where it can break free from Earth's gravitational attraction forever. Approximately 1,600 tons of helium are lost per year as a result of atmospheric escape mechanisms.

Transition metals

Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that peak production for metals such as copper, tin, lead, and zinc would occur within decades, and for iron within two centuries, as for coal. The price of copper rose 500% between 2003 and 2007, which some attributed to peak copper. Copper prices later fell, along with many other commodities and stock prices, as demand shrank from fear of a global recession. Lithium availability is a concern for a fleet of cars using Li-ion batteries, but a paper published in 1996 estimated that world reserves are adequate for at least 50 years. A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled.

Precious metals

In 2009, Aaron Regent, president of the Canadian gold producer Barrick Gold, said that global gold output had been falling by roughly one million ounces a year since the start of the decade. The total global mine supply had dropped by 10 percent as ore quality eroded, implying that the roaring bull market of the previous eight years might have further to run. "There is a strong case to be made that we are already at 'peak gold'," he told The Daily Telegraph at RBC's annual gold conference in London. "Production peaked around 2000 and it has been in decline ever since, and we forecast that decline to continue. It is increasingly difficult to find ore," he said.

Ore grades have fallen from around 12 grams per tonne in 1950 to nearer 3 grams in the US, Canada, and Australia. South Africa's output has halved since peaking in 1970. Output fell a further 14 percent in South Africa in 2008 as companies were forced to dig ever deeper – at greater cost – to replace depleted reserves.

World mined gold production has peaked four times since 1900: in 1912, 1940, 1971, and 2001, each peak being higher than previous peaks. The latest peak was in 2001, when production reached 2,600 metric tons, then declined for several years. Production started to increase again in 2009, spurred by high gold prices, and achieved record new highs each year in 2012, 2013, and in 2014, when production reached 2,990 tonnes.

Phosphorus

Phosphorus supplies are essential to farming, and depletion of reserves is estimated at somewhere from 60 to 130 years away. According to a 2008 study, the total reserves of phosphorus are estimated to be approximately 3,200 MT, with peak production of 28 MT/year in 2034. Individual countries' supplies vary widely; without a recycling initiative, America's supply is estimated at around 30 years. Phosphorus supplies affect agricultural output, which in turn limits alternative fuels such as biodiesel and ethanol. Its increasing price and scarcity (the global price of rock phosphate rose 8-fold in the two years to mid-2008) could change global agricultural patterns. Lands perceived as marginal because of remoteness but with very high phosphorus content, such as the Gran Chaco, may see more agricultural development, while other farming areas, where nutrients are a constraint, may drop below the line of profitability.

Renewable resources

Wood

Unlike fossil resources, forests keep growing, so the Hubbert peak theory does not apply. There have been wood shortages in the past, called Holznot in German-speaking regions, but no global peak wood yet, despite the "lumber crisis" of early 2021. Deforestation may, however, cause other problems, such as erosion.

Water

Hubbert's original analysis did not apply to renewable resources. However, over-exploitation often results in a Hubbert peak nonetheless. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced.

For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. These debates usually center around agriculture and suburban water usage but generation of electricity from nuclear energy or coal and tar sands mining mentioned above is also water resource intensive. The term fossil water is sometimes used to describe aquifers whose water is not being recharged.

Fishing

Peak fish: At least one researcher has attempted to perform Hubbert linearization (fitting a Hubbert curve) on the whaling industry, as well as charting the price of caviar, which depends transparently on sturgeon depletion. The Atlantic northwest cod fishery was a renewable resource, but the number of fish taken exceeded the fish's rate of recovery. The end of the cod fishery does match the exponential drop of the Hubbert bell curve. Another example is the cod of the North Sea.
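
Hubbert linearization exploits the fact that, for a logistic curve, the ratio of annual production P to cumulative production Q is a straight line in Q: P/Q = b(1 − Q/Q_max). Fitting that line and extrapolating to P/Q = 0 estimates the ultimately recoverable resource. The Python sketch below applies the method to synthetic data; all numbers are illustrative.

import numpy as np

# Synthetic "history": annual production P and cumulative production Q from a logistic curve.
b_true, q_max_true = 0.08, 150.0
t = np.arange(0, 70)
Q = q_max_true / (1.0 + 40.0 * np.exp(-b_true * t))
P = b_true * Q * (1.0 - Q / q_max_true)

# Hubbert linearization: fit P/Q as a linear function of Q.
slope, intercept = np.polyfit(Q, P / Q, 1)
q_max_est = -intercept / slope   # x-intercept, the estimated ultimate recovery
print(q_max_est)                 # close to 150 for this noiseless synthetic data

Real production data are noisy and often not truly logistic, which is one reason such extrapolations can miss badly, as the criticisms below discuss.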

Air/oxygen

Half the world's oxygen is produced by phytoplankton. The numbers of plankton have dropped by 40% since the 1950s.

Criticisms of peak oil

Economist Michael Lynch argues that the theory behind the Hubbert curve is too simplistic and relies on an overly Malthusian point of view. Lynch claims that Campbell's predictions for world oil production are strongly biased towards underestimates, and that Campbell has repeatedly pushed back the date.

Leonardo Maugeri, vice president of the Italian energy company Eni, argues that nearly all peak estimates do not take into account unconventional oil, even though the availability of these resources is significant and the costs of extraction and processing, while still very high, are falling because of improved technology. He also notes that the recovery rate from existing world oil fields has increased from about 22% in 1980 to 35% today because of new technology, and predicts this trend will continue. The ratio between proven oil reserves and current production has constantly improved, passing from 20 years in 1948 to 35 years in 1972 and reaching about 40 years in 2003. These improvements occurred even with low investment in new exploration and upgrading technology because of the low oil prices of the preceding 20 years. However, Maugeri feels that encouraging more exploration will require relatively high oil prices.

Edward Luttwak, an economist and historian, claims that unrest in countries such as Russia, Iran and Iraq has led to a massive underestimate of oil reserves. The Association for the Study of Peak Oil and Gas (ASPO) responds by claiming neither Russia nor Iran are troubled by unrest currently, but Iraq is.

Cambridge Energy Research Associates authored a report that is critical of Hubbert-influenced predictions:

Despite his valuable contribution, M. King Hubbert's methodology falls down because it does not consider likely resource growth, application of new technology, basic commercial factors, or the impact of geopolitics on production. His approach does not work in all cases (including on the United States itself) and cannot reliably model a global production outlook. Put more simply, the case for the imminent peak is flawed. As it is, production in 2005 in the Lower 48 in the United States was 66 percent higher than Hubbert projected.

CERA does not believe there will be an endless abundance of oil, but instead believes that global production will eventually follow an "undulating plateau" for one or more decades before declining slowly, and that production will reach 40 Mb/d by 2015.

Alfred J. Cavallo, while predicting a conventional oil supply shortage by no later than 2015, does not think Hubbert's peak is the correct theory to apply to world production.

Criticisms of peak element scenarios

Although M. King Hubbert himself made major distinctions between decline in petroleum production versus depletion (or relative lack of it) for elements such as fissionable uranium and thorium, some others have predicted peaks like peak uranium and peak phosphorus soon on the basis of published reserve figures compared to present and future production. According to some economists, though, the amount of proved reserves inventoried at a time may be considered "a poor indicator of the total future supply of a mineral resource."

As some illustrations, tin, copper, iron, lead, and zinc all had both production from 1950 to 2000 and reserves in 2000 that greatly exceeded world reserves in 1950, which would be impossible except that "proved reserves are like an inventory of cars to an auto dealer" at a given time, having little relationship to the actual total that will be affordable to extract in the future. In the example of peak phosphorus, additional concentrations exist intermediate between the 71,000 Mt of identified reserves (USGS) and the approximately 30,000,000,000 Mt of other phosphorus in Earth's crust, the average rock being 0.1% phosphorus, so showing that a decline in human phosphorus production will occur soon would require far more than comparing the former figure to the 190 Mt/year of phosphorus extracted in mines (2011 figure).

Sex chromosome

From Wikipedia, the free encyclopedia
 
Human male XY chromosomes after G-banding

A sex chromosome (also referred to as an allosome, heterotypical chromosome, gonosome, heterochromosome,  or idiochromosome) is a chromosome that differs from an ordinary autosome in form, size, and behavior. The human sex chromosomes, a typical pair of mammal allosomes, determine the sex of an individual created in sexual reproduction. Autosomes differ from allosomes because autosomes appear in pairs whose members have the same form but differ from other pairs in a diploid cell, whereas members of an allosome pair may differ from one another and thereby determine sex.

Nettie Stevens and Edmund Beecher Wilson both independently discovered sex chromosomes in 1905. However, Stevens is credited for discovering them earlier than Wilson.

Differentiation

In humans, each cell nucleus contains 23 pairs of chromosomes, a total of 46 chromosomes. The first 22 pairs are called autosomes. Autosomes are homologous chromosomes i.e. chromosomes which contain the same genes (regions of DNA) in the same order along their chromosomal arms. The 23rd pair of chromosomes are called allosomes. These consist of two X chromosomes in most females, and an X chromosome and a Y chromosome in most males. Females therefore have 23 homologous chromosome pairs, while males have 22. The X and Y chromosomes have small regions of homology called pseudoautosomal regions.

The X chromosome is always present as the 23rd chromosome in the ovum, while either an X or Y chromosome may be present in an individual sperm. Early in female embryonic development, in cells other than egg cells, one of the X chromosomes is randomly and permanently partially deactivated: In some cells, the X chromosome inherited from the mother deactivates; in other cells, it’s the X chromosome inherited from the father. This ensures that both sexes always have exactly one functional copy of the X chromosome in each body cell. The deactivated X chromosome is silenced by repressive heterochromatin that compacts the DNA and prevents expression of most genes. This compaction is regulated by PRC2 (Polycomb Repressive Complex 2).

Sex determination

All diploid organisms with allosome-determined sex get half of their allosomes from each of their parents. In most mammals, females are XX, and can pass along either of their Xs; since males are XY they can pass along either an X or a Y. Females in such species receive an X chromosome from each parent while males receive an X chromosome from their mother and a Y chromosome from their father. It is thus the male's sperm that determines the sex of each offspring in such species.

However, a small percentage of humans have divergent sexual development, known as intersex. This can result from allosomes that are neither XX nor XY. It can also occur when two fertilized embryos fuse, producing a chimera that might contain two different sets of DNA, one XX and the other XY. It could also result from exposure, often in utero, to chemicals that disrupt the normal conversion of the allosomes into sex hormones and further into the development of ambiguous outer genitalia or internal organs.

There is a gene in the Y chromosome that has regulatory sequences controlling genes that code for maleness, called the SRY gene. This gene produces a testis-determining factor ("TDF"), which initiates testis development in humans and other mammals. The SRY sequence's prominence in sex determination was discovered when the genetics of sex-reversed XX men (i.e., humans who possess biological male traits but actually have XX allosomes) were studied. After examination, it was discovered that the difference between a typical XX individual (traditional female) and a sex-reversed XX man was that the typical individuals lacked the SRY gene. It is theorized that in sex-reversed XX men, the SRY gene is mistakenly translocated to an X chromosome in the XX pair during meiosis.

Other vertebrates

Diverse mechanisms are involved in the determination of sex in animals. For mammals, sex determination is carried by the genetic contribution of the spermatozoon. Lower chordates, such as fish, amphibians and reptiles, have systems that are influenced by the environment. Fish and amphibians, for example, have genetic sex determination, but their sex can also be influenced by externally available steroids and the incubation temperature of eggs. In many reptiles, incubation temperature determines sex.

Plants

Many scientists argue that sex determination in plants is more complex than that in humans. This is because even flowering plants have a variety of mating systems, and their sex determination is primarily regulated by MADS-box genes. These genes code for proteins that form the sex organs in flowers.

Plant sex chromosomes are most common in bryophytes, relatively common in vascular plants and unknown in ferns and lycophytes. The diversity of plants is reflected in their sex-determination systems, which include XY and UV systems as well as many variants. Sex chromosomes have evolved independently across many plant groups. Recombination of chromosomes may lead to heterogamety before the development of sex chromosomes, or recombination may be reduced after sex chromosomes develop. Only a few pseudoautosomal regions normally remain once sex chromosomes are fully differentiated. When chromosomes do not recombine, neutral sequence divergences begin to accumulate, which has been used to estimate the age of sex chromosomes in various plant lineages. Even the oldest estimated divergence, in the liverwort Marchantia polymorpha, is more recent than mammal or bird divergence. Due to this recency, most plant sex chromosomes also have relatively small sex-linked regions. Current evidence does not support the existence of plant sex chromosomes more ancient than those of M. polymorpha.

The high prevalence of autopolyploidy in plants also impacts the structure of their sex chromosomes. Polyploidization can occur before and after the development of sex chromosomes. If it occurs after sex chromosomes are established, dosage should stay consistent between the sex chromosomes and autosomes, with minimal impact on sex differentiation. If it occurs before sex chromosomes become heteromorphic, as is likely in the octoploid red sorrel Rumex acetosella, sex is determined in a single XY system. In a more complicated system, the sandalwood species Viscum fischeri has X1X1X2X2 chromosomes in females, and X1X2Y chromosomes in males.

Sequence composition and evolution

Amplification of transposable elements and tandem repeats, especially the accumulation of long terminal repeat (LTR) retrotransposons, is responsible for plant sex chromosome evolution. The insertion of retrotransposons is probably the major cause of Y-chromosome expansion and of plant genome size evolution. Retrotransposons contribute to determining the size of sex chromosomes, and their proliferation varies even between closely related species. LTR retrotransposons and tandem repeats play the dominant role in the evolution of the S. latifolia sex chromosomes. Athila, a family of retroelements discovered in Arabidopsis thaliana, is present only in heterochromatic regions. Athila retroelements are overrepresented on the X chromosome but absent from the Y, while tandem repeats are enriched on the Y chromosome. Some chloroplast sequences have also been identified in the Y chromosome of S. latifolia. S. vulgaris has more retroelements in its sex chromosomes than S. latifolia. Microsatellite data show no significant difference between X- and Y-chromosome microsatellites in either Silene species, suggesting that microsatellites do not participate in Y-chromosome evolution. The portion of the Y chromosome that never recombines with the X chromosome experiences reduced selection, which permits the insertion of transposable elements and the accumulation of deleterious mutations. The Y chromosome becomes larger through retroelement insertion and smaller through deletion of genetic material. The genus Humulus is also used as a model for the study of sex chromosome evolution. Based on the distribution of phylogenetic topologies, there are three regions on its sex chromosomes: one that stopped recombining in the ancestor of H. lupulus, a second that stopped recombining in modern H. lupulus, and a third, the pseudoautosomal region. H. lupulus is a rare case among plants in which the Y is smaller than the X, although its ancestor had X and Y chromosomes of the same size. This size difference might be expected to result from deletion of genetic material in the Y, but that is not the case; the dynamics are more complex, and the larger size of the X relative to the Y may instead be due to duplication or retrotransposition on the X while the Y remained the same size.

Non-vascular plants

Ferns and lycophytes have bisexual gametophytes, so there is no evidence for sex chromosomes. In the bryophytes, including liverworts, hornworts and mosses, sex chromosomes are common. The sex chromosomes in bryophytes affect what type of gamete is produced by the gametophyte, and there is wide diversity in gametophyte type. Unlike seed plants, where gametophytes are always unisexual, in bryophytes they may produce male, female, or both types of gamete.

Bryophytes most commonly employ a UV sex-determination system, where U produces female gametophytes and V produces male gametophytes. The U and V chromosomes are heteromorphic with U larger than V and are frequently both larger than the autosomes. There is variation even within this system, including UU/V and U/VV chromosome arrangements. In some bryophytes, microchromosomes have been found to co-occur with sex chromosomes and likely impact sex determination.

Gymnosperms

Dioecy is common among gymnosperms, found in an estimated 36% of species. However, heteromorphic sex chromosomes are relatively rare, with only six species known as of 2014. Five of these use an XY system, and one (Ginkgo biloba) uses a WZ system. Some gymnosperms, such as Johann's pine (Pinus johannis), have homomorphic sex chromosomes that are almost indistinguishable through karyotyping.

Angiosperms

Cosexual angiosperms with either monoecious or hermaphroditic flowers do not have sex chromosomes. Angiosperms with separate sexes (dioecious) may use sex chromosomes or environmental cues for sex determination. Cytogenetic data from about 100 angiosperm species showed heteromorphic sex chromosomes in approximately half, mostly taking the form of XY sex-determination systems. Their Y is typically larger, unlike in humans; however, there is diversity among angiosperms. In the poplar genus (Populus) some species have male heterogamety while others have female heterogamety. Sex chromosomes have arisen independently multiple times in angiosperms, from the monoecious ancestral condition. The move from a monoecious to a dioecious system requires both male and female sterility mutations to be present in the population. Male sterility likely arises first as an adaptation to prevent selfing. Once male sterility has reached a certain prevalence, female sterility may then have a chance to arise and spread.

In the domesticated papaya (Carica papaya), three sex chromosomes are present, denoted as X, Y and Yh. This corresponds with three sexes: females with XX chromosomes, males with XY, and hermaphrodites with XYh. The hermaphrodite sex is estimated to have arisen only 4000 years ago, post-domestication of the plant. The genetic architecture suggests that either the Y chromosome has an X-inactivating gene, or that the Yh chromosome has an X-activating gene.

Medical applications

Allosomes not only carry the genes that determine male and female traits but also those for some other characteristics. Genes carried by either sex chromosome are said to be sex-linked. Sex-linked diseases are passed down through families through one of the X or Y chromosomes. Since only males normally inherit a Y chromosome, they are the only ones to inherit Y-linked traits. Men and women can both get X-linked conditions, since both inherit X chromosomes.

An allele is said to be either dominant or recessive. Dominant inheritance occurs when an abnormal gene from one parent causes disease even though the matching gene from the other parent is normal; the abnormal allele dominates. Recessive inheritance is when both matching genes must be abnormal to cause disease. If only one gene in the pair is abnormal, the disease does not occur or is mild. Someone who has one abnormal gene (but no symptoms) is called a carrier; a carrier can pass this abnormal gene to his or her children. The X chromosome carries about 1,500 genes, more than any other chromosome in the human body. Most of them code for something other than female anatomical traits. Many of the non-sex-determining X-linked genes are responsible for abnormal conditions. The Y chromosome carries about 78 genes. Most of the Y chromosome genes are involved with essential cell housekeeping activities and sperm production. Only one of the Y chromosome genes, the SRY gene, is responsible for male anatomical traits. When any of the nine genes involved in sperm production are missing or defective, the result is usually very low sperm counts and infertility. Examples of mutations on the X chromosome include the following common diseases:

  • Color blindness or color vision deficiency is the inability or decreased ability to see color, or perceive color differences, under normal lighting conditions. Color blindness affects many individuals in the population. There is no actual blindness, but there is a deficiency of color vision. The most usual cause is a fault in the development of one or more sets of retinal cones that perceive color in light and transmit that information to the optic nerve. This type of color blindness is usually a sex-linked condition. The genes that produce photopigments are carried on the X chromosome; if some of these genes are missing or damaged, color blindness will be expressed in males with a higher probability than in females because males only have one X chromosome.
  • Hemophilia refers to a group of bleeding disorders in which it takes a long time for the blood to clot; its inheritance is X-linked recessive. Hemophilia is much more common in males than females because males are hemizygous: they have only one copy of the gene in question and therefore express the trait when they inherit one mutant allele. In contrast, a female must inherit two mutant alleles, a less frequent event since the mutant allele is rare in the population. X-linked traits are inherited from carrier mothers or from an affected father. Each son born to a carrier mother has a 50% probability of inheriting the X chromosome carrying the mutant allele (see the sketch after this list).
  • Fragile X syndrome is a genetic condition involving changes in part of the X chromosome. It is the most common form of inherited intellectual disability (mental retardation) in males. It is caused by a change in a gene called FMR1. A small part of the gene code is repeated on a fragile area of the X chromosome. The more repeats, the more likely there is to be a problem. Males and females can both be affected, but because males have only one X chromosome, a single fragile X is likely to affect them more. Most fragile-X males have large testes, big ears, narrow faces, and sensory processing disorders that result in learning disabilities.
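
The 50% figure for X-linked recessive inheritance can be checked by enumerating the equally likely gamete combinations. The Python sketch below does this for a carrier mother and an unaffected father; the allele labels ("XA" for an X carrying the normal allele, "Xa" for an X carrying the mutant allele) are hypothetical and used only for illustration.

from itertools import product

mother = ["XA", "Xa"]   # carrier: one normal X, one X with the mutant allele
father = ["XA", "Y"]    # unaffected father

offspring = [tuple(sorted(pair)) for pair in product(mother, father)]
sons = [g for g in offspring if "Y" in g]
affected_sons = [g for g in sons if "Xa" in g]
print(offspring)                       # four equally likely combinations
print(len(affected_sons) / len(sons))  # 0.5: each son has a 50% chance of the mutant X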

Other complications include:

  • 46,XX testicular disorder of sex development, also called XX male syndrome, is a condition in which individuals with two X chromosomes in each cell, the pattern normally found in females, have a male appearance. People with this disorder have male external genitalia. In most people with 46,XX testicular disorder of sex development, the condition results from an exchange of genetic material between chromosomes (translocation). This exchange occurs as a random event during the formation of sperm cells in the affected person's father. The SRY gene (normally on the Y chromosome) is misplaced in this disorder, onto an X chromosome. Any person with an X chromosome that carries the SRY gene will develop male characteristics despite not having a Y chromosome.

Right to education

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Right_to_education ...