
Wednesday, February 11, 2015

Moore's law


From Wikipedia, the free encyclopedia


Plot of CPU transistor counts against dates of introduction; note the logarithmic vertical scale; the line corresponds to exponential growth with transistor count doubling every two years.

An Osborne Executive portable computer, from 1982, with a Zilog Z80 4 MHz CPU, and a 2007 Apple iPhone with a 412 MHz ARM11 CPU; the Executive weighs 100 times as much, has nearly 500 times as much volume, cost approximately 10 times as much (adjusted for inflation), and has about 1/100th the clock frequency of the smartphone.

"Moore's law" is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years. The observation is named after Gordon E. Moore, co-founder of the Intel Corporation, who first described the trend in a 1965 paper[1][2][2][3] and formulated its current statement in 1975. His prediction has proven to be accurate, in part because the law now is used in the semiconductor industry to guide long-term planning and to set targets for research and development.[4] The capabilities of many digital electronic devices are strongly linked to Moore's law: quality-adjusted microprocessor prices,[5] memory capacity, sensors and even the number and size of pixels in digital cameras.[6] All of these are improving at roughly exponential rates as well.

This exponential improvement has dramatically enhanced the effect of digital electronics in nearly every segment of the world economy.[7] Moore's law describes a driving force of technological and social change, productivity, and economic growth in the late twentieth and early twenty-first centuries.[8][9][10][11]

The period is often quoted as 18 months because of Intel executive David House, who predicted that chip performance would double every 18 months (a combination of having more transistors and the transistors themselves being faster).[12]

Although this trend has continued for more than half a century, "Moore's law" should be considered an observation or conjecture and not a physical or natural law. Sources in 2005 expected it to continue until at least 2015 or 2020.[note 1][14] The 2010 update to the International Technology Roadmap for Semiconductors, however, predicted that growth would slow at the end of 2013,[15] after which transistor counts and densities would double only every three years.
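
To make the different doubling periods concrete, here is a minimal back-of-the-envelope sketch (not from the original article) comparing growth over a decade under 18-month, two-year, and three-year doubling:

```python
# Back-of-the-envelope growth factors for different doubling periods.

def growth_factor(years: float, doubling_period: float) -> float:
    """Multiplicative increase after `years` if the quantity doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for label, period in [("18 months", 1.5), ("2 years", 2.0), ("3 years", 3.0)]:
    print(f"Doubling every {label}: ~{growth_factor(10, period):.0f}x over a decade")

# Output:
# Doubling every 18 months: ~102x over a decade
# Doubling every 2 years: ~32x over a decade
# Doubling every 3 years: ~10x over a decade
```

At a three-year doubling time, a decade yields only roughly a tenfold increase, consistent with the roadmap figures cited later in this article.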

History


Gordon Moore in 2004

For the thirty-fifth anniversary issue of Electronics Magazine, which was published on April 19, 1965, Gordon E. Moore, who was working as the director of research and development (R&D) at Fairchild Semiconductor at the time, was asked to predict what was going to happen in the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits".[16] In it, he speculated that by 1975 it would be possible to contain as many as 65,000 components on a single quarter-inch semiconductor.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
[emphasis added]
G. Moore, 1965

His reasoning was a log-linear relationship between device complexity (higher circuit density at reduced cost) and time.[17][18]

In 1975, Moore revised his forecast, slowing the predicted rate of density doubling to once every 24 months. During the 1975 IEEE International Electron Devices Meeting he outlined his analysis of the factors contributing to this exponential behavior:[17][18]
  • Die sizes were increasing at an exponential rate, and as defect densities decreased, chip manufacturers could work with larger areas without sacrificing yield
  • Simultaneous evolution to finer minimum dimensions
  • and what Moore called "circuit and device cleverness"
Shortly after the 1975 IEEE Meeting, Caltech professor Carver Mead popularized the term "Moore's law".[2][19]

Despite a popular misconception, Moore is adamant that he did not predict a doubling "every 18 months." Rather, David House, an Intel colleague, had factored in the increasing performance of transistors to conclude that integrated circuits would double in performance every 18 months.

Predictions of similar increases in computer power had existed years prior. For example, Douglas Engelbart discussed the projected downscaling of integrated circuit size in 1959 [20] or 1960.[21]
In April 2005, Intel offered US$10,000 to purchase a copy of the original Electronics Magazine issue in which Moore's article appeared.[22] An engineer living in the United Kingdom was the first to find a copy and offer it to Intel.[23]

As a target for industry and a self-fulfilling prophecy

Although Moore's law initially was made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry.

This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon attain. In this regard, it may be viewed as a self-fulfilling prophecy.[4][24]

Moore's second law

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. Rising manufacturing costs are an important consideration for the sustaining of Moore's law.[25] This has led to the formulation of Moore's second law, also called Rock's law, which is that the capital cost of a semiconductor fab also increases exponentially over time.[26][27]

Major enabling factors and future trends

Numerous innovations by a large number of scientists and engineers have helped significantly to sustain Moore's law since the beginning of the integrated circuit (IC) era. Although assembling a detailed list of such significant contributions would be as desirable as it would be difficult, a few innovations are listed below as examples of breakthroughs that have played a critical role in the advancement of integrated circuit technology by more than seven orders of magnitude in less than five decades:
  • The foremost contribution, which is the raison d’etre for Moore's law, is the invention of the integrated circuit, credited contemporaneously to Jack Kilby at Texas Instruments[28] and Robert Noyce at Fairchild Semiconductor.[29]
  • The invention of the complementary metal–oxide–semiconductor (CMOS) process by Frank Wanlass in 1963 [30] and a number of advances in CMOS technology by many workers in the semiconductor field since the work of Wanlass have enabled the extremely dense and high-performance ICs that the industry makes today.
  • The invention of dynamic random access memory (DRAM) technology by Robert Dennard at IBM in 1967 [31] made it possible to fabricate single-transistor memory cells, and the invention of flash memory by Fujio Masuoka at Toshiba in the 1980s[32][33][34] led to low-cost, high-capacity memory in diverse electronic products.
  • The invention of chemically-amplified photoresist by C. Grant Willson, Hiroshi Ito and J.M.J. Fréchet at IBM c.1980[35][36][37] provided a resist 10–100 times more sensitive to ultraviolet light.[38] IBM introduced chemically amplified photoresist for DRAM production in the mid-1980s.[39][40]
  • The invention of deep UV excimer laser photolithography by Kanti Jain[41] at IBM c.1980[42][43][44] has enabled the smallest features in ICs to shrink from 800 nanometers in 1990 to as low as 22 nanometers in 2012.[45] This built on the invention of the excimer laser in 1970 [46] by Nikolai Basov, V. A. Danilychev and Yu. M. Popov at the Lebedev Physical Institute. From a broader scientific perspective, the invention of excimer laser lithography has been highlighted as one of the major milestones in the 50-year history of the laser.[47][48]
  • The interconnect innovations of the late 1990s built on chemical mechanical planarization (CMP), which IBM developed c.1980 based on the centuries-old polishing process for making telescope lenses.[49] CMP smooths the chip surface. Intel used chemical-mechanical polishing to enable additional layers of metal wires in 1990, and to achieve higher transistor density (tighter spacing) via trench isolation, local polysilicon (wires connecting nearby transistors), and improved wafer yield (all in 1995).[50][51] Higher yield, the fraction of working chips on a wafer, reduces manufacturing cost. IBM, with assistance from Motorola, used CMP to introduce lower-resistance copper interconnect in place of aluminum in 1997.[52]
Computer industry technology road maps predict (as of 2001) that Moore's law will continue for several generations of semiconductor chips. Depending on the doubling time used in the calculations, this could mean up to a hundredfold increase in transistor count per chip within a decade. The semiconductor industry technology roadmap uses a three-year doubling time for microprocessors, leading to a tenfold increase in the next decade.[53] Intel was reported in 2005 as stating that the downsizing of silicon chips with good economics can continue during the next decade,[note 1] and in 2008 as predicting the trend through 2029.[54]

Some of the new directions in research that may allow Moore's law to continue are:
  • In April 2008, researchers at HP Labs announced the creation of a working memristor, a fourth basic passive circuit element whose existence had previously only been theorized. The memristor's unique properties permit the creation of smaller and better-performing electronic devices.[57]
  • In February 2010, researchers at the Tyndall National Institute in Cork, Ireland announced a breakthrough in transistors with the design and fabrication of the world's first junctionless transistor. The research, led by Professor Jean-Pierre Colinge, was published in Nature Nanotechnology and describes a control gate around a silicon nanowire that can tighten around the wire to the point of closing down the passage of electrons without the use of junctions or doping. The researchers claim that the new junctionless transistors may be produced at 10-nanometer scale using existing fabrication techniques.[58]
  • In April 2011, a research team at the University of Pittsburgh announced the development of a single-electron transistor, 1.5 nanometers in diameter, made out of oxide based materials. According to the researchers, three "wires" converge on a central "island" that can house one or two electrons. Electrons tunnel from one wire to another through the island. Conditions on the third wire result in distinct conductive properties including the ability of the transistor to act as a solid state memory.[59]
  • In February 2012, a research team at the University of New South Wales announced the development of the first working transistor consisting of a single atom placed precisely in a silicon crystal (not just picked from a large sample of random transistors).[60] Moore's law predicted this milestone to be reached in the lab by 2020.
  • In April 2014, bioengineers at Stanford University developed a new circuit board modeled on the human brain. 16 custom-designed "Neurocore" chips simulate 1 million neurons and billions of synaptic connections. This Neurogrid is claimed to be 9,000 times faster and more energy efficient than a typical PC. The cost of the prototype was $40,000. With current technology, however, a similar Neurogrid could be made for $400.[61]
  • The advancement of nanotechnology could spur the creation of microscopic computers and restore Moore's Law to its original rate of growth.[62][63][64]

The trend of scaling for NAND flash memory allows doubling of components manufactured in the same wafer area in less than 18 months

Ultimate limits


Atomistic simulation result for formation of inversion channel (electron density) and attainment of threshold voltage (IV) in a nanowire MOSFET. Note that the threshold voltage for this device lies around 0.45 V. Nanowire MOSFETs lie toward the end of the ITRS road map for scaling devices below 10 nm gate lengths.[53]

On 13 April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens". He also noted that transistors eventually would reach the limits of miniaturization at atomic levels:
In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far—but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.
[65]
In January 1995, the Digital Alpha 21164 microprocessor had 9.3 million transistors. This 64-bit processor was a technological spearhead at the time, even if the circuit's market share remained average. Six years later, a state of the art microprocessor contained more than 40 million transistors. It is theorised that, with further miniaturisation, by 2015 these processors should contain more than 15 billion transistors, and by 2020 will be in molecular scale production, where each molecule can be individually positioned.[66]

In 2003, Intel predicted the end would come between 2013 and 2018 with 16 nanometer manufacturing processes and 5 nanometer gates, due to quantum tunnelling, although others suggested chips could just get larger, or become layered.[67] In 2008 it was noted that for the last 30 years, it has been predicted that Moore's law would last at least another decade.[54]

Some see the limits of the law as being in the distant future. Lawrence Krauss and Glenn D. Starkman announced an ultimate limit of approximately 600 years in their paper,[68] based on rigorous estimation of total information-processing capacity of any system in the Universe, which is limited by the Bekenstein bound. On the other hand, based on first principles, there are predictions that Moore's law will collapse in the next few decades [20–40 years].[69][70]

One also could limit the theoretical performance of a rather practical "ultimate laptop" with a mass of one kilogram and a volume of one litre. This is done by considering the speed of light, the quantum scale, the gravitational constant, and the Boltzmann constant, giving a performance of 5.4258 ⋅ 10^50 logical operations per second on approximately 10^31 bits.[71]
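
For readers who want to reproduce the headline number, the operation-rate part of this bound follows from the Margolus–Levitin limit, operations per second ≤ 2E/(πħ) with E = mc². The short Python sketch below is an illustration of that arithmetic, not the cited paper's derivation:

```python
# Sketch of the "ultimate laptop" operation-rate bound via the Margolus-Levitin
# limit: operations per second <= 2*E / (pi * hbar), with E = m*c^2.
import math

c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J*s
m = 1.0            # mass of the hypothetical one-kilogram laptop, kg

energy = m * c ** 2                            # rest-mass energy, joules
ops_per_second = 2 * energy / (math.pi * hbar)

print(f"~{ops_per_second:.3e} logical operations per second")  # ~5.426e+50
# The ~10^31-bit memory figure comes from a separate entropy (Bekenstein-type)
# argument and is not computed here.
```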

Then again, the law often has met obstacles that first appeared insurmountable, but were indeed surmounted before long. In that sense, Moore says he now sees his law as more beautiful than he had realized: "Moore's law is a violation of Murphy's law. Everything gets better and better."[72]

Consequences and limitations

Technological change is a combination of more and of better technology. A 2011 study in the journal Science showed that the peak of the rate of change of the world's capacity to compute information was in the year 1998, when the world's technological capacity to compute information on general-purpose computers grew at 88% per year.[73] Since then, technological change clearly has slowed. In recent times, every new year allowed humans to carry out roughly 60% of the computations that possibly could have been executed by all existing general-purpose computers before that year.[73] This still is exponential, but shows the varying nature of technological change.[74]

The primary driving force of economic growth is the growth of productivity,[10] and Moore's law factors into productivity. Moore (1995) expected that “the rate of technological progress is going to be controlled from financial realities.”[75] The reverse could and did occur around the late-1990s, however, with economists reporting that "Productivity growth is the key economic indicator of innovation."[11] An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth,[76][77][78] which reached 3.4% per year in 1997-2004, outpacing the 1.6% per year during both 1972-1996 and 2005-2013.[79] As economist Richard G. Anderson notes, “Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products).”[80]

Intel transistor gate length trend - transistor scaling has slowed down significantly at advanced (smaller) nodes

While physical limits to transistor scaling such as source-to-drain leakage, limited gate metals, and limited options for channel material have been reached, new avenues for continued scaling are open. The most promising of these approaches rely on using the spin state of the electron (spintronics), tunnel junctions, and advanced confinement of channel materials via nano-wire geometry. A comprehensive list of available device choices shows that a wide range of device options is open for continuing Moore's law into the next few decades.[81] Spin-based logic and memory options are being developed actively in industrial labs,[82] as well as academic labs.[83]

Another source of improved performance is in microarchitecture techniques exploiting the growth of available transistor count. Out-of-order execution and on-chip caching and prefetching reduce the memory latency bottleneck at the expense of using more transistors and increasing the processor complexity. These increases are described empirically by Pollack's Rule, which states that performance increases due to microarchitecture techniques scale with the square root of the number of transistors or the area of a processor.
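
As a rough illustration of Pollack's Rule (a sketch of the square-root relationship, not a figure from any particular vendor), spending a growing transistor budget on a single core yields sharply diminishing returns:

```python
# Pollack's Rule sketch: single-core performance from microarchitecture scales
# roughly with the square root of the transistor count (or die area) devoted to it.
import math

def pollack_speedup(transistor_ratio: float) -> float:
    """Approximate performance gain for a given multiplier on the transistor budget."""
    return math.sqrt(transistor_ratio)

print(f"2x transistors -> ~{pollack_speedup(2):.2f}x performance")  # ~1.41x
print(f"4x transistors -> ~{pollack_speedup(4):.2f}x performance")  # ~2.00x
# This diminishing return is one reason extra transistors increasingly went to
# more cores, caches, and specialized units rather than ever-larger single cores.
```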

For years, processor makers delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification.[84] Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded manner to take full advantage of the hardware. Many multi-threaded development paradigms introduce overhead and will not see a linear increase in speed versus the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases. There are cases where a roughly 45% increase in processor transistors has translated to a roughly 10–20% increase in processing power.[85]

On the other hand, processor manufacturers are taking advantage of the 'extra space' that the transistor shrinkage provides to add specialized processing units to deal with features such as graphics, video, and cryptography. For example, Intel's Parallel JavaScript extension not only adds support for multiple cores, but also for the other non-general processing features of their chips, as part of the migration in client side scripting toward HTML5.[86]

A negative implication of Moore's law is obsolescence: as technologies continue to rapidly "improve", these improvements may be significant enough to render predecessor technologies obsolete rapidly. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence may pose obstacles to smooth or continued operations.[87] Because of the toxic materials used in the production of modern computers, obsolescence, if not properly managed, may lead to harmful environmental impacts.[88]

Moore's law has affected the performance of other technologies significantly: Michael S. Malone wrote of a Moore's War following the apparent success of shock and awe in the early days of the Iraq War. Progress in the development of guided weapons depends on electronic technology.[89] Improvements in circuit density and low-power operation associated with Moore's law also have contributed to the development of Star Trek-like technologies including mobile telephones[90] and replicator-like 3-D printing.[91]

Other formulations and similar observations

Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor,"[75] at minimum cost.

Transistors per integrated circuit - The most popular formulation is of the doubling of the number of transistors on integrated circuits every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. The graph at the top shows this trend holds true today.

Density at minimum cost per transistor - This is the formulation given in Moore's 1965 paper.[1] It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[92] As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]

Dennard scaling - This suggests that power requirements are proportional to area (both voltage and current being proportional to length) for transistors. Combined with Moore's law, performance per watt would grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling transistor dimensions are scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This reduces the delay by 30% (0.7x) and therefore increases operating frequency by about 40% (1.4x). Finally, to keep electric field constant, voltage is reduced by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%.[note 2] Therefore, in every technology generation transistor density doubles, circuit becomes 40% faster, while power consumption (with twice the number of transistors) stays the same.[93]
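
These percentages all follow from a single scaling factor of roughly k = 0.7 per generation; the short sketch below (illustrative arithmetic only) reproduces the numbers quoted above:

```python
# Numerical check of the Dennard-scaling figures quoted above, for one
# technology generation with linear dimensions scaled by k = 0.7.
k = 0.7

area = k ** 2              # ~0.49: area reduced by ~50%
delay = k                  # ~0.70: delay reduced by ~30%
frequency = 1 / k          # ~1.43: roughly 40% faster
voltage = k                # supply voltage reduced by ~30%
capacitance = k            # gate capacitance scales with linear dimension

energy_per_switch = capacitance * voltage ** 2        # ~0.34: energy down ~65%
power_per_transistor = energy_per_switch * frequency  # ~0.49: power down ~50%

# Twice as many transistors fit in the same area (1/area ~ 2x), so total chip
# power stays roughly constant: 2 * 0.49 ~ 1.0.
total_chip_power = (1 / area) * power_per_transistor
print(round(energy_per_switch, 2), round(power_per_transistor, 2), round(total_chip_power, 2))
# 0.34 0.49 1.0
```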

The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling appears to have broken down, so even though Moore's law continued for several years after that, it has not yielded dividends in improved performance.[94][95][96] The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges, and also causes the chip to heat up, which creates a threat of thermal runaway and therefore, further increases energy costs.[94][95][96] The breakdown of Dennard scaling prompted a switch among some chip manufacturers to a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued.[97][98] In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.[99]

Quality adjusted price of IT equipment - The price of Information Technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009.[100][101] The pace accelerated, however, to 23% per year in 1995–1999, triggered by faster IT innovation,[11] and later slowed to 2% per year in 2010–2013.[100][102]

The rate of quality-adjusted microprocessor price improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later.[103][104] Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013.[105]
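
The halving times quoted here follow directly from the annual decline rates; a small sketch (illustrative arithmetic, not taken from the cited studies) shows the conversion:

```python
# Converting an annual price-decline rate into a halving time, to check the
# figures quoted above (60%/year ~ nine months, 30%/year ~ two years).
import math

def halving_time_years(annual_decline: float) -> float:
    """Years for a price to halve if it falls by the fraction `annual_decline` each year."""
    return math.log(0.5) / math.log(1.0 - annual_decline)

print(f"60% per year -> halves every ~{halving_time_years(0.60) * 12:.0f} months")  # ~9 months
print(f"30% per year -> halves every ~{halving_time_years(0.30):.1f} years")        # ~1.9 years
```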

The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully.[103][106][107] Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count: “The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that when plotted on semi-log paper approximates a straight line. I hesitate to review its origins and by doing so restrict its definition.”[75]

Moore (2003) credits chemical mechanical planarization (chip smoothing) with increasing the connectivity of microprocessors from two or three metal layers in the early 1990s to seven in 2003.[50] This progressed to nine metal layers in 2007 and thirteen in 2014.[108][109][110] Connectivity improves performance, and relieves network congestion. Just as additional floors do not enlarge a building's footprint, added connectivity is not tallied in the transistor count. Microprocessors rely more on communications (interconnect) than do DRAM chips, which have three or four metal layers.[111][112][113] Microprocessor prices in the late 1990s improved faster than DRAM prices.[103]

Hard disk drive areal density - A similar observation (sometimes called Kryder's law) was made as of 2005 for hard disk drive areal density.[114] Several decades of rapid progress resulted from the use of error correcting codes, the magnetoresistive effect, and the giant magnetoresistive effect. The Kryder rate of areal density advancement slowed significantly around 2010, because of noise related to smaller grain size of the disk media, thermal stability, and writability using available magnetic fields.[115][116]

Network capacity - According to Gerald (Gerry) Butters,[117][118] the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics,[119] a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[120] Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) are rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's Law says that the bandwidth available to users increases by 50% annually.[121]

Pixels per dollar - Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price, LCD and LED screens, and resolution.[122][123][124]

The great Moore's law compensator (TGMLC), also known as Wirth's law - This is generally referred to as bloat and is the principle that successive generations of computer software acquire enough bloat to offset the performance gains predicted by Moore's law. In a 2008 article in InfoWorld, Randall C. Kennedy,[125] formerly of Intel, introduced this term using successive versions of Microsoft Office between the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.

Library expansion - was calculated in 1945 by Fremont Rider to double in capacity every 16 years, if sufficient space were made available.[126] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on-demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission mediums. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called an Information Age.

The Carlson Curve - is a term coined by The Economist[127] to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson.[128] Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.[129] Carlson Curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.

Evolvability


From Wikipedia, the free encyclopedia

Evolvability is defined as the capacity of a system for adaptive evolution. Evolvability is the ability of a population of organisms to not merely generate genetic diversity, but to generate adaptive genetic diversity, and thereby evolve through natural selection.[1][2][3]

In order for a biological organism to evolve by natural selection, there must be a certain minimum probability that new, heritable variants are beneficial. Random mutations, unless they occur in DNA sequences with no function, are expected to be mostly detrimental. Beneficial mutations are always rare, but if they are too rare, then adaptation cannot occur. Early failed efforts to evolve computer programs by random mutation and selection[4] showed that evolvability is not a given, but depends on the representation of the program.[5] Analogously, the evolvability of organisms depends on their genotype-phenotype map.[6] This means that biological genomes are structured in ways that make beneficial changes less unlikely than they would otherwise be. This has been taken as evidence that evolution has created not just fitter organisms, but populations of organisms that are better able to evolve.

Alternative definitions

Wagner[7] describes two definitions of evolvability. According to the first definition, a biological system is evolvable:
  • if its properties show heritable genetic variation, and
  • if natural selection can thus change these properties.
According to the second definition, a biological system is evolvable:
  • if it can acquire novel functions through genetic change, functions that help the organism survive and reproduce.
For example, consider an enzyme with multiple alleles in the population. Each allele catalyzes the same reaction, but with a different level of activity. However, even after millions of years of evolution, exploring many sequences with similar function, no mutation might exist that gives this enzyme the ability to catalyze a different reaction. Thus, although the enzyme’s activity is evolvable in the first sense, that does not mean that the enzyme's function is evolvable in the second sense. However, every system evolvable in the second sense must also be evolvable in the first.

Pigliucci[8] recognizes three classes of definition, depending on timescale. The first corresponds to Wagner's first, and represents the very short timescales that are described by quantitative genetics. He divides Wagner's second definition into two categories, one representing the intermediate timescales that can be studied using population genetics, and one representing exceedingly rare long-term innovations of form.

Pigliucci's second evolvability definition includes Altenberg's [3] quantitative concept of evolvability, being not a single number, but the entire upper tail of the fitness distribution of the offspring produced by the population. This quantity was considered a "local" property of the instantaneous state of a population, and its integration over the population's evolutionary trajectory, and over many possible populations, would be necessary to give a more global measure of evolvability.

Generating more variation

More heritable phenotypic variation means more evolvability. While mutation is the ultimate source of heritable variation, its permutations and combinations also make a big difference. Sexual reproduction generates more variation (and thereby evolvability) relative to asexual reproduction (see evolution of sexual reproduction). Evolvability is further increased by generating more variation when an organism is stressed,[9] and thus likely to be less well adapted, but less variation when an organism is doing well. The amount of variation generated can be adjusted in many different ways, for example via the mutation rate, via the probability of sexual vs. asexual reproduction, via the probability of outcrossing vs. inbreeding, via dispersal, and via access to previously cryptic variants through the switching of an evolutionary capacitor. A large population size increases the influx of novel mutations each generation.[10]

Enhancement of selection

Rather than creating more phenotypic variation, some mechanisms increase the intensity and effectiveness with which selection acts on existing phenotypic variation.[11] For example:
  • Mating rituals that allow sexual selection on "good genes", and so intensify natural selection
  • Large effective population size lowering the threshold value of the selection coefficient above which selection becomes an important player. This could happen through an increase in the census population size (decreasing genetic drift), through an increase in the recombination rate (decreasing genetic draft), or through changes in the probability distribution of the numbers of offspring; a rough numerical sketch follows this list.
  • Recombination decreasing the importance of the Hill-Robertson effect, where different genotypes contain different adaptive mutations. Recombination brings the two alleles together, creating a super-genotype in place of two competing lineages.
  • Shorter generation time
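
As a rough quantitative sketch of the population-size point above (an illustration using the standard diploid approximation that drift dominates when |s| is much smaller than 1/(2Ne), not a claim from the cited sources), larger effective population sizes lower the selection-coefficient threshold:

```python
# Rough sketch: under the standard diploid approximation, genetic drift dominates
# when |s| << 1/(2*Ne), so a larger effective population size Ne lowers the
# selection-coefficient threshold above which selection becomes important.

def drift_threshold(effective_population_size: int) -> float:
    """Approximate selection coefficient below which drift overwhelms selection."""
    return 1.0 / (2 * effective_population_size)

for ne in (100, 10_000, 1_000_000):
    print(f"Ne = {ne:>9,}: selection matters roughly when |s| > {drift_threshold(ne):.1e}")
# Ne = 100 -> |s| > 5.0e-03; Ne = 1,000,000 -> |s| > 5.0e-07
```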

Robustness and evolvability

The relationship between robustness and evolvability depends on whether recombination can be ignored.[12] Recombination can generally be ignored in asexual populations and for traits affected by single genes.

Without recombination

Robustness will not increase evolvability in the first sense. In organisms with a high level of robustness, mutations will have smaller phenotypic effects than in organisms with a low level of robustness. Thus, robustness reduces the amount of heritable genetic variation on which selection can act. However, robustness may allow exploration of large regions of genotype space, increasing evolvability according to the second sense.[7][12] Even without genetic diversity, some genotypes have higher evolvability than others, and selection for robustness can increase the "neighborhood richness" of phenotypes that can be accessed from the same starting genotype by mutation. For example, one reason many proteins are less robust to mutation is that they have marginal thermodynamic stability, and most mutations reduce this stability further. Proteins that are more thermostable can tolerate a wider range of mutations and are more evolvable.[13] For polygenic traits, neighborhood richness contributes more to evolvability than does genetic diversity or "spread" across genotype space.[14]

With recombination

Temporary robustness, or canalisation, may lead to the accumulation of significant quantities of cryptic genetic variation. In a new environment or genetic background, this variation may be revealed and sometimes be adaptive.[12][15]

Exploration ahead of time

When mutational robustness exists, many mutants will persist in a cryptic state. Mutations tend to fall into two categories, having either a very bad effect or very little effect: few mutations fall somewhere in between.[16][17] Sometimes, these mutations will not be completely invisible, but still have rare effects, with very low penetrance. When this happens, natural selection weeds out the really bad mutations, while leaving the other mutations relatively unaffected.[18][19] While evolution has no "foresight" to know which environment will be encountered in the future, some mutations cause major disruption to a basic biological process, and will never be adaptive in any environment. Screening these out in advance leads to preadapted stocks of cryptic genetic variation.

Another way that phenotypes can be explored, prior to strong genetic commitment, is through learning. An organism that learns gets to "sample" several different phenotypes during its early development, and later sticks to whatever worked best. Later in evolution, the optimal phenotype can be genetically assimilated so it becomes the default behavior rather than a rare behavior. This is known as the Baldwin effect, and it can increase evolvability.[20][21]

Learning biases phenotypes in a beneficial direction. But an exploratory flattening of the fitness landscape can also increase evolvability even when it has no direction, for example when the flattening is a result of random errors in molecular and/or developmental processes. This increase in evolvability can happen when evolution is faced with crossing a "valley" in an adaptive landscape. This means that two mutations exist that are deleterious by themselves, but beneficial in combination. These combinations can evolve more easily when the landscape is first flattened, and the discovered phenotype is then fixed by genetic assimilation.[22][23][24]

Modularity

If every mutation affected every trait, then a mutation that was an improvement for one trait would be a disadvantage for other traits. This means that almost no mutations would be beneficial overall. But if pleiotropy is restricted to within functional modules, then mutations affect only one trait at a time, and adaptation is much less constrained. In a modular gene network, for example, a gene that induces a limited set of other genes that control a specific trait under selection may evolve more readily than one that also induces other gene pathways controlling traits not under selection.[11] Individual genes also exhibit modularity. A mutation in one cis-regulatory element of a gene's promoter region may allow the expression of the gene to be altered only in specific tissues, developmental stages, or environmental conditions rather than changing gene activity in the entire organism simultaneously.[11]

Evolution of evolvability

While variation yielding high evolvability could be useful in the long term, in the short term most of that variation is likely to be a disadvantage. For example, naively it would seem that increasing the mutation rate via a mutator allele would increase evolvability. But as an extreme example, if the mutation rate is too high then all individuals will be dead or at least carry a heavy mutation load.
Most of the time, short-term selection for low variation is thought to be more powerful than long-term selection for evolvability, making it difficult for natural selection to cause the evolution of evolvability. Other forces of selection also affect the generation of variation; for example, mutation and recombination may in part be byproducts of mechanisms to cope with DNA damage.[25]

When recombination is low, mutator alleles may still sometimes hitchhike on the success of adaptive mutations that they cause. In this case, selection can take place at the level of the lineage.[26] This may explain why mutators are often seen during experimental evolution of microbes. Mutator alleles can also evolve more easily when they only increase mutation rates in nearby DNA sequences, not across the whole genome: this is known as a contingency locus.

The evolution of evolvability is less controversial if it occurs via the evolution of sexual reproduction, or via the tendency of variation-generating mechanisms to become more active when an organism is stressed. The yeast prion [PSI+] may also be an example of the evolution of evolvability through evolutionary capacitance.[27][28] An evolutionary capacitor is a switch that turns genetic variation on and off. This is very much like bet-hedging against the risk that a future environment will be similar or different.[29] Theoretical models also predict the evolution of evolvability via modularity.[30] When the costs of evolvability are sufficiently short-lived, more evolvable lineages may be the most successful in the long-term.[31] However, the hypothesis that evolvability is an adaptation is often rejected in favor of alternative hypotheses, e.g. minimization of costs.[8]

Applications

The study of evolvability has fundamental importance for understanding very long term evolution of protein superfamilies[32][33] and organism phyla and kingdoms.[34][35][36] A thorough understanding of the details of long term evolution will likely form part of the Extended Evolutionary Synthesis (the update to the Modern Synthesis).[37][38][39] In addition, these phenomena have two main practical applications. For protein engineering we wish to increase evolvability, and in medicine and agriculture we wish to decrease it.

Firstly, for protein engineering it is important to understand the factors that determine how much a protein function can be altered. In particular, both rational design and directed evolution approaches aim to create changes rapidly through mutations with large effects.[40][41] Such mutations, however, commonly destroy enzyme function or at least reduce tolerance to further mutations.[42][43] Identifying evolvable proteins and manipulating their evolvability is becoming increasingly necessary in order to achieve ever larger functional modification of enzymes.[44]

Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs.[45][46][47] These same problems occur in agriculture with pesticide[48] and herbicide[49] resistance. It is possible that we are facing the end of the effective life of most available antibiotics,[50] and predicting the evolution and evolvability[51] of our pathogens, and devising strategies to slow or circumvent it, requires deeper knowledge of the complex forces driving evolution at the molecular level.[52]

This Incredible Hospital Robot Is Saving Lives.


Original link: http://www.wired.com/2015/02/incredible-hospital-robot-saving-lives-also-hate/
 
The Tug autonomous medical robot, aka Tuggy McFresh, aka Little McTuggy, aka the bane of my existence. Josh Valcarcel/WIRED

The robot, I’m told, is on its way. Any minute now you’ll see it. We can track them, you know. There’s quite a few of them, so it’s only a matter of time. Any minute now.
Ah, and here it is.

Far down the hospital hall, double doors part to reveal the automaton. There’s no dramatic fog or lighting—which I jot down as “disappointing”—only a white, rectangular machine about four feet tall. It waits for the doors to fully part, then cautiously begins to roll toward us, going about as fast as a casual walk, emitting a soft beep every so often to let the humans around it know it’s on a very important quest. It’s not traveling on a track. It’s unleashed. It’s free.

The robot, known as a Tug, edges closer and closer to me at the elbow of the L-shaped corridor and stops. It turns its wheels before accelerating through the turn, then suddenly halts once again. Josh, the photographer I’d brought along, is blocking its path, and by way of its sensors, the robot knows it. Tug, it seems, is programmed to avoid breaking knees.

This hospital—the University of California, San Francisco’s Mission Bay wing—had opened four days before our visit. From the start, a fleet of Tugs has been shuffling around the halls. They deliver drugs and clean linens and meals while carting away medical waste and soiled sheets and trash. And by the time the fleet spins up to 25 robots on March 1, it’ll be the largest swarm of Tug medical automatons in the world, with each robot traveling an admirable average of 12 miles a day.

The whole circus is, in a word, bewildering. The staff still seems unsure what to make of Tug. Reactions I witness range from daaawing over its cuteness (the gentle bleeping, the slow-going, the politeness of stopping before pancaking people) to an unconvincingly restrained horror that the machines had suddenly become sentient. I grew up in Silicon Valley and write for WIRED and even I’m confused about it. The whole thing is just weird.

It’s really weird. And I’m not sure I like it much.

Roll, Roll, Roll Your Scary-Intelligent Medical Robot

The Tug that’d emerged without so much as smoke or pyrotechnics had come from the kitchen, where the exhaust system hums worryingly loud and a man hands out hairnets and even a beardnet to Josh, who finds this more amusing than inconvenient. Dan Henroid, the hospital’s director of nutrition and food services, has brought me to a wall where Tugs are lined up charging in their docking stations, save for one robot out doing the rounds.
You’re looking at what is perhaps the only useful application of QR codes in the world. Tug scans it when it docks in the station so humans know where it is. Josh Valcarcel/WIRED

“We’ve named ours after fruit,” he says, forcefully, over the fans. “So we have Apple, Grape, Banana, Orange, Pear—and Banana is out right now. At some point we’ll get them skins so they actually look like the fruit.” Other departments have their own naming conventions, with monikers that include Tuggy McFresh and Little McTuggy, plus Wall-E and of course the love of his life, Eve (the hospital is apparently trying to get permission from Disney to dress them up like they appear in the movie). Other Tugs will be stylized as cable cars, because, well, it’s San Francisco and why the hell not.

If you’re a patient here, you can call down to Henroid and his team and place your order if you’re keen on being a savage, or you can use the fancy tablet at your bedside and tap your order in. Down in the kitchen, the cooks—who aren’t robots—fire up your food, load it onto a Tug, and use a touchscreen next to the docking stations to tell the robot where to go. Once the food is loaded, the Tug will wait for 10 minutes, then depart, whether it has just one tray or 12, its max capacity.

There are no beacons to guide the Tugs. Instead, they use maps in their brains to navigate. They’re communicating with the overall system through the hospital’s Wi-Fi, which also allows them to pick up fire alarms and get out of the way so carbon-based lifeforms can escape. Rolling down the halls using a laser and 27 infrared and ultrasonic sensors to avoid collisions, a Tug will stop well away from the elevators and call one down through the Wi-Fi (to open doors, it uses radio waves). It’ll only board an elevator that’s empty, pulling in and doing a three-point turn to flip 180 degrees before disembarking. After it’s made its deliveries to any number of floors—the fleet has delivered every meal since the hospital opened—it gathers empty trays and returns them to the kitchen, where it starts the whole process anew.
The roboticized kitchen of the UCSF Medical Center. Josh Valcarcel/WIRED

And the cooks and other kitchen staff, says Henroid, adore them for it. “In fact, I think the most interesting thing is people have been very respectful of the robots. When we went and talked to other people at other hospitals, they said, ‘Oh, people get in the way.’ We haven’t had any of that. I think we did a lot as an organization to sort of prime people and say, ‘Hey, the robot’s got a job to do. Stay out of their way.’”

It sounds demeaning, but the humans had been coached on how to deal with robots. So welcome to the future. Your robot ethics instructor will see you now.

“Tuggy! Tuggy Tug!”

Isaac Asimov had three now-iconic rules for robots: They can’t hurt us or let us get hurt, they must follow orders, and they must protect their own existence. We can now tack onto these the new rules for the humans who interact with medical automatons.

“We had to train on a lot of robot etiquette, you know,” says operations director Brian Herriot as we walk the halls in search of Tugs, aided by a laptop that tracks their movements. “Which is, we train them to treat a robot like your grandma, and she’s in the hospital in a wheel chair. If something’s in their way, just move it aside, don’t go stand in front of them.”
Tug’s impressive array of sensors allows it to detect obstacles like humans. Jump in front of one and it’ll stop and route around you. Josh Valcarcel/WIRED

Asimov’s laws are good to keep in mind so we don’t end up with murderous hordes of machines, but we need to start talking more about the other side of things. How should we treat them? We need laws for human-robot interaction. For the moment, it seems that we’re supposed to just pretend they’re Grandma. That’s Law Number One. What the other laws will be, I’m not so sure. How will we treat AI that’s smart enough to pass as human, for instance? I mean, we’re already getting emotional about a box that rolls around hospitals. Maybe it’s too early to tell these things. Give me some time to think about it.

In this hospital, Law Number One is working. Most staffers have a strange nonreciprocal affection for Tugs. Reactions to our convoy of PR reps and technicians and me and Josh and of course robots included, but were not limited to:

• “Wall-E has an escort?”
• A woman watching a Tug turn: “I usually call it the Tug shuffle.” And her companion, subtly one-upping her with nice alliteration: “The Tug tango?”
• “Tuggy! Tuggy Tug!” And from a fan of brevity: “Tuggy!”
• Plus an outlier from two women who turned a corner to find themselves face to face with a Tug: “Whoa! The robot scares us!” The other woman didn’t say anything, but she didn’t defend the Tug either.

The affection is no accident. Aethon, Tug’s manufacturer, designed it to be comforting, and not in the sense that they avoided things like painting flames on it. It’s more subtle than that. The tone of that constant beep, beep, beep, for instance, was designed to alert humans without being so annoying that you want to wring Tug’s neck.

And then there’s the voice. Tug is chatty. Lest you worry that it’s broken down while waiting for the elevator, it assures you: “Waiting for a clear elevator.” Once it gets one: “Waiting for doors to open.” Tug warns you when it’s about to back up, and thanks you after you’ve unloaded its delivery. Its voice comes in either soothing male, soothing female, or super-enthusiastic Australian bro. Aethon had contracted with a client in Australia and decided to offer the voice track to other hospitals. Australians are famed for their friendliness, after all.

It may have an adult voice, but Tug has a childlike air, even though in this hospital you’re supposed to treat it like a wheelchair-bound old lady. It’s just so innocent, so earnest, and at times, a bit helpless. If there’s enough stuff blocking its way in a corridor, for instance, it can’t reroute around the obstruction.

This happened to the Tug we were trailing in pediatrics. “Oh, something’s in its way!” a woman in scrubs says with an expression like she herself had ruined the robot’s day. She tries moving the wheeled contraption but it won’t budge. “Uh, oh!” She shoves on it some more and finally gets it to move. “Go, Tug, go!” she exclaims as the robot, true to its programming, continues down the hall.

For as cute as Tug can be—and it pains me to say this—it’s also a bit creepy. There’s something unsettling about a robot that’s responsible for human lives tooling around with minimal commands. Maybe it’s that I occasionally felt like we were hunting wild animals, wandering around in search of Tug after Tug. While technicians can track a Tug’s movements, it isn’t always easy to immediately pinpoint and intercept them. We’re both roving parties, after all. We’d turn a corner and expect to see a Tug, only for it to pop through a door seconds later. That accuracy ain’t too bad in the grand scheme of things, but it nevertheless instilled a kind of suspense. It was like tracking a deer that suddenly emerged from the grass … and started beeping.

Alright, fine, maybe that simile isn’t airtight.

I, Robot Drug Dealer

There are two models of Tug roaming the corridors at UCSF Medical Center. The one that hauls food and laundry and such is like a pickup truck. It has a thinner front and a bed in back, which people roll big cabinets onto. The second is more like a van, boxier with built-in cabinets. This is the drug-pusher.

We’re in the hospital’s pharmacy now, meeting Wall-E and Eve. You can tell the difference because behind each hangs a plush toy of their namesake. They’ll hang there until the robots get their new outfits (pending approval from Disney’s lawyers, of course). A pharmacist gathers some drugs, scans their codes into a touchscreen next to the robots, and chooses the destination for each. Walking over to Wall-E, she enters a code on a number pad, then places her thumb on a biometric reader to unlock the machine. A small screen on the robot tells her which medication goes in which numbered drawer, and she proceeds to pop each open and place the drug inside. With a tap of the green button atop Wall-E, the robot is off.

I know at least a dozen of you are thinking that maybe you should get into the Tug drug heist business, so I’m gonna save you some time and embarrassment. Not only does unlocking the drawers require the PIN and thumb print of the doctor or nurse who requested the drugs, but Tug won’t unlock until it reaches its destination. Anywhere else and it’s sealed tight.

So Drug Tugs securely deliver medications, Linen Tugs haul as much as 1,000 pounds of laundry, and Food Tugs deliver 1,000 meals a day. We might begin to wonder about the people who previously did all that scurrying about. What was their fate?
Josh Valcarcel/WIRED

Well, according to Pamela Hudson, the medical center’s associate director of administration, their jobs are safe. In fact, she says that with such a massive new hospital, hiring in some departments is on the rise. The robots are about supplementing current jobs, she says, not eliminating them. “It would be a travesty for us to hire more techs who specialize in instrumentation but all they’re doing is running around delivering trays,” Hudson says. “That’s not the best use of their skills—that’s not a real job satisfier.” As an added perk, she says, if staffers aren’t pushing around huge carts, they’re not straining themselves or mowing down their colleagues.

Just down the road in Silicon Valley, El Camino Hospital has been using the bots since 2009. And according to its chief information officer, Greg Walton, there’s huge pressure to bring down the absurd cost of medical care in America, and Tugs have allowed them to avoid hiring additional staff. “So by being more efficient we’re able to devote more of our dollars toward paid employees at the bedside caring for patients,” he says, “as opposed to pushing trash carts or linen carts or moving products and supplies throughout the facility.”

It’d be laughably optimistic, though, to say that robots like Tug won’t infringe on more and more jobs as they grow more and more sophisticated. It’s already happening elsewhere. Robots, long just stealers of manufacturing jobs, are breaking out of the factory into the world. There’s a hotel opening this summer in Japan with robot receptionists. Last week a Roomba ate a woman’s hair as she slept on the floor, which never would have happened had she hired a maid. Soon enough our taxis will drive themselves. And before long Tug will get smart enough to really start chipping away at the hospital workforce. When that happens, there won’t be an outfit cute enough to keep it from playing the villain.

In the Future, Robots Will Be Even Smarter and I’ll Still Be a Dum-Dum

Listening to my audio recording of the visit, I notice a period of about 10 minutes when every so often someone giggles. I hadn’t noticed it at the time, but there’s definitely some suspiciously frequent snickering there. And people seemed to pause before answering my questions, as if over-contemplating things. But these were patently easy questions.

Riding an elevator to intercept another Tug, Josh points the camera in my face—and then it hits me. I hadn’t removed my fluffy white hairnet. Walking around a hospital in scrubs is perfectly normal, but wearing a hairnet beyond the kitchen is considered antisocial at best. I rip the thing off my head, and there is much laughter.

“I forgot about my hat. Thanks for telling me, guys.”

“Well, he told ya … with the camera,” someone in the convoy replies.

I’ve spent the morning tailing an autonomous robot that performs its duties without a hitch almost 100 percent of the time. And here I am, totally incapable of not making an ass out of myself in the line of duty. Right now I’m envying Tug not only on account of its perfection, but because it’s not programmed to feel embarrassment. All it does is roll around as doors magically part for it and doctors and nurses scurry about so as to not hinder Its Holiness the Tug.

Maybe that’s why super-intelligent robots make us uncomfortable. It’s not just fear that they’re dangerous or are going to steal our jobs—it’s envy. They’re not saddled with emotions. They do everything perfectly. They roll about assuming they can woo us with cute beeps and smooth lines like “thank you.” I, for one, shan’t be falling for it. I don’t like Tuggy one bit.

I throw the hairnet in a waste bin and continue on in search of the next ever-elusive Tug. It’s out there somewhere, helping save lives or whatever, trying a bit too hard to be liked. Someone’s probably calling it Tuggy Tug at this very moment, while I’m here trying to salvage what little social currency I have left.

There’s no robot for a man like me. Well, until I end up as a patient here, where there’s plenty of robots for a man like me. Then I’ll have no choice but to sit back and soak in the automated future of medicine—the beeping, the incessant politeness, the whir of electric motors. Count me out, though, when one of them starts talking like an Australian.

You can only push a man so far.
I petitioned unsuccessfully to get Josh fired for taking this photo. So he went ahead and added it to the story. Josh Valcarcel/WIRED

President Obama’s Climate Change Goals: Whose Goals Are They Anyway?

Original link: http://www.theepochtimes.com/n3/1245667-president-obamas-climate-change-goals-whose-goals-are-they-anyway/

"So that’s my plan. The actions I’ve announced today should send a strong signal to the world that America intends to take bold action to reduce carbon pollution.” — President Barack Obama, June 25, 2013, Georgetown University

In a spirited speech to an audience of students on the campus of Georgetown University, President Barack Obama laid out a four-step plan designed to reduce greenhouse gas emissions from cars, trucks, factories, and power plants—a plan that a report from the U.S. Senate Committee on Environment and Public Works says is controlled by the “Billionaire’s Club.”

The report, which came out in July 2014, accuses “a club of billionaires and their foundations” of controlling “the environmental movement and Obama’s [Environmental Protection Agency].”

During his speech, Obama touted the administration’s progress in securing America’s energy future: using more solar and wind energy, improving fuel-saving technology in cars, and producing more domestic oil in pursuit of independence from other nations’ oil.

The president resolutely directed the EPA “to put an end to the limitless dumping of carbon pollution from our power plants, and complete new pollution standards for both new and existing power plants.”

He was tenacious in his commitment to a better future for our children and grandchildren. “We’ve got to look after our children; we have to look after our future,” he said.

Obama extolled America’s duty to take the lead in a global climate change initiative and enlisted the help of Georgetown students to accomplish the goals he set, saying: “I’m going to need all of you to educate your classmates, your colleagues, your parents, your friends. Convince those in power to reduce our carbon pollution. Push your own communities to adopt smarter practices.”

The president’s consistent mantra for clean energy has resounded throughout his two terms, and appears to be the indelible mark he wants to make before leaving office in January 2017.

The Minority Report

But according to the minority report, those in power are already convinced. So much so that they are the ones orchestrating Obama’s whole climate change agenda.

This elite, “exclusive” group of anonymous millionaires and billionaires has established a dozen leading private foundations whose sole purpose is to spend money on environmental causes.

The Democracy Alliance (DA), one of the lead funders of the environmental movement, “works to create an all-encompassing far-left infrastructure to support affiliated and approved groups.” Its members pay dues of $30,000 and must contribute at least $200,000 to groups DA supports.

In a spring 2014 statement, the DA touted its “progressive victories,” which included “a series of executive actions to combat the threat of climate change … made possible by a well-aligned network of organizations.”

The Billionaire’s Club is also funneling money through 501(c)(3) and 501(c)(4) public charities with the stipulation that the funds be used only for environmental causes. One example is the New York-based Park Foundation, which funded a barrage of anti-fracking campaigns conducted by numerous groups in New York before Governor Cuomo officially banned fracking in the state.

According to the report, 501(c)(4) groups are known for “engaging in activities designed to influence elections and have no restrictions on their lobbying efforts.”

It says further that the 501(c)(4) groups are funded by the 501(c)(3) groups because the Billionaire’s Club gets better tax benefits by donating to the 501(c)(3)s.

Regardless of which type of public charity receives the donation, the charity is not required to disclose the source. In addition, members of the Billionaire’s Club can gain complete control over the activities of a public charity through fiscal sponsorships, “whereby the charity actually sells its nonprofit status to a group for a fee.”

The sponsorships are legally designed for short-term projects, like construction of a new park, but the report cites one sponsorship that lasted 23 years.

Although Obama’s administration called on today’s youth to get involved in advancing his plan, the Billionaire’s Club was already engaging them and catapulting the plan forward.

From 2009 to 2014, the Park Foundation awarded environment grants totaling between $2.8 million and $3.5 million per year, according to the foundation’s website.

This year, the push for global climate action is building toward its grand crescendo in December: the international climate change summit in Paris.

Numerous celebrities, the U.N., Al Gore, and environmental groups around the world have established two global campaigns, Live Earth and action2015, both planning a series of events this year in preparation for the summit and calling for climate action.

Who’s funding these gigantic campaigns, and why? Is there really a threat of global warming, and if not, why are scientists saying there is?

Why didn’t the Billionaire’s Club continue the environmental movement started by President Jimmy Carter in the 1970s?

Just who these anonymous donors are, and why they are so concerned about the environment, we may never know. As the report indicates: “It would be virtually impossible to examine this system completely given the enormity of this carefully coordinated effort and the lack of transparency surrounding it.”

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...