
Tuesday, March 17, 2015

Ecosystem


From Wikipedia, the free encyclopedia


Rainforest ecosystems are rich in biodiversity. This is the Gambia River in Senegal's Niokolo-Koba National Park.

An ecosystem is a community of living organisms (plants, animals and microbes) in conjunction with the nonliving components of their environment (things like air, water and mineral soil), interacting as a system.[2] These biotic and abiotic components are regarded as linked together through nutrient cycles and energy flows.[3] As ecosystems are defined by the network of interactions among organisms, and between organisms and their environment,[4] they can be of any size but usually encompass specific, limited spaces[5] (although some scientists say that the entire planet is an ecosystem).[6]

Energy, water, nitrogen and soil minerals are other essential abiotic components of an ecosystem. The energy that flows through ecosystems is obtained primarily from the sun. It generally enters the system through photosynthesis, a process that also captures carbon from the atmosphere. By feeding on plants and on one another, animals play an important role in the movement of matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes.[7]

Ecosystems are controlled both by external and internal factors. External factors such as climate, the parent material which forms the soil, and topography control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem.[8] Other external factors include time and potential biota. Ecosystems are dynamic entities—invariably, they are subject to periodic disturbances and are in the process of recovering from some past disturbance.[9] Ecosystems in similar environments that are located in different parts of the world can have very different characteristics simply because they contain different species.[8] The introduction of non-native species can cause substantial shifts in ecosystem function. Internal factors not only control ecosystem processes but are also controlled by them and are often subject to feedback loops.[8] While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading.[8] Other internal factors include disturbance, succession and the types of species present. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.[8]

Biodiversity affects ecosystem function, as do the processes of disturbance and succession. Ecosystems provide a variety of goods and services upon which people depend; the principles of ecosystem management suggest that rather than managing individual species, natural resources should be managed at the level of the ecosystem itself. Classifying ecosystems into ecologically homogeneous units is an important step towards effective ecosystem management, but there is no single, agreed-upon way to do this.

History and development

The term "ecosystem" was first used in a publication by British ecologist Arthur Tansley.[fn 1][10] Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment.[11] He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment".[12] Tansley regarded ecosystems not simply as natural units, but as mental isolates.[12] Tansley later[13] defined the spatial extent of ecosystems using the term ecotope.

G. Evelyn Hutchinson, a pioneering limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky to suggest that mineral nutrient availability in a lake limited algal production which would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas one step further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems, allowing them to study the flow of energy and material through ecological systems.[11]

Ecosystem processes

Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration.[14] Most mineral nutrients, on the other hand, are recycled within ecosystems.[15]

Ecosystems are controlled both by external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. The most important of these is climate.[8] Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and temperature seasonality determine the amount of water available to the ecosystem and the supply of energy available (by influencing photosynthesis).[8] Parent material, the underlying geological material that gives rise to soils, determines the nature of the soils present, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. This may be the difference between the ecosystem present in a wetland situated in a small depression on the landscape, and one present on an adjacent steep hillside.[8]

Other external factors that play an important role in ecosystem functioning include time and potential biota. Ecosystems are dynamic entities—invariably, they are subject to periodic disturbances and are in the process of recovering from some past disturbance.[9] Time plays a role in the development of soil from bare rock and the recovery of a community from disturbance.[8] Similarly, the set of organisms that can potentially be present in an area can also have a major impact on ecosystems. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present.[8] The introduction of non-native species can cause substantial shifts in ecosystem function.

Unlike external factors, internal factors in ecosystems not only control ecosystem processes, but are also controlled by them. Consequently, they are often subject to feedback loops.[8] While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading.[8] Other factors like disturbance, succession or the types of species present are also internal factors. Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.[8]

Primary production


Global oceanic and terrestrial phototroph abundance, from September 1997 to August 2000. As an estimate of autotroph biomass, it is only a rough indicator of primary production potential, and not an actual estimate of it. Provided by the SeaWiFS Project, NASA/Goddard Space Flight Center and ORBIMAGE.

Primary production is the production of organic matter from inorganic carbon sources. Overwhelmingly, this occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.

Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP).[16] About 48–60% of the GPP is consumed in plant respiration. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP).[14] Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.[16]
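The GPP/NPP bookkeeping described above can be restated as a short calculation; the example GPP value below is hypothetical, while the 48–60% respiration range is the figure given in the text.

```python
# Minimal sketch of the GPP/NPP bookkeeping described above.
# The example GPP value is hypothetical; the respiration fraction
# range (48-60% of GPP) is the figure quoted in the text.

gpp = 1000.0  # gross primary production, e.g. g C per m^2 per year (hypothetical)

for respired_fraction in (0.48, 0.60):
    respiration = gpp * respired_fraction
    npp = gpp - respiration  # net primary production = GPP minus plant respiration
    print(f"Respired {respired_fraction:.0%}: NPP = {npp:.0f} g C/m^2/yr")
```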

Energy flow

Left: Energy flow diagram of a frog. The frog represents a node in an extended food web. The energy ingested is utilized for metabolic processes and transformed into biomass. The energy flow continues on its path if the frog is ingested by predators, parasites, or as a decaying carcass in soil. This energy flow diagram illustrates how energy is lost as it fuels the metabolic process that transforms the energy and nutrients into biomass.
Right: An expanded three link energy food chain (1. plants, 2. herbivores, 3. carnivores) illustrating the relationship between food flow diagrams and energy transformity. The transformity of energy becomes degraded, dispersed, and diminished from higher quality to lesser quantity as the energy within a food chain flows from one trophic species into another. Abbreviations: I=input, A=assimilation, R=respiration, NU=not utilized, P=production, B=biomass.[17]
The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, roughly 90% of the NPP ends up being broken down by decomposers. The remainder is either consumed by animals while still alive and enters the plant-based trophic system, or it is consumed after it has died, and enters the detritus-based trophic system. In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher.[18] In trophic systems photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers (herbivores). Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.[18] The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey which are part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains.[18]
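The distinction between a food chain and a food web can be made concrete with a toy data structure; the species and feeding links below are illustrative only, loosely following the grasshopper/earthworm/bird example in the text.

```python
# Toy food web, represented as consumer -> list of foods.
# Species and links are illustrative; the bird feeds in both the
# plant-based (grasshopper) and detritus-based (earthworm) systems.

food_web = {
    "grasshopper": ["grass"],              # primary consumer (herbivore)
    "earthworm": ["detritus"],             # detritivore
    "bird": ["grasshopper", "earthworm"],  # secondary consumer spanning both systems
}

def trophic_paths(consumer, web, path=None):
    """Enumerate feeding chains from a consumer down to its basal resources."""
    path = (path or []) + [consumer]
    foods = web.get(consumer, [])
    if not foods:  # basal resource: end of the chain
        return [path]
    chains = []
    for food in foods:
        chains.extend(trophic_paths(food, web, path))
    return chains

for chain in trophic_paths("bird", food_web):
    print(" -> ".join(chain))
# Each single path (e.g. bird -> grasshopper -> grass) is a food chain;
# the full set of interlinked chains forms a food web.
```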

Decomposition

The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production, and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, dead organic matter would accumulate in an ecosystem and nutrients and atmospheric carbon dioxide would be depleted.[19] Approximately 90% of terrestrial NPP goes directly from plant to decomposer.[18]
Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered "lost" to it).[19] Newly shed leaves and newly dead animals have high concentrations of water-soluble components, which include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments, and much less important in dry ones.[19]

Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition.[19] Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.[19]

The chemical alteration of dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes which can break through the tough outer structures surrounding dead plant material. They also produce enzymes which break down lignin, which allows them access to both cell contents and to the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.[19]

Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself.[20] Temperature controls the rate of microbial respiration; the higher the temperature, the faster microbial decomposition occurs. Temperature also affects soil moisture: drier soils slow microbial growth and reduce leaching. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients which become available.[20]

Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in warm, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth. When the rains return and soils become wet, the osmotic gradient between the bacterial cells and the soil water causes the cells to gain water quickly. Under these conditions, many bacterial cells burst, releasing a pulse of nutrients.[20] Decomposition rates also tend to be slower in acidic soils.[20] Soils which are rich in clay minerals tend to have lower decomposition rates, and thus, higher levels of organic matter.[20] The smaller particles of clay result in a larger surface area that can hold water. The higher the water content of a soil, the lower the oxygen content[21] and consequently, the lower the rate of decomposition. Clay minerals also bind particles of organic material to their surface, making them less accessible to microbes.[20] Soil disturbances such as tilling increase decomposition by increasing the amount of oxygen in the soil and by exposing new organic matter to soil microbes.[20]

The quality and quantity of the material available to decomposers is another major factor that influences the rate of decomposition. Substances like sugars and amino acids decompose readily and are considered "labile". Cellulose and hemicellulose, which are broken down more slowly, are "moderately labile". Compounds which are more resistant to decay, like lignin or cutin, are considered "recalcitrant".[20] Litter with a higher proportion of labile compounds decomposes much more rapidly than does litter with a higher proportion of recalcitrant material. Consequently, dead animals decompose more rapidly than dead leaves, which themselves decompose more rapidly than fallen branches.[20] As organic material in the soil ages, its quality decreases. The more labile compounds decompose quickly, leaving an increasing proportion of recalcitrant material. Microbial cell walls also contain recalcitrant materials like chitin, and these also accumulate as the microbes die, further reducing the quality of older soil organic matter.[20]

Nutrient cycling

Biological nitrogen cycling

Ecosystems continually exchange energy and carbon with the wider environment; mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, deposition via precipitation, dust and gases, or application as fertilizer.[15] Since most terrestrial ecosystems are nitrogen-limited, nitrogen cycling is an important control on ecosystem production.[15]

Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen fixing bacteria either live symbiotically with plants, or live freely in the soil. The energetic cost is high for plants which support nitrogen-fixing symbionts—as much as 25% of GPP when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants.[15] Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust.[15] Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.[15]

When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification.[15] Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.[15]

Other important nutrients include phosphorus, sulfur, calcium, potassium, magnesium and manganese.[22] Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics).[22] Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.[22]

Function and biodiversity

Loch Lomond in Scotland forms a relatively isolated ecosystem. The fish community of this lake has remained stable over a long period until a number of introductions in the 1970s restructured its food web.[23]

Spiny forest at Ifaty, Madagascar, featuring various Adansonia (baobab) species, Alluaudia procera (Madagascar ocotillo) and other vegetation.

Ecosystem processes are broad generalizations that actually take place through the actions of individual organisms. The nature of the organisms—the species, functional groups and trophic levels to which they belong—dictates the sorts of actions these individuals are capable of carrying out, and the relative efficiency with which they do so. Thus, ecosystem processes are driven by the number of species in an ecosystem, the exact nature of each individual species, and the relative abundance of organisms within these species.[24] Biodiversity plays an important role in ecosystem functioning.[25]

Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise one species would competitively exclude the other.[26] Despite this, the cumulative effect of additional species in an ecosystem is not linear—additional species may enhance nitrogen retention, for example, but beyond some level of species richness, additional species may have little additive effect.[24] The addition (or loss) of species which are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large impact on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.[24]

Ecosystem goods and services

Ecosystems provide a variety of goods and services upon which people depend.[27] Ecosystem goods include the "tangible, material products"[28] of ecosystem processes—food, construction material, medicinal plants—in addition to less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.[27] Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value".[28] These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research.[27] While ecosystem goods have traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.[28] While Gretchen Daily's original definition distinguished between ecosystem goods and ecosystem services, Robert Costanza and colleagues' later work and that of the Millennium Ecosystem Assessment lumped all of these together as ecosystem services.[28]

Ecosystem management

When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management.[29] A variety of definitions exist: F. Stuart Chapin and coauthors define it as "the application of ecological science to resource management to promote long-term sustainability of ecosystems and the delivery of essential ecosystem goods and services",[30] while Norman Christensen and coauthors defined it as "management driven by explicit goals, executed by policies, protocols, and practices, and made adaptable by monitoring and research based on our best understanding of the ecological interactions and processes necessary to sustain ecosystem structure and function"[27] and Peter Brussard and colleagues defined it as "managing areas at various scales in such a way that ecosystem services and biological resources are preserved while appropriate human use and options for livelihood are sustained".[31]
Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions.[30] A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem;[30] "intergenerational sustainability [is] a precondition for management, not an afterthought".[27] It also requires clear goals with respect to future trajectories and behaviors of the system being managed. Other important requirements include a sound ecological understanding of the system, including connectedness, ecological dynamics and the context in which the system is embedded. Other important principles include an understanding of the role of humans as components of the ecosystems and the use of adaptive management.[27] While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems[27] (see, for example, agroecosystem and close to nature forestry).

Ecosystem dynamics


The High Peaks Wilderness Area in the 6,000,000-acre (2,400,000 ha) Adirondack Park is an example of a diverse ecosystem.

Ecosystems are dynamic entities—invariably, they are subject to periodic disturbances and are in the process of recovering from some past disturbance.[9] When an ecosystem is subject to some sort of perturbation, it responds by moving away from its initial state. The tendency of a system to remain close to its equilibrium state, despite that disturbance, is termed its resistance. On the other hand, the speed with which it returns to its initial state after disturbance is called its resilience.[9]

From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, an especially cold winter and a pest outbreak all constitute short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. These changes play out in changes in NPP, decomposition rates, and other ecosystem processes.[9] Longer-term changes also shape ecosystem processes—the forests of eastern North America still show legacies of cultivation which ceased 200 years ago, while methane production in eastern Siberian lakes is controlled by organic matter which accumulated during the Pleistocene.[9]

Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time and space that alters the structure of populations, communities and ecosystems and causes changes in resource availability or the physical environment".[32] This can range from tree falls and insect outbreaks to hurricanes and wildfires to volcanic eruptions and can cause large changes in plant, animal and microbe populations, as well as soil organic matter content.[9] Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."[32]

The frequency and severity of disturbance determine the way it impacts ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession.[9] More severe and more frequent disturbances result in longer recovery times. Ecosystems recover more quickly from less severe disturbance events.[9]

The early stages of primary succession are dominated by species with small propagules (seeds and spores) which can be dispersed long distances. The early colonizers—often algae, cyanobacteria and lichens—stabilize the substrate. Nitrogen supplies are limited in new soils, and nitrogen-fixing species tend to play an important role early in primary succession. Unlike in primary succession, the species that dominate secondary succession are usually present from the start of the process, often in the soil seed bank. In some systems the successional pathways are fairly consistent, and thus, are easy to predict. In others, there are many possible pathways—for example, the introduced nitrogen-fixing tree Myrica faya alters successional trajectories in Hawaiian forests.[9]

The theoretical ecologist Robert Ulanowicz has used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz depicts approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow, and eutrophication).[33]

Ecosystem ecology

A hydrothermal vent is an ecosystem on the ocean floor. (The scale bar is 1 m.)

Ecosystem ecology studies "the flow of energy and materials through organisms and the physical environment". It seeks to understand the processes which govern the stocks of material and energy in ecosystems, and the flow of matter and energy through them. The study of ecosystems can cover 10 orders of magnitude, from the surface layers of rocks to the surface of the planet.[34]

There is no single definition of what constitutes an ecosystem.[35] German ecologist Ernst-Detlef Schulze and coauthors defined an ecosystem as an area which is "uniform regarding the biological turnover, and contains all the fluxes above and below the ground area under consideration." They explicitly reject Gene Likens' use of entire river catchments as "too wide a demarcation" to be a single ecosystem, given the level of heterogeneity within such an area.[36] Other authors have suggested that an ecosystem can encompass a much larger area, even the whole planet.[6] Schulze and coauthors also rejected the idea that a single rotting log could be studied as an ecosystem, because the flows between the log and its surroundings are too large relative to the proportion cycled within the log.[36] Philosopher of science Mark Sagoff considers the failure to define "the kind of object it studies" to be an obstacle to the development of theory in ecosystem ecology.[35]

Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation.[37] Studies can be carried out at a variety of scales, from microcosms and mesocosms which serve as simplified representations of ecosystems, through whole-ecosystem studies.[38] American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies carried out at the ecosystem scale, because microcosm experiments often fail to accurately predict ecosystem-level dynamics.[39]

The Hubbard Brook Ecosystem Study, established in the White Mountains, New Hampshire in 1963, was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem.[40] Long-term research at the site led to the discovery of acid rain in North America in 1972, and was able to document the consequent depletion of soil cations (especially calcium) over the next several decades.[41]

Classification


Classifying ecosystems into ecologically homogeneous units is an important step towards effective ecosystem management.[42] A variety of systems exist, based on vegetation cover, remote sensing, and bioclimatic classification systems.[42] American geographer Robert Bailey defines a hierarchy of ecosystem units ranging from microecosystems (individual homogeneous sites, on the order of 10 square kilometres (4 sq mi) in area), through mesoecosystems (landscape mosaics, on the order of 1,000 square kilometres (400 sq mi)) to macroecosystems (ecoregions, on the order of 100,000 square kilometres (40,000 sq mi)).[43]

Bailey outlined five different methods for identifying ecosystems: gestalt ("a whole that is not derived through consideration of its parts"), in which regions are recognized and boundaries drawn intuitively; a map overlay system where different layers like geology, landforms and soil types are overlain to identify ecosystems; multivariate clustering of site attributes; digital image processing of remotely sensed data grouping areas based on their appearance or other spectral properties; or a "controlling factors method" where a subset of factors (like soils, climate, vegetation physiognomy or the distribution of plant or animal species) is selected from a large array of possible ones and used to delineate ecosystems.[44] In contrast with Bailey's methodology, Puerto Rico ecologist Ariel Lugo and coauthors identified ten characteristics of an effective classification system: that it be based on georeferenced, quantitative data; that it should minimize subjectivity and explicitly identify criteria and assumptions; that it should be structured around the factors that drive ecosystem processes; that it should reflect the hierarchical nature of ecosystems; that it should be flexible enough to conform to the various scales at which ecosystem management operates; that it should be tied to reliable measures of climate so that it can "anticipat[e] global climate change"; that it be applicable worldwide; that it should be validated against independent data; that it take into account the sometimes complex relationship between climate, vegetation and ecosystem functioning; and that it should be able to adapt and improve as new data become available.[42]
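One of the methods listed above, multivariate clustering of site attributes, can be sketched with a standard clustering routine. The attribute names, site values, and the choice of k-means (via scikit-learn) are illustrative assumptions, not Bailey's specific procedure.

```python
# Hedged sketch of "multivariate clustering of site attributes":
# group sites into candidate ecosystem units from quantitative attributes.
# Attribute names, values, and the use of k-means are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows = sites; columns = hypothetical attributes
# (mean annual temperature degC, annual precipitation mm, elevation m).
sites = np.array([
    [25.0, 2200.0,  150.0],
    [24.0, 1900.0,  300.0],
    [ 8.0,  600.0, 1800.0],
    [ 7.0,  550.0, 2100.0],
    [15.0,  900.0,  700.0],
])

# Standardize so each attribute contributes comparably to the distance metric.
X = StandardScaler().fit_transform(sites)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # sites sharing a label form one candidate ecologically homogeneous unit
```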

Types


A freshwater ecosystem in Gran Canaria, an island of the Canary Islands.

Anthropogenic threats

As human populations grow, so do the resource demands imposed on ecosystems and the impacts of the human ecological footprint. Natural resources are not invulnerable and infinitely available. The environmental impacts of anthropogenic actions, which are processes or materials derived from human activities, are becoming more apparent—air and water quality are increasingly compromised, oceans are being overfished, pests and diseases are extending beyond their historical boundaries, and deforestation is exacerbating flooding downstream. It has been reported that approximately 40–50% of Earth's ice-free land surface has been heavily transformed or degraded by anthropogenic activities, 66% of marine fisheries are either overexploited or at their limit, atmospheric CO2 has increased more than 30% since the advent of industrialization, and nearly 25% of Earth's bird species have gone extinct in the last two thousand years.[45] Society is increasingly becoming aware that ecosystem services are not only limited, but also that they are threatened by human activities. The need to better consider long-term ecosystem health and its role in enabling human habitation and economic activity is urgent. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.

Collective intelligence


From Wikipedia, the free encyclopedia


Types of collective intelligence

Collective intelligence is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making. The term appears in sociobiology, political science and in context of mass peer review and crowdsourcing applications. It may involve consensus, social capital and formalisms such as voting systems, social media and other means of quantifying mass activity. Collective IQ is a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed to bacteria[1] and animals.[2]

It can be understood as an emergent property from the synergies among: 1) data-information-knowledge; 2) software-hardware; and 3) experts (those with new insights as well as recognized authorities) that continually learns from feedback to produce just-in-time knowledge for better decisions than these three elements acting alone.[3] Or more narrowly as an emergent property between people and ways of processing information.[4] This notion of collective intelligence is referred to as Symbiotic intelligence by Norman Lee Johnson.[5] The concept is used in sociology, business, computer science and mass communications: it also appears in science fiction. Pierre Lévy defines collective intelligence as, "It is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities."[6] According to researchers Lévy and Kerckhove, it refers to the capacity of networked ICTs (information and communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions.[7]

Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According to Eric S. Raymond (1998) and JC Herz (2005), open source intelligence will eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations (Flew 2008). Media theorist Henry Jenkins sees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence.[8] Both Pierre Lévy (2007) and Henry Jenkins (2008) support the claim that collective intelligence is important for democratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.

Writers who have influenced the idea of collective intelligence include Douglas Hofstadter (1979), Peter Russell (1983), Tom Atlee (1993), Pierre Lévy (1994), Howard Bloom (1995), Francis Heylighen (1995), Douglas Engelbart, Cliff Joslyn, Ron Dembo, Gottfried Mayer-Kress (2003).

History

The concept (although not so named) originated in 1785 with the Marquis de Condorcet, whose "jury theorem" states that if each member of a voting group is more likely than not to make a correct decision, the probability that the majority vote of the group is the correct decision increases with the number of members of the group (see Condorcet's jury theorem).[9] Many theorists have interpreted Aristotle's statement in the Politics that "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision.[10][11] Recent scholarship,[12] however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.[13]
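The jury theorem lends itself to a quick numerical check; the sketch below computes the probability that a majority of n independent voters is correct when each voter is individually correct with probability p > 0.5 (the values of p and n are arbitrary).

```python
# Numerical check of Condorcet's jury theorem: if each voter is correct
# with probability p > 0.5, the chance that the majority is correct
# grows with group size. p and the group sizes are arbitrary choices.
from math import comb

def majority_correct(n, p):
    """Probability that more than half of n independent voters are correct."""
    k_min = n // 2 + 1  # smallest number of correct votes that forms a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

p = 0.6
for n in (1, 11, 101, 1001):  # odd group sizes avoid ties
    print(f"n = {n:4d}: P(majority correct) = {majority_correct(n, p):.4f}")
```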

A precursor of the concept is found in entomologist William Morton Wheeler's observation that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism (1911).[14] Wheeler saw this collaborative process at work in ants that acted like the cells of a single beast he called a "superorganism".

In 1912 Émile Durkheim identified society as the sole source of human logical thought. He argued, in "The Elementary Forms of Religious Life", that society constitutes a higher intelligence because it transcends the individual over space and time.[15] Other antecedents are Vladimir Vernadsky's concept of "noosphere" and H.G. Wells's concept of "world brain" (see also the term "global brain"). Peter Russell, Elisabet Sahtouris, and Barbara Marx Hubbard (originator of the term "conscious evolution")[citation needed] are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. Doug Engelbart began using the term 'Collective IQ' in the mid-1990s as a measure of collective intelligence, to focus attention on the opportunity for business and society to pro-actively raise their Collective IQ.[16]

Dimensions

Howard Bloom has discussed mass behavior—collective behavior from the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts of apoptosis, parallel distributed processing, group selection, and the superorganism to produce a theory of how collective intelligence works.[17] Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered by John Holland.[18]

Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life.[18] Ant societies exhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for example aphids for "milking". Leaf cutters care for fungi and carry leaves to feed the fungi.

David Skrbina[19] cites the concept of a 'group mind' as being derived from Plato's concept of panpsychism (that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a ‘group mind’ as articulated by Thomas Hobbes in "Leviathan" and Fechner's arguments for a collective consciousness of mankind. He cites Durkheim as the most notable advocate of a "collective consciousness" and Teilhard de Chardin as a thinker who has developed the philosophical implications of the group mind.

Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individual cognitive bias in order to allow a collective to cooperate on one process—while achieving enhanced intellectual performance." George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration."[20] Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action". Their approach is rooted in Scientific Community Metaphor.

Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer. Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts.[citation needed] Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member. Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.

Robert David Steele Vivas in The New Craft of Intelligence portrayed all citizens as "intelligence minutemen," drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.

According to Don Tapscott and Anthony D. Williams, collective intelligence is mass collaboration. In order for this to happen, four principles need to exist:
Openness
Sharing ideas and intellectual property: though these resources provide an edge over competitors, more benefits accrue from allowing others to share ideas and gain significant improvement and scrutiny through collaboration.
Peering
Horizontal organization, as with the 'opening up' of the Linux program, where users are free to modify and develop it provided that they make it available for others. Peering succeeds because it encourages self-organization – a style of production that works more effectively than hierarchical management for certain tasks.
Sharing
Companies have started to share some ideas while maintaining some degree of control over others, like potential and critical patent rights. Limiting all intellectual property shuts out opportunities, while sharing some expands markets and brings out products faster.
Acting Globally
The advancement in communication technology has prompted the rise of global companies at low overhead costs. The internet is widespread; therefore, a globally integrated company has no geographical boundaries and may access new markets, ideas and technology.[21]

Examples

The Global Futures Collective Intelligence System (GFIS) at https://themp.org was created by The Millennium Project http://millennium-project.org/ in 2012.

Political parties mobilize large numbers of people to form policy, select candidates and finance and run election campaigns. Knowledge focusing through various voting methods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process leaving only a residue of informed consensus. Critics point out that often bad ideas, misunderstandings, and misconceptions are widely held, and that structuring of the decision process must favor experts who are presumably less prone to random or misinformed voting in a given context.[citation needed]

Military units, trade unions, and corporations satisfy some definitions of CI — the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.[citation needed]

In Learner generated context a group of users marshal resources to create an ecology that meets their needs often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context.[22][23][24] Learner generated contexts represent an ad hoc community that facilitates coordination of collective action in a network of trust. An example of Learner generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space" such as Wikipedia. As the Internet has developed so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute and access ideas. (Flew 2008)

Improvisational actors also experience a type of collective intelligence which they term 'Group Mind'. A further example of collective intelligence is found in idea competitions.[25]

Specialized information sites such as Digital Photography Review or Camera Labs are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing knowledge across the world through such specialized information sites.

Mathematical techniques

One measure sometimes applied, especially by more artificial intelligence focused theorists, is a "collective intelligence quotient" (or "cooperation quotient")—which presumably can be measured like the "individual" intelligence quotient (IQ)—making it possible to determine the marginal extra intelligence added by each new individual participating in the collective, and thus to use metrics to avoid the hazards of groupthink and stupidity.

In 2001, Tadeusz (Ted) Szuba from the AGH University in Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.[26]

In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic. They are displaced quasi-randomly as a result of their interactions with their environments and their intended displacements. Their interaction in abstract computational space creates a multi-thread inference process which we perceive as collective intelligence. Thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as a property of social structure and seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence considered as a specific computational process provides a straightforward explanation of several social phenomena. For this model of collective intelligence, the formal definition of IQS (IQ Social) was proposed and was defined as "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure." While IQS seems to be computationally hard, modeling of the social structure in terms of a computational process as described above gives a chance for approximation. Prospective applications are optimization of companies through the maximization of their IQS, and the analysis of drug resistance against collective intelligence of bacterial colonies.[26]

Digital media

New media are often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows for it to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources (Flew 2008) resulting in a form of collective intelligence. The use of interactive new media, particularly the internet, promotes online interaction and this distribution of knowledge between users.

Francis Heylighen, Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science and cybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of a Global brain. The developer of the World Wide Web, Tim Berners-Lee, aimed to promote sharing and publishing of information globally. Later his employer opened up the technology for free use. In the early ‘90s, the Internet’s potential was still untapped, until the mid-1990s when ‘critical mass’, as termed by the head of the Advanced Research Projects Agency (ARPA), Dr. J.C.R. Licklider, demanded more accessibility and utility.[27] The driving force of this form of collective intelligence[which?] is the digitization of information and communication. Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture (Flew 2008). He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating “whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals”. Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills. Collective intelligence is not merely a quantitative contribution of information from all cultures, it is also qualitative.

Levy and de Kerckhove consider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed (Flew 2008). With the development of the Internet and its widespread use, the opportunity to contribute to community-based knowledge forums[clarification needed], such as Wikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and to retrieve knowledge through the collective access to these databases and allow them to "harness the hive" (Raymond 1998; JC Herz 2005 in Flew 2008).[citation needed] Researchers at the MIT Center for Collective Intelligence research and explore collective intelligence of groups of people and computers.[28]

In this context collective intelligence is often confused with shared knowledge. The former is knowledge that is generally available to all members of a community while the latter is information known by all members of a community.[29] Collective intelligence as represented by Web 2.0 has less user engagement than collaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles and the accounts named "Shared Galaxy" are open to be used by anyone. In this way many take part in being one.[citation needed]

Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand. The full impact has yet to be felt but the anti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing. Atlee discusses the connections between these events and the political views that drive them.[citation needed] The Indymedia organization does this in a more journalistic way. Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance shared goals.

Social bookmarking

In social bookmarking (also called collaborative tagging), users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "Folksonomy", and the process can be captured by models of collaborative tagging.

Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex systems (or self-organizing) dynamics.[30][31][32] Although there is no central controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power law distributions.[30] Once such stable distributions form, examining the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies.[33] Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.[34]
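The convergence of tag frequencies toward a power law can be illustrated with a toy simulation; the reinforcement rule used here (invent a new tag with small probability, otherwise reuse an existing tag in proportion to its current frequency) is a generic illustrative choice, not the specific model of the cited studies.

```python
# Toy simulation of collaborative tagging producing a heavy-tailed
# (approximately power law) tag-frequency distribution. The reinforcement
# rule is a generic illustrative choice, not the cited studies' model.
import random
from collections import Counter

random.seed(0)
NEW_TAG_PROB = 0.05    # chance a user invents a new tag (illustrative)
tag_stream = ["tag0"]  # history of tag assignments for one resource

for _ in range(20000):
    if random.random() < NEW_TAG_PROB:
        tag_stream.append(f"tag{len(set(tag_stream))}")  # invent a new tag
    else:
        tag_stream.append(random.choice(tag_stream))     # imitate: reuse in proportion to frequency

counts = sorted(Counter(tag_stream).values(), reverse=True)
for rank in (1, 2, 5, 10, 50, 100):
    if rank <= len(counts):
        print(f"rank {rank:3d}: frequency {counts[rank - 1]}")
# Frequencies fall off steeply with rank, the signature of a heavy-tailed,
# power-law-like distribution emerging without central coordination.
```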

Video games

Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations,[27] for whom collective intelligence has become a norm. In his discussion of 'interactivity' in the online games environment (the ongoing interactive dialogue between users and game developers),[35] Terry Flew refers to Pierre Levy's concept of collective intelligence (Levy 1998) and argues that it is active in videogames as clans or guilds in MMORPGs constantly work to achieve goals. Henry Jenkins proposes that the participatory cultures emerging between games producers, media companies and end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends:[36] first, the development of new media tools and technologies enabling the creation of content; second, the rise of subcultures promoting such creations; and third, the growth of value-adding media conglomerates, which foster image, idea and narrative flow. Cultural theorist and online community developer John Banks considered the contribution of online fan communities to the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content extensions and additions to the game software".[37]

The increase in user-created content and interactivity raises questions of control over the game itself and ownership of the player-created content. This gives rise to fundamental legal issues, highlighted by Lessig[38] and Bray and Konsynski,[39] such as intellectual property and property ownership rights.

Gosney extends this issue of collective intelligence in videogames one step further in his discussion of alternate reality gaming (ARG). He describes this genre as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences",[40] as events that happen outside the game reality "reach out" into the players' lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARGs. Gosney argues that the alternate reality genre of gaming demands an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.

Stock market predictions

Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable. Websites aggregate stock market information that is as current as possible, so that professional and amateur stock analysts can publish their viewpoints and amateur investors can submit their financial opinions to create an aggregate opinion. The opinions of all investors can be weighted equally, so that a pivotal premise of the effective application of collective intelligence applies: the masses, including a broad spectrum of stock market expertise, can be used to more accurately predict the behavior of financial markets.[41][42]
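As an illustration of the equal-weighting idea, here is a minimal Python sketch (with made-up opinions, not any particular website's method) that averages individual investors' directional calls into a single consensus signal.

# Hypothetical equal-weight aggregation of investor opinions on one stock.
# Each opinion is +1 (expects the price to rise) or -1 (expects it to fall).
opinions = [+1, +1, -1, +1, -1, +1, +1, -1, +1]

# Every investor's view counts the same, so the consensus is a simple mean.
consensus = sum(opinions) / len(opinions)

direction = "up" if consensus > 0 else "down" if consensus < 0 else "no consensus"
print(f"consensus score: {consensus:+.2f} -> predicted direction: {direction}")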

Collective intelligence underpins the efficient-market hypothesis of Eugene Fama[43] – although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted by Michael Jensen[44] in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. But after removing the loading charge (up-front fee) only 72 underperformed while after removing brokerage costs only 58 underperformed. On the basis of such evidence index funds became popular investment vehicles using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.
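The proportions behind those figures are easy to check; the short calculation below uses only the numbers quoted above and shows that once fees and brokerage costs are stripped out, only about half of the funds lagged the index, roughly what chance alone would produce.

# Fractions of Jensen's 115 selected funds that underperformed the index, 1955-1964.
total_funds = 115
for label, underperformers in [("as reported", 89),
                               ("after removing the loading charge", 72),
                               ("after removing brokerage costs", 58)]:
    print(f"{label}: {underperformers}/{total_funds} = {underperformers / total_funds:.0%}")
# as reported: 77%; after the loading charge: 63%; after brokerage costs: 50%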

Views

Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions. A single person tends to make decisions motivated by self-preservation. In addition, humans lack a way to make choices that balance innovation and reality.[dubious ] Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.[45]

Phillip Brown and Hugh Lauder quote Bowles and Gintis (1976): in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism. They go on to argue that intelligence is an achievement that can only develop if it is allowed to. Historically, for example, groups from the lower levels of society were severely restricted from aggregating and pooling their intelligence, because the elites feared that collective intelligence would convince the people to rebel. Without such capacity and relations, there is no infrastructure on which collective intelligence can be built (Brown & Lauder 2000, p. 230). This reflects how powerful collective intelligence could be if left to develop.

Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:
Talent Utilization
At the rate technology is changing, no firm can keep up with all the innovation needed to compete. Instead, smart firms draw on the power of mass collaboration to involve the participation of people they could not otherwise employ.
Demand Creation
Firms can create a new market for complementary goods by engaging with open source communities.
Cost Reduction
Mass collaboration can help to reduce costs dramatically. Firms can release specific software or products to be evaluated or debugged by online communities. The result is more personal, robust and error-free products, created in less time and at lower cost.[21]
Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk of bodily harm and bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluid mass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells. This strain of thought is most obvious in the anti-globalization movement and is characterized by the works of John Zerzan, Carol Moore, and Starhawk, who typically shun academics. These theorists are more likely to refer to ecological and collective wisdom and to the role of consensus process in making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".

Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as the new tribalists and the Gaians. Whether these can be said to be collective intelligence systems is an open question. Some, e.g. Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.

Battery Tech that Could Revolutionize Cars Will Debut in a Vacuum Cleaner

Original link:  http://www.pbs.org/wgbh/nova/next/tech/battery-tech-that-could-revolutionize-cars-will-debut-in-a-vacuum-cleaner/

There’s a battery company in Michigan developing a technology that could allow your smartphone to run for days on end, bring electric cars into the mainstream, and even inspire inventions we haven’t yet imagined. But first, it’ll probably appear in a vacuum cleaner.

Dyson, the high-end vacuum and home gadget manufacturer, is placing a big bet on Sakti3, a company that is working on the mass production of solid state batteries at scales never seen before. Unlike traditional batteries, which rely on liquid electrolytes to transport ions from positive to negative terminals, solid-state batteries use, well, solid electrolytes, which are far better at moving ions. That translates into significantly higher energy density—potentially double today’s lithium ion batteries.
Batteries are a limiting factor in the roll-out of many new devices, from electric cars to vacuum cleaners.
Solid-state batteries are nothing new, but with today’s manufacturing techniques, they’re terribly expensive. Here’s Christopher Mims, reporting for the Wall Street Journal:
Solid batteries already exist, though they tend to be tiny—just big enough to fit next to a microchip, providing backup power in case of interruption. Using current technologies, a solid battery large enough to power a cellphone would cost $15,000, if it could be built. And one big enough to power a car would cost $90 million.
But Sakti3’s manufacturing approach, which leans heavily on lessons learned from microchip fabrication, could bring the price down to $100 per kilowatt-hour, a substantial drop from the $250 to $500 per kilowatt-hour of today’s large-scale lithium ion batteries. At $100 per kilowatt-hour, electric cars would easily be competitive with traditional internal combustion models, according to a report by McKinsey, the consulting firm.
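To put those per-kilowatt-hour figures in perspective, here is a rough back-of-the-envelope comparison; the 60 kWh pack size is an assumed, illustrative value and not something stated in the article.

# Rough cost comparison for a hypothetical 60 kWh electric-car battery pack.
pack_size_kwh = 60  # assumed pack size, for illustration only

cost_per_kwh = {
    "Sakti3 target": 100,
    "today's lithium-ion, low end": 250,
    "today's lithium-ion, high end": 500,
}

for label, dollars_per_kwh in cost_per_kwh.items():
    print(f"{label}: ${dollars_per_kwh * pack_size_kwh:,} per pack")
# Sakti3 target: $6,000; today's packs: $15,000 to $30,000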

Producing enough solid-state batteries to satiate the electric vehicle market will take a while, which is likely why you’ll see Sakti3’s batteries in a compact Dyson vacuum before you see them on the road. Though it’s entirely possible that you may not see them at all—as Mims points out, new battery technology is notoriously fickle, and several companies have gone bankrupt attempting to bring a revolutionary technology to market. Even if Sakti3 succeeds, it may take them several years longer than they expected. We’ve certainly seen that happen before.

Mandatory Palestine

From Wikipedia, the free encyclopedia: https://en.wikipedia.org/wiki/Mandatory_Palestine