
Wednesday, June 18, 2025

Cognitive computer

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cognitive_computer

A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.

In 2023, IBM's proof-of-concept NorthPole chip (optimized for 2-, 4- and 8-bit precision) achieved remarkable performance in image recognition.

In 2013, IBM developed Watson, a cognitive computer that uses neural networks and deep learning techniques. The following year, it developed the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. In 2017, Intel also announced its own cognitive chip, Loihi, which it intended to make available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Pohoiki Springs systems), Qualcomm, and others are steadily improving neuromorphic processors.

IBM TrueNorth chip

DARPA SyNAPSE board with 16 TrueNorth chips

TrueNorth is a neuromorphic CMOS integrated circuit produced by IBM in 2014. It is a manycore processor network-on-chip design, with 4,096 cores, each having 256 programmable simulated neurons, for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion.
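The headline numbers are internally consistent; a quick arithmetic check in Python, using only the figures quoted above:

    # Sanity check of TrueNorth's quoted figures.
    cores = 4096
    neurons_per_core = 256
    synapses_per_neuron = 256
    neurons = cores * neurons_per_core           # 1,048,576 -- "just over a million"
    synapses = neurons * synapses_per_neuron     # 268,435,456 = 2**28
    print(neurons, synapses, synapses == 2**28)  # 1048576 268435456 True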

Details

Memory, computation, and communication are handled in each of the 4096 neurosynaptic cores. By handling all three locally, TrueNorth circumvents the von Neumann architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th that of conventional microprocessors. The SyNAPSE chip operates at lower temperatures and power because it draws only the power necessary for computation. Skyrmions have been proposed as models of the synapse on a chip.

The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model.
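As a rough illustration of the difference: a classic leaky integrate-and-fire neuron decays its membrane potential in proportion to its current value, while a linear-leak neuron subtracts a constant amount per tick. The sketch below uses invented parameter values, not TrueNorth's actual configuration:

    # Minimal linear-leak integrate-and-fire (LLIF) sketch; the leak,
    # threshold and weights are illustrative, not TrueNorth's real values.
    def llif_step(v, spikes, weights, leak=1.0, threshold=64.0, reset=0.0):
        """Advance the neuron one tick; return (new_potential, fired)."""
        v += sum(w for s, w in zip(spikes, weights) if s)  # integrate spike inputs
        v -= leak                                          # constant (linear) leak
        if v >= threshold:                                 # threshold crossing: fire
            return reset, True
        return v, False

    v = 0.0
    for _ in range(20):  # with these inputs the neuron fires after ~16 ticks
        v, fired = llif_step(v, spikes=[1, 0, 1], weights=[2.0, 1.5, 3.0])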

According to IBM, it does not have a clock, operates on unary numbers, and computes by counting to a maximum of 19 bits. The cores are event-driven, using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network-on-chip (NoC).

IBM developed a new ecosystem to program and use TrueNorth, including a simulator, a new programming language, an integrated programming environment, and libraries. This lack of backward compatibility with any previous technology (e.g., C++ compilers) poses serious vendor lock-in risks, among other adverse consequences, and may prevent its commercialization in the future.

Research

In 2018, a cluster of TrueNorth chips network-linked to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene.

IBM NorthPole chip

In 2023, IBM released its NorthPole chip, a proof-of-concept for dramatically improving performance by intertwining compute with memory on-chip, thus eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. It can run ResNet-50 or Yolo-v4 image recognition tasks about 22 times faster, with 25 times less energy and in 5 times less space, than GPUs fabricated on the same 12-nm node it uses. It includes 224 MB of RAM and 256 processor cores and can perform 2,048 operations per core per cycle at 8-bit precision, and 8,192 operations at 2-bit precision, running at between 25 and 425 MHz. It is an inference chip, but it cannot yet handle GPT-4 because of memory and accuracy limitations.
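Those figures imply a peak throughput on the order of 10^14 operations per second. A hedged back-of-envelope check, assuming all cores sustain the quoted per-cycle rate at the top clock:

    # Back-of-envelope peak throughput from the figures quoted above.
    cores, ops_per_core_cycle, f_max = 256, 2048, 425e6   # 8-bit precision, top clock
    peak_8bit = cores * ops_per_core_cycle * f_max        # ~2.2e14 ops/s
    peak_2bit = peak_8bit * 4                             # 8,192 ops/core/cycle at 2-bit
    print(f"{peak_8bit:.2e} 8-bit ops/s, {peak_2bit:.2e} 2-bit ops/s")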

Intel Loihi chip

Pohoiki Springs

Pohoiki Springs is a system that incorporates Intel's self-learning neuromorphic chip, named Loihi, introduced in 2017, perhaps named after the Hawaiian seamount Lōʻihi. Intel claims Loihi is about 1000 times more energy efficient than general-purpose computing systems used to train neural networks. In theory, Loihi supports both machine learning training and inference on the same silicon independently of a cloud connection, and more efficiently than convolutional neural networks or deep learning neural networks. Intel points to a system for monitoring a person's heartbeat, taking readings after events such as exercise or eating, and using the chip to normalize the data and work out the ‘normal’ heartbeat. It can then spot abnormalities and deal with new events or conditions.

The first iteration of the chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each, for a total of 131,072 simulated neurons. This offers around 130 million synapses, far fewer than the human brain's 800 trillion synapses, and behind IBM's TrueNorth. Loihi is available in a USB form factor for research purposes among more than 40 academic research groups.

In October 2019, researchers from Rutgers University published a research paper to demonstrate the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping.

In March 2020, Intel and Cornell University published a research paper demonstrating the ability of Intel's Loihi to recognize different hazardous materials, which could eventually help to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide".

Pohoiki Beach

Intel's Loihi 2, named Pohoiki Beach, was released in September 2021 with 64 cores. It boasts faster speeds, higher-bandwidth inter-chip communications for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and improved programmability.

Hala Point

Hala Point packages 1,152 Loihi 2 processors, produced on the Intel 3 process node, in a six-rack-unit chassis. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.

Intel claimed in 2024 that Hala Point, built from Loihi 2 chips, was the world's largest neuromorphic system, offering 10x more neuron capacity and up to 12x higher performance than its predecessor, Pohoiki Springs.

Hala Point provides up to 20 quadrillion operations per second (20 petaops), with an efficiency exceeding 15 trillion 8-bit operations per second per watt (TOPS/W) on conventional deep neural networks.

Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing 16 PB/s of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 TB/s of inter-chip bandwidth.

The system can process its 1.15 billion neurons 20 times faster than a human brain. Its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.

Loihi-based systems can perform inference and optimization using 100 times less energy at speeds as much as 50 times faster than CPU/GPU architectures.

Intel claims that Hala Point could support the creation of LLMs, but this has not yet been demonstrated; much further research is needed.

SpiNNaker

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester.

Criticism

Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain. Some also cite the difficulty for a single system to bring so many elements together, such as the disparate sources of information as well as computing resources.

In 2021, The New York Times published Steve Lohr's article "What Ever Happened to IBM's Watson?", which described some of IBM Watson's costly failures. One of them, a cancer-related project called the Oncology Expert Advisor, was abandoned in 2016. During that collaboration, Watson could not use patient data and struggled to decipher doctors' notes and patient histories.

Environmental economics

From Wikipedia, the free encyclopedia
Growth, Development and Environmental Economics in Asia discussion at Chatham House, London

Environmental economics is a sub-field of economics concerned with environmental issues. It has become a widely studied subject due to growing environmental concerns in the twenty-first century. Environmental economics "undertakes theoretical or empirical studies of the economic effects of national or local environmental policies around the world. Particular issues include the costs and benefits of alternative environmental policies to deal with air pollution, water quality, toxic substances, solid waste, and global warming."

Environmental Versus Ecological Economics

Environmental economics is distinguished from ecological economics in that ecological economics emphasizes the economy as a subsystem of the ecosystem, with a focus on preserving natural capital, while environmental economics focuses on human preferences, trying to balance the protection of natural resources with people's needs for products and services. Because of these differences, ecological economics takes a more holistic approach that challenges traditional economic theories, while environmental economics fits within them.

One survey of German economists found that ecological and environmental economics are different schools of economic thought: ecological economists emphasize "strong" sustainability and reject the proposition that human-made ("physical") capital can substitute for natural capital,[5] while environmental economists focus on the efficient allocation of natural resources to fulfill human wants.

History

18th and 19th Century

Although the term "environmental economics" became popular beginning in the 1960s, the entwinement of environmentalism and economics started much earlier, beginning with the economist Marquis de Condorcet in the 18th century, who took an interest in the link between economic activity and environmental issues. This was seen in his use of externalities when analyzing environmental problems.

During the classical period of economics, Adam Smith recognized that while the market is able to allocate most private goods efficiently, it fails to do so in some environmental circumstances. It is in these scenarios, where the market fails, that the government needs to take action. This is seen in his quote "Erecting and maintaining certain public works and certain public institutions which it can never be for the interest of any individual, or small number of individuals to erect and maintain; because the profit would never repay the expense to any individual or small number of individuals, though it may frequently do much more than repay it to a great society."

Thomas Robert Malthus was another classical economist whose work contributed to the beginnings of environmental economics. Malthus' theory of population argued that agricultural productivity faces diminishing returns, which ultimately limits the exponential growth of human populations. This line of thought later led economists to consider the relationship between resource scarcity and economic development. David Ricardo's writings connected the natural environment and the standard of living, which further helped develop environmental economics later on; he stressed that the value of land varies with how fertile it is.

The last classical economist to contribute to environmental economics was John Stuart Mill. In his Principles of Political Economy, he wrote that the management of the environment cannot be left to the market and individuals, and is instead a governmental obligation: "[I]s there not the earth itself, its forests and waters, and all other natural riches, above and below the surface? These are the inheritance of the human race, and there must be regulations for the common enjoyment of it. What rights, and under what conditions, a person shall be allowed to exercise over any portion of this common inheritance cannot be left undecided. No function of government is less optional than the regulation of these things, or more completely involved in the idea of civilized society."

20th Century

Environmental economics became increasingly popular in the 20th century, when Resources for the Future (RFF) was established in Washington, D.C. This independent research organization applied economics to environmental issues. During this time, H. Scott Gordon released his paper "Economic Theory of a Common Property Resource: The Fishery", which argued that open-access fisheries are over-exploited, causing economic rents to be dissipated. Another important paper of this period was "Spaceship Earth" by Kenneth E. Boulding, which describes the physical limits of Earth's resources and argues that technology cannot truly help humans push those limits. Finally, Ronald Coase framed an economic solution to environmental problems, which helped solidify environmental economics as a subfield of economics. He saw two possible arrangements: 1) tax the polluter and create regulation that puts the burden of action on the polluter, or 2) have the sufferer pay the polluter not to pollute.

Topics and concepts

Market failure

Air pollution is an example of market failure, as the factory is imposing a negative external cost on the community.

Central to environmental economics is the concept of market failure. Market failure means that markets fail to allocate resources efficiently. As stated by Hanley, Shogren, and White (2007): "A market failure occurs when the market does not allocate scarce resources to generate the greatest social welfare. A wedge exists between what a private person does given market prices and what society might want him or her to do to protect the environment. Such a wedge implies wastefulness or economic inefficiency; resources can be reallocated to make at least one person better off without making anyone else worse off." This results in an inefficient market that needs to be corrected through avenues such as government intervention. Common forms of market failure include externalities, non-excludability and non-rivalry.

Externality

An externality exists when a person makes a choice that affects other people in a way that is not accounted for in the market price. An externality can be positive or negative, but the term is usually associated with negative externalities in environmental economics. For instance, water seepage on the upper floors of a residential building affects the lower floors. Another example concerns how the sale of Amazon timber disregards the amount of carbon dioxide released in the cutting. Similarly, a firm emitting pollution will typically not take into account the costs that its pollution imposes on others. As a result, pollution may occur in excess of the 'socially efficient' level, which is the level that would exist if the market were required to account for the pollution. A classic definition influenced by Kenneth Arrow and James Meade is provided by Heller and Starrett (1976), who define an externality as "a situation in which the private economy lacks sufficient incentives to create a potential market in some good and the nonexistence of this market results in losses of Pareto efficiency". In economic terminology, externalities are examples of market failures, in which the unfettered market does not lead to an efficient outcome.
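A stylized numerical example (all numbers invented) makes the wedge concrete: with linear demand, a constant private marginal cost, and a constant per-unit external damage, the unregulated market overproduces, and a per-unit tax equal to the marginal damage restores the socially efficient quantity:

    # Stylized externality example with invented numbers.
    # Inverse demand: P = 100 - Q; private marginal cost: 20;
    # marginal external damage (MED): 10 per unit produced.
    mc_private, med = 20.0, 10.0
    q_market = 100 - mc_private          # market output:    P = MC       -> Q = 80
    q_social = 100 - (mc_private + med)  # efficient output: P = MC + MED -> Q = 70
    corrective_tax = med                 # per-unit tax that closes the wedge
    print(q_market, q_social, corrective_tax)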

Common goods and public goods

When it is too costly to exclude some people from access to an environmental resource, the resource is either called a common property resource (when there is rivalry for the resource, such that one person's use of the resource reduces others' opportunity to use the resource) or a public good (when use of the resource is non-rivalrous). In either case of non-exclusion, market allocation is likely to be inefficient.

These challenges have long been recognized. Hardin's (1968) concept of the tragedy of the commons popularized the challenges involved in non-exclusion and common property. "Commons" refers to the environmental asset itself; "common property resource" or "common pool resource" refers to a property right regime that allows some collective body to devise schemes to exclude others, thereby allowing the capture of future benefit streams; and "open-access" implies no ownership, in the sense that property everyone owns, nobody owns.

The basic problem is that if people ignore the scarcity value of the commons, they can end up expending too much effort and over-harvesting a resource (e.g., a fishery). Hardin theorizes that in the absence of restrictions, users of an open-access resource will use it more than if they had to pay for it and had exclusive rights, leading to environmental degradation. See, however, Ostrom's (1990) work on how people using real common property resources have worked to establish self-governing rules to reduce the risk of the tragedy of the commons.
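A textbook way to see the over-harvesting logic, in the spirit of Gordon's fishery paper mentioned earlier, uses a quadratic revenue curve and a constant cost per unit of fishing effort: under open access, entry continues until all rent is dissipated, at twice the rent-maximizing effort. The numbers below are invented for illustration:

    # Open-access vs. rent-maximizing effort in a stylized fishery.
    # Revenue R(E) = a*E - b*E**2; total cost of effort = c*E.
    a, b, c = 10.0, 0.5, 2.0
    effort_open_access = (a - c) / b            # entry until profit = 0   -> 16.0
    effort_rent_maximizing = (a - c) / (2 * b)  # maximizes R(E) - c*E     ->  8.0
    print(effort_open_access, effort_rent_maximizing)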

The mitigation of climate change effects is an example of a public good, where the social benefits are not reflected completely in the market price. Because the personal marginal benefits are less than the social benefits, the market under-provides climate change mitigation. This is a public good since the risks of climate change are both non-rival and non-excludable. Such efforts are non-rival since climate mitigation provided to one does not reduce the level of mitigation that anyone else enjoys. They are non-excludable actions as they will have global consequences from which no one can be excluded. A country's incentive to invest in carbon abatement is reduced because it can "free ride" off the efforts of other countries. Over a century ago, Swedish economist Knut Wicksell (1896) first discussed how public goods can be under-provided by the market because people might conceal their preferences for the good, but still enjoy the benefits without paying for them.

Valuation

Assessing the economic value of the environment is a major topic within the field. The values of natural resources often are not reflected in the prices that markets set and, in fact, many of them are available at no monetary charge. This mismatch frequently causes distortions in the pricing of natural assets: both overuse of them and underinvestment in them. The economic value or tangible benefits of ecosystem services and, more generally, of natural resources include both use and indirect use values (see the nature section of ecological economics). Non-use values include existence, option, and bequest values. For example, some people may value the existence of a diverse set of species, regardless of the effect of the loss of a species on ecosystem services. The existence of these species may also have an option value, as there may be the possibility of using them for some human purpose; for example, certain plants may be researched for drugs. Individuals may also value the ability to leave a pristine environment to their children.

Use and indirect use values can often be inferred from revealed behavior, such as the cost of taking recreational trips or using hedonic methods in which values are estimated based on observed prices. These use values can also be predicted through defensive behavior against pollution or environmental hazards, which can reveal how much people are willing to spend on healthcare and other preventative measures to avoid these hazards. Another health-based predictor of environmental use value is the value of a statistical life (VSL), which provides an estimate of how much people are willing to pay for small reductions in their risk of dying from environmental hazards. Non-use values are usually estimated using stated preference methods such as contingent valuation or choice modelling. Contingent valuation typically takes the form of surveys in which people are asked how much they would pay to observe and recreate in the environment (willingness to pay) or their willingness to accept (WTA) compensation for the destruction of the environmental good. Hedonic pricing examines the effect the environment has on economic decisions through housing prices, traveling expenses, and payments to visit parks.
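As a sketch of the hedonic idea, one can regress observed prices on attributes that include a measure of environmental quality; the fitted coefficient on that measure is read as its implicit price. The data below are synthetic and every coefficient is invented:

    # Hedonic pricing sketch on synthetic data (all numbers invented).
    import numpy as np
    rng = np.random.default_rng(0)
    n = 500
    floor_area = rng.uniform(50, 250, n)       # square metres
    air_quality = rng.uniform(0, 10, n)        # index, higher = cleaner
    price = (40_000 + 900 * floor_area + 3_000 * air_quality
             + rng.normal(0, 5_000, n))        # noisy "observed" prices
    X = np.column_stack([np.ones(n), floor_area, air_quality])
    beta, *_ = np.linalg.lstsq(X, price, rcond=None)
    print(f"implicit price of one unit of air quality: ~{beta[2]:,.0f}")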

State subsidy

Almost all governments and states magnify environmental harm by providing various types of subsidies that, in effect, pay companies and other economic actors more to exploit natural resources than to protect them. The damage to nature from such public subsidies has been conservatively estimated at US$4–6 trillion per year.

Solutions

Solutions advocated to correct such externalities include:

  • Environmental regulations. Under this plan, the economic impact has to be estimated by the regulator. Usually, this is done using cost–benefit analysis. There is a growing realization that regulations (also known as "command and control" instruments) are not so distinct from economic instruments as is commonly asserted by proponents of environmental economics. For example, regulations are enforced by fines, which operate as a form of tax if pollution rises above the prescribed threshold. Likewise, pollution must be monitored and laws enforced, whether under a pollution tax regime or a regulatory regime. The main difference an environmental economist would argue exists between the two methods, however, is the total cost of the regulation. "Command and control" regulation often applies uniform emissions limits on polluters, even though each firm has different costs for emissions reductions; that is, some firms can abate pollution inexpensively, while others can only abate it at high cost. Because of this, the total abatement in the system comprises some expensive and some inexpensive efforts. Consequently, modern "command and control" regulations are oftentimes designed in a way that addresses these issues by incorporating utility parameters. For instance, CO2 emission standards for specific manufacturers in the automotive industry are either linked to the average vehicle footprint (US system) or the average vehicle weight (EU system) of their entire vehicle fleet. Environmental economic instruments, by contrast, find the cheapest emission abatement efforts first and then move on to the more expensive methods. For example, as described in the next item, trading in a quota system means a firm abates pollution only if doing so costs less than paying someone else to make the same reduction, lowering the cost of the total abatement effort as a whole.
  • Quotas on pollution. Often it is advocated that pollution reductions should be achieved by way of tradeable emissions permits, which, if freely traded, may ensure that reductions in pollution are achieved at least cost. In theory, if such tradeable quotas are allowed, then a firm would reduce its own pollution load only if doing so would cost less than paying someone else to make the same reduction, i.e., only if buying tradeable permits from another firm would be costlier. These tradeable permit approaches can also, in theory, improve economic efficiency and cost-effectiveness while increasing government revenue through the use of permit auctions instead of grandfathering practices. This entails the government selling off a certain number of these tradeable permits, allowing it to capture the value of emissions and use it to reduce marginal tax rates. In practice, tradeable permit approaches have had some success, such as the U.S. sulphur dioxide trading program and the EU Emissions Trading Scheme, and interest in their application is spreading to other environmental problems.
  • Taxes and tariffs on pollution. Increasing the costs of polluting will discourage polluting, and will provide a "dynamic incentive", that is, the disincentive continues to operate even as pollution levels fall. A pollution tax that reduces pollution to the socially "optimal" level would be set at such a level that pollution occurs only if the benefits to society (for example, in the form of greater production) exceed the costs. This concept was introduced by Arthur Pigou, a British economist active in the early to mid-twentieth century. He showed that these externalities occur when markets fail, meaning they do not naturally produce the socially optimal amount of a good or service. He argued that "a tax on the production of paint would encourage the [polluting] factory to reduce production to the amount best for society as a whole." Such taxes are known among economists as Pigouvian taxes, and they are regularly implemented where negative externalities are present. Some advocate a major shift of taxation from income and sales to pollution – the so-called "green tax shift".
  • Better defined property rights. The Coase Theorem states that assigning property rights will lead to an optimal solution, regardless of who receives them, if transaction costs are trivial and the number of parties negotiating is limited. For example, if people living near a factory had a right to clean air and water, or the factory had the right to pollute, then either the factory could pay those affected by the pollution or the people could pay the factory not to pollute. Or, citizens could take action themselves as they would if other property rights were violated. The US River Keepers Law of the 1880s was an early example, giving citizens downstream the right to end pollution upstream themselves if the government itself did not act (an early example of bioregional democracy). Many markets for "pollution rights" have been created in the late twentieth century—see emissions trading. Supply-side environmental policy implements efficiency by letting countries that are affected by climate change "buy coal" or other fossil fuel reserves abroad with the intention of conserving them: Bohm (1993); Harstad (2012). According to the Coase Theorem, the involved parties will bargain with each other, which results in an efficient solution. However, modern economic theory has shown that the presence of asymmetric information may lead to inefficient bargaining outcomes. Specifically, Rob (1989) has shown that pollution claim settlements will not lead to the socially optimal outcome when the individuals that will be affected by pollution have learned private information about their disutility already before the negotiations take place. Goldlücke and Schmitz (2018) have shown that inefficiencies may also result if the parties learn their private information only after the negotiations, provided that the feasible transfer payments are bounded. Using cooperative game theory, Gonzalez, Marciano and Solal (2019) have shown that in social cost problems involving more than three agents, the Coase theorem suffers from many counterexamples and that only two types of property rights lead to an optimal solution.
  • Accounting for environmental externalities in the final price. The world's largest industries burn through about $7.3 trillion worth of free natural capital per year, and would hardly be profitable if they had to pay for this destruction of natural capital. Trucost has assessed over 100 direct environmental impacts and condensed them into 6 environmental key performance indicators (EKPIs). Because market prices are lacking, the assessment of environmental impacts is derived from different sources (academic journals, governments, studies, etc.). The table below gives an overview of the 5 region-sectors per EKPI with the highest impact on the overall EKPI:
Ranking of the 5 region-sectors by EKPI with the greatest impact across all EKPIs, measured in monetary terms:

Rank  Impact    Sector                       Region            Natural capital cost ($bn)  Revenue ($bn)  Impact ratio
1     GHG       Coal Power Generation        Eastern Asia      361.0                       443.1          0.8
2     Land Use  Cattle Ranching and Farming  South America     312.1                       16.6           18.7
3     GHG       Iron and Steel Mills         Eastern Asia      216.1                       604.7          0.4
4     Water     Wheat Farming                Southern Asia     214.4                       31.8           6.7
5     GHG       Coal Power Generation        Northern America  201.0                       246.7          0.8

If companies are allowed to include some of these externalities in their final prices, this could undermine the Jevons paradox and provide enough revenue to help companies innovate.

Relationship to other fields

Environmental economics is related to ecological economics but there are differences. Most environmental economists have been trained as economists. They apply the tools of economics to address environmental problems, many of which are related to so-called market failures—circumstances wherein the "invisible hand" of economics is unreliable. Most ecological economists have been trained as ecologists, but have expanded the scope of their work to consider the impacts of humans and their economic activity on ecological systems and services, and vice versa. This field takes as its premise that economics is a strict subfield of ecology. Ecological economics is sometimes described as taking a more pluralistic approach to environmental problems and focuses more explicitly on long-term environmental sustainability and issues of scale.

Environmental economics is viewed as more idealistic in a price system; ecological economics as more realistic in its attempts to integrate elements outside of the price system as primary arbiters of decisions. These two groups of specialisms sometimes have conflicting views which may be traced to the different philosophical underpinnings.

Another context in which externalities apply is when globalization permits one player in a market who is unconcerned with biodiversity to undercut prices of another who is – creating a race to the bottom in regulations and conservation. This, in turn, may cause loss of natural capital with consequent erosion, water purity problems, diseases, desertification, and other outcomes that are not efficient in an economic sense. This concern is related to the subfield of sustainable development and its political relation, the anti-globalization movement.

[Diagram: the three pillars of sustainability (environment, economic, social); the pairwise overlaps are labelled bearable (social ecology), viable, and equitable, with sustainable at the centre.]

Environmental economics was once distinct from resource economics. Natural resource economics as a subfield began when the main concern of researchers was the optimal commercial exploitation of natural resource stocks. But resource managers and policy-makers eventually began to pay attention to the broader importance of natural resources (e.g. values of fish and trees beyond just their commercial exploitation). It is now difficult to distinguish "environmental" and "natural resource" economics as separate fields as the two became associated with sustainability. Many of the more radical green economists split off to work on an alternate political economy.

Environmental economics was a major influence on the theories of natural capitalism and environmental finance, which could be said to be two sub-branches of environmental economics concerned with resource conservation in production, and the value of biodiversity to humans, respectively. The theory of natural capitalism (Hawken, Lovins, Lovins) goes further than traditional environmental economics by envisioning a world where natural services are considered on par with physical capital.

The more radical green economists reject neoclassical economics in favour of a new political economy beyond capitalism or communism that gives a greater emphasis to the interaction of the human economy and the natural environment, acknowledging that "economy is three-fifths of ecology". This political group is a proponent of a transition to renewable energy.

These more radical approaches would imply changes to money supply and likely also a bioregional democracy so that political, economic, and ecological "environmental limits" were all aligned, and not subject to the arbitrage normally possible under capitalism.

An emerging sub-field of environmental economics studies its intersection with development economics. Dubbed "envirodevonomics" by Michael Greenstone and B. Kelsey Jack in their paper "Envirodevonomics: A Research Agenda for a Young Field", the sub-field is primarily interested in studying "why environmental quality [is] so poor in developing countries." A strategy for better understanding this correlation between a country's GDP and its environmental quality involves analyzing how many of the central concepts of environmental economics, including market failures, externalities, and willingness to pay, may be complicated by the particular problems facing developing countries, such as political issues, lack of infrastructure, or inadequate financing tools, among many others.

In the field of law and economics, environmental law is studied from an economic perspective. The economic analysis of environmental law studies instruments such as zoning, expropriation, licensing, third party liability, safety regulation, mandatory insurance, and criminal sanctions. A book by Michael Faure (2003) surveys this literature.

Professional bodies

The main academic and professional organizations for the discipline of Environmental Economics are the Association of Environmental and Resource Economists (AERE) and the European Association for Environmental and Resource Economics (EAERE). The main academic and professional organization for the discipline of Ecological Economics is the International Society for Ecological Economics (ISEE). The main organization for Green Economics is the Green Economics Institute.

Tuesday, June 17, 2025

Cognitive computing

From Wikipedia, the free encyclopedia

Cognitive computing refers to technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.

Definition

At present, there is no widely agreed upon definition for cognitive computing in either academia or industry.

In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain (2004). In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus. Cognitive computing applications link data analysis and adaptive user interface (AUI) page displays to adjust content for a particular type of audience. As such, cognitive computing hardware and applications strive to be more affective and more influential by design.

Basic scheme of a cognitive system. With sensors, such as keyboards, touchscreens, cameras, microphones or temperature sensors, signals from the real world environment can be detected. For perception, these signals are recognised by the cognition of the cognitive system and converted into digital information. This information can be documented and is processed. The result of deliberation can also be documented and is used to control and execute an action in the real world environment with the help of actuators, such as engines, loudspeakers, displays or air conditioners for example.

The term "cognitive system" also applies to any artificial construct able to perform a cognitive process where a cognitive process is the transformation of data, information, knowledge, or wisdom to a new level in the DIKW Pyramid. While many cognitive systems employ techniques having their origination in artificial intelligence research, cognitive systems, themselves, may not be artificially intelligent. For example, a neural network trained to recognize cancer on an MRI scan may achieve a higher success rate than a human doctor. This system is certainly a cognitive system but is not artificially intelligent.

Cognitive systems may be engineered to feed on dynamic data in real-time, or near real-time, and may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

Cognitive analytics

Cognitive computing-branded technology platforms typically specialize in the processing and analysis of large, unstructured datasets.

Applications

Education
Even if cognitive computing cannot take the place of teachers, it can still be a heavy driving force in the education of students. Cognitive computing can be applied in the classroom by essentially giving each individual student a personalized assistant. This cognitive assistant can relieve the stress that teachers face while teaching students, while also enhancing the student's overall learning experience. Teachers may not be able to pay every student individual attention, and this is where cognitive computers fill the gap. Some students may need a little more help with a particular subject, and for many students, human interaction between student and teacher can cause anxiety and be uncomfortable. With the help of cognitive computer tutors, students need not face that uneasiness and can gain the confidence to learn and do well in the classroom. While a student is in class with their personalized assistant, the assistant can develop various techniques, like creating lesson plans, tailored to the student and their needs.
Healthcare
Numerous tech companies are developing cognitive computing technology for use in the medical field. The ability to classify and identify is one of the main goals of these cognitive devices, a trait that can be very helpful in the study of identifying carcinogens. Such a cognitive detection system can assist the examiner in interpreting countless documents in less time than without the technology. It can also evaluate information about the patient, looking through every medical record in depth, searching for indications that may be the source of their problems.
Commerce
Together with artificial intelligence, cognitive computing has been used in warehouse management systems to collect, store, organize and analyze all related supplier data, with the aims of improving efficiency, enabling faster decision-making, monitoring inventory and detecting fraud.
Human Cognitive Augmentation
In situations where humans are using or working collaboratively with cognitive systems, called a human/cog ensemble, results achieved by the ensemble are superior to results obtainable by the human working alone. Therefore, the human is cognitively augmented. In cases where the human/cog ensemble achieves results at, or superior to, the level of a human expert then the ensemble has achieved synthetic expertise. In a human/cog ensemble, the "cog" is a cognitive system employing virtually any kind of cognitive computing technology.
Other use cases

Industry work

Cognitive computing, in conjunction with big data and algorithms that comprehend customer needs, can be a major advantage in economic decision making.

The powers of cognitive computing and artificial intelligence hold the potential to affect almost every task that humans are capable of performing. This can negatively affect human employment, as there would no longer be the same need for human labor. It would also increase the inequality of wealth: the people at the head of the cognitive computing industry would grow significantly richer, while workers without ongoing, reliable employment would become less well off.

The more industries start to use cognitive computing, the more difficult it will be for humans to compete. Increased use of the technology will also increase the amount of work that AI-driven robots and machines can perform. Only extraordinarily talented, capable and motivated humans would be able to keep up with the machines. The influence of competitive individuals, in conjunction with artificial intelligence and cognitive computing, has the potential to change the course of humankind.

Copenhagen Consensus

From Wikipedia, the free encyclopedia

Copenhagen Consensus is a project that seeks to establish priorities for advancing global welfare using methodologies based on the theory of welfare economics, using cost–benefit analysis. It was conceived and organized around 2004 by Bjørn Lomborg, the author of The Skeptical Environmentalist and the then director of the Danish government's Environmental Assessment Institute.

The project is run by the Copenhagen Consensus Center, which is directed by Lomborg and was part of the Copenhagen Business School, but it is now an independent 501(c)(3) non-profit organisation registered in the USA. The project considers possible solutions to a wide range of problems, presented by experts in each field. These are evaluated and ranked by a panel of economists. The emphasis is on rational prioritization by economic analysis. The panel is given an arbitrary budget constraint and instructed to use cost–benefit analysis to focus on a bottom line approach in solving/ranking presented problems. The approach is justified as a corrective to standard practice in international development, where, it is alleged, media attention and the "court of public opinion" results in priorities that are often far from optimal.
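In spirit, the procedure resembles a greedy knapsack: estimate costs and benefits for each proposal, rank by benefit–cost ratio, and fund down the list until the budget runs out. A minimal sketch with invented projects and numbers (the actual panel's process involved expert deliberation, not just a mechanical ranking):

    # Greedy prioritization by benefit-cost ratio (invented numbers, $bn).
    projects = [
        ("Project A", 10, 40),   # (name, cost, benefit)
        ("Project B", 25, 50),
        ("Project C", 5, 30),
    ]
    budget, funded = 30, []
    for name, cost, benefit in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
        if cost <= budget:       # fund the best remaining ratio that still fits
            funded.append(name)
            budget -= cost
    print(funded)  # ['Project C', 'Project A'] with these numbers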

History

The project has held conferences in 2004, 2007, 2008, 2009, 2011 and 2012. The 2012 conference ranked bundled micronutrient interventions the highest priority, and the 2008 report identified supplementing vitamins for undernourished children as the world's best investment. The 2009 conference, dealing specifically with global warming, proposed research into marine cloud whitening (ships spraying seawater into clouds to make them reflect more sunlight and thereby reduce temperature) as the top climate change priority, though climate change itself was ranked well below other world problems. In 2011 the Copenhagen Consensus Center carried out the Rethink HIV project together with the RUSH Foundation, to find smart solutions to the problem of HIV/AIDS. In 2007, it looked into which projects would contribute most to welfare in the Copenhagen Consensus for Latin America, in cooperation with the Inter-American Development Bank.

The initial project was co-sponsored by the Danish government and The Economist. A book summarizing the Copenhagen Consensus 2004 conclusions, Global Crises, Global Solutions, edited by Lomborg, was published in October 2004 by Cambridge University Press, followed by the second edition published in 2009 based on the 2008 conclusions.

Copenhagen Consensus 2012

In May 2012, the third global Copenhagen Consensus was held, gathering economists to analyze the costs and benefits of different approaches to tackling the world‘s biggest problems. The aim was to provide an answer to the question: If you had $75bn for worthwhile causes, where should you start? A panel including four Nobel laureates met in Copenhagen, Denmark, in May 2012. The panel’s deliberations were informed by thirty new economic research papers that were written just for the project by scholars from around the world.

Economists

The panel's members included four Nobel laureate economists.

Challenges

In addition, the Center commissioned research on corruption and trade barriers, but the Expert Panel did not rank these for Copenhagen Consensus 2012, because the solutions to these challenges are political rather than investment-related.

Outcome

Given the budget constraints, the panel found 16 investments worthy of funding (in descending order of desirability):

  1. Bundled micronutrient interventions to fight hunger and improve education
  2. Expanding the subsidy for malaria combination treatment
  3. Expanded childhood immunization coverage
  4. Deworming of schoolchildren, to improve educational and health outcomes
  5. Expanding tuberculosis treatment
  6. R&D to increase yield enhancements, to decrease hunger, fight biodiversity destruction, and lessen the effects of climate change
  7. Investing in effective early warning systems to protect populations against natural disaster
  8. Strengthening surgical capacity
  9. Hepatitis B immunization
  10. Using low-cost drugs in the case of acute heart attacks in poorer nations (these are already available in developed countries)
  11. Salt reduction campaign to reduce chronic disease
  12. Geo‐engineering R&D into the feasibility of solar radiation management
  13. Conditional cash transfer for school attendance
  14. Accelerated HIV vaccine R&D
  15. Extended field trial of information campaigns on the benefits from schooling
  16. Borehole and public hand pump intervention

Slate ranking

During the days of the Copenhagen Consensus 2012 conference, a series of articles was published in Slate Magazine each about a challenge that was discussed, and Slate readers could make their own ranking, voting for the solutions which they thought were best. Slate readers' ranking corresponded with that of the Expert Panel on many points, including the desirability of bundled micronutrient intervention; however, the most striking difference was in connection with the problem of overpopulation. Family planning ranked highest on the Slate priority list, whereas it didn't feature in the top 16 of the Expert Panel's prioritisation.

Copenhagen Consensus 2008

Economists


Results

In the Copenhagen Consensus 2008, the solutions for global problems were ranked in the following order:

  1. Micronutrient supplements for children (vitamin A and zinc)
  2. The Doha development agenda
  3. Micronutrient fortification (iron and salt iodization)
  4. Expanded immunization coverage for children
  5. Biofortification
  6. Deworming and other nutrition programs at school
  7. Lowering the price of schooling
  8. Increase and improve girls’ schooling
  9. Community-based nutrition promotion
  10. Provide support for women’s reproductive role
  11. Heart attack acute management
  12. Malaria prevention and treatment
  13. Tuberculosis case finding and treatment
  14. R&D in low-carbon energy technologies
  15. Bio-sand filters for household water treatment
  16. Rural water supply
  17. Conditional cash transfer
  18. Peace-keeping in post-conflict situations
  19. HIV combination prevention
  20. Total sanitation campaign
  21. Improving surgical capacity at district hospital level
  22. Microfinance
  23. Improved stove intervention to combat air pollution
  24. Large, multipurpose dam in Africa
  25. Inspection and maintenance of diesel vehicles
  26. Low sulfur diesel for urban road vehicles
  27. Diesel vehicle particulate control technology
  28. Tobacco tax
  29. R&D and carbon dioxide emissions reduction
  30. Carbon dioxide emissions reduction

Unlike the 2004 results, these were not grouped into qualitative bands such as Good, Poor, etc.

Gary Yohe, one of the authors of the global warming paper, subsequently accused Lomborg of "deliberate distortion of our conclusions", adding that "as one of the authors of the Copenhagen Consensus Project's principal climate paper, I can say with certainty that Lomborg is misrepresenting our findings thanks to a highly selective memory". Kåre Fog further pointed out that the future benefits of emissions reduction were discounted at a higher rate than for any of the other 27 proposals, stating "so there is an obvious reason why the climate issue always is ranked last" in Lomborg's environmental studies.

In a subsequent joint statement settling their differences, Lomborg and Yohe agreed that the "failure" of Lomborg's emissions reduction plan "could be traced to faulty design".

Climate Change Project

In 2009, the Copenhagen Consensus Center convened an expert panel specifically to examine solutions to climate change. The process was similar to the 2004 and 2008 Copenhagen Consensus, involving papers by specialists considered by a panel of economists. The panel ranked 15 solutions, of which the top 5 were:

  1. Research into marine cloud whitening (involving ships spraying sea-water into clouds so as to reflect more sunlight and thereby reduce temperatures)
  2. Technology-led policy response
  3. Research into stratospheric aerosol injection (involving injecting sulphur dioxide into the upper atmosphere to reduce sunlight)
  4. Research into carbon storage
  5. Planning for adaptation

The benefit of the number 1 solution is that, if the research proved successful, it could be deployed relatively cheaply and quickly. Potential problems include environmental impacts, e.g., from changing rainfall patterns.

Measures to cut carbon and methane emissions, such as carbon taxes, came bottom of the results list, partly because they would take a long time to have much effect on temperatures.

Copenhagen Consensus 2004

Process

Eight economists met May 24–28, 2004 at a roundtable in Copenhagen. A series of background papers had been prepared in advance to summarize the current knowledge about the welfare economics of 32 proposals ("opportunities") from 10 categories ("challenges"). For each category, one assessment article and two critiques were produced. After a closed-door review of the background papers, each of the participants gave economic priority rankings to 17 of the proposals (the rest were deemed inconclusive).

Economists

Challenges

Each of the 10 challenge areas was covered by a paper from a designated author; within each challenge, 3–4 opportunities (proposals) were analyzed.


Results

The panel agreed to rate seventeen of the thirty-two opportunities within seven of the ten challenges. The rated opportunities were further classified into four groups: Very Good, Good, Fair and Poor; all results are based on cost–benefit analysis.

Very good

The highest priority was assigned to implementing certain new measures to prevent the spread of HIV and AIDS. The economists estimated that an investment of $27 billion could avert nearly 30 million new infections by 2010.

Policies to reduce malnutrition and hunger were chosen as the second priority. Increasing the availability of micronutrients, particularly reducing iron deficiency anemia through dietary supplements, was judged to have an exceptionally high ratio of benefits to costs, which were estimated at $12 billion.


Third on the list was trade liberalization; the experts agreed that modest costs could yield large benefits for the world as a whole and for developing nations.

The fourth priority identified was controlling and treating malaria; $13 billion costs were judged to produce very good benefits, particularly if applied toward chemically-treated mosquito netting for beds.

Good

The fifth priority identified was increased spending on research into new agricultural technologies appropriate for developing nations. Three proposals for improving sanitation and water quality for a billion of the world’s poorest followed in priority (ranked sixth to eighth: small-scale water technology for livelihoods, community-managed water supply and sanitation, and research on water productivity in food production). Completing this group was the 'government' project concerned with lowering the cost of starting new businesses.

Fair

Ranked tenth was the project on lowering barriers to migration for skilled workers. Eleventh and twelfth on the list were malnutrition projects – improving infant and child nutrition and reducing the prevalence of low birth weight. Ranked thirteenth was the plan for scaled-up basic health services to fight diseases.

Poor

Ranked fourteenth to seventeenth were: a migration project (guest-worker programmes for the unskilled), which was deemed to discourage integration; and three projects addressing climate change (an optimal carbon tax, the Kyoto Protocol, and a value-at-risk carbon tax), which the panel judged to be the least cost-efficient of the proposals.

Global warming

The panel found that all three climate policies presented had "costs that were likely to exceed the benefits". It further stated that global warming must be addressed, but agreed that approaches based on too abrupt a shift toward lower emissions of carbon were needlessly expensive.

In regard to the science of global warming, the paper presented by Cline relied primarily on the framework set by the Intergovernmental Panel on Climate Change, and accepted the consensus view that greenhouse gas emissions from human activities are the primary cause of global warming. Cline relied on various research studies published in the field of economics and attempted to compare the estimated cost of mitigation policies against the expected reduction in damage from global warming.

Cline used a discount rate of 1.5%. (Cline's summary is on the project webpage.) He justified his choice of discount rate on the grounds of "utility-based discounting"; that is, there should be zero bias in preference between the present and future generations (see time preference). Moreover, Cline extended the time frame of the analysis to three hundred years in the future. Because the expected net damage of global warming becomes more apparent beyond the present generation(s), this choice had the effect of increasing the present-value cost of the damage of global warming as well as the benefit of abatement policies.
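The sensitivity is easy to see numerically: the present value of a dollar of damage occurring t years from now is 1/(1+r)^t, so over Cline's 300-year horizon the choice between 1.5% and a more conventional 5% changes the weight given to distant damages by several orders of magnitude. A small illustration:

    # Present value of $1 of damage t years from now at discount rate r.
    for r in (0.015, 0.05):
        for t in (50, 100, 300):
            print(f"r={r:.1%}, t={t:>3}: PV = {1 / (1 + r) ** t:.2e}")
    # At t=300: ~1.1e-02 at 1.5%, but ~4.4e-07 at 5% -- roughly a
    # 26,000-fold difference in how much that future damage counts today.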

Criticism

Members of the panel, including Thomas Schelling, and one of the two perspective paper writers, Robert O. Mendelsohn (both opponents of the Kyoto Protocol), criticised Cline, mainly on the issue of discount rates. (See "The opponent notes to the paper on Climate Change".) Mendelsohn, in particular, characterizing Cline's position, said that "[i]f we use a large discount rate, they will be judged to be small effects" and called it "circular reasoning, not a justification". Cline responded by arguing that there is no obvious reason to use a large discount rate just because this is what is usually done in economic analysis; in other words, climate change ought to be treated differently from other, more imminent problems. The Economist quoted Mendelsohn as worrying that "climate change was set up to fail".

Moreover, Mendelsohn argued that Cline's damage estimates were excessive. Citing various recent articles, including some of his own, he stated that "[a] series of studies on the impacts of climate change have systematically shown that the older literature overestimated climate damages by failing to allow for adaptation and for climate benefits."

The 2004 Copenhagen Consensus attracted various criticisms:

Approach and alleged bias

The 2004 report, especially its conclusion regarding climate change, was subsequently criticised from a variety of perspectives. The general approach adopted to set priorities was criticised by Jeffrey Sachs, an American economist and advocate of both the Kyoto Protocol and increased development aid, who argued that the analytical framework was inappropriate and biased, and that the project "failed to mobilize an expert group that could credibly identify and communicate a true consensus of expert knowledge on the range of issues under consideration".

Tom Burke, a former director of Friends of the Earth, repudiated the entire approach of the project, arguing that applying cost–benefit analysis in the way the Copenhagen panel did was "junk economics".

John Quiggin, an Australian economics professor, commented that the project is a mix of "a substantial contribution to our understanding of important issues facing the world" and "exercises in political propaganda", and argued that the selection of the panel members was slanted towards conclusions previously supported by Lomborg. Quiggin observed that Lomborg had argued in his controversial book The Skeptical Environmentalist that resources allocated to mitigating global warming would be better spent on improving water quality and sanitation, and was therefore seen as having prejudged the issues.

Under the heading "Wrong Question", Sachs further argued that: "The panel that drew up the Copenhagen Consensus was asked to allocate an additional US$50 billion in spending by wealthy countries, distributed over five years, to address the world’s biggest problems. This was a poor basis for decision-making and for informing the public. By choosing such a low sum — a tiny fraction of global income — the project inherently favoured specific low-cost schemes over bolder, larger projects. It is therefore no surprise that the huge and complex challenge of long-term climate change was ranked last, and that scaling up health services in poor countries was ranked lower than interventions against specific diseases, despite warnings in the background papers that such interventions require broader improvements in health services."

In response, Lomborg argued that $50 billion was "an optimistic but realistic example of actual spending": "Experience shows that pledges and actual spending are two different things. In 1970 the UN set itself the task of doubling development assistance. Since then the percentage has actually been dropping." He added that "even if Sachs or others could gather much more than $50 billion over the next 4 years, the Copenhagen Consensus priority list would still show us where it should be invested first."

Thomas Schelling, one of the Copenhagen Consensus panel experts, later distanced himself from the way in which the Consensus results have been interpreted in the wider debate, arguing that it was misleading to put climate change at the bottom of the priority list: the panel members were presented with a dramatic proposal for handling climate change, and if given the opportunity, Schelling would have put a more modest proposal higher on the list. The Yale economist Robert O. Mendelsohn was the official critic of the climate change proposal during the Consensus; he thought the proposal was way out of the mainstream and could only be rejected, and worried that climate change had been set up to fail.

Michael Grubb, an economist and lead author for several IPCC reports, commented on the Copenhagen Consensus, writing:

To try and define climate policy as a trade-off against foreign aid is thus a forced choice that bears no relationship to reality. No government is proposing that the marginal costs associated with, for example, an emissions trading system, should be deducted from its foreign aid budget. This way of posing the question is both morally inappropriate and irrelevant to the determination of real climate mitigation policy.

Panel membership

Quiggin argued that the members of the 2004 panel, selected by Lomborg, were "generally towards the right and, to the extent that they had stated views, to be opponents of Kyoto." Sachs also noted that the panel members had not previously been much involved in issues of development economics and were unlikely to reach useful conclusions in the time available to them. Commenting on the 2004 Copenhagen Consensus, climatologist and IPCC author Stephen Schneider criticised Lomborg for inviting only economists to participate:

In order to achieve a true consensus, I think Lomborg would've had to invite ecologists, social scientists concerned with justice and how climate change impacts and policies are often inequitably distributed, philosophers who could challenge the economic paradigm of "one dollar, one vote" implicit in cost–benefit analyses promoted by economists, and climate scientists who could easily show that Lomborg's claim that climate change will have only minimal effects is not sound science.

Lomborg countered criticism of the panel membership by stating that "Sachs disparaged the Consensus 'dream team' because it only consisted of economists. But that was the very point of the project. Economists have expertise in economic prioritization. It is they and not climatologists or malaria experts who can prioritize between battling global warming or communicable disease".

Clinical trial

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Clinical_...