History of computing

The history of computing extends beyond the history of computing hardware and modern computing technology to include earlier methods that relied on pen and paper or chalk and slate, with or without the aid of tables.

Concrete devices

Digital computing is intimately tied to the representation of numbers. But long before abstractions like numbers arose, there were mathematical concepts to serve the purposes of civilization. These concepts are implicit in concrete practices.

Numbers

Eventually, the concept of numbers became concrete and familiar enough for counting to arise, at times with sing-song mnemonics to teach sequences to others. All known human languages, except the Pirahã language, have words for at least the numerals "one" and "two", and even some animals such as the blackbird can distinguish a surprising number of items.

Advances in the numeral system and mathematical notation eventually led to the discovery of mathematical operations such as addition, subtraction, multiplication, division, squaring, square root, and so forth. Eventually, the operations were formalized, and concepts about the operations became understood well enough to be stated formally, and even proven. See, for example, Euclid's algorithm for finding the greatest common divisor of two numbers.
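
Euclid's procedure translates directly into a few lines of code. The sketch below, in Python (a language chosen here purely for illustration), repeatedly replaces the pair of numbers with the smaller number and the remainder of their division, exactly as the classical algorithm prescribes.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the GCD is unchanged when the larger
    number is replaced by its remainder modulo the smaller one."""
    while b != 0:
        a, b = b, a % b
    return a

# 1071 = 2*462 + 147, 462 = 3*147 + 21, 147 = 7*21 + 0
print(gcd(1071, 462))  # -> 21
```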

By the High Middle Ages, the positional Hindu–Arabic numeral system had reached Europe, allowing the systematic computation of numbers. During this period, the representation of a calculation on paper allowed the calculation of mathematical expressions and the tabulation of mathematical functions such as the square root, the common logarithm (for use in multiplication and division), and the trigonometric functions. By the time of Isaac Newton's research, paper or vellum was an important computing resource, and even in our present time, researchers like Enrico Fermi would cover random scraps of paper with calculations to satisfy their curiosity about an equation. Even into the period of programmable calculators, Richard Feynman would unhesitatingly compute by hand any steps that overflowed the memory of the calculators, just to learn the answer; by 1976 Feynman had purchased an HP-25 calculator with a 49-program-step capacity, and if a differential equation required more than 49 steps to solve, he would simply continue the computation by hand.
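
The appeal of logarithm tables is easy to demonstrate: because log(xy) = log x + log y, a multiplication reduces to two table look-ups, an addition, and an inverse look-up. The sketch below simulates that workflow with a small four-digit table of common logarithms; the table's range and rounding are illustrative assumptions, not those of any historical table.

```python
import math

# A miniature "table" of common (base-10) logarithms, rounded to 4 places,
# standing in for the printed tables human computers once worked from.
log_table = {n: round(math.log10(n), 4) for n in range(100, 1000)}

def multiply_via_logs(x: int, y: int) -> float:
    """Multiply two 3-digit numbers the way a table user would:
    look up both logs, add them, then invert (the 'antilog' step)."""
    log_sum = log_table[x] + log_table[y]
    return 10 ** log_sum

print(multiply_via_logs(314, 271))  # ~85094.1; exact product is 85094
```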

Early computation

Mathematical statements need not be abstract only; when a statement can be illustrated with actual numbers, the numbers can be communicated and a community can arise. This allows the repeatable, verifiable statements which are the hallmark of mathematics and science. These kinds of statements have existed for thousands of years, and across multiple civilizations, as shown below:

The earliest known tool used for computation is the Sumerian abacus, believed to have been invented in Babylon c. 2700–2300 BC. It was originally used by drawing lines in sand and placing pebbles on them.

In c. 1050–771 BC, the south-pointing chariot was invented in ancient China. It was the first known geared mechanism to use a differential gear, which was later used in analog computers. The Chinese also invented a more sophisticated abacus, known as the Chinese abacus, from around the 2nd century BC.

In the 3rd century BC, Archimedes used the mechanical principle of balance (see Archimedes Palimpsest § The Method of Mechanical Theorems) to calculate mathematical problems, such as the number of grains of sand in the universe (The Sand Reckoner), which also required a recursive notation for numbers (e.g., the myriad myriad).

The Antikythera mechanism is believed to be the earliest known geared computing device. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.

According to Simon Singh, Muslim mathematicians also made important advances in cryptography, such as the development of cryptanalysis and frequency analysis by Alkindus. Programmable machines were also invented by Muslim engineers, such as the automatic flute player by the Banū Mūsā brothers.

During the Middle Ages, several European philosophers made attempts to produce analog computer devices. Influenced by the Arabs and Scholasticism, Majorcan philosopher Ramon Llull (1232–1315) devoted a great part of his life to defining and designing several logical machines that, by combining simple and undeniable philosophical truths, could produce all possible knowledge. These machines were never actually built, as they were more of a thought experiment to produce new knowledge in systematic ways; although they could make simple logical operations, they still needed a human being for the interpretation of results. Moreover, they lacked a versatile architecture, each machine serving only very concrete purposes. Despite this, Llull's work had a strong influence on Gottfried Leibniz (1646–1716), who developed his ideas further and built several calculating tools using them.

The apex of this early era of mechanical computing can be seen in the Difference Engine and its successor, the Analytical Engine, both by Charles Babbage. Babbage never completed constructing either engine, but in 2002 Doron Swade and a group of other engineers at the Science Museum in London completed Babbage's Difference Engine using only materials that would have been available in the 1840s. By following Babbage's detailed design they were able to build a functioning engine, allowing historians to say, with some confidence, that if Babbage had been able to complete his Difference Engine it would have worked. The more advanced Analytical Engine combined concepts from his previous work and that of others to create a device that, if constructed as designed, would have possessed many properties of a modern electronic computer, such as an internal "scratch memory" equivalent to RAM, multiple forms of output including a bell, a graph-plotter, and a simple printer, and a programmable input-output "hard" memory of punch cards which it could modify as well as read. The key advance of Babbage's devices over those created before them was that each component was independent of the rest of the machine, much like the components of a modern electronic computer. This was a fundamental shift in thought: previous computational devices served only a single purpose and had to be, at best, disassembled and reconfigured to solve a new problem. Babbage's devices could be reprogrammed to solve new problems by the entry of new data, and could act upon previous calculations within the same series of instructions. Ada Lovelace took this concept one step further by creating a program for the Analytical Engine to calculate Bernoulli numbers, a complex calculation requiring a recursive algorithm. This is considered to be the first example of a true computer program: a series of instructions that act upon data not known in full until the program is run.

Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the second of only two designs for mechanical analytical engines in history. Two other inventors, Leonardo Torres Quevedo and Vannevar Bush, also did follow-on research based on Babbage's work. In his Essays on Automatics (1914), Torres presented the design of an electromechanical calculating machine and introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, an arithmetic unit connected to a remote typewriter, on which commands could be typed and the results printed automatically. Bush's paper Instrumental Analysis (1936) discussed using existing IBM punch card machines to implement Babbage's design. In the same year, he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.

Several examples of analog computation survived into recent times. A planimeter is a device that computes integrals, using distance as the analog quantity. Until the 1980s, HVAC systems used air both as the analog quantity and as the controlling element. Unlike modern digital computers, analog computers are not very flexible and must be reconfigured (i.e., reprogrammed) manually to switch from working on one problem to another. Analog computers had an advantage over early digital computers in that they could solve complex problems using behavioral analogues, while the earliest attempts at digital computers were quite limited.

A Smith Chart is a well-known nomogram.

Since computers were rare in this era, solutions were often hard-coded into paper forms such as nomograms, which could then produce analog solutions to problems such as the distribution of pressures and temperatures in a heating system.

Digital electronic computers

The "brain" [computer] may one day come down to our level [of the common people] and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far.

— British newspaper The Star in a June 1949 news article about the EDSAC computer, long before the era of personal computers.

In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. During 1880–81 he had shown that NOR gates alone (or NAND gates alone) can be used to reproduce the functions of all the other logic gates, but this work remained unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates.
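
The universality result can be verified mechanically: every other gate can be written as a composition of NANDs. A minimal sketch, with an exhaustive check over all inputs:

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Compositions demonstrating NAND's universality (Sheffer's result):
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))   # De Morgan's law
def nor_(a, b): return not_(or_(a, b))          # Peirce's arrow

# Exhaustive check over all four input combinations.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert nor_(a, b) == (not (a or b))
```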

Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's 1907 modification of the Fleming valve can be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for the first modern electronic AND gate, built in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).

The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon, and Victor Shestakov published papers introducing switching circuit theory, using digital electronics for Boolean algebraic operations.

In 1936 Alan Turing published his seminal paper On Computable Numbers, with an Application to the Entscheidungsproblem in which he modeled computation in terms of a one-dimensional storage tape, leading to the idea of the Universal Turing machine and Turing-complete systems.

The first digital electronic computer was developed between April 1936 and June 1939 at the IBM Patent Department in Endicott, New York, by Arthur Halsey Dickinson. With this computer, IBM introduced a calculating device with a keyboard, processor, and electronic output (display). The competing machine was the digital electronic computer NCR3566, developed at NCR in Dayton, Ohio, by Joseph Desch and Robert Mumma between April and August 1939. The IBM and NCR machines were decimal, executing addition and subtraction in binary position code.

In December 1939 John Atanasoff and Clifford Berry completed their experimental model to prove the concept of the Atanasoff–Berry computer (ABC), which had begun development in 1937. The experimental model was binary, executed addition and subtraction in octal binary code, and was the first binary digital electronic computing device. The Atanasoff–Berry computer was intended to solve systems of linear equations, though it was not programmable. The computer was never truly completed due to Atanasoff's departure from Iowa State University in 1942 to work for the United States Navy. Many people credit the ABC with many of the ideas used in later developments during the age of early electronic computing.

The Z3 computer, built by German inventor Konrad Zuse in 1941, was the first programmable, fully automatic computing machine, but it was not electronic.

During World War II, ballistics computing was done by women, who were hired as "computers." Until about 1945 the term "computer" mostly referred to these women (a role now described as "operator"); after that, it took on the modern meaning of a machine, which it presently holds.

The ENIAC (Electronic Numerical Integrator And Computer) was the first electronic general-purpose computer, announced to the public in 1946. It was Turing-complete, digital, and capable of being reprogrammed to solve a full range of computing problems. Women implemented the programming for machines like the ENIAC, and men created the hardware.

The Manchester Baby was the first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.

William Shockley, John Bardeen and Walter Brattain at Bell Labs invented the first working transistor, the point-contact transistor, in 1947, followed by the bipolar junction transistor in 1948. At the University of Manchester in 1953, a team under the leadership of Tom Kilburn designed and built the first transistorized computer, called the Transistor Computer, a machine using the newly developed transistors instead of valves. The first stored-program transistor computer was the ETL Mark III, developed by Japan's Electrotechnical Laboratory from 1954 to 1956. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.

In 1954, 95% of computers in service were being used for engineering and scientific purposes.

Personal computers

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. The MOSFET made it possible to build high-density integrated circuit chips. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.

The silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild Semiconductor in 1968. This led to the development of the first single-chip microprocessor, the Intel 4004, designed from 1969 to 1970 in a project led by Intel's Federico Faggin, Marcian Hoff, and Stanley Mazor, together with Busicom's Masatoshi Shima. The chip was mainly designed and realized by Faggin using his silicon-gate MOS technology. The microprocessor set off the microcomputer revolution, with the development of the microcomputer, which would later be called the personal computer (PC).

Most early microprocessors, such as the Intel 8008 and Intel 8080, were 8-bit. Texas Instruments released the first fully 16-bit microprocessor, the TMS9900 processor, in June 1976. They used the microprocessor in the TI-99/4 and TI-99/4A computers.

The 1980s brought significant advances in microprocessors that greatly impacted engineering and other sciences. The Motorola 68000 microprocessor had a processing speed far superior to the other microprocessors in use at the time, and microcomputers built around it could do correspondingly more computing. This was evident in the 1983 release of the Apple Lisa, one of the first personal computers with a graphical user interface (GUI) to be sold commercially. It ran on the Motorola 68000 CPU and used dual floppy disk drives and a 5 MB hard drive for storage. The machine also had 1 MB of RAM for running software from disk without constantly rereading the disk. After the Lisa failed commercially, Apple released its first Macintosh computer, still running on the Motorola 68000 microprocessor, but with only 128 KB of RAM, one floppy drive, and no hard drive in order to lower the price.

In the late 1980s and early 1990s, computers became more useful for personal and work purposes, such as word processing. In 1989, Apple released the Macintosh Portable; it weighed 7.3 kg (16 lb) and was extremely expensive, costing US$7,300. At launch it was one of the most powerful laptops available, but due to the price and weight it was not met with great success and was discontinued only two years later. That same year Intel introduced the Touchstone Delta supercomputer, which had 512 microprocessors. This technological advance was very significant, as it served as a model for some of the fastest multi-processor systems in the world. Caltech researchers used it as a prototype for projects such as real-time processing of satellite images and simulation of molecular models for various fields of research.

Supercomputers

In terms of supercomputing, the first widely acknowledged supercomputer was the Control Data Corporation (CDC) 6600, built in 1964 by Seymour Cray. Its maximum speed was 40 MHz, or 3 million floating point operations per second (FLOPS). The CDC 6600 was replaced by the CDC 7600 in 1969; although its normal clock speed was not faster than the 6600's, the 7600 was still faster due to its peak clock speed, approximately 30 times that of the 6600. Although CDC was a leader in supercomputers, its relationship with Seymour Cray, which had already been deteriorating, completely collapsed. In 1972, Cray left CDC and founded his own company, Cray Research Inc. With support from Wall Street investors, an industry fueled by the Cold War, and freedom from the restrictions he had faced within CDC, he created the Cray-1 supercomputer. With a clock speed of 80 MHz, or 136 megaFLOPS, Cray made a name for himself in the computing world. By 1982, Cray Research had produced the Cray X-MP, equipped with multiprocessing, and in 1985 it released the Cray-2, which continued the trend of multiprocessing and clocked at 1.9 gigaFLOPS. Cray Research developed the Cray Y-MP in 1988 but afterward struggled to continue producing supercomputers, largely because the Cold War had ended: demand for cutting-edge computing from universities and government declined drastically, while demand for microprocessing units increased.

In 1998, David Bader developed the first Linux supercomputer using commodity parts. While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed, low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual-processor, 333 MHz Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components, as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously. Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world. Though Linux-based clusters using consumer-grade parts, such as Beowulf clusters, existed before Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.

Today, supercomputers are still used by the governments of the world and educational institutions for computations such as simulations of natural disasters, genetic variant searches within a population relating to disease, and more. As of November 2024, the fastest supercomputer is El Capitan.

Navigation and astronomy

Starting with known special cases, the calculation of logarithms and trigonometric functions can be performed by looking up numbers in a mathematical table and interpolating between known cases. For small enough differences, this linear operation was accurate enough for use in navigation and astronomy in the Age of Exploration. The uses of interpolation have thrived in the past 500 years: by the twentieth century Leslie Comrie and W. J. Eckert systematized the use of interpolation in tables of numbers for punch card calculation.
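
A sketch of the table-and-interpolation workflow described above: given a table of a function at coarse steps, intermediate values are estimated by interpolating linearly between the two nearest entries. The function and step size here are illustrative, not taken from any historical table.

```python
import math

# A coarse "printed table" of sines at whole-degree steps.
sine_table = {d: math.sin(math.radians(d)) for d in range(0, 91)}

def sin_interpolated(degrees: float) -> float:
    """Look up the two bracketing table entries and interpolate linearly,
    as a navigator or astronomer would have done by hand."""
    lo = int(degrees)          # nearest tabulated entry below
    hi = min(lo + 1, 90)       # nearest tabulated entry above
    frac = degrees - lo
    return sine_table[lo] + frac * (sine_table[hi] - sine_table[lo])

print(sin_interpolated(36.7))               # interpolated estimate
print(math.sin(math.radians(36.7)))         # exact value, for comparison
```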

Weather prediction

The numerical solution of differential equations, notably the Navier–Stokes equations, was an important stimulus to computing, beginning with Lewis Fry Richardson's numerical approach to solving differential equations. The first computerized weather forecast was performed in 1950 by a team composed of American meteorologists Jule Charney, Philip Duncan Thompson, and Larry Gates, Norwegian meteorologist Ragnar Fjørtoft, applied mathematician John von Neumann, and ENIAC programmer Klara Dan von Neumann. To this day, some of the most powerful computer systems on Earth are used for weather forecasts.
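
The core idea behind such numerical approaches is that a differential equation can be advanced in small time steps using only arithmetic, which is exactly what a machine (or a room full of human computers) can do. The sketch below applies the simplest such scheme, the forward Euler method, to a toy equation; it illustrates the general idea only, not Richardson's actual finite-difference scheme for the atmosphere.

```python
def euler(f, y0: float, t0: float, t1: float, steps: int) -> float:
    """Advance dy/dt = f(t, y) from t0 to t1 in fixed steps,
    replacing the derivative with a finite difference."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Toy problem: dy/dt = -y with y(0) = 1; the exact solution is e^(-t).
print(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))  # ~0.3677 vs exp(-1) ~ 0.3679
```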

Symbolic computations

By the late 1960s, computer systems could perform symbolic algebraic manipulations well enough to pass college-level calculus courses.

Important women and their contributions

Women are often underrepresented in STEM fields when compared to their male counterparts. In the modern era before the 1960s, computing was widely seen as "women's work", since it was associated with the operation of tabulating machines and other mechanical office work. The accuracy of this association varied from place to place. In America, Margaret Hamilton recalled an environment dominated by men, while Elsie Shutt recalled surprise at seeing that even half of the computer operators at Raytheon were men. Machine operators in Britain were mostly women into the early 1970s. As these perceptions changed and computing became a high-status career, the field became more dominated by men. Professor Janet Abbate, in her book Recoding Gender, writes:

Yet women were a significant presence in the early decades of computing. They made up the majority of the first computer programmers during World War II; they held positions of responsibility and influence in the early computer industry; and they were employed in numbers that, while a small minority of the total, compared favorably with women's representation in many other areas of science and engineering. Some female programmers of the 1950s and 1960s would have scoffed at the notion that programming would ever be considered a masculine occupation, yet these women’s experiences and contributions were forgotten all too quickly.

Notable examples of women in the history of computing include Ada Lovelace and Margaret Hamilton, discussed above.

Progress in artificial intelligence

Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.

Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field of artificial intelligence over time. AI is a branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields including medical diagnosis, finance, robotics, law, video games, agriculture, and scientific discovery. Society as a whole looks to artificial intelligence as a key factor in the coming years because of its potential. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."

"Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time.

Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:

  1. Artificial narrow intelligence – AI capable only of specific tasks;
  2. Artificial general intelligence – AI with ability in several areas, able to autonomously solve problems it was never even designed for;
  3. Artificial superintelligence – AI capable of general tasks, including scientific creativity, social skills, and general wisdom.

To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject-matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

In 2023, humans still substantially outperformed both GPT-4 and other models tested on the ConceptARC benchmark. Those models scored 60% on most, and 77% on one category, while humans scored 91% on all and 97% on one category. However, later research in 2025 showed that human-generated output grids were only accurate 73% of the time, while AI models available that year managed to score above 77%.

Current performance in specific areas

Game | Champion year | Legal states (log10) | Game tree complexity (log10) | Game of perfect information?
Draughts (checkers) | 1994 | 21 | 31 | Perfect
Othello (reversi) | 1997 | 28 | 58 | Perfect
Chess | 1997 | 46 | 123 | Perfect
Scrabble | 2006 | – | – | –
Shogi | 2017 | 71 | 226 | Perfect
Go | 2017 | 172 | 360 | Perfect
2p no-limit hold 'em | 2017 | – | – | Imperfect
StarCraft | – | – | 270+ | Imperfect
StarCraft II | 2019 | – | – | Imperfect

There are many useful abilities that can be described as showing some form of intelligence. This gives better insight into the comparative success of artificial intelligence in different areas.

AI, like electricity or the steam engine, is a general-purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at. Some versions of Moravec's paradox observe that humans are more likely to outperform machines in areas such as physical dexterity that have been the direct target of natural selection. While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets. Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."

Games provide a high-profile benchmark for assessing rates of progress; many games have a large professional player base and a well-established competitive rating system. AlphaGo brought the era of classical board-game benchmarks to a close in 2016, when DeepMind's AlphaGo software defeated the world's best professional Go player, Lee Sedol. Games of imperfect knowledge provide new challenges to AI in the area of game theory; the most prominent milestone in this area was brought to a close by Libratus' poker victory in 2017. E-sports continue to provide additional benchmarks; Facebook AI, DeepMind, and others have engaged with the popular StarCraft franchise of videogames.

Broad classes of outcome for an AI test may be given as:

  • optimal: it is not possible to perform better (note: some of these entries were solved by humans)
  • super-human: performs better than all humans
  • high-human: performs better than most humans
  • par-human: performs similarly to most humans
  • sub-human: performs worse than most humans

Optimal

Super-human

High-human

Par-human

Sub-human

  • Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)
  • Object recognition
  • Various robotics tasks that may require advances in robot hardware as well as AI, including:
    • Stable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)
    • Humanoid soccer
  • Speech recognition: "nearly equal to human performance" (2017)
  • Explainability. Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.
  • Many tests of fluid intelligence (2020)
  • Bongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)
  • Visual Commonsense Reasoning (VCR) benchmark (as of 2020)
  • Stock market prediction: financial data collection and processing using machine learning algorithms
  • Angry Birds video game, as of 2020
  • Various tasks that are difficult to solve without contextual knowledge, including:

Proposed tests of artificial intelligence

In his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.

The Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior.

Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; however, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.

Exams

According to OpenAI, in 2023 GPT-4 scored in the 90th percentile on the Uniform Bar Exam. On the SAT, GPT-4 scored in the 89th percentile on math and the 93rd percentile on Reading & Writing. On the GRE, it scored in the 54th percentile on the writing test, the 88th percentile on the quantitative section, and the 99th percentile on the verbal section. It scored in the 99th to 100th percentile on the 2020 USA Biology Olympiad semifinal exam, and a perfect "5" on several AP exams.

Independent researchers found in 2023 that ChatGPT GPT-3.5 "performed at or near the passing threshold" for the three parts of the United States Medical Licensing Examination. GPT-3.5 was also assessed to attain a low, but passing, grade from exams for four law school courses at the University of Minnesota. GPT-4 passed a text-based radiology board–style examination.

Competitions

Many competitions and prizes, such as the ImageNet Challenge, promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data mining, robotic cars, and robot soccer, as well as conventional games.

Past and current predictions

An expert poll around 2016, conducted by Katja Grace of the Future of Humanity Institute and associates, gave median estimates of 3 years for championship Angry Birds, 4 years for the World Series of Poker, and 6 years for StarCraft. On more subjective tasks, the poll gave 6 years for folding laundry as well as an average human worker, 7–10 years for expertly answering 'easily Googleable' questions, 8 years for average speech transcription, 9 years for average telephone banking, and 11 years for expert songwriting, but over 30 years for writing a New York Times bestseller or winning the Putnam math competition.

Chess

Deep Blue at the Computer History Museum

An AI defeated a grandmaster in a regulation tournament game for the first time in 1988; rebranded as Deep Blue, it beat the reigning human world chess champion in 1997 (see Deep Blue versus Garry Kasparov).

Estimates of when computers would exceed humans at chess

Year prediction made | Predicted year | Number of years | Predictor | Contemporaneous source
1957 | 1967 or sooner | 10 or less | Herbert A. Simon, economist | –
1990 | 2000 or sooner | 10 or less | Ray Kurzweil, futurist | The Age of Intelligent Machines

Go

AlphaGo defeated a European Go champion in October 2015, and Lee Sedol, one of the world's top players, in March 2016 (see AlphaGo versus Lee Sedol). According to Scientific American and other sources, most observers had expected superhuman computer Go performance to be at least a decade away.

Estimates of when computers would exceed humans at Go

Year prediction made | Predicted year | Number of years | Predictor | Affiliation | Contemporaneous source
1997 | 2100 or later | 103 or more | Piet Hut, physicist and Go fan | Institute for Advanced Study | New York Times
2007 | 2017 or sooner | 10 or less | Feng-Hsiung Hsu, Deep Blue lead | Microsoft Research Asia | IEEE Spectrum
2014 | 2024 | 10 | Rémi Coulom, computer Go programmer | CrazyStone | Wired

Human-level artificial general intelligence (AGI)

AI pioneer and economist Herbert A. Simon inaccurately predicted in 1965: "Machines will be capable, within twenty years, of doing any work a man can do". Similarly, in 1970 Marvin Minsky wrote that "Within a generation... the problem of creating artificial intelligence will substantially be solved."

Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when AGI would arrive was 2040 to 2050, depending on the poll.

The Grace poll around 2016 found results varied depending on how the question was framed. Respondents asked to estimate "when unaided machines can accomplish every task better and more cheaply than human workers" gave an aggregated median answer of 45 years and a 10% chance of it occurring within 9 years. Other respondents asked to estimate "when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers" estimated a median of 122 years and a 10% probability of 20 years. The median response for when "AI researcher" could be fully automated was around 90 years. No link was found between seniority and optimism, but Asian researchers were much more optimistic than North American researchers on average; Asians predicted 30 years on average for "accomplish every task", compared with the 74 years predicted by North Americans.

Estimates of when AGI will arrive

Year prediction made | Predicted year | Number of years | Predictor | Contemporaneous source
1965 | 1985 or sooner | 20 or less | Herbert A. Simon | The Shape of Automation for Men and Management
1993 | 2023 or sooner | 30 or less | Vernor Vinge, science fiction writer | "The Coming Technological Singularity"
1995 | 2040 or sooner | 45 or less | Hans Moravec, robotics researcher | Wired
2008 | Never / distant future | – | Gordon E. Moore, inventor of Moore's Law | IEEE Spectrum
2017 | 2029 | 12 | Ray Kurzweil | Interview

Applications of artificial intelligence

Artificial intelligence is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. Artificial intelligence has been used in applications throughout industry and academia. Within the field of artificial intelligence, there are multiple subfields. The subfield of machine learning has been used for various scientific and commercial purposes, including language translation, image recognition, decision-making, credit scoring, and e-commerce. In recent years, there have been massive advancements in the field of generative artificial intelligence, which uses generative models to produce text, images, videos, or other forms of data. This article describes applications of AI in different sectors.

Agriculture

In agriculture, AI has been proposed as a way for farmers to identify areas that need irrigation, fertilization, or pesticide treatments to increase yields, thereby improving efficiency. AI has been used to attempt to classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and optimize irrigation.

Architecture and design

A sketch being converted via AI to a 3D mesh of a similar-looking building

Artificial intelligence in architecture is the use of artificial intelligence in automation, design, and planning in the architectural process or in assisting human skills in the field of architecture.

AI has been used by some architects for design, and has been proposed as a way to automate planning and routine tasks in the field.

Business

A 2023 study found that generative AI increased productivity by 15% in contact centers. Another 2023 study found it increased productivity by up to 40% in writing tasks. An August 2025 review by MIT found that, of surveyed companies, 95% did not report any improvement in revenue from the use of AI. A September 2025 article in the Harvard Business Review describes how increased use of AI does not automatically lead to increases in revenue or actual productivity. Referring to "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task", the article coins the term "workslop". Per studies done in collaboration with the Stanford Social Media Lab, workslop does not improve productivity and undermines trust and collaboration among colleagues.

Computer science

Programming assistance

AI-assisted software development

AI can be used for real-time code completion, chat, and automated test generation. These tools are typically integrated with editors and IDEs as plugins. AI-assisted software development systems differ in functionality, quality, speed, and approach to privacy. Creating software primarily via AI is known as "vibe coding". Code created or suggested by AI can be incorrect or inefficient. The use of AI-assisted coding can potentially speed up software development, but it can also slow the process down by creating more work during debugging and testing. The rush to adopt AI technology prematurely can also incur additional technical debt. AI also requires additional consideration and careful review for cybersecurity, since AI coding software is trained on a wide range of code of inconsistent quality and often replicates poor practices.

Neural network design

An overview of an AI agent and its core capabilities (memory, tool usage, actions, and ability to plan)

AI can be used to create other AIs. For example, around November 2017, Google's AutoML project to evolve new neural net topologies created NASNet, a system optimized for ImageNet and COCO. NASNet's performance exceeded all previously published performance on ImageNet.

Quantum computing

Research and development of quantum computers has drawn on machine learning algorithms. For example, a prototype photonic quantum memristive device has been demonstrated for neuromorphic computing and artificial neural networks, and quantum materials are being explored for a variety of potential neuromorphic-computing applications. The use of quantum machine learning for quantum simulators has been proposed for solving physics and chemistry problems.

Historical contributions

AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions, originally developed in AI laboratories, have been adopted by mainstream computer science and are no longer considered AI.

Customer service

Human resources

Another application of AI is in human resources. AI can screen resumes and rank candidates based on their qualifications, predict candidate success in given roles, and automate repetitive communication tasks via chatbots.

Online and telephone customer service

An automated online assistant providing customer service on a web page

AI underlies avatars (automated online assistants) on web pages. It can reduce operation and training costs. Pypestream uses automated customer service for its mobile application to streamline communication with customers.

A Google app analyzes language and converts speech into text. The platform can identify angry customers through their language and respond appropriately. Amazon uses a chatbot for customer service that can perform tasks like checking the status of an order, cancelling orders, offering refunds and connecting the customer with a human representative. Generative AI (GenAI), such as ChatGPT, is increasingly used in business to automate tasks and enhance decision-making.

Hospitality

In the hospitality industry, AI is used to reduce repetitive tasks, analyze trends, interact with guests, and predict customer needs. AI hotel services come in the form of a chatbot, application, virtual voice assistant and service robots.

Education

In educational institutions, AI has been used to automate routine tasks like attendance tracking, grading and marking. AI tools have been used to attempt to monitor student progress and analyze learning behaviors, with the intention of facilitating interventions for students facing academic problems.

Energy and environment

Energy system

The U.S. Department of Energy wrote in an April 2024 report that AI may have applications in modeling power grids, reviewing federal permits with large language models, predicting levels of renewable energy production, and improving the planning process for electric vehicle charging networks. Other studies have suggested that machine learning can be used for energy consumption prediction and scheduling, e.g. to help with renewable energy intermittency management (see also: smart grid and climate change mitigation in the power grid).

Environmental monitoring

Autonomous ships that monitor the ocean, AI-driven satellite data analysis, passive acoustics or remote sensing and other applications of environmental monitoring make use of machine learning.

For example, "Global Plastic Watch" is an AI-based satellite monitoring-platform for analysis/tracking of plastic waste sites to help prevention of plastic pollution – primarily ocean pollution – by helping identify who and where mismanages plastic waste, dumping it into oceans.

Early-warning systems

Machine learning can be used to spot early-warning signs of disasters and environmental issues, possibly including natural pandemics, earthquakes, landslides, heavy rainfall, long-term water supply vulnerability, tipping points of ecosystem collapse, cyanobacterial bloom outbreaks, and droughts.

Economic and social challenges

The University of Southern California launched the Center for Artificial Intelligence in Society, with the goal of using AI to address problems such as homelessness. Stanford researchers use AI to analyze satellite images to identify high poverty areas.

Entertainment and media

Media

Image restoration

AI applications analyze media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision.

Typical scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video for recognizing scenes, objects, or faces. AI-based media analysis can facilitate media search, the creation of descriptive keywords for content, content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech-to-text for archival or other purposes, and the detection of logos, products, or celebrity faces for ad placement.

Deep-fakes

Deep-fakes can be used for comedic purposes but are better known for fake news and hoaxes.

Deepfakes can portray individuals in harmful or compromising situations, causing significant reputational damage and emotional distress, especially when the content is defamatory or violates personal ethics. While defamation and false light laws offer some recourse, their focus on false statements rather than fabricated images or videos often leaves victims with limited legal protection and a challenging burden of proof.

In January 2016, the Horizon 2020 program financed the InVID Project to help journalists and researchers detect fake documents, with the tools made available as browser plugins.

In June 2016, researchers from the visual computing group of the Technical University of Munich and from Stanford University developed Face2Face, a program that animates photographs of faces, mimicking the facial expressions of another person.

In September 2018, U.S. Senator Mark Warner proposed to penalize social media companies that allow sharing of deep-fake documents on their platforms.

In 2018, Darius Afchar and Vincent Nozick found a way to detect faked content by analyzing the mesoscopic properties of video frames. DARPA gave $68 million to work on deep-fake detection.

Audio deepfakes and AI software capable of detecting deep-fakes and cloning human voices have been developed.

Video surveillance analysis and manipulated media detection

Artificial intelligence for video surveillance utilizes computer software programs that analyze the audio and images from video surveillance cameras in order to recognize humans, vehicles, objects, and events. Security contractors program the software to define restricted areas within the camera's view (such as a fenced-off area, or a parking lot but not the sidewalk or public street outside the lot) and to define times of day (such as after the close of business) for the property being protected by the camera surveillance. The artificial intelligence sends an alert if it detects a trespasser breaking the "rule" that no person is allowed in that area during that time of day.
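
The "rule" logic described above is essentially a conjunction of a spatial test and a temporal test applied to each detection. A schematic sketch follows; the zone shape, time window, and field names are illustrative assumptions, not any vendor's actual API.

```python
from datetime import time

# Illustrative rule: no person allowed inside the fenced area after hours.
RESTRICTED_ZONE = (100, 100, 400, 300)   # x1, y1, x2, y2 in image coordinates
AFTER_HOURS = (time(18, 0), time(6, 0))  # 6 pm to 6 am, wrapping midnight

def in_zone(x: float, y: float) -> bool:
    x1, y1, x2, y2 = RESTRICTED_ZONE
    return x1 <= x <= x2 and y1 <= y <= y2

def after_hours(t: time) -> bool:
    start, end = AFTER_HOURS
    return t >= start or t <= end        # window wraps past midnight

def should_alert(detection_class: str, x: float, y: float, t: time) -> bool:
    """Fire an alert only when a detected person is inside the
    restricted zone during the restricted time window."""
    return detection_class == "person" and in_zone(x, y) and after_hours(t)

print(should_alert("person", 250, 200, time(23, 30)))  # True: night trespasser
print(should_alert("person", 250, 200, time(12, 0)))   # False: business hours
```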

AI algorithms have been used to detect deepfake videos.

Video production

Artificial intelligence is also starting to be used in video production, with tools and software being developed that utilize generative AI to create new video or alter existing video. Some of the major tools currently used in these processes are DALL-E, Midjourney, and Runway. Waymark Studios utilized the tools offered by both DALL-E and Midjourney to create a fully AI-generated film called The Frost in the summer of 2023. Waymark Studios is experimenting with using these AI tools to generate advertisements and commercials for companies in mere seconds. Yves Bergquist, a director of the AI & Neuroscience in Media Project at USC's Entertainment Technology Center, says post-production crews in Hollywood are already using generative AI, and predicts that in the future more companies will embrace this new technology.

Music

AI has been used to compose music of various genres.

David Cope created an AI called Emily Howell that managed to become well known in the field of algorithmic computer music. The algorithm behind Emily Howell is registered as a US patent.

In 2012, the AI Iamus created the first complete classical album composed by a computer.

AIVA (Artificial Intelligence Virtual Artist) composes symphonic music, mainly classical music for film scores. It achieved a world first by becoming the first virtual composer to be recognized by a music professional association.

Melomics creates computer-generated music for stress and pain relief.

The Watson Beat uses reinforcement learning and deep belief networks to compose music from a simple seed melody and a selected style. The software was open sourced, and musicians such as Taryn Southern collaborated with the project to create music.

South Korean singer Hayeon's debut song, "Eyes on You", was composed using AI, supervised by human composers, including NUVO.

Writing and reporting

Narrative Science sells computer-generated news and reports. It summarizes sporting events based on statistical data from the game. It also creates financial reports and real estate analyses. Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football.

Yseop uses AI to turn structured data into natural language comments and recommendations. Yseop writes financial reports, executive summaries, personalized sales or marketing documents, and more in multiple languages, including English, Spanish, French, and German.

TALE-SPIN made up stories similar to the fables of Aesop. The program started with a set of characters who wanted to achieve certain goals. Mark Riedl and Vadim Bulitko asserted that the essence of storytelling is experience management, or "how to balance the need for a coherent story progression with user agency, which is often at odds".

While AI storytelling focuses on story generation (character and plot), story communication also received attention. In 2002, researchers developed an architectural framework for narrative prose generation. They faithfully reproduced text variety and complexity on stories such as Little Red Riding Hood. In 2016, a Japanese AI co-wrote a short story and almost won a literary prize.

South Korean company Hanteo Global uses a journalism bot to write articles.

Literary authors are also exploring uses of AI. An example is David Jhave Johnston's work ReRites (2017–2019), where the poet created a daily rite of editing the poetic output of a neural network to create a series of performances and publications.

Sports writing

In 2010, artificial intelligence used baseball statistics to automatically generate news articles. This was launched by The Big Ten Network using software from Narrative Science.

Unable to cover every Minor League Baseball game with its staff, the Associated Press collaborated with Automated Insights in 2016 to create game recaps automated by artificial intelligence.

UOL in Brazil expanded the use of AI in its writing. Rather than just generating news stories, they programmed the AI to include commonly searched words on Google.

El País, a Spanish news site that covers many topics including sports, allows users to comment on each news article. It uses the Perspective API to moderate these comments; if the software deems a comment to contain toxic language, the commenter must modify it in order to publish it.

A local Dutch media group, NDC, used AI to create automatic coverage of amateur soccer, set to cover 60,000 games in a single season. NDC partnered with United Robots to create this algorithm and provide coverage that would never have been possible before without an extremely large team.

In 2023, Lede AI was used to take scores from high school football games and automatically generate stories for local newspapers. This was met with significant criticism from readers for the very robotic diction of the published pieces. With some games described as "a close encounter of the athletic kind," readers were not pleased and let the publishing company, Gannett, know on social media. Gannett has since halted its use of Lede AI until it comes up with a solution for what it calls an experiment.

Wikipedia

An AI-generated draft article being nominated for speedy deletion under the G15 criterion

Artificial intelligence is used in Wikimedia projects for the purpose of developing those projects.

Various articles on Wikipedia have been created entirely with or with the help of artificial intelligence. AI-generated content can be detrimental to Wikipedia when unreliable or containing fake citations.

To address the issue of low-quality AI-generated content, the Wikipedia community created in 2023 a WikiProject named AI Cleanup. In August 2025, Wikipedia adopted a policy that allowed editors to nominate suspected AI-generated articles for speedy deletion.

Millions of Wikipedia's articles have been edited by bots, which, however, are usually not artificial intelligence software. Many AI platforms use Wikipedia data, mainly for training machine learning applications. There is research and development of various artificial intelligence applications for Wikipedia, such as identifying outdated sentences, detecting covert vandalism, or recommending articles and tasks to new editors.

Machine translation (see above) has also been used for translating Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving articles in the future. A content translation tool allows editors of some Wikipedias to more easily translate articles across several select languages.

Video games

In video games, AI is routinely used to generate behavior in non-player characters (NPCs). In addition, AI is used for pathfinding. Games with less typical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010). AI is also used in Alien: Isolation (2014) to control the actions the Alien will perform next.

Games have been a major application of AI's capabilities since the 1950s. In the 21st century, AIs have beaten human players in many games, including chess (Deep Blue), Jeopardy! (Watson), Go (AlphaGo), poker (Pluribus and Cepheus), E-sports (StarCraft), and general game playing (AlphaZero and MuZero).

Kuki AI is a set of chatbots and other apps which were designed for entertainment and as a marketing tool.

Visual images

A "cyborg elf" generated by Stable Diffusion

The first AI art program, called AARON, was developed by Harold Cohen in 1968 with the goal of being able to code the act of drawing. It started by creating simple black and white drawings, and later progressed to painting using special brushes and dyes chosen by the program itself without mediation from Cohen.

AI platforms such as DALL-E, Stable Diffusion, Imagen, and Midjourney have been used for generating visual images from inputs such as text or other images. Some AI tools allow users to input images and output changed versions of that image, such as to display an object or product in different environments. AI image models can also attempt to replicate the specific styles of artists, and can add visual complexity to rough sketches.

AI has been used to generate quantitative analysis of existing digital art collections. Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art. While distant viewing includes the analysis of large collections, close reading involves one piece of artwork.

Computer animation

In 2023, Netflix Japan's use of AI to generate background images for the short film The Dog & the Boy was met with backlash online.

Finance

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking began in 1987 when Security Pacific National Bank launched a fraud prevention task-force to counter the unauthorized use of debit cards.

Banks use AI to organize operations for bookkeeping, investing in stocks, and managing properties. AI can adapt to changes during non-business hours. AI is used to combat fraud and financial crimes by monitoring behavioral patterns for any abnormal changes or anomalies.

The use of AI in applications such as online trading and decision-making has changed major economic theories. For example, AI-based buying and selling platforms estimate personalized demand and supply curves, thus enabling individualized pricing. AI systems reduce information asymmetry in the market and thus make markets more efficient. The application of artificial intelligence in the financial industry can alleviate the financing constraints of non-state-owned enterprises, especially for smaller and more innovative enterprises.

Trading and investment

Algorithmic trading involves using AI systems to make trading decisions at speeds orders of magnitude greater than any human is capable of, making millions of trades in a day without human intervention. Such high-frequency trading represents a fast-growing sector. Many banks, funds, and proprietary trading firms now have AI-managed portfolios. Automated trading systems are typically used by large institutional investors, but smaller firms also trade with their own AI systems.

Large financial institutions use AI to assist with their investment practices. BlackRock's AI engine, Aladdin, is used both within the company and by clients to help with investment decisions. Its functions include the use of natural language processing to analyze text such as news, broker reports, and social media feeds. It then gauges the sentiment on the companies mentioned and assigns a score. Banks such as UBS and Deutsche Bank use SQREEM (Sequential Quantum Reduction and Extraction Model) to mine data to develop consumer profiles and match them with wealth management products.
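
The text-to-score pipeline described for systems like Aladdin can be caricatured in a few lines: tokenize the text, look words up in a sentiment lexicon, and aggregate into a score. The lexicon and aggregation below are toy assumptions for illustration, not BlackRock's actual method.

```python
# Toy sentiment lexicon; real systems use far richer NLP models.
LEXICON = {"beat": 1, "growth": 1, "upgrade": 1,
           "miss": -1, "lawsuit": -1, "downgrade": -1}

def sentiment_score(text: str) -> float:
    """Average the lexicon values of the words that appear;
    positive suggests bullish-sounding text, negative bearish."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

news = "Analysts upgrade the stock after earnings beat despite lawsuit risk"
print(sentiment_score(news))  # (1 + 1 - 1) / 3 = 0.33...
```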

Underwriting

Online lender Upstart uses machine learning for underwriting.

ZestFinance's Zest Automated Machine Learning (ZAML) platform is used for credit underwriting. The platform uses machine learning to analyze data, including purchase transactions and how a customer fills out a form, to score borrowers. It is useful for assigning credit scores to those with limited credit histories.
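ZAML itself is proprietary, so as a generic sketch of machine-learning credit scoring, a logistic regression over invented applicant features might look like the following (all features, data, and labels are hypothetical):

    # Generic credit-scoring sketch using logistic regression; the real ZAML
    # platform is proprietary, so everything below is invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per applicant: [income (k$), debt ratio, years of history]
    X = np.array([
        [45, 0.40, 1.0],
        [85, 0.15, 8.0],
        [30, 0.55, 0.5],
        [60, 0.25, 4.0],
        [25, 0.60, 0.2],
        [95, 0.10, 10.0],
    ])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = repaid, 0 = defaulted (invented labels)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    applicant = np.array([[50, 0.30, 2.0]])
    score = model.predict_proba(applicant)[0, 1]  # estimated probability of repayment
    print(f"estimated repayment probability: {score:.2f}")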

Audit

AI makes continuous auditing possible. Potential benefits include reducing audit risk, increasing the level of assurance, and reducing audit duration.

Continuous auditing with AI allows real-time monitoring and reporting of financial activities and provides businesses with timely insights that can lead to quick decision-making.

Anti–money laundering

AI software such as LaundroGraph, which uses contemporary (if suboptimal) datasets, could be used for anti–money laundering (AML).

History

In the 1980s, AI started to become prominent in finance as expert systems were commercialized. For example, DuPont created 100 expert systems, which helped it save almost $10 million per year. One of the first systems was the Pro-trader expert system, which predicted the 87-point drop in the Dow Jones Industrial Average in 1986. "The major functions of the system were to monitor premiums in the market, determine the optimum investment strategy, execute transactions when appropriate and modify the knowledge base through a learning mechanism."

One of the first expert systems to help with financial planning was PlanPower, together with the Client Profiling System, created by Applied Expert Systems (APEX) and launched in 1986. It helped create personal financial plans.

In the 1990s, AI was applied to fraud detection. In 1993, the FinCEN Artificial Intelligence System (FAIS) was launched. It was able to review over 200,000 transactions per week, and over two years it helped identify 400 potential cases of money laundering worth a total of $1 billion. These expert systems were later replaced by machine learning systems.

Outside finance, the late 1980s and early 1990s also saw expert systems used in technical and environmental domains. For example, researchers built a fishway design advisor to recommend fish passage structures under varying hydraulic and biological conditions using the VP-Expert shell. Transportation researchers applied the same shell to balance airport capacity with noise-mitigation plans. In agriculture, a potato insect expert system (PIES) supported pest management decisions for Colorado potato beetle. The U.S. Environmental Protection Agency's CORMIX system for modeling pollutant discharges combined rules with Fortran hydrodynamic models.

AI can enhance entrepreneurial activity, and AI is one of the most dynamic areas for start-ups, with significant venture capital flowing into AI.

Regulatory developments in the EU

In the European Union, the Artificial Intelligence Act (Regulation (EU) 2024/1689) classifies several finance‑sector uses of AI as "high‑risk", including systems used to evaluate the creditworthiness of natural persons or to establish a credit score and AI used for risk assessment and pricing in life or health insurance. These systems must meet requirements on risk management, data governance, technical documentation and logging, transparency, and human oversight. The Act's obligations are phased in: prohibitions and AI‑literacy rules apply from 2 February 2025, governance and most GPAI duties from 2 August 2025, the bulk of obligations from 2 August 2026, and certain safety‑component high‑risk obligations from 2 August 2027.

Health

Healthcare

X-ray of a hand, with automatic calculation of bone age by computer software
A patient-side surgical arm of Da Vinci Surgical System

AI in healthcare is often used for classification, whether to evaluate a CT scan or electrocardiogram or to identify high-risk patients for population health. AI is also being applied to the high-cost problem of dosing; one study suggested that AI could save $16 billion. In 2016, a study reported an AI-derived formula for the proper dose of immunosuppressant drugs to give to transplant patients. Current research indicates that non-cardiac vascular illnesses are also being treated with artificial intelligence (AI). For certain disorders, AI algorithms can aid in diagnosis, treatment recommendation, outcome prediction, and patient progress tracking. As AI technology advances, it is anticipated to become more significant in the healthcare industry.

The early detection of diseases like cancer is made possible by AI algorithms, which diagnose diseases by analyzing complex sets of medical data. For example, the IBM Watson system might be used to comb through massive data such as medical records and clinical trials to help diagnose a problem. Microsoft's AI project Hanover helps doctors choose cancer treatments from among the more than 800 medicines and vaccines. Its goal is to memorize all the relevant papers to predict which (combinations of) drugs will be most effective for each patient. Myeloid leukemia is one target. Another study reported on an AI that was as good as doctors in identifying skin cancers. Another project monitors multiple high-risk patients by asking each patient questions based on data acquired from doctor/patient interactions. In one study done with transfer learning, an AI diagnosed eye conditions similar to an ophthalmologist and recommended treatment referrals.

Another study demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel; the stitching was judged better than a surgeon's.

Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in concept processing technology in EMR software.

Other healthcare tasks thought suitable for an AI that are in development include:

  • Screening
  • Companion robots for elder care
  • Drug creation (e.g. by identifying candidate drugs and by using existing drug screening data such as in life extension research)
  • Clinical training
  • Identifying genomic pathogen signatures of novel pathogens or identifying pathogens via physics-based fingerprints (including pandemic pathogens)
  • Helping to link genes to their functions, otherwise analyzing genes, and identifying novel biological targets
  • Helping to develop biomarkers
  • Helping to tailor therapies to individuals in personalized/precision medicine

Workplace health and safety

AI-enabled chatbots decrease the need for humans to perform basic call center tasks, and machine learning in sentiment analysis can spot fatigue in order to prevent overwork.

Decision support systems can potentially prevent industrial disasters and make disaster response more efficient. For manual workers in material handling, predictive analytics has been proposed to reduce musculoskeletal injury.

AI can attempt to process workers' compensation claims. AI has been proposed for detection of accident near misses, which are underreported.

Biochemistry

Machine learning has been used for drug design, drug discovery and development, drug repurposing, improving pharmaceutical productivity, and clinical trials.

Computer-planned synthesis via computational reaction networks, described as a platform that combines "computational synthesis with AI algorithms to predict molecular properties", has been used to plan drug syntheses and to develop routes for recycling 200 industrial waste chemicals into important drugs and agrochemicals (chemical synthesis design). It has also been used to explore the origins of life on Earth.

Deep learning has been used with databases in a 46-day process to design, synthesize, and test a drug that inhibits the enzyme encoded by a particular gene, DDR1. DDR1 is involved in cancers and fibrosis, which is one reason high-quality datasets existed to enable these results.

The AI program AlphaFold 2 can determine the 3D structure of a (folded) protein in hours rather than the months required by earlier automated approaches and was used to provide the likely structures of all proteins in the human body and essentially all proteins known to science (more than 200 million).

Language processing

Language translation

Speech translation technology attempts to convert one language's spoken words into another language. This potentially reduces language barriers in global commerce and cross-cultural exchange, enabling speakers of various languages to communicate with one another.

AI has been used to automatically translate spoken language and textual content in products such as Microsoft Translator, Google Translate, and DeepL Translator. Additionally, research and development are in progress to decode and conduct animal communication.

Meaning is conveyed not only by text, but also through usage and context (see semantics and pragmatics). As a result, the two primary approaches to machine translation are statistical machine translation (SMT) and neural machine translation (NMT). The older statistical method uses probabilistic models and specific algorithms to forecast the most probable output, whereas NMT employs neural networks to produce translations that better account for context.
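As a concrete example of NMT in practice, the open-source Hugging Face transformers library can run a publicly available translation model; "Helsinki-NLP/opus-mt-en-fr" is one such English-to-French model, chosen here only for illustration:

    # Sketch of neural machine translation via the Hugging Face transformers
    # library; any comparable publicly available NMT model would do.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
    result = translator("Machine translation reduces language barriers.")
    print(result[0]["translation_text"])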

Law and government

Government

AI facial recognition systems are used for mass surveillance, notably in China. In 2019, Bengaluru, India deployed AI-managed traffic signals. This system uses cameras to monitor traffic density and adjust signal timing based on the interval needed to clear traffic.

Law

AI is a mainstay of law-related professions. Algorithms and machine learning do some tasks previously done by entry-level lawyers. While its use is common, it is not expected to replace most work done by lawyers in the near future.

The electronic discovery industry uses machine learning to reduce manual searching.

Law enforcement has begun using facial recognition systems (FRS) to identify suspects from visual data. FRS results have proven to be more accurate than eyewitness results. Furthermore, FRS has been shown to have a much better ability than human participants to identify individuals when video clarity and visibility are low.

COMPAS is a commercial system used by U.S. courts to assess the likelihood of recidivism.

One concern relates to algorithmic bias: AI programs may become biased after processing data that exhibits bias. ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than that of white defendants.

In 2019, the city of Hangzhou, China established a pilot program of an artificial intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims. Parties appear before the court via videoconference, and AI evaluates the evidence presented and applies relevant legal standards.

Manufacturing

Sensors

Artificial intelligence has been combined with digital spectrometry by IdeaCuria Inc. to enable applications such as at-home water quality monitoring.

Toys and games

In the 1990s, early artificial intelligence tools controlled Tamagotchis and Giga Pets, as well as the first widely released robot, Furby. Aibo was a domestic robot in the form of a robotic dog with intelligent features and autonomy.

Mattel created an assortment of AI-enabled toys that "understand" conversations, give intelligent responses, and learn.

Oil and gas

Oil and gas companies have used artificial intelligence tools to automate functions, foresee equipment issues, and increase oil and gas output.

Military

Various countries are deploying AI military applications. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.

AI has been used in military operations in Iraq, Syria, Israel and Ukraine.

Internet and e-commerce

Web feeds and posts

Machine learning has been used for recommendation systems in determining which posts should show up in social media feeds. Various types of social media analysis also make use of machine learning and there is research into its use for (semi-)automated tagging/enhancement/correction of online misinformation and related filter bubbles.

AI has been used to customize shopping options and personalize offers. Online gambling companies have used AI for targeting gamblers.

Intelligent personal assistants use AI to attempt to respond to natural language requests. Siri, released in 2010 for Apple smartphones, popularized the concept.

Bing Chat has used artificial intelligence as part of Microsoft's Bing search engine.

Spam filtering

Machine learning can be used to combat spam, scams, and phishing. It can scrutinize the contents of spam and phishing attacks to attempt to identify malicious elements. Some models built via machine learning algorithms have over 90% accuracy in distinguishing between spam and legitimate emails. These models can be refined using new data and evolving spam tactics. Machine learning also analyzes traits such as sender behavior, email header information, and attachment types, potentially enhancing spam detection.
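As a minimal sketch of the kind of model described above (the training messages and labels are invented), a naive Bayes text classifier can be assembled with scikit-learn:

    # Minimal naive Bayes spam classifier, illustrating the kind of text model
    # described above. Training messages and labels are invented for the example.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "Win a free prize now, click here",
        "Meeting moved to 3pm, see agenda attached",
        "Cheap loans, act fast, limited offer",
        "Can you review my draft before Friday?",
    ]
    labels = ["spam", "ham", "spam", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["Free offer, click now"]))  # matches spam vocabulary
    # Messages whose words were never seen in training fall back to the class prior.
    print(model.predict(["Lunch tomorrow?"]))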

Facial recognition and image labeling

AI has been used in facial recognition systems. Some examples are Apple's Face ID and Android's Face Unlock, which are used to secure mobile devices.

China has used facial recognition and artificial intelligence technology in Xinjiang. In 2017, reporters visiting the region found surveillance cameras installed every hundred meters or so in several cities, as well as facial recognition checkpoints at areas like gas stations, shopping centers, and mosque entrances. Human rights groups have criticized the Chinese government for using artificial intelligence facial recognition technology in political suppression.

The Netherlands has deployed facial recognition and artificial intelligence technology since 2016. The database of the Dutch police contains over 2.2 million pictures of 1.3 million Dutch citizens, about 8% of the population. In the Netherlands, facial recognition is not used by the police on municipal CCTV.

Image labeling has been used by Google Image Labeler to detect products in photos and to allow people to search based on a photo. Image labeling has also been demonstrated to generate speech to describe images to blind people.

Scientific research

Evidence of general impacts

In April 2024, the Scientific Advice Mechanism to the European Commission published advice including a comprehensive evidence review of the opportunities and challenges posed by artificial intelligence in scientific research.

As benefits, the evidence review highlighted:

  • its role in accelerating research and innovation
  • its capacity to automate workflows
  • its potential to enhance the dissemination of scientific work

As challenges:

  • limitations and risks around transparency, reproducibility and interpretability
  • poor performance (inaccuracy)
  • risk of harm through misuse or unintended use
  • societal concerns including the spread of misinformation and increasing inequalities

Archaeology, history and imaging of sites

Machine learning can help to restore and attribute ancient texts. It can also help to index texts, for example to enable better and easier searching and classification of fragments.

Artificial intelligence can also be used to investigate genomes to uncover genetic history, such as interbreeding between archaic and modern humans; for example, the past existence of a "ghost" population, neither Neanderthal nor Denisovan, was inferred in this way.

It can also be used for "non-invasive and non-destructive access to internal structures of archaeological remains".

Physics

A deep learning system was reported to learn intuitive physics from visual data (of virtual 3D environments), based on an unpublished approach inspired by studies of visual cognition in infants. Other researchers have developed a machine learning algorithm that could discover sets of basic variables of various physical systems and predict the systems' future dynamics from video recordings of their behavior. In the future, such approaches might be used to automate the discovery of physical laws of complex systems.

Materials science

In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that the AI system GNoME had documented over 2 million new materials. GNoME uses deep learning techniques to examine potential material structures and identify stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, with a success rate of 71%. The data on the newly discovered materials are publicly available through the Materials Project database.

Reverse engineering

Machine learning is used in diverse types of reverse engineering. For example, machine learning has been used to reverse engineer a composite material part, enabling unauthorized production of high quality parts, and for quickly understanding the behavior of malware. It can be used to reverse engineer artificial intelligence models. It can also design components by engaging in a type of reverse engineering of not-yet-existent virtual components, such as inverse molecular design for particular desired functionality or protein design for pre-specified functional sites. Biological network reverse engineering could model interactions in a human-understandable way, e.g. based on time-series data of gene expression levels.

Astronomy, space activities and ufology

Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights" for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.

In the search for extraterrestrial intelligence (SETI), machine learning has been used in attempts to identify artificially generated electromagnetic waves in available data – such as real-time observations – and other technosignatures, e.g. via anomaly detection. In ufology, the SkyCAM-5 project headed by Prof. Hakan Kayal and the Galileo Project headed by Avi Loeb use machine learning to attempt to detect and classify types of UFOs. The Galileo Project also seeks to detect two further types of potential extraterrestrial technological signatures with the use of AI: 'Oumuamua-like interstellar objects, and non-manmade artificial satellites.

Machine learning can also be used to produce datasets of spectral signatures of molecules that may be involved in the atmospheric production or consumption of particular chemicals – such as phosphine possibly detected on Venus – which could prevent misassignments and, if accuracy is improved, be used in future detections and identifications of molecules on other planets.

Chemistry and biology

There is research about which types of computer-aided chemistry would benefit from machine learning. A deep learning AI-based process has been developed that uses genome databases to design novel proteins based on evolutionary algorithms. Machine learning has also been used for protein design with pre-specified functional sites, predicting molecular properties, and exploring large chemical/reaction spaces.
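As a small sketch of property prediction over chemical space, molecular descriptors computed with the open-source RDKit library can serve as features for a simple regression; the molecules and target values below are illustrative (approximate boiling points of simple alcohols, in degrees Celsius):

    # Sketch: molecular descriptors as features for property prediction, using
    # the open-source RDKit library. Molecules and targets are simple examples.
    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from sklearn.linear_model import LinearRegression

    smiles = ["CCO", "CCCO", "CCCCO", "CCCCCO"]   # simple straight-chain alcohols
    target = [78.4, 97.2, 117.7, 137.9]           # approximate boiling points (C)

    # Featurize each molecule with two classic descriptors.
    def featurize(smi):
        mol = Chem.MolFromSmiles(smi)
        return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol)]

    X = [featurize(s) for s in smiles]
    model = LinearRegression().fit(X, target)
    print(model.predict([featurize("CCCCCCO")]))  # extrapolate to 1-hexanol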

Using drug discovery AI algorithms, researchers generated 40,000 potential chemical weapon candidates, a demonstration intended to help regulators prevent such chemicals from being synthesized for real harm.

There are various types of applications for machine learning in decoding human biology, such as helping to map gene expression patterns to functional activation patterns or identifying functional DNA motifs. It is widely used in genetic research. There also is some use of machine learning in synthetic biology, disease biology, nanotechnology (e.g. nanostructured materials and bionanotechnology), and materials science.

Security and surveillance

Cyber security

Cyber security companies are adopting neural networks, machine learning, and natural language processing to improve their systems.

Applications of AI in cyber security include:

  • Network protection: Machine learning improves intrusion detection systems by broadening the search beyond previously identified threats.
  • Endpoint protection: Attacks such as ransomware can be thwarted by learning typical malware behaviors.
    • AI-related cyber security application cases vary in both benefit and complexity. Security features such as Security Orchestration, Automation, and Response (SOAR) and Extended Endpoint Detection and Response (XDR) offer significant benefits for businesses, but require significant integration and adaptation efforts.
  • Application security: AI can help counter attacks such as server-side request forgery, SQL injection, cross-site scripting, and distributed denial-of-service.
    • AI technology can also be utilized to improve system security and safeguard privacy. Randrianasolo (2012) suggested a security system based on artificial intelligence that can recognize intrusions and adapt to perform better. To improve cloud computing security, Sahil (2015) created a user-profile system for the cloud environment using AI techniques.
  • Suspect user behavior: Machine learning can identify fraud or compromised applications as they occur, as sketched after this list.
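The following is a minimal sketch of the suspect-user-behavior case, using an isolation forest from scikit-learn; the behavioral features (logins per hour, bytes sent, failed logins) and all data are invented for illustration:

    # Sketch of anomaly detection over user-behavior features with an isolation
    # forest; the feature set and readings below are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Normal behavior: modest login rates, traffic volume, and failed-login counts.
    normal = rng.normal(loc=[2.0, 50.0, 0.5], scale=[0.5, 10.0, 0.5], size=(200, 3))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspicious = np.array([[40.0, 900.0, 25.0]])  # burst of logins, traffic, failures
    print(detector.predict(suspicious))           # -1 flags an anomaly, 1 is normal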

Transportation and logistics

Automotive and public transit

Side view of a Waymo-branded self-driving car

Transportation's complexity means that training an AI in a real-world driving environment is usually impractical; instead, training is achieved through simulator-based testing. AI-based systems control functions such as braking, lane changing, collision prevention, navigation, and mapping.

Some autonomous vehicles do not allow human drivers (they have no steering wheels or pedals).

There are prototypes of autonomous public transport vehicles, such as electric mini-buses, as well as autonomous rail transport in operation, and autonomous delivery vehicles, including delivery robots.

Autonomous trucks are in the testing phase. The UK government passed legislation to begin testing of autonomous truck platoons, in which a group of autonomous trucks follows closely behind a lead vehicle, in 2018. The German corporation Daimler is testing its Freightliner Inspiration.

AI has been used to optimize traffic management, which can reduce wait times, energy use, and emissions.

Artificially intelligent traffic lights combine cameras with radar and ultrasonic acoustic location sensors, using predictive algorithms to make traffic flow better.
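As a toy illustration of adaptive signal timing (not a description of any deployed system; the queue counts are invented sensor readings), green time can be allocated in proportion to detected queue lengths:

    # Toy adaptive traffic-signal timing: split a fixed green-time budget across
    # approaches in proportion to detected queue lengths. Purely illustrative.

    def allocate_green_time(queues, cycle_seconds=90, minimum=5):
        """Split cycle_seconds of green time proportionally to queue lengths,
        guaranteeing each approach at least `minimum` seconds."""
        total = sum(queues.values())
        budget = cycle_seconds - minimum * len(queues)
        return {
            approach: minimum + (budget * q / total if total else budget / len(queues))
            for approach, q in queues.items()
        }

    queues = {"north": 12, "south": 4, "east": 20, "west": 8}  # vehicles detected
    for approach, seconds in allocate_green_time(queues).items():
        print(f"{approach}: {seconds:.0f} s green")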

Military

Aircraft simulators use AI for training aviators. Flight conditions can be simulated that allow pilots to make mistakes without risking themselves or expensive aircraft. Air combat can also be simulated.

AI can also be used to operate planes analogously to its control of ground vehicles. Autonomous drones can fly independently or in swarms.

The Air Operations Division (AOD) uses the Interactive Fault Diagnosis and Isolation System (IFDIS), a rule-based expert system built from TF-30 documentation and expert advice from mechanics who work on the TF-30, the engine used in the F-111C. The system replaced specialized workers: it allowed regular workers to communicate with it and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.

Speech recognition allows traffic controllers to give verbal directions to drones.

Artificial intelligence supported design of aircraft, or AIDA, is used to help designers create conceptual designs of aircraft. The program allows designers to focus more on the design itself and less on the design process and software tools. AIDA uses rule-based systems to compute its data. Although simple, the program has proven effective.

NASA

In 2003, a Dryden Flight Research Center project created software that could enable a damaged aircraft to continue flying until a safe landing could be achieved. The software compensated for damaged components by relying on the remaining undamaged components.

The 2016 Intelligent Autopilot System combined apprenticeship learning and behavioral cloning whereby the autopilot observed low-level actions required to maneuver the airplane and high-level strategy used to apply those actions.

Maritime

Neural networks are used by situational awareness systems in ships and boats. There also are autonomous boats.
