Thursday, November 27, 2025

Gravitational-wave astronomy

From Wikipedia, the free encyclopedia

Gravitational-wave astronomy is a subfield of astronomy concerned with the detection and study of gravitational waves emitted by astrophysical sources.

Gravitational waves are minute distortions or ripples in spacetime caused by the acceleration of massive objects. They are produced by cataclysmic events such as the merger of binary black holes, the coalescence of binary neutron stars, supernova explosions, and processes in the early universe shortly after the Big Bang. Studying them offers a new way to observe the universe, providing valuable insights into the behavior of matter under extreme conditions. Similar to electromagnetic radiation (such as light, radio waves, infrared radiation and X-rays), which involves the transport of energy via propagating electromagnetic field fluctuations, gravitational radiation involves fluctuations of the comparatively weaker gravitational field. The existence of gravitational waves was first suggested by Oliver Heaviside in 1893 and then conjectured by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves, before they were predicted by Albert Einstein in 1916 as a corollary of his theory of general relativity.

Data about the first observation of gravitational waves from the two LIGO detectors and Virgo interferometer

In 1978, Russell Alan Hulse and Joseph Hooton Taylor Jr. provided the first experimental evidence for the existence of gravitational waves by observing two neutron stars orbiting each other, and they won the 1993 Nobel Prize in Physics for this work. In 2015, nearly a century after Einstein's prediction, the first direct observation of gravitational waves, a signal from the merger of two black holes, confirmed the existence of these elusive phenomena and opened a new era in astronomy. Subsequent detections have included binary black hole mergers, neutron star collisions, and other violent cosmic events. Gravitational waves are now detected using laser interferometry, which measures tiny changes in the length of two perpendicular arms caused by passing waves. Observatories like LIGO (Laser Interferometer Gravitational-wave Observatory), Virgo and KAGRA (Kamioka Gravitational Wave Detector) use this technology to capture the faint signals from distant cosmic events. LIGO pioneers Barry C. Barish, Kip S. Thorne, and Rainer Weiss were awarded the 2017 Nobel Prize in Physics for their ground-breaking contributions to gravitational-wave astronomy.

Potential and challenges

When distant astronomical objects are observed using electromagnetic waves, phenomena such as scattering, absorption, reflection and refraction cause information loss. There are various regions in space only partially penetrable by photons, such as the insides of nebulae, the dense dust clouds at the galactic core and the regions near black holes. Gravitational-wave astronomy has the potential to be used in parallel with electromagnetic astronomy to study the universe at better resolution. In an approach known as multi-messenger astronomy, gravitational-wave data are combined with data from other wavelengths to get a more complete picture of astrophysical phenomena. Gravitational-wave astronomy helps in understanding the early universe, testing theories of gravity, and revealing the distribution of dark matter and dark energy. In particular, it can help determine the Hubble constant, which describes the expansion rate of the universe. All of these open doors to physics beyond the Standard Model (BSM).
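
As an illustration of the Hubble-constant point above, the short sketch below implements the "standard siren" idea under simple assumptions: the gravitational-wave signal supplies a luminosity distance, an electromagnetic counterpart supplies a redshift, and at low redshift the linear Hubble law gives H0 ≈ cz/d_L. The redshift and distance used here are illustrative placeholder values, not measured data.

    # Standard-siren estimate of the Hubble constant (illustrative sketch).
    # Assumes low redshift, so the linear Hubble law v = H0 * d applies; the
    # example values are placeholders loosely inspired by a GW170817-like event.

    C_KM_S = 299_792.458  # speed of light in km/s

    def hubble_constant(redshift, luminosity_distance_mpc):
        """Return H0 in km/s/Mpc from a redshift and a GW-inferred distance."""
        recession_velocity = C_KM_S * redshift  # km/s, valid only for z << 1
        return recession_velocity / luminosity_distance_mpc

    z = 0.0098    # redshift of the host galaxy (from the EM counterpart)
    d_l = 42.0    # luminosity distance in megaparsecs (from the GW signal)
    print(f"H0 ~ {hubble_constant(z, d_l):.1f} km/s/Mpc")  # ~ 70 km/s/Mpc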

Challenges that remain in the field include noise interference, the lack of ultra-sensitive instruments, and the detection of low-frequency waves. Ground-based detectors face problems with seismic vibrations produced by environmental disturbances and with the limitation on detector arm length imposed by the curvature of the Earth's surface. In the future, the field of gravitational-wave astronomy will try to develop upgraded detectors and next-generation observatories, along with possible space-based detectors such as LISA (Laser Interferometer Space Antenna). LISA will be able to listen to distant sources such as supermassive black holes in galactic cores and primordial black holes, as well as low-frequency sources such as binary white dwarf mergers and signals from the early universe.

Gravitational waves

Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves.

Gravitational waves were later predicted in 1916 by Albert Einstein on the basis of his general theory of relativity as ripples in spacetime, although Einstein himself later doubted whether they were physically real. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed) – showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity.

The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery.

Direct observation of gravitational waves was not made until 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves.

In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang.

Instruments for different frequencies

Collaboration between detectors aids in collecting unique and valuable information, owing to the different specifications and sensitivities of each instrument.

Noise curves for a selection of gravitational-wave detectors as a function of frequency. At very low frequencies are pulsar timing arrays, at low frequencies are space-borne detectors, and at high frequencies are ground-based detectors. The characteristic strains of potential astrophysical sources are also shown. To be detectable, the characteristic strain of a signal must lie above the noise curve.

High frequency

There are several ground-based laser interferometers with arms spanning several kilometers, including: the two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in Washington and Louisiana, USA; Virgo, at the European Gravitational Observatory in Italy; GEO600 in Germany; and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. While LIGO, Virgo, and KAGRA have made joint observations to date, GEO600 is currently used for trials and test runs because of the lower sensitivity of its instruments, and it has not participated in recent joint runs with the others.

In 2015, the LIGO project was the first to directly observe gravitational waves using laser interferometers. The LIGO detectors observed gravitational waves from the merger of two stellar-mass black holes, matching the predictions of general relativity. These observations demonstrated the existence of binary stellar-mass black hole systems, and were both the first direct detection of gravitational waves and the first observation of a binary black hole merger. The finding has been characterized as revolutionary because it verified our ability to use gravitational-wave astronomy to probe questions such as the nature of dark matter and the Big Bang.

Detection of an event by three or more detectors allows the celestial location to be estimated from the relative delays.
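
A minimal sketch of that triangulation idea follows, assuming a plane wavefront and illustrative (not surveyed) detector coordinates: the arrival-time difference between two sites depends on the source direction, and combining the delays from three or more sites narrows down the sky position.

    # Arrival-time delay of a gravitational-wave plane front between two
    # detectors -- the quantity used to triangulate a source on the sky.
    # The detector positions below are illustrative, not surveyed coordinates.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def time_delay(det_a, det_b, source_direction):
        """Delay (s) at det_b relative to det_a for a plane wave arriving from
        the unit vector source_direction (pointing toward the source)."""
        n = np.asarray(source_direction, dtype=float)
        n = n / np.linalg.norm(n)
        return float(np.dot(np.asarray(det_a) - np.asarray(det_b), n)) / C

    # Two detectors separated by ~3000 km along x (a LIGO-scale baseline).
    det_1 = np.array([0.0, 0.0, 0.0])
    det_2 = np.array([3.0e6, 0.0, 0.0])
    print(time_delay(det_1, det_2, [1.0, 0.0, 0.0]))  # about -0.01 s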

Low frequency

An alternative means of observation is using pulsar timing arrays (PTAs). There are three consortia, the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), and the Parkes Pulsar Timing Array (PPTA), which co-operate as the International Pulsar Timing Array. These use existing radio telescopes, but since they are sensitive to frequencies in the nanohertz range, many years of observation are needed to detect a signal and detector sensitivity improves gradually. Current bounds are approaching those expected for astrophysical sources.

Plot of correlation between pulsars observed by NANOGrav (2023) vs angular separation between pulsars, compared with a theoretical Hellings-Downs model (dashed purple) and if there were no gravitational wave background (solid green)

In June 2023, four PTA collaborations, the three mentioned above and the Chinese Pulsar Timing Array, delivered independent but similar evidence for a stochastic background of nanohertz gravitational waves. Each provided an independent first measurement of the theoretical Hellings-Downs curve, i.e., the quadrupolar correlation between two pulsars as a function of their angular separation in the sky, which is a telltale sign of the gravitational wave origin of the observed background. The sources of this background remain to be identified, although binaries of supermassive black holes are the most likely candidates.
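
The Hellings-Downs curve mentioned above has a simple closed form. The sketch below evaluates it in the common normalization where the expected correlation is 1/2 at zero separation; other analyses scale it differently.

    # Hellings-Downs correlation between two pulsars as a function of their
    # angular separation, for an isotropic gravitational-wave background.
    import numpy as np

    def hellings_downs(theta_rad):
        """Expected cross-correlation for angular separation theta (radians)."""
        x = (1.0 - np.cos(theta_rad)) / 2.0
        # x * log(x) -> 0 as x -> 0, so guard the log at zero separation.
        xlogx = np.where(x > 0, x * np.log(np.clip(x, 1e-300, None)), 0.0)
        return 1.5 * xlogx - x / 4.0 + 0.5

    angles_deg = np.array([1, 30, 60, 82, 120, 180], dtype=float)
    for a, c in zip(angles_deg, hellings_downs(np.radians(angles_deg))):
        print(f"{a:5.0f} deg : {c:+.3f}")  # dips to about -0.15 near 82 deg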

Intermediate frequencies

Further in the future, there is the possibility of space-borne detectors. The European Space Agency has selected a gravitational-wave mission for its L3 slot, due to launch in 2034; the current concept is the evolved Laser Interferometer Space Antenna (eLISA). Also in development is the Japanese Deci-hertz Interferometer Gravitational wave Observatory (DECIGO).

Scientific value

Astronomy has traditionally relied on electromagnetic radiation. Originating with the visible band, as technology advanced, it became possible to observe other parts of the electromagnetic spectrum, from radio to gamma rays. Each new frequency band gave a new perspective on the Universe and heralded new discoveries. During the 20th century, indirect and later direct measurements of high-energy, massive particles provided an additional window into the cosmos. Late in the 20th century, the detection of solar neutrinos founded the field of neutrino astronomy, giving an insight into previously inaccessible phenomena, such as the inner workings of the Sun. The observation of gravitational waves provides a further means of making astrophysical observations.

Russell Hulse and Joseph Taylor were awarded the 1993 Nobel Prize in Physics for showing that the orbital decay of a pair of neutron stars, one of them a pulsar, fits general relativity's predictions of gravitational radiation. Subsequently, many other binary pulsars (including one double pulsar system) have been observed, all fitting gravitational-wave predictions. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the first detection of gravitational waves.

Gravitational waves provide complementary information to that provided by other means. By combining observations of a single event made using different means, it is possible to gain a more complete understanding of the source's properties. This is known as multi-messenger astronomy. Gravitational waves can also be used to observe systems that are invisible (or almost impossible to detect) by any other means. For example, they provide a unique method of measuring the properties of black holes.

Gravitational waves can be emitted by many systems, but, to produce detectable signals, the source must consist of extremely massive objects moving at a significant fraction of the speed of light. The main source is a binary of two compact objects. Example systems include:

  • Compact binaries made up of two closely orbiting stellar-mass objects, such as white dwarfs, neutron stars or black holes. Wider binaries, which have lower orbital frequencies, are a source for detectors like LISA, while closer binaries produce a signal for ground-based detectors like LIGO (see the frequency sketch after this list). Ground-based detectors could potentially detect binaries containing an intermediate-mass black hole of several hundred solar masses.
  • Supermassive black hole binaries, consisting of two black holes with masses of 10⁵–10⁹ solar masses. Supermassive black holes are found at the centre of galaxies. When galaxies merge, it is expected that their central supermassive black holes merge too. These are potentially the loudest gravitational-wave signals. The most massive binaries are a source for PTAs. Less massive binaries (about a million solar masses) are a source for space-borne detectors like LISA.
  • Extreme-mass-ratio systems of a stellar-mass compact object orbiting a supermassive black hole. These are sources for detectors like LISA. Systems with highly eccentric orbits produce a burst of gravitational radiation as they pass through the point of closest approach; systems with near-circular orbits, which are expected towards the end of the inspiral, emit continuously within LISA's frequency band. Extreme-mass-ratio inspirals can be observed over many orbits. This makes them excellent probes of the background spacetime geometry, allowing for precision tests of general relativity.
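
As noted in the first item of the list above, which band a binary radiates in follows from its orbital frequency: the dominant gravitational-wave frequency of a circular binary is twice the Keplerian orbital frequency. The sketch below evaluates this for two generic, illustrative systems rather than specific detections.

    # Dominant gravitational-wave frequency of a circular compact binary,
    # f_GW = 2 * f_orbital, with the orbital frequency from Kepler's third law.
    # The two example systems are generic illustrations, not real detections.
    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg

    def gw_frequency(m1_solar, m2_solar, separation_m):
        """Gravitational-wave frequency (Hz) of a circular binary."""
        m_total = (m1_solar + m2_solar) * M_SUN
        f_orbital = math.sqrt(G * m_total / separation_m**3) / (2 * math.pi)
        return 2 * f_orbital

    # Two 0.6 solar-mass white dwarfs ~10^8 m apart: millihertz (LISA band).
    print(gw_frequency(0.6, 0.6, 1.0e8))   # ~ 4e-3 Hz
    # Two 1.4 solar-mass neutron stars ~100 km apart: ~200 Hz (LIGO band).
    print(gw_frequency(1.4, 1.4, 1.0e5))   # ~ 2e2 Hz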

In addition to binaries, there are other potential sources:

  • Supernovae generate high-frequency bursts of gravitational waves that could be detected with LIGO or Virgo.
  • Rotating neutron stars are a source of continuous high-frequency waves if they possess axial asymmetry.
  • Early universe processes, such as inflation or a phase transition.
  • Cosmic strings could also emit gravitational radiation if they do exist. Discovery of these gravitational waves would confirm the existence of cosmic strings.

Gravitational waves interact only weakly with matter. This is what makes them difficult to detect. It also means that they can travel freely through the Universe, and are not absorbed or scattered like electromagnetic radiation. It is therefore possible to see to the center of dense systems, like the cores of supernovae or the Galactic Center. It is also possible to see further back in time than with electromagnetic radiation, as the early universe was opaque to light prior to recombination, but transparent to gravitational waves.

The ability of gravitational waves to move freely through matter also means that gravitational-wave detectors, unlike telescopes, are not pointed at a single field of view but observe the entire sky. Detectors are more sensitive in some directions than others, which is one reason why it is beneficial to have a network of detectors. Sky localization is also poor, because of the small number of detectors.

In cosmic inflation

Cosmic inflation, a hypothesized period when the universe rapidly expanded during the first 10⁻³⁶ seconds after the Big Bang, would have given rise to gravitational waves, which would have left a characteristic imprint in the polarization of the CMB radiation.

It is possible to calculate the properties of the primordial gravitational waves from measurements of the patterns in the microwave radiation, and use those calculations to learn about the early universe.

Development

The LIGO Hanford Control Room

As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st century multi-messenger astronomy.

Gravitational-wave observations complement observations in the electromagnetic spectrum. These waves also promise to yield information in ways not possible via detection and analysis of electromagnetic waves. Electromagnetic waves can be absorbed and re-radiated in ways that make extracting information about the source difficult. Gravitational waves, however, only interact weakly with matter, meaning that they are not scattered or absorbed. This should allow astronomers to view the center of a supernova, stellar nebulae, and even colliding galactic cores in new ways.

Ground-based detectors have yielded new information about the inspiral phase and mergers of binary systems of two stellar-mass black holes, and the merger of two neutron stars. They could also detect signals from core-collapse supernovae, and from periodic sources such as pulsars with small deformations. If there is truth to speculation about certain kinds of phase transitions or kink bursts from long cosmic strings in the very early universe (at cosmic times around 10⁻²⁵ seconds), these could also be detectable. Space-based detectors like LISA should detect objects such as binaries consisting of two white dwarfs, and AM CVn stars (a white dwarf accreting matter from its binary partner, a low-mass helium star), and also observe the mergers of supermassive black holes and the inspiral of smaller objects (between one and a thousand solar masses) into such black holes. LISA should also be able to listen to the same kind of sources from the early universe as ground-based detectors, but at even lower frequencies and with greatly increased sensitivity.

Detecting emitted gravitational waves is a difficult endeavor. It involves ultra-stable high-quality lasers and detectors calibrated to a strain sensitivity of at least 2·10⁻²²/√Hz, as demonstrated at the ground-based detector GEO600. It has also been estimated that, even for large astronomical events such as supernova explosions, these waves are likely to have degraded to vibrations as small as an atomic diameter by the time they reach Earth.
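
For a rough sense of scale, the sketch below converts a dimensionless strain h into an arm-length change using ΔL ≈ h·L; the strain and arm length are illustrative round numbers rather than the specification of any particular detector.

    # Back-of-the-envelope conversion from strain to arm-length change,
    # delta_L ~ h * L. The numbers are illustrative orders of magnitude only.

    ATOM_DIAMETER = 1e-10      # m, rough size of an atom
    PROTON_DIAMETER = 1.7e-15  # m

    def arm_length_change(strain, arm_length_m):
        """Approximate differential arm-length change in metres."""
        return strain * arm_length_m

    h = 1e-21     # typical strain from a detected compact-binary merger
    L = 4000.0    # m, a LIGO-scale arm length
    dL = arm_length_change(h, L)
    print(f"delta L ~ {dL:.0e} m "
          f"({dL / PROTON_DIAMETER:.0e} proton diameters, "
          f"{dL / ATOM_DIAMETER:.0e} atom diameters)")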

Pinpointing where gravitational waves come from is also a challenge, but deflected waves passing through gravitational lenses, combined with machine learning, could make localization easier and more accurate. Just as the light from the SN Refsdal supernova was detected a second time almost a year after it was first discovered, because gravitational lensing sent some of the light on a different path through the universe, the same approach could be used for gravitational waves. While still at an early stage, a technique similar to the triangulation used by cell phones to determine their location relative to GPS satellites could help astronomers track down the origin of the waves.

History of computing

From Wikipedia, the free encyclopedia

The history of computing extends beyond the history of computing hardware and modern computing technology to include earlier methods that relied on pen and paper or chalk and slate, with or without the aid of tables.

Concrete devices

Digital computing is intimately tied to the representation of numbers. But long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts are implicit in concrete practices such as one-to-one correspondence and tallying.

Numbers

Eventually, the concept of numbers became concrete and familiar enough for counting to arise, at times with sing-song mnemonics to teach sequences to others. All known human languages, except the Pirahã language, have words for at least the numerals "one" and "two", and even some animals like the blackbird can distinguish a surprising number of items.

Advances in the numeral system and mathematical notation eventually led to the discovery of mathematical operations such as addition, subtraction, multiplication, division, squaring, square root, and so forth. Eventually, the operations were formalized, and concepts about the operations became understood well enough to be stated formally, and even proven. See, for example, Euclid's algorithm for finding the greatest common divisor of two numbers.
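
For concreteness, Euclid's algorithm mentioned above can be written in a few lines of modern code:

    # Euclid's algorithm for the greatest common divisor of two integers.

    def gcd(a, b):
        """Return the greatest common divisor of two non-negative integers."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # -> 21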

By the High Middle Ages, the positional Hindu–Arabic numeral system had reached Europe, which allowed for the systematic computation of numbers. During this period, the representation of a calculation on paper allowed the calculation of mathematical expressions, and the tabulation of mathematical functions such as the square root and the common logarithm (for use in multiplication and division), and the trigonometric functions. By the time of Isaac Newton's research, paper or vellum was an important computing resource, and even in the modern era researchers like Enrico Fermi would cover random scraps of paper with calculations, to satisfy their curiosity about an equation. Even into the period of programmable calculators, Richard Feynman would unhesitatingly compute any steps that overflowed the memory of the calculators, by hand, just to learn the answer; by 1976 Feynman had purchased an HP-25 calculator with a 49 program-step capacity, and if a differential equation required more than 49 steps to solve, he would simply continue his computation by hand.

Early computation

Mathematical statements need not be abstract only; when a statement can be illustrated with actual numbers, the numbers can be communicated and a community can arise. This allows the repeatable, verifiable statements which are the hallmark of mathematics and science. These kinds of statements have existed for thousands of years, and across multiple civilizations, as shown below:

The earliest known tool used for computation is the Sumerian abacus, believed to have been invented in Babylon c. 2700–2300 BC. Its original style of usage was by lines drawn in sand with pebbles.

In c. 1050–771 BC, the south-pointing chariot was invented in ancient China. It was the first known geared mechanism to use a differential gear, which was later used in analog computers. The Chinese also invented a more sophisticated abacus from around the 2nd century BC known as the Chinese abacus.

In the 3rd century BC, Archimedes used the mechanical principle of balance (see Archimedes Palimpsest § The Method of Mechanical Theorems) to calculate mathematical problems, such as the number of grains of sand in the universe (The sand reckoner), which also required a recursive notation for numbers (e.g., the myriad myriad).

The Antikythera mechanism is believed to be the earliest known geared computing device. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.

According to Simon Singh, Muslim mathematicians also made important advances in cryptography, such as the development of cryptanalysis and frequency analysis by Alkindus. Programmable machines were also invented by Muslim engineers, such as the automatic flute player by the Banū Mūsā brothers.

During the Middle Ages, several European philosophers made attempts to produce analog computer devices. Influenced by the Arabs and Scholasticism, Majorcan philosopher Ramon Llull (1232–1315) devoted a great part of his life to defining and designing several logical machines that, by combining simple and undeniable philosophical truths, could produce all possible knowledge. These machines were never actually built, as they were more of a thought experiment to produce new knowledge in systematic ways; although they could make simple logical operations, they still needed a human being for the interpretation of results. Moreover, they lacked a versatile architecture, each machine serving only very concrete purposes. Despite this, Llull's work had a strong influence on Gottfried Leibniz (early 18th century), who developed his ideas further and built several calculating tools using them.

The apex of this early era of mechanical computing can be seen in the Difference Engine and its successor, the Analytical Engine, both by Charles Babbage. Babbage never completed constructing either engine, but in 2002 Doron Swade and a group of other engineers at the Science Museum in London completed Babbage's Difference Engine using only materials that would have been available in the 1840s. By following Babbage's detailed design they were able to build a functioning engine, allowing historians to say, with some confidence, that if Babbage had been able to complete his Difference Engine it would have worked.

The more advanced Analytical Engine combined concepts from his previous work and that of others to create a device that, if constructed as designed, would have possessed many properties of a modern electronic computer, such as an internal "scratch memory" equivalent to RAM, multiple forms of output including a bell, a graph-plotter and a simple printer, and a programmable input-output "hard" memory of punch cards which it could modify as well as read. The key advancement of Babbage's devices beyond those created before him was that each component of the device was independent of the rest of the machine, much like the components of a modern electronic computer. This was a fundamental shift in thought; previous computational devices served only a single purpose, or at best had to be disassembled and reconfigured to solve a new problem. Babbage's devices could be reprogrammed to solve new problems by the entry of new data, and could act upon previous calculations within the same series of instructions. Ada Lovelace took this concept one step further, by creating a program for the Analytical Engine to calculate Bernoulli numbers, a complex calculation requiring a recursive algorithm. This is considered to be the first example of a true computer program, a series of instructions that act upon data not known in full until the program is run.
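
Ada Lovelace's Note G computed Bernoulli numbers on paper for the Analytical Engine; her tabular method differed from the modern recurrence used in the sketch below, which is given only to show the kind of recursive calculation involved.

    # Bernoulli numbers from the standard recurrence
    #   sum_{j=0}^{m} C(m+1, j) * B_j = 0  for m >= 1,  with B_0 = 1.
    # A modern stand-in for the kind of recursive calculation in Note G.
    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
        b = [Fraction(0)] * (n + 1)
        b[0] = Fraction(1)
        for m in range(1, n + 1):
            b[m] = -Fraction(1, m + 1) * sum(comb(m + 1, j) * b[j] for j in range(m))
        return b

    print(bernoulli(8))  # B_1 = -1/2 in this convention; odd B_n vanish for n >= 3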

Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the second of the only two designs for mechanical analytical engines in history. Two other inventors, Leonardo Torres Quevedo and Vannevar Bush, also did follow-on research based on Babbage's work. In his Essays on Automatics (1914) Torres presented the design of an electromechanical calculating machine and introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, an arithmetic unit connected to a remote typewriter, on which commands could be typed and the results printed automatically. Bush's paper Instrumental Analysis (1936) discussed using existing IBM punch card machines to implement Babbage's design. In the same year, he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.

Several examples of analog computation survived into recent times. A planimeter is a device that does integrals, using distance as the analog quantity. Until the 1980s, HVAC systems used air both as the analog quantity and the controlling element. Unlike modern digital computers, analog computers are not very flexible and need to be reconfigured (i.e., reprogrammed) manually to switch them from working on one problem to another. Analog computers had an advantage over early digital computers in that they could be used to solve complex problems using behavioral analogues while the earliest attempts at digital computers were quite limited.

A Smith Chart is a well-known nomogram.

Since computers were rare in this era, the solutions were often hard-coded into paper forms such as nomograms, which could then produce analog solutions to these problems, such as the distribution of pressures and temperatures in a heating system.

Digital electronic computers

The "brain" [computer] may one day come down to our level [of the common people] and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far.

— British newspaper The Star in a June 1949 news article about the EDSAC computer, long before the era of personal computers.

In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. During 1880–81 he had shown that NOR gates alone (or NAND gates alone) can be used to reproduce the functions of all the other logic gates, but this work remained unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates.
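
A small sketch of Peirce's universality result, expressed in modern code: every basic gate below is built from NAND alone (the same construction works with NOR).

    # Peirce's universality result in modern form: AND, OR, NOT and NOR are
    # all built here from NAND alone.

    def nand(a, b):
        return not (a and b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def nor_(a, b):
        return not_(or_(a, b))

    # Truth-table check against Python's built-in boolean operators.
    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
            assert nor_(a, b) == (not (a or b))
    print("AND, OR, NOT and NOR reproduced from NAND alone")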

Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).

The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon, and Victor Shestakov published papers introducing switching circuit theory, using digital electronics for Boolean algebraic operations.

In 1936 Alan Turing published his seminal paper On Computable Numbers, with an Application to the Entscheidungsproblem in which he modeled computation in terms of a one-dimensional storage tape, leading to the idea of the Universal Turing machine and Turing-complete systems.
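
The sketch below is a toy illustration of Turing's model, a finite control reading and writing a one-dimensional tape; the example machine and its transition table are invented for illustration, not taken from Turing's paper.

    # Toy one-tape Turing machine: a finite control reads and writes symbols
    # on a tape and moves left or right. The example machine below simply
    # inverts a string of bits and halts; it is invented for illustration.

    def run_turing_machine(rules, tape, state="start", halt="halt"):
        cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
        head = 0
        while state != halt:
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    invert_bits = {  # (state, symbol) -> (symbol to write, move, next state)
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine(invert_bits, "10110"))  # -> 01001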

The first digital electronic computer was developed between April 1936 and June 1939 at the IBM Patent Department in Endicott, New York, by Arthur Halsey Dickinson. In this computer, IBM introduced a calculating device with a keyboard, processor and electronic output (display). Its competitor was the digital electronic computer NCR3566, developed at NCR in Dayton, Ohio, by Joseph Desch and Robert Mumma between April and August 1939. The IBM and NCR machines were decimal, executing addition and subtraction in binary position code.

In December 1939 John Atanasoff and Clifford Berry completed their experimental model to prove the concept of the Atanasoff–Berry computer (ABC), which had begun development in 1937. The experimental model was binary, executed addition and subtraction in octal binary code, and was the first binary digital electronic computing device. The Atanasoff–Berry computer was intended to solve systems of linear equations, though it was not programmable. The computer was never truly completed due to Atanasoff's departure from Iowa State University in 1942 to work for the United States Navy. Many people credit the ABC with many of the ideas used in later developments during the age of early electronic computing.

The Z3 computer, built by German inventor Konrad Zuse in 1941, was the first programmable, fully automatic computing machine, but it was not electronic.

During World War II, ballistics computing was done by women, who were hired as "computers." The term "computer" mostly referred to these women (whose role would now be described as "operator") until about 1945, after which it took on the modern definition of machinery it presently holds.

The ENIAC (Electronic Numerical Integrator And Computer) was the first electronic general-purpose computer, announced to the public in 1946. It was Turing-complete, digital, and capable of being reprogrammed to solve a full range of computing problems. Women implemented the programming for machines like the ENIAC, and men created the hardware.

The Manchester Baby was the first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.

William Shockley, John Bardeen and Walter Brattain at Bell Labs invented the first working transistor, the point-contact transistor, in 1947, followed by the bipolar junction transistor in 1948. At the University of Manchester in 1953, a team under the leadership of Tom Kilburn designed and built the first transistorized computer, called the Transistor Computer, a machine using the newly developed transistors instead of valves. The first stored-program transistor computer was the ETL Mark III, developed by Japan's Electrotechnical Laboratory from 1954 to 1956. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.

In 1954, 95% of computers in service were being used for engineering and scientific purposes.

Personal computers

The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. The MOSFET made it possible to build high-density integrated circuit chips. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.

The silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild Semiconductor in 1968. This led to the development of the first single-chip microprocessor, the Intel 4004. The Intel 4004 was developed as a single-chip microprocessor from 1969 to 1970, led by Intel's Federico Faggin, Marcian Hoff, and Stanley Mazor, and Busicom's Masatoshi Shima. The chip was mainly designed and realized by Faggin, with his silicon-gate MOS technology. The microprocessor led to the microcomputer revolution, with the development of the microcomputer, which would later be called the personal computer (PC).

Most early microprocessors, such as the Intel 8008 and Intel 8080, were 8-bit. Texas Instruments released the first fully 16-bit microprocessor, the TMS9900 processor, in June 1976. They used the microprocessor in the TI-99/4 and TI-99/4A computers.

The 1980s brought significant advances with microprocessors that greatly impacted the fields of engineering and other sciences. The Motorola 68000 microprocessor had a processing speed far superior to the other microprocessors in use at the time, and this newer, faster microprocessor allowed the microcomputers that followed to be more capable in the amount of computing they could do. This was evident in the 1983 release of the Apple Lisa. The Lisa was one of the first personal computers with a graphical user interface (GUI) to be sold commercially. It ran on the Motorola 68000 CPU and used both dual floppy disk drives and a 5 MB hard drive for storage. The machine also had 1 MB of RAM, which allowed software loaded from disk to run without repeatedly rereading the disk. After the Lisa's failure in terms of sales, Apple released its first Macintosh computer, still running on the Motorola 68000 microprocessor, but with only 128 KB of RAM, one floppy drive, and no hard drive, in order to lower the price.

In the late 1980s and early 1990s, computers became more useful for personal and work purposes, such as word processing. In 1989, Apple released the Macintosh Portable; it weighed 7.3 kg (16 lb) and was extremely expensive, costing US$7,300. At launch it was one of the most powerful laptops available, but due to the price and weight it was not met with great success and was discontinued only two years later. That same year Intel introduced the Touchstone Delta supercomputer, which had 512 microprocessors. This technological advancement was very significant, as it was used as a model for some of the fastest multi-processor systems in the world. It also served as a prototype for Caltech researchers, who used the design for projects like real-time processing of satellite images and simulating molecular models for various fields of research.

Supercomputers

In terms of supercomputing, the first widely acknowledged supercomputer was the Control Data Corporation (CDC) 6600, built in 1964 by Seymour Cray. Its maximum speed was 40 MHz, or 3 million floating point operations per second (FLOPS). The CDC 6600 was replaced by the CDC 7600 in 1969; although its normal clock speed was not faster than the 6600's, the 7600 was still faster due to its peak clock speed, which was approximately 30 times faster than that of the 6600. Although CDC was a leader in supercomputers, its relationship with Seymour Cray (which had already been deteriorating) completely collapsed. In 1972, Cray left CDC and began his own company, Cray Research Inc. With support from Wall Street investors, an industry fueled by the Cold War, and without the restrictions he had within CDC, he created the Cray-1 supercomputer. With a clock speed of 80 MHz, or 136 megaFLOPS, Cray developed a name for himself in the computing world. By 1982, Cray Research had produced the Cray X-MP, equipped with multiprocessing, and in 1985 released the Cray-2, which continued the trend of multiprocessing and clocked at 1.9 gigaFLOPS. Cray Research developed the Cray Y-MP in 1988, but afterward struggled to continue producing supercomputers. This was largely because the Cold War had ended, demand for cutting-edge computing by colleges and the government declined drastically, and demand for microprocessing units increased.

In 1998, David Bader developed the first Linux supercomputer using commodity parts. While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual-processor, 333 MHz Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components, as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously. Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world. Though Linux-based clusters using consumer-grade parts, such as Beowulf, existed before the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.

Today, supercomputers are still used by the governments of the world and educational institutions for computations such as simulations of natural disasters, genetic variant searches within a population relating to disease, and more. As of November 2024, the fastest supercomputer is El Capitan.

Starting with known special cases, the calculation of logarithms and trigonometric functions can be performed by looking up numbers in a mathematical table, and interpolating between known cases. For small enough differences, this linear operation was accurate enough for use in navigation and astronomy in the Age of Exploration. The uses of interpolation have thrived in the past 500 years: by the twentieth century Leslie Comrie and W.J. Eckert systematized the use of interpolation in tables of numbers for punch card calculation.
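
A short sketch of table lookup with linear interpolation, the technique described above; the coarse logarithm table here is illustrative.

    # Table lookup with linear interpolation between tabulated values.
    import math
    from bisect import bisect_right

    # A deliberately coarse table of common logarithms.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [math.log10(x) for x in xs]

    def interp_log10(x):
        """Linearly interpolate log10(x) from the table (1 <= x <= 5)."""
        i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])

    print(interp_log10(2.5), math.log10(2.5))  # ~0.389 vs ~0.398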

Weather prediction

The numerical solution of differential equations, notably the Navier–Stokes equations, was an important stimulus to computing, with Lewis Fry Richardson's numerical approach to solving differential equations. The first computerized weather forecast was performed in 1950 by a team composed of American meteorologists Jule Charney, Philip Duncan Thompson, Larry Gates, Norwegian meteorologist Ragnar Fjørtoft, applied mathematician John von Neumann, and ENIAC programmer Klara Dan von Neumann. To this day, some of the most powerful computer systems on Earth are used for weather forecasts.
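
The sketch below is a generic finite-difference time-stepping example (1D diffusion on a periodic grid), meant only to convey the kind of repetitive grid arithmetic that early numerical weather prediction required; it is not Richardson's scheme or the ENIAC forecast model.

    # Explicit finite-difference time stepping of a 1D diffusion equation,
    # du/dt = alpha * d2u/dx2, on a periodic grid: repetitive arithmetic of
    # the kind early numerical forecasting demanded (not Richardson's scheme).

    def diffuse(u, alpha=0.2, steps=100):
        """Advance the grid values u by the given number of explicit steps."""
        u = list(u)
        n = len(u)
        for _ in range(steps):
            u = [u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
                 for i in range(n)]
        return u

    initial = [0.0] * 10 + [1.0] + [0.0] * 10  # a single warm spot
    print([round(v, 3) for v in diffuse(initial)])  # the spot spreads and flattens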

Symbolic computations

By the late 1960s, computer systems could perform symbolic algebraic manipulations well enough to pass college-level calculus courses.

Important women and their contributions

Women are often underrepresented in STEM fields when compared to their male counterparts. In the modern era before the 1960s, computing was widely seen as "women's work" since it was associated with the operation of tabulating machines and other mechanical office work. The accuracy of this association varied from place to place. In America, Margaret Hamilton recalled an environment dominated by men, while Elsie Shutt recalled surprise at seeing even half of the computer operators at Raytheon were men. Machine operators in Britain were mostly women into the early 1970s. As these perceptions changed and computing became a high-status career, the field became more dominated by men. Professor Janet Abbate, in her book Recoding Gender, writes:

Yet women were a significant presence in the early decades of computing. They made up the majority of the first computer programmers during World War II; they held positions of responsibility and influence in the early computer industry; and they were employed in numbers that, while a small minority of the total, compared favorably with women's representation in many other areas of science and engineering. Some female programmers of the 1950s and 1960s would have scoffed at the notion that programming would ever be considered a masculine occupation, yet these women’s experiences and contributions were forgotten all too quickly.

Some notable examples of women in the history of computing include Ada Lovelace, whose program for the Analytical Engine is described above.

Progress in artificial intelligence

Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.

Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field of artificial intelligence over time. AI is a branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields, including medical diagnosis, finance, robotics, law, video games, agriculture, and scientific discovery. Society as a whole is looking to artificial intelligence to be a key factor in the upcoming years because of its potential. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."

"Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time.

Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:

  1. Artificial narrow intelligence – AI capable only of specific tasks;
  2. Artificial general intelligence – AI with ability in several areas, able to autonomously solve problems it was never specifically designed for;
  3. Artificial superintelligence – AI capable of general tasks, including scientific creativity, social skills, and general wisdom.

To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject-matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

In 2023, humans still substantially outperformed both GPT-4 and other models tested on the ConceptARC benchmark. Those models scored 60% on most, and 77% on one category, while humans scored 91% on all and 97% on one category. However, later research in 2025 showed that human-generated output grids were only accurate 73% of the time, while AI models available that year managed to score above 77%.

Current performance in specific areas

Game | Champion year | Legal states (log10) | Game tree complexity (log10) | Game of perfect information?
Draughts (checkers) | 1994 | 21 | 31 | Perfect
Othello (reversi) | 1997 | 28 | 58 | Perfect
Chess | 1997 | 46 | 123 | Perfect
Scrabble | 2006 | | |
Shogi | 2017 | 71 | 226 | Perfect
Go | 2017 | 172 | 360 | Perfect
2p no-limit hold 'em | 2017 | | | Imperfect
StarCraft | – | 270+ | | Imperfect
StarCraft II | 2019 | | | Imperfect

There are many useful abilities that can be described as showing some form of intelligence. This gives better insight into the comparative success of artificial intelligence in different areas.

AI, like electricity or the steam engine, is a general-purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at. Some versions of Moravec's paradox observe that humans are more likely to outperform machines in areas such as physical dexterity that have been the direct target of natural selection. While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets. Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."

Games provide a high-profile benchmark for assessing rates of progress; many games have a large professional player base and a well-established competitive rating system. AlphaGo brought the era of classical board-game benchmarks to a close when artificial intelligence proved its competitive edge over humans in 2016: DeepMind's AlphaGo software program defeated the top professional Go player Lee Sedol. Games of imperfect knowledge provide new challenges to AI in the area of game theory; the most prominent milestone in this area was Libratus's poker victory in 2017. E-sports continue to provide additional benchmarks; Facebook AI, DeepMind, and others have engaged with the popular StarCraft franchise of videogames.

Broad classes of outcome for an AI test may be given as:

  • optimal: it is not possible to perform better (note: some of these entries were solved by humans)
  • super-human: performs better than all humans
  • high-human: performs better than most humans
  • par-human: performs similarly to most humans
  • sub-human: performs worse than most humans

Optimal

Super-human

High-human

Par-human

Sub-human

  • Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)
  • Object recognition
  • Various robotics tasks that may require advances in robot hardware as well as AI, including:
    • Stable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)
    • Humanoid soccer
  • Speech recognition: "nearly equal to human performance" (2017)
  • Explainability. Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.
  • Many tests of fluid intelligence (2020)
  • Bongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)
  • Visual Commonsense Reasoning (VCR) benchmark (as of 2020)
  • Stock market prediction: Financial data collection and processing using Machine Learning algorithms
  • Angry Birds video game, as of 2020
  • Various tasks that are difficult to solve without contextual knowledge.

Proposed tests of artificial intelligence

In his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.

The Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior.

Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; however, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.

Exams

According to OpenAI, in 2023 ChatGPT GPT-4 scored in the 90th percentile on the Uniform Bar Exam. On the SAT, GPT-4 scored in the 89th percentile on math and the 93rd percentile in Reading & Writing. On the GRE, it scored in the 54th percentile on the writing test, the 88th percentile on the quantitative section, and the 99th percentile on the verbal section. It scored in the 99th to 100th percentile on the 2020 USA Biology Olympiad semifinal exam. It scored a perfect "5" on several AP exams.

Independent researchers found in 2023 that ChatGPT GPT-3.5 "performed at or near the passing threshold" for the three parts of the United States Medical Licensing Examination. GPT-3.5 was also assessed to attain a low, but passing, grade from exams for four law school courses at the University of Minnesota. GPT-4 passed a text-based radiology board–style examination.

Competitions

Many competitions and prizes, such as the ImageNet Challenge, promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data mining, robotic cars, and robot soccer, as well as conventional games.

Past and current predictions

An expert poll around 2016, conducted by Katja Grace of the Future of Humanity Institute and associates, gave median estimates of 3 years for championship Angry Birds, 4 years for the World Series of Poker, and 6 years for StarCraft. On more subjective tasks, the poll gave 6 years for folding laundry as well as an average human worker, 7–10 years for expertly answering 'easily Googleable' questions, 8 years for average speech transcription, 9 years for average telephone banking, and 11 years for expert songwriting, but over 30 years for writing a New York Times bestseller or winning the Putnam math competition.

Chess

Deep Blue at the Computer History Museum

An AI defeated a grandmaster in a regulation tournament game for the first time in 1988; rebranded as Deep Blue, it beat the reigning human world chess champion in 1997 (see Deep Blue versus Garry Kasparov).

Estimates when computers would exceed humans at Chess
Year prediction made Predicted year Number of years Predictor Contemporaneous source
1957 1967 or sooner 10 or less Herbert A. Simon, economist
1990 2000 or sooner 10 or less Ray Kurzweil, futurist Age of Intelligent Machines

Go

AlphaGo defeated a European Go champion in October 2015, and Lee Sedol in March 2016, one of the world's top players (see AlphaGo versus Lee Sedol). According to Scientific American and other sources, most observers had expected superhuman Computer Go performance to be at least a decade away.

Estimates when computers would exceed humans at Go
Year prediction made Predicted year Number of years Predictor Affiliation Contemporaneous source
1997 2100 or later 103 or more Piet Hut, physicist and Go fan Institute for Advanced Study New York Times
2007 2017 or sooner 10 or less Feng-Hsiung Hsu, Deep Blue lead Microsoft Research Asia IEEE Spectrum
2014 2024 10 Rémi Coulom, Computer Go programmer CrazyStone Wired

Human-level artificial general intelligence (AGI)

AI pioneer and economist Herbert A. Simon inaccurately predicted in 1965: "Machines will be capable, within twenty years, of doing any work a man can do". Similarly, in 1970 Marvin Minsky wrote that "Within a generation... the problem of creating artificial intelligence will substantially be solved."

Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when AGI would arrive was 2040 to 2050, depending on the poll.

The Grace poll around 2016 found results varied depending on how the question was framed. Respondents asked to estimate "when unaided machines can accomplish every task better and more cheaply than human workers" gave an aggregated median answer of 45 years and a 10% chance of it occurring within 9 years. Other respondents asked to estimate "when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers" estimated a median of 122 years and a 10% probability of 20 years. The median response for when "AI researcher" could be fully automated was around 90 years. No link was found between seniority and optimism, but Asian researchers were much more optimistic than North American researchers on average; Asians predicted 30 years on average for "accomplish every task", compared with the 74 years predicted by North Americans.

Estimates of when AGI will arrive
Year prediction made Predicted year Number of years Predictor Contemporaneous source
1965 1985 or sooner 20 or less Herbert A. Simon The shape of automation for men and management
1993 2023 or sooner 30 or less Vernor Vinge, science fiction writer "The Coming Technological Singularity"
1995 2040 or sooner 45 or less Hans Moravec, robotics researcher Wired
2008 Never / Distant future — Gordon E. Moore, inventor of Moore's Law IEEE Spectrum
2017 2029 12 Ray Kurzweil Interview

Inner core super-rotation

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inner_core_super-rotation   Cutaway ...