
Sunday, April 26, 2026

Watchmaker analogy

From Wikipedia, the free encyclopedia

The watchmaker analogy or watchmaker argument is a teleological argument, an argument for the existence of God. In broad terms, the watchmaker analogy states that just as it is readily observed that a watch (e.g., a pocket watch) did not come to be accidentally or on its own but rather through the intentional handiwork of a skilled watchmaker, it is also readily observed that nature did not come to be accidentally or on its own but through the intentional handiwork of an intelligent designer. The watchmaker analogy originated in natural theology and is often used to argue for the concept of intelligent design. The analogy holds that design implies a designer, i.e., a creator deity. The watchmaker analogy was set out by William Paley in his 1802 book Natural Theology or Evidences of the Existence and Attributes of the Deity.

The original analogy played a prominent role in natural theology and the "argument from design", where it was used to support arguments for the existence of God and for the intelligent design of the universe, in both Christianity and Deism. Prior to Paley, however, Sir Isaac Newton, René Descartes, and others from the time of the Scientific Revolution had each believed "that the physical laws he [each] had uncovered revealed the mechanical perfection of the workings of the universe to be akin to a watch, wherein the watchmaker is God."

The 1859 publication of Charles Darwin's book on natural selection put forward an alternative explanation for complexity and adaptation to that offered by the watchmaker analogy. In the 19th century, deists, who championed the watchmaker analogy, held that Darwin's theory fit with "the principle of uniformitarianism—the idea that all processes in the world occur now as they have in the past" and that deistic evolution "provided an explanatory framework for understanding species variation in a mechanical universe."

When evolutionary biology began being taught in American high schools in the 1960s, Christian fundamentalists used versions of the argument to dispute the concepts of evolution and natural selection, and there was renewed interest in the watchmaker argument. Evolutionary biologist Richard Dawkins referred to the analogy in his 1986 book The Blind Watchmaker when explaining the mechanism of evolution. Others, however, consider the watchmaker analogy to be compatible with evolutionary creation, opining that the two concepts are not mutually exclusive.

History

Ancient predecessor

In the second century, Epictetus argued that, just as a sword is made by a craftsman to fit a scabbard, so human genitals and the desire of humans to fit them together suggest a type of design or craftsmanship of the human form. Epictetus attributed this design to a type of Providence woven into the fabric of the universe, rather than to a personal monotheistic god.

Scientific Revolution

The Scientific Revolution "nurtured a growing awareness" that "there were universal laws of nature at work that ordered the movement of the world and its parts." Amos Yong writes that in "astronomy, the Copernican revolution regarding the heliocentrism of the solar system, Johannes Kepler's (1571–1630) three laws of planetary motion, and Isaac Newton's (1642–1727) law of universal gravitation—laws of gravitation and of motion, and notions of absolute space and time—all combined to establish the regularities of heavenly and earthly bodies".

Simultaneously, the development of machine technology and the emergence of the mechanical philosophy encouraged mechanical imagery unlikely to have come to the fore in previous ages.

With such a backdrop, "deists suggested the watchmaker analogy: just as watches are set in motion by watchmakers, after which they operate according to their pre-established mechanisms, so also was the world begun by God as creator, after which it and all its parts have operated according to their pre-established natural laws. With these laws perfectly in place, events have unfolded according to the prescribed plan." For Sir Isaac Newton, "the regular motion of the planets made it reasonable to believe in the continued existence of God". Newton also upheld the idea that "like a watchmaker, God was forced to intervene in the universe and tinker with the mechanism from time to time to ensure that it continued operating in good working order". Similarly to Newton, René Descartes (1596–1650) speculated on "the cosmos as a great time machine operating according to fixed laws, a watch created and wound up by the great watchmaker".

William Paley

Watches and timepieces have been used as examples of complicated technology in philosophical discussions. For example, Cicero, Voltaire and René Descartes all used timepieces in arguments regarding purpose. The watchmaker analogy, as described here, was used by Bernard le Bovier de Fontenelle in 1686, but was most famously formulated by Paley.

Paley used the watchmaker analogy in his book Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature, published in 1802. In it, Paley wrote that if a pocket watch is found on a heath, it is most reasonable to assume that someone dropped it and that it was made by at least one watchmaker, not by natural forces:


In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for anything I knew to the contrary, it had lain there forever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer I had before given, that for anything I knew, the watch might have always been there. ... There must have existed, at some time, and at some place or other, an artificer or artificers, who formed [the watch] for the purpose which we find it actually to answer; who comprehended its construction, and designed its use. ... Every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater or more, and that in a degree which exceeds all computation.

— William Paley, Natural Theology (1802)

Paley went on to argue that the complex structures of living things and the remarkable adaptations of plants and animals required an intelligent designer. He believed the natural world was the creation of God and showed the nature of the creator. According to Paley, God had carefully designed "even the most humble and insignificant organisms" and all of their minute features (such as the wings and antennae of earwigs). He believed, therefore, that God must care even more for humanity.

Paley recognised that there is great suffering in nature and that nature appears to be indifferent to pain. His way of reconciling that with his belief in a benevolent God was to assume that life had more pleasure than pain.

A charge of wholesale plagiarism from the earlier work of Bernard Nieuwentyt was brought against Paley in The Athenaeum for 1848, but the famous illustration of the watch was not peculiar to Nieuwentyt and had been used by many others before either Paley or Nieuwentyt. The charge of plagiarism, however, rested on closer similarities. For example, Nieuwentyt wrote "in the middle of a Sandy down, or in a desart [sic] and solitary Place, where few People are used to pass, any one should find a Watch ..."

Joseph Butler

William Paley taught the works of Joseph Butler and appears to have built on Butler's 1736 design arguments of inferring a designer from evidence of design. Butler noted: "As the manifold Appearances of Design and of final Causes, in the Constitution of the World, prove it to be the Work of an intelligent Mind ... The appearances of Design and of final Causes in the constitution of nature as really prove this acting agent to be an intelligent Designer ... ten thousand Instances of Design, cannot but prove a Designer."

Jean-Jacques Rousseau

Rousseau also mentioned the watchmaker theory. He wrote the following in his 1762 book, Emile:

I am like a man who sees the works of a watch for the first time; he is never weary of admiring the mechanism, though he does not know the use of the instrument and has never seen its face. I do not know what this is for, says he, but I see that each part of it is fitted to the rest, I admire the workman in the details of his work, and I am quite certain that all these wheels only work together in this fashion for some common end which I cannot perceive. Let us compare the special ends, the means, the ordered relations of every kind, then let us listen to the inner voice of feeling; what healthy mind can reject its evidence? Unless the eyes are blinded by prejudices, can they fail to see that the visible order of the universe proclaims a supreme intelligence? What sophisms must be brought together before we fail to understand the harmony of existence and the wonderful co-operation of every part for the maintenance of the rest?

Criticism

David Hume

Before Paley published his book, David Hume (1711–1776) had already put forward a number of philosophical criticisms of the watch analogy, and to some extent anticipated the concept of natural selection. His criticisms can be grouped into several major objections.

His first objection is that we have no experience of world-making. Hume highlighted that for everything we claim to know the cause of, we have derived that inference from previous experiences of similar objects being created, or have observed the object itself being created. For example, with a watch, we know it has to be created by a watchmaker because we can observe it being made and compare it to the making of other similar watches or objects, and so deduce that they have alike causes. However, he argues that we have no experience of the universe's creation, or of any other universe's creation with which to compare our own, and never will; therefore, it would be illogical to infer that our universe has been created by an intelligent designer in the same way that a watch has.

The second criticism that Hume offers concerns the form of the argument as an analogy in itself. An analogical argument claims that because object X (a watch) is like object Y (the universe) in one respect, both are therefore probably alike in another, hidden, respect (their cause, having to be created by an intelligent designer). He points out that for an argument from analogy to be successful, the two things being compared must have an adequate number of similarities relevant to the respect being analogized. For example, a kitten and a lion may be very similar in many respects, but just because a lion roars, it would not be correct to infer that a kitten also roars: the similarities between the two are not sufficient, nor relevant enough to the sound each makes. Hume then argues that the universe and a watch likewise do not have enough relevant or close similarities to infer that they were both created the same way. For example, the universe is made of organic natural material, while the watch is made of artificial mechanical materials. He claims that in the same respect, the universe could just as well be compared to something more organic, such as a vegetable (which, as we can observe for ourselves, does not need a 'designer' or a 'watchmaker' to be created). Although he admits the analogy of a universe to a vegetable seems ridiculous, he says that it is just as ridiculous to analogize the universe with a watch.

The third criticism that Hume offers is that even if the argument did give evidence for a designer, it still gives no evidence for the traditional 'omnipotent', 'benevolent' (all-powerful and all-loving) God of traditional Christian theism. One of the main assumptions of Paley's argument is that 'like effects have like causes': machines (like the watch) and the universe share similar features of design, so both must have the same kind of cause, an intelligent designer. However, Hume points out that Paley never establishes to what extent the 'like causes' extend: how similar the creation of a universe really is to the creation of a watch. Instead, Paley moves straight to the conclusion that this designer of the universe is the 'God' of traditional Christianity in whom he believes. Hume, however, takes the idea of 'like causes' and points out some potential absurdities in how far the 'likeness' of these causes could extend if the argument were pursued. One example he uses is that a machine or a watch is usually designed by a whole team of people rather than just one person; if we analogize the two in this way, it would point to there being a group of gods who created the universe, not just a single being. Another example he uses is that complex machines are usually the result of many years of trial and error, with every new machine being an improved version of the last. By analogy, would that not hint that the universe could have been just one of many of God's 'trials', and that there are much better universes out there? And if that were so, surely the 'creator' would not be 'all-loving' and 'all-powerful', having had to resort to 'trial and error' when creating the universe.

Hume also points out there is still a possibility that the universe could have been created by random chance yet still show evidence of design: if the universe is eternal, there would have been an infinite amount of time in which a universe as complex and ordered as our own could form. He called that the 'Epicurean hypothesis': even if matter began random and chaotic, over an unlimited period of time natural forces could have 'evolved', by random particles coming together, into the incredibly ordered system we can observe today, without the need of an intelligent designer as an explanation.

The last objection that he makes draws on the widely discussed problem of evil. He argues that all the daily unnecessary suffering that goes on everywhere within the world is yet another factor that pulls away from the idea that God is an 'omnipotent', 'benevolent' being.

Charles Darwin

Charles Darwin in 1880

When Darwin completed his studies of theology at Christ's College, Cambridge, in 1831, he read Paley's Natural Theology and believed that the work gave rational proof of the existence of God. That was because living beings showed complexity and were exquisitely fitted to their places in a happy world.

Subsequently, on the voyage of the Beagle, Darwin found that nature was not so beneficent, and the distribution of species did not support ideas of divine creation. In 1838, shortly after his return, Darwin conceived his theory that natural selection, rather than divine design, was the best explanation for gradual change in populations over many generations. He published the theory in On the Origin of Species in 1859, and in later editions, he noted responses that he had received:

It can hardly be supposed that a false theory would explain, in so satisfactory a manner as does the theory of natural selection, the several large classes of facts above specified. It has recently been objected that this is an unsafe method of arguing; but it is a method used in judging of the common events of life, and has often been used by the greatest natural philosophers ... I see no good reason why the views given in this volume should shock the religious feelings of any one. It is satisfactory, as showing how transient such impressions are, to remember that the greatest discovery ever made by man, namely, the law of the attraction of gravity, was also attacked by Leibnitz, "as subversive of natural, and inferentially of revealed, religion." A celebrated author and divine has written to me that "he has gradually learnt to see that it is just as noble a conception of the Deity to believe that He created a few original forms capable of self-development into other and needful forms, as to believe that He required a fresh act of creation to supply the voids caused by the action of His laws."

— Charles Darwin, The Origin of Species (1859)

Darwin reviewed the implications of this finding in his autobiography:

Although I did not think much about the existence of a personal God until a considerably later period of my life, I will here give the vague conclusions to which I have been driven. The old argument of design in nature, as given by Paley, which formerly seemed to me so conclusive, fails, now that the law of natural selection has been discovered. We can no longer argue that, for instance, the beautiful hinge of a bivalve shell must have been made by an intelligent being, like the hinge of a door by man. There seems to be no more design in the variability of organic beings and in the action of natural selection, than in the course which the wind blows. Everything in nature is the result of fixed laws.

— Charles Darwin, The Autobiography of Charles Darwin 1809–1882. With the original omissions restored.

The idea that nature was governed by laws was already common, and in 1833, William Whewell, a proponent of the natural theology that Paley had inspired, had written that "with regard to the material world, we can at least go so far as this—we can perceive that events are brought about not by insulated interpositions of Divine power, exerted in each particular case, but by the establishment of general laws." Darwin, who spoke of the "fixed laws", concurred with Whewell, writing in his second edition of On The Origin of Species:

There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

— Charles Darwin, The Origin of Species (1860)

By the time that Darwin published his theory, theologians of liberal Christianity were already supporting such ideas, and by the late 19th century, their modernist approach was predominant in theology. In science, evolutionary theory incorporating Darwin's natural selection became generally accepted.

Richard Dawkins


In The Blind Watchmaker, Richard Dawkins argues that the watch analogy conflates the complexity that arises from living organisms that are able to reproduce themselves (and may become more complex over time) with the complexity of inanimate objects, unable to pass on any reproductive changes (such as the multitude of parts manufactured in a watch). The comparison breaks down because of this important distinction.

In a BBC Horizon episode, also entitled The Blind Watchmaker, Dawkins described Paley's argument as being "as mistaken as it is elegant". In both contexts, he saw Paley as having proposed an incorrect solution to a genuine problem, but he did not disparage Paley himself. In his essay The Big Bang, Steven Pinker discusses Dawkins's coverage of Paley's argument, adding: "Biologists today do not disagree with Paley's laying out of the problem. They disagree only with his solution."

In his book The God Delusion, Dawkins argues that rather than luck, the evolution of human life is the result of natural selection. He suggests that it is fallacious to view "coming about by chance" and "coming about by design" as the only possibilities, with natural selection being the alternative to the existence of an intelligent designer. By amassing a large number of small changes, the theory of natural selection allows for a seemingly impossible end product to be produced.

In addition, he argues that the watchmaker's creation of the watch implies that the watchmaker must be more complex than the watch. Design is top-down: someone or something more complex designs something less complex. Following the line upwards, the watch was designed by a (necessarily more complex) watchmaker, and the watchmaker must in turn have been created by a being more complex than himself. So the question becomes: who designed the designer? Dawkins argues that (a) this line continues ad infinitum, and (b) it does not explain anything. Evolution, on the other hand, takes a bottom-up approach; it explains how more complexity can arise gradually by building on or combining lesser complexity.

Richerson and Boyd

Biologist Peter Richerson and anthropologist Robert Boyd offer an oblique criticism by arguing that watches were not "hopeful monsters created by single inventors," but were created by watchmakers building up their skills in a cumulative fashion over time, each contributing to a watch-making tradition from which any individual watchmaker draws their designs.

Contemporary usage

In the early 20th century, the modernist theology of higher criticism was contested in the United States by Biblical literalists, who campaigned successfully against the teaching of evolution and began calling themselves creationists in the 1920s. When teaching of evolution was reintroduced into public schools in the 1960s, they adopted what they called creation science that had a central concept of design in similar terms to Paley's argument. That idea was then relabeled intelligent design, which presents the same analogy as an argument against evolution by natural selection without explicitly stating that the "intelligent designer" was God. The argument from the complexity of biological organisms was now presented as the irreducible complexity argument, the most notable proponent of which was Michael Behe, and, leveraging off the verbiage of information theory, the specified complexity argument, the most notable proponent of which was William Dembski.

The watchmaker analogy was referenced in the 2005 Kitzmiller v. Dover Area School District trial. Throughout the trial, Paley was mentioned several times. The defense's expert witness John Haught noted that both intelligent design and the watchmaker analogy are "reformulations" of the same theological argument. On day 21 of the trial, Mr. Harvey walked Dr. Minnich through a modernized version of Paley's argument, substituting a cell phone for the watch. In his ruling, the judge stated that the use of the argument from design by intelligent design proponents "is merely a restatement of the Reverend William Paley's argument applied at the cell level," adding "Minnich, Behe, and Paley reach the same conclusion, that complex organisms must have been designed using the same reasoning, except that Professors Behe and Minnich refuse to identify the designer, whereas Paley inferred from the presence of design that it was God." The judge ruled that such an inductive argument is not accepted as science because it is unfalsifiable.

Generative AI

From Wikipedia, the free encyclopedia
Théâtre D'opéra Spatial (Space Opera Theater, 2022), an image made with Midjourney that won an award at the Colorado State Fair's fine art competition

Generative artificial intelligence, commonly known as generative AI or GenAI, is a subfield of artificial intelligence that uses generative models to generate text, images, videos, audio, software code (vibe coding) or other forms of data. These models learn the underlying patterns and structures of their training data, and use them to generate new data in response to input, which often takes the form of natural language prompts.

The prevalence of generative AI tools has increased significantly since the AI boom in the 2020s. This boom was made possible by improvements in deep neural networks, particularly large language models (LLMs), which are based on the transformer architecture. Generative AI applications include chatbots such as ChatGPT, Claude, Copilot, DeepSeek, Google Gemini and Grok; text-to-image models such as DALL-E, Firefly, Stable Diffusion, and Midjourney; and text-to-video models such as Veo, LTX and Sora.

Companies in a variety of sectors have used generative AI, including those in software development, healthcare, finance, entertainment, customer service, sales and marketing, art, writing, and product design.

Generative AI has been used for cybercrime, and to deceive and manipulate people through fake news and deepfakes. Generative AI models have been trained on copyrighted works without the rightholders' permission. Many generative AI systems use large-scale data centers, whose environmental impacts include electronic waste, consumption of fresh water for cooling, and high energy consumption that is estimated to be growing steadily.

History

Early history

The origins of algorithmically generated media can be traced to the development of the Markov chain, which has been used to model natural language since the early 20th century. Russian mathematician Andrey Markov introduced the concept in 1906, including an analysis of vowel and consonant patterns in Eugene Onegin. Once trained on a text corpus, a Markov chain can generate probabilistic text.
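
The mechanism is simple enough to sketch in a few lines of Python. This is a minimal, hypothetical illustration (an order-1 word chain over a toy corpus), not Markov's original analysis:

```python
import random
from collections import defaultdict

def train_markov(text):
    """Count which words follow which in the corpus (an order-1 chain)."""
    words = text.split()
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=10):
    """Walk the chain, sampling each next word from observed successors."""
    word, output = start, [start]
    for _ in range(length):
        successors = transitions.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat slept on the mat"
print(generate(train_markov(corpus), "the"))
```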

By the early 1970s, artists began using computers to extend generative techniques beyond Markov models. Harold Cohen developed and exhibited works produced by AARON, a pioneering computer program designed to autonomously create paintings. The terms generative AI planning or generative planning were used in the 1980s and 1990s to refer to AI planning systems, especially computer-aided process planning, used to generate sequences of actions to reach a specified goal. Generative AI planning systems used symbolic AI methods such as state space search and constraint satisfaction and were a "relatively mature" technology by the early 1990s. They were used to generate crisis action plans for military use, process plans for manufacturing and decision plans such as in prototype autonomous spacecraft.

Generative neural networks (since the late 2000s)

Above: An image classifier, an example of a neural network trained with a discriminative objective. Below: A text-to-image model, an example of a network trained with a generative objective.

Machine learning uses both discriminative models and generative models to predict data. Beginning in the late 2000s and early 2010s, advances in deep learning led to major improvements in image classification, speech recognition, and natural language processing. Neural networks in this period were typically trained as discriminative models due to the relative difficulty of training generative models.

In 2014, the introduction of models such as the variational autoencoder (VAE) and generative adversarial network (GAN) enabled effective deep generative modeling of complex data such as images.

In 2017, the Transformer architecture enabled further advances in generative modeling compared to earlier long short-term memory (LSTM) networks. This led to the development of generative pre-trained transformer (GPT) models, beginning with GPT-1 in 2018.

Generative AI adoption

AI-generated images have become much more advanced.

In March 2020, the release of 15.ai, a free web application created by an anonymous MIT researcher that could generate convincing character voices using minimal training data, was one of the earliest publicly available uses for generative AI. The platform is credited as the first mainstream service for audio deepfakes.

In 2021, DALL-E, a closed-source transformer-based generative model developed by OpenAI, drew widespread attention to text-to-image generation.

Other projects, including open-source approaches such as VQGAN+CLIP and DALL·E Mini (later renamed Craiyon), made similar systems more accessible to the public.

Dream by Wombo was released at the end of 2021, followed by the releases of Midjourney and Stable Diffusion in 2022.

In November 2022, the public release of ChatGPT popularized generative AI for general-purpose text-based tasks.

Private investment in AI (pink) and generative AI (green)

In a 2024 survey by marketing research firm Ipsos, Asia–Pacific countries were significantly more optimistic than Western societies about generative AI and show higher adoption rates. Despite expressing concerns about privacy and the pace of change, 68% of Asia-Pacific respondents believed that AI was having a positive impact on the world, compared to 57% globally. According to a survey by SAS and Coleman Parkes Research, as of 2023, 83% of Chinese respondents were using the technology, exceeding both the global average of 54% and the U.S. rate of 65%. A UN report indicated that Chinese entities filed over 38,000 generative AI patents from 2014 to 2023, more than any other country. A 2024 survey by the Just So Soul social media app reported that 18% of respondents born after 2000 used generative AI "almost every day", and that over 60% of respondents like or love AI-generated content (AIGC), while less than 3% dislike or hate it.

By mid-2025, companies were increasingly abandoning generative AI pilot projects amid difficulties with integration, data quality and unmet returns, leading analysts at Gartner and The Economist to characterize the period as entering the Gartner hype cycle's "trough of disillusionment" phase.

Applications

Generative artificial intelligence has been applied across multiple industries for content creation and automation. In healthcare, generative models are used for drug discovery and the generation of synthetic medical data to train diagnostic systems. In finance, they are used for report drafting, data generation, and customer service automation. Media and entertainment industries use generative systems for tasks such as music composition, script development, and image or video generation. Researchers and policymakers have raised concerns regarding accuracy, misuse, and impacts on academic and professional work.

Text and software code

Large language models (LLMs) are trained on tokenized text from large corpora and are capable of natural language processing, machine translation, and natural language generation.

LLMs can be used as foundation models for a variety of downstream tasks. They can also be trained on source code to generate programs from prompts.

Audio

In 2016, DeepMind's WaveNet demonstrated that deep neural networks can generate raw audio waveforms. This enabled more realistic speech synthesis compared to earlier approaches. Subsequent systems such as Tacotron 2 demonstrated end-to-end neural text-to-speech generation.

Images

Generative AI can be used to create visual art. Such systems are trained on image–text pairs. Examples include Stable Diffusion, DALL-E, and Midjourney.

Video

Generative AI can be used to produce photorealistic videos. Systems such as Runway have demonstrated text-to-video generation capabilities.

Robotics

Generative models can be used for motion planning and robot control by learning from prior data.

3D modeling

Generative models can assist in automating 3D modeling tasks, including generating 3D assets from text or images.

World models

World models are neural networks designed to learn representations of physical environments, including spatial and dynamic properties. Recent multimodal systems have expanded these capabilities by integrating vision, language, and action into unified models.

Software and hardware

Architecture of a generative AI agent

Generative AI models are used to power chatbot products such as ChatGPT, programming tools such as GitHub Copilot, text-to-image products such as Midjourney, and text-to-video products such as Runway Gen-2. Generative AI features have been integrated into a variety of existing commercially available products such as Microsoft Office (Microsoft Copilot), Google Photos, and the Adobe Suite (Adobe Firefly). Many generative AI models are also available as open-source software, including Stable Diffusion and the LLaMA language model.

Smaller generative AI models with up to a few billion parameters can run on smartphones, embedded devices, and personal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on a Raspberry Pi 4 and one version of Stable Diffusion can run on an iPhone 11.
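
As a concrete illustration, a small model of this class can be run locally with the Hugging Face transformers library. The sketch below is illustrative: the model name is only an example of a roughly 1-billion-parameter causal language model, and any comparably small model would do.

```python
# Minimal sketch of local text generation with the `transformers` library.
# The model name is an assumed example; substitute any small causal LM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model, ~1.1B parameters
)
result = generator("Generative AI is", max_new_tokens=40)
print(result[0]["generated_text"])
```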

Larger models with tens of billions of parameters can run on laptop or desktop computers. To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by NVIDIA and AMD or the Neural Engine included in Apple silicon products. For example, the 65 billion parameter version of LLaMA can be configured to run on a desktop PC.

The advantages of running generative AI locally include protection of privacy and intellectual property, and avoidance of rate limiting and censorship. The subreddit r/LocalLLaMA in particular focuses on using consumer-grade gaming graphics cards through such techniques as compression.

Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as NVIDIA's H100) or AI accelerator chips (such as Google's TPU). These very large models are typically accessed as cloud services over the Internet.

In 2022, the United States New Export Controls on Advanced Computing and Semiconductors to China imposed restrictions on exports to China of GPU and AI accelerator chips used for generative AI. Chips such as the NVIDIA A800 and the Biren Technology BR104 were developed to meet the requirements of the sanctions.

There is free software on the market capable of recognizing text generated by generative artificial intelligence (such as GPTZero), as well as images, audio or video coming from it. Potential mitigation strategies for detecting generative AI content include digital watermarking, content authentication, information retrieval, and machine learning classifier models. Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work.

Generative models and training techniques

Generative adversarial networks

Workflow for the training of a generative adversarial network

Generative adversarial networks (GANs) are a generative modeling technique which consist of two neural networks—the generator and the discriminator—trained simultaneously in a competitive setting. The generator creates synthetic data by transforming random noise into samples that resemble the training dataset. The discriminator is trained to distinguish the authentic data from synthetic data produced by the generator. The two models engage in a minimax game: the generator aims to create increasingly realistic data to "fool" the discriminator, while the discriminator improves its ability to distinguish real from fake data. This continuous training setup enables the generator to produce high-quality and realistic outputs.
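
A condensed training loop makes the minimax game concrete. The PyTorch sketch below uses a toy two-dimensional "dataset" (a Gaussian blob) in place of images; the network sizes and hyperparameters are illustrative, not drawn from any particular GAN paper.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # toy "training data": a Gaussian blob
    fake = G(torch.randn(64, 8))           # generator maps noise to samples

    # Discriminator step: label real data 1, generated data 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its output real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```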

Variational autoencoders

Comparison between images generated by a VAE (left) and a GAN (right). VAEs tend to produce smoother but blurrier images due to their probabilistic decoding.

Variational autoencoders (VAEs) are deep learning models that probabilistically encode data. They are typically used for tasks such as noise reduction from images, data compression, identifying unusual patterns, and facial recognition. Unlike standard autoencoders, which compress input data into a fixed latent representation, VAEs model the latent space as a probability distribution, allowing for smooth sampling and interpolation between data points. The encoder ("recognition model") maps input data to a latent space, producing means and variances that define a probability distribution. The decoder ("generative model") samples from this latent distribution and attempts to reconstruct the original input. VAEs optimize a loss function that includes both the reconstruction error and a Kullback–Leibler divergence term, which ensures the latent space follows a known prior distribution. VAEs are particularly suitable for tasks that require structured but smooth latent spaces, although they may create blurrier images than GANs. They are used for applications like image generation, data interpolation and anomaly detection.
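
The two-part objective is short enough to state in code. A minimal PyTorch sketch, assuming an encoder that outputs mu and log_var and a decoder that reconstructs the input:

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, keeping the sampling step differentiable."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def vae_loss(x, x_recon, mu, log_var):
    """Reconstruction error plus KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```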

The full architecture of a GPT model

Transformers

Transformers became the foundation for the generative pre-trained transformer (GPT) series developed by OpenAI, replacing traditional recurrent and convolutional models.

The self-attention mechanism enables the model to determine the relative importance of each token in a sequence when predicting the next token, thereby improving contextual understanding. Unlike recurrent neural networks, transformers process tokens in parallel, which improves training efficiency and scalability.
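
A minimal NumPy sketch of scaled dot-product self-attention shows the core computation; real transformers add learned query/key/value projections, multiple heads, and masking.

```python
import numpy as np

def self_attention(X):
    """X: (sequence_length, d_model). Each output row mixes all tokens."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights
    return weights @ X                               # weighted mixture of tokens

tokens = np.random.randn(5, 8)        # 5 tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)   # (5, 8)
```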

Transformers are typically pre-trained on large corpora using self-supervised learning and then fine-tuned for specific tasks.

Law and regulation

In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the Biden administration in July 2023 to watermark AI-generated content. In October 2023, Executive Order 14110 applied the Defense Production Act to require all US companies to report information to the federal government when training certain high-impact AI models.

In the European Union (EU), the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such.

In China, the Interim Measures for the Management of Generative AI Services, introduced by the Cyberspace Administration of China, regulate any public-facing generative AI. They include requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI services must "adhere to socialist core values".

Training with copyrighted content

Generative AI systems such as ChatGPT and Midjourney are trained on large, publicly available datasets that include copyrighted works. AI developers have argued that such training is protected under fair use, while copyright holders have argued that it infringes their rights.

Proponents of fair use training have argued that it is a transformative use and does not involve making copies of copyrighted works available to the public. Critics have argued that image generators such as Midjourney can create nearly-identical copies of some copyrighted images, and that generative AI programs compete with the content they are trained on.

As of 2024, several lawsuits related to the use of copyrighted material in training are ongoing. Getty Images has sued Stability AI over the use of its images to train Stable Diffusion. Both the Authors Guild and The New York Times have sued Microsoft and OpenAI over the use of their works to train ChatGPT.

A separate question is whether AI-generated works can qualify for copyright protection. The United States Copyright Office has ruled that works created by artificial intelligence without any human input cannot be copyrighted, because they lack human authorship. Some legal professionals have suggested that Naruto v. Slater (2018), in which the U.S. 9th Circuit Court of Appeals held that non-humans cannot be copyright holders of artistic works, could be a potential precedent in copyright litigation over works created by generative AI. However, the office has also begun taking public input to determine if these rules need to be refined for generative AI.

In January 2025, the United States Copyright Office (USCO) released extensive guidance regarding the use of AI tools in the creative process, establishing that "...generative AI systems also offer tools that similarly allow users to exert control. [These] can enable the user to control the selection and placement of individual creative elements. Whether such modifications rise to the minimum standard of originality required under Feist will depend on a case-by-case determination. In those cases where they do, the output should be copyrightable." Subsequently, the USCO registered the first visual artwork to be composed of entirely AI-generated materials, titled "A Single Piece of American Cheese".

Concerns

The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls to pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the United Nations Security Council, Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale". In addition, generative AI has a significant carbon footprint.

Societal impacts

Academic honesty

Generative AI can be used to generate and modify academic prose, paraphrase sources, and translate languages. The use of generative AI in a classroom setting has challenged traditional definitions of academic plagiarism, leading to a "cat-and-mouse" dynamic between students using AI and institutions attempting to detect it. In the immediate wake of ChatGPT's release, many school districts and universities issued temporary bans on the technology, though many institutions have since moved toward policies of managed integration. However, the implementation of these policies often lacks clarity. Research suggests that the burden of interpreting "acceptable use" frequently falls on individual students and teachers, creating an environment where academic honesty becomes difficult to define and enforce.

A commonly proposed use for teachers is grading and giving feedback. Companies like Pearson and ETS use AI to score grammar, mechanics, usage, and style, but not for main ideas or overall structure. The National Council of Teachers of English stated that machine scoring makes students feel their writing is not worth reading. AI scoring has also given unfair results for students from different ethnic backgrounds.

Fears over job losses

A picketer at the 2023 Writers Guild of America strike. While not a top priority, one of the WGA's 2023 requests was "regulations around the use of (generative) AI".

From the early days of the development of AI, there have been arguments put forward by ELIZA creator Joseph Weizenbaum and others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements. In April 2023, it was reported that image generation AI has resulted in 70% of the jobs for video game illustrators in China being lost. In July 2023, developments in generative AI contributed to the 2023 Hollywood labor disputes. Fran Drescher, president of the Screen Actors Guild, declared that "artificial intelligence poses an existential threat to creative professions" during the 2023 SAG-AFTRA strike. Voice generation AI has been seen as a potential challenge to the voice acting sector.

However, a 2025 study concluded that the US labor market had so far not experienced a discernible disruption from generative AI. Another study reported that Danish workers who used chatbots saved 2.8% of their time on average, and found no significant change in earnings or hours worked.

Use in journalism

In January 2023, Futurism broke the story that CNET had been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of them. In April 2023, Die Aktuelle published an AI-generated fake interview with Michael Schumacher. In May 2024, Futurism noted that a content management system video by AdVon Commerce, a company that had used generative AI to produce articles for many outlets, appeared to show that it "had produced tens of thousands of articles for more than 150 publishers". In 2025, a report from the American Sunlight Project stated that the Pravda network was publishing as many as 10,000 articles a day, and concluded that much of this content aimed to push Russian narratives into large language models through their training data.

In June 2024, Reuters Institute published its Digital News Report for 2024. In a survey of people in America and Europe, Reuters Institute reports that 52% and 47% respectively are uncomfortable with news produced by "mostly AI with some human oversight", and 23% and 15% respectively report being comfortable. 42% of Americans and 33% of Europeans reported that they were comfortable with news produced by "mainly human with some help from AI". The results of global surveys reported that people were more uncomfortable with news topics including politics (46%), crime (43%), and local news (37%) produced by AI than other news topics.

A 2025 Pew Research survey found that roughly half of all U.S. adults expect AI to have a very (24%) or somewhat (26%) negative impact on the news people get in the U.S. over the next 20 years. Because AI cannot itself do journalism, which requires interviewing people and a high degree of accuracy, the greater threat it poses to journalism lies in the information it takes from publishers.

Bias

Racial and gender bias

Generative AI models can reflect and amplify cultural bias present in their training data. For example, a language model may associate certain professions with specific genders if such patterns are prevalent in the data. Similarly, image generation systems prompted with terms such as "a photo of a CEO" have been observed to disproportionately generate images of white male individuals when trained on biased datasets.

Various methods have been proposed to mitigate bias in generative AI systems, including modifying input prompts and reweighting training data.
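
One of the simplest reweighting schemes is inverse-frequency weighting. The sketch below is a generic illustration of the idea, not a method attributed to any particular system:

```python
from collections import Counter

def balance_weights(group_labels):
    """Weight each example inversely to its group's frequency, so
    under-represented groups contribute as much to the loss as common ones."""
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    return [n / (k * counts[g]) for g in group_labels]

# Example: a dataset where "group_a" outnumbers "group_b" three to one.
labels = ["group_a"] * 6 + ["group_b"] * 2
print(balance_weights(labels))  # group_b examples get 3x the weight of group_a
```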

Misinformation and disinformation

Deepfakes

Deepfakes (a portmanteau of "deep learning" and "fake") are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks. Deepfakes have garnered widespread attention and concerns for their uses in deepfake celebrity pornographic videos, revenge porn, fake news, hoaxes, health disinformation, financial fraud, and covert foreign election interference.

In July 2023, the fact-checking company Logically found that the popular generative AI models Midjourney, DALL-E 2 and Stable Diffusion would produce plausible disinformation images when prompted to do so, such as images of electoral fraud in the United States and Muslim women supporting India's Bharatiya Janata Party.

Audio deepfakes

Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI. In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards and identity verification.

AI-generated music has spawned both fandoms and concerns. The same software used to clone voices has been applied to famous musicians' voices to create songs that mimic them, attracting both tremendous popularity and criticism. Similar techniques have also been used to create improved-quality or full-length versions of songs that have been leaked or have yet to be released.

Information laundering

Generative AI has been noted for its use by state-sponsored propaganda campaigns in information laundering. According to a 2025 report by Graphika, generative AI is used to launder articles from Chinese state media such as China Global Television Network through various social media sites in an attempt to disguise the articles' origin.

Content quality

The New York Times defines slop as analogous to spam: "shoddy or unwanted A.I. content in social media, art, books, and ... in search results." Journalists have expressed concerns about the scale of low-quality generated content with respect to social media content moderation, the monetary incentives from social media companies to spread such content, false political messaging, spamming of scientific research paper submissions, increased time and effort to find higher quality or desired content on the Internet, the indexing of generated content by search engines, and on journalism itself. Studies have found that AI can create inaccurate claims, citations or summaries that sound confidently correct, a phenomenon called hallucination.

A paper published by researchers at Amazon Web Services AI Labs found that over 57% of sentences, from a sample of over 6 billion sentences from Common Crawl, a snapshot of web pages, were machine translated. Many of these automated translations were seen as lower quality, especially for sentences that were translated across at least three languages. Many lower-resource languages (e.g., Wolof, Xhosa) were translated across more languages than higher-resource languages (e.g., English, French).

In September 2024, Robyn Speer, the author of wordfreq, an open source database that calculated word frequencies based on text from the Internet, announced that she had stopped updating the data for several reasons: high costs for obtaining data from Reddit and Twitter, excessive focus on generative AI compared to other methods in the natural language processing community, and that "generative AI has polluted the data".

The adoption of generative AI tools led to an explosion of AI-generated content across multiple domains. A study from University College London estimated that in 2023, more than 60,000 scholarly articles—over 1% of all publications—were likely written with LLM assistance. According to Stanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs.

If AI-generated content is included in new data crawls from the Internet for additional training of AI models, defects in the resulting models may occur. Training an AI model exclusively on the output of another AI model produces a lower-quality model. Repeating this process, where each new model is trained on the previous model's output, leads to progressive degradation and eventually results in a "model collapse" after multiple iterations.
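
The degradation can be illustrated with a toy experiment: repeatedly fit a simple model (here a Gaussian) to samples drawn from the previous generation's model. The estimated spread performs a biased random walk that tends toward collapse over many generations. This is a schematic illustration of the dynamic, not the methodology of the studies above:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                              # generation 0: the real data
for generation in range(1, 11):
    samples = rng.normal(mu, sigma, size=200)     # "train" on the previous model's output
    mu, sigma = samples.mean(), samples.std()     # fit the next generation's model
    print(f"generation {generation}: sigma = {sigma:.3f}")
```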

Conversely, synthetic data can be deployed to train machine learning models while preserving user privacy. The approach is not limited to text generation; image generation has been employed to train computer vision models.

Malicious use

Illegal imagery

Many websites that allow explicit AI-generated images or videos have been created, and this capability has been used to create illegal content, such as depictions of rape, child sexual abuse material, necrophilia, and zoophilia.

Cybercrime

Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including phishing scams. Deepfake video and audio have been used to create disinformation and fraud. In 2020, former Google click fraud czar Shuman Ghosemajumder argued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information. Additionally, large language models and other forms of text-generation AI have been used to create fake reviews of e-commerce websites to boost ratings. Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT.

A 2023 study showed that generative AI can be vulnerable to jailbreaks, reverse psychology and prompt injection attacks, enabling attackers to obtain help with harmful requests, such as for crafting social engineering and phishing attacks. Additionally, other researchers have demonstrated that open-source models can be fine-tuned to remove their safety restrictions at low cost.

RAG poisoning

In 2025, Israel signed a $6 million contract with the US-based firm Clock Tower X that aimed to influence ChatGPT, Gemini and Grok by spreading pro-Israel information on social media and websites. The campaign sought to take advantage of the retrieval-augmented generation (RAG) technique, which LLMs use to provide more up-to-date information.
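
In outline, RAG pastes retrieved web text directly into the model's prompt, which is what makes planted content influential. A schematic sketch, in which search and llm are hypothetical stand-ins for a web-search API and a language model:

```python
def answer_with_rag(question, search, llm, top_k=3):
    """Hypothetical RAG pipeline: retrieve snippets, then condition the LLM on them."""
    snippets = search(question, top_k=top_k)   # fresh web content, trusted implicitly
    context = "\n\n".join(snippets)
    prompt = (
        f"Answer the question using this context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)  # planted pages among `snippets` can steer this answer
```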

Privacy and data governance

Extraterritorial data access

The CLOUD Act allows United States authorities to request data from covered service providers, including some AI service providers, regardless of where the data is physically stored. Courts can require parent companies to provide data held by their subsidiaries, and such orders may be accompanied by nondisclosure requirements preventing the provider from notifying affected users. This framework has been described in legal commentary as creating legal tension with Article 48 of the General Data Protection Regulation (GDPR), which restricts the transfer of personal data in response to foreign court or administrative orders unless based on an international agreement. As a result, service providers operating in both jurisdictions may face competing legal obligations under U.S. and EU law.

Environmental and industry impacts

Energy and environment

According to research institute Epoch AI, energy consumption per typical ChatGPT query (0.3 watt-hours) is small compared to the average U.S. household consumption per minute (almost 20 watt-hours). Queries containing long entries can consume significantly more energy (2.5 watt-hours for a query of around 7,500 words).

AI has a significant carbon footprint due to growing energy consumption from both training and usage. Scientists and journalists have expressed concerns about the environmental impact that the development and deployment of generative models are having: high CO2 emissions, large amounts of freshwater used for data centers, high amounts of electricity usage, electronic waste, and pollution due to backup diesel generator exhaust. There is also concern that these impacts may increase as these models are incorporated into widely used search engines such as Google Search and Bing, as chatbots and other applications become more popular and as models need to be retrained.

The carbon footprint of generative AI globally is estimated to be growing steadily, with potential annual emissions ranging from 18.21 to 245.94 million tons of CO2 by 2035, with the highest estimates for 2035 nearing the impact of the United States beef industry on emissions (currently estimated to emit 257.5 million tons annually as of 2024).

Proposed mitigation strategies include factoring potential environmental costs prior to model development or data collection, increasing efficiency of data centers to reduce electricity/energy usage, building more efficient machine learning models, minimizing the number of times that models need to be retrained, developing a government-directed framework for auditing the environmental impact of these models, regulating for transparency of these models, regulating their energy and water usage, encouraging researchers to publish data on their models' carbon footprint, and increasing the number of subject matter experts who understand both machine learning and climate science.

Reliance on industry giants

Training frontier AI models requires an enormous amount of computing power. Usually only Big Tech companies have the financial resources to make such investments. Smaller start-ups such as Cohere and OpenAI end up buying access to data centers from Google and Microsoft respectively.

Detection and awareness

Tools such as GPTZero can detect content generated by AI. However, they can also make false accusations (false positives). Digital watermarking is a technique that improves detection accuracy. It works by altering the generated content at the source, in subtle ways which can be detected by corresponding software.
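
One published family of text-watermarking schemes biases generation toward a keyed "green list" of tokens, so a detector holding the key can test whether green tokens appear improbably often. The sketch below is a toy illustration of that idea under simplified assumptions; real schemes hash on context windows and use proper significance tests.

```python
import hashlib

def green_list(vocab, key):
    """Deterministically mark about half the vocabulary as 'green' for this key."""
    return {w for w in vocab
            if int(hashlib.sha256((key + w).encode()).hexdigest(), 16) % 2 == 0}

def looks_watermarked(text, vocab, key, threshold=0.7):
    """Flag text whose fraction of green tokens is improbably high."""
    green = green_list(vocab, key)
    words = [w for w in text.split() if w in vocab]
    return bool(words) and sum(w in green for w in words) / len(words) > threshold
```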

In 2023, OpenAI developed a watermarking tool for ChatGPT but did not release it, worried that users would switch to competitors. OpenAI also argued that such a watermark would be easy to circumvent, for example by asking another AI to rephrase the output.

In March 2025, the Cyberspace Administration of China issued rules requiring online service providers to label AI content.

In May 2025, Google deployed its watermarking tool, SynthID. It marks output from Gemini (text), Imagen (images), and Veo (video). To detect output from these products, one uses Google's "SynthID detector" portal.

In June 2025, users mistakenly accused gaming companies of using generative AI for the video games Little Droid and Catly.
