Painting by Théodore Géricault portraying an old man with a grandiose delusion of power and military command. Grandiose delusions are common in delusional disorder.
Delusional disorder is a mental disorder in which a person has delusions, but with no accompanying prominent hallucinations, thought disorder, mood disorder, or significant flattening of affect. Delusions are a specific symptom of psychosis. Delusions can be bizarre or non-bizarre in content; non-bizarre delusions are fixed false beliefs that involve situations
that could occur in real life, such as being harmed or poisoned. Apart from their delusion or delusions, people with delusional disorder
may continue to socialize and function in a normal manner and their
behavior may not necessarily seem odd. However, the preoccupation with delusional ideas can be disruptive to their overall lives.
For the diagnosis to be made, auditory and visual hallucinations
cannot be prominent, though olfactory or tactile hallucinations related
to the content of the delusion may be present. The delusions cannot be due to the effects of a drug, medication, or general medical condition, and delusional disorder cannot be diagnosed in an individual previously properly diagnosed with schizophrenia. A person with delusional disorder may be high-functioning in daily life as measured, for example, by the Global Assessment of Functioning. Recent and comprehensive meta-analyses of scientific studies point to an association with a deterioration in aspects of IQ in psychotic patients, in particular, perceptual reasoning, although the between-group differences were small.
According to German psychiatrist Emil Kraepelin, patients with delusional disorder remain coherent, sensible, and reasonable. The Diagnostic and Statistical Manual of Mental Disorders (DSM) defines six subtypes of the disorder: erotomanic (belief that someone is in love with one), grandiose (belief that one is the greatest, strongest, fastest, richest, or most intelligent person ever), jealous (belief that one is being cheated on), persecutory
(delusions that one or more people are treating the person with the
disorder in a malevolent or harmful way), somatic (belief that one has a
disease or medical condition), and mixed (i.e., having features of more
than one subtype). Delusions also occur as symptoms of many other mental disorders, especially the other psychotic disorders.
The DSM-IV and psychologists agree that personal beliefs should be evaluated with careful regard for cultural and religious differences, as some cultures have normalized beliefs that may be considered delusional in other cultures.
An earlier, now-obsolete, nosological
name for delusional disorder was "paranoia". This should not be
confused with the modern definition of paranoia (i.e., persecutory
ideation specifically).
Erotomanic type (erotomania):
delusion that another person, often a prominent figure, is in love with
the individual. The individual may breach the law as they try to
obsessively make contact with the desired person.
Grandiose type (megalomania):
delusion of inflated worth, power, knowledge, identity or believing
oneself to be a famous person, claiming the actual person is an impostor
or an impersonator.
Jealous type:
delusion that the individual's sexual partner is unfaithful when it is
untrue. The patient may follow the partner, check text messages, emails,
phone calls etc. in an attempt to find "evidence" of the infidelity.
Persecutory type:
This delusion is a common subtype. It includes the belief that the
person (or someone to whom the person is close) is being malevolently
treated in some way. The patient may believe that they have been
drugged, spied upon, harmed, harassed and so on and may seek "justice"
by making reports, taking action or even acting violently.
Somatic type: delusions that the person has some physical defect or general medical condition.
Mixed type: delusions with characteristics of more than one of the above types but with no one theme predominating.
Unspecified type: delusions that cannot be clearly determined or characterized in any of the categories in the specific types.
Signs and symptoms
The following can indicate a delusion:
An individual expresses an idea or belief with unusual persistence or force, even when evidence suggests the contrary.
That idea appears to have an undue influence on the person's life,
and the way of life is often altered to an inexplicable extent.
Despite their profound conviction, there is often a quality of
secretiveness or suspicion when the person is questioned about it.
The individual tends to be humorless and oversensitive, especially about the belief.
There is a quality of centrality: no matter how unlikely it
is that these strange things are happening to the person, they accept
them relatively unquestioningly.
An attempt to contradict the belief is likely to arouse an
inappropriately strong emotional reaction, often with irritability and
hostility. They will not accept any other opinions.
The belief is, at the least, unlikely, and out of keeping with the individual's social, cultural, and religious background.
The person is emotionally over-invested in the idea and it overwhelms other elements of their psyche.
The delusion, if acted out, often leads to behaviors which are
abnormal, and out of character, although perhaps understandable in light
of the delusional beliefs.
Other people who know the individual observe that the belief and behavior are uncharacteristic and alien.
Additional characteristics of delusional disorder include the following:
It is a primary disorder.
It is a stable disorder characterized by the presence of delusions to which the patient clings with extraordinary tenacity.
The illness is chronic and frequently lifelong.
The delusions are logically constructed and internally consistent.
The delusions do not interfere with general logical reasoning
(although within the delusional system the logic is perverted) and there
is usually no general disturbance of behavior. If disturbed behavior
does occur, it is directly related to the delusional beliefs.
The individual experiences a heightened sense of self-reference.
Events which, to others, are nonsignificant are of enormous significance
to them, and the atmosphere surrounding the delusions is highly
charged.
However, this should not be confused with gaslighting, in which a person denies the truth and causes the person being gaslit to think that they are delusional.
Causes
The cause of delusional disorder is unknown, but genetic, biochemical, and environmental factors may play a significant role in its development. Some people with delusional disorders may have an imbalance in neurotransmitters, the chemicals that carry messages between nerve cells in the brain. There does seem to be some familial component, and immigration (generally for persecutory reasons), drug abuse, excessive stress, being married, being employed, low socioeconomic status, celibacy among men, and widowhood among women may also be risk factors. Delusional disorder is currently thought to be on the same spectrum or dimension as schizophrenia, but people with delusional disorder, in general, may have less symptomatology and functional disability.
Diagnosis
Differential diagnosis includes ruling out other causes such as drug-induced conditions, dementia, infections, metabolic disorders, and endocrine disorders. Other psychiatric disorders must then be ruled out. In delusional
disorder, mood symptoms tend to be brief or absent, and unlike schizophrenia, delusions are non-bizarre and hallucinations are minimal or absent.
Interviews are important tools to obtain information about the
patient's life situation and history to help make a diagnosis.
Clinicians generally review earlier medical records to gather a full history. Clinicians also try to interview the patient's immediate family, as this can be helpful in determining the presence of delusions. The mental status examination is used to assess the patient's current mental condition.
A psychological questionnaire used in the diagnosis of the
delusional disorder is the Peters Delusion Inventory (PDI) which focuses
on identifying and understanding delusional thinking. However, this
questionnaire is more likely used in research than in clinical practice.
To diagnose a non-bizarre delusion as a delusion, ample support should be provided through fact-checking. Regarding non-bizarre delusions, Psych Central notes, "All of these situations could be true or possible, but the person suffering from this disorder knows them not to be (e.g., through fact-checking, third-person confirmation, etc.)."
Treatment
A challenge in the treatment of delusional disorders is that most
patients have limited insight, and do not acknowledge that there is a
problem. Most patients are treated as out-patients, although hospitalization
may be required in some cases if there is a risk of harm to self or
others. Individual psychotherapy is recommended rather than group psychotherapy, as patients are often quite suspicious and sensitive. Antipsychotics are not well tested in delusional disorder, and the limited evidence available suggests they do not work very well, often having no effect on the core delusional belief. Antipsychotics may be more useful in managing the agitation that can accompany delusional disorder. Until further evidence is found, it seems reasonable to offer treatments which have efficacy in other psychotic disorders.
There is a certain amount of evidence that alternative treatment regimes (beyond conventional attempted treatment with antipsychotics) may include clomipramine for people with the somatic subtype of paranoia. There is a dearth of well-published studies investigating the effectiveness of trimipramine (another derivative of the tricyclic antidepressant imipramine, and one which has modest antipsychotic properties weakly analogous to those of clozapine) in delusional disorder per se. However, trimipramine was compared to a combination of amitriptyline and haloperidol in a double-blind trial involving patients with severe psychotic depression (specifically with delusional features) and appeared favourable in its treatment.
Psychotherapy for patients with delusional disorder can include cognitive therapy which is conducted with the use of empathy. During the process, the therapist can ask hypothetical questions in a form of therapeutic Socratic questioning. This therapy has been mostly studied in patients with the persecutory
type. The combination of pharmacotherapy with cognitive therapy
integrates treating the possible underlying biological problems and
decreasing the symptoms with psychotherapy as well. Psychotherapy has
been said to be the most useful form of treatment because of the trust
formed in a patient and therapist relationship.
Supportive therapy
has also been shown to be helpful. Its goal is to facilitate treatment
adherence and provide education about the illness and its treatment.
Furthermore, providing social skills training has been found to
be helpful for many people. It can promote interpersonal competence as
well as confidence and comfort when interacting with those individuals perceived as a threat.
Insight-oriented therapy is rarely indicated or contraindicated, yet there are reports of successful treatment. Its goals are the development of a therapeutic alliance; the containment of projected feelings of hatred, powerlessness, and badness; measured interpretation; and the development of a sense of creative doubt in the internal perception of the world. The latter requires empathy with the patient's defensive position.
Epidemiology
Delusional disorders are uncommon in psychiatric practice, though
this may be an underestimation due to the fact that those with the
condition lack insight
and thus avoid psychiatric assessment. The prevalence of this condition
stands at about 24 to 30 cases per 100,000 people, while 0.7 to 3.0 new
cases per 100,000 people are reported every year. Delusional disorder
accounts for 1–2% of admissions to inpatient mental health facilities. The incidence of first admissions for delusional disorder is lower, from 0.001 to 0.003%.
Delusional disorder tends to appear in middle to late adult life,
and for the most part first admissions to hospital for delusional
disorder occur between age 33 and 55. It is more common in women than men, and immigrants seem to be at higher risk.
Criticism
In some situations, the delusion may turn out to be a true belief. For example, in delusional jealousy,
where a person believes that the partner is being unfaithful (in
extreme cases perhaps going so far as to follow the partner into the
bathroom, believing the other to be seeing a lover even during the
briefest of separations), it may actually be valid that the partner is
having sexual relations with another person. In this case, the delusion
does not cease to be a delusion because the content later turns out to
be true, or because the partner actually chose to engage in the behavior
of which they were being accused.
In other cases, a belief may be incorrectly deemed delusional by a
doctor or psychiatrist who subjectively concludes that a patient's
assertions are unlikely, bizarre, or held with excessive conviction.
Psychiatrists rarely have the time or resources to check the validity of a person's claims, leading some true beliefs to be erroneously classified as delusional. This is known as the Martha Mitchell effect, named after the wife of US Attorney General John Mitchell and derived from the initial response to her allegations of illegal activity taking place in the White House. At the time, her claims were thought to be signs of mental illness; only after the Watergate scandal broke were her claims corroborated and her sanity thus confirmed.
Similar factors have led to criticisms of Karl Jaspers's definition of delusion as being ultimately 'un-understandable'. Critics (such as R. D. Laing)
have argued that this leads to the diagnosis of delusions being based
on the subjective understanding of a particular psychiatrist, who may
not have access to all the information that might make a belief
otherwise interpretable.
Another difficulty with the diagnosis of delusions is that almost
all of these features can be found in "normal" beliefs. Many religious
beliefs share the same features yet are not universally considered
delusional. For instance, if a person held a true belief, they would, of
course, persist with it. This can cause the disorder to be misdiagnosed
by psychiatrists. These factors have led the psychiatrist Anthony David to write that "there is no acceptable (rather than accepted) definition of a delusion."
AI anthropomorphism is the attribution of human-like feelings, mental states, and behavioral characteristics to artificial intelligence
systems. Factors related to the user of the AI – such as culture, age,
education, gender, and personality traits – are also important
determinants of the strength of anthropomorphic effects.
Since the earliest days of AI development, humans have interpreted machine outputs through anthropomorphic frameworks, but the recent emergence of generative AI has amplified these tendencies.
In research and engineering, there is a distinction between anthropomorphism and anthropomorphic design. The former is an innate human tendency to attribute human traits to non-human entities. The latter is the scientific community's effort to “design anthropomorphism”. Such a design can involve the manipulation of cues, including AI appearance, behaviour, and language. Contemporary AI systems can generate extremely human-like outputs and are often designed specifically to do so, meaning that their anthropomorphic effects can be especially powerful.
In some cases, anthropomorphism is accompanied with explicit
beliefs that AI systems are capable of empathy, goodwill, understanding,
or consciousness.
Background
In early AIs
Views of artificial agents possessing a human-like intelligence have
existed since the early development of computers in the mid-1900s. The
use of the human mind as a metaphor for understanding the workings of
machine systems was prevalent among researchers in the early days of
computer science, with multiple influential works widely distributing
the idea of intelligent machines. Among the most widely cited papers of this period was Alan Turing's "Computing Machinery and Intelligence", in which he introduced the Turing Test, stating that a machine was intelligent if it could produce conversation that was indistinguishable from that of a human. These academic works in the 1940s and 1950s gave early credibility to
the idea that machine workings could be thought of similarly to human
minds.
The public quickly came to view artificial systems similarly,
with often exaggerated conceptions of the capabilities of early
machines. Among the most well-known demonstrations of this was through the chatbot ELIZA designed by Joseph Weizenbaum
in 1966. ELIZA responded to user inputs with a rudimentary
text-processing approach that could not be considered anything
resembling true understanding of the inputs, yet users, even when
operating with full conscious knowledge of ELIZA's limitations, often
began to ascribe motivation and understanding to the program's output. Weizenbaum later wrote, "I had not realized ... that extremely short
exposures to a relatively simple computer program could induce powerful
delusional thinking in quite normal people."
Comparisons between the intellectual capabilities of artificial
intelligence and human intelligence were continually intensified by the
attempts of computer scientists to develop machines that could perform
human tasks at a level equal to or better than humans. A symbolic
turning point was achieved in 1997, when IBM's chess supercomputer Deep Blue defeated then-world champion Garry Kasparov in a highly publicized six-game match. The first defeat of a reigning world champion by a machine in chess – a game viewed as a canonical example of human intellect – and the media attention surrounding the match led to a significant shift, where views of parallels between human and artificial intelligence moved from abstract speculation to being concretely demonstrated. A similar achievement was reached in the board game Go in 2017, when the program AlphaGo defeated the world's top-ranked player, Ke Jie.
Large language models
The AI boom of the 2020s brought about the widespread emergence of generative AI; in particular, chatbots such as ChatGPT, Gemini, and Claude based on large language models
(LLMs) have become increasingly pervasive in everyday society. These
systems are notable for the fact that they are able to respond to a wide
range of prompts across contexts while producing strikingly human-like
outputs – research has shown that humans are often unable to distinguish
human-generated text from AI-generated text, and modern AI chatbots
have formally been shown to pass the Turing test. As such, the anthropomorphic effects of AI are more powerful than ever. Given that LLMs have brought AI into the technological mainstream,
considerable scientific effort has been devoted in recent years to
understand existing and potential ramifications of AI in the public
sphere; the prevalence and effects of anthropomorphism is one of those
domains where much of this effort has been directed.
Current anthropomorphic attributions
In the general public
Surveys have shown that a substantial portion of the public
attributes human-like qualities to AI. In one sample of U.S. adults from
2024, two-thirds of people believed that ChatGPT is possibly conscious
on some level, though other research has shown that the public still rates the likelihood of AI consciousness as comparatively low. Another study conducted in 2025 found that women, people of color, and
older individuals were most likely to anthropomorphize AI, as well as
that – in general – humans view AIs as warm and competent, and
anthropomorphic attributions to AI had increased by 34% in the past
year. A YouGov
poll reported that 46% of Americans believe that people should display
politeness to AI chatbots by saying "please" and "thank you",
demonstrating the application of social norms to AI. These beliefs extend to behavior, where majorities of AI users claim to
always be polite to chatbots; of those who behave politely, most say
they do so simply because it is the "nice" thing to do.
In many recent cases, humans have developed robust interpersonal
bonds with AI systems. For example, users of social chatbots like Replika and Character.ai have been documented to fall in love with the AIs or to otherwise treat them as intimate companions, and it has become increasingly common for individuals to use LLMs like ChatGPT as therapists. Chatbots are able to produce responses deeply attuned to users, as they
are often designed to maximize agreeableness and mirror users'
emotions; this can create compelling illusions of intimacy.
In the research community
In many cases, even AI researchers anthropomorphize AI systems in
some capacity. Among the most extreme and well-publicized of these
instances occurred in 2022, when engineer Blake Lemoine publicly claimed
that Google's LLM LaMDA was conscious. Lemoine published the transcript of a conversation he had had with
LaMDA regarding self-identity and morality, which he claimed was evidence
of its sentience; he asserted that LaMDA was "a person" as defined by
the United States Constitution and compared its mental capability to that of a 7- or 8-year-old. Lemoine's claims were widely dismissed by the scientific community and
by Google itself, which described Lemoine's conclusions as "wholly
unfounded" and fired him on the grounds that he had violated policies
"to safeguard product information".
It is much more common that AI researchers unintentionally imply
humanness of AI through the ordinary use of anthropomorphic language to
describe nonhuman agents. This kind of language, for which Daniel Dennett coined the term "intentional stance", is very common in everyday life in a variety of different contexts
(e.g., "My computer doesn't want to turn on today"). For AI agents that
may actually appear to very closely replicate some human abilities,
however, the casual use of such anthropomorphic language in research has
been scrutinized for being potentially misleading to the public. As
early as 1976, Drew McDermott
criticized the research community for the use of "wishful mnemonics",
where AIs were referred to with terms like "understand" and "learn". In the LLM era, these criticisms have further intensified, with the
negative effects of AI anthropomorphism in the public posing an
especially salient danger given the elevated accessibility of modern AI.
In some cases, the use of anthropomorphic language for AI is not
unintentional, but is willfully used by researchers in order to promote
better understanding of the brain – the idea being that, as AI can be
functionally similar in some ways to the human brain, we may gain new
insights and ideas from treating AI as a kind of model of the brain's
workings. In particular, deep neural networks (DNNs) are often explicitly compared to the human brain, and significant advances in DNN research have stirred considerable enthusiasm about the ability of AI to emulate human abilities. Caution has been urged in this domain as well, however; the use of
anthropomorphic language can mask important differences that
fundamentally distinguish AI from human intelligence. When it comes to DNNs, for example, it has been pointed out that they
are still structurally quite different from the human brain, with much
of what we know about human neurons not having been incorporated. It has also been argued that DNNs are less efficient and less durable
in generating correct outputs than the human brain, given that they
require significantly more training data than the brain and can
sometimes be easily "fooled" by perturbations in input data. Given these fundamental differences, research efforts focused on making AI as similar as possible to biological intelligence (which may be promoted by using anthropomorphic language) could hinder future AI development by limiting the proliferation of new theoretical and operational frameworks.
Agent factors
Physical factors
Appearance
In general, AIs that appear more human-like are subject to more anthropomorphic attributions. The effect of appearance is most pronounced when it comes to the face of the AI; the most important components for anthropomorphism in a robot's design are the eyes, nose, and mouth, and the number of human-like features in the face is correlated with the level of anthropomorphic attribution. The humanness of a robot's appearance is usually associated with more positive feelings toward the robot, though highly human-like appearance can sometimes trigger feelings of strangeness and unease, known as the uncanny valley phenomenon. These feelings often result from perceived incongruency, when anthropomorphic attributions create expectations that robots do not meet; for example, when human-like appearance is paired with non-human behavior in a robot, or when robots have a human appearance but a synthetic voice. Research has shown that repeated interactions with a robot can decrease these feelings of strangeness.
Interactive behavior
Robots' nonverbal social behavior can influence anthropomorphizing.
In general, highly interactive robots are more likely to be subject to attributions of mental states and competence, with friendly and polite
behavior resulting in increased perceived trustworthiness and
satisfaction. Within an interaction, unpredictable behavior can sometimes trigger
increased anthropomorphization compared to clearly recognizable patterns
of behavior. At the same time, adherence to certain pragmatic expectations in
interactions by replicating human details such as timing and turn-taking
can also result in anthropomorphism.
Movements
People tend to attribute more mental states to robots that perform gestures compared to those that are stationary; this effect is enhanced for robots that have multiple degrees of freedom in movement (for example, being able to move on multiple different axes rather than on a single axis such as up and down). Regardless of a robot's appearance, movement patterns that are more
human-like are associated with greater anthropomorphism, as well as
humans' increased feelings of pleasantness in an interaction.
Linguistic factors
Given that the vast majority of public interactions with AI are
through chatbots, these have been the primary focus of a great deal of
research on AI anthropomorphism. A summary of a taxonomy of
anthropomorphic features in linguistic AI systems in the literature
follows:
Voice
The outfitting of AI systems with voices can be a significant factor in the anthropomorphism of linguistic agents.
Research has shown that humans infer physical attributes, personality traits, stereotypical traits, and emotion based on voice alone. Various changes in tone can influence the kind of
personality users attribute to a voice, such as manipulations to
breathiness, echoes, creakiness, reverberations, etc. The integration of disfluencies into speech (such as self-interruptions, repetitions, or hesitations like "um" or "uh") has been shown to effectively mimic the naturalness of human responses. The implementation of accents has been used to imitate the local standard to boost societal acceptability and prestige, though it has been suggested that this can be used to exploit people's tendencies to trust in-group members.
Content
AI dialogue systems often produce a variety of responses that run
contrary to what might be expected of an inanimate system. For example,
in response to direct questions about its nature (e.g. "Are you human or
machine?"), some AIs fail to respond correctly, and they also sometimes make claims of engaging in uniquely human
abilities such as having family relationships, consuming food, and
crying. AIs often output language that suggests they hold opinions, morals, or sentience. Many AIs demonstrate agency and responsibility (such as by apologizing
or otherwise acknowledging blame for mistakes), and they create the
appearance of the human phenomenon of taboos by commonly avoiding
contentious topics. AIs that appear to express empathy are increasingly anthropomorphized, though some research has shown that they are prone to producing inappropriate emotional amplification. The use of first-person pronouns also contributes to anthropomorphic perceptions, as various studies have demonstrated that self-attribution is a critical part of the human condition and is read as a sign of consciousness. AIs often appear to demonstrate self-awareness, referencing their own
mechanistic processes with anthropomorphically loaded terms such as
'know', 'think', 'train', 'learn', 'understand', 'hallucinate' and
'intelligence'.
Register and style
AI systems can appear more human through the use of phatic expressions: speech that humans use to facilitate social relations but that conveys no information (such as small talk). AI expressions of uncertainty, which are often implemented for the
purpose of preventing the user from taking all outputs as factual, may
boost anthropomorphic signals. Additionally, AIs are often designed to emulate character-based
personas, which can overall have very strong anthropomorphic effects.
Roles
AIs are also sometimes trained to play into roles that enhance
anthropomorphic perceptions. For example, the majority of dialogue-based
systems are designed to be in service of people in subservient roles;
this has led to instances of users verbally abusing the systems,
sometimes targeting them with gender-based slurs. AI systems have been shown to sometimes respond even more subserviently to the abuse, perpetuating the behavior. AIs also often present as having a high degree of expertise; humans
tend to infer higher credibility of outputs in these cases, as they
would when presented with information from an expert human.
Human factors
In addition to AI factors contributing to anthropomorphizing, there
are various features surrounding the user (i.e., the human interacting
with the AI) that also play a role. The process of anthropomorphizing is
very natural for humans and is ubiquitous across many different
contexts. Epley et al. argue for a model with three psychological determinants that govern human tendencies to anthropomorphize. The first of these factors is elicited agent knowledge: the
accessibility and applicability of knowledge about humans and the self,
or the degree to which humans make inferences about other entities based
on their own experience of being human. Individuals who tend to do this
will anthropomorphize more; this explains why children anthropomorphize
more than adults, since they lack complex models of nonhumans and rely heavily on
self-based reasoning. The second factor in the model is effectance
motivation – the need for humans to predict and reduce uncertainty in
the environment. Anthropomorphizing can help people make sense of
unpredictable phenomena by explaining them through intentional or
human-like causes. Subsequent research has confirmed that individuals
who express a need for order/closure and discomfort toward ambiguity
tend to anthropomorphize more, possibly resulting from resolution of
cognitive dissonance – human-like AIs may be highly ambiguous stimuli,
and individuals who dislike ambiguity may be highly motivated to resolve
the ambiguity by treating the AIs as more human. Finally, the third factor in the model is sociality motivation: the
human need for social connection. People who feel chronically lonely or
isolated may be increasingly likely to project human qualities onto
non-human entities to satisfy their social needs.
Research has shown that, in general, anthropomorphic tendencies
vary based on norms, experience, education, cognitive reasoning styles,
and attachment. Users who are highly agreeable, for example, tend to be more
susceptible to anthropomorphizing, as do individuals who are high in
extraversion. Individuals with attachment anxiety have been shown to more often anthropomorphize AI. Young children are very prone to anthropomorphic attributions, but this propensity tends to decrease as children develop. Anthropomorphizing also tends to decrease with increased education and experience with technology.
Additionally, some effects have been shown in research to be
dependent on culture. For example, a negative correlation was found
between loneliness and anthropomorphizing in Chinese individuals,
compared to the positive link found in Western cultures. This has been interpreted as possibly a result of differing drives for
anthropomorphizing - people from Western cultures may anthropomorphize
primarily as a means to counteract loneliness from a failure to cope
with their social world, while people from East Asian cultures may
already view nonhuman agents as part of their social world and
anthropomorphize as a means of social exploration. Research has also shown that people tend to attribute more mental
abilities and report more psychological closeness to robots that are
presented as having the same cultural background as them.
Societal implications
Benefits and dangers
Some benefits to the anthropomorphism of AIs have been cited. For
conversational agents, a human-like interactive interface and writing
style has been shown to make dense information more accessible and understandable in a variety of contexts. In particular, AI agents can role-play as coaches or tutors,
fine-tuning communication style and difficulty to individual
comprehension levels effectively. Role play agents can also be useful for entertainment or leisure services.
On the other hand, anthropomorphized AI presents many novel
dangers. Anthropomorphized AI algorithms are granted an implicit degree
of agency that can have serious ethical implications when those systems
are deployed in high-risk domains, like finance or clinical medicine. This agency given to AIs can also inappropriately subject them to
conscious and unconscious moral reasoning by humans, which can have a
wide range of problematic consequences. Humans are also prone to the ELIZA effect
where users readily attribute sentience and emotions to chatbot systems
- often experiencing increased positive emotions and trust toward the
chatbots as a result. This can make users vulnerable to manipulation or exploitation; for
example, anthropomorphized AIs can be more effective in convincing users
to provide personal information or data, creating concerns for privacy. Humans who develop a significant level of trust in an AI assistant may
rely excessively on the AI's advice or even defer important decisions
entirely. Advanced LLMs are capable of using their human-like qualities to
generate deceptive text, and research has found that they may be most
persuasive when allowed to fabricate information and engage in
deception. Some researchers suggest LLMs naturally have a particular aptitude for
producing deceptive arguments, given that they are free from moral or
ethical constraints that may inhibit human actors. Additionally, humans risk significant distress in establishing
emotional dependence on AIs. Users may find that their expectations are
violated, as AIs which may have seemed at first to play a role of a
companion or romantic partner can exhibit unfeeling or unpredictable
outputs, leading to feelings of profound betrayal or disappointment. Users may also develop a false sense of responsibility for AI systems,
suffering guilt if they perceive themselves to fail to meet the AI's
needs at the expense of the user's own well-being. Finally, anthropomorphizing AI can lead to exaggerations of its
capabilities, potentially feeding into misinformation and exaggerated
hopes and fears around AI.
In many of today's practical contexts, it is not completely clear
whether anthropomorphized AI produces positive or negative effects. For
example, AI companions, which leverage anthropomorphic qualities of
LLMs to give a convincing sense to users of human-likeness, have been
credited with alleviating loneliness and suicidal ideation; however, some analysis suggests that loneliness reduction could be short-lived, and AI companions have also been directly implicated in cases of suicide and self-harm. Additionally, persuasive writing from LLMs has been shown to dissuade
users from beliefs in conspiracy theories and to motivate users to
donate to charitable causes, but it has also been associated with deception and various harmful outcomes. Researchers today cite a need for further dedicated research on the
effects of anthropomorphized AIs to best inform decisions about
implementation and spread of AI agents.
Anticipation of the ubiquity of anthropomorphic AI systems has
led to concern over potential harms that may not be entirely realized
today. In particular, some researchers foresee that the delineation
between what is actually human and what is merely human-like may become
less clear as the gap between human and AI capabilities grows smaller
and smaller. This, some argue, may adversely affect human collective
self-determination, as non-human entities gradually begin to shape our
core value systems and influence society. It may also lead to the degradation of human social connections, as
humans may come to prefer interacting with AI systems that are designed
with user satisfaction as a priority; this can have a multitude of
negative implications. For example, AI agents already display a
significant degree of sycophancy, which means that an increasing role
for AI agents in users' opinion space may result in increased
polarization and a decrease in value placed in others' beliefs. Acclimatization to the conventions of human-AI interaction may
undermine the value we place on human individuality and self-expression,
or may lead to inappropriate expectations derived from AI interactions
being placed on human interactions. In general, human social connectedness is known to play a critical role
in individual and group well-being, and its replacement with AI
interactions may result in mass dissatisfaction or lack of fulfillment.
Proposed directions
Given the demonstrated and projected effects of AI anthropomorphism, a
variety of suggestions have been made intending to inform future
development of AI. Much of this discourse is centered around curbing the
most harmful effects of anthropomorphism. For example, some researchers
have called for a moratorium on the use of language which deliberately
invokes humanness; this applies both to how AI companies describe their
products and the language output by the systems themselves. Particularly, it has
been suggested that terms like "seeing", "thinking", and "reasoning"
should be replaced by terms like "recognizing", "computing", and
"inferring", and that first-person pronouns such as "I" and "my" should
not be used by chatbots. Another idea is the implementation of a specific AI accent or dialect
that would clearly indicate when language was generated artificially. However, given the commercial pressures to optimize AI agents for
economic gain – which may involve exploiting anthropomorphic qualities –
it may not be prudent to rely on the restraint of developers, meaning
that increased regulation may be necessary to limit harms. As of now,
there are no laws that directly address anthropomorphism in AI;
potential avenues for regulation include requirements for transparency
and built-in safeguard mechanisms. More generally, researchers cite a need for increased understanding of
the kinds and degrees of anthropomorphic qualities possessed by AI
systems. To that end, it has been proposed that new benchmarks and tests
should be developed that measure anthropomorphic qualities in AI
writing, inference, and interaction.
In popular culture
Anthropomorphic portrayals of AI are common in film, literature, and
other interactive media. These depictions often emphasize human-like
qualities of AI in ways that shape public perceptions.
Anthropomorphic AI is also common in literature. Isaac Asimov's robot characters, including R. Daneel Olivaw, exhibit human reasoning and moral dilemmas, while Iain Banks's "Minds" in The Culture series are portrayed as having distinct personalities and social roles.
Video games
Examples of anthropomorphized AI in video games include GLaDOS in Portal, a witty and sinister guide for the player, and Cortana in the Halo series, who forms emotional bonds with human protagonists.
Advertising and consumer technology
Marketing campaigns for digital assistants such as Amazon Alexa, Google Assistant, and Siri often portray the systems as personable or empathetic. Consumer robots like Sony's AIBO
and SoftBank Robotics' Pepper are intentionally designed with
expressive behaviors that encourage users to treat them as social
agents.
Although a monkey is used as the mechanism of the thought experiment, an actual monkey would be unlikely ever to write Hamlet.
The infinite monkey theorem states that a monkey hitting keys independently and at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, including the complete works of William Shakespeare. More precisely, under the assumption of independence and randomness of
each keystroke, the monkey would almost surely type every possible
finite text an infinite number of times. The theorem can be generalized
to state that any infinite sequence of independent events whose
probabilities are uniformly bounded below by a positive number will
almost surely have infinitely many occurrences.
In this context, "almost surely" is a mathematical term meaning
the event happens with probability 1, and the "monkey" is not an actual
monkey, but a metaphor for an abstract device that produces an endless random sequence
of letters and symbols. Variants of the theorem include multiple and
even infinitely many independent typists, and the target text varies
between an entire library and a single sentence.
There is a straightforward proof of this theorem. As an introduction, recall that if two events are statistically independent,
then the probability of both happening equals the product of the first
and second events' probabilities. For example, if the chance of rain in Moscow on a particular day in the future is 0.4, and the chance of an earthquake in San Francisco
on any particular day is 0.00003, then the chance of both happening on
the same day is 0.000012 simply by multiplying the probabilities
together, assuming that they are indeed independent.
Let us consider the probability of typing the word banana on a
typewriter with 50 keys. Suppose that the keys are pressed independently
and uniformly at random, meaning that each key has an equal chance of
being pressed regardless of what keys had been pressed previously. The
chance that the first letter typed is 'b' is 1/50, and the chance that
the second letter typed is 'a' is also 1/50, and so on. Therefore, the
probability of the first six letters spelling banana is:

(1/50) × (1/50) × (1/50) × (1/50) × (1/50) × (1/50) = (1/50)^6 = 1/15,625,000,000

The result is less than one in 15 billion, but not zero.
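This arithmetic can be checked directly; the snippet below is a minimal sketch using exact rational arithmetic so that no floating-point rounding enters the comparison:

```python
from fractions import Fraction

# probability that six independent, uniform keystrokes on a 50-key
# typewriter spell the specific word "banana": (1/50)**6
p_banana = Fraction(1, 50) ** 6

assert p_banana == Fraction(1, 15_625_000_000)
assert p_banana < Fraction(1, 15_000_000_000)  # less than one in 15 billion
assert p_banana > 0                            # but not zero
```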
From the above, the chance of not typing banana in a given block of 6 letters is 1 − (1/50)^6. Because each block is typed independently, the chance Xn of not typing banana in any of the first n blocks of 6 letters is:

Xn = (1 − (1/50)^6)^n
As n grows, Xn gets smaller. For n = 1 million, Xn is roughly 0.9999, but for n = 10 billion Xn is roughly 0.53 and for n = 100 billion it is roughly 0.0017. As n approaches infinity, the probability Xn approaches zero; that is, by making n large enough, Xn can be made as small as is desired, and the chance of typing banana approaches 100%. Thus, the probability of the word banana appearing at some point in an infinite sequence of keystrokes is equal to one.
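The quoted values of Xn can be reproduced numerically; a small sketch (using log1p to avoid rounding away the tiny per-block probability):

```python
import math

p = (1 / 50) ** 6   # chance that one 6-letter block spells "banana"

def x_n(n):
    """Chance that none of the first n independent blocks spells banana."""
    # exp(n * log1p(-p)) computes (1 - p)**n without losing precision
    return math.exp(n * math.log1p(-p))

assert abs(x_n(10 ** 6) - 0.9999) < 1e-3     # n = 1 million
assert abs(x_n(10 ** 10) - 0.53) < 0.01      # n = 10 billion
assert x_n(10 ** 11) < 0.002                 # n = 100 billion
assert x_n(10 ** 13) < 1e-100                # Xn -> 0 as n grows
```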
The same argument applies if we replace one monkey typing n consecutive blocks of text with n monkeys each typing one block (simultaneously and independently). In this case, Xn = (1 − (1/50)^6)^n is the probability that none of the first n monkeys types banana correctly on their first try. Therefore, at least one of infinitely many monkeys will (with probability equal to one) produce a text using the same number of keystrokes as a perfectly accurate human typist copying it from the original.
Infinite strings
This can be stated more generally and compactly in terms of strings, which are sequences of characters chosen from some finite alphabet:
Given an infinite string where each character is chosen independently and uniformly at random, any given finite string almost surely occurs as a substring at some position.
Given an infinite sequence of infinite strings, where each character
of each string is chosen independently and uniformly at random, any
given finite string almost surely occurs as a prefix of one of these
strings.
Both follow easily from the second Borel–Cantelli lemma. For the second theorem, let Ek be the event that the kth string begins with the given text. Because this has some fixed nonzero probability p of occurring, the Ek are independent, and the sum Σk p = p + p + p + ... diverges, the probability that infinitely many of the Ek occur is 1. The first theorem is shown similarly; one can divide the
random string into nonoverlapping blocks matching the size of the
desired text and make Ek the event where the kth block equals the desired string.
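The first statement is easy to observe empirically; a small sketch (the alphabet, target, and string length are arbitrary choices for illustration):

```python
import random

random.seed(0)
ALPHABET = "ab"
TARGET = "abba"

# a long random string over a 2-letter alphabet; the chance that a given
# 4-letter block misses the target is 1 - (1/2)**4, so the chance of the
# target being absent from 10,000 characters is below (15/16)**2500,
# i.e. astronomically small
s = "".join(random.choice(ALPHABET) for _ in range(10_000))
assert TARGET in s
```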
Probabilities
However, for physically meaningful numbers of monkeys typing for
physically meaningful lengths of time the results are reversed. If there
were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably small.
Ignoring punctuation, spacing, and capitalization, a monkey
typing letters uniformly at random has a chance of 1 in 26 of correctly
typing the first letter of Hamlet. It has a chance of one in 676 (or 26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 – almost 2 × 10^28, or 20 octillion. In the case of the entire text of Hamlet, the probabilities are so vanishingly small as to be inconceivable. The text of Hamlet contains approximately 130,000 letters. Thus, there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.
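These magnitudes can be reproduced with logarithms; a sketch (130,000 letters is the approximation used above):

```python
import math

# chance of matching the first 20 letters: one in 26**20
assert 1.9e28 < 26 ** 20 < 2.1e28            # almost 2 × 10^28 (20 octillion)

# for the ~130,000-letter text, work with base-10 logarithms:
# 26**130000 = 10**(130000 * log10(26))
log10_p = 130_000 * math.log10(26)
assert int(log10_p) == 183_946               # exponent in 3.4 × 10^183,946
assert abs(10 ** (log10_p % 1) - 3.4) < 0.1  # leading digits are about 3.4
```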
Even if every proton in the observable universe (roughly 10^80 in number) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys. As Kittel and Kroemer put it in their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys, "The probability of Hamlet
is therefore zero in any operational sense of an event ...", and the
statement that the monkeys must eventually succeed "gives a misleading
conclusion about very, very large numbers."
In fact, there is less than a one in a trillion chance of success
that such a universe made of monkeys could type any particular document
a mere 79 characters long.
The probability that an infinite randomly generated string of text
will contain a particular finite substring is 1. However, this does not
mean the substring's absence is "impossible", despite the event having a
prior probability of 0. For example, the immortal monkey could
randomly type G as its first letter, G as its second, and G as every
single letter, producing an infinite string of Gs; at no point must the
monkey be "compelled" to type anything else (to assume otherwise implies
the gambler's fallacy).
However long a randomly generated finite string is, there is a small
but nonzero chance that it will turn out to consist of the same
character repeated throughout; this chance approaches zero as the
string's length approaches infinity. There is nothing special about such
a monotonous sequence except that it is easy to describe; the same fact
applies to any nameable specific sequence, such as "RGRGRG" repeated
forever, or "a-b-aa-bb-aaa-bbb-...", or "Three, Six, Nine, Twelve…".
If the hypothetical monkey has a typewriter with 90 equally
likely keys that include numerals and punctuation, then the first typed
keys might be "3.14" (the first three digits of pi) with a probability of (1/90)^4,
which is 1/65,610,000. Equally probable is any other string of four
characters allowed by the typewriter, such as "GGGG", "mATh", or "q%8e".
The probability that 100 randomly typed keys will consist of the first
99 digits of pi (including the separator key), or any other particular sequence of that length, is much lower: (1/90)^100. If the monkey's allotted length of text is infinite, the chance of typing only the digits of pi is 0, which is just as possible (mathematically probable) as typing nothing but Gs (also probability 0).
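The 90-key figures check out numerically; a quick sketch:

```python
# any specific 4-character string on a 90-key typewriter, such as "3.14",
# has probability (1/90)**4 = 1/65,610,000
assert 90 ** 4 == 65_610_000

# a specific 100-character string (e.g. the first 99 digits of pi plus the
# separator) is far less likely: the denominator of (1/90)**100 has 196 digits
assert len(str(90 ** 100)) == 196
```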
The same applies to the event of typing a particular version of Hamlet followed by endless copies of itself; or Hamlet immediately followed by all the digits of pi; these specific strings are equally infinite in length, they are not prohibited by the terms of the thought problem, and they each have a prior probability of 0. In fact, any
particular infinite sequence the immortal monkey types will have had a
prior probability of 0, even though the monkey must type something.
This is an extension of the principle that a finite string of random text has a lower and lower probability of being
a particular string the longer it is (though all specific strings are
equally unlikely). This probability approaches 0 as the string
approaches infinity. Thus, the probability of the monkey typing an
endlessly long string, such as all of the digits of pi in order, on a
90-key keyboard is the limit of (1/90)^n as n approaches infinity, which equals 0. At the same time, the probability that the sequence contains
a particular subsequence (such as the word MONKEY, or the 12th through
999th digits of pi, or a version of the King James Bible) increases as
the total string increases. This probability approaches 1 as the total
string approaches infinity, and thus the original theorem is correct.
Correspondence between strings and numbers
In a simplification of the thought experiment, the monkey could have a
typewriter with just two keys: 1 and 0. The infinitely long string
thus produced would correspond to the binary digits of a particular real number
between 0 and 1. A countably infinite set of possible strings end in
infinite repetitions, which means the corresponding real number is rational.
Examples include the strings corresponding to one-third (010101...),
five-sixths (11010101...) and five-eighths (1010000...). Only a subset
of such real number strings (albeit a countably infinite subset)
contains the entirety of Hamlet (assuming that the text is subjected to a numerical encoding, such as ASCII).
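The repeating-binary examples can be verified with exact rational arithmetic; the helper below is a hypothetical illustration (the function name and decomposition into prefix and repeating cycle are choices made here, not from the source):

```python
from fractions import Fraction

def repeating_binary_value(prefix, cycle):
    """Real number in [0, 1) whose binary expansion is prefix then cycle repeated."""
    p, c = len(prefix), len(cycle)
    head = Fraction(int(prefix, 2) if prefix else 0, 2 ** p)
    # a cycle of length c repeated forever contributes cycle/(2**c - 1),
    # shifted right past the prefix
    tail = Fraction(int(cycle, 2), 2 ** c - 1) / 2 ** p
    return head + tail

assert repeating_binary_value("", "01") == Fraction(1, 3)     # 0.010101...
assert repeating_binary_value("1", "10") == Fraction(5, 6)    # 0.110101...
assert repeating_binary_value("101", "0") == Fraction(5, 8)   # 0.101000...
```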
Meanwhile, there is an uncountably infinite set of strings that do not end in such repetition; these correspond to the irrational numbers. These can be sorted into two uncountably infinite subsets: those that contain Hamlet and those that do not. However, the "largest" subset of all the real numbers is that which not only contains Hamlet,
but that also contains every other possible string of any length, and
with equal distribution of such strings. These irrational numbers are
called normal.
Because almost all numbers are normal, almost all possible strings
contain all possible finite substrings. Hence, the probability of the
monkey typing a normal number is 1. The same principles apply regardless
of the number of keys from which the monkey can choose; a 90-key
keyboard can be seen as a generator of numbers written in base 90.
History
Statistical mechanics
One of the forms in which probabilists now know this theorem, with its "dactylographic" [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" (Statistical mechanics and irreversibility), and in his book "Le Hasard" in 1914. His "monkeys" are not actual monkeys; rather, they are a metaphor for
an imaginary way to produce a large, random sequence of letters. Borel
said that if a million monkeys typed ten hours a day, it was extremely
unlikely that their output would exactly equal all the books of the
richest libraries of the world; and yet, in comparison, it was even more
unlikely that the laws of statistical mechanics would ever be violated,
even briefly.
The physicist Arthur Eddington drew on Borel's image further in The Nature of the Physical World (1928), writing:
If I let my fingers wander idly
over the keys of a typewriter it might happen that my screed made an
intelligible sentence. If an army of monkeys were strumming on
typewriters they might write all the books in the British Museum. The
chance of their doing so is decidedly more favourable than the chance of
the molecules returning to one half of the vessel.
These images invite the reader to consider the incredible
improbability of a large but finite number of monkeys working for a
large but finite amount of time producing a significant work and compare
this with the even greater improbability of certain physical events.
Any physical process that is even less likely than such monkeys' success
is effectively impossible, and it may safely be said that such a
process will never happen. It is clear from the context that Eddington is not suggesting that the
probability of this happening is worthy of serious consideration. On the
contrary, it was a rhetorical illustration of the fact that below
certain levels of probability, the term improbable is functionally equivalent to impossible.
Origins and "The Total Library"
In a 1939 essay entitled "The Total Library", Argentine writer Jorge Luis Borges traced the infinite-monkey concept back to Aristotle's Metaphysics. Explaining the views of Leucippus,
who held that the world arose through the random combination of atoms,
Aristotle notes that the atoms themselves are homogeneous and their
possible arrangements only differ in shape, position and ordering. In On Generation and Corruption, the Greek philosopher compares this to the way that a tragedy and a comedy consist of the same "atoms", i.e., alphabetic characters. Three centuries later, Cicero's De natura deorum (On the Nature of the Gods) argued against the Epicurean atomist worldview:
Is it possible for any man to
behold these things, and yet imagine that certain solid and individual
bodies move by their natural force and gravitation, and that a world so
beautifully adorned was made by their fortuitous concourse? He who
believes this may as well believe that if a great quantity of the
one-and-twenty letters, composed either of gold or any other matter,
were thrown upon the ground, they would fall into such order as legibly
to form the Annals of Ennius. I doubt whether fortune could make a single verse of them.
Borges follows the history of this argument through Blaise Pascal and Jonathan Swift, then observes that in his own time, the vocabulary had changed. By
1939, the idiom was "that a half-dozen monkeys provided with typewriters
would, in a few eternities, produce all the books in the British
Museum." (To which Borges adds, "Strictly speaking, one immortal monkey
would suffice.") Borges then imagines the contents of the Total Library
which this enterprise would produce if carried to its fullest extreme:
Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true name of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens
sang, the complete catalog of the Library, the proof of the inaccuracy
of that catalog. Everything: but for every sensible line or accurate
fact there would be millions of meaningless cacophonies, verbal
farragoes, and babblings. Everything: but all the generations of mankind
could pass before the dizzying shelves – shelves that obliterate the
day and on which chaos lies – ever reward them with a tolerable page.
Borges' total library concept was the main theme of his widely read 1941 short story "The Library of Babel",
which describes an unimaginably vast library consisting of interlocking
hexagonal chambers, together containing every possible volume that
could be composed from the letters of the alphabet and some punctuation
characters.
Actual monkeys
In 2002, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes crested macaques in Paignton Zoo in Devon, England from May 1 to June 22, with a radio link to broadcast the results on a website.
Not only did the monkeys produce nothing but five pages consisting largely of the letter "S", but the lead male also began striking the keyboard with a stone, and other monkeys followed by urinating and defecating on the machine. Mike Phillips, director of the university's Institute of Digital Arts
and Technology (i-DAT), said that the artist-funded project was
primarily performance art,
and they had learned "an awful lot" from it. He concluded that monkeys
"are not random generators. They're more complex than that.[...]
They were quite interested in the screen, and they saw that when they
typed a letter, something happened. There was a level of intention
there."
Monkey-and-typewriter arguments are now common in arguments over evolution. As an example of Christian apologetics, Doug Powell argued that even if a monkey accidentally types the letters of Hamlet, it has failed to produce Hamlet
because it lacked the intention to communicate. His parallel
implication is that natural laws could not produce the information
content in DNA. A more common argument is represented by Reverend John F. MacArthur,
who claimed that the genetic mutations necessary to produce a tapeworm
from an amoeba are as unlikely as a monkey typing Hamlet's soliloquy,
and hence the odds against the evolution of all life are impossible to
overcome.
Evolutionary biologist Richard Dawkins employs the typing monkey concept in his book The Blind Watchmaker to demonstrate the ability of natural selection to produce biological complexity out of random mutations. In a simulation experiment Dawkins has his weasel program produce the Hamlet phrase METHINKS IT IS LIKE A WEASEL,
starting from a randomly typed parent, by "breeding" subsequent
generations and always choosing the closest match from progeny that are
copies of the parent with random mutations. The chance of the target
phrase appearing in a single step is extremely small, yet Dawkins showed
that it could be produced rapidly (in about 40 generations) using
cumulative selection of phrases. The random choices furnish raw
material, while cumulative selection imparts information. As Dawkins
acknowledges, however, the weasel program is an imperfect analogy for
evolution, as "offspring" phrases were selected "according to the
criterion of resemblance to a distant ideal target." In
contrast, Dawkins affirms, evolution has no long-term plans and does not
progress toward some distant goal (such as humans). The weasel program
is instead meant to illustrate the difference between non-random cumulative selection, and random single-step selection. In terms of the typing monkey analogy, this means that Romeo and Juliet could be produced relatively quickly if placed under the constraints of a nonrandom, Darwinian-type selection because the fitness function
will tend to preserve in place any letters that happen to match the
target text, improving each successive generation of typing monkeys.
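A minimal sketch of such a cumulative-selection process, in the spirit of the weasel program (the mutation rate, offspring count, and retention of the parent in the pool are arbitrary choices made here, not Dawkins's exact settings):

```python
import random
import string

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s):
    """Number of positions at which s matches the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy s, replacing each character with a random one at the given rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:
    # breed 100 mutant copies; keeping the parent in the pool means the
    # best score never decreases (cumulative, non-random selection)
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)
    generations += 1

assert parent == TARGET
assert generations < 1000   # single-step selection would need ~27**28 tries
```

Keeping the parent among the candidates is a slight variation that guarantees monotone progress; Dawkins's original selected only among offspring.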
A different avenue for exploring the analogy between evolution
and an unconstrained monkey lies in the problem that the monkey types
only one letter at a time, independently of the other letters. Hugh
Petrie argues that a more sophisticated setup is required, in his case
not for biological evolution but the evolution of ideas:
In order to get the proper analogy,
we would have to equip the monkey with a more complex typewriter. It
would have to include whole Elizabethan sentences and thoughts. It would
have to include Elizabethan beliefs about human action patterns and the
causes, Elizabethan morality and science, and linguistic patterns for
expressing these. It would probably even have to include an account of
the sorts of experiences which shaped Shakespeare's belief structure as a
particular example of an Elizabethan. Then, perhaps, we might allow the
monkey to play with such a typewriter and produce variants, but the
impossibility of obtaining a Shakespearean play is no longer obvious.
What is varied really does encapsulate a great deal of already-achieved
knowledge.
James W. Valentine,
while admitting that the classic monkey's task is impossible, finds
that there is a worthwhile analogy between written English and the metazoan
genome in this other sense: both have "combinatorial, hierarchical
structures" that greatly constrain the immense number of combinations at
the alphabet level.
Zipf's law
Zipf's law states that the frequency of words is a power-law function of their frequency rank:

f(r) ∝ 1 / (r + b)^a

where a and b are real numbers. Assuming that a monkey is typing randomly, with fixed and nonzero probability of hitting each letter key or white space, the text produced by the monkey follows Zipf's law.
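This can be seen in a quick simulation (the alphabet and text length are arbitrary; the assertions only check the qualitative rank-frequency decay, not the fitted exponent):

```python
import random
from collections import Counter

random.seed(42)
keys = "abcdefghijklmnopqrstuvwxyz "   # 26 letters plus the space bar
text = "".join(random.choices(keys, k=2_000_000))
counts = [c for _, c in Counter(text.split()).most_common()]

# all "words" of a given length are equally likely, and each extra letter
# multiplies a word's probability by a constant factor, so the sorted
# frequencies fall off steeply with rank, as a power law predicts
assert len(counts) > 10_000
assert counts[0] > counts[100] > counts[10_000]
```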
Literary theory
R. G. Collingwood argued in 1938 that art cannot be produced by accident, and wrote as a sarcastic aside to his critics,
[...] some [...] have denied this proposition, pointing out that if a monkey played with a typewriter [...] he would produce [...]
the complete text of Shakespeare. Any reader who has nothing to do can
amuse himself by calculating how long it would take for the probability
to be worth betting on. But the interest of the suggestion lies in the
revelation of the mental state of a person who can identify the 'works'
of Shakespeare with the series of letters printed on the pages of a book[...]
Nelson Goodman took the contrary position, illustrating his point along with Catherine Elgin by the example of Borges' "Pierre Menard, Author of the Quixote":

What Menard wrote is simply another
inscription of the text. Any of us can do the same, as can printing
presses and photocopiers. Indeed, we are told, if infinitely many
monkeys... one would eventually produce a replica of the text. That replica, we maintain, would be as much an instance of the work, Don Quixote, as Cervantes' manuscript, Menard's manuscript, and each copy of the book that ever has been or will be printed.
In another writing, Goodman elaborates, "That the monkey may be
supposed to have produced his copy randomly makes no difference. It is
the same text, and it is open to all the same interpretations." Gérard Genette dismisses Goodman's argument as begging the question.
For Jorge J. E. Gracia, the question of the identity of texts leads to a different question, that of authorship. If a monkey is capable of typing Hamlet,
despite having no intention of meaning and therefore disqualifying
itself as an author, then it appears that texts do not require authors.
Possible solutions include saying that whoever finds the text and
identifies it as Hamlet is the author; or that Shakespeare is the
author, the monkey his agent, and the finder merely a user of the text.
These solutions have their own difficulties, in that the text appears
to have a meaning separate from the other agents: What if the monkey
operates before Shakespeare is born, or if Shakespeare is never born, or
if no one ever finds the monkey's typescript?
Simulated and limited conditions
In 1979, William R. Bennett Jr., a professor of physics at Yale University,
brought fresh attention to the theorem by applying a series of computer
programs. Dr. Bennett simulated varying conditions under which an
imaginary monkey, given a keyboard consisting of twenty-eight
characters, and typing ten keys per second, might attempt to reproduce
the sentence, "To be or not to be, that is the question." Although his
experiments agreed with the overall conclusion that even such a short
string of words would require many times the current age of the universe
to reproduce, he noted that by modifying the statistical probability of
certain letters to match the ordinary patterns of various languages and
of Shakespeare in particular, seemingly random strings of words could
be made to appear. But even with several refinements, the English
sentence closest to the target phrase remained gibberish: "TO DEA NOW
NAT TO BE WILL AND THEM BE DOES DOESORNS CAI AWROUTROULD."
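The effect Bennett exploited can be sketched with weighted sampling. The letter weights below are rough, illustrative English frequencies, not Bennett's actual tables (which the text does not give), and the keyboard is a stand-in for his 28-character one. Drawing characters with these weights yields strings with English-like letter and word-length patterns, even though no real words are targeted.

```python
import random

random.seed(7)
# Rough single-letter English frequencies (illustrative values only).
weights = {
    " ": 18, "e": 10, "t": 7, "a": 6, "o": 6, "n": 5, "i": 5, "s": 5,
    "h": 4, "r": 4, "d": 3, "l": 3, "u": 2, "c": 2, "m": 2, "w": 2,
    "f": 2, "g": 2, "y": 2, "p": 1, "b": 1, "v": 1, "k": 1, "j": 1,
    "x": 1, "q": 1, "z": 1,
}
chars, w = zip(*weights.items())

# Uniform draws: every key equally likely, as in the naive monkey.
uniform = "".join(random.choices(chars, k=60))
# Weighted draws: keys hit in proportion to English letter frequency.
weighted = "".join(random.choices(chars, weights=w, k=60))

print("uniform: ", uniform)
print("weighted:", weighted)
```

Comparing the two printed strings shows why Bennett's refined monkeys produce pronounceable gibberish like "TO DEA NOW NAT TO BE": the weighted string has realistic word lengths and vowel density, while still carrying no meaning.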
Random document generation
The theorem concerns a thought experiment
which cannot be fully carried out in practice, since it is predicted to
require prohibitive amounts of time and resources. Nonetheless, it has
inspired efforts in finite random text generation.
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker,
came up with a result on 4 August 2004: After the group had worked for
42,162,500,000 billion billion monkey-years, one of the "monkeys" typed,
"VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The
first 19 letters of this sequence can be found in "The Two Gentlemen of
Verona". Other teams have reproduced 18 characters from "Timon of
Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".
A website entitled The Monkey Shakespeare Simulator, launched on 1 July 2003, contained a Java applet
that simulated a large population of monkeys typing randomly, with the
stated intention of seeing how long it takes the virtual monkeys to
produce a complete Shakespearean play from beginning to end. For
example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters:
RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d
Due to processing power limitations, the program used a probabilistic model (by using a random number generator
or RNG) instead of actually generating random text and comparing it to
Shakespeare. When the simulator "detected a match" (that is, the RNG
generated a certain value or a value within a certain range), the
simulator simulated the match by generating matched text.
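The shortcut can be illustrated as follows. A uniform random string of length n over k keys matches a fixed target with probability exactly (1/k)^n, so a single Bernoulli draw against that probability has the same match/no-match distribution as generating and comparing all n characters. The keyboard size and match length below are arbitrary stand-ins, not the simulator's actual parameters.

```python
import random

random.seed(0)
K = 32   # keys on the virtual keyboard (assumed value)
n = 24   # number of characters the monkey must match

# Exact-match probability for a uniform random string of length n.
p = (1 / K) ** n

# One draw of the RNG replaces generating and comparing n characters;
# the probability of a "match" is identical by construction.
def monkey_attempt():
    return random.random() < p

hits = sum(monkey_attempt() for _ in range(1_000_000))
print(p, hits)  # p is astronomically small, so hits is 0
```

This is why the site could report astronomical monkey-year counts without ever producing most of the random text: only the rare "detected match" events needed concrete characters generated for display.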
Questions about the statistics describing how often an ideal monkey is expected to type certain strings translate into practical tests for random-number generators; these range from the simple to the "quite sophisticated". Computer-science professors George Marsaglia and Arif Zaman report that they used to call one such category of tests "overlapping m-tuple
tests" in lectures, since they concern overlapping m-tuples of
successive elements in a random sequence. But they found that calling
them "monkey tests" helped to motivate the idea with students. They
published a report on the class of tests and their results for various
RNGs in 1993.
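A minimal version of such a monkey test, with m = 2 over a four-symbol alphabet (parameters chosen for illustration, not taken from Marsaglia and Zaman's paper), counts every overlapping pair of successive outputs and checks that all k^m pair counts stay close to their common expected value:

```python
import random
from collections import Counter

random.seed(123)
K, M, N = 4, 2, 100_000   # alphabet size, tuple length, sample size

# The "monkey" produces a long sequence of keystrokes.
seq = [random.randrange(K) for _ in range(N)]

# Overlapping 2-tuples: each adjacent pair shares an element with
# its neighbors, which is what distinguishes these from simpler
# non-overlapping frequency tests.
pairs = Counter(zip(seq, seq[1:]))

expected = (N - 1) / K**M   # each of the 16 pairs equally likely
worst = max(abs(c - expected) / expected for c in pairs.values())

print(len(pairs), round(worst, 3))  # all 16 pairs seen, deviation small
```

A full test would apply a chi-squared statistic (adjusted for the correlation between overlapping tuples) rather than this crude worst-deviation check, but the structure, counting how often the monkey "types" each short string, is the same.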
The infinite monkey theorem and its associated imagery are considered a
popular and proverbial illustration of the mathematics of probability,
widely known to the general public through transmission in
popular culture rather than through formal education. The image of literal
monkeys rattling away at a set of typewriters carries an innate humor that
has helped the idea spread, and it remains a popular visual gag.
Science fiction author R. A. Lafferty used this idea as the basis for his 1970 short story "Been a Long, Long Time"
in which a group of immortal monkeys are tasked with typing randomly to
write the complete works of Shakespeare. After several billion cycles
of the universe expanding, contracting, and repeating the Big Bang, they nearly complete the task.
A quotation attributed to a 1996 speech by Robert Wilensky stated, "We've heard that a million
monkeys at a million keyboards could produce the complete works of
Shakespeare; now, thanks to the Internet, we know that is not true."
The enduring, widespread popularity of the theorem was noted in
the introduction to a 2001 paper, "Monkeys, Typewriters and Networks:
The Internet in the Light of the Theory of Accidental Excellence". In 2002, an article in The Washington Post
said, "Plenty of people have had fun with the famous notion that an
infinite number of monkeys with an infinite number of typewriters and an
infinite amount of time could eventually write the works of
Shakespeare". In 2003, the previously mentioned Arts Council−funded experiment involving real monkeys and a computer keyboard received widespread press coverage. In 2007, the theorem was listed by Wired magazine in a list of eight classic thought experiments.
In 2015, Balanced Software released Monkey Typewriter on the Microsoft Store. The software generates random text using the infinite monkey
theorem's string formula and searches the generated text for user-entered
phrases. However, the software is not a true-to-life representation of the
theorem; it is a practical demonstration of the idea rather than a
scientific model of random text generation.